Can't load fine-tuned BERT model for inference #729
Labels: hub, stat:awaiting tensorflower, subtype:text-embedding, type:support
Have I written custom code: No
OS Platform and Distribution: macOS
TensorFlow version: tensorflow==2.4.0 and 2.5.0-dev20210121
Python version: Python 3.8 and Python 3.7
After successfully training and exporting a fine-tuned BERT classifier, I'm unable to re-load the model for inference in another session.
When I run

```python
tf.saved_model.load("/path/to/saved_model")
```

(or an equivalent load call), I get an error.
Importing tensorflow_text makes this issue go away (a solution suggested in #463), but that isn't ideal: it would require building a custom serving container that includes the tensorflow_text package just for inference. Is there a way to load a fine-tuned hub model for inference without having to import tensorflow_text?
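
For reference, the workaround currently looks like this (a minimal sketch; the path is the placeholder used above):

```python
import tensorflow as tf
# Imported only for its side effect: it registers the custom tokenization ops
# (case folding, normalization, WordPiece) that the packaged BERT preprocessor
# serialized into the SavedModel.
import tensorflow_text  # noqa: F401

reloaded_model = tf.saved_model.load("/path/to/saved_model")
```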
I was able to reproduce this error by loading the model trained in this tutorial from a separate instance: https://www.tensorflow.org/tutorials/text/solve_glue_tasks_using_bert_on_tpu (colab). That tutorial explicitly packages the BERT preprocessor together with the classifier model so the export is self-sufficient at inference time, yet even loading this exported model fails.
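
For context, the export pattern from that tutorial looks roughly like this (a sketch, not the exact tutorial code: the hub handles, the single-sentence input, and the Dense(2) head are assumptions). The preprocessing layer is composed into the exported model, but its subgraph still contains TF Text custom ops, which appears to be why loading fails without importing tensorflow_text:

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 -- registers the TF Text ops the preprocessor uses

# Assumed handles (the tutorial lets you pick the preprocessor/encoder pair).
PREPROCESS_HANDLE = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
ENCODER_HANDLE = ("https://tfhub.dev/tensorflow/small_bert/"
                  "bert_en_uncased_L-2_H-128_A-2/2")

text_input = tf.keras.Input(shape=(), dtype=tf.string, name="text")
preprocessor = hub.KerasLayer(PREPROCESS_HANDLE)
encoder_inputs = preprocessor(text_input)      # TF Text ops live in this subgraph
encoder = hub.KerasLayer(ENCODER_HANDLE, trainable=True)
pooled_output = encoder(encoder_inputs)["pooled_output"]
outputs = tf.keras.layers.Dense(2, name="classifier")(pooled_output)  # stand-in head

# Preprocessing + encoder + head are exported together, so the SavedModel
# accepts raw strings but still depends on the tensorflow_text op kernels.
model_for_export = tf.keras.Model(text_input, outputs)
model_for_export.save("/path/to/saved_model", include_optimizer=False)
```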
Any help would be much appreciated, thanks so much!