TFLITE: Benchmarking failure on GPT2 quantized autocomplete.tflite #62506

Closed
suyash-narain opened this issue Nov 30, 2023 · 34 comments
Labels: comp:lite (TF Lite related issues), stat:awaiting tensorflower (Status - Awaiting response from tensorflower), TF 2.15 (For issues related to 2.15.x), type:bug (Bug)

Comments

@suyash-narain

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): aarch64 linux

  • TensorFlow installed from (source or binary): binary

  • TensorFlow version (use command below): tf 2.15

  • Python version: python3.10.9

  • Exact command to reproduce: linux_aarch64_benchmark_model --graph=autocomplete.tflite --num_threads=1 --num_runs=10

Describe the problem

I am using an aarch64 device similar to a Raspberry Pi. I created the GPT2 autocomplete.tflite model using the official Colab tutorial: https://colab.research.google.com/github/tensorflow/codelabs/blob/main/KerasNLP/io2023_workshop.ipynb#scrollTo=uLsz2IcN46eb

I was able to create both the quantized and unquantized tflite models. I then tried to benchmark them on the aarch64 device using the official nightly TFLite benchmark binary for linux_aarch64 (TF 2.15), sourced from https://www.tensorflow.org/lite/performance/measurement#native_benchmark_binary

On running the benchmark with the command: linux_aarch64_benchmark_model --graph=autocomplete.tflite --num_threads=1 --num_runs=10

I get benchmarking failure errors. I am running the model on the CPU itself, but it seems the ops are unsupported. I get the same benchmarking failure when running the unquantized version as well. The logs are below.

root@user:~# ./linux_aarch64_benchmark_model --graph=autocomplete.tflite --num_threads=1 --num_runs=10
INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Num threads: [4]
INFO: Graph: [autocomplete.tflite]
INFO: #threads used for CPU inference: [4]
INFO: Loaded model autocomplete.tflite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
ERROR: Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. For the Android, it can be resolved by adding "org.tensorflow:tensorflow-lite-select-tf-ops" dependency. See instructions: https://www.tensorflow.org/lite/guide/ops_select
ERROR: Node number 2 (FlexMutableHashTableV2) failed to prepare.
ERROR: Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. For the Android, it can be resolved by adding "org.tensorflow:tensorflow-lite-select-tf-ops" dependency. See instructions: https://www.tensorflow.org/lite/guide/ops_select
ERROR: Node number 2 (FlexMutableHashTableV2) failed to prepare.
ERROR: Failed to allocate tensors!
ERROR: Benchmarking failed.

How do I benchmark this model correctly? Since it's a tflite model, I should be able to benchmark it with the TFLite benchmark tool.

When I try the model with the Flex-delegate benchmark binary, benchmarking does not appear to fail outright, but it prints a lot of error messages, emits a final INFO line with some average values, and exits soon after. The logs are below:

root@user:~# linux_aarch64_benchmark_model_plus_flex --graph=autocomplete.tflite --num_threads=1
INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Num threads: [1]
INFO: Graph: [autocomplete.tflite]
INFO: #threads used for CPU inference: [1]
INFO: Loaded model autocomplete.tflite
INFO: Created TensorFlow Lite delegate for select TF ops.
INFO: TfLiteFlexDelegate delegate: 29 nodes delegated out of 1139 nodes with 14 partitions.

ERROR: Op type not registered 'RegexSplitWithOffsets' in binary running on device. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib (e.g. tf.contrib.resampler), accessing should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
ERROR: Op type not registered 'RegexSplitWithOffsets' in binary running on device. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib (e.g. tf.contrib.resampler), accessing should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
WARNING: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors (tensor#243 is a dynamic-sized tensor).
INFO: The input model file size (MB): 129.674
INFO: Initialized session in 131.484ms.
INFO: Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
...
<repeated a lot>
INFO: count=113 first=216495 curr=60356 min=22 max=216495 avg=4851.66 std=26505

@sushreebarsa sushreebarsa added comp:lite TF Lite related issues type:performance Performance Issue TF 2.15 For issues related to 2.15.x labels Dec 2, 2023
@sushreebarsa
Contributor

@suyash-narain Could you please make sure you have included the necessary libraries for the "RegexSplitWithOffsets" op, such as the TF Text library, and try rebuilding the runtime with the appropriate flag enabled?
Also, if you don't need the XNNPACK delegate for performance reasons, try disabling it.
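For example, the benchmark binary accepts a flag for this (a minimal sketch; the flag is assumed from the standard TFLite benchmark tool):

./linux_aarch64_benchmark_model --graph=autocomplete.tflite --num_threads=1 --num_runs=10 --use_xnnpack=false

Thank you!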

@sushreebarsa sushreebarsa added the stat:awaiting response Status - Awaiting response from author label Dec 2, 2023
@suyash-narain
Author

suyash-narain commented Dec 4, 2023

Hi @sushreebarsa, the RegexSplitWithOffsets op error only comes into the picture when I use the benchmark model with the Flex delegate. Is using the Flex delegate a necessity with the autocomplete.tflite model? Why is the general tflite nightly benchmark model not able to execute this tflite model?
Do I need to enable the tflite task library? I thought that is enabled by default when we build the runtime.

Does the prebuilt benchmark model from tflite not have support for the TensorFlow Text libraries?

Which flag do I need to enable while building the tflite runtime for the tflite text library? I cannot find any references on either the tensorflow GitHub or the tflite documentation.

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label Dec 4, 2023
@sushreebarsa
Contributor

@suyash-narain Using the Flex delegate is not strictly necessary with the autocomplete.tflite model, but it is highly recommended for two primary reasons: unsupported operations and performance. The general TFLite nightly benchmark model is not able to execute this tflite model because of missing dependencies or incompatible versions.

You don't need to explicitly enable the TensorFlow Lite Task Library in most cases. It is automatically included in the TFLite runtime and utilizes the same inference infrastructure as the standard TFLite API.

Unfortunately, the prebuilt benchmark model from TFLite currently does not have native support for TensorFlow Text libraries.
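For reference, the Flex-enabled benchmark binary itself can be built from source; a minimal sketch (this is the target used later in this thread; add a cross-compile config such as --config=elinux_aarch64 as needed):

bazel build -c opt //tensorflow/lite/tools/benchmark:benchmark_model_plus_flex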

Thank you!

@sushreebarsa sushreebarsa added the stat:awaiting response Status - Awaiting response from author label Dec 6, 2023
@suyash-narain
Author

Hi @sushreebarsa, thanks for your reply.
I still don't understand. Building the tflite runtime should have the Task Library enabled by default in most cases, yet the default tflite benchmark model (even the one with the Flex delegate) cannot benchmark the autocomplete.tflite GPT2 model.
How do I benchmark it then?
If I build my own benchmark model, which flags do I need to enable? Or would building the default tflite benchmark model work for TF 2.15?

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label Dec 7, 2023
@LakshmiKalaKadali LakshmiKalaKadali added the subtype: raspberry pi Raspberry Pi Build/Installation Issues label Dec 11, 2023
@suyash-narain
Author

Hi @LakshmiKalaKadali any updates?

@LakshmiKalaKadali
Contributor

Hi @pkgoogle,

Please look into the issue.

Thank You

@pkgoogle

I was able to replicate this on linux x86_64:

./benchmark_model_plus_flex --graph=autocomplete.tflite --num_threads=1 --num_runs=10

INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Min num runs: [10]
INFO: Num threads: [1]
INFO: Graph: [autocomplete.tflite]
INFO: #threads used for CPU inference: [1]
INFO: Loaded model autocomplete.tflite
INFO: The input model file size (MB): 129.674
INFO: Initialized session in 36.666ms.
INFO: Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
INFO: count=23009 first=38091 curr=20 min=18 max=38091 avg=21.434 std=251

INFO: Created TensorFlow Lite delegate for select TF ops.
2023-12-14 00:29:53.965212: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
INFO: TfLiteFlexDelegate delegate: 29 nodes delegated out of 1139 nodes with 14 partitions.

ERROR: Op type not registered 'RegexSplitWithOffsets' in binary running on xxxxxx.xxxxxx.xxxxxx. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib (e.g. `tf.contrib.resampler`), accessing should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
ERROR: Op type not registered 'RegexSplitWithOffsets' in binary running on xxxxxxxx.xxxxxx.xxxxxxx. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib (e.g. `tf.contrib.resampler`), accessing should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
WARNING: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors (tensor#243 is a dynamic-sized tensor).
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
ERROR: Delegate kernel was not initialized
ERROR: Node number 1139 (TfLiteFlexDelegate) failed to prepare.
...
<repeated a lot>

@yijie-yang can you please take a look? Thanks.

@pkgoogle pkgoogle added type:bug Bug stat:awaiting tensorflower Status - Awaiting response from tensorflower and removed subtype: raspberry pi Raspberry Pi Build/Installation Issues type:performance Performance Issue labels Dec 14, 2023
@yijie-yang
Contributor

Yes, to benchmark the GPT2 model you need some extra dependencies. There are 2 steps:

  1. Add this library to your workspace: https://github.com/tensorflow/text/tree/master

  2. Under tensorflow/tensorflow/lite/tools/benchmark/BUILD, add the dependency "//tensorflow_text:ops_lib" to your benchmark_model_plus_flex binary (see the sketch below).

Then you should be good to go!
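For step 2, the deps list of the benchmark_model_plus_flex target would gain one entry, roughly like this (a sketch; the rest of the target is abbreviated):

    tf_cc_binary(
        name = "benchmark_model_plus_flex",
        # srcs, copts, linkopts as in the existing target
        deps = [
            ":benchmark_tflite_model_lib",
            "//tensorflow/lite/delegates/flex:delegate",
            "//tensorflow_text:ops_lib",  # the added dependency from step 2
        ],
    )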

@pkgoogle

@suyash-narain, can you try the above and let us know if your issue is resolved? Thanks.

@pkgoogle pkgoogle added the stat:awaiting response Status - Awaiting response from author label Dec 26, 2023
@suyash-narain
Author

Let me try this and get back to you. Thanks.

@google-ml-butler google-ml-butler bot removed stale This label marks the issue/pr stale - to be closed automatically if no activity stat:awaiting response Status - Awaiting response from author labels Jan 4, 2024
@pkgoogle pkgoogle added the stat:awaiting response Status - Awaiting response from author label Jan 4, 2024
@suyash-narain
Author

Hi @pkgoogle @yijie-yang

I was trying to build the benchmark_model_plus_flex binary using the mentioned changes and ran into an error.

ERROR: /home/tensorflow/tensorflow_text/core/kernels/BUILD:35:14: no such package '@local_config_tf//': The repository '@local_config_tf' could not be resolved: Repository '@local_config_tf' is not defined and referenced by '//tensorflow_text/core/kernels:boise_offset_converter_kernel'

I downloaded tensorflow-text from the mentioned link and put it inside my workspace.
I also made changes to the BUILD file by adding the '//tensorflow_text:ops_lib' dependency.

I am not sure what the '@local_config_tf' error pertains to here.

The log is below:

user@user:~/tensorflow$ bazel build -c opt --config=elinux_aarch64 //tensorflow/lite/tools/benchmark:benchmark_model_plus_flex
INFO: Reading 'startup' options from /home/tensorflow/.bazelrc: --windows_enable_symlinks
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=203
INFO: Reading rc options for 'build' from /home/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/tensorflow/.bazelrc:
'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --features=-force_no_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Found applicable config definition build:short_logs in file /home/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:elinux_aarch64 in file /home/tensorflow/.bazelrc: --config=elinux --cpu=aarch64
INFO: Found applicable config definition build:elinux in file /home/tensorflow/.bazelrc: --crosstool_top=@local_config_embedded_arm//:toolchain --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
INFO: Found applicable config definition build:linux in file /home/tensorflow/.bazelrc: --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --copt=-Wno-error=unused-but-set-variable --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
INFO: Repository boringssl instantiated at:
/home/tensorflow/WORKSPACE:84:14: in <toplevel>
/home/tensorflow/tensorflow/workspace2.bzl:928:21: in workspace
/home/tensorflow/tensorflow/workspace2.bzl:469:20: in _tf_repositories
/home/tensorflow/third_party/repo.bzl:136:21: in tf_http_archive
Repository rule _tf_http_archive defined at:
/home/tensorflow/third_party/repo.bzl:89:35: in <toplevel>
INFO: Repository curl instantiated at:
/home/tensorflow/WORKSPACE:84:14: in <toplevel>
/home/tensorflow/tensorflow/workspace2.bzl:928:21: in workspace
/home/tensorflow/tensorflow/workspace2.bzl:410:20: in _tf_repositories
/home/tensorflow/third_party/repo.bzl:136:21: in tf_http_archive
Repository rule _tf_http_archive defined at:
/home/tensorflow/third_party/repo.bzl:89:35: in <toplevel>
INFO: Repository icu instantiated at:
/home/tensorflow/WORKSPACE:84:14: in <toplevel>
/home/tensorflow/tensorflow/workspace2.bzl:921:28: in workspace
/home/tensorflow/tensorflow/workspace2.bzl:75:8: in _initialize_third_party
/home/tensorflow/third_party/icu/workspace.bzl:8:20: in repo
/home/tensorflow/third_party/repo.bzl:136:21: in tf_http_archive
Repository rule _tf_http_archive defined at:
/home/tensorflow/third_party/repo.bzl:89:35: in <toplevel>
INFO: Repository armhf_linux_toolchain instantiated at:
/home/tensorflow/WORKSPACE:84:14: in <toplevel>
/home/tensorflow/tensorflow/workspace2.bzl:928:21: in workspace
/home/tensorflow/tensorflow/workspace2.bzl:258:20: in _tf_repositories
/home/tensorflow/third_party/repo.bzl:136:21: in tf_http_archive
Repository rule _tf_http_archive defined at:
/home/tensorflow/third_party/repo.bzl:89:35: in <toplevel>
INFO: Repository aarch64_linux_toolchain instantiated at:
/home/tensorflow/WORKSPACE:84:14: in <toplevel>
/home/tensorflow/tensorflow/workspace2.bzl:928:21: in workspace
/home/tensorflow/tensorflow/workspace2.bzl:250:20: in _tf_repositories
/home/tensorflow/third_party/repo.bzl:136:21: in tf_http_archive
Repository rule _tf_http_archive defined at:
/home/tensorflow/third_party/repo.bzl:89:35: in <toplevel>
INFO: Repository XNNPACK instantiated at:
/home/tensorflow/WORKSPACE:84:14: in <toplevel>
/home/tensorflow/tensorflow/workspace2.bzl:928:21: in workspace
/home/tensorflow/tensorflow/workspace2.bzl:151:20: in _tf_repositories
/home/tensorflow/third_party/repo.bzl:136:21: in tf_http_archive
Repository rule _tf_http_archive defined at:
/home/tensorflow/third_party/repo.bzl:89:35: in <toplevel>
INFO: Repository double_conversion instantiated at:
/home/tensorflow/WORKSPACE:84:14: in <toplevel>
/home/tensorflow/tensorflow/workspace2.bzl:928:21: in workspace
/home/tensorflow/tensorflow/workspace2.bzl:629:20: in _tf_repositories
/home/tensorflow/third_party/repo.bzl:136:21: in tf_http_archive
Repository rule _tf_http_archive defined at:
/home/tensorflow/third_party/repo.bzl:89:35: in <toplevel>
ERROR: /home/tensorflow/tensorflow_text/core/kernels/BUILD:35:14: no such package '@local_config_tf//': The repository '@local_config_tf' could not be resolved: Repository '@local_config_tf' is not defined and referenced by '//tensorflow_text/core/kernels:boise_offset_converter_kernel'
ERROR: Analysis of target '//tensorflow/lite/tools/benchmark:benchmark_model_plus_flex' failed; build aborted:
INFO: Elapsed time: 1.277s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (53 packages loaded, 1743 targets configured)
currently loading: @gif// ... (2 packages)
Fetching https://storage.googleapis.com/mirror.tensorflow.org/curl.se/download/curl-8.4.0.tar.gz; 3.0 MiB (3,137,536B)
Fetching https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/XNNPACK/archive/bbbaa7352a3ea729987d3e654d37be93e8009691.zip; 977.6 KiB (1,001,081B)
Fetching https://storage.googleapis.com/.../developer.arm.com/-/media/Files/downloads/gnu/11.3.rel1/binrel/arm-gnu-toolchain-11.3.rel1-x86_64-aarch64-none-linux-gnu.tar.xz; 365.5 KiB (374,310B)
Fetching https://storage.googleapis.com/.../developer.arm.com/-/media/Files/downloads/gnu/11.3.rel1/binrel/arm-gnu-toolchain-11.3.rel1-x86_64-arm-none-linux-gnueabihf.tar.xz; 759.0 KiB (777,173B)
Fetching https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/boringssl/archive/c00d7ca810e93780bd0c8ee4eea28f4f2ea4bcdc.tar.gz; 261.5 KiB (267,782B)
Fetching https://storage.googleapis.com/mirror.tensorflow.org/github.com/unicode-org/icu/archive/release-69-1.zip
Fetching https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/double-conversion/archive/v3.2.0.tar.gz

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label Jan 9, 2024
@yijie-yang
Contributor

Hmmm... could you try adding the dep @org_tensorflow_text//tensorflow_text:ops_lib instead? Some great discussion of a similar issue can be found in #50924.

@suyash-narain
Author

Hi @yijie-yang

This time I get the error:

ERROR: /home/mtk/tensorflow/tensorflow/lite/tools/benchmark/BUILD:74:13: no such package '@org_tensorflow_text//tensorflow_text': The repository '@org_tensorflow_text' could not be resolved: Repository '@org_tensorflow_text' is not defined and referenced by '//tensorflow/lite/tools/benchmark:benchmark_model_plus_flex'
ERROR: Analysis of target '//tensorflow/lite/tools/benchmark:benchmark_model_plus_flex' failed; build aborted: Analysis failed

The BUILD snippet is below:

tf_cc_binary(
    name = "benchmark_model_plus_flex",
    srcs = [
        "benchmark_plus_flex_main.cc",
    ],
    copts = common_copts,
    linkopts = tflite_linkopts() + select({
        "//tensorflow:android": [
            "-pie",  # Android 5.0 and later supports only PIE
            "-lm",  # some builtin ops, e.g., tanh, need -lm
        ],
        "//conditions:default": [],
    }),
    deps = [
        ":benchmark_tflite_model_lib",
        "//tensorflow/lite/delegates/flex:delegate",
        "//tensorflow/lite/testing:init_tensorflow",
        "//tensorflow/lite/tools:logging",
        "@org_tensorflow_text//tensorflow_text:ops_lib",
    ],
)

@yijie-yang
Contributor

Sorry, my mistake.

ERROR: /home/tensorflow/tensorflow_text/core/kernels/BUILD:35:14: no such package '@local_config_tf//': The repository '@local_config_tf' could not be resolved: Repository '@local_config_tf' is not defined and referenced by '//tensorflow_text/core/kernels:boise_offset_converter_kernel'

It's likely an error with your repository rules or directory structure. Could you refer to this discussion and make the changes to WORKSPACE and BUILD? https://stackoverflow.com/questions/53254061/failing-to-bazel-build-c-project-with-tensorflow-as-a-dependency
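For example, if tf-text is cloned next to your TF checkout, a minimal local_repository sketch would be (path assumed; local_repository must point at the directory that contains tf-text's own WORKSPACE file, i.e. the repo root rather than a subdirectory):

    local_repository(
        name = "org_tensorflow_text",
        path = "text",  # root of the cloned tensorflow/text repo
    )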

@suyash-narain
Author

suyash-narain commented Jan 9, 2024

Hi @yijie-yang
I think I was getting the error because I didn't add the tensorflow-text rules to my WORKSPACE. How can I add them to this file?
I copied the tensorflow-text folder into the tensorflow directory, but I am not sure how to reference it in the WORKSPACE file.
How do I integrate the tensorflow-text WORKSPACE file with the tensorflow WORKSPACE?
Any suggestions?

@suyash-narain
Author

suyash-narain commented Jan 9, 2024

Hi @yijie-yang,
In my WORKSPACE file, I added the following:

local_repository(
        name = "org_tensorflow_text",
        path = "text/tensorflow_text",
)

I get the error log below. What other changes am I missing in the WORKSPACE file?

user@user:~/tensorflow$ bazel build -c opt --config=elinux_aarch64 //tensorflow/lite/tools/benchmark:benchmark_model_plus_flex
INFO: Reading 'startup' options from /home/tensorflow/.bazelrc: --windows_enable_symlinks
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=203
INFO: Reading rc options for 'build' from /home/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/tensorflow/.bazelrc:
'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --features=-force_no_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Found applicable config definition build:short_logs in file /home/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:elinux_aarch64 in file /home/tensorflow/.bazelrc: --config=elinux --cpu=aarch64
INFO: Found applicable config definition build:elinux in file /home/tensorflow/.bazelrc: --crosstool_top=@local_config_embedded_arm//:toolchain --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
INFO: Found applicable config definition build:linux in file /home/tensorflow/.bazelrc: --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --copt=-Wno-error=unused-but-set-variable --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
ERROR: /home/tensorflow/WORKSPACE:72:17: fetching local_repository rule //external:org_tensorflow_text: java.io.IOException: No WORKSPACE file found in /home/.cache/bazel/_bazel/716ac13c348ce3335128b3d9f4131682/external/org_tensorflow_text
ERROR: /home/tensorflow/tensorflow/lite/tools/benchmark/BUILD:74:13: //tensorflow/lite/tools/benchmark:benchmark_model_plus_flex depends on @org_tensorflow_text//tensorflow_text:ops_lib in repository @org_tensorflow_text which failed to fetch. no such package '@org_tensorflow_text//tensorflow_text': No WORKSPACE file found in /home/.cache/bazel/_bazel/716ac13c348ce3335128b3d9f4131682/external/org_tensorflow_text
ERROR: Analysis of target '//tensorflow/lite/tools/benchmark:benchmark_model_plus_flex' failed; build aborted: Analysis failed
INFO: Elapsed time: 0.595s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (3 packages loaded, 6 targets configured)

@yijie-yang
Contributor

Did you build and install both tensorflow and tensorflow_text properly?

@suyash-narain
Author

@yijie-yang,

I have both TF and tensorflow_text installed via pip. The build from source also simply builds the pip wheel, which is then installed. Is there anything else I need to add to the WORKSPACE file?

@yijie-yang
Contributor

Let me reproduce your error locally later today.

@suyash-narain
Author

thanks

@pkgoogle pkgoogle added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jan 10, 2024
@yijie-yang
Contributor

Hi @suyash-narain,

Sorry, I'm too busy with my current work and don't have enough time to repro your error. Could you refer to this doc (https://bazel.build/reference/be/workspace) for workspace setup?

@suyash-narain
Author

Hi @yijie-yang,
I did refer to it before, but the local_repository rules still give me the same error. Do you have an example workspace I can refer to?
I am not entirely sure how to add tf-text to the tf workspace.
Creating the local_repository gives me the same error as before.
Thanks

@yijie-yang
Contributor

Hi @broken,

Could you provide some insights to this?

@broken
Member

broken commented Jan 11, 2024

The cause of the issue is that TF Text tries to link with the tensorflow shared library, but since you are building this inside TF, this isn't needed. I think you should use the tensorflow serving (model server) library as an example. After viewing its workspace.bzl file, instead of local_repository, try:

    http_archive(
        name = "org_tensorflow_text",
        sha256 = "4e6ec543a1d70a50f0105e0ea69ea8a1edd0b17a38d0244aa3b14f889b2cf74d",
        strip_prefix = "text-2.12.1",
        url = "https://github.com/tensorflow/text/archive/v2.12.1.zip",
        patches = ["@//third_party/tf_text:tftext.patch"],
        patch_args = ["-p1"],
        repo_mapping = {"@com_google_re2": "@com_googlesource_code_re2"},
    )

you can then link in tf text ops with: "@org_tensorflow_text//tensorflow_text:ops_lib",

Serving also has patches, but you should be able to ignore those, as they exist only because serving builds with an older version of C++. You should also copy this directory into the TF third_party one. The patch rewrites @local_config_tf to @org_tensorflow. Though, does @org_tensorflow exist for your target? Maybe update the patch so those references point correctly, by simply removing @org_tensorflow?

edit:

  • updated notes on the patch
  • you may need to update the version in http_archive.
  • the "repo_mapping" may not be needed. It depends on what core TF is using for its re2 lib name.

@suyash-narain
Author

suyash-narain commented Jan 11, 2024

Hi @broken ,

I am not sure what I am doing wrong here.
My WORKSPACE file is as below:

workspace(name = "org_tensorflow")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "bazel_skylib",
    sha256 = "74d544d96f4a5bb630d465ca8bbcfe231e3594e5aae57e1edbf17a6eb3ca2506",
    urls = [
        "https://storage.googleapis.com/mirror.tensorflow.org/github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz",
        "https://github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz",
    ],
)

http_archive(
    name = "rules_python",
    sha256 = "9d04041ac92a0985e344235f5d946f71ac543f1b1565f2cdbc9a2aaee8adf55b",
    strip_prefix = "rules_python-0.26.0",
    url = "https://github.com/bazelbuild/rules_python/releases/download/0.26.0/rules_python-0.26.0.tar.gz",
)

http_archive(
    name = "org_tensorflow_text",
    sha256 = "4e6ec543a1d70a50f0105e0ea69ea8a1edd0b17a38d0244aa3b14f889b2cf74d",
    strip_prefix = "text-2.12.1",
    url = "https://github.com/tensorflow/text/archive/v2.12.1.zip",
    repo_mapping = {"@com_google_re2": "@com_googlesource_code_re2"},
)

load("@rules_python//python:repositories.bzl", "py_repositories")

py_repositories()

load("@rules_python//python:repositories.bzl", "python_register_toolchains")
load(
    "//tensorflow/tools/toolchains/python:python_repo.bzl",
    "python_repository",
)

python_repository(name = "python_version_repo")

load("@python_version_repo//:py_version.bzl", "HERMETIC_PYTHON_VERSION")

python_register_toolchains(
    name = "python",
    ignore_root_user_error = True,
    python_version = HERMETIC_PYTHON_VERSION,
)

load("@python//:defs.bzl", "interpreter")
load("@rules_python//python:pip.bzl", "package_annotation", "pip_parse")

NUMPY_ANNOTATIONS = {
    "numpy": package_annotation(
        additive_build_content = """
filegroup(
    name = "includes",
    srcs = glob(["site-packages/numpy/core/include/**/*.h"]),
)
cc_library(
    name = "numpy_headers",
    hdrs = [":includes"],
    strip_include_prefix="site-packages/numpy/core/include/",
)
""",
    ),
}

pip_parse(
    name = "pypi",
    annotations = NUMPY_ANNOTATIONS,
    python_interpreter_target = interpreter,
    requirements = "//:requirements_lock_" + HERMETIC_PYTHON_VERSION.replace(".", "_") + ".txt",
)

load("@pypi//:requirements.bzl", "install_deps")

install_deps()

# load("@//text/tensorflow_text:tftext.bzl", "tf_text_workspace")
# tf_text_workspace()

load("@//tensorflow:workspace3.bzl", "tf_workspace3")

tf_workspace3()

load("@//tensorflow:workspace2.bzl", "tf_workspace2")

tf_workspace2()

load("@//tensorflow:workspace1.bzl", "tf_workspace1")

tf_workspace1()

load("@//tensorflow:workspace0.bzl", "tf_workspace0")

tf_workspace0()

On trying to build the benchmark_model_plus_flex binary, I get the error:


ERROR: /home/.cache/bazel/_bazel/716ac13c348ce3335128b3d9f4131682/external/org_tensorflow_text/tensorflow_text/core/kernels/sentencepiece/BUILD:178:14: no such package '@local_config_tf//': The repository '@local_config_tf' could not be resolved: Repository '@local_config_tf' is not defined and referenced by '@org_tensorflow_text//tensorflow_text/core/kernels/sentencepiece:sentencepiece_tokenizer_kernel'
ERROR: Analysis of target '//tensorflow/lite/tools/benchmark:benchmark_model_plus_flex' failed; build aborted: 
INFO: Elapsed time: 3.013s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (82 packages loaded, 326 targets configured)
    currently loading: @com_google_absl//absl/base ... (6 packages)

My file structure is:

tensorflow
      |--tensorflow (tf stuff here)
      |--text (tensorflow-text cloned inside tf repository)
      |--WORKSPACE (tf workspace as it is from tf repo with tf-text changes added)

Edit:
Do I need to add tf serving? If so, where should it be added? I am just cloning tensorflow/tensorflow to build the benchmark model and cloning tensorflow-text in the same directory.

@suyash-narain
Author

Hi @broken,
A follow-up to your edits: do I need to add tf serving? If so, where should it be added? I am just cloning tensorflow/tensorflow to build the benchmark model and cloning tensorflow-text in the same directory.

@broken
Member

broken commented Jan 11, 2024

Apologies, I actually had edited my comment, and maybe you viewed it before the edit. The patch file is necessary, but not tf serving.

In your file structure, add a third_party directory, and copy the tf_text directory that I had linked above that contains the patch file. Then update your workspace to include that patch to tf text.
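The resulting layout would look roughly like this (a sketch; file names assumed from the serving repo's third_party/tf_text directory):

    tensorflow/
        third_party/
            tf_text/
                BUILD
                tftext.patch

with the org_tensorflow_text http_archive in your WORKSPACE gaining the two patch attributes:

        patches = ["@//third_party/tf_text:tftext.patch"],
        patch_args = ["-p1"],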

@suyash-narain
Author

Hi @broken,

I tried your suggestions and am no longer getting the @org_tensorflow_text errors, but I started getting build failures related to Eigen that I am not sure how to resolve.
I am using Bazel v6.1.0.
The error log is attached:
error.log

I get errors like:

ERROR: /home/mtk/tensorflow/tensorflow/c/BUILD:411:11: Compiling tensorflow/c/tf_status.cc failed: (Exit 1): aarch64-none-linux-gnu-gcc failed: error executing command (from target //tensorflow/c:tf_status) /home/mtk/.cache/bazel/_bazel_mtk/716ac13c348ce3335128b3d9f4131682/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc -fstack-protector -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections ... (remaining 144 arguments skipped)
In file included from external/local_tsl/tsl/platform/types.h:21,
from external/local_tsl/tsl/platform/default/logging.h:38,
from external/local_tsl/tsl/platform/logging.h:26,
from external/local_tsl/tsl/platform/status.h:34,
from external/local_tsl/tsl/c/tsl_status_internal.h:19,
from ./tensorflow/c/tf_status_internal.h:19,
from tensorflow/c/tf_status.cc:20:
external/local_tsl/tsl/platform/bfloat16.h:24:16: error: 'bfloat16' in namespace 'Eigen' does not name a type
24 | typedef Eigen::bfloat16 bfloat16;
| ^~~~~~~~
In file included from external/local_tsl/tsl/platform/ml_dtypes.h:19,
from external/local_tsl/tsl/platform/types.h:22,
from external/local_tsl/tsl/platform/default/logging.h:38,
from external/local_tsl/tsl/platform/logging.h:26,
from external/local_tsl/tsl/platform/status.h:34,
from external/local_tsl/tsl/c/tsl_status_internal.h:19,
from ./tensorflow/c/tf_status_internal.h:19,
from tensorflow/c/tf_status.cc:20:
bazel-out/aarch64-opt/bin/external/ml_dtypes/_virtual_includes/float8/ml_dtypes/include/float8.h:71:57: error: expected ')' before 'bf16'
71 | explicit EIGEN_DEVICE_FUNC float8_base(Eigen::bfloat16 bf16)

My WORKSPACE file has the following changes:

http_archive(
    name = "org_tensorflow_text",
    sha256 = "70838b0474d4e15802f0771bdbbcd82fcce89bf5eccd78f8f9ae10fce520ffa4",
    strip_prefix = "text-2.15.0",
    url = "https://github.com/tensorflow/text/archive/v2.15.0.zip",
    patches = ["@//third_party/tf_text:tftext.patch"],
    patch_args = ["-p1"],
)

http_archive(
    name = "com_google_sentencepiece",
    strip_prefix = "sentencepiece-0.1.99",
    sha256 = "68dbb82ccd8261da7b6088d9da988368798556284f84562e572df9e61e7fd4e2",
    urls = [
        "https://github.com/google/sentencepiece/archive/refs/tags/v0.1.99.zip",
    ],
    build_file = "//third_party/sentencepiece:BUILD",
)

http_archive(
    name = "com_google_glog",
    sha256 = "1ee310e5d0a19b9d584a855000434bb724aa744745d5b8ab1855c85bff8a8e21",
    strip_prefix = "glog-028d37889a1e80e8a07da1b8945ac706259e5fd8",
    urls = [
        "https://mirror.bazel.build/github.com/google/glog/archive/028d37889a1e80e8a07da1b8945ac706259e5fd8.tar.gz",
        "https://github.com/google/glog/archive/028d37889a1e80e8a07da1b8945ac706259e5fd8.tar.gz",
    ],
)

http_archive(
    name = "darts_clone",
    build_file = "//third_party/darts_clone:BUILD",
    sha256 = "c97f55d05c98da6fcaf7f9ecc6a6dc6bc5b18b8564465f77abff8879d446491c",
    strip_prefix = "darts-clone-e40ce4627526985a7767444b6ed6893ab6ff8983",
    urls = [
        "https://github.com/s-yata/darts-clone/archive/e40ce4627526985a7767444b6ed6893ab6ff8983.zip",
    ],
)

The complete error log is attached. Do you have any suggestions for these Eigen-related errors?

@suyash-narain
Author

Hi @broken,

I am not getting the Eigen issues now; I had to hide /usr/include/Eigen and that solved the error. But I am getting some weird sentencepiece errors, shown below:

ERROR: /home/.cache/bazel/_bazel/716ac13c348ce3335128b3d9f4131682/external/com_google_sentencepiece/BUILD.bazel:49:11: Compiling src/error.cc failed: (Exit 1): aarch64-none-linux-gnu-gcc failed: error executing command (from target @com_google_sentencepiece//:sentencepiece_processor) /home/.cache/bazel/_bazel_716ac13c348ce3335128b3d9f4131682/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc -fstack-protector -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections ... (remaining 77 arguments skipped)
In file included from external/com_google_sentencepiece/src/init.h:25,
                 from external/com_google_sentencepiece/src/error.cc:18:
external/com_google_sentencepiece/third_party/protobuf-lite/google/protobuf/message_lite.h:79:8: error: redefinition of 'struct google::protobuf::internal::ConstantInitialized'
   79 | struct ConstantInitialized {
      |        ^~~~~~~~~~~~~~~~~~~
In file included from external/com_google_protobuf/src/google/protobuf/io/coded_stream.h:134,
                 from external/com_google_sentencepiece/third_party/protobuf-lite/google/protobuf/message_lite.h:47,
                 from external/com_google_sentencepiece/src/init.h:25,
                 from external/com_google_sentencepiece/src/error.cc:18:
external/com_google_protobuf/src/google/protobuf/port.h:64:8: note: previous definition of 'struct google::protobuf::internal::ConstantInitialized'
   64 | struct ConstantInitialized {
      |        ^~~~~~~~~~~~~~~~~~~
In file included from external/com_google_sentencepiece/src/init.h:25,
                 from external/com_google_sentencepiece/src/error.cc:18:
external/com_google_sentencepiece/third_party/protobuf-lite/google/protobuf/message_lite.h:154:1: error: 'PROTOBUF_DISABLE_MSVC_UNION_WARNING' does not name a type
  154 | PROTOBUF_DISABLE_MSVC_UNION_WARNING
      | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
external/com_google_sentencepiece/third_party/protobuf-lite/google/protobuf/message_lite.h:166:1: error: 'PROTOBUF_ENABLE_MSVC_UNION_WARNING' does not name a type
  166 | PROTOBUF_ENABLE_MSVC_UNION_WARNING
      | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
external/com_google_sentencepiece/third_party/protobuf-lite/google/protobuf/message_lite.h: In function 'constexpr const string& google::protobuf::internal::GetEmptyStringAlreadyInited()':
external/com_google_sentencepiece/third_party/protobuf-lite/google/protobuf/message_lite.h:174:10: error: 'fixed_address_empty_string' was not declared in this scope
  174 |   return fixed_address_empty_string.value;
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~
Target //tensorflow/lite/tools/benchmark:benchmark_model_plus_flex failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 339.254s, Critical Path: 74.85s
INFO: 473 processes: 9 internal, 464 local.
FAILED: Build did NOT complete successfully

Any suggestions?

@suyash-narain
Author

Hi @broken, @yijie-yang

I cleaned and did a fresh build, but this time I get the following Unicode errors:

ERROR: /home/.cache/bazel/_bazel/716ac13c348ce3335128b3d9f4131682/external/org_tensorflow_text/tensorflow_text/core/kernels/BUILD:1135:11: Compiling tensorflow_text/core/kernels/whitespace_tokenizer.cc failed: (Exit 1): aarch64-none-linux-gnu-gcc failed: error executing command (from target @org_tensorflow_text//tensorflow_text/core/kernels:whitespace_tokenizer) /home/.cache/bazel/_bazel/716ac13c348ce3335128b3d9f4131682/external/aarch64_linux_toolchain/bin/aarch64-none-linux-gnu-gcc -fstack-protector -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections ... (remaining 56 arguments skipped)
In file included from /usr/include/unicode/chariter.h:20,
                 from external/icu/icu4c/source/common/unicode/schriter.h:27,
                 from external/org_tensorflow_text/tensorflow_text/core/kernels/whitespace_tokenizer.cc:22:
/usr/include/unicode/unistr.h:3618:68: error: 'FALSE' was not declared in this scope
 3618 |   UnicodeString &copyFrom(const UnicodeString &src, UBool fastCopy=FALSE);
      |                                                                    ^~~~~
/usr/include/unicode/unistr.h:3675:49: error: 'TRUE' was not declared in this scope
 3675 |                             UBool doCopyArray = TRUE,
      |                                                 ^~~~
/usr/include/unicode/unistr.h:3677:48: error: 'FALSE' was not declared in this scope
 3677 |                             UBool forceClone = FALSE);
      |                                                ^~~~~
/usr/include/unicode/unistr.h: In member function 'UBool icu_66::UnicodeString::truncate(int32_t)':
/usr/include/unicode/unistr.h:4735:12: error: 'FALSE' was not declared in this scope
 4735 |     return FALSE;
      |            ^~~~~
/usr/include/unicode/unistr.h:4738:12: error: 'TRUE' was not declared in this scope
 4738 |     return TRUE;
      |            ^~~~
/usr/include/unicode/unistr.h:4740:12: error: 'FALSE' was not declared in this scope
 4740 |     return FALSE;
      |            ^~~~~
In file included from /usr/include/unicode/unifilt.h:20,
                 from external/icu/icu4c/source/common/unicode/uniset.h:21,
                 from external/org_tensorflow_text/tensorflow_text/core/kernels/whitespace_tokenizer.cc:27:
/usr/include/unicode/unimatch.h: At global scope:
/usr/include/unicode/unimatch.h:144:64: error: 'FALSE' was not declared in this scope
  144 |                                      UBool escapeUnprintable = FALSE) const = 0;
      |                                                                ^~~~~
Target //tensorflow/lite/tools/benchmark:benchmark_model_plus_flex failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 106.076s, Critical Path: 86.74s
INFO: 72 processes: 9 internal, 63 local.

This is interesting, because unistr.h already includes <unicode/utypes.h>.

I have no issues building a simple benchmark model without tf-text, but benchmark_model_plus_flex with tf-text is giving me issues. Are you able to build it on your end?

@suyash-narain
Author

Hi @broken @pkgoogle @yijie-yang
Is there any update on this?
Thanks

@gaikwadrahul8
Contributor

Hi, @suyash-narain

Thanks for raising this issue. Are you aware of the migration to LiteRT? This transition is aimed at enhancing our project's capabilities and providing improved support and focus for our users. As we believe this issue is still relevant to LiteRT, we are moving it there. Please follow progress here: google-ai-edge/LiteRT#94

Let us know if you have any questions. Thanks.

@pkgoogle pkgoogle closed this as not planned (won't fix, can't repro, duplicate, stale) Nov 27, 2024