How to change the name of whisper tflite nodes #40

Open
zzy981019 opened this issue Aug 23, 2024 · 3 comments

Comments

@zzy981019

Hi @nyadla-sys ,

I am using your generate_tflite_from_whisper.ipynb to generate Whisper models in TFLite. When I open the converted model with Netron, I find that the names of its input and output nodes are very long and not what I expected.
Do you have any insight into how to change the names of the input and output nodes?

[screenshot: whisper_tflite_from_netron]

Many thanks!

@nyadla-sys
Owner

nyadla-sys commented Aug 23, 2024

Try something like the following:

import tensorflow as tf

# `model` is assumed to be the Whisper model loaded earlier in the notebook.
class GenerateModel(tf.Module):
    def __init__(self, model):
        super(GenerateModel, self).__init__()
        self.model = model

    @tf.function(
        input_signature=[
            tf.TensorSpec((1, 80, 3000), tf.float32, name="new_input_name"),  # Updated input name
        ],
    )
    def serving(self, new_input_name):  # Updated parameter name
        outputs = self.model.generate(
            new_input_name,
            max_new_tokens=450,  # Change as needed
            return_dict_in_generate=True,
        )
        return {"new_output_name": outputs["sequences"]}  # Updated output name

saved_model_dir = '/content/tf_whisper_saved'
tflite_model_path = 'whisper-tiny.en.tflite'

# Create and save the TensorFlow model with updated names
generate_model = GenerateModel(model=model)
tf.saved_model.save(generate_model, saved_model_dir, signatures={"serving_default": generate_model.serving})

# Convert the model to TensorFlow Lite
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # Enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS  # Enable TensorFlow ops.
]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the converted TFLite model
with open(tflite_model_path, 'wb') as f:
    f.write(tflite_model)

@nyadla-sys
Owner

[screenshot]

@zzy981019
Author

zzy981019 commented Aug 23, 2024

Thanks! I have tried the code you attached, and the name of the input tensor has been updated (I attach the pic here). However, must the name of the input tensor follow the format signatureKey + "_" + inputName + ":0"? Can it instead be set to just "input_features" somehow?
[screenshot: whisper_tflite_from_netron_new]
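[Editor's note, not from the thread] The `signatureKey + "_" + inputName + ":0"` pattern comes from how a SavedModel signature is lowered into the TFLite graph, so the low-level tensor name seen in Netron keeps the signature-key prefix. However, the interpreter's signature-runner API exposes the clean name ("input_features") directly. A minimal self-contained sketch, with the Whisper model replaced by a toy doubling module purely for illustration:

```python
import tempfile
import tensorflow as tf

# Toy stand-in for the Whisper model; the real input shape would be (1, 80, 3000).
class Toy(tf.Module):
    @tf.function(input_signature=[
        tf.TensorSpec((1, 4), tf.float32, name="input_features")
    ])
    def serving(self, input_features):
        return {"sequences": input_features * 2.0}

toy = Toy()
with tempfile.TemporaryDirectory() as saved_model_dir:
    tf.saved_model.save(toy, saved_model_dir,
                        signatures={"serving_default": toy.serving})
    tflite_model = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# The low-level tensor name carries the signature-key prefix,
# e.g. "serving_default_input_features:0".
print(interpreter.get_input_details()[0]["name"])

# The signature runner uses the clean name "input_features" as-is.
runner = interpreter.get_signature_runner("serving_default")
out = runner(input_features=tf.ones((1, 4), tf.float32))
print(out["sequences"])  # elements are 2.0
```

In other words, the prefixed name is probably unavoidable at the tensor level, but callers that invoke the model through `get_signature_runner` never see it.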
