Streaming Transformer Transducer #249
Comments
@stefan-falk Hello. The difference between triggered attention and MoChA is the computational complexity of each generation step: triggered attention requires O(T^2), whereas MoChA needs only O(T) because the attention context is very limited.
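For intuition, here is a minimal PyTorch sketch of the chunkwise half of MoChA at inference time. It is not code from any linked repository: the hard monotonic head that selects the boundary t_i is omitted, and the function name, shapes, and chunk size are all illustrative. The point is that soft attention is computed only over a fixed window of w frames ending at the boundary, so each output step costs O(w) rather than O(T).

```python
import torch
import torch.nn.functional as F

def chunkwise_attention_step(query, enc_out, boundary, chunk_size=4):
    # query:    (d,)   decoder state for the current output step
    # enc_out:  (T, d) encoder outputs available so far
    # boundary: index t_i chosen by the (omitted) hard monotonic head
    start = max(0, boundary - chunk_size + 1)
    chunk = enc_out[start:boundary + 1]    # at most chunk_size frames
    scores = chunk @ query                 # dot-product energies, O(chunk_size)
    weights = F.softmax(scores, dim=0)
    return weights @ chunk                 # (d,) context vector

# e.g. context = chunkwise_attention_step(torch.randn(256), torch.randn(1000, 256), boundary=412)
```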
@hirofumi0810 Ah, I see. Then MoChA it is! Thank you. Do you have working code for MoChA training and decoding already? If so, I'd love to take a look at it to get started.
@stefan-falk You can start from here.
@hirofumi0810 Thanks a lot! I'll be looking at the code :) And thanks for your great work! Update: For anybody coming here, there's also a …
@hirofumi0810 Can we even use MoChA inside a Transducer model? I think I misunderstood something along the way here. 😆 What I am looking for is a way to stream a Transducer-based model; in particular, I'd like to be able to stream the Transformer-Transducer (T-T) as in [1]. Are you working towards this as well?
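(One common way to make the self-attention encoder of a T-T streamable, in the spirit of [1], is a chunk-based attention mask that limits each frame to its own chunk plus a bounded left context. A rough sketch with made-up hyperparameters, not the paper's exact masking scheme:)

```python
import torch

def streaming_attention_mask(T, chunk_size=8, left_chunks=2):
    # Frame t may attend to frames in its own chunk plus `left_chunks`
    # preceding chunks, so per-frame cost stays bounded as T grows.
    chunk_id = torch.arange(T) // chunk_size
    q = chunk_id.unsqueeze(1)                   # chunk of the query frame
    k = chunk_id.unsqueeze(0)                   # chunk of the key frame
    return (k <= q) & (k >= q - left_chunks)    # (T, T) bool: True = may attend
```

Depending on your attention implementation, you would either pass the inverted mask as the "disallowed positions" argument or add -inf to the masked logits before the softmax.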
@stefan-falk MoChA is different from Transducer, so it is not common to combine them.
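For context on why they do not combine naturally: a transducer has no encoder-decoder attention for MoChA to replace; the alignment between audio and text is handled implicitly by a joint network evaluated over the (T, U) lattice. A minimal PyTorch sketch of that joint network (all dimensions are assumptions, not taken from any specific implementation):

```python
import torch
import torch.nn as nn

class JointNetwork(nn.Module):
    def __init__(self, enc_dim=256, pred_dim=256, joint_dim=512, vocab_size=1000):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, joint_dim)
        self.pred_proj = nn.Linear(pred_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vocab_size + 1)  # +1 for the blank label

    def forward(self, enc, pred):
        # enc:  (B, T, enc_dim)  encoder outputs
        # pred: (B, U, pred_dim) prediction-network outputs
        joint = torch.tanh(self.enc_proj(enc).unsqueeze(2) +
                           self.pred_proj(pred).unsqueeze(1))  # (B, T, U, joint_dim)
        return self.out(joint)  # (B, T, U, vocab_size + 1) logits for the RNN-T loss
```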
Thank you for the link! However, I have kind of moved away from RNN-based Transducer models. The reason is that I saw how much smaller the Transformer-Transducer (T-T) and Conformer-Transducer (C-T) models are: a 30M-parameter C-T model outperforms a 130M-parameter RNN-T model. On my hardware, I am not even able to train such an RNN-T model 😆 Here is a quick (not very scientific) comparison from my own experiments on a German dataset:
[comparison table omitted]
@stefan-falk Hi Stefan, did you run your experiments on the German dataset with ESPnet? espnet or espnet2?
@jinggaizi This is just a mix of different (public) datasets, e.g. Common Voice and Spoken Wikipedia.
Hi!
I am currently working on a streaming Transformer Transducer (T-T) myself (using TensorFlow) but I'm struggling to get started with the actual inference part. I've been referred to your repository from ESPnet (see espnet/espnet#2533 (comment)), as you may have noticed or will soon notice.
I was wondering if you could share some knowledge on how you are tackling this problem. As for me, I started by looking at "Developing Real-Time Streaming Transformer Transducer for Speech Recognition on Large-Scale Dataset" and noticed that they propose something called triggered attention ([1], [2]). In contrast, what I've been told is that you are using Monotonic Chunkwise Attention (MoChA).
I'm not quite sure how either works in detail, but if you could point me somewhere or help me get started, it would be much appreciated!
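For anyone who, like the author, is unsure how triggered attention works at a high level: a CTC head trained on the same encoder emits non-blank "spikes", and each spike triggers one step of the attention decoder while bounding which encoder frames it may look at. Below is a toy illustration of picking trigger frames from frame-wise CTC posteriors; it is a simplified heuristic (real implementations collapse repeated labels and use proper alignments), and the threshold value is made up:

```python
import torch

def ctc_trigger_frames(ctc_log_probs, blank=0, threshold=-1.0):
    # ctc_log_probs: (T, V) frame-wise CTC log-posteriors
    best_scores, best_labels = ctc_log_probs.max(dim=-1)          # both (T,)
    is_spike = (best_labels != blank) & (best_scores > threshold)
    return torch.nonzero(is_spike).squeeze(-1)  # frame indices that fire the decoder
```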