GPU training and pipeline training from scratch #595
Unanswered
vijaysumaravi asked this question in Q&A
Replies: 1 comment
-
Until a truly end-to-end diarization is implemented, that is indeed the only way.
-
Hi,
I have two questions related to pyannote-pipeline.
I am trying to fine-tune the pretrained speaker diarization pipeline on my own dataset. I am running this command from the tutorial:
pyannote-pipeline train --subset=development --iterations=50 ${EXP_DIR} my.personal.dataset
However, GPU usage stays at 0%: the fine-tuning runs on the CPU instead. I tried setting CUDA_VISIBLE_DEVICES, but that doesn't help either.
The documentation says the GPU is used by default when available, but I don't see that happening. I see you've mentioned that the latest branch solves this issue, but I am on the latest dev branch and still facing it. Am I doing something wrong?
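For context, this is roughly the check I would expect to settle whether PyTorch even sees the GPU in my environment, plus how I understand device pinning to work (I am assuming the pipeline relies on torch's standard CUDA detection; the device index 0 below is just an example, not something from the tutorial):

# Sanity check: does PyTorch see any GPU in this environment?
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"

# Pin one GPU (note the variable name is CUDA_VISIBLE_DEVICES) and re-run training
export CUDA_VISIBLE_DEVICES=0
pyannote-pipeline train --subset=development --iterations=50 ${EXP_DIR} my.personal.dataset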
I was not sure whether to open issues for these, so I started a discussion thread instead. Thank you!