CONTRIBUTING.md
+13 -13
@@ -2,7 +2,7 @@
Welcome to the 🐸TTS!

-This repository is governed by [the Contributor Covenant Code of Conduct](https://github.com/coqui-ai/TTS/blob/main/CODE_OF_CONDUCT.md).
+This repository is governed by [the Contributor Covenant Code of Conduct](https://github.com/eginhard/coqui-tts/blob/main/CODE_OF_CONDUCT.md).

## Where to start.

We welcome everyone who likes to contribute to 🐸TTS.
@@ -15,13 +15,13 @@ If you like to contribute code, squash a bug but if you don't know where to star

You can pick something out of our road map. We keep the progress of the project in this simple issue thread. It has new model proposals, developmental updates, etc.
@@ -135,7 +135,7 @@ You can also help us implement more models.

If you are only interested in [synthesizing speech](https://coqui-tts.readthedocs.io/en/latest/inference.html) with the released 🐸TTS models, installing from PyPI is the easiest option.

```bash
-pip install TTS
+pip install coqui-tts
```

If you plan to code or train models, clone 🐸TTS and install it locally.
@@ -152,7 +152,9 @@ $ make system-deps # intended to be used on Ubuntu (Debian). Let us know if you

$ make install
```

-If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system).
+If you are on Windows, 👑@GuyPaddock wrote installation instructions
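For context, the local install documented by this hunk boils down to cloning the repository and running the two make targets shown above. A minimal sketch, assuming the eginhard/coqui-tts fork referenced elsewhere in this diff and an Ubuntu/Debian system for `make system-deps`:

```bash
# Sketch of a local developer install; the clone URL is taken from the fork
# referenced in this diff, and `make system-deps` is Ubuntu/Debian-specific.
git clone https://github.com/eginhard/coqui-tts.git
cd coqui-tts
make system-deps   # system-level dependencies (Ubuntu/Debian)
make install       # installs 🐸TTS and its Python dependencies
```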
docs/source/faq.md
+2 -2
@@ -3,7 +3,7 @@ We tried to collect common issues and questions we receive about 🐸TTS. It is

## Errors with a pre-trained model. How can I resolve this?
- Make sure you use the right commit version of 🐸TTS. Each pre-trained model has its corresponding version that needs to be used. It is defined on the model table.
-- If it is still problematic, post your problem on [Discussions](https://github.com/coqui-ai/TTS/discussions). Please give as many details as possible (error message, your TTS version, your TTS model and config.json etc.)
+- If it is still problematic, post your problem on [Discussions](https://github.com/eginhard/coqui-tts/discussions). Please give as many details as possible (error message, your TTS version, your TTS model and config.json etc.)
- If you feel like it's a bug to be fixed, then prefer Github issues with the same level of scrutiny.

## What are the requirements of a good 🐸TTS dataset?
@@ -16,7 +16,7 @@ We tried to collect common issues and questions we receive about 🐸TTS. It is

- If you need faster models, consider SpeedySpeech, GlowTTS or AlignTTS. Keep in mind that SpeedySpeech requires a pre-trained Tacotron or Tacotron2 model to compute text-to-speech alignments.

## How can I train my own `tts` model?
-0. Check your dataset with notebooks in [dataset_analysis](https://github.com/coqui-ai/TTS/tree/master/notebooks/dataset_analysis) folder. Use [this notebook](https://github.com/coqui-ai/TTS/blob/master/notebooks/dataset_analysis/CheckSpectrograms.ipynb) to find the right audio processing parameters. A better set of parameters results in a better audio synthesis.
+0. Check your dataset with notebooks in [dataset_analysis](https://github.com/eginhard/coqui-tts/tree/main/notebooks/dataset_analysis) folder. Use [this notebook](https://github.com/eginhard/coqui-tts/blob/main/notebooks/dataset_analysis/CheckSpectrograms.ipynb) to find the right audio processing parameters. A better set of parameters results in a better audio synthesis.

1. Write your own dataset `formatter` in `datasets/formatters.py` or format your dataset as one of the supported datasets, like LJSpeech.
A `formatter` parses the metadata file and converts it to a list of training samples.
After the installation, 🐸TTS provides a CLI interface for synthesizing speech using pre-trained models. You can either use your own model or the release models under 🐸TTS.
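As a rough illustration of that CLI, synthesis with a released model might look like the following; the model name is only an example, and the available flags and models should be confirmed with `tts --help` and `tts --list_models`:

```bash
# Example CLI synthesis call; the model name is illustrative, not prescriptive.
tts --list_models
tts --text "Hello from 🐸TTS." \
    --model_name "tts_models/en/ljspeech/tacotron2-DDC" \
    --out_path output.wav
```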