diff --git a/README.md b/README.md
index 60913c6..ff178ed 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ Domain-specific embeddings can significantly improve the quality of vector repre
 
 ## Contents
 
-- `sentence-transformer/`: This directory contains a Jupyter notebook demonstrating how to fine-tune a [sentence-transfomer](https://www.sbert.net/) embedding model using the [Multiple Negatives Ranking Loss function](https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) which is recommended when in your training data you only have positive pairs, for example, only pairs of similar texts like pairs of paraphrases, pairs of duplicate questions, pairs of (query, response), or pairs of (source_language, target_language).
+- `sentence-transformer/multiple-negatives-ranking-loss/`: This directory contains a Jupyter notebook demonstrating how to fine-tune a [sentence-transformer](https://www.sbert.net/) embedding model using the [Multiple Negatives Ranking Loss function](https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss), which is recommended when your training data contains only positive pairs, for example pairs of paraphrases, pairs of duplicate questions, (query, response) pairs, or (source_language, target_language) pairs. We use the Multiple Negatives Ranking Loss function because our training data, the [Bedrock FAQ](https://aws.amazon.com/bedrock/faqs/), consists of question-answer pairs. The code in this directory is used in the AWS blog post "Improve RAG accuracy with fine-tuned embedding models on SageMaker".
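
Below is a minimal sketch of the fine-tuning setup the notebook describes, using the `sentence-transformers` library's `MultipleNegativesRankingLoss`. The base model, the placeholder question-answer pairs, the hyperparameters, and the output path are illustrative assumptions, not the notebook's actual values:

```python
# Minimal sketch: fine-tuning an embedding model with MultipleNegativesRankingLoss.
# Base model, example pairs, hyperparameters, and save path are assumptions.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Base model to fine-tune (assumed; the notebook may use a different one).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Positive (question, answer) pairs, e.g. drawn from the Bedrock FAQ.
# These placeholder pairs stand in for the real training set.
train_examples = [
    InputExample(texts=["What is Amazon Bedrock?", "..."]),
    InputExample(texts=["How do I get started with Amazon Bedrock?", "..."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# MultipleNegativesRankingLoss treats every other answer in the batch as a
# negative for a given question, so only positive pairs are required.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
model.save("finetuned-embedding-model")  # output path is an assumption
```

Larger batch sizes generally help this loss, since each batch contributes more in-batch negatives.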