
Langchain RAG Project

This repository provides an example of implementing Retrieval-Augmented Generation (RAG) using LangChain and Ollama. The RAG approach combines the strengths of an LLM with a retrieval system (in this case, FAISS) to allow the model to access and incorporate external information during the generation process.

The application loads any Markdown documents in the docs/ directory. As an example, this directory contains two documents covering Amazon Bedrock and Knowledge Bases for Amazon Bedrock. Because these products were released within the last six months, their documentation was not included in the training data of most popular LLMs.

Credit to pixegami for the inspiration for this project.

LangChain

LangChain is a framework for developing applications with LLMs. It provides a modular and extensible approach to building applications, allowing you to combine different components (e.g., LLMs, retrieval systems, data sources) in a flexible manner.
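
A minimal sketch of that composability, assuming the langchain_community package and a locally running Ollama model (illustrative only, not code from this repository):

```python
from langchain.prompts import PromptTemplate
from langchain_community.llms import Ollama

# Two independent components: a prompt template and an LLM. Either one can be
# swapped out (different prompt, different model) without touching the other.
prompt = PromptTemplate.from_template("Explain {topic} in one sentence.")
llm = Ollama(model="llama2:13b")

# LangChain composes components into a chain with the | operator.
chain = prompt | llm
print(chain.invoke({"topic": "retrieval-augmented generation"}))
```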

FAISS

FAISS (Facebook AI Similarity Search) is a library for efficient similarity search and clustering of dense vectors. In this example, FAISS serves as the retrieval system that stores the document embeddings and searches them at query time.
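
As a quick illustration (a sketch assuming the langchain_community package and Ollama-served embeddings, not the code in create_database.py), building and querying an in-memory FAISS store looks roughly like this:

```python
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS

# Embed a few example passages and index them in an in-memory FAISS store.
texts = [
    "Amazon Bedrock is a fully managed service for foundation models.",
    "Knowledge Bases for Amazon Bedrock connect models to your own data.",
]
db = FAISS.from_texts(texts, OllamaEmbeddings(model="llama2:13b"))

# Return the passage most similar to the query.
for doc in db.similarity_search("What is Amazon Bedrock?", k=1):
    print(doc.page_content)
```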

Ollama

Ollama is a tool for running large language models locally. Supported models are listed in the Ollama model library.

Getting Started

To get started with this example, follow these steps:

Install Python dependencies:

pip3 install -r requirements.txt

Start Ollama with a model of your choice:

ollama run llama2:13b

NOTE: This model must match the model referenced in the Python code.

Create the FAISS database:

python3 create_database.py

This script reads files from a directory (specified in the script) and creates a FAISS index.
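
The script's exact contents live in the repository; the sketch below shows one plausible shape for this indexing step (load Markdown from docs/, split into chunks, embed, save a FAISS index). Paths, chunk sizes, and import locations are assumptions, not the repository's actual code:

```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load every Markdown file under docs/ and split it into overlapping chunks
# small enough to fit comfortably into the model's context window.
documents = DirectoryLoader("docs/", glob="**/*.md", loader_cls=TextLoader).load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(documents)

# Embed the chunks and persist the FAISS index to disk for the query step.
db = FAISS.from_documents(chunks, OllamaEmbeddings(model="llama2:13b"))
db.save_local("faiss_index")  # index directory name is an assumption
```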

Query the FAISS database:

python3 query_data.py "What is Amazon Bedrock?"

This script takes a query as input, uses LangChain's retrieval layer to pull relevant chunks from the FAISS database, and generates a response using an LLM (specified in the script).
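
As a hedged sketch of what such a query step typically looks like (not the repository's actual query_data.py; the index path and prompt are assumptions):

```python
import sys

from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import FAISS

query = sys.argv[1]
embeddings = OllamaEmbeddings(model="llama2:13b")

# Load the index written by the create step; recent LangChain versions
# require explicitly opting in to pickle deserialization here.
db = FAISS.load_local("faiss_index", embeddings, allow_dangerous_deserialization=True)

# Retrieve the most similar chunks and pass them to the model as context.
context = "\n\n".join(doc.page_content for doc in db.similarity_search(query, k=3))
prompt = f"Answer the question using only this context:\n\n{context}\n\nQuestion: {query}"

print(Ollama(model="llama2:13b").invoke(prompt))
```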

Adding Documents

To add additional documents, just copy them into the docs/ directory. The application can support formats other than Markdown, but you may need to install additional Python packages.
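
For example, adding PDF support might look like the snippet below, which assumes LangChain's PyPDFLoader and the pypdf package (the file name is hypothetical):

```python
from langchain_community.document_loaders import PyPDFLoader

# Requires `pip3 install pypdf`; each page is loaded as its own document.
pages = PyPDFLoader("docs/example.pdf").load()
print(len(pages), "pages loaded")
```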
