Local Environment Setup Instructions

This project depends on uv (a Python package manager), pnpm (a Node.js package manager, used here mainly for Tailwind CSS), and ollama (for running LLMs locally). The setup steps are as follows.

1. Installing Dependencies

Windows users are encouraged to install scoop for package management (although downloading the three tools directly from their official websites also works). Users of other operating systems should install the required software with their platform's package manager (Homebrew, apt, etc.); a Homebrew example is sketched after the Windows command below. Note that if installing pnpm does not pull in Node.js automatically, you will need to install Node.js yourself.

Windows installation command:

scoop install uv ollama-full nodejs-lts pnpm
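
For macOS users with Homebrew, a roughly equivalent command is sketched below; the formula names are an assumption based on the upstream projects, so verify them with brew search before running:

brew install uv pnpm ollama node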

2. Local Project Environment

Use pnpm to install the frontend dependencies (mainly the Tailwind CSS toolchain):

pnpm i

Pull and run the deepseek-r1:7b model with ollama. To ensure the project works as intended, pull this exact model; otherwise you will need to update the model name wherever it is specified in the project:

ollama run deepseek-r1:7b
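
As a rough illustration of where the model name would need to change, code that talks to the local model through the ollama Python client generally follows the pattern sketched below. The helper name, prompt handling, and file location here are hypothetical and not taken from this project; only the client call itself is the standard ollama API:

# Hypothetical helper, not the project's actual code; it only shows
# where the "deepseek-r1:7b" model name would appear.
import ollama  # official ollama Python client

MODEL_NAME = "deepseek-r1:7b"  # must match the model pulled above

def ask_llm(prompt: str) -> str:
    # Send a single-turn chat request to the locally running ollama server.
    response = ollama.chat(
        model=MODEL_NAME,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]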

Use uv to synchronize the local Python environment, which will automatically pull Django dependencies:

uv sync

To keep the repository clean, the SQLite database file used by this project is not committed to version control. After completing the configuration above, you therefore need to recreate the local database tables:

python manage.py migrate
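
If the virtual environment created by uv is not activated in your shell, the same command can be run through uv itself (a general uv feature rather than anything specific to this project):

uv run python manage.py migrate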

3. Running the Project

If you have modified Tailwind CSS classes in the frontend templates, first run this command to regenerate the static CSS file:

pnpm run build:css
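
A build:css script like this one usually just wraps the Tailwind CLI; the input and output paths below are hypothetical placeholders, so check package.json for the project's actual script definition:

pnpm exec tailwindcss -i ./static/src/input.css -o ./static/css/output.css --minify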

Start the Django development server:

python manage.py runserver

After this step, you should be able to view the site at http://127.0.0.1:8000/events/. Note that the project is served under the /events/ path rather than at the site root.

Run ollama to start the LLM. Once it is running, you can exit the interactive command-line session; the model will continue running in the background:

ollama run deepseek-r1:7b

When you are done with the model, stop it with:

ollama stop deepseek-r1:7b
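
To check which models are currently loaded before stopping one, you can list them:

ollama ps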

About

This is the repository for our project UniEcho, proposed at the Tri-Co Hackathon.
