
Python 3.9.6 - works #20

Open
rt974 opened this issue Mar 19, 2025 · 1 comment

rt974 commented Mar 19, 2025

ChatGPT looked at the repo and came up with a one-click installer.

==========

# NotaGen one-click install script for Windows 11, using a Python 3.9 venv and CUDA 11.8 support if available.
#
# Make sure you have Git and Python 3.9 installed.
# Run this script from a PowerShell prompt. You may need to adjust your execution policy first.

# Step 1: Clone the repository if it's not already present.
if (-not (Test-Path -Path "./NotaGen")) {
    Write-Output "Cloning the NotaGen repository..."
    git clone https://github.com/ElectricAlexis/NotaGen.git
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Failed to clone the repository. Please ensure Git is installed and in your PATH."
        exit 1
    }
} else {
    Write-Output "NotaGen repository already exists. Skipping clone."
}

# Change directory to the repository.
Set-Location ./NotaGen

# Step 2: Create a virtual environment using Python 3.9.
Write-Output "Creating a Python 3.9 virtual environment..."
python -m venv venv
if ($LASTEXITCODE -ne 0) {
    Write-Error "Failed to create virtual environment. Please ensure Python 3.9 is installed."
    exit 1
}

# Step 3: Activate the virtual environment.
Write-Output "Activating virtual environment..."
. .\venv\Scripts\Activate.ps1

# Step 4: Upgrade pip.
Write-Output "Upgrading pip..."
python -m pip install --upgrade pip

# Step 5: Check for CUDA and install the appropriate PyTorch packages.
if ($env:CUDA_PATH) {
    Write-Output "CUDA installation detected. Installing PyTorch with CUDA 11.8 support..."
    pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu118
} else {
    Write-Output "CUDA not detected. Installing CPU version of PyTorch..."
    pip install torch torchvision torchaudio
}

# Step 6: Install additional dependencies if requirements.txt exists.
if (Test-Path "requirements.txt") {
    Write-Output "Installing additional dependencies from requirements.txt..."
    pip install -r requirements.txt
} else {
    Write-Output "No requirements.txt file found. Skipping additional dependency installation."
}

Write-Output "Installation complete! To activate the virtual environment in future sessions, run:"
Write-Output "    .\venv\Scripts\Activate.ps1"

Then it crashed on the gradio version and didn't even install the other requirements such as transformers; I had to pip them manually.

Downloaded the X model weights and ran the gradio demo.

============
File "X:\PRODUCTION\2025\notagen\NotaGen\gradio\utils.py", line 397, in generate
token = temperature_sampling(prob, temperature=temperature) # int
File "X:\PRODUCTION\2025\notagen\NotaGen\venv\lib\site-packages\samplings_init_.py", line 360, in temperature_sampling
return random_sampling(probs, seed=seed)
File "X:\PRODUCTION\2025\notagen\NotaGen\venv\lib\site-packages\samplings_init_.py", line 113, in random_sampling
return np.random.choice(range(len(probs)), p=probs)
File "numpy\random\mtrand.pyx", line 998,

Threw that at ChatGPT, because developers aren't required anymore, obviously.

After 2 minutes of it trying to patch whatever sampling code this is (who cares anyway), it came up with this:

====
while True:
    prob = self.char_level_decoder.generate(encoded_patches[0][-1], tokens).cpu().detach().numpy()  # [128]
    prob = top_k_sampling(prob, top_k=top_k, return_probs=True)  # [128]
    prob = top_p_sampling(prob, top_p=top_p, return_probs=True)  # [128]

    # Remove zero probabilities
    valid_indices = np.nonzero(prob)[0]
    valid_probs = prob[valid_indices]

    # Ensure probabilities sum to 1
    valid_probs = valid_probs / np.sum(valid_probs)

    # Sample only from valid indices
    chosen_index = np.random.choice(valid_indices, p=valid_probs)
    token = int(chosen_index)  # Ensure it's an integer

    # token = temperature_sampling(prob, temperature=temperature)  # int

    char = chr(token)
    generated_patch.append(token)

    if len(tokens) >= PATCH_SIZE:  # or token == self.eos_token_id:
        break
    else:
        tokens = torch.cat((tokens, torch.tensor([token], device=self.device)), dim=0)

return generated_patch

====

This code did the trick; no idea what's going on, honestly.
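
For anyone as lost as I am, here's a minimal standalone sketch of what I think was happening (my guess from the traceback, not an official explanation): after top-k/top-p filtering the probability vector can end up not summing to 1, np.random.choice then refuses it, and renormalizing over the nonzero entries (which is what the patch does) satisfies its check.

import numpy as np

# Toy probability vector after top-k/top-p style filtering: most entries zeroed out,
# and the surviving mass no longer sums to 1 (here 0.9).
prob = np.array([0.0, 0.45, 0.0, 0.45, 0.0])

# Roughly the call that blew up inside samplings.random_sampling:
# np.random.choice(range(len(prob)), p=prob)  # ValueError: probabilities do not sum to 1

# The patch above: keep only the nonzero entries and renormalize before sampling.
valid_indices = np.nonzero(prob)[0]
valid_probs = prob[valid_indices] / prob[valid_indices].sum()
token = int(np.random.choice(valid_indices, p=valid_probs))
print(token)  # 1 or 3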

Restarted gradio for the nth time, smashed the GENERATE button, and it output some weird strings in the tiny box; resisted the urge to ragequit.

No idea what ABC is, but I remember playing with MIDI players 30 years ago, when the internet didn't exist.

So I thought "maybe there is such a thing as an ABC player", googled "abc player", and found this: https://abc.rectanglered.com/

Pasted the notation into their input box, scrolled for like 45 minutes to the bottom of the page, and pressed the play button. Almost lost my eardrums because they don't bother with volume.
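
(Side note: if anyone wants local playback instead of a web player, here's a minimal sketch that should work, assuming music21's ABC importer can read NotaGen's output; the tune below is a dummy I typed, not actual model output.)

from music21 import converter  # pip install music21

# A tiny hand-written ABC tune, just to show the shape of the notation;
# paste NotaGen's real output in place of this string.
abc_text = """X:1
T:Example
M:4/4
L:1/8
K:C
CDEF GABc | c2 G2 E2 C2 |]
"""

score = converter.parse(abc_text, format='abc')
score.write('midi', fp='example.mid')  # open example.mid in any MIDI player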

But one thing is sure: the stuff works, some epic Beethoven music is happening, and ChatGPT yoloed its way out of this like a pro.

And I want to express my deepest regrets for not using the proper Python version.

@ElectricAlexis (Owner) commented

Hi, thank you for your feedback! We'll consider refining the code in the samplings package. 😊

About the gradio demo, we are sorry for the inconvenience with the past local demo... We recently developed an online demo on Hugging Face Spaces. Maybe you could give it a try. 👍 I guess the preview of audio and PDF scores will be easier LoooL
