
ERROR: Failed building wheel for llama-cpp-python Failed to build llama-cpp-python ERROR: Could not build wheels for llama-cpp-python which use PEP 517 and cannot be installed directly #1947

Open
ArmsNorton opened this issue Feb 26, 2025 · 3 comments

Comments

@ArmsNorton

ArmsNorton commented Feb 26, 2025

ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python which use PEP 517 and cannot be installed directly

It returns this error during installation. All requirements have been downloaded and the main Visual Studio components are installed. CUDA 12.8, Python 3.10, Windows 11.
@dw5189

dw5189 commented Feb 26, 2025

I've just successfully installed it! Here's the information for your reference.

PowerShell:

$env:CUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6"
$env:CMAKE_GENERATOR_PLATFORM="x64"
$env:FORCE_CMAKE="1"
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=89"
pip install llama-cpp-python --no-cache-dir --force-reinstall --upgrade


** Visual Studio 2022 Developer PowerShell v17.10.11
** Copyright (c) 2022 Microsoft Corporation


(base) PS C:\Users\Administrator\source\repos> conda activate CUDA126-py312
(CUDA126-py312) PS C:\Users\Administrator\source\repos> $env:CUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6"
(CUDA126-py312) PS C:\Users\Administrator\source\repos> $env:CMAKE_GENERATOR_PLATFORM="x64"
(CUDA126-py312) PS C:\Users\Administrator\source\repos> $env:FORCE_CMAKE="1"
(CUDA126-py312) PS C:\Users\Administrator\source\repos> $env:CMAKE_ARGS="-DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=89"
(CUDA126-py312) PS C:\Users\Administrator\source\repos> pip install llama-cpp-python --no-cache-dir --force-reinstall --upgrade
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple/, http://mirrors.aliyun.com/pypi/simple/
Collecting llama-cpp-python
Downloading http://mirrors.aliyun.com/pypi/packages/a6/38/7a47b1fb1d83eaddd86ca8ddaf20f141cbc019faf7b425283d8e5ef710e5/llama_cpp_python-0.3.7.tar.gz (66.7 MB)
---------------------------------------- 66.7/66.7 MB 22.7 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/42/6e/55580a538116d16ae7c9aa17d4edd56e83f42126cb1dfe7a684da7925d2c/numpy-2.2.3-cp312-cp312-win_amd64.whl (12.6 MB)
---------------------------------------- 12.6/12.6 MB 23.3 MB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl (45 kB)
Collecting jinja2>=2.11.3 (from llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/bd/0f/2ba5fbcd631e3e88689309dbe978c5769e883e4b84ebfe7da30b43275c5a/jinja2-3.1.5-py3-none-any.whl (134 kB)
Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama-cpp-python)
Downloading http://mirrors.aliyun.com/pypi/packages/c1/80/a61f99dc3a936413c3ee4e1eecac96c0da5ed07ad56fd975f1a9da5bc630/MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl (15 kB)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... done
Created wheel for llama-cpp-python: filename=llama_cpp_python-0.3.7-cp312-cp312-win_amd64.whl size=93613512 sha256=cd98aae040b2dbcc1f4653370900de27455ef65275d08543da81c53c31138a1a
Stored in directory: C:\Users\Administrator\AppData\Local\Temp\pip-ephem-wheel-cache-9usio9a1\wheels\ec\61\fc\cee068315610d77f6a99c0032a74e4c8cb21c1d6e281b45bb5
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, MarkupSafe, diskcache, jinja2, llama-cpp-python
Successfully installed MarkupSafe-3.0.2 diskcache-5.6.3 jinja2-3.1.5 llama-cpp-python-0.3.7 numpy-2.2.3 typing-extensions-4.12.2
(CUDA126-py312) PS C:\Users\Administrator\source\repos>
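Once the wheel builds like this, it is worth confirming it was actually compiled with CUDA support rather than silently falling back to CPU. A minimal sketch, assuming a recent llama-cpp-python (0.2.x/0.3.x) where the low-level binding exposes `llama_supports_gpu_offload`:

```python
def cuda_wheel_installed():
    """Best-effort check that the installed llama-cpp-python wheel was
    built with GPU offload support. Returns None if the package is not
    importable, otherwise True/False."""
    try:
        import llama_cpp
    except ImportError:
        return None
    # llama_supports_gpu_offload() wraps the llama.cpp C API function
    # of the same name; True means the build can offload layers to GPU.
    return bool(llama_cpp.llama_supports_gpu_offload())

if __name__ == "__main__":
    print(cuda_wheel_installed())
```

If this prints `False` after a "successful" install, the CMake CUDA flags were not picked up and the build should be repeated with `--no-cache-dir --force-reinstall`.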

@ArmsNorton
Author

I was working on this project: https://github.com/0Xiaohei0/LocalAIVtuber?tab=readme-ov-file

It still gives an error. PowerShell:

$env:CUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8"
$env:CMAKE_GENERATOR_PLATFORM="x64"
$env:FORCE_CMAKE="1"
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=89"
pip install llama-cpp-python --force-reinstall --no-cache-dir --verbose --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/124

*** CMake build failed
Building wheel for llama-cpp-python (PEP 517) ... error
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python which use PEP 517 and cannot be installed directly
Exception information:
Traceback (most recent call last):
File "C:\Programm\Python\Python310\lib\site-packages\pip\_internal\cli\base_command.py", line 224, in _main
status = self.run(options, args)
File "C:\Programm\Python\Python310\lib\site-packages\pip\_internal\cli\req_command.py", line 180, in wrapper
return func(self, options, args)
File "C:\Programm\Python\Python310\lib\site-packages\pip\_internal\commands\install.py", line 361, in run
raise InstallationError(
pip._internal.exceptions.InstallationError: Could not build wheels for llama-cpp-python which use PEP 517 and cannot be installed directly
2 location(s) to search for versions of pip:
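Before retrying the pip command, it helps to confirm the build toolchain is actually visible from the same shell, since `*** CMake build failed` usually means nvcc, cl.exe, or CMake could not be found. A minimal sketch (a hypothetical helper, not part of any package; the `which` parameter exists only to make it testable):

```python
import os
import shutil

def check_build_env(env=None, which=shutil.which):
    """Report which prerequisites for a CUDA build of llama-cpp-python
    are visible from the current shell."""
    env = os.environ if env is None else env
    return {
        "nvcc_on_path": which("nvcc") is not None,    # CUDA compiler
        "cl_on_path": which("cl") is not None,        # MSVC compiler
        "cmake_on_path": which("cmake") is not None,
        "cuda_root_set": bool(env.get("CUDA_TOOLKIT_ROOT_DIR")),
        "cmake_args": env.get("CMAKE_ARGS", ""),
    }

if __name__ == "__main__":
    for key, value in check_build_env().items():
        print(f"{key}: {value}")
```

Running this from a plain PowerShell (rather than the Visual Studio Developer PowerShell) will typically show `cl_on_path: False`, which is one common cause of the failure above.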

@dw5189

dw5189 commented Feb 27, 2025

Do something like this:

"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\extras\visual_studio_integration\MSBuildExtensions\CUDA 12.6.props"
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\extras\visual_studio_integration\MSBuildExtensions\CUDA 12.6.targets"
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\extras\visual_studio_integration\MSBuildExtensions\CUDA 12.6.xml"
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\extras\visual_studio_integration\MSBuildExtensions\Nvda.Build.CudaTasks.v12.6.dll"

copy to:

"C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.6.props"
"C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.6.targets"
"C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.6.xml"
"C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Microsoft\VC\v170\BuildCustomizations\Nvda.Build.CudaTasks.v12.6.dll"

Install CMake, Visual Studio, and CUDA first.

PowerShell:
$env:CUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6"
$env:CMAKE_GENERATOR_PLATFORM="x64"
$env:FORCE_CMAKE="1"
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=89"
pip install llama-cpp-python --no-cache-dir --force-reinstall --upgrade
