llama-cpp-python==0.2.39 install errors on AWS Amazon Linux 2023 AMI #1239
murthychetty started this conversation in General · Replies: 0 comments
Hi, I'm trying to install llama-cpp-python==0.2.39 on AWS Linux and getting the error below. nvidia-smi shows 1 GPU with 24 GB of memory as expected, but the llama-cpp-python build errors out. Am I missing anything here? Requesting your help.
AWS Instance: g5.2xlarge
GPU command:
nvidia-smi
Command:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.2.39 --no-cache-dir --verbose
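Since the log below reports "Could not find nvcc", I suspect the CUDA toolkit may not be on PATH. A quick check like this might confirm it (the /usr/local/cuda path is only my guess for where this AMI installs the toolkit):
which nvcc || echo "nvcc not on PATH"
ls /usr/local/cuda*/bin/nvcc 2>/dev/null || echo "no toolkit under /usr/local/cuda*"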
============ pip install error log ============================
loading initial cache file /tmp/tmpzsk4a8v8/build/CMakeInit.txt
-- The C compiler identification is GNU 7.3.1
-- The CXX compiler identification is GNU 7.3.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.40.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Could not find nvcc, please set CUDAToolkit_ROOT.
CMake Warning at vendor/llama.cpp/CMakeLists.txt:381 (message):
cuBLAS not found
-- CUDA host compiler is GNU
CMake Error at vendor/llama.cpp/CMakeLists.txt:784 (get_flags):
get_flags Function invoked with incorrect arguments for function named:
get_flags
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Warning (dev) at CMakeLists.txt:21 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:30 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring incomplete, errors occurred!
*** CMake configuration failed
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /home/ec2-user/venv/bin/python3.10 /home/ec2-user/venv/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpvbpz6dj6
cwd: /tmp/pip-install-umf4nqd7/llama-cpp-python_1ee88e3643614ef4ac38ea1d05ba609b
Building wheel for llama-cpp-python (pyproject.toml) ... error
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
WARNING: You are using pip version 22.0.4; however, version 24.0 is available.
You should consider upgrading via the '/home/ec2-user/venv/bin/python3.10 -m pip install --upgrade pip' command.
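Based on the "Could not find nvcc, please set CUDAToolkit_ROOT" line above, I'm guessing the configure step simply can't see the CUDA toolkit. Would explicitly pointing CMake at it be the right fix? A sketch of what I have in mind, assuming the toolkit is installed under /usr/local/cuda (that path may differ on this AMI):
export CUDA_HOME=/usr/local/cuda                           # assumed install location
export PATH="$CUDA_HOME/bin:$PATH"                         # so CMake can find nvcc
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH" # so CUDA libs resolve at runtime
CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCUDAToolkit_ROOT=$CUDA_HOME" FORCE_CMAKE=1 pip install llama-cpp-python==0.2.39 --no-cache-dir --verbose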