Replace fixed number cmake --build -j in sh and docs #10887 #11351

Open · wants to merge 2 commits into base: main
2 changes: 1 addition & 1 deletion .ci/docker/common/install_openssl.sh
@@ -17,7 +17,7 @@ tar xf "${OPENSSL}.tar.gz"
pushd "${OPENSSL}" || true
./config --prefix=/opt/openssl -d "-Wl,--enable-new-dtags,-rpath,$(LIBRPATH)"
# NOTE: openssl install errors out when built with the -j option
make -j6; make install_sw
make -j$((nproc) - 1); make install_sw
Contributor:
I don't think this is valid syntax, is it? You need an additional set of parentheses.

Suggested change
make -j$((nproc) - 1); make install_sw
make -j$(($(nproc) - 1)); make install_sw
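A minimal sketch of the distinction behind this suggestion, assuming bash with coreutils' `nproc` on `PATH` (this demo is not part of the PR):

```shell
#!/usr/bin/env bash
# Inside arithmetic expansion $(( ... )), a bare name such as nproc is a
# shell VARIABLE, not a command. An unset variable evaluates to 0, so this
# form never reads the real core count:
unset nproc
echo "$((nproc))"        # prints 0

# $(nproc) is command substitution: it runs the nproc program first, and the
# outer $(( ... )) then subtracts 1 from the actual core count:
jobs=$(($(nproc) - 1))
echo "building with -j${jobs}"
```

This is why the extra `$( ... )` layer matters: without it, the shell never executes the `nproc` program at all.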

# Link the ssl libraries to the /usr/lib folder.
ln -s /opt/openssl/lib/lib* /usr/lib
popd || true
2 changes: 1 addition & 1 deletion .ci/scripts/build-qnn-sdk.sh
@@ -35,7 +35,7 @@ set_up_aot() {
-DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
-DEXECUTORCH_ENABLE_EVENT_TRACER=ON \
-DPYTHON_EXECUTABLE=python3
cmake --build $PWD --target "PyQnnManagerAdaptor" "PyQnnWrapperAdaptor" -j$(nproc)
cmake --build $PWD --target "PyQnnManagerAdaptor" "PyQnnWrapperAdaptor" -j$(($(nproc) - 1))
# install Python APIs to correct import path
# The filename might vary depending on your Python and host version.
cp -f backends/qualcomm/PyQnnManagerAdaptor.cpython-310-x86_64-linux-gnu.so $EXECUTORCH_ROOT/backends/qualcomm/python
4 changes: 2 additions & 2 deletions .ci/scripts/build_llama_android.sh
@@ -31,7 +31,7 @@ install_executorch_and_backend_lib() {
-DXNNPACK_ENABLE_ARM_BF16=OFF \
-Bcmake-android-out .

cmake --build cmake-android-out -j4 --target install --config Release
cmake --build cmake-android-out -j$(($(nproc) - 1)) --target install --config Release
}

build_llama_runner() {
@@ -48,7 +48,7 @@ build_llama_runner() {
-DCMAKE_BUILD_TYPE=Release \
-Bcmake-android-out/examples/models/llama examples/models/llama

cmake --build cmake-android-out/examples/models/llama -j4 --config Release
cmake --build cmake-android-out/examples/models/llama -j$(($(nproc) - 1)) --config Release
}
install_executorch_and_backend_lib
build_llama_runner
2 changes: 1 addition & 1 deletion .ci/scripts/setup-openvino.sh
@@ -16,7 +16,7 @@ git submodule update --init --recursive
sudo ./install_build_dependencies.sh
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON
make -j$(nproc)
make -j$(($(nproc) - 1))

cd ..
cmake --install build --prefix dist
4 changes: 2 additions & 2 deletions .ci/scripts/test_llama.sh
@@ -158,7 +158,7 @@ cmake_install_executorch_libraries() {
-DEXECUTORCH_BUILD_QNN="$QNN" \
-DQNN_SDK_ROOT="$QNN_SDK_ROOT" \
-Bcmake-out .
cmake --build cmake-out -j9 --target install --config "$CMAKE_BUILD_TYPE"
cmake --build cmake-out -j$(($(nproc) - 1)) --target install --config "$CMAKE_BUILD_TYPE"
Contributor @jathu (Jun 4, 2025):

This script is also used to test locally on macOS, so we cannot just use nproc.

Overall, I would suggest abstracting this logic into a common script and sharing it. For example, create ./tools/common_build.sh

core_count() {
  if [[ "$(uname)" == "Darwin" ]]; then
    num_cores=$(sysctl -n hw.ncpu)
  else
    num_cores=$(nproc)
  fi
  echo $((num_cores - 1))
}
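A minimal sketch of how the suggested helper could be consumed, assuming it lives at `./tools/common_build.sh` as proposed; the call site below is illustrative, not from the PR:

```shell
#!/usr/bin/env bash
# Helper proposed in the review comment: pick the right core-counting
# command per OS and leave one core free for the rest of the system.
core_count() {
  if [[ "$(uname)" == "Darwin" ]]; then
    num_cores=$(sysctl -n hw.ncpu)   # macOS has no nproc by default
  else
    num_cores=$(nproc)               # Linux (coreutils)
  fi
  echo $((num_cores - 1))
}

# Illustrative call site; the build directory is an example, not from the PR.
echo "cmake --build cmake-out -j$(core_count)"
```

Sourcing this one helper from each CI script would replace the many per-script `-j` expressions with a single tested implementation.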

}

cmake_build_llama_runner() {
@@ -173,7 +173,7 @@ cmake_build_llama_runner() {
-DCMAKE_BUILD_TYPE="$CMAKE_BUILD_TYPE" \
-Bcmake-out/${dir} \
${dir}
cmake --build cmake-out/${dir} -j9 --config "$CMAKE_BUILD_TYPE"
cmake --build cmake-out/${dir} -j$(($(nproc) - 1)) --config "$CMAKE_BUILD_TYPE"

}

4 changes: 2 additions & 2 deletions .ci/scripts/test_llama_torchao_lowbit.sh
@@ -36,7 +36,7 @@ cmake -DPYTHON_EXECUTABLE=python \
-DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
-DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
-Bcmake-out .
cmake --build cmake-out -j16 --target install --config Release
cmake --build cmake-out -j$(($(nproc) - 1)) --target install --config Release

# Install llama runner with torchao
cmake -DPYTHON_EXECUTABLE=python \
@@ -49,7 +49,7 @@ cmake -DPYTHON_EXECUTABLE=python \
-DEXECUTORCH_BUILD_TORCHAO=ON \
-Bcmake-out/examples/models/llama \
examples/models/llama
cmake --build cmake-out/examples/models/llama -j16 --config Release
cmake --build cmake-out/examples/models/llama -j$(($(nproc) - 1)) --config Release

# Download stories llama110m artifacts
download_stories_model_artifacts
4 changes: 2 additions & 2 deletions .ci/scripts/test_model.sh
@@ -59,13 +59,13 @@ build_cmake_executor_runner() {
&& cmake -DCMAKE_BUILD_TYPE=Release \
-DEXECUTORCH_BUILD_XNNPACK=ON \
-DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" ..)
cmake --build ${CMAKE_OUTPUT_DIR} -j4
cmake --build ${CMAKE_OUTPUT_DIR} -j$(($(nproc) - 1))
else
cmake -DCMAKE_BUILD_TYPE=Debug \
-DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
-DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" \
-B${CMAKE_OUTPUT_DIR} .
cmake --build ${CMAKE_OUTPUT_DIR} -j4 --config Debug
cmake --build ${CMAKE_OUTPUT_DIR} -j$(($(nproc) - 1)) --config Debug
fi
}

2 changes: 1 addition & 1 deletion .ci/scripts/test_quantized_aot_lib.sh
@@ -23,7 +23,7 @@ build_cmake_quantized_aot_lib() {
-DEXECUTORCH_BUILD_KERNELS_QUANTIZED_AOT=ON \
-DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" ..)

cmake --build ${CMAKE_OUTPUT_DIR} -j4
cmake --build ${CMAKE_OUTPUT_DIR} -j$(($(nproc) - 1))
}

build_cmake_quantized_aot_lib
4 changes: 2 additions & 2 deletions .ci/scripts/utils.sh
@@ -141,7 +141,7 @@ build_executorch_runner_cmake() {
else
CMAKE_JOBS=$(( $(nproc) - 1 ))
fi
cmake --build "${CMAKE_OUTPUT_DIR}" -j "${CMAKE_JOBS}"
cmake --build "${CMAKE_OUTPUT_DIR}" -j "${CMAKE_JOBS}"
}

build_executorch_runner() {
@@ -162,7 +162,7 @@ cmake_install_executorch_lib() {
-DCMAKE_BUILD_TYPE=Release \
-DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" \
-Bcmake-out .
cmake --build cmake-out -j9 --target install --config Release
cmake --build cmake-out -j$(($(nproc) - 1)) --target install --config Release
}

download_stories_model_artifacts() {
2 changes: 1 addition & 1 deletion .github/workflows/trunk.yml
@@ -561,7 +561,7 @@ jobs:
-DEXECUTORCH_BUILD_DEVTOOLS=ON \
-DEXECUTORCH_ENABLE_EVENT_TRACER=ON \
-Bcmake-out .
cmake --build cmake-out -j16 --target install --config Release
cmake --build cmake-out -j$(($(sysctl -n hw.ncpu) - 1)) --target install --config Release
echo "::endgroup::"

echo "::group::Set up Hugging Face"
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -30,7 +30,7 @@
# building, and tends to speed up the build significantly. Leaving one core
# free, i.e. "core count - 1", works well as the `-j` value.
# ~~~
# cmake --build cmake-out -j9
# cmake --build cmake-out -j$(($(nproc) - 1))
# ~~~
#
# ### Editing this file ###
4 changes: 2 additions & 2 deletions backends/apple/coreml/scripts/build_tests.sh
@@ -36,7 +36,7 @@ cmake "$EXECUTORCH_ROOT_PATH" -B"$CMAKE_EXECUTORCH_BUILD_DIR_PATH" \
-DEXECUTORCH_BUILD_XNNPACK=OFF \
-DEXECUTORCH_BUILD_GFLAGS=OFF

cmake --build "$CMAKE_EXECUTORCH_BUILD_DIR_PATH" -j9 -t executorch
cmake --build "$CMAKE_EXECUTORCH_BUILD_DIR_PATH" -j$(($(nproc) - 1)) -t executorch

# Build protobuf
echo "ExecuTorch: Building libprotobuf-lite"
@@ -52,7 +52,7 @@ cmake "$PROTOBUF_DIR_PATH/cmake" -B"$CMAKE_PROTOBUF_BUILD_DIR_PATH" \
-DCMAKE_MACOSX_BUNDLE=OFF \
-DCMAKE_CXX_STANDARD=17

cmake --build "$CMAKE_PROTOBUF_BUILD_DIR_PATH" -j9 -t libprotobuf-lite
cmake --build "$CMAKE_PROTOBUF_BUILD_DIR_PATH" -j$(($(nproc) - 1)) -t libprotobuf-lite

# Copy required libraries
echo "ExecuTorch: Copying libraries"
2 changes: 1 addition & 1 deletion backends/arm/scripts/build_executor_runner.sh
@@ -150,7 +150,7 @@ cmake \

echo "[${BASH_SOURCE[0]}] Configured CMAKE"

cmake --build ${output_folder}/cmake-out -j$(nproc) -- arm_executor_runner
cmake --build ${output_folder}/cmake-out -j$(($(nproc) - 1)) -- arm_executor_runner

echo "[${BASH_SOURCE[0]}] Generated baremetal elf file:"
find ${output_folder}/cmake-out -name "arm_executor_runner"
2 changes: 1 addition & 1 deletion backends/arm/scripts/build_executorch.sh
@@ -136,7 +136,7 @@ cmake \

echo "[$(basename $0)] Configured CMAKE"

cmake --build ${et_build_dir} -j$(nproc) --target install --config ${build_type} --
cmake --build ${et_build_dir} -j$(($(nproc) - 1)) --target install --config ${build_type} --

set +x

2 changes: 1 addition & 1 deletion backends/arm/scripts/build_portable_kernels.sh
@@ -75,7 +75,7 @@ cmake \
-B"${et_build_dir}/examples/arm" \
"${et_root_dir}/examples/arm"

cmake --build "${et_build_dir}/examples/arm" -j$(nproc) --config ${build_type} --
cmake --build "${et_build_dir}/examples/arm" -j$(($(nproc) - 1)) --config ${build_type} --

set +x

4 changes: 2 additions & 2 deletions backends/cadence/build_cadence_fusionG3.sh
@@ -54,7 +54,7 @@ if $STEPWISE_BUILD; then
-DHAVE_FNMATCH_H=OFF \
-Bcmake-out/backends/cadence \
backends/cadence
cmake --build cmake-out/backends/cadence -j8
cmake --build cmake-out/backends/cadence -j$(($(nproc) - 1))
else
echo "Building Cadence toolchain with ExecuTorch packages"
cmake_prefix_path="${PWD}/cmake-out/lib/cmake/ExecuTorch;${PWD}/cmake-out/third-party/gflags"
@@ -78,7 +78,7 @@ else
-DEXECUTORCH_FUSION_G3_OPT=ON \
-DHAVE_FNMATCH_H=OFF \
-Bcmake-out
cmake --build cmake-out --target install --config Release -j8
cmake --build cmake-out --target install --config Release -j$(($(nproc) - 1))
fi

echo "Run simple model to verify cmake build"
4 changes: 2 additions & 2 deletions backends/cadence/build_cadence_hifi4.sh
@@ -53,7 +53,7 @@ if $STEPWISE_BUILD; then
-DHAVE_FNMATCH_H=OFF \
-Bcmake-out/backends/cadence \
backends/cadence
cmake --build cmake-out/backends/cadence -j8
cmake --build cmake-out/backends/cadence -j$(($(nproc) - 1))
else
echo "Building Cadence toolchain with ExecuTorch packages"
cmake_prefix_path="${PWD}/cmake-out/lib/cmake/ExecuTorch;${PWD}/cmake-out/third-party/gflags"
@@ -76,7 +76,7 @@ else
-DEXECUTORCH_NNLIB_OPT=ON \
-DHAVE_FNMATCH_H=OFF \
-Bcmake-out
cmake --build cmake-out --target install --config Release -j8
cmake --build cmake-out --target install --config Release -j$(($(nproc) - 1))
fi

echo "Run simple model to verify cmake build"
4 changes: 2 additions & 2 deletions backends/cadence/build_cadence_runner.sh
@@ -27,7 +27,7 @@ main() {
-DEXECUTORCH_ENABLE_EVENT_TRACER=ON \
-DEXECUTORCH_ENABLE_LOGGING=ON \
-Bcmake-out .
cmake --build cmake-out --target install --config Release -j16
cmake --build cmake-out --target install --config Release -j$(($(nproc) - 1))

local example_dir=backends/cadence
local build_dir="cmake-out/${example_dir}"
@@ -39,7 +39,7 @@
-DEXECUTORCH_ENABLE_LOGGING=ON \
-B"${build_dir}" \
"${example_dir}"
cmake --build "${build_dir}" --config Release -j16
cmake --build "${build_dir}" --config Release -j$(($(nproc) - 1))

local runner="${PWD}/${build_dir}/cadence_runner"
if [[ ! -f "${runner}" ]]; then
4 changes: 2 additions & 2 deletions backends/cadence/runtime/executor_main.sh
@@ -24,7 +24,7 @@ cmake_install_executorch_devtools_lib() {
-DEXECUTORCH_ENABLE_EVENT_TRACER=ON \
-DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" \
-Bcmake-out .
cmake --build cmake-out -j9 --target install --config Release
cmake --build cmake-out -j$(($(nproc) - 1)) --target install --config Release
}

test_cmake_devtools_example_runner() {
@@ -40,7 +40,7 @@ test_cmake_devtools_example_runner() {
${example_dir}

echo "Building ${example_dir}"
cmake --build ${build_dir} -j9 --config Release
cmake --build ${build_dir} -j$(($(nproc) - 1)) --config Release

echo 'Running devtools/example_runner'
${build_dir}/example_runner --bundled_program_path="./CadenceDemoModel.bpte"
2 changes: 1 addition & 1 deletion backends/mediatek/scripts/mtk_build.sh
@@ -39,7 +39,7 @@ cmake -DBUCK2="$BUCK_PATH" \

# Build the project
cd ..
cmake --build cmake-android-out -j4
cmake --build cmake-android-out -j$(($(nproc) - 1))

# Switch back to the original directory
cd - > /dev/null
4 changes: 2 additions & 2 deletions backends/vulkan/README.md
@@ -177,7 +177,7 @@ Delegate:
-DEXECUTORCH_BUILD_VULKAN=ON \
-DPYTHON_EXECUTABLE=python \
-Bcmake-android-out && \
cmake --build cmake-android-out -j16 --target install)
cmake --build cmake-android-out -j$(($(nproc) - 1)) --target install)
```

### Run the Vulkan model on device
@@ -193,7 +193,7 @@ GPU!

```shell
# Build a model runner binary linked with the Vulkan delegate libs
cmake --build cmake-android-out --target vulkan_executor_runner -j32
cmake --build cmake-android-out --target vulkan_executor_runner -j$(($(nproc) - 1))

# Push model to device
adb push vk_add.pte /data/local/tmp/vk_add.pte
4 changes: 2 additions & 2 deletions backends/vulkan/docs/android_demo.md
@@ -94,7 +94,7 @@ binary using the Android NDK toolchain.
-DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
-DPYTHON_EXECUTABLE=python \
-Bcmake-android-out && \
cmake --build cmake-android-out -j16 --target install)
cmake --build cmake-android-out -j$(($(nproc) - 1)) --target install)

# Build LLaMA Runner library
(rm -rf cmake-android-out/examples/models/llama && \
@@ -106,7 +106,7 @@ binary using the Android NDK toolchain.
-DCMAKE_INSTALL_PREFIX=cmake-android-out \
-DPYTHON_EXECUTABLE=python \
-Bcmake-android-out/examples/models/llama && \
cmake --build cmake-android-out/examples/models/llama -j16)
cmake --build cmake-android-out/examples/models/llama -j$(($(nproc) - 1)))
```

Finally, push and run the llama runner binary on your Android device. Note that
2 changes: 1 addition & 1 deletion backends/xnnpack/README.md
@@ -116,7 +116,7 @@ cmake \
Then you can build the runtime components with

```bash
cmake --build cmake-out -j9 --target install --config Release
cmake --build cmake-out -j$(($(nproc) - 1)) --target install --config Release
```

Now you should be able to find the executable built at `./cmake-out/executor_runner`. You can run it with the model you generated as such
2 changes: 1 addition & 1 deletion docs/source/backends-cadence.md
@@ -186,7 +186,7 @@ cmake -DCMAKE_BUILD_TYPE=Debug \
-Bcmake-out/examples/cadence \
examples/cadence

cmake --build cmake-out/examples/cadence -j8 -t cadence_executorch_example
cmake --build cmake-out/examples/cadence -j$(($(nproc) - 1)) -t cadence_executorch_example
```

After successfully running the above step, you should see two binary files in the CMake output directory.
4 changes: 2 additions & 2 deletions docs/source/backends-vulkan.md
@@ -177,7 +177,7 @@ Delegate:
-DEXECUTORCH_BUILD_VULKAN=ON \
-DPYTHON_EXECUTABLE=python \
-Bcmake-android-out && \
cmake --build cmake-android-out -j16 --target install)
cmake --build cmake-android-out -j$(($(nproc) - 1)) --target install)
```

### Run the Vulkan model on device
@@ -193,7 +193,7 @@ GPU!

```shell
# Build a model runner binary linked with the Vulkan delegate libs
cmake --build cmake-android-out --target vulkan_executor_runner -j32
cmake --build cmake-android-out --target vulkan_executor_runner -j$(($(nproc) - 1))

# Push model to device
adb push vk_add.pte /data/local/tmp/vk_add.pte
@@ -61,7 +61,7 @@ llama3/Meta-Llama-3-8B-Instruct/tokenizer.model -p <path_to_params.json> -c <pat
-DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
-Bcmake-android-out .

cmake --build cmake-android-out -j16 --target install --config Release
cmake --build cmake-android-out -j$(($(nproc) - 1)) --target install --config Release
```
2. Build llama runner for android
```bash
@@ -76,7 +76,7 @@ llama3/Meta-Llama-3-8B-Instruct/tokenizer.model -p <path_to_params.json> -c <pat
-DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
-Bcmake-android-out/examples/models/llama examples/models/llama

cmake --build cmake-android-out/examples/models/llama -j16 --config Release
cmake --build cmake-android-out/examples/models/llama -j$(($(nproc) - 1)) --config Release
```
3. Run on Android via adb shell
*Pre-requisite*: Make sure you enable USB debugging via developer options on your phone
6 changes: 3 additions & 3 deletions docs/source/llm/getting-started.md
@@ -396,7 +396,7 @@ At this point, the working directory should contain the following files:
If all of these are present, you can now build and run:
```bash
(mkdir cmake-out && cd cmake-out && cmake ..)
cmake --build cmake-out -j10
cmake --build cmake-out -j$(($(nproc) - 1))
./cmake-out/nanogpt_runner
```

@@ -563,7 +563,7 @@ It will generate `nanogpt.pte`, under the same working directory.
Then we can build and run the model by:
```bash
(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)
cmake --build cmake-out -j10
cmake --build cmake-out -j$(($(nproc) - 1))
./cmake-out/nanogpt_runner
```

@@ -827,7 +827,7 @@ target_compile_options(portable_ops_lib PUBLIC -DET_EVENT_TRACER_ENABLED)
Build and run the runner; you will see a file named "etdump.etdp" generated. (Note that this time we build in release mode to get around a flatccrt build limitation.)
```bash
(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake -DCMAKE_BUILD_TYPE=Release ..)
cmake --build cmake-out -j10
cmake --build cmake-out -j$(($(nproc) - 1))
./cmake-out/nanogpt_runner
```

2 changes: 1 addition & 1 deletion docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -165,7 +165,7 @@ cmake \
Then you can build the runtime components with

```bash
cmake --build cmake-out -j9 --target install --config Release
cmake --build cmake-out -j$(($(nproc) - 1)) --target install --config Release
```

Now you should be able to find the executable built at `./cmake-out/executor_runner`. You can run it with the model you generated as such