docs/source/serving/distributed_serving.rst
+16 -3 (16 additions, 3 deletions)
@@ -3,7 +3,7 @@
 Distributed Inference and Serving
 =================================

-vLLM supports distributed tensor-parallel inference and serving. Currently, we support `Megatron-LM's tensor parallel algorithm <https://arxiv.org/pdf/1909.08053.pdf>`_. We manage the distributed runtime with either `Ray <https://github.com/ray-project/ray>`_ or Python native multiprocessing. Multiprocessing can be used when deploying on a single node; multi-node inference currently requires Ray.
+vLLM supports distributed tensor-parallel inference and serving. Currently, we support `Megatron-LM's tensor parallel algorithm <https://arxiv.org/pdf/1909.08053.pdf>`_. We also support pipeline parallelism as a beta feature for online serving. We manage the distributed runtime with either `Ray <https://github.com/ray-project/ray>`_ or Python native multiprocessing. Multiprocessing can be used when deploying on a single node; multi-node inference currently requires Ray.

 Multiprocessing will be used by default when not running in a Ray placement group and if there are sufficient GPUs available on the same node for the configured :code:`tensor_parallel_size`; otherwise Ray will be used. This default can be overridden via the :code:`LLM` class :code:`distributed_executor_backend` argument or the :code:`--distributed-executor-backend` API server argument. Set it to :code:`mp` for multiprocessing or :code:`ray` for Ray. Ray does not need to be installed for the multiprocessing case.
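For a concrete illustration of the override described above, a minimal sketch of forcing the multiprocessing backend when starting the API server on a single node (the model and GPU count here are arbitrary placeholders, not taken from the diff):

.. code-block:: console

    $ # Single node with 2 GPUs: use multiprocessing instead of Ray
    $ python -m vllm.entrypoints.openai.api_server \
    $     --model facebook/opt-13b \
    $     --tensor-parallel-size 2 \
    $     --distributed-executor-backend mp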
@@ -21,7 +21,20 @@ To run multi-GPU serving, pass in the :code:`--tensor-parallel-size` argument wh

     $ python -m vllm.entrypoints.openai.api_server \
     $     --model facebook/opt-13b \
     $     --tensor-parallel-size 4
+
+You can additionally specify :code:`--pipeline-parallel-size` to enable pipeline parallelism. For example, to run the API server on 8 GPUs with pipeline parallelism and tensor parallelism:
+
+.. code-block:: console
+
+    $ python -m vllm.entrypoints.openai.api_server \
+    $     --model gpt2 \
+    $     --tensor-parallel-size 4 \
+    $     --pipeline-parallel-size 2 \
+    $     --distributed-executor-backend ray
+
+.. note::
+    Pipeline parallelism is a beta feature. It is currently supported only for online serving with the Ray backend, and only for LLaMA- and GPT-2-style models.

 To scale vLLM beyond a single machine, install and start a `Ray runtime <https://docs.ray.io/en/latest/ray-core/starting-ray.html>`_ via CLI before running vLLM:
@@ -35,7 +48,7 @@ To scale vLLM beyond a single machine, install and start a `Ray runtime <https:/
     $ # On worker nodes
     $ ray start --address=<ray-head-address>

-After that, you can run inference and serving on multiple machines by launching the vLLM process on the head node and setting :code:`tensor_parallel_size` to the total number of GPUs across all machines.
+After that, you can run inference and serving on multiple machines by launching the vLLM process on the head node and setting the product of :code:`tensor_parallel_size` and :code:`pipeline_parallel_size` to the total number of GPUs across all machines.

 .. warning::
     Please make sure you have downloaded the model to all nodes, or that the model is downloaded to a distributed file system that is accessible by all nodes.
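As a worked example of that sizing rule (the node and GPU counts are assumptions for illustration): with two nodes of 8 GPUs each, the 16 total GPUs can be covered by :code:`--tensor-parallel-size 8` multiplied by :code:`--pipeline-parallel-size 2`, launched from the head node roughly like this:

.. code-block:: console

    $ # Head node of a 2-node Ray cluster, 8 GPUs per node (8 x 2 = 16 GPUs total)
    $ python -m vllm.entrypoints.openai.api_server \
    $     --model gpt2 \
    $     --tensor-parallel-size 8 \
    $     --pipeline-parallel-size 2 \
    $     --distributed-executor-backend ray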