Commit e52ae3f

Update docstrings
1 parent b2e8805 commit e52ae3f

4 files changed: +22 -3

Diff for: src/distilabel/llms/huggingface/inference_endpoints.py

+6

@@ -63,6 +63,12 @@ class InferenceEndpointsLLM(AsyncLLM, MagpieChatTemplateMixin):
         tokenizer_id: the tokenizer ID to use for the LLM as available in the Hugging Face Hub.
             Defaults to `None`, but defining one is recommended to properly format the prompt.
         model_display_name: the model display name to use for the LLM. Defaults to `None`.
+        use_magpie_template: a flag used to enable/disable applying the Magpie pre-query
+            template. Defaults to `False`.
+        magpie_pre_query_template: the pre-query template to be applied to the prompt or
+            sent to the LLM to generate an instruction or a follow-up user message. Valid
+            values are "llama3", "qwen2", or a custom pre-query template. Defaults
+            to `None`.

     Icon:
         `:hugging:`
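For context, a minimal sketch of how the two new attributes are meant to be set. The model and tokenizer IDs are illustrative, and this assumes the class is importable from `distilabel.llms` as elsewhere in the library:

```python
from distilabel.llms import InferenceEndpointsLLM

# Illustrative IDs; any model served behind a Hugging Face Inference Endpoint works.
llm = InferenceEndpointsLLM(
    model_id="meta-llama/Meta-Llama-3-8B-Instruct",
    tokenizer_id="meta-llama/Meta-Llama-3-8B-Instruct",
    # The two attributes documented by this commit:
    use_magpie_template=True,            # apply the Magpie pre-query template
    magpie_pre_query_template="llama3",  # or "qwen2", or a custom template string
)
```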

Diff for: src/distilabel/llms/huggingface/transformers.py

+6

@@ -65,6 +65,12 @@ class TransformersLLM(LLM, MagpieChatTemplateMixin, CudaDevicePlacementMixin):
             local configuration will be used. Defaults to `None`.
         structured_output: a dictionary containing the structured output configuration or, if more
             fine-grained control is needed, an instance of `OutlinesStructuredOutput`. Defaults to `None`.
+        use_magpie_template: a flag used to enable/disable applying the Magpie pre-query
+            template. Defaults to `False`.
+        magpie_pre_query_template: the pre-query template to be applied to the prompt or
+            sent to the LLM to generate an instruction or a follow-up user message. Valid
+            values are "llama3", "qwen2", or a custom pre-query template. Defaults
+            to `None`.

     Icon:
         `:hugging:`
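The same pair of attributes applies to the locally loaded backend. A hedged sketch, assuming `TransformersLLM` takes the model name via `model` and is loaded with `load()` as distilabel LLMs generally are:

```python
from distilabel.llms import TransformersLLM

llm = TransformersLLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative model name
    use_magpie_template=True,
    magpie_pre_query_template="llama3",
)
llm.load()  # loads the model and tokenizer before generation
```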

Diff for: src/distilabel/llms/mixins/magpie.py

+4 -3

@@ -38,11 +38,12 @@ class MagpieChatTemplateMixin(BaseModel, validate_assignment=True):
     task.

     Attributes:
-        use_magpie_template: a flag used to enable/disable applying the pre-query template.
+        use_magpie_template: a flag used to enable/disable applying the Magpie pre-query
+            template. Defaults to `False`.
         magpie_pre_query_template: the pre-query template to be applied to the prompt or
             sent to the LLM to generate an instruction or a follow-up user message. Valid
-            values are "llama3", "qwen2", ...
-            or a pre-query template.
+            values are "llama3", "qwen2", or a custom pre-query template. Defaults
+            to `None`.

     References:
         - [Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing](https://arxiv.org/abs/2406.08464)
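The mixin is where the behavior actually lives. To picture what its docstring describes: a pre-query template is a string appended after the formatted conversation, leaving the prompt open at a user turn so the model writes the user message itself. A rough sketch of that mechanism; the template strings below are assumptions based on the public Llama 3 and Qwen2 chat formats, not values taken from this commit:

```python
# Hypothetical illustration of the Magpie pre-query mechanism.
PRE_QUERY_TEMPLATES = {
    "llama3": "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n",
    "qwen2": "<|im_start|>user\n",
}

def apply_pre_query_template(prompt: str, template: str) -> str:
    """Leave the prompt open at a user turn so the LLM generates the instruction."""
    # A known name ("llama3", "qwen2") resolves to its template; any other
    # string is treated as a custom pre-query template and appended as-is.
    return prompt + PRE_QUERY_TEMPLATES.get(template, template)
```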

Diff for: src/distilabel/llms/vllm.py

+6

@@ -78,6 +78,12 @@ class vLLM(LLM, MagpieChatTemplateMixin, CudaDevicePlacementMixin):
         _tokenizer: the tokenizer instance used to format the prompt before passing it to
             the `LLM`. This attribute is meant to be used internally and should not be
             accessed directly. It will be set in the `load` method.
+        use_magpie_template: a flag used to enable/disable applying the Magpie pre-query
+            template. Defaults to `False`.
+        magpie_pre_query_template: the pre-query template to be applied to the prompt or
+            sent to the LLM to generate an instruction or a follow-up user message. Valid
+            values are "llama3", "qwen2", or a custom pre-query template. Defaults
+            to `None`.

     References:
         - https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/llm.py
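And the equivalent for the vLLM backend, again as a sketch under the same assumptions (illustrative model name, `model` as the model parameter):

```python
from distilabel.llms import vLLM

llm = vLLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative model name
    use_magpie_template=True,
    magpie_pre_query_template="llama3",
)
```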
