[pt/sft] support use_logits_to_keep & support DeepSeek-R1-0528 #4409

Merged · 4 commits · May 29, 2025
3 changes: 2 additions & 1 deletion docs/source/Instruction/命令行参数.md
@@ -356,10 +356,11 @@ Vera uses the three parameters `target_modules`, `target_regex`, and `modules_to_save`.
- Supported multimodal models reference: https://github.com/modelscope/ms-swift/blob/main/examples/train/packing/qwen2_5_vl.sh
- packing_cache: Specifies the directory for the packing cache. Defaults to `None`, meaning the cache is stored under the path given by the `$MODELSCOPE_CACHE` environment variable. When using packing across nodes, make sure all nodes share the same packing cache path; you can do this by setting the `MODELSCOPE_CACHE` environment variable or by passing `--packing_cache <shared_path>` on the command line.
- 🔥lazy_tokenize: Whether to use lazy tokenization. If set to False, all dataset samples are tokenized before training (for multimodal models, this includes reading images from disk). Defaults to False for LLM training and True for MLLM training, to save memory.
- use_logits_to_keep: Pass `logits_to_keep` in `forward` based on the labels to avoid computing and storing unnecessary logits, reducing GPU memory usage and speeding up training. Defaults to None, which selects automatically.
- acc_strategy: Strategy for computing accuracy during training and validation. Options are `seq`-level and `token`-level accuracy; defaults to `token`.
- max_new_tokens: Generation parameter override. Maximum number of tokens to generate when predict_with_generate=True; defaults to 64.
- temperature: Generation parameter override. The temperature when predict_with_generate=True; defaults to 0.
- optimizer: Custom optimizer name for the plugin; defaults to None.
- optimizer: Custom optimizer name for the plugin; defaults to None. For available optimizers, see [here](https://github.com/modelscope/ms-swift/blob/main/swift/plugin/optimizer.py).
- metric: Custom metric name for the plugin. Defaults to None, i.e. 'acc' when predict_with_generate=False and 'nlg' when predict_with_generate=True.
- eval_use_evalscope: Whether to use evalscope for evaluation during training; this parameter must be set to enable evaluation. For usage details, see the [example](../Instruction/评测.md#训练中评测).
- eval_datasets: Evaluation datasets; multiple datasets can be set, separated by spaces.
3 changes: 2 additions & 1 deletion docs/source/Instruction/支持的模型和数据集.md
@@ -398,12 +398,13 @@
|[deepseek-ai/DeepSeek-Prover-V2-671B](https://modelscope.cn/models/deepseek-ai/DeepSeek-Prover-V2-671B)|deepseek_v2_5|deepseek_v2_5|transformers>=4.39.3|&#x2718;|-|[deepseek-ai/DeepSeek-Prover-V2-671B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B)|
|[deepseek-ai/DeepSeek-R1](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1)|deepseek_r1|deepseek_r1|transformers>=4.39.3|&#x2718;|-|[deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)|
|[deepseek-ai/DeepSeek-R1-Zero](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Zero)|deepseek_r1|deepseek_r1|transformers>=4.39.3|&#x2718;|-|[deepseek-ai/DeepSeek-R1-Zero](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero)|
|[deepseek-ai/DeepSeek-R1-0528](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-0528)|deepseek_r1|deepseek_r1|transformers>=4.39.3|&#x2718;|-|[deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)|
|[cognitivecomputations/DeepSeek-R1-awq](https://modelscope.cn/models/cognitivecomputations/DeepSeek-R1-awq)|deepseek_r1|deepseek_r1|transformers>=4.39.3|&#x2718;|-|[cognitivecomputations/DeepSeek-R1-AWQ](https://huggingface.co/cognitivecomputations/DeepSeek-R1-AWQ)|
|[deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)|
|[deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)|
|[deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)|
|[deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)|
|[iic/QwenLong-L1-32B](https://modelscope.cn/models/iic/QwenLong-L1-32B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2718;|-|[Tongyi-Zhiwen/QwenLong-L1-32B](https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B)|
|[iic/QwenLong-L1-32B](https://modelscope.cn/models/iic/QwenLong-L1-32B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[Tongyi-Zhiwen/QwenLong-L1-32B](https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B)|
|[deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)|deepseek_r1_distill|deepseek_r1|-|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)|
|[deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Llama-70B)|deepseek_r1_distill|deepseek_r1|-|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B)|
|[OpenBuddy/openbuddy-llama-65b-v8-bf16](https://modelscope.cn/models/OpenBuddy/openbuddy-llama-65b-v8-bf16)|openbuddy_llama|openbuddy|-|&#x2714;|-|[OpenBuddy/openbuddy-llama-65b-v8-bf16](https://huggingface.co/OpenBuddy/openbuddy-llama-65b-v8-bf16)|
3 changes: 2 additions & 1 deletion docs/source_en/Instruction/Command-line-parameters.md
@@ -365,10 +365,11 @@ Training arguments include the [base arguments](#base-arguments), [Seq2SeqTraine
- Supported multimodal models reference: https://github.com/modelscope/ms-swift/blob/main/examples/train/packing/qwen2_5_vl.sh
- packing_cache: Specifies the directory for packing cache. The default value is `None`, which means the cache will be stored in the path defined by the environment variable `$MODELSCOPE_CACHE`. When using the packing feature across multiple nodes, ensure that all nodes share the same packing cache directory. You can achieve this by setting the `MODELSCOPE_CACHE` environment variable or by adding the `--packing_cache <shared_path>` argument in the command line.
- 🔥lazy_tokenize: Whether to use lazy tokenization. If set to False, all dataset samples are tokenized before training (for multimodal models, this includes reading images from disk). This parameter defaults to False for LLM training, and True for MLLM training, to save memory.
- use_logits_to_keep: Pass `logits_to_keep` in the `forward` method based on labels to reduce the computation and storage of unnecessary logits, thereby reducing memory usage and accelerating training. The default is `None`, which enables automatic selection.
- acc_strategy: Strategy for calculating accuracy during training and validation. Options are `seq`-level and `token`-level accuracy, with `token` as the default.
- max_new_tokens: Generation parameter override. The maximum number of tokens to generate when `predict_with_generate=True`, defaulting to 64.
- temperature: Generation parameter override. The temperature setting when `predict_with_generate=True`, defaulting to 0.
- optimizer: Custom optimizer name for the plugin, defaults to None.
- optimizer: Custom optimizer name for the plugin, defaults to None. For available optimizers, see [here](https://github.com/modelscope/ms-swift/blob/main/swift/plugin/optimizer.py).
- metric: Custom metric name for the plugin. Defaults to None, with the default set to 'acc' when `predict_with_generate=False` and 'nlg' when `predict_with_generate=True`.
- eval_use_evalscope: Whether to use evalscope for evaluation during training; this parameter must be set to enable evaluation. For details, refer to the [example](../Instruction/Evaluation.md#evaluation-during-training). Default is False.
- eval_datasets: Evaluation datasets; multiple datasets can be set, separated by spaces.
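
To make the `use_logits_to_keep` behaviour concrete, here is a minimal sketch of the auto-detection the trainer performs when the option is left at `None`: `logits_to_keep` is only passed if the base model's `forward` accepts it. The checkpoint name is purely illustrative.

```python
import inspect

import torch
from transformers import AutoModelForCausalLM

# Illustrative checkpoint only; any causal LM works for the shape comparison.
model = AutoModelForCausalLM.from_pretrained('Qwen/Qwen2.5-0.5B-Instruct')
input_ids = torch.randint(0, model.config.vocab_size, (1, 512))

# Mirrors the PR's auto-detection: only pass the argument if forward() supports it.
supports_logits_to_keep = 'logits_to_keep' in inspect.signature(model.forward).parameters

with torch.no_grad():
    if supports_logits_to_keep:
        # Logits are materialized for the last 8 positions only: [1, 8, vocab]
        # instead of [1, 512, vocab], which is where the memory saving comes from.
        outputs = model(input_ids=input_ids, logits_to_keep=8)
    else:
        outputs = model(input_ids=input_ids)

print(outputs.logits.shape)
```

Setting the flag explicitly (True/False) bypasses this detection, as the `compute_loss` change further down shows.
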
3 changes: 2 additions & 1 deletion docs/source_en/Instruction/Supported-models-and-datasets.md
@@ -398,12 +398,13 @@ The table below introduces the models integrated with ms-swift:
|[deepseek-ai/DeepSeek-Prover-V2-671B](https://modelscope.cn/models/deepseek-ai/DeepSeek-Prover-V2-671B)|deepseek_v2_5|deepseek_v2_5|transformers>=4.39.3|&#x2718;|-|[deepseek-ai/DeepSeek-Prover-V2-671B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B)|
|[deepseek-ai/DeepSeek-R1](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1)|deepseek_r1|deepseek_r1|transformers>=4.39.3|&#x2718;|-|[deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)|
|[deepseek-ai/DeepSeek-R1-Zero](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Zero)|deepseek_r1|deepseek_r1|transformers>=4.39.3|&#x2718;|-|[deepseek-ai/DeepSeek-R1-Zero](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero)|
|[deepseek-ai/DeepSeek-R1-0528](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-0528)|deepseek_r1|deepseek_r1|transformers>=4.39.3|&#x2718;|-|[deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)|
|[cognitivecomputations/DeepSeek-R1-awq](https://modelscope.cn/models/cognitivecomputations/DeepSeek-R1-awq)|deepseek_r1|deepseek_r1|transformers>=4.39.3|&#x2718;|-|[cognitivecomputations/DeepSeek-R1-AWQ](https://huggingface.co/cognitivecomputations/DeepSeek-R1-AWQ)|
|[deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)|
|[deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)|
|[deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)|
|[deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)|
|[iic/QwenLong-L1-32B](https://modelscope.cn/models/iic/QwenLong-L1-32B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2718;|-|[Tongyi-Zhiwen/QwenLong-L1-32B](https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B)|
|[iic/QwenLong-L1-32B](https://modelscope.cn/models/iic/QwenLong-L1-32B)|deepseek_r1_distill|deepseek_r1|transformers>=4.37|&#x2714;|-|[Tongyi-Zhiwen/QwenLong-L1-32B](https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B)|
|[deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)|deepseek_r1_distill|deepseek_r1|-|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)|
|[deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Llama-70B)|deepseek_r1_distill|deepseek_r1|-|&#x2714;|-|[deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B)|
|[OpenBuddy/openbuddy-llama-65b-v8-bf16](https://modelscope.cn/models/OpenBuddy/openbuddy-llama-65b-v8-bf16)|openbuddy_llama|openbuddy|-|&#x2714;|-|[OpenBuddy/openbuddy-llama-65b-v8-bf16](https://huggingface.co/OpenBuddy/openbuddy-llama-65b-v8-bf16)|
1 change: 0 additions & 1 deletion swift/llm/argument/train_args.py
@@ -112,7 +112,6 @@ class TrainArguments(SwanlabArguments, TunerArguments, BaseArguments, Seq2SeqTra

    # plugin
    loss_type: Optional[str] = field(default=None, metadata={'help': f'loss_func choices: {list(LOSS_MAPPING.keys())}'})
    optimizer: Optional[str] = None
    metric: Optional[str] = None

    # extra
1 change: 1 addition & 0 deletions swift/llm/model/model/deepseek.py
@@ -251,6 +251,7 @@ def get_model_tokenizer_deepseek_vl2(model_dir: str, *args, **kwargs):
        ModelGroup([
            Model('deepseek-ai/DeepSeek-R1', 'deepseek-ai/DeepSeek-R1'),
            Model('deepseek-ai/DeepSeek-R1-Zero', 'deepseek-ai/DeepSeek-R1-Zero'),
            Model('deepseek-ai/DeepSeek-R1-0528', 'deepseek-ai/DeepSeek-R1-0528')
        ]),
        ModelGroup([
            Model('cognitivecomputations/DeepSeek-R1-awq', 'cognitivecomputations/DeepSeek-R1-AWQ'),
2 changes: 1 addition & 1 deletion swift/llm/model/utils.py
@@ -332,7 +332,7 @@ def get_llm_model(model: torch.nn.Module, model_meta=None):

    llm_prefix = getattr(get_model_arch(model_meta.model_arch), 'language_model', None)
    if llm_prefix:
        llm_model = getattr(model, llm_prefix[0])
        llm_model = deep_getattr(model, llm_prefix[0])
    else:
        llm_model = model

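
The switch from `getattr` to `deep_getattr` matters when `llm_prefix[0]` is a dotted attribute path (for example a multimodal wrapper that exposes its language model as `model.language_model`), which a single `getattr` call cannot resolve. A rough stand-in for the helper (the real one is provided by ms-swift) behaves like this:

```python
from functools import reduce


def deep_getattr(obj, attr_path: str):
    """Rough stand-in for the helper used above: resolve a dotted attribute path."""
    return reduce(getattr, attr_path.split('.'), obj)


class LanguageModel:
    pass


class Backbone:
    def __init__(self):
        self.language_model = LanguageModel()


class MultimodalModel:
    def __init__(self):
        self.model = Backbone()


mm = MultimodalModel()
print(type(deep_getattr(mm, 'model.language_model')).__name__)  # LanguageModel
# getattr(mm, 'model.language_model') would raise AttributeError here.
```
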
9 changes: 1 addition & 8 deletions swift/llm/train/sft.py
@@ -75,13 +75,6 @@ def _get_dataset(self):

        return train_dataset, val_dataset

    def _get_loss_func(self):
        args = self.args
        loss_type = args.loss_type
        if loss_type is None and args.loss_scale != 'default':
            loss_type = 'loss_scale'
        return get_loss_func(loss_type)

    def _get_data_collator(self):
        args = self.args
        template = self.template
@@ -141,7 +134,7 @@ def _get_trainer_kwargs(self):
        return {
            'compute_metrics': compute_metrics,
            'preprocess_logits_for_metrics': preprocess_logits_for_metrics,
            'compute_loss_func': self._get_loss_func()
            'compute_loss_func': get_loss_func(args.loss_type)
        }

    def _save_trainer_state(self, trainer):
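
With `_get_loss_func` removed here, the `loss_scale` fallback it used to apply is now resolved per batch inside `compute_loss` (see the trainers.py hunk further down). A hedged sketch of that resolution logic, assuming only the `get_loss_func` import shown in the diff:

```python
from typing import Any, Callable, Dict, Optional, Tuple

from swift.plugin.loss import get_loss_func  # import path taken from the diff


def resolve_loss_func(compute_loss_func: Optional[Callable],
                      loss_scale: Optional[Any]) -> Tuple[Optional[Callable], Dict[str, Any]]:
    """If a per-token loss_scale is present and no loss function was configured,
    fall back to the 'loss_scale' loss plugin; otherwise keep what was passed in."""
    loss_kwargs: Dict[str, Any] = {}
    if loss_scale is not None:
        loss_kwargs['loss_scale'] = loss_scale
        if compute_loss_func is None:
            compute_loss_func = get_loss_func('loss_scale')
    return compute_loss_func, loss_kwargs
```
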
1 change: 1 addition & 0 deletions swift/trainers/arguments.py
@@ -50,6 +50,7 @@ class TrainArgumentsMixin:
    aligner_lr: Optional[float] = None
    vit_lr: Optional[float] = None
    optimizer: Optional[str] = None
    use_logits_to_keep: Optional[bool] = None

    # torchacc
    metric_warmup_step: Optional[float] = 0
39 changes: 33 additions & 6 deletions swift/trainers/trainers.py
@@ -1,5 +1,6 @@
# Copyright (c) Alibaba, Inc. and its affiliates.
# Part of the implementation is borrowed from huggingface/transformers.
import inspect
import os
from contextlib import contextmanager, nullcontext
from functools import wraps
@@ -15,10 +16,12 @@
from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
from transformers.utils import is_peft_available

from swift.utils import JsonlWriter, Serializer, gc_collect
from swift.utils import JsonlWriter, Serializer, gc_collect, get_logger, is_mp
from .arguments import Seq2SeqTrainingArguments, TrainingArguments
from .mixin import DataLoaderMixin, SwiftMixin

logger = get_logger()


class Trainer(SwiftMixin, HfTrainer):
    args: TrainingArguments
@@ -152,15 +155,39 @@ def prediction_step(
        return None, response_list, labels_list

    def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
        from swift.plugin.loss import get_loss_func
        loss_kwargs = {}
        labels = None
        if (self.label_smoother is not None or self.compute_loss_func is not None) and 'labels' in inputs:
            labels = inputs.pop('labels')

        compute_loss_func = self.compute_loss_func
        loss_scale = inputs.pop('loss_scale', None)
        if loss_scale is not None:
            loss_kwargs['loss_scale'] = loss_scale
            if compute_loss_func is None:
                compute_loss_func = get_loss_func('loss_scale')
        if (self.label_smoother is not None or compute_loss_func is not None) and 'labels' in inputs:
            labels = inputs.pop('labels')

        base_model = self.template.get_base_model(self.model)
        use_logits_to_keep = self.args.use_logits_to_keep
        if use_logits_to_keep is None:
            # padding_free or packing
            use_logits_to_keep = 'labels' in inputs and 'logits_to_keep' in inspect.signature(
                base_model.forward).parameters
        logger.info_once(f'use_logits_to_keep: {use_logits_to_keep}')

        if use_logits_to_keep:
            if inputs['labels'].shape[0] == 1:
                loss_mask = (inputs['labels'] != -100)[0]
                inputs['labels'] = inputs['labels'][:, loss_mask]
                inputs['labels'] = nn.functional.pad(inputs['labels'], (1, 0), value=-100)
                inputs['logits_to_keep'] = nn.functional.pad(loss_mask[1:], (0, 1), value=True)
                if is_mp():
                    inputs['logits_to_keep'] = inputs['logits_to_keep'].cpu()
            else:
                inputs['logits_to_keep'] = (inputs['labels'].shape[-1] -
                                            (torch.ne(inputs['labels'], -100).int().argmax(-1))).max().item() + 1
                assert inputs['logits_to_keep'] > 0
                inputs['labels'] = inputs['labels'][:, -inputs['logits_to_keep']:]
        with self.template.compute_loss_context(self.model, inputs):
            outputs = model(**inputs)
        # Save past state if it exists
@@ -188,8 +215,8 @@ def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=N
            else:
                model_name = unwrapped_model._get_name()
            # User-defined compute_loss function
            if self.compute_loss_func is not None:
                loss = self.compute_loss_func(outputs, labels, num_items_in_batch=num_items_in_batch, **loss_kwargs)
            if compute_loss_func is not None:
                loss = compute_loss_func(outputs, labels, num_items_in_batch=num_items_in_batch, **loss_kwargs)
            elif model_name in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.values():
                loss = self.label_smoother(outputs, labels, shift_labels=True)
            else:
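
To see what the two `use_logits_to_keep` branches above actually compute, here is a small self-contained walk-through on made-up labels (`-100` marks unsupervised positions; all values are arbitrary):

```python
import torch
import torch.nn as nn

labels = torch.tensor([[-100, -100, -100, 5, 6, 7],
                       [-100, -100, 11, 12, 13, 14]])

# Multi-sample branch: keep enough trailing positions to cover every supervised
# label, plus one extra logit to account for the causal shift.
seq_len = labels.shape[-1]                                       # 6
first_supervised = torch.ne(labels, -100).int().argmax(-1)       # tensor([3, 2])
logits_to_keep = (seq_len - first_supervised).max().item() + 1   # 4 + 1 = 5
trimmed_labels = labels[:, -logits_to_keep:]                     # shape [2, 5]

# Single-sample branch (padding_free / packing): a boolean mask instead of an int,
# so logits are only produced for positions that actually predict a label.
single = labels[:1]
loss_mask = (single != -100)[0]                                   # [F, F, F, T, T, T]
packed_labels = nn.functional.pad(single[:, loss_mask], (1, 0), value=-100)  # [[-100, 5, 6, 7]]
logits_mask = nn.functional.pad(loss_mask[1:], (0, 1), value=True)           # keeps 4 positions

print(logits_to_keep, trimmed_labels.shape, packed_labels.shape, int(logits_mask.sum()))
```
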