<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# PEFT as a utility library

In this section, let's cover how you can leverage PEFT's low-level API to inject trainable adapters into any `torch` module.
The development of this API has been motivated by the need for power users to not rely on the modeling classes that are exposed in the PEFT library and still be able to use adapter methods such as LoRA, IA3 and AdaLoRA.

## Supported tuner types

Currently the supported adapter types are the 'injectable' adapters, meaning adapters where an in-place modification of the model is sufficient to correctly perform the fine-tuning. As such, only [LoRA](./conceptual_guides/lora), AdaLoRA and [IA3](./conceptual_guides/ia3) are currently supported in this API.

## `inject_adapter_in_model` method

To perform the adapter injection, simply use the `inject_adapter_in_model` method, which takes three arguments: the PEFT config, the model itself, and an optional adapter name. You can also attach multiple adapters to the same model by calling `inject_adapter_in_model` multiple times with different adapter names (a sketch of this follows the example output below).

Below is a basic example of how to inject LoRA adapters into the submodule `linear` of the module `DummyModel`.
```python
import torch
from peft import inject_adapter_in_model, LoraConfig


class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = torch.nn.Embedding(10, 10)
        self.linear = torch.nn.Linear(10, 10)
        self.lm_head = torch.nn.Linear(10, 10)

    def forward(self, input_ids):
        x = self.embedding(input_ids)
        x = self.linear(x)
        x = self.lm_head(x)
        return x


lora_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    # Only the `linear` submodule will receive LoRA layers
    target_modules=["linear"],
)

model = DummyModel()
# The model is modified in-place; the returned object is the same instance
model = inject_adapter_in_model(lora_config, model)

dummy_inputs = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7]])
dummy_outputs = model(dummy_inputs)
```

If you print the model, you will notice that the adapters have been correctly injected:

```bash
DummyModel(
  (embedding): Embedding(10, 10)
  (linear): Linear(
    in_features=10, out_features=10, bias=True
    (lora_dropout): ModuleDict(
      (default): Dropout(p=0.1, inplace=False)
    )
    (lora_A): ModuleDict(
      (default): Linear(in_features=10, out_features=64, bias=False)
    )
    (lora_B): ModuleDict(
      (default): Linear(in_features=64, out_features=10, bias=False)
    )
    (lora_embedding_A): ParameterDict()
    (lora_embedding_B): ParameterDict()
  )
  (lm_head): Linear(in_features=10, out_features=10, bias=True)
)
```
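
As noted above, you can attach more than one adapter by calling `inject_adapter_in_model` again with a different adapter name. Below is a minimal sketch; the adapter name `"other"` and the second config's hyperparameters are illustrative choices, not prescribed values:

```python
# Hypothetical second adapter: a fresh config injected under the name "other"
other_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=8,
    bias="none",
    target_modules=["linear"],
)
model = inject_adapter_in_model(other_config, model, adapter_name="other")
```

Printing the model again would now show `other` entries alongside `default` in each `ModuleDict`.
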
Note that it is up to the user to take care of saving the adapters (in case they want to save adapters only), as `model.state_dict()` will return the full state dict of the model.
In case you want to extract the adapter state dict, you can use the `get_peft_model_state_dict` method:

```python
from peft import get_peft_model_state_dict

peft_state_dict = get_peft_model_state_dict(model)
print(peft_state_dict)
```
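
Since this low-level API does not come with `save_pretrained` / `from_pretrained`, one possible way to persist and restore adapters is a plain `torch.save` round trip combined with PEFT's `set_peft_model_state_dict` utility. A minimal sketch, where the file name `adapter_weights.pt` is an arbitrary choice:

```python
import torch

from peft import set_peft_model_state_dict

# Save only the adapter weights (the file name is an arbitrary choice)
torch.save(peft_state_dict, "adapter_weights.pt")

# Later: rebuild the base model, re-inject the adapter, and restore the weights
fresh_model = DummyModel()
fresh_model = inject_adapter_in_model(lora_config, fresh_model)
set_peft_model_state_dict(fresh_model, torch.load("adapter_weights.pt"))
```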

## Pros and cons

When should you use this API, and when shouldn't you? Let's discuss the pros and cons in this section.

Pros:
- The model gets modified in-place, meaning the model will preserve all its original attributes and methods
- Works for any torch module, and any modality (vision, text, multi-modal)

Cons:
- You need to manually write Hugging Face `from_pretrained` and `save_pretrained` utility methods if you want to easily save / load adapters from the Hugging Face Hub.
- You cannot use any of the utility methods provided by `PeftModel`, such as disabling adapters, merging adapters, etc.