transformers_docs.jsonl
{"title": "accelerate.mdx", "repo_owner": "huggingface", "repo_name": "transformers", "text": "<!--Copyright 2022 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with\nthe License. You may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n-->\n\n# Distributed training with \ud83e\udd17 Accelerate\n\nAs models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the [\ud83e\udd17 Accelerate](https://huggingface.co/docs/accelerate) library to help users easily train a \ud83e\udd17 Transformers model on any type of distributed setup, whether it is multiple GPU's on one machine or multiple GPU's across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.\n\n## Setup\n\nGet started by installing \ud83e\udd17 Accelerate:\n\n```bash\npip install accelerate\n```\n\nThen import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.\n\n```py\n>>> from accelerate import Accelerator\n\n>>> accelerator = Accelerator()\n```\n\n## Prepare to accelerate\n\nThe next step is to pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model and an optimizer:\n\n```py\n>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(\n... train_dataloader, eval_dataloader, model, optimizer\n... )\n```\n\n## Backward\n\nThe last addition is to replace the typical `loss.backward()` in your training loop with \ud83e\udd17 Accelerate's [`~accelerate.Accelerator.backward`]method:\n\n```py\n>>> for epoch in range(num_epochs):\n... for batch in train_dataloader:\n... outputs = model(**batch)\n... loss = outputs.loss\n... accelerator.backward(loss)\n\n... optimizer.step()\n... lr_scheduler.step()\n... optimizer.zero_grad()\n... 
progress_bar.update(1)\n```\n\nAs you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!\n\n```diff\n+ from accelerate import Accelerator\n from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler\n\n+ accelerator = Accelerator()\n\n model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)\n optimizer = AdamW(model.parameters(), lr=3e-5)\n\n- device = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\n- model.to(device)\n\n+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(\n+ train_dataloader, eval_dataloader, model, optimizer\n+ )\n\n num_epochs = 3\n num_training_steps = num_epochs * len(train_dataloader)\n lr_scheduler = get_scheduler(\n \"linear\",\n optimizer=optimizer,\n num_warmup_steps=0,\n num_training_steps=num_training_steps\n )\n\n progress_bar = tqdm(range(num_training_steps))\n\n model.train()\n for epoch in range(num_epochs):\n for batch in train_dataloader:\n- batch = {k: v.to(device) for k, v in batch.items()}\n outputs = model(**batch)\n loss = outputs.loss\n- loss.backward()\n+ accelerator.backward(loss)\n\n optimizer.step()\n lr_scheduler.step()\n optimizer.zero_grad()\n progress_bar.update(1)\n```\n\n## Train\n\nOnce you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.\n\n### Train with a script\n\nIf you are running your training from a script, run the following command to create and save a configuration file:\n\n```bash\naccelerate config\n```\n\nThen launch your training with:\n\n```bash\naccelerate launch train.py\n```\n\n### Train with a notebook\n\n\ud83e\udd17 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]:\n\n```py\n>>> from accelerate import notebook_launcher\n\n>>> notebook_launcher(training_function)\n```\n\nFor more information about \ud83e\udd17 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate)."}
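The diff in the record above shows only the lines that change, not the data preparation around them. Below is a minimal, self-contained sketch of the same Accelerate loop with that setup filled in. The GLUE MRPC dataset, the `bert-base-cased` checkpoint, and the hyperparameters are illustrative assumptions (mirroring the Transformers fine-tuning tutorial), not something prescribed by the accelerate guide itself.

```python
# Sketch only: dataset, checkpoint and hyperparameters are illustrative.
from accelerate import Accelerator
from datasets import load_dataset
from torch.optim import AdamW  # the guide's diff imports AdamW from transformers; torch.optim.AdamW also works
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    get_scheduler,
)

checkpoint = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Tokenize GLUE MRPC (sentence-pair classification) and keep only model inputs.
raw_datasets = load_dataset("glue", "mrpc")
tokenized = raw_datasets.map(
    lambda batch: tokenizer(batch["sentence1"], batch["sentence2"], truncation=True), batched=True
)
tokenized = tokenized.remove_columns(["sentence1", "sentence2", "idx"]).rename_column("label", "labels")
tokenized.set_format("torch")

data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
train_dataloader = DataLoader(tokenized["train"], shuffle=True, batch_size=8, collate_fn=data_collator)
eval_dataloader = DataLoader(tokenized["validation"], batch_size=8, collate_fn=data_collator)

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = AdamW(model.parameters(), lr=3e-5)

accelerator = Accelerator()
# No manual model.to(device) or batch.to(device): prepare() handles device placement.
train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
    train_dataloader, eval_dataloader, model, optimizer
)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)

progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)  # replaces loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
```

Saved as a script, this runs on whatever distributed setup you configured, via `accelerate config` followed by `accelerate launch train.py`.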
{"title": "add_new_model.mdx", "repo_owner": "huggingface", "repo_name": "transformers", "text": "<!--Copyright 2020 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with\nthe License. You may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\n-->\n\n# How to add a model to \ud83e\udd17 Transformers?\n\nThe \ud83e\udd17 Transformers library is often able to offer new models thanks to community contributors. But this can be a challenging project and requires an in-depth knowledge of the \ud83e\udd17 Transformers library and the model to implement. At Hugging Face, we're trying to empower more of the community to actively add models and we've put together this guide to walk you through the process of adding a PyTorch model (make sure you have [PyTorch installed](https://pytorch.org/get-started/locally/)).\n\n<Tip>\n\nIf you're interested in implementing a TensorFlow model, take a look at the [How to convert a \ud83e\udd17 Transformers model to TensorFlow](add_tensorflow_model) guide!\n\n</Tip>\n\nAlong the way, you'll:\n\n- get insights into open-source best practices\n- understand the design principles behind one of the most popular deep learning libraries\n- learn how to efficiently test large models\n- learn how to integrate Python utilities like `black`, `ruff`, and `make fix-copies` to ensure clean and readable code\n\nA Hugging Face team member will be available to help you along the way so you'll never be alone. \ud83e\udd17 \u2764\ufe0f\n\nTo get started, open a [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) issue for the model you want to see in \ud83e\udd17 Transformers. If you're not especially picky about contributing a specific model, you can filter by the [New model label](https://github.com/huggingface/transformers/labels/New%20model) to see if there are any unclaimed model requests and work on it.\n\nOnce you've opened a new model request, the first step is to get familiar with \ud83e\udd17 Transformers if you aren't already!\n\n## General overview of \ud83e\udd17 Transformers\n\nFirst, you should get a general overview of \ud83e\udd17 Transformers. \ud83e\udd17 Transformers is a very opinionated library, so there is a\nchance that you don't agree with some of the library's philosophies or design choices. From our experience, however, we\nfound that the fundamental design choices and philosophies of the library are crucial to efficiently scale \ud83e\udd17\nTransformers while keeping maintenance costs at a reasonable level.\n\nA good first starting point to better understand the library is to read the [documentation of our philosophy](philosophy). 
As a result of our way of working, there are some choices that we try to apply to all models:\n\n- Composition is generally favored over-abstraction\n- Duplicating code is not always bad if it strongly improves the readability or accessibility of a model\n- Model files are as self-contained as possible so that when you read the code of a specific model, you ideally only\n have to look into the respective `modeling_....py` file.\n\nIn our opinion, the library's code is not just a means to provide a product, *e.g.* the ability to use BERT for\ninference, but also as the very product that we want to improve. Hence, when adding a model, the user is not only the\nperson that will use your model, but also everybody that will read, try to understand, and possibly tweak your code.\n\nWith this in mind, let's go a bit deeper into the general library design.\n\n### Overview of models\n\nTo successfully add a model, it is important to understand the interaction between your model and its config,\n[`PreTrainedModel`], and [`PretrainedConfig`]. For exemplary purposes, we will\ncall the model to be added to \ud83e\udd17 Transformers `BrandNewBert`.\n\nLet's take a look:\n\n<img src=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png\"/>\n\nAs you can see, we do make use of inheritance in \ud83e\udd17 Transformers, but we keep the level of abstraction to an absolute\nminimum. There are never more than two levels of abstraction for any model in the library. `BrandNewBertModel`\ninherits from `BrandNewBertPreTrainedModel` which in turn inherits from [`PreTrainedModel`] and\nthat's it. As a general rule, we want to make sure that a new model only depends on\n[`PreTrainedModel`]. The important functionalities that are automatically provided to every new\nmodel are [`~PreTrainedModel.from_pretrained`] and\n[`~PreTrainedModel.save_pretrained`], which are used for serialization and deserialization. All of the\nother important functionalities, such as `BrandNewBertModel.forward` should be completely defined in the new\n`modeling_brand_new_bert.py` script. Next, we want to make sure that a model with a specific head layer, such as\n`BrandNewBertForMaskedLM` does not inherit from `BrandNewBertModel`, but rather uses `BrandNewBertModel`\nas a component that can be called in its forward pass to keep the level of abstraction low. Every new model requires a\nconfiguration class, called `BrandNewBertConfig`. This configuration is always stored as an attribute in\n[`PreTrainedModel`], and thus can be accessed via the `config` attribute for all classes\ninheriting from `BrandNewBertPreTrainedModel`:\n\n```python\nmodel = BrandNewBertModel.from_pretrained(\"brandy/brand_new_bert\")\nmodel.config # model has access to its config\n```\n\nSimilar to the model, the configuration inherits basic serialization and deserialization functionalities from\n[`PretrainedConfig`]. Note that the configuration and the model are always serialized into two\ndifferent formats - the model to a *pytorch_model.bin* file and the configuration to a *config.json* file. Calling\n[`~PreTrainedModel.save_pretrained`] will automatically call\n[`~PretrainedConfig.save_pretrained`], so that both model and configuration are saved.\n\n\n### Code style\n\nWhen coding your new model, keep in mind that Transformers is an opinionated library and we have a few quirks of our\nown regarding how code should be written :-)\n\n1. 
The forward pass of your model should be fully written in the modeling file while being fully independent of other\n models in the library. If you want to reuse a block from another model, copy the code and paste it with a\n `# Copied from` comment on top (see [here](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160)\n for a good example).\n2. The code should be fully understandable, even by a non-native English speaker. This means you should pick\n descriptive variable names and avoid abbreviations. As an example, `activation` is preferred to `act`.\n One-letter variable names are strongly discouraged unless it's an index in a for loop.\n3. More generally we prefer longer explicit code to short magical one.\n4. Avoid subclassing `nn.Sequential` in PyTorch but subclass `nn.Module` and write the forward pass, so that anyone\n using your code can quickly debug it by adding print statements or breaking points.\n5. Your function signature should be type-annotated. For the rest, good variable names are way more readable and\n understandable than type annotations.\n\n### Overview of tokenizers\n\nNot quite ready yet :-( This section will be added soon!\n\n## Step-by-step recipe to add a model to \ud83e\udd17 Transformers\n\nEveryone has different preferences of how to port a model so it can be very helpful for you to take a look at summaries\nof how other contributors ported models to Hugging Face. Here is a list of community blog posts on how to port a model:\n\n1. [Porting GPT2 Model](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf)\n2. [Porting WMT19 MT Model](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas)\n\nFrom experience, we can tell you that the most important things to keep in mind when adding a model are:\n\n- Don't reinvent the wheel! Most parts of the code you will add for the new \ud83e\udd17 Transformers model already exist\n somewhere in \ud83e\udd17 Transformers. Take some time to find similar, already existing models and tokenizers you can copy\n from. [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your\n friends. Note that it might very well happen that your model's tokenizer is based on one model implementation, and\n your model's modeling code on another one. *E.g.* FSMT's modeling code is based on BART, while FSMT's tokenizer code\n is based on XLM.\n- It's more of an engineering challenge than a scientific challenge. You should spend more time on creating an\n efficient debugging environment than trying to understand all theoretical aspects of the model in the paper.\n- Ask for help, when you're stuck! Models are the core component of \ud83e\udd17 Transformers so that we at Hugging Face are more\n than happy to help you at every step to add your model. 
Don't hesitate to ask if you notice you are not making\n progress.\n\nIn the following, we try to give you a general recipe that we found most useful when porting a model to \ud83e\udd17 Transformers.\n\nThe following list is a summary of everything that has to be done to add a model and can be used by you as a To-Do\nList:\n\n\u2610 (Optional) Understood the model's theoretical aspects<br>\n\u2610 Prepared \ud83e\udd17 Transformers dev environment<br>\n\u2610 Set up debugging environment of the original repository<br>\n\u2610 Created script that successfully runs the `forward()` pass using the original repository and checkpoint<br>\n\u2610 Successfully added the model skeleton to \ud83e\udd17 Transformers<br>\n\u2610 Successfully converted original checkpoint to \ud83e\udd17 Transformers checkpoint<br>\n\u2610 Successfully ran `forward()` pass in \ud83e\udd17 Transformers that gives identical output to original checkpoint<br>\n\u2610 Finished model tests in \ud83e\udd17 Transformers<br>\n\u2610 Successfully added tokenizer in \ud83e\udd17 Transformers<br>\n\u2610 Run end-to-end integration tests<br>\n\u2610 Finished docs<br>\n\u2610 Uploaded model weights to the Hub<br>\n\u2610 Submitted the pull request<br>\n\u2610 (Optional) Added a demo notebook\n\nTo begin with, we usually recommend to start by getting a good theoretical understanding of `BrandNewBert`. However,\nif you prefer to understand the theoretical aspects of the model *on-the-job*, then it is totally fine to directly dive\ninto the `BrandNewBert`'s code-base. This option might suit you better, if your engineering skills are better than\nyour theoretical skill, if you have trouble understanding `BrandNewBert`'s paper, or if you just enjoy programming\nmuch more than reading scientific papers.\n\n### 1. (Optional) Theoretical aspects of BrandNewBert\n\nYou should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large\nsections of the paper that are difficult to understand. If this is the case, this is fine - don't worry! The goal is\nnot to get a deep theoretical understanding of the paper, but to extract the necessary information required to\neffectively re-implement the model in \ud83e\udd17 Transformers. That being said, you don't have to spend too much time on the\ntheoretical aspects, but rather focus on the practical ones, namely:\n\n- What type of model is *brand_new_bert*? BERT-like encoder-only model? GPT2-like decoder-only model? BART-like\n encoder-decoder model? Look at the [model_summary](model_summary) if you're not familiar with the differences between those.\n- What are the applications of *brand_new_bert*? Text classification? Text generation? Seq2Seq tasks, *e.g.,*\n summarization?\n- What is the novel feature of the model making it different from BERT/GPT-2/BART?\n- Which of the already existing [\ud83e\udd17 Transformers models](https://huggingface.co/transformers/#contents) is most\n similar to *brand_new_bert*?\n- What type of tokenizer is used? A sentencepiece tokenizer? Word piece tokenizer? Is it the same tokenizer as used\n for BERT or BART?\n\nAfter you feel like you have gotten a good overview of the architecture of the model, you might want to write to the\nHugging Face team with any questions you might have. This might include questions regarding the model's architecture,\nits attention layer, etc. We will be more than happy to help you.\n\n### 2. Next prepare your environment\n\n1. 
Fork the [repository](https://github.com/huggingface/transformers) by clicking on the \u2018Fork' button on the\n repository's page. This creates a copy of the code under your GitHub user account.\n\n2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:\n\n```bash\ngit clone https://github.com/[your Github handle]/transformers.git\ncd transformers\ngit remote add upstream https://github.com/huggingface/transformers.git\n```\n\n3. Set up a development environment, for instance by running the following command:\n\n```bash\npython -m venv .env\nsource .env/bin/activate\npip install -e \".[dev]\"\n```\n\nDepending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a\nfailure with this command. If that's the case make sure to install the Deep Learning framework you are working with\n(PyTorch, TensorFlow and/or Flax) then do:\n\n```bash\npip install -e \".[quality]\"\n```\n\nwhich should be enough for most use cases. You can then return to the parent directory\n\n```bash\ncd ..\n```\n\n4. We recommend adding the PyTorch version of *brand_new_bert* to Transformers. To install PyTorch, please follow the\n instructions on https://pytorch.org/get-started/locally/.\n\n**Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.\n\n5. To port *brand_new_bert*, you will also need access to its original repository:\n\n```bash\ngit clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git\ncd brand_new_bert\npip install -e .\n```\n\nNow you have set up a development environment to port *brand_new_bert* to \ud83e\udd17 Transformers.\n\n### 3.-4. Run a pretrained checkpoint using the original repository\n\nAt first, you will work on the original *brand_new_bert* repository. Often, the original implementation is very\n\u201cresearchy\u201d. Meaning that documentation might be lacking and the code can be difficult to understand. But this should\nbe exactly your motivation to reimplement *brand_new_bert*. At Hugging Face, one of our main goals is to *make people\nstand on the shoulders of giants* which translates here very well into taking a working model and rewriting it to make\nit as **accessible, user-friendly, and beautiful** as possible. This is the number-one motivation to re-implement\nmodels into \ud83e\udd17 Transformers - trying to make complex new NLP technology accessible to **everybody**.\n\nYou should start thereby by diving into the original repository.\n\nSuccessfully running the official pretrained model in the original repository is often **the most difficult** step.\nFrom our experience, it is very important to spend some time getting familiar with the original code-base. You need to\nfigure out the following:\n\n- Where to find the pretrained weights?\n- How to load the pretrained weights into the corresponding model?\n- How to run the tokenizer independently from the model?\n- Trace one forward pass so that you know which classes and functions are required for a simple forward pass. Usually,\n you only have to reimplement those functions.\n- Be able to locate the important components of the model: Where is the model's class? Are there model sub-classes,\n *e.g.* EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers,\n *e.g.* *self-attention*, *cross-attention*...?\n- How can you debug the model in the original environment of the repo? 
Do you have to add *print* statements, can you\n work with an interactive debugger like *ipdb*, or should you use an efficient IDE to debug the model, like PyCharm?\n\nIt is very important that before you start the porting process, that you can **efficiently** debug code in the original\nrepository! Also, remember that you are working with an open-source library, so do not hesitate to open an issue, or\neven a pull request in the original repository. The maintainers of this repository are most likely very happy about\nsomeone looking into their code!\n\nAt this point, it is really up to you which debugging environment and strategy you prefer to use to debug the original\nmodel. We strongly advise against setting up a costly GPU environment, but simply work on a CPU both when starting to\ndive into the original repository and also when starting to write the \ud83e\udd17 Transformers implementation of the model. Only\nat the very end, when the model has already been successfully ported to \ud83e\udd17 Transformers, one should verify that the\nmodel also works as expected on GPU.\n\nIn general, there are two possible debugging environments for running the original model\n\n- [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb)\n- Local python scripts.\n\nJupyter notebooks have the advantage that they allow for cell-by-cell execution which can be helpful to better split\nlogical components from one another and to have faster debugging cycles as intermediate results can be stored. Also,\nnotebooks are often easier to share with other contributors, which might be very helpful if you want to ask the Hugging\nFace team for help. If you are familiar with Jupyter notebooks, we strongly recommend you to work with them.\n\nThe obvious disadvantage of Jupyter notebooks is that if you are not used to working with them you will have to spend\nsome time adjusting to the new programming environment and that you might not be able to use your known debugging tools\nanymore, like `ipdb`.\n\nFor each code-base, a good first step is always to load a **small** pretrained checkpoint and to be able to reproduce a\nsingle forward pass using a dummy integer vector of input IDs as an input. Such a script could look like this (in\npseudocode):\n\n```python\nmodel = BrandNewBertModel.load_pretrained_checkpoint(\"/path/to/checkpoint/\")\ninput_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids\noriginal_output = model.predict(input_ids)\n```\n\nNext, regarding the debugging strategy, there are generally a few from which to choose from:\n\n- Decompose the original model into many small testable components and run a forward pass on each of those for\n verification\n- Decompose the original model only into the original *tokenizer* and the original *model*, run a forward pass on\n those, and use intermediate print statements or breakpoints for verification\n\nAgain, it is up to you which strategy to choose. Often, one or the other is advantageous depending on the original code\nbase.\n\nIf the original code-base allows you to decompose the model into smaller sub-components, *e.g.* if the original\ncode-base can easily be run in eager mode, it is usually worth the effort to do so. 
There are some important advantages\nto taking the more difficult road in the beginning:\n\n- at a later stage when comparing the original model to the Hugging Face implementation, you can verify automatically\n for each component individually that the corresponding component of the \ud83e\udd17 Transformers implementation matches instead\n of relying on visual comparison via print statements\n- it can give you some rope to decompose the big problem of porting a model into smaller problems of just porting\n individual components and thus structure your work better\n- separating the model into logical meaningful components will help you to get a better overview of the model's design\n and thus to better understand the model\n- at a later stage those component-by-component tests help you to ensure that no regression occurs as you continue\n changing your code\n\n[Lysandre's](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) integration checks for ELECTRA\ngives a nice example of how this can be done.\n\nHowever, if the original code-base is very complex or only allows intermediate components to be run in a compiled mode,\nit might be too time-consuming or even impossible to separate the model into smaller testable sub-components. A good\nexample is [T5's MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) library which is\nvery complex and does not offer a simple way to decompose the model into its sub-components. For such libraries, one\noften relies on verifying print statements.\n\nNo matter which strategy you choose, the recommended procedure is often the same in that you should start to debug the\nstarting layers first and the ending layers last.\n\nIt is recommended that you retrieve the output, either by print statements or sub-component functions, of the following\nlayers in the following order:\n\n1. Retrieve the input IDs passed to the model\n2. Retrieve the word embeddings\n3. Retrieve the input of the first Transformer layer\n4. Retrieve the output of the first Transformer layer\n5. Retrieve the output of the following n - 1 Transformer layers\n6. Retrieve the output of the whole BrandNewBert Model\n\nInput IDs should thereby consists of an array of integers, *e.g.* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`\n\nThe outputs of the following layers often consist of multi-dimensional float arrays and can look like this:\n\n```\n[[\n [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024],\n [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132],\n [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648],\n ...,\n [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288],\n [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191],\n [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]],\n```\n\nWe expect that every model added to \ud83e\udd17 Transformers passes a couple of integration tests, meaning that the original\nmodel and the reimplemented version in \ud83e\udd17 Transformers have to give the exact same output up to a precision of 0.001!\nSince it is normal that the exact same model written in different libraries can give a slightly different output\ndepending on the library framework, we accept an error tolerance of 1e-3 (0.001). It is not enough if the model gives\nnearly the same output, they have to be the almost identical. 
Therefore, you will certainly compare the intermediate\noutputs of the \ud83e\udd17 Transformers version multiple times against the intermediate outputs of the original implementation of\n*brand_new_bert* in which case an **efficient** debugging environment of the original repository is absolutely\nimportant. Here is some advice is to make your debugging environment as efficient as possible.\n\n- Find the best way of debugging intermediate results. Is the original repository written in PyTorch? Then you should\n probably take the time to write a longer script that decomposes the original model into smaller sub-components to\n retrieve intermediate values. Is the original repository written in Tensorflow 1? Then you might have to rely on\n TensorFlow print operations like [tf.print](https://www.tensorflow.org/api_docs/python/tf/print) to output\n intermediate values. Is the original repository written in Jax? Then make sure that the model is **not jitted** when\n running the forward pass, *e.g.* check-out [this link](https://github.com/google/jax/issues/196).\n- Use the smallest pretrained checkpoint you can find. The smaller the checkpoint, the faster your debug cycle\n becomes. It is not efficient if your pretrained model is so big that your forward pass takes more than 10 seconds.\n In case only very large checkpoints are available, it might make more sense to create a dummy model in the new\n environment with randomly initialized weights and save those weights for comparison with the \ud83e\udd17 Transformers version\n of your model\n- Make sure you are using the easiest way of calling a forward pass in the original repository. Ideally, you want to\n find the function in the original repository that **only** calls a single forward pass, *i.e.* that is often called\n `predict`, `evaluate`, `forward` or `__call__`. You don't want to debug a function that calls `forward`\n multiple times, *e.g.* to generate text, like `autoregressive_sample`, `generate`.\n- Try to separate the tokenization from the model's *forward* pass. If the original repository shows examples where\n you have to input a string, then try to find out where in the forward call the string input is changed to input ids\n and start from this point. This might mean that you have to possibly write a small script yourself or change the\n original code so that you can directly input the ids instead of an input string.\n- Make sure that the model in your debugging setup is **not** in training mode, which often causes the model to yield\n random outputs due to multiple dropout layers in the model. Make sure that the forward pass in your debugging\n environment is **deterministic** so that the dropout layers are not used. Or use *transformers.utils.set_seed*\n if the old and new implementations are in the same framework.\n\nThe following section gives you more specific details/tips on how you can do this for *brand_new_bert*.\n\n### 5.-14. Port BrandNewBert to \ud83e\udd17 Transformers\n\nNext, you can finally start adding new code to \ud83e\udd17 Transformers. Go into the clone of your \ud83e\udd17 Transformers' fork:\n\n```bash\ncd transformers\n```\n\nIn the special case that you are adding a model whose architecture exactly matches the model architecture of an\nexisting model you only have to add a conversion script as described in [this section](#write-a-conversion-script).\nIn this case, you can just re-use the whole model architecture of the already existing model.\n\nOtherwise, let's start generating a new model. 
You have two choices here:\n\n- `transformers-cli add-new-model-like` to add a new model like an existing one\n- `transformers-cli add-new-model` to add a new model from our template (will look like BERT or Bart depending on the type of model you select)\n\nIn both cases, you will be prompted with a questionnaire to fill the basic information of your model. The second command requires to install `cookiecutter`, you can find more information on it [here](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model).\n\n**Open a Pull Request on the main huggingface/transformers repo**\n\nBefore starting to adapt the automatically generated code, now is the time to open a \u201cWork in progress (WIP)\u201d pull\nrequest, *e.g.* \u201c[WIP] Add *brand_new_bert*\u201d, in \ud83e\udd17 Transformers so that you and the Hugging Face team can work\nside-by-side on integrating the model into \ud83e\udd17 Transformers.\n\nYou should do the following:\n\n1. Create a branch with a descriptive name from your main branch\n\n```bash\ngit checkout -b add_brand_new_bert\n```\n\n2. Commit the automatically generated code:\n\n```bash\ngit add .\ngit commit\n```\n\n3. Fetch and rebase to current main\n\n```bash\ngit fetch upstream\ngit rebase upstream/main\n```\n\n4. Push the changes to your account using:\n\n```bash\ngit push -u origin a-descriptive-name-for-my-changes\n```\n\n5. Once you are satisfied, go to the webpage of your fork on GitHub. Click on \u201cPull request\u201d. Make sure to add the\n GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for\n future changes.\n\n6. Change the PR into a draft by clicking on \u201cConvert to draft\u201d on the right of the GitHub pull request web page.\n\nIn the following, whenever you have done some progress, don't forget to commit your work and push it to your account so\nthat it shows in the pull request. Additionally, you should make sure to update your work with the current main from\ntime to time by doing:\n\n```bash\ngit fetch upstream\ngit merge upstream/main\n```\n\nIn general, all questions you might have regarding the model or your implementation should be asked in your PR and\ndiscussed/solved in the PR. This way, the Hugging Face team will always be notified when you are committing new code or\nif you have a question. It is often very helpful to point the Hugging Face team to your added code so that the Hugging\nFace team can efficiently understand your problem or question.\n\nTo do so, you can go to the \u201cFiles changed\u201d tab where you see all of your changes, go to a line regarding which you\nwant to ask a question, and click on the \u201c+\u201d symbol to add a comment. Whenever a question or problem has been solved,\nyou can click on the \u201cResolve\u201d button of the created comment.\n\nIn the same way, the Hugging Face team will open comments when reviewing your code. We recommend asking most questions\non GitHub on your PR. For some very general questions that are not very useful for the public, feel free to ping the\nHugging Face team by Slack or email.\n\n**5. Adapt the generated models code for brand_new_bert**\n\nAt first, we will focus only on the model itself and not care about the tokenizer. All the relevant code should be\nfound in the generated files `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` and\n`src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`.\n\nNow you can finally start coding :). 
The generated code in\n`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` will either have the same architecture as BERT if\nit's an encoder-only model or BART if it's an encoder-decoder model. At this point, you should remind yourself what\nyou've learned in the beginning about the theoretical aspects of the model: *How is the model different from BERT or\nBART?*\". Implement those changes which often means to change the *self-attention* layer, the order of the normalization\nlayer, etc\u2026 Again, it is often useful to look at the similar architecture of already existing models in Transformers to\nget a better feeling of how your model should be implemented.\n\n**Note** that at this point, you don't have to be very sure that your code is fully correct or clean. Rather, it is\nadvised to add a first *unclean*, copy-pasted version of the original code to\n`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` until you feel like all the necessary code is\nadded. From our experience, it is much more efficient to quickly add a first version of the required code and\nimprove/correct the code iteratively with the conversion script as described in the next section. The only thing that\nhas to work at this point is that you can instantiate the \ud83e\udd17 Transformers implementation of *brand_new_bert*, *i.e.* the\nfollowing command should work:\n\n```python\nfrom transformers import BrandNewBertModel, BrandNewBertConfig\n\nmodel = BrandNewBertModel(BrandNewBertConfig())\n```\n\nThe above command will create a model according to the default parameters as defined in `BrandNewBertConfig()` with\nrandom weights, thus making sure that the `init()` methods of all components works.\n\nNote that all random initialization should happen in the `_init_weights` method of your `BrandnewBertPreTrainedModel`\nclass. It should initialize all leaf modules depending on the variables of the config. Here is an example with the\nBERT `_init_weights` method:\n\n```py\ndef _init_weights(self, module):\n \"\"\"Initialize the weights\"\"\"\n if isinstance(module, nn.Linear):\n module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\n if module.bias is not None:\n module.bias.data.zero_()\n elif isinstance(module, nn.Embedding):\n module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\n if module.padding_idx is not None:\n module.weight.data[module.padding_idx].zero_()\n elif isinstance(module, nn.LayerNorm):\n module.bias.data.zero_()\n module.weight.data.fill_(1.0)\n```\n\nYou can have some more custom schemes if you need a special initialization for some modules. For instance, in\n`Wav2Vec2ForPreTraining`, the last two linear layers need to have the initialization of the regular PyTorch `nn.Linear`\nbut all the other ones should use an initialization as above. This is coded like this:\n\n```py\ndef _init_weights(self, module):\n \"\"\"Initialize the weights\"\"\"\n if isinstnace(module, Wav2Vec2ForPreTraining):\n module.project_hid.reset_parameters()\n module.project_q.reset_parameters()\n module.project_hid._is_hf_initialized = True\n module.project_q._is_hf_initialized = True\n elif isinstance(module, nn.Linear):\n module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\n if module.bias is not None:\n module.bias.data.zero_()\n```\n\nThe `_is_hf_initialized` flag is internally used to make sure we only initialize a submodule once. 
By setting it to\n`True` for `module.project_q` and `module.project_hid`, we make sure the custom initialization we did is not overridden later on,\nthe `_init_weights` function won't be applied to them.\n\n**6. Write a conversion script**\n\nNext, you should write a conversion script that lets you convert the checkpoint you used to debug *brand_new_bert* in\nthe original repository to a checkpoint compatible with your just created \ud83e\udd17 Transformers implementation of\n*brand_new_bert*. It is not advised to write the conversion script from scratch, but rather to look through already\nexisting conversion scripts in \ud83e\udd17 Transformers for one that has been used to convert a similar model that was written in\nthe same framework as *brand_new_bert*. Usually, it is enough to copy an already existing conversion script and\nslightly adapt it for your use case. Don't hesitate to ask the Hugging Face team to point you to a similar already\nexisting conversion script for your model.\n\n- If you are porting a model from TensorFlow to PyTorch, a good starting point might be BERT's conversion script [here](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)\n- If you are porting a model from PyTorch to PyTorch, a good starting point might be BART's conversion script [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)\n\nIn the following, we'll quickly explain how PyTorch models store layer weights and define layer names. In PyTorch, the\nname of a layer is defined by the name of the class attribute you give the layer. Let's define a dummy model in\nPyTorch, called `SimpleModel` as follows:\n\n```python\nfrom torch import nn\n\n\nclass SimpleModel(nn.Module):\n def __init__(self):\n super().__init__()\n self.dense = nn.Linear(10, 10)\n self.intermediate = nn.Linear(10, 10)\n self.layer_norm = nn.LayerNorm(10)\n```\n\nNow we can create an instance of this model definition which will fill all weights: `dense`, `intermediate`,\n`layer_norm` with random weights. We can print the model to see its architecture\n\n```python\nmodel = SimpleModel()\n\nprint(model)\n```\n\nThis will print out the following:\n\n```\nSimpleModel(\n (dense): Linear(in_features=10, out_features=10, bias=True)\n (intermediate): Linear(in_features=10, out_features=10, bias=True)\n (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True)\n)\n```\n\nWe can see that the layer names are defined by the name of the class attribute in PyTorch. 
You can print out the weight\nvalues of a specific layer:\n\n```python\nprint(model.dense.weight.data)\n```\n\nto see that the weights were randomly initialized\n\n```\ntensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212,\n -0.2077, 0.2157],\n [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190,\n 0.2166, -0.0212],\n [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950,\n -0.1023, -0.0447],\n [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415,\n -0.1876, -0.2467],\n [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465,\n 0.2577, 0.0402],\n [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604,\n 0.2132, 0.1680],\n [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090,\n 0.2707, -0.2509],\n [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407,\n 0.1829, -0.1568],\n [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923,\n 0.0333, -0.0536],\n [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739,\n 0.2220, 0.2358]]).\n```\n\nIn the conversion script, you should fill those randomly initialized weights with the exact weights of the\ncorresponding layer in the checkpoint. *E.g.*\n\n```python\n# retrieve matching layer weights, e.g. by\n# recursive algorithm\nlayer_name = \"dense\"\npretrained_weight = array_of_dense_layer\n\nmodel_pointer = getattr(model, \"dense\")\n\nmodel_pointer.weight.data = torch.from_numpy(pretrained_weight)\n```\n\nWhile doing so, you must verify that each randomly initialized weight of your PyTorch model and its corresponding\npretrained checkpoint weight exactly match in both **shape and name**. To do so, it is **necessary** to add assert\nstatements for the shape and print out the names of the checkpoints weights. E.g. you should add statements like:\n\n```python\nassert (\n model_pointer.weight.shape == pretrained_weight.shape\n), f\"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched\"\n```\n\nBesides, you should also print out the names of both weights to make sure they match, *e.g.*\n\n```python\nlogger.info(f\"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}\")\n```\n\nIf either the shape or the name doesn't match, you probably assigned the wrong checkpoint weight to a randomly\ninitialized layer of the \ud83e\udd17 Transformers implementation.\n\nAn incorrect shape is most likely due to an incorrect setting of the config parameters in `BrandNewBertConfig()` that\ndo not exactly match those that were used for the checkpoint you want to convert. However, it could also be that\nPyTorch's implementation of a layer requires the weight to be transposed beforehand.\n\nFinally, you should also check that **all** required weights are initialized and print out all checkpoint weights that\nwere not used for initialization to make sure the model is correctly converted. It is completely normal, that the\nconversion trials fail with either a wrong shape statement or wrong name assignment. 
This is most likely because either\nyou used incorrect parameters in `BrandNewBertConfig()`, have a wrong architecture in the \ud83e\udd17 Transformers\nimplementation, you have a bug in the `init()` functions of one of the components of the \ud83e\udd17 Transformers\nimplementation or you need to transpose one of the checkpoint weights.\n\nThis step should be iterated with the previous step until all weights of the checkpoint are correctly loaded in the\nTransformers model. Having correctly loaded the checkpoint into the \ud83e\udd17 Transformers implementation, you can then save\nthe model under a folder of your choice `/path/to/converted/checkpoint/folder` that should then contain both a\n`pytorch_model.bin` file and a `config.json` file:\n\n```python\nmodel.save_pretrained(\"/path/to/converted/checkpoint/folder\")\n```\n\n**7. Implement the forward pass**\n\nHaving managed to correctly load the pretrained weights into the \ud83e\udd17 Transformers implementation, you should now make\nsure that the forward pass is correctly implemented. In [Get familiar with the original repository](#34-run-a-pretrained-checkpoint-using-the-original-repository), you have already created a script that runs a forward\npass of the model using the original repository. Now you should write an analogous script using the \ud83e\udd17 Transformers\nimplementation instead of the original one. It should look as follows:\n\n```python\nmodel = BrandNewBertModel.from_pretrained(\"/path/to/converted/checkpoint/folder\")\ninput_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]\noutput = model(input_ids).last_hidden_states\n```\n\nIt is very likely that the \ud83e\udd17 Transformers implementation and the original model implementation don't give the exact\nsame output the very first time or that the forward pass throws an error. Don't be disappointed - it's expected! First,\nyou should make sure that the forward pass doesn't throw any errors. It often happens that the wrong dimensions are\nused leading to a *Dimensionality mismatch* error or that the wrong data type object is used, *e.g.* `torch.long`\ninstead of `torch.float32`. Don't hesitate to ask the Hugging Face team for help, if you don't manage to solve\ncertain errors.\n\nThe final part to make sure the \ud83e\udd17 Transformers implementation works correctly is to ensure that the outputs are\nequivalent to a precision of `1e-3`. First, you should ensure that the output shapes are identical, *i.e.*\n`outputs.shape` should yield the same value for the script of the \ud83e\udd17 Transformers implementation and the original\nimplementation. Next, you should make sure that the output values are identical as well. This one of the most difficult\nparts of adding a new model. Common mistakes why the outputs are not identical are:\n\n- Some layers were not added, *i.e.* an *activation* layer was not added, or the residual connection was forgotten\n- The word embedding matrix was not tied\n- The wrong positional embeddings are used because the original implementation uses on offset\n- Dropout is applied during the forward pass. 
To fix this make sure *model.training is False* and that no dropout\n layer is falsely activated during the forward pass, *i.e.* pass *self.training* to [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)\n\nThe best way to fix the problem is usually to look at the forward pass of the original implementation and the \ud83e\udd17\nTransformers implementation side-by-side and check if there are any differences. Ideally, you should debug/print out\nintermediate outputs of both implementations of the forward pass to find the exact position in the network where the \ud83e\udd17\nTransformers implementation shows a different output than the original implementation. First, make sure that the\nhard-coded `input_ids` in both scripts are identical. Next, verify that the outputs of the first transformation of\nthe `input_ids` (usually the word embeddings) are identical. And then work your way up to the very last layer of the\nnetwork. At some point, you will notice a difference between the two implementations, which should point you to the bug\nin the \ud83e\udd17 Transformers implementation. From our experience, a simple and efficient way is to add many print statements\nin both the original implementation and \ud83e\udd17 Transformers implementation, at the same positions in the network\nrespectively, and to successively remove print statements showing the same values for intermediate presentations.\n\nWhen you're confident that both implementations yield the same output, verifying the outputs with\n`torch.allclose(original_output, output, atol=1e-3)`, you're done with the most difficult part! Congratulations - the\nwork left to be done should be a cakewalk \ud83d\ude0a.\n\n**8. Adding all necessary model tests**\n\nAt this point, you have successfully added a new model. However, it is very much possible that the model does not yet\nfully comply with the required design. To make sure, the implementation is fully compatible with \ud83e\udd17 Transformers, all\ncommon tests should pass. The Cookiecutter should have automatically added a test file for your model, probably under\nthe same `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`. Run this test file to verify that all common\ntests pass:\n\n```bash\npytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py\n```\n\nHaving fixed all common tests, it is now crucial to ensure that all the nice work you have done is well tested, so that\n\n- a) The community can easily understand your work by looking at specific tests of *brand_new_bert*\n- b) Future changes to your model will not break any important feature of the model.\n\nAt first, integration tests should be added. Those integration tests essentially do the same as the debugging scripts\nyou used earlier to implement the model to \ud83e\udd17 Transformers. A template of those model tests is already added by the\nCookiecutter, called `BrandNewBertModelIntegrationTests` and only has to be filled out by you. To ensure that those\ntests are passing, run\n\n```bash\nRUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests\n```\n\n<Tip>\n\nIn case you are using Windows, you should replace `RUN_SLOW=1` with `SET RUN_SLOW=1`\n\n</Tip>\n\nSecond, all features that are special to *brand_new_bert* should be tested additionally in a separate test under\n`BrandNewBertModelTester`/``BrandNewBertModelTest`. 
This part is often forgotten but is extremely useful in two\nways:\n\n- It helps to transfer the knowledge you have acquired during the model addition to the community by showing how the\n special features of *brand_new_bert* should work.\n- Future contributors can quickly test changes to the model by running those special tests.\n\n\n**9. Implement the tokenizer**\n\nNext, we should add the tokenizer of *brand_new_bert*. Usually, the tokenizer is equivalent or very similar to an\nalready existing tokenizer of \ud83e\udd17 Transformers.\n\nIt is very important to find/extract the original tokenizer file and to manage to load this file into the \ud83e\udd17\nTransformers' implementation of the tokenizer.\n\nTo ensure that the tokenizer works correctly, it is recommended to first create a script in the original repository\nthat inputs a string and returns the `input_ids``. It could look similar to this (in pseudo-code):\n\n```python\ninput_str = \"This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words.\"\nmodel = BrandNewBertModel.load_pretrained_checkpoint(\"/path/to/checkpoint/\")\ninput_ids = model.tokenize(input_str)\n```\n\nYou might have to take a deeper look again into the original repository to find the correct tokenizer function or you\nmight even have to do changes to your clone of the original repository to only output the `input_ids`. Having written\na functional tokenization script that uses the original repository, an analogous script for \ud83e\udd17 Transformers should be\ncreated. It should look similar to this:\n\n```python\nfrom transformers import BrandNewBertTokenizer\n\ninput_str = \"This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words.\"\n\ntokenizer = BrandNewBertTokenizer.from_pretrained(\"/path/to/tokenizer/folder/\")\n\ninput_ids = tokenizer(input_str).input_ids\n```\n\nWhen both `input_ids` yield the same values, as a final step a tokenizer test file should also be added.\n\nAnalogous to the modeling test files of *brand_new_bert*, the tokenization test files of *brand_new_bert* should\ncontain a couple of hard-coded integration tests.\n\n**10. Run End-to-end integration tests**\n\nHaving added the tokenizer, you should also add a couple of end-to-end integration tests using both the model and the\ntokenizer to `tests/models/brand_new_bert/test_modeling_brand_new_bert.py` in \ud83e\udd17 Transformers.\nSuch a test should show on a meaningful\ntext-to-text sample that the \ud83e\udd17 Transformers implementation works as expected. A meaningful text-to-text sample can\ninclude *e.g.* a source-to-target-translation pair, an article-to-summary pair, a question-to-answer pair, etc\u2026 If none\nof the ported checkpoints has been fine-tuned on a downstream task it is enough to simply rely on the model tests. In a\nfinal step to ensure that the model is fully functional, it is advised that you also run all tests on GPU. It can\nhappen that you forgot to add some `.to(self.device)` statements to internal tensors of the model, which in such a\ntest would show in an error. In case you have no access to a GPU, the Hugging Face team can take care of running those\ntests for you.\n\n**11. Add Docstring**\n\nNow, all the necessary functionality for *brand_new_bert* is added - you're almost done! The only thing left to add is\na nice docstring and a doc page. 
The Cookiecutter should have added a template file called\n`docs/source/model_doc/brand_new_bert.mdx` that you should fill out. Users of your model will usually first look at\nthis page before using your model. Hence, the documentation must be understandable and concise. It is very useful for\nthe community to add some *Tips* to show how the model should be used. Don't hesitate to ping the Hugging Face team\nregarding the docstrings.\n\nNext, make sure that the docstring added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` is\ncorrect and included all necessary inputs and outputs. We have a detailed guide about writing documentation and our docstring format [here](writing-documentation). It is always to good to remind oneself that documentation should\nbe treated at least as carefully as the code in \ud83e\udd17 Transformers since the documentation is usually the first contact\npoint of the community with the model.\n\n**Code refactor**\n\nGreat, now you have added all the necessary code for *brand_new_bert*. At this point, you should correct some potential\nincorrect code style by running:\n\n```bash\nmake style\n```\n\nand verify that your coding style passes the quality check:\n\n```bash\nmake quality\n```\n\nThere are a couple of other very strict design tests in \ud83e\udd17 Transformers that might still be failing, which shows up in\nthe tests of your pull request. This is often because of some missing information in the docstring or some incorrect\nnaming. The Hugging Face team will surely help you if you're stuck here.\n\nLastly, it is always a good idea to refactor one's code after having ensured that the code works correctly. With all\ntests passing, now it's a good time to go over the added code again and do some refactoring.\n\nYou have now finished the coding part, congratulation! \ud83c\udf89 You are Awesome! \ud83d\ude0e\n\n**12. Upload the models to the model hub**\n\nIn this final part, you should convert and upload all checkpoints to the model hub and add a model card for each\nuploaded model checkpoint. You can get familiar with the hub functionalities by reading our [Model sharing and uploading Page](model_sharing). You should work alongside the Hugging Face team here to decide on a fitting name for each\ncheckpoint and to get the required access rights to be able to upload the model under the author's organization of\n*brand_new_bert*. The `push_to_hub` method, present in all models in `transformers`, is a quick and efficient way to push your checkpoint to the hub. A little snippet is pasted below:\n\n```python\nbrand_new_bert.push_to_hub(\"brand_new_bert\")\n# Uncomment the following line to push to an organization.\n# brand_new_bert.push_to_hub(\"<organization>/brand_new_bert\")\n```\n\nIt is worth spending some time to create fitting model cards for each checkpoint. The model cards should highlight the\nspecific characteristics of this particular checkpoint, *e.g.* On which dataset was the checkpoint\npretrained/fine-tuned on? On what down-stream task should the model be used? And also include some code on how to\ncorrectly use the model.\n\n**13. (Optional) Add notebook**\n\nIt is very helpful to add a notebook that showcases in-detail how *brand_new_bert* can be used for inference and/or\nfine-tuned on a downstream task. This is not mandatory to merge your PR, but very useful for the community.\n\n**14. Submit your finished PR**\n\nYou're done programming now and can move to the last step, which is getting your PR merged into main. 
Usually, the\nHugging Face team should have helped you already at this point, but it is worth taking some time to give your finished\nPR a nice description and to add comments to your code if you want to point out certain design choices to your\nreviewer.\n\n### Share your work!!\n\nNow, it's time to get some credit from the community for your work! Having completed a model addition is a major\ncontribution to Transformers and the whole NLP community. Your code and the ported pre-trained models will certainly be\nused by hundreds and possibly even thousands of developers and researchers. You should be proud of your work and share\nyour achievement with the community.\n\n**You have made another model that is super easy to access for everyone in the community! \ud83e\udd2f**\n"}
{"title": "add_new_pipeline.mdx", "repo_owner": "huggingface", "repo_name": "transformers", "text": "<!--Copyright 2020 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with\nthe License. You may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n-->\n\n# How to create a custom pipeline?\n\nIn this guide, we will see how to create a custom pipeline and share it on the [Hub](https://hf.co/models) or add it to the\n\ud83e\udd17 Transformers library.\n\nFirst and foremost, you need to decide the raw entries the pipeline will be able to take. They can be strings, raw bytes,\ndictionaries or whatever seems to be the most likely desired input. Try to keep these inputs as pure Python as possible,\nas that makes compatibility easier (even with other languages via JSON). Those will be the `inputs` of the\npipeline (`preprocess`).\n\nThen define the `outputs`. Same policy as for the `inputs`. The simpler, the better. Those will be the outputs of the\n`postprocess` method.\n\nStart by inheriting the base class `Pipeline` and implementing the 4 methods needed: `preprocess`,\n`_forward`, `postprocess`, and `_sanitize_parameters`.\n\n```python\nimport torch\n\nfrom transformers import Pipeline\n\n\nclass MyPipeline(Pipeline):\n    def _sanitize_parameters(self, **kwargs):\n        preprocess_kwargs = {}\n        if \"maybe_arg\" in kwargs:\n            preprocess_kwargs[\"maybe_arg\"] = kwargs[\"maybe_arg\"]\n        return preprocess_kwargs, {}, {}\n\n    def preprocess(self, inputs, maybe_arg=2):\n        model_input = torch.tensor(inputs[\"input_ids\"])\n        return {\"model_input\": model_input}\n\n    def _forward(self, model_inputs):\n        # model_inputs == {\"model_input\": model_input}\n        outputs = self.model(**model_inputs)\n        # Maybe {\"logits\": Tensor(...)}\n        return outputs\n\n    def postprocess(self, model_outputs):\n        best_class = model_outputs[\"logits\"].softmax(-1)\n        return best_class\n```\n\nThis breakdown is structured to support relatively seamless CPU/GPU handling, while allowing the\npre/postprocessing to be done on the CPU on different threads.\n\n`preprocess` will take the originally defined inputs and turn them into something feedable to the model. The result might\ncontain more information and is usually a `Dict`.\n\n`_forward` is the implementation detail and is not meant to be called directly. `forward` is the preferred\nmethod to call as it contains safeguards to make sure everything is working on the expected device. If anything is\nlinked to a real model it belongs in the `_forward` method; anything else belongs in preprocess/postprocess.\n\n`postprocess` will take the output of `_forward` and turn it into the final output that was decided\nearlier.\n\n`_sanitize_parameters` exists to allow users to pass any parameters whenever they wish, be it at initialization\ntime `pipeline(..., maybe_arg=4)` or at call time `pipe = pipeline(...); output = pipe(..., maybe_arg=4)`.\n\nThe returns of `_sanitize_parameters` are the 3 dicts of kwargs that will be passed directly to `preprocess`,\n`_forward`, and `postprocess`. Don't fill anything if the caller didn't pass any extra parameter.\n\n
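For illustration, here is a minimal sketch of the two ways the same parameter can reach `_sanitize_parameters`, assuming\nthe hypothetical `\"my-new-task\"` used later in this guide has already been registered:\n\n```python\nfrom transformers import pipeline\n\n# Both of these end up in the kwargs received by _sanitize_parameters:\npipe = pipeline(\"my-new-task\", maybe_arg=4)  # passed at initialization time\noutput = pipe(\"This is a test\", maybe_arg=4)  # passed at call time\n```\n\n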
Leaving the kwargs dicts empty when no extra parameter was passed keeps\nthe default arguments in the function definition, which is always more \"natural\".\n\nA classic example would be a `top_k` argument in the post processing of classification tasks.\n\n```python\n>>> pipe = pipeline(\"my-new-task\")\n>>> pipe(\"This is a test\")\n[{\"label\": \"1-star\", \"score\": 0.8}, {\"label\": \"2-star\", \"score\": 0.1}, {\"label\": \"3-star\", \"score\": 0.05},\n{\"label\": \"4-star\", \"score\": 0.025}, {\"label\": \"5-star\", \"score\": 0.025}]\n\n>>> pipe(\"This is a test\", top_k=2)\n[{\"label\": \"1-star\", \"score\": 0.8}, {\"label\": \"2-star\", \"score\": 0.1}]\n```\n\nIn order to achieve that, we'll update our `postprocess` method with a default parameter of `5` and edit\n`_sanitize_parameters` to allow this new parameter.\n\n```python\ndef postprocess(self, model_outputs, top_k=5):\n    best_class = model_outputs[\"logits\"].softmax(-1)\n    # Add logic to handle top_k\n    return best_class\n\n\ndef _sanitize_parameters(self, **kwargs):\n    preprocess_kwargs = {}\n    if \"maybe_arg\" in kwargs:\n        preprocess_kwargs[\"maybe_arg\"] = kwargs[\"maybe_arg\"]\n\n    postprocess_kwargs = {}\n    if \"top_k\" in kwargs:\n        postprocess_kwargs[\"top_k\"] = kwargs[\"top_k\"]\n    return preprocess_kwargs, {}, postprocess_kwargs\n```\n\nTry to keep the inputs/outputs very simple and ideally JSON-serializable, as it makes the pipeline usage very easy\nwithout requiring users to understand new kinds of objects. It's also relatively common to support many different types\nof arguments for ease of use (audio files can be filenames, URLs or raw bytes, for example).\n\n## Adding it to the list of supported tasks\n\nTo register your `new-task` in the list of supported tasks, you have to add it to the `PIPELINE_REGISTRY`:\n\n```python\nfrom transformers import AutoModelForSequenceClassification\nfrom transformers.pipelines import PIPELINE_REGISTRY\n\nPIPELINE_REGISTRY.register_pipeline(\n    \"new-task\",\n    pipeline_class=MyPipeline,\n    pt_model=AutoModelForSequenceClassification,\n)\n```\n\nYou can specify a default model if you want, in which case it should come with a specific revision (which can be the name of a branch or a commit hash, here we took `\"abcdef\"`) as well as the type:\n\n```python\nPIPELINE_REGISTRY.register_pipeline(\n    \"new-task\",\n    pipeline_class=MyPipeline,\n    pt_model=AutoModelForSequenceClassification,\n    default={\"pt\": (\"user/awesome_model\", \"abcdef\")},\n    type=\"text\",  # currently supported types: text, audio, image, multimodal\n)\n```\n\n## Share your pipeline on the Hub\n\nTo share your custom pipeline on the Hub, you just have to save the custom code of your `Pipeline` subclass in a\nPython file. 
For instance, let's say we want to use a custom pipeline for sentence pair classification like this:\n\n```py\nimport numpy as np\n\nfrom transformers import Pipeline\n\n\ndef softmax(outputs):\n    maxes = np.max(outputs, axis=-1, keepdims=True)\n    shifted_exp = np.exp(outputs - maxes)\n    return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)\n\n\nclass PairClassificationPipeline(Pipeline):\n    def _sanitize_parameters(self, **kwargs):\n        preprocess_kwargs = {}\n        if \"second_text\" in kwargs:\n            preprocess_kwargs[\"second_text\"] = kwargs[\"second_text\"]\n        return preprocess_kwargs, {}, {}\n\n    def preprocess(self, text, second_text=None):\n        return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)\n\n    def _forward(self, model_inputs):\n        return self.model(**model_inputs)\n\n    def postprocess(self, model_outputs):\n        logits = model_outputs.logits[0].numpy()\n        probabilities = softmax(logits)\n\n        best_class = np.argmax(probabilities)\n        label = self.model.config.id2label[best_class]\n        score = probabilities[best_class].item()\n        logits = logits.tolist()\n        return {\"label\": label, \"score\": score, \"logits\": logits}\n```\n\nThe implementation is framework agnostic, and will work for PyTorch and TensorFlow models. If we have saved this in\na file named `pair_classification.py`, we can then import it and register it like this:\n\n```py\nfrom pair_classification import PairClassificationPipeline\nfrom transformers.pipelines import PIPELINE_REGISTRY\nfrom transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification\n\nPIPELINE_REGISTRY.register_pipeline(\n    \"pair-classification\",\n    pipeline_class=PairClassificationPipeline,\n    pt_model=AutoModelForSequenceClassification,\n    tf_model=TFAutoModelForSequenceClassification,\n)\n```\n\nOnce this is done, we can use it with a pretrained model. For instance, `sgugger/finetuned-bert-mrpc` has been\nfine-tuned on the MRPC dataset, which classifies pairs of sentences as paraphrases or not.\n\n```py\nfrom transformers import pipeline\n\nclassifier = pipeline(\"pair-classification\", model=\"sgugger/finetuned-bert-mrpc\")\n```\n\nThen we can share it on the Hub by using the `save_pretrained` method in a `Repository`:\n\n```py\nfrom huggingface_hub import Repository\n\nrepo = Repository(\"test-dynamic-pipeline\", clone_from=\"{your_username}/test-dynamic-pipeline\")\nclassifier.save_pretrained(\"test-dynamic-pipeline\")\nrepo.push_to_hub()\n```\n\nThis will copy the file where you defined `PairClassificationPipeline` inside the folder `\"test-dynamic-pipeline\"`,\nalong with saving the model and tokenizer of the pipeline, before pushing everything into the repository\n`{your_username}/test-dynamic-pipeline`. After that, anyone can use it as long as they provide the option\n`trust_remote_code=True`:\n\n```py\nfrom transformers import pipeline\n\nclassifier = pipeline(model=\"{your_username}/test-dynamic-pipeline\", trust_remote_code=True)\n```\n\n## Add the pipeline to \ud83e\udd17 Transformers\n\nIf you want to contribute your pipeline to \ud83e\udd17 Transformers, you will need to add a new module in the `pipelines` submodule\nwith the code of your pipeline, then add it to the list of tasks defined in `pipelines/__init__.py`.\n\nThen you will need to add tests. 
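To give a rough idea before going into the details, a single small-model test could look something like the sketch\nbelow. The test class name and the tiny checkpoint are illustrative assumptions - the actual pipeline tests in the\nrepository are built on shared testing utilities:\n\n```python\nimport unittest\n\nfrom transformers import pipeline\n\n\nclass PairClassificationPipelineTests(unittest.TestCase):\n    def test_small_model_pt(self):\n        # Assumes the \"pair-classification\" pipeline from the previous section has been registered\n        # (for example by importing the module that calls PIPELINE_REGISTRY.register_pipeline).\n        # A tiny random model keeps the test fast; the predictions don't need to make sense.\n        classifier = pipeline(\n            \"pair-classification\", model=\"hf-internal-testing/tiny-random-bert\", framework=\"pt\"\n        )\n        outputs = classifier(\"I like you\", second_text=\"I love you\")\n        # Only check the structure of the output, not the actual values.\n        self.assertEqual(set(outputs.keys()), {\"label\", \"score\", \"logits\"})\n```\n\n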
Create a new file `tests/test_pipelines_MY_PIPELINE.py` with examples modeled on the other pipeline tests.\n\nThe `run_pipeline_test` function will be very generic and run on small random models for every possible\narchitecture as defined by `model_mapping` and `tf_model_mapping`.\n\nThis is very important to test future compatibility, meaning that if someone adds a new model for\n`XXXForQuestionAnswering`, then the pipeline test will attempt to run on it. Because the models are random, it's\nimpossible to check for actual values; that's why there is a helper `ANY` that will simply attempt to match the\nTYPE of the pipeline output.\n\nYou also *need* to implement 2 (ideally 4) tests.\n\n- `test_small_model_pt`: Define 1 small model for this pipeline (it doesn't matter if the results don't make sense)\n  and test the pipeline outputs. The results should be the same as `test_small_model_tf`.\n- `test_small_model_tf`: Define 1 small model for this pipeline (it doesn't matter if the results don't make sense)\n  and test the pipeline outputs. The results should be the same as `test_small_model_pt`.\n- `test_large_model_pt` (`optional`): Tests the pipeline on a real model where the results are supposed to\n  make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make\n  sure there is no drift in future releases.\n- `test_large_model_tf` (`optional`): Tests the pipeline on a real model where the results are supposed to\n  make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make\n  sure there is no drift in future releases.\n"}