The objective of this benchmark is to evaluate the performance of language models in different scenarios. It is part of the AI/RUN TM Engineering Benchmark. See the AI/RUN TM Engineering Benchmark repo for the full picture of what the benchmark is and which repositories are involved.
We assess the models using various scenarios such as:
- Code transformation between different technologies
- Code generation
- Documentation generation
- Large-context instruction following
These scenarios allow us to comprehensively evaluate the capabilities and limitations of language models in handling diverse programming tasks and developer interactions.
A dataset for the test scenarios was created from the codebases of the following open-source repositories:
- Create a Python virtual environment:

  ```bash
  python -m venv .venv
  ```

- Activate the virtual environment using the respective Linux or Windows command:

  ```bash
  source .venv/bin/activate
  ```

  or

  ```bash
  .venv\Scripts\activate
  ```

- Install the necessary dependencies:

  ```bash
  pip install -r ./requirements.txt
  ```

- (Optional) Connect your Python virtual environment to your IDE.
Before running the scripts, create a `.env` file in the root directory of the project using `.env.example` as a template. Fill in all the necessary environment variables with values specific to your environment.

```bash
cp .env.example .env
```
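If you want to verify that your variables are picked up before running anything, here is a minimal sketch, assuming `python-dotenv` is available in your environment; `EXAMPLE_VARIABLE` is a placeholder, not an actual variable from `.env.example`:

```python
# Sanity check that values from .env are visible to Python.
# Assumes python-dotenv is installed; the variable name below is a
# placeholder -- use the names defined in .env.example.
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into os.environ

value = os.getenv("EXAMPLE_VARIABLE")
print("EXAMPLE_VARIABLE is set" if value else "EXAMPLE_VARIABLE is missing")
```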
- Add repositories to the Dataset folder. Before adding them, create a subdirectory named after the language, e.g. `JS`.
- In the Config folder, create a JSON file named after your language (e.g. `JS`) for each LLM you want to launch.
- In the Scenarios directory, add templates to Task_Templates inside the folder of the LLM you want to launch, under a subdirectory for your language.
- In `Utils/constants.py`, add your repositories' `complexity_size` info to the mapping (a hedged example is sketched after this list).
- Enrich the templates with the repositories' code (files will be created at `/Scenarios/Compiled_Tasks/{model}/{lang}`):
  - Open `run_tasks.ipynb` => 1st cell.
  - Edit `model` and `lang`, then run the cell (see the second sketch after this list).
- Run the experiment:
  - Edit the data for the experiment and run the cell.
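The exact shape of the mapping in `Utils/constants.py` depends on the repository, so the snippet below is only a hedged sketch: the constant name, key, and value names are hypothetical, and you should mirror whatever structure the existing mapping in that file already uses.

```python
# Utils/constants.py (hypothetical sketch -- the constant name and field
# names are illustrative; follow the existing mapping's structure)
REPO_COMPLEXITY_SIZE = {
    "my-js-repo": {            # repository you added under Dataset/JS
        "complexity": "medium",
        "size": "small",
    },
}
```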
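Similarly, the first cell of `run_tasks.ipynb` is where `model` and `lang` are set. A hedged sketch of that edit, with placeholder values, assuming the cell only defines these two variables:

```python
# First cell of run_tasks.ipynb (sketch -- the actual cell may define more
# settings; the values below are placeholders)
model = "my-llm"  # must match an LLM you configured in the Config folder
lang = "JS"       # must match the language subdirectory you created
```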
We appreciate all contributions to improve the AI/RUN TM Engineering Benchmark. Please see our Contribution Guidelines for more information on how to get involved.
If you have suggestions for new benchmark scenarios or improvements to existing ones, please open an issue or submit a pull request.
This project is licensed under the Apache License 2.0.
EPAM and EPAM AI/RUN TM are trademarks of EPAM Systems, Inc.