From 0faf5686e9adcd1f4c063b16eda55d1f6bb5a076 Mon Sep 17 00:00:00 2001
From: zoya-hammad
Date: Thu, 13 Mar 2025 06:47:54 +0500
Subject: [PATCH] Updated bigcode-evaluation-harness/leaderboard/README.md

Fixed spelling error
---
 leaderboard/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/leaderboard/README.md b/leaderboard/README.md
index f5f1266c7..85178c2f2 100644
--- a/leaderboard/README.md
+++ b/leaderboard/README.md
@@ -14,7 +14,7 @@ The LeaderBoard is a demo for evaluating and comparing the performance of langua
 The LeaderBoard is open for submissions of results produced by the community. If you have a model that you want to submit results for, please follow the instructions below.
 
 ## Running the evaluation
-We report the passs@1 for [HumanEval](https://huggingface.co/datasets/openai_humaneval) Python benchamrk and some languages from the [MultiPL-E](https://huggingface.co/datasets/nuprl/MultiPL-E) benchmark. We use the same template and parameters for all models.
+We report the passs@1 for [HumanEval](https://huggingface.co/datasets/openai_humaneval) Python benchmark and some languages from the [MultiPL-E](https://huggingface.co/datasets/nuprl/MultiPL-E) benchmark. We use the same template and parameters for all models.
 
 ### 1-Setup
 Follow the setup instructions in the evaluation harness [README](https://github.com/bigcode-project/bigcode-evaluation-harness/tree/main#setup).