


HumanEval: A Benchmark for Evaluating LLM Code Generation Capabilities
Mar 02, 2025
HumanEval: Evaluating Code Generation with Pass@k
This tutorial explores HumanEval, an OpenAI benchmark for evaluating large language model (LLM) code generation capabilities, with a focus on the pass@k metric. We'll use the Hugging Face ecosystem to evaluate the codeparrot-small model on the benchmark's 164 Python problems. This gives a practical, functional-correctness assessment, in contrast to traditional text-similarity metrics.
Understanding Pass@k
HumanEval employs a functional-correctness approach, measuring the probability that at least one of the top k generated code samples solves a problem correctly. This is more meaningful than simple text matching and mirrors how developers actually test code.
The pass@k formula is:

pass@k = 1 - C(n - c, k) / C(n, k)

Where:
- n: total number of generated samples.
- c: number of correct samples.
- k: number of top samples considered.

The formula computes the probability that all k samples drawn from the n generations are incorrect, then subtracts that from 1 to get the probability that at least one is correct. Higher pass@k scores indicate better code generation performance. Leaderboards often report pass@10 and pass@100.
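As a quick illustration, here is a minimal sketch of this per-problem estimator (the function name is my own; the Hugging Face code_eval metric used below computes this for you):

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # 1 - C(n - c, k) / C(n, k): probability that at least one of k
    # samples drawn from n generations is correct, given c correct samples.
    if n - c < k:
        return 1.0  # too few incorrect samples to fill all k slots
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=5, c=2, k=1))  # 0.4
print(pass_at_k(n=5, c=2, k=5))  # 1.0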
HumanEval Evaluation with Hugging Face
This section details the evaluation process using Hugging Face's evaluate library. We'll use the smaller codeparrot-small model for faster evaluation.
1. Setup:
Install the necessary library (the tutorial also assumes transformers, datasets, and torch are already installed):

pip install evaluate
Set the environment variables. HF_ALLOW_CODE_EVAL must be set because the code_eval metric executes model-generated code:

import os

os.environ["HF_ALLOW_CODE_EVAL"] = "1"
os.environ["TOKENIZERS_PARALLELISM"] = "false"
2. Loading Dataset and Metric:
Load the openai_humaneval dataset and the code_eval metric:

from datasets import load_dataset
from evaluate import load

human_eval = load_dataset("openai_humaneval")['test']
code_eval_metric = load("code_eval")
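As a quick sanity check (my own addition, not required for the evaluation), each record in the dataset holds the fields the rest of the tutorial relies on:

# Peek at the first problem.
example = human_eval[0]
print(example["task_id"])      # e.g. "HumanEval/0"
print(example["prompt"])       # function signature + docstring fed to the model
print(example["entry_point"])  # name of the function the unit tests call
print(example["test"])         # unit tests executed by code_eval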
3. Loading Model and Tokenizer:
Load the codeparrot/codeparrot-small model and tokenizer:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "codeparrot/codeparrot-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Place the model on a GPU if one is available; the generation step below
# sends its inputs to the same device.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()
4. Tokenizer Adjustments:
Ensure the tokenizer has pad_token_id and eos_token_id set, resizing the model embeddings if new special tokens were added:

if tokenizer.pad_token_id is None:
    tokenizer.pad_token_id = 0
if tokenizer.eos_token_id is None:
    tokenizer.eos_token_id = 2
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': '<pad>'})
if tokenizer.eos_token is None:
    tokenizer.add_special_tokens({'eos_token': '</s>'})
if len(tokenizer) > model.config.vocab_size:
    model.resize_token_embeddings(len(tokenizer))
5. Code Generation:
Generate 5 code samples per problem (164 problems total):
from tqdm import tqdm

num_samples_per_problem = 5
test_cases = []
candidates = []

for problem in tqdm(human_eval, desc="Problems", unit="problem"):
    prompt = problem['prompt']
    test_code = problem['test']
    test_cases.append(test_code)

    problem_candidates = []
    for _ in range(num_samples_per_problem):
        inputs = tokenizer(prompt, return_tensors="pt").to(device)
        with torch.no_grad():
            outputs = model.generate(
                input_ids=inputs['input_ids'],
                attention_mask=inputs['attention_mask'],
                max_length=512,
                do_sample=True,
                temperature=0.7,
                top_p=0.95,
                num_return_sequences=1,
                pad_token_id=tokenizer.pad_token_id,
                eos_token_id=tokenizer.eos_token_id,
            )
        generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
        # Strip the echoed prompt to isolate the completion.
        completion = generated_code[len(prompt):]
        # code_eval executes each candidate together with the unit tests, so the
        # candidate must be a complete program: prompt plus generated body.
        problem_candidates.append(prompt + completion)
    candidates.append(problem_candidates)
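One refinement worth knowing about (my own addition, not part of the original walkthrough): small models often keep generating past the end of the target function, and the trailing text can break an otherwise correct solution. A common heuristic is to truncate each completion at typical stop sequences before scoring; a minimal sketch with hand-picked stop strings:

def truncate_completion(completion: str) -> str:
    # Cut at markers that usually signal the model has moved past the
    # target function body. These stop strings are a heuristic, not part
    # of the official HumanEval protocol.
    stop_strings = ["\ndef ", "\nclass ", "\nif __name__", "\nprint("]
    cut = len(completion)
    for stop in stop_strings:
        idx = completion.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut]

Inside the loop above you would then append prompt + truncate_completion(completion) instead of the untruncated completion.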
6. Computing Pass@k:
Compute pass@1 and pass@5 by passing the collected test cases and candidates to the metric:

pass_at_k, results = code_eval_metric.compute(
    references=test_cases,
    predictions=candidates,
    k=[1, 5],
)

print(f"Pass@1: {pass_at_k['pass@1']:.4f}")
print(f"Pass@5: {pass_at_k['pass@5']:.4f}")
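Beyond the aggregate scores, the metric also returns a per-problem results object. Its exact layout can vary between versions of evaluate, but in recent versions it maps each problem index to (sample_index, detail) pairs whose detail['passed'] flag marks success; a rough sketch of how you might inspect it:

# Count how many generated samples passed for each problem.
for task_idx, sample_results in sorted(results.items()):
    n_passed = sum(1 for _, detail in sample_results if detail["passed"])
    print(f"Problem {task_idx}: {n_passed}/{len(sample_results)} samples passed")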
The output shows the pass@1 and pass@5 scores, indicating the model's performance. Remember that results may vary due to the stochastic nature of code generation. Comparing these results to those of more powerful models (such as GPT-4) provides context for the codeparrot-small model's capabilities. Further analysis might involve exploring different sampling hyperparameters or using more sophisticated code generation techniques.
