LightEval Configuration Builder
Last updated: 2026-03-13
This page explains the LightEvalConfigService, its field mapping rules, and how generated
configuration is reused by backend execution and export flows. The service lives in
`backend/src/eval_752/services/lighteval_config.py`.
Main Use Cases
- Celery runner: generate a LightEval `config.yaml` from the selected provider, model, and run config before execution
- Export and replay: package the generated configuration inside `.eval752.zip`
- CLI and CI: allow developers or CI pipelines to reuse the exact generated config with the LightEval CLI or Python APIs
Data Sources
The builder draws its inputs from the provider record, the selected model, and the run configuration, which together supply the provider, model, run, and dataset identity described in the mapping rules below.
Example config.yaml
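An illustrative shape for the generated file is shown below. All values are placeholders, and the exact field set follows the mapping rules in the next section; the provider, model, URL, and environment-variable names here are assumptions, not output from the real service.

```yaml
metadata:
  provider: openai
  model: gpt-4o
  run_id: run-1234
  dataset: example-dataset
model_parameters:
  model_name: openai/gpt-4o        # always carries the LiteLLM provider prefix
  provider_type: openai
  base_url: https://api.openai.com/v1
  api_key_env: EVAL752_API_KEY     # used when include_api_key=False
  concurrency: 4
  timeout: 120
  max_retries: 3
generation_parameters:
  temperature: 0.2
  top_p: 0.95
  max_new_tokens: 1024
execution:
  endpoint: litellm
  use_chat_template: true
```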
Mapping Rules
metadata
- describes provider, model, run, and dataset identity
- `model_name` always includes the LiteLLM provider prefix
model_parameters
- required fields include `model_name`, `provider_type`, and `base_url`
- by default the API key is written directly into `api_key`
- if `include_api_key=False`, the config uses `api_key_env` instead
- concurrency, timeout, and retry values are derived from provider metadata and run config
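The `api_key` versus `api_key_env` rule above can be sketched as follows. This is a hypothetical illustration; the function name, the dict-based provider record, and the fallback environment-variable name are assumptions, not the service's real API.

```python
def build_model_parameters(provider: dict, include_api_key: bool = True) -> dict:
    """Sketch of the model_parameters mapping described above (names illustrative)."""
    params = {
        "model_name": provider["model_name"],        # always carries the LiteLLM prefix
        "provider_type": provider["provider_type"],
        "base_url": provider["base_url"],
    }
    if include_api_key:
        # Default: the decrypted key is written directly into the config.
        params["api_key"] = provider["api_key"]
    else:
        # Export-safe mode: reference an environment variable instead of the key itself.
        params["api_key_env"] = provider.get("api_key_env", "EVAL752_API_KEY")
    return params
```

The export path would call this with `include_api_key=False` so that `.eval752.zip` archives never embed a secret.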
generation_parameters
Supported fields include `temperature`, `top_p`, `top_k`, `max_new_tokens`, `stop` and `stop_sequences`, `presence_penalty`, `frequency_penalty`, `repetition_penalty`, and `seed`.
The service prefers values from nested `generation` or `sampling` sections in the run config, then falls back to top-level keys with the same names.
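The nested-first lookup order can be sketched like this. The function and section names mirror the description above, but the implementation itself is an assumption, not the service's actual code.

```python
# Supported generation fields, per the list above.
SUPPORTED = ("temperature", "top_p", "top_k", "max_new_tokens", "stop",
             "stop_sequences", "presence_penalty", "frequency_penalty",
             "repetition_penalty", "seed")

def extract_generation_parameters(run_config: dict) -> dict:
    """Prefer nested generation/sampling sections, then fall back to top-level keys."""
    nested = {}
    for section in ("generation", "sampling"):
        nested.update(run_config.get(section) or {})
    out = {}
    for key in SUPPORTED:
        if key in nested:
            out[key] = nested[key]          # nested section wins
        elif key in run_config:
            out[key] = run_config[key]      # top-level fallback
    return out
```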
execution
- `endpoint` is fixed to `litellm`
- `use_chat_template` defaults to `true`
Python API
If PyYAML is unavailable, `to_yaml()` will fail. That is acceptable as long as the calling path deliberately falls back to JSON serialization instead.
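A minimal sketch of that fallback, assuming the config has already been built as a plain dict; the helper name `serialize_config` is hypothetical and not part of the real service:

```python
import json

def serialize_config(config: dict) -> str:
    """Prefer YAML output; fall back to JSON when PyYAML is not installed."""
    try:
        import yaml  # PyYAML is an optional dependency here
        return yaml.safe_dump(config, sort_keys=False)
    except ImportError:
        # This is the path where to_yaml() would fail; choose JSON deliberately.
        return json.dumps(config, indent=2)
```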
Integration Guidance
- Generate config before LightEval execution and keep it alongside the run artifacts when helpful.
- Include the generated config in `.eval752.zip` exports for offline replay.
- Inject decrypted provider keys at runtime and avoid leaving config files with secrets on disk longer than necessary.
- Keep the field mapping aligned with run execution and provider normalization logic.
