When you train or evaluate a robot policy, your dataset or environment hands you observations under one set of keys (e.g. observation.images.front, observation.images.eagle), while your policy was built to expect another (e.g. observation.images.image, observation.images.image2). The rename map is how you bridge that gap without changing the policy or the data source.
This guide explains why it exists, how to use it in training and evaluation, and when to use empty cameras so you can fine-tune multi-camera policies on datasets that have fewer views.
Policies have a fixed set of input feature names (often coming from a pretrained config). For example:
- XVLA-base expects three image keys: `observation.images.image`, `observation.images.image2`, `observation.images.image3`.
- pi0-fast-libero might expect `observation.images.base_0_rgb` and `observation.images.left_wrist_0_rgb`.
Your dataset or sim might use completely different names: observation.images.front, observation.images.eagle, observation.images.glove (e.g. svla_so100_sorting). Or your eval env (e.g. LIBERO) might return observation.images.image and observation.images.image2.
Rather than renaming columns in the dataset or editing the policy code, LeRobot lets you pass a rename map: a dictionary that says “when you see this key in the data, treat it as this key for the policy.” Renaming is applied in the preprocessing pipeline so the policy always receives the keys it expects.
The rename map is a dictionary:
- Keys = observation keys as produced by your dataset (training) or environment (evaluation).
- Values = the observation keys your policy expects.
Only keys listed in the map are renamed; everything else is left as-is. Under the hood, the RenameObservationsProcessorStep runs in the preprocessor and rewrites observation keys (and keeps normalization stats aligned) so the batch matches the policy’s input_features.
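The renaming itself boils down to a key-for-key dictionary rewrite. Here is a minimal sketch of that idea (illustrative only, not the actual `RenameObservationsProcessorStep` implementation):

```python
# Keys found in the rename map are rewritten; everything else passes
# through unchanged -- exactly the "only listed keys are renamed" rule.
def rename_observations(batch: dict, rename_map: dict) -> dict:
    return {rename_map.get(key, key): value for key, value in batch.items()}

batch = {
    "observation.images.front": "front_frame",
    "observation.images.eagle": "eagle_frame",
    "observation.state": "state_vec",
}
rename_map = {
    "observation.images.front": "observation.images.image",
    "observation.images.eagle": "observation.images.image2",
}
renamed = rename_observations(batch, rename_map)
# "observation.state" is untouched; the two image keys now match the policy.
```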
You can use the same idea for training (dataset → policy) and evaluation (env → policy).
You pass the mapping on the command line so dataset/env keys are renamed to what the policy expects. No need to change the policy repo or the data.
Suppose you fine-tune lerobot/xvla-base on a dataset whose images are stored under observation.images.front, observation.images.eagle, and observation.images.glove. XVLA expects observation.images.image, observation.images.image2, and observation.images.image3. Map the dataset keys to the policy keys:
```bash
lerobot-train \
  --dataset.repo_id=YOUR_DATASET \
  --output_dir=./outputs/xvla_training \
  --job_name=xvla_training \
  --policy.path="lerobot/xvla-base" \
  --policy.repo_id="HF_USER/xvla-your-robot" \
  --policy.dtype=bfloat16 \
  --policy.action_mode=auto \
  --steps=20000 \
  --policy.device=cuda \
  --policy.freeze_vision_encoder=false \
  --policy.freeze_language_encoder=false \
  --policy.train_policy_transformer=true \
  --policy.train_soft_prompts=true \
  --rename_map='{"observation.images.front": "observation.images.image", "observation.images.eagle": "observation.images.image2", "observation.images.glove": "observation.images.image3"}'
```

Order of entries in the map doesn't matter; each dataset key is renamed to the corresponding policy key.
You trained (or downloaded) a policy that expects observation.images.base_0_rgb and observation.images.left_wrist_0_rgb (e.g. pi0fast-libero), but your evaluation environment (e.g. LIBERO) returns observation.images.image and observation.images.image2. Tell the eval script how to rename env keys to policy keys:
```bash
lerobot-eval \
  --policy.path=lerobot/pi0fast-libero \
  --env.type=libero \
  ... \
  --rename_map='{"observation.images.image": "observation.images.base_0_rgb", "observation.images.image2": "observation.images.left_wrist_0_rgb"}'
```

So: key = what the env gives, value = what the policy expects. Same convention as in training.
If you prefer not to pass a rename map every time, you can edit the policy’s config.json so that its expected observation keys match your dataset or environment. For example, change the policy’s visual input keys to observation.images.front, observation.images.eagle, observation.images.glove to match your dataset, or to observation.images.image / observation.images.image2 to match LIBERO.
- Training: If the dataset’s camera keys match the (modified) policy config, you don’t need a rename map.
- Evaluation: If the env’s keys match the (modified) policy config, you don’t need a rename map for eval either.
The tradeoff: you’re changing the policy repo or your local checkpoint. That’s fine if you’re only ever using that one dataset or env; a rename map keeps the same policy usable across multiple data sources without touching the config.
Some policies (e.g. XVLA) are built for a fixed number of image inputs (e.g. three). Your dataset might only have two cameras. You still want to fine-tune without changing the model architecture.
LeRobot supports this with empty cameras: the config declares extra “slots” that the policy expects, but the dataset (or env) does not provide. Those slots are filled with placeholder keys and typically zero or masked inputs so the policy can run with fewer real views.
- In the policy config (e.g. the xvla-base `config.json`), `empty_cameras` is the number of these extra slots (default `0`).
- For each slot, the config adds an observation key of the form `observation.images.empty_camera_0`, `observation.images.empty_camera_1`, …
Example: XVLA-base has three visual inputs and `empty_cameras=0`. Your dataset has only two images. Set `empty_cameras=1`. Then:
- The config gains a third visual key: `observation.images.empty_camera_0`.
- You still use the rename map (or matching config keys) for the two real cameras.
- The third view is treated as "empty" (no corresponding dataset key); the policy ignores or masks it as needed.
So you fine-tune on two observations only, and the third visual input is effectively unused. You do not need to add a fake third image to your dataset.
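Putting the two mechanisms together, the observation the policy receives can be sketched like this. This is a hand-rolled illustration, not the LeRobot API: the helper name, the zero-filled placeholder, and the image shape are all assumptions made for the example.

```python
import numpy as np

# Two real cameras are renamed to the policy's key names, and the slot
# declared via empty_cameras=1 is filled with a zero placeholder image.
rename_map = {
    "observation.images.front": "observation.images.image",
    "observation.images.eagle": "observation.images.image2",
}

def build_policy_obs(raw_obs: dict, rename_map: dict, empty_cameras: int,
                     image_shape=(3, 224, 224)) -> dict:
    obs = {rename_map.get(k, k): v for k, v in raw_obs.items()}
    for i in range(empty_cameras):
        # Placeholder for a camera slot the dataset does not provide.
        obs[f"observation.images.empty_camera_{i}"] = np.zeros(image_shape, dtype=np.float32)
    return obs

raw = {
    "observation.images.front": np.ones((3, 224, 224), dtype=np.float32),
    "observation.images.eagle": np.ones((3, 224, 224), dtype=np.float32),
}
obs = build_policy_obs(raw, rename_map, empty_cameras=1)
# obs now holds image, image2, and a zero-filled empty_camera_0.
```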
- Training (`lerobot_train.py`): `rename_map` is passed into `make_policy(..., rename_map=cfg.rename_map)` and into the preprocessor as `rename_observations_processor: {"rename_map": cfg.rename_map}`. Batches from the dataset are renamed before being fed to the policy.
- Evaluation (`lerobot_eval.py`): Same idea: `rename_map` is passed to `make_policy` and to the preprocessor so env observations are renamed before the policy sees them.
- Processor (`rename_processor.py`): `RenameObservationsProcessorStep` does the actual key renaming and updates feature metadata so normalization stats stay consistent with the renamed keys.
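The "keeps normalization stats aligned" part matters: stats are keyed by feature name, so they must be rewritten with the same map as the batch. A toy sketch of that behavior (a hypothetical class modeled on the description above, not the real `RenameObservationsProcessorStep`):

```python
# Hypothetical rename step: applies the same key rewrite to batches and
# to the per-feature normalization stats, so mean/std lookups still work
# after renaming.
class RenameStep:
    def __init__(self, rename_map: dict):
        self.rename_map = rename_map

    def __call__(self, batch: dict) -> dict:
        return {self.rename_map.get(k, k): v for k, v in batch.items()}

    def rename_stats(self, stats: dict) -> dict:
        # Stats keyed by the old feature name would silently stop matching
        # the renamed batch, so they get the identical rewrite.
        return {self.rename_map.get(k, k): v for k, v in stats.items()}
```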
If you see a feature mismatch error ("Missing features" / "Extra features"), the error message suggests using `--rename_map` with a mapping from your data's keys to the policy's expected keys.
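You can reproduce the gist of that check yourself with two set differences (illustrative; the key names below are examples, and the real error comes from LeRobot's feature validation):

```python
# Compare what the policy expects against what the data provides.
policy_features = {"observation.images.image", "observation.images.image2", "observation.state"}
dataset_features = {"observation.images.front", "observation.images.eagle", "observation.state"}

missing = policy_features - dataset_features   # expected by the policy, absent from the data
extra = dataset_features - policy_features     # present in the data, unknown to the policy
print("Missing features:", sorted(missing))
print("Extra features:", sorted(extra))
# Each "extra" dataset key typically needs a rename_map entry mapping it
# to one of the "missing" policy keys.
```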
| Goal | What to do |
|---|---|
| Dataset keys ≠ policy keys (training) | --rename_map='{"dataset_key": "policy_key", ...}' |
| Env keys ≠ policy keys (eval) | --rename_map='{"env_key": "policy_key", ...}' |
| Fewer cameras than policy expects | Set empty_cameras in the policy config (e.g. 1 when you have 2 real cameras and the policy expects 3). |
| Avoid passing a rename map | Edit the policy’s config.json so its observation keys match your dataset or env. |
The rename map keeps your pipeline flexible: one policy, many data sources, no code changes—just a small dictionary on the command line or in your config.

