Jiangran Lyu1,2,
Ziming Li1,2,
Xuesong Shi2,
Chaoyi Xu2,
Yizhou Wang1†,
He Wang1,2†
1Peking University 2Galbot
† Corresponding Authors
Project website: https://pku-epic.github.io/DyWA/
- 1 Workspace Setup
- 1.1 Docker setup
- 1.2 Package Setup
- 1.3 Assets Setup
- 2 Policy Training
- 3 Policy Evaluation
Note: We highly recommend using the Docker setup.
Refer to the instructions in docker. The Dockerfile requires a PyTorch3D wheel built for CUDA 11.3; to obtain it, do one of the following:
- Clone the PyTorch3D repository and build the wheel yourself:
git clone --branch v0.7.2 https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
python setup.py bdist_wheel
- Download a pre-built pytorch3d==0.7.2 wheel for CUDA 11.3 from here
Either way, place the wheel under ./docker/.
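If you built the wheel yourself, it is written to the dist/ subdirectory of the pytorch3d checkout. A minimal sketch of copying it into place (the exact wheel filename depends on your Python version, and /path/to/DyWA stands for wherever you cloned this repository):
# From inside the pytorch3d checkout, after `python setup.py bdist_wheel`:
ls dist/                                  # e.g. pytorch3d-0.7.2-*.whl
cp dist/pytorch3d-0.7.2-*.whl /path/to/DyWA/docker/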
First, download Isaac Gym from here and extract it into the ${IG_PATH} host directory
that you configured during the Docker setup. By default, we assume this is /home/DyWA/isaacgym, which maps to the /opt/isaacgym directory inside the container.
In other words, the resulting directory structure should look like:
$ tree /opt/isaacgym -d -L 1
/opt/isaacgym
|-- assets
|-- docker
|-- docs
|-- licenses
`-- python
(If the tree command is not found, you can install it via sudo apt-get install tree.)
Afterward, follow the instructions on the referenced page to install the Isaac Gym package.
Alternatively, assuming that Isaac Gym has been downloaded and extracted to the correct directory (/opt/isaacgym), the setup script described in the following section handles the Isaac Gym installation automatically with our default settings.
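If you prefer to install it by hand instead, a minimal sketch of the standard Isaac Gym installation from inside the container (assuming the default /opt/isaacgym mount):
# Install the Isaac Gym Python bindings in editable mode.
cd /opt/isaacgym/python
pip install -e .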
Then, inside the docker container (assuming ${PWD} is the repo root), run the setup script:
bash setup.sh
To test whether the installation succeeded, you can run:
python3 -c 'import isaacgym; print("OK")'
python3 -c 'import dywa; print("OK")'
We release a pre-processed version of the object mesh assets from DexGraspNet here.
After downloading the assets, extract them to /path/to/data/DGN on the host machine, so that /path/to/data matches the directory
configured in docker/run.sh, i.e.
mkdir -p /path/to/data/DGN
tar -xzf DGN.tar.gz -C /path/to/data/DGN
The resulting directory structure inside the docker container should look as follows:
$ tree /input/DGN --filelimit 16 -d
/input/DGN
|-- coacd
`-- meta-v8
|-- cloud
|-- cloud-2048
|-- code
|-- hull
|-- meta
|-- new_pose
|-- normal
|-- normal-2048
|-- pose
|-- unique_dgn_poses
`-- urdf
Navigate to the policy training directory in dywa/exp/train and follow the instructions in the README.
To run our pretrained policy in a simulation, download the pretrained weights from here, and place the test_set.json file under /input/DGN/.
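Since /path/to/data on the host maps to /input inside the container (see the Assets Setup section above), one way to place the file, sketched on the host side with an assumed download location:
# On the host: put test_set.json where the container sees it as /input/DGN/test_set.json.
cp ~/Downloads/test_set.json /path/to/data/DGN/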
Then run the following command to evaluate on objects unseen during training (make sure to replace the load_student parameter in dywa/exp/scripts/eval_student_unseen_obj.sh with the path to the pretrained weights you just downloaded, as sketched below):
Unknown State, Unseen Object Setting:
bash dywa/exp/scripts/eval_student_unseen_obj.sh
Evaluation results will be saved to /home/user/DyWA/output/test_rma/.
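For reference, a hypothetical load_student entry inside eval_student_unseen_obj.sh might look like the line below; the override syntax and checkpoint path are assumptions, so match whatever the script actually contains:
# Hypothetical example -- point load_student at your downloaded weights:
++load_student=/path/to/downloaded/weights/student.ckpt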
To visualize the behavior of the policy, note that running too many environments in parallel may cause significant lag on your system.
Instead, reduce the number of parallel environments by changing ++env.num_env=${NUM_ENV}, turn on the GUI with ++env.use_viewer=1 ++draw_debug_lines=1, and don't forget to export the ${DISPLAY} variable to match the monitor settings of the host environment; see the example below.
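As a concrete sketch, assuming the evaluation script forwards these overrides to the underlying entry point, your host display is :0, and 16 environments are enough for inspection:
export DISPLAY=:0   # match your host monitor settings
bash dywa/exp/scripts/eval_student_unseen_obj.sh ++env.num_env=16 ++env.use_viewer=1 ++draw_debug_lines=1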
For detailed setup or troubleshooting, please refer to the README.
If you find this work useful, please cite:
@article{lyu2025dywa,
title={Dywa: Dynamics-adaptive world action model for generalizable non-prehensile manipulation},
author={Lyu, Jiangran and Li, Ziming and Shi, Xuesong and Xu, Chaoyi and Wang, Yizhou and Wang, He},
journal={arXiv preprint arXiv:2503.16806},
year={2025}
}
Thanks goes to these wonderful people (emoji key):
SteveOUO 💻 📖
Jiangran Lyu 💻 📖 🤔
ZimingLi1204 💻
This work is built upon and further extended from the prior work CORN: Contact-based Object Representation for Nonprehensile Manipulation of General Unseen Objects. We sincerely thank the authors of CORN for their valuable contribution and for making their work publicly available.
This work and the dataset are licensed under CC BY-NC 4.0.


