The official repository for *EquiGraspFlow: SE(3)-Equivariant 6-DoF Grasp Pose Generative Flows* (Byeongdo Lim, Jongmin Kim, Jihwan Kim, Yonghyeon Lee, and Frank C. Park, CoRL 2024).
You can create a Conda environment using the following command.
You can customize the environment name by modifying the `name` field in `environment.yml`.
```
conda env create -f environment.yml
```
This will automatically install the required packages, including:

- python==3.10
- omegaconf
- tensorboardX
- pyyaml
- numpy==1.26
- torch
- scipy
- tqdm
- h5py
- open3d==0.16.0
- roma
- pandas
- openpyxl
To activate the environment, use:
```
conda activate equigraspflow
```
We use the Laptop, Mug, Bowl, and Pencil categories of the ACRONYM dataset [1].
The dataset can be downloaded from this link.
Create a `dataset` directory and place the data there, or customize the dataset path by modifying `DATASET_DIR` in `acronym.py` and `utils.py` within the `loaders` directory.
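Each ACRONYM object ships as an HDF5 file of simulated grasps. If you want to inspect the raw data yourself, here is a minimal sketch using the standard keys from the ACRONYM release (the filename is hypothetical; the repository's loaders handle this parsing for you):

```python
import h5py
import numpy as np

# Open one ACRONYM grasp file (filename is hypothetical).
with h5py.File('dataset/Mug_0.h5', 'r') as f:
    transforms = np.asarray(f['grasps/transforms'])  # (N, 4, 4) grasp poses
    labels = np.asarray(f['grasps/qualities/flex/object_in_gripper'])  # (N,) success flags

print(transforms.shape, labels.mean())
```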
The training script is `train.py`, which takes the following arguments:
- `--config`: Path to the training configuration YAML file.
- `--device`: GPU number to use (default: `0`). Use `cpu` to run on CPU.
- `--logdir`: Directory where the results will be saved (default: `train_results`).
- `--run`: Name for the training session (default: `{date}-{time}`).
To train EquiGraspFlow using the full point cloud, run:
```
python train.py --config configs/equigraspflow_full.yml
```
Alternatively, to train EquiGraspFlow with the partial point cloud, use:
```
python train.py --config configs/equigraspflow_partial.yml
```
Note: Training with the partial point cloud cannot be done in headless mode; a display is required.
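The flags can be combined as needed; for instance, to train on GPU 1 under a custom session name (the device number and run name here are illustrative):

```
python train.py --config configs/equigraspflow_full.yml --device 1 --run full_pcd_run1
```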
You can change the data augmentation strategy for each data split by modifying the `augmentation` field in the training configuration YAML file, as illustrated below.
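As a purely hypothetical illustration of such a field (the exact nesting and accepted values are defined by the shipped configs, so treat `configs/equigraspflow_full.yml` as the authoritative schema):

```yaml
data:
  train:
    augmentation: random_rotation  # hypothetical key/value for illustration
  valid:
    augmentation: null             # e.g. disable augmentation for validation
```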
We log the results of the training process using TensorBoard. You can view the TensorBoard results by running:
```
tensorboard --logdir {path} --host {IP_address}
```
Replace `{path}` with the path to your training results and `{IP_address}` with your IP address.
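For example, to serve the logs of a full-point-cloud run on all network interfaces (the path and address here are illustrative):

```
tensorboard --logdir train_results/equigraspflow_full --host 0.0.0.0
```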
The pretrained models can be downloaded from this link.
The test scripts, `test_full.py` and `test_partial.py`, calculate the Earth Mover's Distance [2] between the generated and ground-truth grasp poses and store visualizations of the generated grasp poses.
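For intuition, this metric matches each generated pose to a ground-truth pose so that the total transport cost is minimized. Below is a minimal sketch of such a computation, assuming equal-sized sets of 4x4 pose matrices and a hypothetical SE(3) distance that sums translation offset and rotation angle; the repository's actual implementation may define or weight these differently.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pose_distance(T1, T2, rot_weight=1.0):
    """Hypothetical SE(3) distance: translation offset plus weighted rotation angle."""
    trans = np.linalg.norm(T1[:3, 3] - T2[:3, 3])
    R = T1[:3, :3].T @ T2[:3, :3]
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return trans + rot_weight * angle

def emd(poses_gen, poses_gt):
    """EMD between two equal-sized sets of (4, 4) grasp poses via optimal matching."""
    cost = np.array([[pose_distance(a, b) for b in poses_gt] for a in poses_gen])
    rows, cols = linear_sum_assignment(cost)  # minimum-cost one-to-one assignment
    return cost[rows, cols].mean()
```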
Both scripts take the following arguments:
- `--train_result_path`: Path to the directory containing training results.
- `--checkpoint`: Model checkpoint to use.
- `--device`: GPU number to use (default: `0`). Use `cpu` to run on CPU.
- `--logdir`: Directory where the results will be saved (default: `test_results`).
- `--run`: Name for the experiment (default: `{date}-{time}`).
For example, to test EquiGraspFlow using the full point cloud with the `model_best_val_loss.pkl` checkpoint in the `pretrained_model/equigraspflow_full` directory, use:
```
python test_full.py --train_result_path pretrained_model/equigraspflow_full --checkpoint model_best_val_loss.pkl
```
Alternatively, to test EquiGraspFlow using the partial point cloud with the `model_best_val_loss.pkl` checkpoint in the `pretrained_model/equigraspflow_partial` directory, use:
```
python test_partial.py --train_result_path pretrained_model/equigraspflow_partial --checkpoint model_best_val_loss.pkl
```
The visualizations of the generated grasp poses are stored in `visualizations.json` within the test results directory.
To display these visualizations, use the following code:
```python
import plotly.io as pio

# Load the stored Plotly figure and open it in a browser window.
with open('{path}/visualizations.json', 'r') as f:
    fig = pio.from_json(f.read())
fig.show(renderer='browser')
```
Replace `{path}` with your test results directory.
[1] C. Eppner, A. Mousavian, and D. Fox. ACRONYM: A Large-Scale Grasp Dataset Based on Simulation. ICRA 2021. [paper]
[2] A. Tanaka. Discriminator Optimal Transport. NeurIPS 2019. [paper]
If you found this repository useful in your research, please cite:
```bibtex
@inproceedings{lim2024equigraspflow,
  title={EquiGraspFlow: SE(3)-Equivariant 6-DoF Grasp Pose Generative Flows},
  author={Lim, Byeongdo and Kim, Jongmin and Kim, Jihwan and Lee, Yonghyeon and Park, Frank C},
  booktitle={8th Annual Conference on Robot Learning},
  year={2024}
}
```
