# RobotFingerPrint: Unified Gripper Coordinate Space for Multi-Gripper Grasp Synthesis
Authors: Ninad Khargonkar, Luis Felipe Casas, Balakrishnan Prabhakaran, Yu Xiang
Links: Paper (arXiv) | Video | Project website
A multi-embodiment, generalizable grasping method that works across grippers with different numbers of fingers.
## Setup

- Clone the repository along with its submodules:

  ```bash
  git clone https://github.com/IRVLUTD/robot-finger-print.git --recursive
  ```

- Create a conda python env via the `environment.yml`:

  ```bash
  conda env create -f environment.yml
  ```

Acknowledgements: the GenDexGrasp code repository -- the overall flow and evaluation setup are adapted from GenDexGrasp.
## Data

- Download and extract the GenDexGrasp dataset zip to a desired location (for example: `~/Datasets/GenDexGrasp`).
- Set a symbolic link to the GenDexGrasp dataset under `./dataset/GenDexGrasp/`:

  ```bash
  mkdir dataset
  ln -s ~/Datasets/GenDexGrasp ./dataset/GenDexGrasp
  ```
- Download the UGCS related data files from here.
  - This contains files like the generalized coordinates for different grippers, object point clouds + normals, and the coordinates for each grasp from the GenDexGrasp dataset.
  - Please check the Dataset README in the link for correctly placing the data files and for general information about what each file represents.
- Note: for the checkpoints, download and extract the `logs.zip` folder from the above link under the repository root: `robot-finger-print/logs/`. This has the model checkpoints, inference maps, and grasp optimization results folders (each zipped). Please extract the ones that you need to test.
  - The provided logs/checkpoints are under the `logs/gcs_gdx/` folder.
  - Grasp optimization results are under the `logs/graspopt_results/` folder.
    - You can use either the `penw60` or `penw100` folders.
## MANO Hand Setup

We have `mano_pybullet` added as a submodule, which you can install by following the steps below. This module is included since we used it to create a MANO hand URDF from the original MANO models. It also provides some utility functions to convert the MANO hand parameters.

- `cd mano_pybullet`
- `pip install -e .`
- Please go through its README and test the functionality using the `gui_control` tool.
  - You will need to set the `MANO_MODELS_DIR` env var to the path of the extracted MANO models dir.
  - Example, on the command line: `export MANO_MODELS_DIR=/home/ninad/Projects/MANO/MANO_Hand_Model/mano_v1_2/models`
  - OR in an ipython notebook: `%env MANO_MODELS_DIR=/home/ninad/Projects/MANO/MANO_Hand_Model/mano_v1_2/models`
## Troubleshooting

- Run `pip install --upgrade networkx` if urchin URDF loading gives an error.
- While setting up `mano_pybullet`, if you see an error like `ImportError: cannot import name 'bool' from 'numpy'`, try: `pip install git+https://github.com/mattloper/chumpy` (link to github issue).
- If you see an `ImportError` with `omegaconf` arising from lightning's tensorboard logger, try changing the installed version of `omegaconf`.
- For a trimesh error: try using version `4.4.1`.
## Additional Code

This repo includes self-contained source code for computing the maximal spheres for grippers and for testing grasps in isaacgym. Please check their individual folders for reference and setup:

- For the grasp simulation test based on GenDexGrasp, see: `grasp-test-isaacgym/`
- For computing maximal spheres for the grippers, see: `grasp-maximal-sphere/`
Sphere Grasping example:
## Code Structure

- The core functionality is implemented in `model/`, specifically `hand_model.py` and `hand_opt.py`.
  - `hand_model.py` creates a differentiable kinematics model for a gripper given its URDF.
    - You can use the `GcsHandModel` defined in it as a standalone component if needed (a usage sketch follows this block)!
  - `hand_opt.py` poses the grasp transfer as an optimization problem and provides wrappers for both logging and optimization.
    - The wrappers are for convenience; the core optimization loop is defined in `GcsGraspTransferOpt`.
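As a rough illustration of what a differentiable kinematics model buys you, here is a minimal sketch; the constructor argument and method name are assumptions for illustration only, not the verified `GcsHandModel` API (check `model/hand_model.py` for the actual interface):

```python
import torch

# Hypothetical sketch -- the constructor argument and method name below are
# assumptions; see model/hand_model.py for the actual GcsHandModel interface.
from model.hand_model import GcsHandModel

hand = GcsHandModel(urdf_path="grippers/fetch_gripper/fetch_gripper.urdf")  # illustrative path

q = torch.zeros(9 + 2)                            # (9+d) grasp vector; d=2 assumed for Fetch
q[3:9] = torch.tensor([1., 0., 0., 0., 1., 0.])   # identity orientation in 6d form
q.requires_grad_(True)

pts = hand.get_surface_points(q)   # hypothetical FK call: q -> gripper surface points
loss = pts.norm(dim=-1).mean()     # any differentiable objective over the points
loss.backward()                    # gradients flow all the way back to the grasp vector q
```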
- `utils/` includes some commonly used functions. Importantly, there are utilities that help with pose alignment between different grippers, plus some rotation conversions.
  - `utils/grasp_utils.py`: gripper pose alignment to a common space -- useful for transferring grasps. Note that the values for each gripper are tuned to the URDF models provided under the `grippers/` dir.
  - If your URDF is different from the ones provided, you may need to define a custom alignment function (see the sketch after this list): (1) the hand palm normal should be +Z, (2) the major axis of the palm should be +Y, (3) the hand origin should be on the palm surface.
- Gripper URDFs are under `grippers/`. Also included are files like:
  - `mgg_gripper_surface_pts.pk`: a pickled dict containing the pre-selected interior surface points for the gripper, along with their unified coordinates used for correspondence and transfer.
  - NOTE: the MANO hand URDFs were created using the `mano_pybullet` repository.
  - And some other files for legacy reasons...
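If you do need a custom alignment for your own URDF, here is a minimal sketch that builds a base transform satisfying the three conventions above; the helper name and its inputs are illustrative assumptions, not part of `utils/grasp_utils.py`:

```python
import numpy as np

def custom_palm_alignment(palm_normal, palm_major_axis, palm_center):
    """Hypothetical helper: build a 4x4 transform whose frame satisfies
    (1) palm normal -> +Z, (2) palm major axis -> +Y, (3) origin on the palm."""
    z = palm_normal / np.linalg.norm(palm_normal)          # convention (1): +Z
    y = palm_major_axis - np.dot(palm_major_axis, z) * z   # project out the z component
    y = y / np.linalg.norm(y)                              # convention (2): +Y
    x = np.cross(y, z)                                     # completes a right-handed frame
    T = np.eye(4)
    T[:3, :3] = np.stack([x, y, z], axis=1)                # columns are the frame axes
    T[:3, 3] = palm_center                                 # convention (3): origin on palm
    return T
```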
## Grasp Generation Pipeline

The overall grasp modeling consists of 3 stages, similar to GenDexGrasp, plus a simulation-based evaluation:

- Training a coordinate map prediction model, conditioned on an object's complete point cloud
- Inferring the coordinate maps on unseen / test-set objects
- Grasp optimization using the predicted maps -- generating grasps on objects
- Simulating the grasps (isaacgym test) and checking for stability
### Training

- Script: `gdx_train_gcs.py`
  - NOTE: we used the arguments `--n_epochs 16 --ann_temp 1.5 --ann_per_epochs 2`. Please run with `--help` for more details.
  - Optionally, for unseen gripper models: use the `--disable_[GripperName]` flag (example: `--disable_shadowhand`).

Example usage:

```bash
python gdx_train_gcs.py --n_epochs 16 --ann_temp 1.5 --ann_per_epochs 2
```
### Coordinate Map Inference

- Script: `gcs_gdx_inf_cvae.py`
  - Use the desired log dir generated by the training script with `--logdir`.
  - Use the desired checkpoint name with `--ckpt` (e.g. `best_val.pt`, or `last.ckpt`, or some specific epoch stored in the `[LOGDIR]/checkpoints/` folder).
  - Number of grasps per object used (for our experiments): `--num_per_unseen_object 64`
  - See `--help` for more details.
  - This will create an `inference-[CKPT]-*-[TIMESTAMP]` folder under the `LOGDIR`. The assumed structure is: `[LOGDIR]/checkpoints` holds the different epoch-level ckpts, and `[LOGDIR]/inference-*` contains the inference predictions (named based on args like ckpt, timestamp, and an optional comment).
  - For convenience, you can rename or symlink a chosen ckpt inference dir to `inference` under the `LOGDIR`.

Example usage:

```bash
python gcs_gdx_inf_cvae.py --logdir ./logs/gcs_gdx/dset_fullrobots/ --ckpt last.ckpt --num_per_unseen_object 64
```
### Grasp Generation

- Script: `gcs_gdx_grasp_gen.py`
  - `--logdir`: the actual logs dir where your trained model is stored. For example, if you use the provided logs/checkpoints from our Box folder, this could be `./logs/gcs_gdx/dset_fullrobots/`.
  - `--inf_dir`: the inference directory, relative to the `logdir`, where the predicted coordinate maps are stored. This can simply be `inference` when using the provided checkpoints; if you ran inference on your own epoch ckpt, it will have a different name.
    - Please check the actual inference dir name under the logdir! If it is cumbersome to type the detailed inference dir name, see the last point under Coordinate Map Inference for renaming/symlinking the chosen inference dir to just `inference`.
    - Example: `inference-epoch=14-step=2460.ckpt-sharp_lift-debug-240909_131116` is the result from inference (coordinate map prediction).
  - `--max_iter`: we used the default of `100` steps.
  - See `--help` for more details.

Example usage when using the provided checkpoints and logs (for your custom runs, the logdir and inference dir names will be different!):

```bash
python gcs_gdx_grasp_gen.py --logdir ./logs/gcs_gdx/dset_fullrobots/ --inf_dir inference --dataset fullrobots
```
### Simulation Test

- We used the GenDexGrasp isaacgym evaluation setup with `learning_rate=0.1` and `step_size=0.02` as the grasp evaluation params for each gripper (inside the env script, under the `_set_normal_force_pose()` method).
- See the self-contained `grasp-test-isaacgym/` folder for more details.
Generated grasp example after the grasp optimization process:
## Grasp Transfer

The gripper correspondences imposed by RFP can also be used for transferring and optimizing grasps across different grippers, without any manual re-targeting and independently of the learned model.

Please see `notebooks/example_grasp_transfer.ipynb` for a usage example on grasp transfer.

See `GcsGraspTransferOpt` under `model/hand_opt.py` for the implementation.
- Grasp transfer is supported between the robot grippers under the `grippers/` dir.
- The grasp transfer object `GcsGraspTransferOpt` requires as input: (1) the source and target gripper names, (2) the source gripper grasp `q`.
- Here a grasp `q` refers to a `(9+d)`-dimensional tensor, broken down as follows (see the sketch after this list):
  - `q[0:3]`: gripper base link translation vector for the grasp
  - `q[3:9]`: gripper base link orientation, represented as a 6d vector of two orthogonal components (think of the first 2 columns of a rotation matrix, ordered as {x1,x2,x3,y1,y2,y3})
  - `q[9:9+d]`: joint values for the `d` joints on the source gripper (so in essence, `d` ~ DOFs)
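Here is a minimal sketch of this layout, with a standard Gram-Schmidt reconstruction of the rotation matrix from the 6d block; the gripper and joint values are illustrative, and the repository's own conversion utilities may differ in details:

```python
import torch
import torch.nn.functional as F

def rot6d_to_matrix(r6):
    """(6,) tensor ordered as {x1,x2,x3,y1,y2,y3} -> (3,3) rotation matrix."""
    x = F.normalize(r6[0:3], dim=0)
    y = r6[3:6] - (x * r6[3:6]).sum() * x  # make y orthogonal to x
    y = F.normalize(y, dim=0)
    z = torch.linalg.cross(x, y)           # third column completes the frame
    return torch.stack([x, y, z], dim=1)   # columns are x, y, z

d = 2                                                # e.g. a 2-joint gripper (illustrative)
q = torch.zeros(9 + d)
q[0:3] = torch.tensor([0.0, 0.0, 0.2])               # base link translation
q[3:9] = torch.tensor([1., 0., 0., 0., 1., 0.])      # identity orientation in 6d form
q[9:] = torch.tensor([0.04, 0.04])                   # the d joint values
R = rot6d_to_matrix(q[3:9])                          # recover the full rotation matrix
```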
Here is a visualization of the grasp transfer between 2 grippers.
## Grasp Optimization

Please see the ipython notebook under `notebooks/example_grasp_opt.ipynb` for a usage example of grasp optimization with a partial object point cloud and a noisy initial Fetch gripper grasp. You can extend a similar flow to other grippers as well.

See `HandObjectGraspOpt` under `model/hand_opt.py` for the implementation.
- The grasp `q` is broken down as above. For the Fetch gripper, we keep it in the open configuration during optimization.
- Given an object point cloud and some noisy initial grasp (transferred from a human grasp, for example), we can optimize towards a potentially non-colliding version using the grasp optimization.
- We use the initial grasp to create a dummy contact goal on the object point cloud and then optimize towards a final grasp (a sketch of this idea follows below).
- Please take a look at the ipython notebook and its comments for more details!
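To make the dummy-contact-goal idea concrete, here is a small sketch based only on the description above; the function name and shapes are illustrative assumptions, not the `HandObjectGraspOpt` API:

```python
import torch

def dummy_contact_goal(init_gripper_pts, object_pts):
    """Hypothetical helper: for each surface point of the initial (noisy)
    grasp, pick its nearest object point as the contact target."""
    dists = torch.cdist(init_gripper_pts, object_pts)  # (N, M) pairwise distances
    nearest = dists.argmin(dim=1)                      # index of the closest object point
    return object_pts[nearest]                         # (N, 3) contact goals on the object
```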
Here is a visualization of the optimization result. Red represents the original noisy grasp and green represents the post-optimization version:
## Citation

If this work helps in your research, please consider citing it:

```bibtex
@article{khargonkar2024robotfingerprint,
  title={RobotFingerPrint: Unified Gripper Coordinate Space for Multi-Gripper Grasp Synthesis},
  author={Khargonkar, Ninad and Casas, Luis Felipe and Prabhakaran, Balakrishnan and Xiang, Yu},
  journal={arXiv preprint arXiv:2409.14519},
  year={2024}
}
```

Thank you for taking a look at this repository! Any feedback and comments are welcome!




