This repository provides a complete pipeline for trajectory tracking and optimization for mobile manipulation tasks, supporting both simulation and real-world execution using the Fetch robot on ROS Noetic.
Set up a Python 3.9 environment using Conda for dependency management.
conda create -n trajopt python=3.9
conda activate trajopt
Install the ROS and Python dependencies required for the project.
source /opt/ros/noetic/setup.bash
chmod +x install_ros_deps.sh
sh install_ros_deps.sh
pip install -r requirements.txt
Navigate to the workspace and build it with the following commands.
cd mm_ws
source /opt/ros/noetic/setup.bash
rosdep install --from-paths src --ignore-src -r -y
catkin clean --yes
catkin_make -DCMAKE_BUILD_TYPE=Release
⚠️ Note: If you encounter errors related to the LIBFFI version, refer to this issue for guidance.
At runtime, the pipeline involves spawning the environment and robot, launching the GSAM server and object pose estimation, and running mobile-base and joint trajectory optimization to execute the task.
Start the Gazebo simulation with a GUI for visualization.
roslaunch aws_robomaker_small_house_world small_house.launch gui:=True
Spawn the robot in the Gazebo environment.
roslaunch fetch_gazebo spawn_robot.launch
Make sure to use the correct fetch.urdf for the desired gripper configuration. The default URDF file corresponds to config 1.
Initialize MoveIt for motion planning. If using gripper config 1:
roslaunch fetch_moveit_config moveit.launch robot:=fetch_original
If using gripper config 2:
roslaunch fetch_moveit_config move_group.launch
Visualize the robot and environment in RViz.
cd mm_ws/config
rosrun rviz rviz -d tto.rviz
cd mm_ws/scripts/config/
Update paths.py to reflect the TASK_ID, gripper configs, etc.
NOTE: CURRENT_GRIPPER_CONFIG should always be 0.
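For illustration, a minimal paths.py might look like the sketch below. Only the flag names mentioned in this README are shown (TASK_ID, CURRENT_GRIPPER_CONFIG, IS_MASK, IS_BASE, IS_SIM, IS_DELTA); the file in the repo is authoritative, and the values here are placeholders.

```python
# Hypothetical sketch of mm_ws/scripts/config/paths.py -- placeholder values only.
TASK_ID = "move_cracker_box"  # identifier of the task / demonstration to run
CURRENT_GRIPPER_CONFIG = 0    # must always remain 0 (see the note above)
IS_MASK = True                # query the GSAM server for the target object mask
IS_BASE = False               # set True to include the mobile base in optimization
IS_SIM = True                 # set False when executing on the real robot
IS_DELTA = False              # set True for real-world runs
```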
The GroundingSAM (GSAM) service is part of the vie module and is located in HRT1/vie. Make sure to activate the robokit environment before running this service.
python gsam_server.py
Optimization requires the target object's mask so that the object's point cloud is not treated as an obstacle while tracking the trajectory. This GSAM ROS service is therefore run to query the object mask, with the prompt passed as an argument to the main script in the next step. Set IS_MASK=True in paths.py. If you want to test the optimization without the mask, there is no need to run this server; just set IS_MASK=False.
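To make the role of the mask concrete, here is a small NumPy sketch (not from the repo) of how object points can be dropped from the obstacle cloud once a mask is available. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def remove_object_points(cloud_xyz, pixel_uv, mask):
    """Drop points whose image pixel falls inside the object mask.

    cloud_xyz : (N, 3) points from the depth camera
    pixel_uv  : (N, 2) integer (u, v) pixel of each point
    mask      : (H, W) boolean GSAM mask, True on the target object
    """
    on_object = mask[pixel_uv[:, 1], pixel_uv[:, 0]]
    return cloud_xyz[~on_object]

# Toy example: a 2x2 image whose left column is the object.
mask = np.array([[True, False], [True, False]])
cloud = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
uv = np.array([[0, 0], [1, 0]])  # first point on the object, second off it
obstacles = remove_object_points(cloud, uv, mask)
print(obstacles)  # only the off-object point survives as an obstacle
```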
🧱 Note: Steps 1–6 only need to be launched once. Step 7 (Optimization) can be rerun for each new task or object.
Run trajectory optimization, optionally including the robot's base. This script automatically spawns the scene corresponding to the "move the cracker box" task. If you want to run with base optimization, set IS_BASE=True.
🧱 NOTE: This is an approximate scene setup compared to the actual real-world demonstration, and is intended for users to test and validate the optimization module before deploying on real hardware.
cd mm_ws/scripts/traj_opt
python run.py --stow_dir "right" --obj_prompt <object-name>
All the key parameters are taken from paths.py. Make sure it is updated for the task being conducted.
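For reference, the CLI used above could be parsed roughly as follows. This is a hedged sketch of run.py's interface: the choices for --stow_dir and the defaults are assumptions, not the repo's actual definitions.

```python
import argparse

def build_parser():
    # Hypothetical sketch of run.py's CLI; the real script may accept more flags.
    parser = argparse.ArgumentParser(description="Trajectory optimization runner")
    parser.add_argument("--stow_dir", choices=["left", "right"], default="right",
                        help="side on which the arm stows (assumed choices)")
    parser.add_argument("--obj_prompt", required=True,
                        help="text prompt naming the target object for GSAM")
    return parser

# Example invocation mirroring the command above:
args = build_parser().parse_args(["--stow_dir", "right", "--obj_prompt", "cracker box"])
print(args.stow_dir, args.obj_prompt)
```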
Note: For running in the real world, just set IS_SIM=False and IS_DELTA=True.
Run steps 1–6 from the Running in Simulation section before proceeding further.
To estimate the object pose at runtime, relative to the first frame of the demonstration, we use BundleSDF. This aligns the real-world scene with the recorded demonstration frames to provide accurate object-relative transformations.
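Conceptually, the object-relative transformation can be computed from two 4x4 homogeneous poses, as in this small NumPy sketch (illustrative only; BundleSDF's own API and pose conventions apply in the repo):

```python
import numpy as np

def relative_object_transform(T_demo, T_now):
    """Transform mapping the object's pose in the demo's first frame to its
    current real-world pose, i.e. T_rel @ T_demo = T_now (4x4 homogeneous)."""
    return T_now @ np.linalg.inv(T_demo)

# Toy check: the object moved +0.2 m along x between demo and rollout.
T_demo = np.eye(4)
T_now = np.eye(4)
T_now[0, 3] = 0.2
T_rel = relative_object_transform(T_demo, T_now)
```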
cd vie/docker/
./enter_docker.sh && ./start_docker.sh
cd ..
./run_bundlesdf.sh <path to the specific task demonstration folder> 10 5
Here, 10 indicates the number of demo frames and 5 the number of rollout frames to be used.
Run trajectory optimization including the robot's base.
cd mm_ws/scripts/traj_opt
python run.py --stow_dir "right" --obj_prompt <object-name>
All the key parameters are taken from paths.py. Make sure it is updated for the task being conducted.