Research
![unnamed_edited.jpg](https://static.wixstatic.com/media/f02714_fd0e03fd02a74c97827b49fd776a526d~mv2.jpg/v1/fill/w_198,h_127,al_c,lg_1,q_80,enc_avif,quality_auto/unnamed_edited.jpg)
Visuo-Tactile Robot Grasping using Implicit Representations
- Multimodal Grasp Prediction: combines visual and tactile feedback to predict grasp success probabilities, enhancing robotic manipulation in unstructured environments.
- Implicit Representation (Grasp Field): explores grasp fields, an implicit representation, instead of explicit grasp poses to model grasp affordances more flexibly.
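A grasp field can be thought of as a learned function that maps a 3D query point, conditioned on visual and tactile features, to a grasp success probability. The sketch below illustrates the idea with a tiny NumPy MLP using random weights; the architecture, feature dimensions, and names are illustrative assumptions, not the project's actual model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GraspField:
    """Illustrative implicit grasp field: (3D point, visual feat, tactile feat) -> P(success)."""

    def __init__(self, vis_dim=32, tac_dim=16, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 + vis_dim + tac_dim          # xyz query + fused multimodal features
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, 1)) * 0.1
        self.b2 = np.zeros(1)

    def __call__(self, xyz, vis_feat, tac_feat):
        # Condition the 3D query on both modalities by simple concatenation.
        x = np.concatenate([xyz, vis_feat, tac_feat])
        h = np.tanh(x @ self.W1 + self.b1)
        return float(sigmoid(h @ self.W2 + self.b2))  # grasp success probability in (0, 1)

field = GraspField()
p = field(np.array([0.1, -0.2, 0.4]),
          np.zeros(32),      # stand-in visual embedding
          np.zeros(16))      # stand-in tactile embedding
print(p)
```

Unlike an explicit grasp detector that returns a discrete set of candidate poses, a field like this can be queried densely at any point in the workspace, which is what makes the representation flexible.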
![Screenshot from 2025-02-07 14-19-52.png](https://static.wixstatic.com/media/f02714_9f35a8cde9d94d929ff90673e7a522f3~mv2.png/v1/fill/w_198,h_137,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/Screenshot%20from%202025-02-07%2014-19-52.png)
Mobile Manipulator for Aircraft Clutter Clearance
- Developed the perception pipeline for a mobile manipulator designed for aircraft cabin cleaning, leveraging monocular camera input in NVIDIA Isaac Sim.
- Implemented semantic segmentation using SAM and DINO to identify objects for downstream pose detection.
- Integrated GraspSplats for grasp point detection and FoundationPose for 6D pose estimation, enabling precise robotic manipulation.
- Utilized MASt3R to generate depth maps, enhancing the robot's environmental understanding for improved motion planning and obstacle avoidance.