> INITIALIZING SYSTEM...
> LOADING AUTONOMY MODULES...
> CALIBRATING SENSORS...
> ESTABLISHING NEURAL LINK...
> SYSTEM READY
> CURRENT_TASK: Master's_Robotics_UIUC
> FOCUS: Humanoids, Safe Autonomy, Embedded AI
> STATUS: Developing Intelligent Systems
Built an automated pipeline to label 1,607 clips from 72 YouTube videos. Trained a 19M-parameter Temporal Transformer achieving 77-85% accuracy on unseen demos. Predictions map hierarchically to robot primitives, with YOLO detections fused in for disambiguation.
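A minimal sketch of the clip classifier's shape, assuming precomputed per-frame features; the dimensions, clip length, and class count here are illustrative, not the actual 19M-parameter model:

```python
import torch
import torch.nn as nn

class TemporalTransformer(nn.Module):
    """Classify an action clip from a sequence of per-frame features."""
    def __init__(self, feat_dim=768, d_model=256, n_heads=8, n_layers=6, n_classes=40):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, 257, d_model))  # up to 256 frames + [CLS]
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, frames):                    # frames: (B, T, feat_dim)
        x = self.proj(frames)
        cls = self.cls.expand(x.size(0), -1, -1)  # prepend a learnable [CLS] token
        x = torch.cat([cls, x], dim=1) + self.pos[:, : frames.size(1) + 1]
        return self.head(self.encoder(x)[:, 0])   # logits from the [CLS] position
```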
Zero-shot 6D tracking architecture with VLM-based semantic inventory. Achieved 76.5-100% ADD-S AUC on YCB-Video.
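For reference, ADD-S scores a pose by matching each ground-truth-posed model point to its nearest estimated-posed point, which makes it robust to object symmetries. A sketch, assuming the object model as an (N, 3) point array:

```python
import numpy as np
from scipy.spatial import cKDTree

def add_s(model_pts, R_gt, t_gt, R_est, t_est):
    """Mean nearest-neighbor distance between the two posed point sets."""
    gt = model_pts @ R_gt.T + t_gt
    est = model_pts @ R_est.T + t_est
    dists, _ = cKDTree(est).query(gt, k=1)
    return dists.mean()
```

The AUC variant then integrates the fraction of frames passing each distance threshold, conventionally up to 10 cm.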
Vision-based human-robot interaction (HRI) with real-time hand tracking. The robot adapts to the human's position for seamless object handover.
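A toy sketch of the tracking side using MediaPipe's legacy Hands solution; the downstream servoing controller is out of scope here, and the confidence threshold is illustrative:

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        wrist = result.multi_hand_landmarks[0].landmark[0]  # normalized image coords
        print(f"wrist at ({wrist.x:.2f}, {wrist.y:.2f})")   # fed to the handover controller
cap.release()
```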
Time-to-collision (TTC)-driven safety architecture. Fused LiDAR + RGB-D for pedestrian tracking with a 90% success rate.
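The core of a TTC trigger is one line of geometry: range over closing speed under a constant-velocity assumption. A sketch with an illustrative braking threshold:

```python
import numpy as np

def time_to_collision(rel_pos, rel_vel, eps=1e-6):
    """Time until range reaches zero if the current closing speed holds;
    inf when the tracked pedestrian is moving away."""
    rng = np.linalg.norm(rel_pos)
    closing = -np.dot(rel_pos, rel_vel) / max(rng, eps)
    return rng / closing if closing > eps else float("inf")

# brake if predicted impact falls inside the reaction-time budget (2 s here)
if time_to_collision(np.array([4.0, 1.0, 0.0]), np.array([-3.0, 0.0, 0.0])) < 2.0:
    print("TRIGGER_BRAKE")
```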
Implemented whole-body locomotion and motion planning for the Unitree G1. Combined ZMP-based balance control, MPC, and RL locomotion for stable manipulation and reaching tasks.
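Under the linear inverted pendulum model, the ZMP condition reduces to projecting CoM dynamics onto the ground and checking the support polygon. A sketch, with the CoM height and box-shaped support region as stand-ins for the real G1 values:

```python
import numpy as np

G = 9.81    # gravity (m/s^2)
Z_C = 0.70  # assumed constant CoM height (m); illustrative, not the G1's

def zmp_xy(com_pos, com_acc):
    """LIPM Zero-Moment Point: p_zmp = p_com - (z_c / g) * a_com, in (x, y)."""
    return np.asarray(com_pos)[:2] - (Z_C / G) * np.asarray(com_acc)[:2]

def is_balanced(com_pos, com_acc, support_box):
    """Stable if the ZMP lies inside the support polygon, approximated
    here as an axis-aligned box ((xmin, ymin), (xmax, ymax))."""
    x, y = zmp_xy(com_pos, com_acc)
    (xmin, ymin), (xmax, ymax) = support_box
    return xmin <= x <= xmax and ymin <= y <= ymax
```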
Developed a perception pipeline converting depth point clouds to elevation maps for quadruped footstep placement.
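A minimal version of that conversion, assuming the cloud is already in the robot frame; grid resolution and extent are illustrative:

```python
import numpy as np

def elevation_map(points, res=0.05, extent=2.0):
    """Rasterize an (N, 3) point cloud into a 2.5D height grid, keeping
    the max z per cell; -inf marks cells with no returns."""
    n = int(2 * extent / res)
    grid = np.full((n, n), -np.inf)
    ij = np.floor((points[:, :2] + extent) / res).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    np.maximum.at(grid, (ij[ok, 0], ij[ok, 1]), points[ok, 2])
    return grid
```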
Trained terrain-adaptive locomotion with Proximal Policy Optimization (PPO) and integrated a Control Barrier Function (CBF) as a real-time safety filter. Achieved zero-fall locomotion with 99% unsafe-action rejection.
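The safety filter idea in miniature: keep the RL action unless it violates the CBF condition Lf_h + Lg_h·u + αh ≥ 0, and otherwise apply the minimum-norm correction. With a single affine constraint the QP has a closed form; a real filter stacks many constraints:

```python
import numpy as np

def cbf_filter(u_rl, h, Lf_h, Lg_h, alpha=5.0):
    """Project u_rl onto the half-space {u : Lf_h + Lg_h @ u + alpha*h >= 0}."""
    margin = Lf_h + Lg_h @ u_rl + alpha * h
    if margin >= 0:
        return u_rl                               # RL action already safe
    return u_rl - margin * Lg_h / (Lg_h @ Lg_h)   # minimum-norm safe correction
```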
Trained ANYmal quadruped policies using PPO. Achieved robust traversal of irregular terrain under zero-fall safety constraints.
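For completeness, the clipped surrogate at the heart of PPO, which bounds each policy update so locomotion training stays stable (eps=0.2 is the common default):

```python
import torch

def ppo_clip_loss(logp_new, logp_old, adv, eps=0.2):
    """Negative clipped surrogate objective over a batch of transitions."""
    ratio = torch.exp(logp_new - logp_old)
    return -torch.min(ratio * adv,
                      torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()
```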
Fused Visual-Inertial Odometry (VIO) with footstep planning. Enabled autonomous navigation in GPS-denied environments with <10 cm drift.
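A toy sketch of the interface between the two: the planner replans from the latest VIO pose every step, so residual odometry drift is corrected continuously. Names and step length are illustrative:

```python
import numpy as np

def next_footstep(vio_pose, goal_xy, step_len=0.20):
    """Greedy next-footstep target toward the goal from the current
    VIO pose estimate (x, y, yaw)."""
    x, y, _yaw = vio_pose
    to_goal = np.asarray(goal_xy, dtype=float) - np.array([x, y])
    dist = np.linalg.norm(to_goal)
    if dist <= step_len:
        return tuple(goal_xy)
    return tuple(np.array([x, y]) + step_len * to_goal / dist)
```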
Built a 5-DOF prosthetic arm controlled via electromyography (EMG) signals. Deployed real-time gesture classification on an ESP32 using XGBoost, achieving 96% accuracy.
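A sketch of the training side with synthetic data standing in for the recorded EMG; the windowed features (MAV, RMS, zero crossings) are a common choice for this task, and the fitted trees are what gets exported for on-device inference:

```python
import numpy as np
from xgboost import XGBClassifier

def emg_features(window):                      # window: (n_samples, n_channels)
    mav = np.abs(window).mean(axis=0)          # mean absolute value
    rms = np.sqrt((window ** 2).mean(axis=0))  # root mean square
    zc = (np.diff(np.signbit(window).astype(int), axis=0) != 0).sum(axis=0)
    return np.concatenate([mav, rms, zc])

rng = np.random.default_rng(0)
windows = rng.standard_normal((500, 64, 4))    # 500 windows, 64 samples, 4 channels
labels = rng.integers(0, 5, 500)               # 5 gesture classes (illustrative)
X = np.stack([emg_features(w) for w in windows])
clf = XGBClassifier(n_estimators=100, max_depth=4).fit(X, labels)
```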
> # WHAT I'M BUILDING TOWARD
Robots that learn tasks from watching humans — then execute them safely in the real world.
> # CURRENT FOCUS
Building an optimized Vision-Language-Action pipeline that converts unstructured video demonstrations into executable robot primitives, bridging the gap between internet-scale knowledge and physical manipulation.
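In spirit, the output end of that pipeline is a dispatch from predicted action labels to verified, parameterized primitives; everything below (names, signatures) is hypothetical:

```python
PRIMITIVES = {
    "reach": lambda robot, obj: robot.move_to(obj.grasp_pose),
    "grasp": lambda robot, obj: robot.close_gripper(obj.width),
    "place": lambda robot, obj: robot.move_to(obj.place_pose),
}

def execute(robot, plan):
    """Run a labeled plan [(action, object), ...] through the skill layer,
    refusing any action without a verified primitive."""
    for action, obj in plan:
        if action not in PRIMITIVES:
            raise ValueError(f"no safe primitive for '{action}'")
        PRIMITIVES[action](robot, obj)
```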
> # THE BIGGER PICTURE
General-purpose robots need a skill layer — modular, verifiable behaviors that can be composed, transferred, and deployed across platforms. I'm focused on making that layer learnable from real-world data and safe by construction, using simulation-first validation and control-theoretic safety guarantees.