Physical AI & Robotics · 2026 · Active

XG1 — Rapid Humanoid Robot Learning Pipeline

A two-day humanoid-robot learning pipeline for the Unitree G1: Meta Quest 3 teleoperation, NVIDIA Sonic low-latency control, DeepLake data streaming, NVIDIA GR00T policy fine-tuning, and Nomadic AI diagnostics. Won 2 tracks.

README.md

XG1 demonstrates a fast-track workflow for humanoid robot learning. The challenge: take a Unitree G1 humanoid from manual demonstration to autonomous policy testing in 36 hours, with reliable performance on complex tasks like walking to tables and pick-and-place maneuvers with beverages and apples.

**Teleoperation Layer**: We integrated Meta Quest 3 with MuJoCo for intuitive 6DOF control, using NVIDIA Sonic to achieve low-latency control commands. This let us manually complete the target tasks (walking + pick-and-place) and capture high-fidelity demonstration data. Sonic's millisecond-class latency was critical — any teleop lag breaks operator confidence.
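As a rough illustration of the per-tick mapping (names like `Pose` and `map_delta` are hypothetical, not the actual XG1 or Sonic API), each Quest 3 pose delta can be scaled and clamped into an end-effector step so a tracking glitch never commands a large jump in a single 50 Hz tick:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Hypothetical 6-DOF pose: position in meters, orientation in radians.
    x: float; y: float; z: float
    roll: float; pitch: float; yaw: float

def map_delta(prev: Pose, curr: Pose, scale: float = 0.5,
              max_step: float = 0.02) -> tuple:
    """Scale the operator's hand motion and clamp each positional axis
    to at most `max_step` meters per control tick."""
    def clamp(v: float) -> float:
        return max(-max_step, min(max_step, v * scale))
    return (clamp(curr.x - prev.x),
            clamp(curr.y - prev.y),
            clamp(curr.z - prev.z))

# A 20 cm tracking jump on y is clamped to the 2 cm per-tick limit.
step = map_delta(Pose(0, 0, 0, 0, 0, 0), Pose(0.01, 0.2, -0.01, 0, 0, 0))
```

The clamp is what preserves operator confidence: latency spikes or lost frames degrade into small, recoverable steps instead of sudden arm motions.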

**Data Strategy with DeepLake**: To handle high-throughput training, we used DeepLake to store and stream Lightwheel's G1 beverage organization data. The efficient tensor storage provided the fast I/O necessary to fine-tune models within tight time constraints. Without it, the training pipeline would have been I/O-bound rather than compute-bound — the entire fine-tuning loop would have stalled.
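The compute-bound vs. I/O-bound point can be sketched with a generic prefetching pattern (plain Python, not the DeepLake API): a background thread keeps a bounded queue of batches full, so the training loop rarely blocks on storage.

```python
import queue
import threading

def prefetch(batch_source, depth: int = 4):
    """Yield batches from `batch_source`, loading ahead in a worker thread.

    The bounded queue lets I/O overlap with training compute while
    capping memory use at `depth` in-flight batches.
    """
    q = queue.Queue(maxsize=depth)
    DONE = object()  # sentinel marking end of stream

    def worker():
        for batch in batch_source:
            q.put(batch)  # blocks when the queue is full (backpressure)
        q.put(DONE)

    threading.Thread(target=worker, daemon=True).start()
    while (item := q.get()) is not DONE:
        yield item

# Stand-in for streamed demonstration batches.
batches = list(prefetch(iter(range(6))))
```

Streaming loaders in tensor stores follow this shape: the consumer sees an ordinary iterator while fetch latency is hidden behind it.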

**Policy Fine-Tuning with NVIDIA GR00T**: We fine-tuned NVIDIA GR00T on our collected data. Since Sonic's fine-tuning features were not yet released, we used Sonic primarily for high-fidelity data collection while running autonomous inference through GR00T. The 45 minutes of demonstration data (135,000 timesteps at 50Hz) was enough to produce a working policy in ~2 hours of training.
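A toy behavior-cloning loop shows the shape of the fine-tuning step (illustrative only; GR00T's real training interface differs): regress actions onto observations from the teleop demonstrations by gradient descent on mean-squared error.

```python
def fit_linear_policy(obs, acts, lr=0.1, steps=500):
    """Fit action ≈ w * obs + b over scalar (observation, action) pairs
    by full-batch gradient descent on mean-squared error."""
    w, b = 0.0, 0.0
    n = len(obs)
    for _ in range(steps):
        dw = db = 0.0
        for o, a in zip(obs, acts):
            err = (w * o + b) - a      # prediction error on one pair
            dw += 2 * err * o / n      # d(MSE)/dw
            db += 2 * err / n          # d(MSE)/db
        w -= lr * dw
        b -= lr * db
    return w, b

# Synthetic demos follow action = 2*obs + 1; the fit should recover that.
w, b = fit_linear_policy([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The real pipeline replaces the linear map with a pretrained foundation-model policy, but the objective is the same: make the policy's actions match the demonstrated ones.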

**Diagnostics with Nomadic AI**: To understand why the fine-tuned agent struggled with specific task instructions, we used Nomadic AI as a diagnostic layer. This pinpointed failure modes in the model's reasoning — identifying which task instructions caused divergence, at what decision points, and what features the model was attending to incorrectly. The output was a concrete improvement path rather than vague hypotheses.
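The diagnostic idea, bucketing failed rollouts by instruction and decision point so the dominant divergence surfaces first, can be sketched as follows (field names are hypothetical, not Nomadic AI's schema):

```python
from collections import Counter

def top_failure_modes(rollouts, k=2):
    """Count failed rollouts per (instruction, decision point) and
    return the k most frequent failure modes."""
    counts = Counter(
        (r["instruction"], r["failed_at"])
        for r in rollouts if not r["success"]
    )
    return counts.most_common(k)

# Illustrative rollout log from autonomous policy testing.
rollouts = [
    {"instruction": "pick up the apple", "failed_at": "grasp", "success": False},
    {"instruction": "pick up the apple", "failed_at": "grasp", "success": False},
    {"instruction": "walk to the table", "failed_at": "approach", "success": False},
    {"instruction": "walk to the table", "failed_at": None, "success": True},
]
modes = top_failure_modes(rollouts)
```

Ranking failures this way turns "the agent struggles sometimes" into a prioritized worklist: the top bucket is the instruction and step to fix first.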

**Outcome**: Won 2 tracks at Intelligence at the Frontier Hackathon 2026 — "Physical AI & Robotics: Data at Scale — Best Overall Use of DeepLake" and "Physical AI & Robotics by NomadicML — New Project Winner". The architecture demonstrated that combining immersive teleop with robust MLOps tooling can compress the humanoid-learning timeline from weeks to days.

Highlights
  • Meta Quest 3 + MuJoCo teleoperation with NVIDIA Sonic for low-latency control
  • DeepLake tensor storage for high-throughput streaming of G1 demonstration data
  • NVIDIA GR00T policy fine-tuning on 45 min of collected demonstrations (135K timesteps)
  • Nomadic AI diagnostics layer pinpointing fine-tuned agent failure modes
  • Successful execution: walking to tables, beverage and apple pick-and-place
  • Built in 36 hours from blank slate to autonomous policy testing
Key Metrics
Won DeepLake track — Best Overall Use
Won NomadicML track — New Project Winner
Tech Stack (8 deps)
Robotics · Unitree G1 · Meta Quest 3 · MuJoCo · NVIDIA Sonic · DeepLake · NVIDIA GR00T · Nomadic AI