
Figure AI Announces Helix 02 – a General-Purpose Humanoid System

Figure AI has announced Helix 02, a new AI control system for its humanoid robots, detailed in a post published on January 27, 2026.

Helix 02 is designed to control locomotion, balance, and manipulation with a single learned system, rather than relying on separate modules for walking and object interaction. In the style of a vision-language-action (VLA) model, it maps multi-modal sensor inputs directly to full-body motor commands, covering tasks that require coordinated movement of the arms, legs, torso, and hands.

According to Figure AI, Helix 02 was demonstrated in a real kitchen environment where a humanoid robot autonomously unloaded and reloaded a dishwasher. The task consisted of 61 sequential actions performed continuously for several minutes without human intervention or system resets.

Architecture Overview

The architecture is described as hierarchical and organized across different time scales, with three interacting components:

  • Whole-body motion prior (System 0)
    A learned model trained on human motion data that encodes physically feasible, coordinated full-body behaviors. Rather than issuing explicit commands, this component acts as a motion prior, constraining the space of valid actions to those that maintain balance, coordination, and contact consistency across the entire body.
  • Sensor-to-action control layer (System 1)
    A vision-conditioned action policy that maps multimodal sensory input—including vision, tactile feedback, and proprioception—directly to full-body motor commands. This layer operates in a closed loop at high frequency, handling continuous control, contact dynamics, and reactive adjustments during task execution.
  • High-level reasoning and planning component (System 2)
    A slower, deliberative module responsible for scene understanding, task interpretation, and planning. This component determines what should be done next by producing task-level intents or subgoals, without directly controlling low-level motor outputs.
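The separation of responsibilities described above can be sketched in Python. Everything here — class names, method signatures, the joint count, and the shape of intents and commands — is a hypothetical illustration of the reported three-layer design, not Figure AI's actual interfaces:

```python
from dataclasses import dataclass


@dataclass
class Intent:
    """Task-level subgoal emitted by System 2 (e.g. 'grasp mug from top rack')."""
    description: str


class System2:
    """Slow, deliberative reasoning: scene understanding -> subgoals."""
    def plan(self, observation: dict) -> Intent:
        # Placeholder: a real system would run a learned reasoning model here.
        return Intent(description="grasp the mug from the top rack")


class System1:
    """Fast closed-loop policy: (intent, sensors) -> raw full-body commands."""
    def act(self, intent: Intent, observation: dict) -> list[float]:
        # Placeholder: a learned policy conditioned on vision, touch,
        # and proprioception would produce these values.
        return [0.0] * 38  # one command per actuated joint (count is illustrative)


class System0:
    """Learned whole-body motion prior: shapes commands toward feasible motion."""
    def constrain(self, command: list[float]) -> list[float]:
        # Placeholder: a real prior would enforce balance and contact
        # consistency; here we merely clamp commands to a valid range.
        return [max(-1.0, min(1.0, c)) for c in command]
```

The key design point this sketch tries to capture is that System 0 never issues commands of its own; it only reshapes what System 1 produces.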

How the Components Work Together

The three components interact through a hierarchical control flow with shared representations and clearly separated responsibilities:

  1. System 2 defines intent
    Based on sensory observations and task context, the high-level reasoning component produces an abstract goal or subtask (e.g., “grasp the mug from the top rack”).
  2. System 1 executes the intent
    Conditioned on the current intent and real-time sensory input, the control layer continuously generates full-body motor commands, adapting to environmental changes, contacts, and disturbances during execution.
  3. System 0 constrains execution
    The learned whole-body motion prior shapes the resulting behavior, ensuring that generated actions remain dynamically stable, coordinated, and physically feasible across locomotion and manipulation.

This structure allows planning, perception, and control to operate at different time scales while remaining tightly coupled during execution.
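A minimal toy sketch of how such a multi-rate loop might be scheduled, with stub strings standing in for the three systems. The tick counts and replanning period are illustrative assumptions; Figure AI has not published the actual control rates:

```python
def run_episode(steps: int = 200, s2_period: int = 50) -> list[str]:
    """Run a toy hierarchical control loop.

    System 2 replans every `s2_period` ticks (slow); System 1 runs every
    tick (fast); System 0 filters every command before it is issued.
    """
    log = []
    intent = None
    for t in range(steps):
        if t % s2_period == 0:
            intent = f"subgoal@{t}"          # System 2: slow, deliberative replanning
        command = f"cmd({intent}, t={t})"    # System 1: fast reactive control
        command = f"prior[{command}]"        # System 0: feasibility shaping
        log.append(command)
    return log
```

The same intent persists across many fast control ticks until the slow planner replaces it, which is the coupling-across-time-scales the announcement describes.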

Helix 02 also incorporates visual input from palm-mounted cameras and tactile sensing at the fingertips, which Figure AI reports improves fine object manipulation.

The company positions Helix 02 as a step toward more general-purpose humanoid behavior in unstructured environments.

More at https://www.figure.ai/news/helix-02
