AI & Robotics in Space

Autonomy Moves From the Lab to the ISS

Date: December 15, 2025
An Ursa Cortex Blog by Akash Iyer


One of the biggest themes of 2025 has been AI, and space is no exception. Across both research and live operations, AI is increasingly being built into space robots to improve navigation and task planning. The goal is simple: reduce reliance on constant human control while boosting efficiency and reliability. This post looks at two recent milestones: an AI-assisted motion-planning demonstration on the International Space Station (ISS), and a reinforcement learning result in orbital robotics. [1]

AI-Assisted Autonomous Systems on the International Space Station

Researchers recently demonstrated AI-assisted motion planning on NASA’s Astrobee robot aboard the ISS, a major step toward robots that can navigate complex, obstacle-filled environments in space with less human micromanagement. [1]

The core idea was an AI “warm start.” Instead of planning a route completely from scratch every time (which can be slow and computationally expensive), the model was trained on thousands of previously computed trajectories to generate a strong starting guess. The system then refined that guess using optimization, while still meeting safety constraints. [1]
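To make the warm-start idea concrete, here is a minimal Python sketch of my own (an illustration, not the flight software): a warm_start function stands in for the trained model by returning a straight-line guess, and a generic constrained optimizer (scipy.optimize.minimize) refines it while holding a safety clearance from an obstacle. All names, shapes, and numbers here are hypothetical.

    # Minimal warm-start trajectory sketch (hypothetical, not flight code).
    # A learned model would propose the initial guess; an optimizer refines it.
    import numpy as np
    from scipy.optimize import minimize

    N = 10                                   # waypoints per trajectory
    start = np.array([0.0, 0.0, 0.0])
    goal = np.array([1.0, 1.0, 0.5])
    obstacle = np.array([0.5, 0.5, 0.25])    # spherical obstacle center
    clearance = 0.2                          # required distance from it

    def warm_start(start, goal):
        """Stand-in for a neural net trained on past trajectories;
        here it just returns a straight-line guess."""
        return np.linspace(start, goal, N)

    def path_length(x):
        pts = x.reshape(N, 3)
        return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

    def clearance_con(x):
        # One nonnegative value per waypoint when the path is safe
        pts = x.reshape(N, 3)
        return np.linalg.norm(pts - obstacle, axis=1) - clearance

    result = minimize(
        path_length,
        warm_start(start, goal).ravel(),             # the AI "warm start"
        constraints=[
            {"type": "ineq", "fun": clearance_con},          # stay collision-safe
            {"type": "eq", "fun": lambda x: x[:3] - start},  # pin the start pose
            {"type": "eq", "fun": lambda x: x[-3:] - goal},  # pin the goal pose
        ],
    )
    trajectory = result.x.reshape(N, 3)
    print("safe trajectory found:", result.success)

The point of the warm start is visible in the structure: the optimizer begins near a plausible solution instead of searching from scratch, while the constraints, not the learned model, are what keep the final trajectory collision-safe.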

In tests on the ISS, the AI-assisted approach computed safe trajectories roughly 50–60% faster than conventional planning alone. That matters because onboard computing power is limited, and communication delays only grow the farther a spacecraft is from Earth. [1]

Why this matters

  • Fewer teleoperation bottlenecks: Deep-space missions can’t rely on constant human joystick control because of distance and communication delays. [1]
  • More tasks can be “crew-minimal”: Robots that can move safely on their own free up astronauts for higher-priority work. [2]
  • Safety is still the priority: Faster planning is only useful if the robot remains collision-safe in a cluttered environment like the ISS. [1]

Reinforcement Learning in Space Robotics Research

Another milestone came from the U.S. Naval Research Laboratory (NRL), where the team earned a Best Paper Award in Orbital Robotics for work tied to the APIARY experiment—demonstrating reinforcement learning control of a free-flying robot in microgravity. [3]

In simple terms, the work showed that a control policy trained with reinforcement learning could handle full six-degree-of-freedom (6-DOF) motion, translation plus rotation, in microgravity: the policy was trained in simulation and validated in orbit. That’s a meaningful step toward robots that can autonomously support in-space assembly, servicing, logistics, and even orbital-debris-related missions. [3]
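For intuition about what 6-DOF control in microgravity involves, here is a toy free-flyer environment in Python. It is a simplified sketch of my own, not NRL’s APIARY code: double-integrator dynamics, a small-angle attitude approximation instead of proper quaternions, and a reward that trades off pose error against “fuel” use.

    # Toy 6-DOF microgravity environment (hypothetical sketch, not APIARY).
    # State: position, velocity, attitude (small-angle), angular rate.
    # Action: body force and torque, as an RL policy would command them.
    import numpy as np

    class FreeFlyerEnv:
        def __init__(self, mass=9.0, inertia=0.25, dt=0.1):
            self.mass, self.inertia, self.dt = mass, inertia, dt
            self.reset()

        def reset(self):
            self.state = np.zeros(12)   # pos(3), vel(3), attitude(3), rate(3)
            self.target = np.concatenate([np.ones(3), np.zeros(9)])  # goal pose, at rest
            return self.state.copy()

        def step(self, action):
            force, torque = action[:3], action[3:]
            pos, vel, att, rate = np.split(self.state, 4)
            vel = vel + force / self.mass * self.dt        # no gravity, no drag
            pos = pos + vel * self.dt
            rate = rate + torque / self.inertia * self.dt
            att = att + rate * self.dt
            self.state = np.concatenate([pos, vel, att, rate])
            pose_error = np.linalg.norm(self.state - self.target)
            fuel = np.linalg.norm(action)
            reward = -pose_error - 0.1 * fuel              # reach the pose, cheaply
            return self.state.copy(), reward, pose_error < 0.05

    env = FreeFlyerEnv()
    obs = env.reset()
    for _ in range(100):
        action = np.random.uniform(-0.1, 0.1, 6)   # a trained policy would go here
        obs, reward, done = env.step(action)
        if done:
            break

The random action at the bottom is the placeholder: training replaces it with a policy that has learned, from the reward signal alone, how to fire the right forces and torques.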

STEM spotlight: Reinforcement learning (in one minute)

Reinforcement learning (RL) is a type of machine learning in which an “agent” learns by trial and error. It takes an action, gets feedback (a reward or penalty), and gradually learns a policy (a strategy) that maximizes long-term reward. In robotics, that reward might encode goals like “reach the target pose,” “use less fuel,” or “avoid collisions.” The hard part in space is the sim-to-real gap: making sure a policy learned in simulation still behaves safely on real hardware. [4]
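Here is that loop in miniature, using standard tabular Q-learning on a toy one-dimensional “reach the target” task (a textbook example, not code from the papers above):

    # One-minute RL demo: tabular Q-learning on a 1-D "reach the target" task.
    # The agent starts at cell 0 and must learn to reach cell 4; each step costs "fuel".
    import numpy as np

    n_states, n_actions = 5, 2           # cells 0..4; actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))  # learned action values
    alpha, gamma, epsilon = 0.5, 0.95, 0.1

    for episode in range(500):
        s = 0
        while s != n_states - 1:
            # Trial and error: mostly exploit what's known, sometimes explore
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 10.0 if s_next == n_states - 1 else -1.0   # goal reward vs. fuel cost
            # Nudge the value estimate toward reward + discounted future value
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next

    policy = np.argmax(Q, axis=1)   # the learned strategy for each cell
    print("learned policy (1 = move right):", policy)

After a few hundred episodes the policy says “move right” in every cell, because the reward structure makes reaching the goal quickly the best long-term strategy. The same learn-from-reward mechanic, with far larger models and far richer state, is what scales up to 6-DOF free-flyer control.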

Zooming out

Put together, these two stories point in the same direction: autonomy with guardrails. Space robots are getting better at doing the “thinking” locally (planning motion, controlling movement, adapting to uncertainty) while still staying within strict safety constraints. [1]


Sources

  1. Space.com — “AI helps pilot free-flying robot around the International Space Station for 1st time ever”
  2. Stanford News — “AI advances robot navigation on the International Space Station”
  3. U.S. Naval Research Laboratory — “NRL Wins Best Paper Award…” (APIARY / iSpaRo 2025)
  4. arXiv — “Autonomous Planning In-space Assembly Reinforcement-learning free-flyer (APIARY)… Astrobee Testing”
  5. NASA — Astrobee overview

Published in Ursa Cortex: The Ursa Majors Group Blog