Robots Training Robots (in Reverse), and It's the Manufacturing That Matters
Mapping America's Machine Shops, China's Policy Enabling Robotics, the MathWorks Robotics Repository, and Best Paper Awards and Finalists from CoRL 2025
President Trump said he’d hit China with an additional 100% tariff in response to Beijing’s new restrictions on exports of rare-earth minerals. via WSJ
The extra levy would accompany new export controls on critical software products, both of which will start Nov. 1, the president posted on Truth Social. Beijing dominates processing capabilities for rare-earth minerals—critical components of products such as semiconductors, electric vehicles, and jet fighters. In the latest flare-up of tensions, Trump also suggested an expected meeting with Chinese leader Xi Jinping in South Korea would no longer happen.
From Policy Blueprint to Industrial Practice
Made in China 2025 (MIC 2025) was launched in 2015. It identified ten priority sectors, from aerospace to semiconductors to new energy vehicles. Robotics was one of them. The point was not that robots themselves were the final goal. The point was that without robots China could not move up the value chain in any of the other sectors.
The plan was followed by a series of concrete policy steps:
> In 2016 China released its first National Robotics Development Plan.
> In 2021 robotics featured prominently in the 14th Five-Year Plan.
> In 2023 the government rolled out the “Robotics+” Application Action Plan and a set of guiding opinions on humanoid robots.
> In 2025 “embodied AI” was mentioned for the first time in the Government Work Report.
Image courtesy: Kearney analysis
Analysts at the Mercator Institute for China Studies (MERICS) note that MIC 2025 was about avoiding the “middle-income trap” by upgrading manufacturing, not just about chasing frontier tech. The state deployed subsidies, local procurement mandates, and cheap financing. The Rhodium Group tracked the billions of dollars in grants flowing into strategic sectors. The World Economic Forum describes how this evolved into a “dual circulation” model, where domestic consumption of advanced goods like robots reinforced industrial policy while buffering against external shocks.
Policy was the catalyst.
The deeper engine was China’s manufacturing base. Factories making servos, reducers, motors, sensors, and batteries were already clustered. Knowledge built up through decades of running assembly lines and supplier ecosystems. As analyst Dan Wang has argued, China’s advantage is not only scale but tacit expertise that comes from daily production experience. That tacit expertise is hard to codify or transfer, and it is central to why robot deployment scaled so fast in China.
Robots, Meet Your New Teacher: Yourselves
A USC Viterbi School of Engineering graduate team co-developed a new robotic system that adapts and learns every second.
Imagine a robot that can learn from its mistakes with no human input. It sounds like a scene out of a “Star Wars” movie, but a study conducted by a USC Viterbi School of Engineering team this spring showed it was possible. After just one video demonstration, the robotic arm eventually could complete complex tasks like folding towels or opening a specific trash bin.
This new approach allows robots to learn tasks faster and more efficiently, and it lays the foundation for a world where a robot could do simple household chores for you, step by step. Currently, humans must train robots in what becomes a time-consuming and arduous process. Now, a robot can train itself just by watching the video repeatedly on its way to completing the task.
https://rewind-reward.github.io/static/videos/video_rewind.mov
“Every time the motors move, it gets feedback [from the video],” said Abrar Anwar, a fifth-year Ph.D. computer science student at the Thomas Lord Department of Computer Science.
This instant, automatic learning was demonstrated in a study called “ReWiND: Language-Guided Rewards Teach Robot Policies without New Demonstrations.” The paper was co-written by a student team co-led by Erdem Biyik, a USC Viterbi assistant professor of computer science and electrical and computer engineering, alongside Dr. Jesse Zhang and Jesse Thomason, a USC Viterbi assistant professor of computer science. Together, they developed a method that unlocks new potential in the world of robot learning and hints at an exciting future.
“I could easily see house chores being done with robots,” Biyik said. “It’s not like you are telling the robot to cook a whole dish for you. But you are saying ‘dice the tomato,’ and then maybe next you say ‘Okay, now put the tomato in the pot.’”
The robotic lesson plan
Here’s how it works. First, a demonstration video is recorded of a robot doing a simple task, for example, picking up a sponge. This is different from a previous USC Viterbi study, which trained robots on videos of humans completing these tasks. When a robot is taught by a fellow robot, it can learn much more efficiently, making tasks easier and faster to complete.
“When we give a robot a robot video, the model is going to be [more] familiar with it,” Biyik said.
Once the demonstration video is recorded, the robot trainee starts working on the task it has been assigned. This is where the magic lies: the robot learns every second. As it moves toward the sponge, it signals to itself that it is “succeeding”; when it drifts away, the signal turns negative. The robot is programmed to give itself feedback based on the video, with no human intervention.
The ReWiND framework helps robots complete a variety of complex tasks.
“As the robot approached the sponge, [it] would give some positive feedback, and then if it touched the sponge, more feedback — and then, if it grasped it, more positive feedback,” Biyik said.
The twist that separates this study from others is the titular “ReWiND” feature: videos are played in reverse, not just from beginning to end. This process gives the robot what the team calls “dense feedback,” meaning the robot not only learns more about the assigned task but also learns at a much quicker rate from only one video demonstration. The result is a faster, more dynamic learning process, because the robot learns what it is doing wrong at the very beginning of a task and at every second in between, not just at the end.
“The reason why all this feedback matters is that whenever the robot is trying to learn things by itself, it does a lot of really stupid things: It crashes into the table, hits objects away when it’s supposed to pick it up, stuff like that,” Anwar said. “If [it] gives it dense feedback, like if the robot moves its hand away, [it] can give negative feedback for that.”
Thus, picking up a sponge becomes a process with many small steps: even moving the robotic arm in the right direction earns positive or negative feedback. The feedback also needs to work regardless of context, so a future robot chef could keep cooking despite changes in variables like language.
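The dense, progress-based feedback described above can be sketched in a few lines. This is only an illustrative approximation, not the authors' implementation: it assumes each demo frame and each robot observation has already been mapped to an embedding vector (the encoder is not shown), and it scores progress as the normalized index of the nearest demonstration frame. Both function names are hypothetical.

```python
import numpy as np

def dense_progress_reward(demo_embeds, obs_embed):
    """Score an observation by how far along the demo it appears to be.

    demo_embeds: (T, D) array of per-frame demo embeddings (assumed given).
    obs_embed:   (D,) embedding of the robot's current camera frame.
    Returns a reward in [0, 1]: near 0 at the demo's start, near 1 at its
    end, so the robot gets feedback at every step, not only on completion.
    """
    dists = np.linalg.norm(demo_embeds - obs_embed, axis=1)
    nearest = int(np.argmin(dists))
    return nearest / (len(demo_embeds) - 1)

def rewind_labels(demo_embeds):
    """Hypothetical helper: label the forward clip with increasing progress
    and the rewound clip with decreasing progress, giving the reward model
    dense examples of both succeeding and regressing."""
    T = len(demo_embeds)
    forward = [(frame, t / (T - 1)) for t, frame in enumerate(demo_embeds)]
    rewound = [(frame, 1 - t / (T - 1)) for t, frame in enumerate(demo_embeds[::-1])]
    return forward, rewound
```

On a toy demo whose embeddings march steadily from start to finish, the reward climbs as the arm nears the goal and falls when the clip is rewound, which is all this sketch is meant to show.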
“If we changed a little bit about the environment or the language we use, the robot is supposed to do the same thing,” said Yusen Luo, a recent computer science master’s graduate. “For example, ‘close the window’ or ‘make the window close,’ should still work.”
The USC Viterbi team’s “ReWiND” paper has received significant acclaim since its publication. It won a best paper award at a workshop at the 2025 Robotics: Science and Systems (RSS) conference in Los Angeles. It will also be presented orally at the Conference on Robot Learning (CoRL), a distinction bestowed on only 4.3% of submitted papers.
But the accolades are just the beginning for ReWiND, whose teaching mechanics have huge potential in both the household — folding clothes, dicing tomatoes, and washing dishes — and in factories, too.
“You can apply this to many different kinds of industries,” Anwar said. “A lot of people have been looking at solving tasks in industrial problems. If you want to do manufacturing, it’s really hard normally to find out how to give feedback in those scenarios. But something like ReWiND, or a larger, scaled-up version of ReWiND, can help give feedback if you have robots in-house.”
https://rewind-reward.github.io/
Hardware FYI just released an interactive map of US machine shops, making building robots easier.
COMMUNITY NEWS
Best Paper Awards and Finalists from CoRL 2025
Best Paper Awards
Learning a Unified Policy for Position and Force Control in Legged Loco-Manipulation
Peiyuan Zhi, Peiyang Li, Jianqin Yin, Baoxiong Jia, Siyuan Huang
Fabrica: Dual-Arm Assembly of General Multi-Part Objects via Integrated Planning and Learning
Yunsheng Tian, Joshua Jacob, Yijiang Huang, Jialiang Zhao, Edward Li Gu, Pingchuan Ma, Annan Zhang, Farhad Javid, Branden Romero, Sachin Chitta, Shinjiro Sueda, Hui Li, Wojciech Matusik
Best Student Paper Award
Visual Imitation Enables Contextual Humanoid Control
Arthur Allshire, Hongsuk Choi, Junyi Zhang, David McAllister, Anthony Zhang, Chung Min Kim, Trevor Darrell, Pieter Abbeel, Jitendra Malik, Angjoo Kanazawa
Award Finalists
Learning a Unified Policy for Position and Force Control in Legged Loco-Manipulation
Peiyuan Zhi, Peiyang Li, Jianqin Yin, Baoxiong Jia, Siyuan Huang
LocoFormer: Generalist Locomotion via Long-context Adaptation
Min Liu, Deepak Pathak, Ananye Agarwal
Visual Imitation Enables Contextual Humanoid Control
Arthur Allshire, Hongsuk Choi, Junyi Zhang, David McAllister, Anthony Zhang, Chung Min Kim, Trevor Darrell, Pieter Abbeel, Jitendra Malik, Angjoo Kanazawa
Fabrica: Dual-Arm Assembly of General Multi-Part Objects via Integrated Planning and Learning
Yunsheng Tian, Joshua Jacob, Yijiang Huang, Jialiang Zhao, Edward Li Gu, Pingchuan Ma, Annan Zhang, Farhad Javid, Branden Romero, Sachin Chitta, Shinjiro Sueda, Hui Li, Wojciech Matusik
DexUMI: Using Human Hand as the Universal Manipulation Interface for Dexterous Manipulation
Mengda Xu, Han Zhang, Yifan Hou, Zhenjia Xu, Linxi Fan, Manuela Veloso, Shuran Song
The Sound of Simulation: Learning Multimodal Sim-to-Real Robot Policies with Generative Audio
Renhao Wang, Haoran Geng, Tingle Li, Philipp Wu, Feishi Wang, Gopala Anumanchipalli, Trevor Darrell, Boyi Li, Pieter Abbeel, Jitendra Malik, Alexei A Efros
Pi 0.5: a Vision-Language-Action Model with Open-World Generalization
Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Robert Equi, Chelsea Finn, Niccolo Fusai, Manuel Y. Galliker, Dibya Ghosh, Lachy Groom, Karol Hausman, brian ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Devin LeBlanc, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Allen Z. Ren, Lucy Xiaoyang Shi, Laura Smith, Jost Tobias Springenberg, Kyle Stachowicz, James Tanner, Quan Vuong, Homer Walke, Anna Walling, Haohuan Wang, Lili Yu, Ury Zhilinsky
Steering Your Diffusion Policy with Latent Space Reinforcement Learning
Andrew Wagenmaker, Yunchu Zhang, Mitsuhiko Nakamoto, Seohong Park, Waleed Yagoub, Anusha Nagabandi, Abhishek Gupta, Sergey Levine
Building your own robot just got a lot easier.
Via Lukas M. Ziegler Awesome Robotics! 💾
MathWorks has released an open-source GitHub repository packed with robotics resources for anyone interested in getting hands-on.
The repo includes examples for robot arms, ground vehicles, and drones, with projects that show how to connect with ROS and ROS2 or even deploy Simulink models directly as ROS nodes.
There are also more advanced demos, like modeling off-road environments and testing navigation algorithms in photorealistic simulations.
Everything is well-documented, with tutorials and links that make it easy to go from concept to prototype. 📑
Whether you’re a student, researcher, or just curious, there’s material here for every level. And since it’s an open community project, you can not only explore but also share your own contributions.
For anyone looking to learn robotics by doing, this is a solid place to start!
Here’s the link: https://lnkd.in/d8HcxQDc
Nebius Robotics & Physical AI Awards celebrate emerging startups and scale-ups that use AI to shape the future of the industry.
Our mission is to showcase transformative technology, foster collaboration between founders, industry leaders, and researchers, and power the next generation of breakthroughs in AI.
Companies from the United States, Europe, the United Kingdom and Asia are invited to apply, with the winners announced at the Nebius Robotics and Physical AI Summit & Awards Ceremony at the Computer History Museum in Mountain View, CA on December 9, 2025.
Selected winners will also receive up to $100,000 in compute credits for Nebius AI Cloud, accelerated by NVIDIA AI infrastructure.
Mona Li, Director of Startup Ecosystem at Nebius, described the rationale behind the awards.
“Physical AI is the next frontier where robotics and intelligence truly converge,” she said. “Together with NVIDIA, we are excited to celebrate the pioneers turning this vision into reality.”
Silicon Valley Robotics Investor/Startup Database
Are you a startup fundraising at the moment? Fill out our "For Investors' Eyes Only" sheet.
These investors write checks anywhere from angel and seed stage to $5-50 million.
Feedback from the startups - high quality inbound interest from investors.
Feedback from the investors - incredible depth and range of robotics startups.
If you’re an accredited investor and haven’t already signed up for the spreadsheet - Sign up here for great robotics deal flow!
Robotics News
Studying Clever Creases in Nature May Inspire Foldable Structures for Drones and Robots - Smithsonian
Robots receive major intelligence boost thanks to Google DeepMind’s ‘thinking AI’ — a pair of models that help machines understand the world - Live Science
Human-robot interaction is coming. Are we ready for it? - World Economic Forum
Global network of ocean robots endorsed by UNESCO
Deep Robotics unveils DR02, the world’s first IP66-rated humanoid robot - KRAsia
Unitree robots from China briefly appear on Walmart website, highlighting US robotics gap - South China Morning Post
Made-in-China robots impress South Koreans as cooperation opportunities expand - Global Times
Korean IT service firms pivot to robot business with AI-driven robot brains - Chosun Biz
SoftBank to buy ABB’s robot business for $5.4 billion in push to merge AI and robotics - Reuters
America’s Manufacturing Resurgence Will Be Powered by These Robots - WSJ
UAE Launches Autonomous Cargo Transport Fleet - Future Transport
Didi Autonomous Driving Raises RMB 2 Billion in Series D Funding - Technode
Kodiak AI, now a public company, looks to deliver an autonomous trucking future - FreightWaves
The future of self-driving cars: Safer, smarter, and everywhere - McKinsey & Company
These Planes That Fly Themselves Could Transform The Skies - WSJ
Do Robots Dream Of Perfect Hands - Fast Company
You can now get paid to fold your laundry, as long as you’re willing to film it - Business Insider
2025 Time100 Next
Robotics Best Inventions
The Robot In Your Kitchen - Figure 03 - Time Magazine
including BYD, Waymo, Anduril, Scale.ai, Unitree Robotics, and DeepMind
Best Invention Special Mentions
Kratos Automated Truck Platoon
My Robotics Pet: Coding Chameleon
Mammotion Yuka mini robot lawnmower
Robotics Events
Oct 21-23 - FIRA USA, Woodland CA - (use our code “NAOMISHISAIKEGUESTS” for 20% discount)
Oct 27-30 - ROSCon Singapore
Thursday Oct 30 - Women in Robotics event at CL MV
Wed Nov 5 - Bots&Beer in SF or CL MV - tbc
Nov - Online Robotics Manufacturing Workshop - tbc
Sat Nov 22 - Robot Party and Brazilian BBQ at CL MV
Wed Dec 3 - Bots&Beer in SF or CL MV - tbc
Wed Dec 10 - Robotics Investment Summit and Manufacturing Workshop
Humanoids Summit - Dec 11-12
Join the organizing team at Silicon Valley Robotics - bots&beer@svrobo.org


The takeaway: China’s robotics surge grew out of tacit manufacturing expertise, ReWiND-style self-supervision pays off where feedback is dense, and tightening export controls make dual-sourcing, along with bets on loco-manipulation and dual-arm VLA models, the pragmatic next moves.