When UBTech robots walked onto the factory floor at Zeekr's electric vehicle plant in March 2025, they did something no humanoid robots had done before: they worked as a coordinated team, lifting boxes, assembling car parts, and performing quality checks - all without human supervision. Powered by DeepSeek's reasoning model, these machines represented more than a manufacturing curiosity. They embodied a fundamentally different vision of artificial intelligence, one that may reshape the global technology competition in ways Silicon Valley hasn't fully grasped.
The race to develop humanoid robots is unfolding along two very different paths in China and the United States, reflecting contrasting philosophies about how robots should learn and improve. China is taking a bold, fast-paced approach by deploying large numbers of robots directly into real-world environments like factories, streets, and homes. This “learn-on-the-job” strategy allows robots to gather vast amounts of real-world data, which is then used to continuously improve their artificial intelligence. Companies such as Unitree and Agibot are leading this effort, with Agibot even offering an open-source operating system called Lingqu OS to encourage collaboration and innovation across the industry. By flooding the market with task-specific robots, China creates a massive, living laboratory that accelerates progress through collective learning and rapid iteration.
Embodied intelligence refers to intelligent agents that have a physical or virtual body and interact with their environment through continuous sensing, decision-making, and action. Unlike traditional AI that processes static data, embodied intelligence emphasizes the dynamic loop where perception guides action, and action changes what is perceived. This interaction involves three key components: intelligence (the computational brain), embodiment (the physical or simulated body), and environment (the external world with its objects and dynamics).
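The perception-action loop described above can be sketched in a few lines of code. This is a minimal toy illustration, not any real robot SDK: the class and function names (`Environment`, `sense`, `apply`, `decide`) are invented for the example, and the "world" is just a one-dimensional position the agent tries to drive to zero.

```python
# A minimal sketch of the embodied-intelligence loop:
# perception guides action, and action changes what is perceived next.
# All names here are illustrative, not from any real robotics library.

class Environment:
    """A toy 1-D world: the agent's goal is to reach position 0."""
    def __init__(self, start=5):
        self.position = start

    def sense(self):
        # Perception: observe the current state of the world.
        return self.position

    def apply(self, action):
        # Action: change the world, which changes future perception.
        self.position += action

def decide(observation):
    # Decision-making: move one step toward the goal at 0.
    if observation > 0:
        return -1
    if observation < 0:
        return 1
    return 0

env = Environment(start=5)
for _ in range(10):
    obs = env.sense()     # intelligence receives input from the environment
    act = decide(obs)     # the "brain" picks an action
    env.apply(act)        # the "body" acts on the environment

print(env.position)  # the loop has driven the agent to its goal
```

The point of the sketch is the closed loop: unlike a model that processes a static dataset once, the agent's next observation depends on its own previous action.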
China is advancing its artificial intelligence (AI) efforts in a way that differs significantly from the United States. While the U.S. mainly focuses on developing large language models (LLMs) to achieve artificial general intelligence (AGI) - a future AI that can outperform humans in all cognitive tasks - China is taking a broader and more balanced approach. Instead of putting all its resources into one method, China is investing in multiple paths to AGI simultaneously.
China is making a major push into embodied AI, which means creating smart robots and AI-powered machines that can sense, understand, and interact with the physical world. Unlike many Western countries that focus mainly on digital AI like large language models, China aims to combine its strengths in AI software with advanced robotics hardware. This approach is part of a national strategy to boost the economy, address social challenges like an aging population, strengthen the military, and gain a global technological edge.
Last September, I decided to finally give LLMs a shot for code development. At that point, I had barely used any of the best-known LLMs for anything serious. It's now nine months later, and a lot has changed. With all the news about Claude 4, and considering I have a work ChatGPT subscription, I decided to give LLMs another try.
On large language models, artificial intelligence, DeepSeek, and trying to find the middle lane between skepticism and surety. I mention bionic arms a lot for some reason.
A friend of mine just sent me the link to this AI-powered accent guesser from Bold Voice. After trying it out a few times, it guessed me as either Romanian or Bulgarian, but always with low confidence. A few friends of mine from certain countries, however, got more than 95% confidence on their country, even when trying to fake an accent from somewhere else.
Yesterday I used ChatGPT for the first time. Well, that's not true. Let me rephrase: yesterday I used ChatGPT for the first time to actually help me complete a task. A boring, tedious task. It did help me complete it, but I'm still not convinced it was faster than doing the work manually.
The mess between Forbes and Perplexity AI highlights how soulless and extractive aggregation can be in the wrong hands. It’s the wrong direction for LLMs.
The latest artificial intelligence use cases, like Windows’ Recall and Zoom’s digital twins, appear to be built specifically for managers and executives, and literally nobody else. That’s a problem.
The problem that’s bugging me about the Rabbit R1 and Humane AI Pin dunk-fests: A seeming discouragement of hardware innovation.
Tech and creativity once had a symbiotic relationship in the push towards innovation. As generative content matures, it feels like they’re starting to diverge. And that’s bad for creative people.
A comic artist took a journalistic dive into the knotty debates around generative AI—and found artists worried about the people even more than the tech.
One of the news industry's biggest, longest-standing problems is that it does not value the work it creates at the level it deserves. It should take a cue from the music industry's biggest stars.
The music industry has had to navigate the choppy waters of royalties, ownership, and inspiration well before AI-generated music became the hot new thing. And not just because of sampling.