AI News Roundup: Humanoid Robots, Grok Deepfakes, and the Shadow AI Era
AI is no longer a distant promise; it sits at the center of our daily lives, shaping how we work, learn, and even interpret the images that flash across our screens. This week’s AI News looks at a spectrum of developments that test the edge between impressive capability and real-world risk. At the heart of the discussion is a warning about humanoid robots steered by large language models. When these models hallucinate or misread data, their decisions can cascade into tangible consequences on factory floors, in clinics, and beyond. The tension is not simply about clever demos but about building guardrails that keep autonomous systems aligned with human values as they become more embedded in everyday routines.
On the image-editing frontier, the debate is no longer theoretical. Grok AI, the chatbot developed by Elon Musk's xAI, has been used to digitally remove clothing from pictures of women and children. The Guardian reports that such degrading images continue to circulate on X despite the platform's pledges to suspend offenders. Regulators have stepped in, with Ofcom urgently engaging with X and its AI subsidiary to understand what steps are being taken to protect users in the UK. This episode underscores a widening concern: even well-intentioned tools can be exploited to cause harm, forcing platforms to rethink their safety obligations in a rapidly evolving landscape.
The ethical and personal stakes grow even higher. Ashley St Clair, mother of one of Elon Musk’s sons, says she felt “horrified and violated” when supporters used Grok to fabricate sexualised images of her. Her account, reported by The Guardian, highlights a chilling form of AI-enabled revenge porn. It is a stark reminder that the technology can weaponize real identities, and that the people behind the tools, the platforms that host them, and those depicted must confront difficult questions about consent, ownership, and accountability in a world where images can be convincingly manipulated.
Beyond image manipulation, workplace dynamics are being reshaped by what some call Shadow AI. AI Business argues that the tendency toward shadow IT—where AI tools proliferate outside formal governance—raises data leakage and compliance risks. The call is not for blanket bans but for collaboration: admins working with employees to find safe, compliant solutions rather than stifling innovation. Without clear governance, sensitive data can slip through cracks, turning AI’s promise into governance headaches.
Education and social equity also thread through this week’s coverage. Generation AI explores fears of a growing social divide unless all children gain computing literacy. In a Cambridge classroom, a ten-year-old demonstrates how AI can be taught to distinguish between apples and faces, illustrating that AI literacy isn’t just for techies—it’s a foundational skill for the next generation. Experts argue that curricula must evolve to ensure that young people can critically engage with AI, understand its limits, and participate in shaping the technology they will inherit.
Beyond classrooms, AI’s potential to assist mental health care is taking shape in multilingual contexts. In Uganda, calls to a clinic helpline are used to train a therapy algorithm intended to power a chatbot capable of offering support in local languages. The Guardian frames this as a promising development in global mental health access, while underscoring the need for careful safeguards around consent, privacy, and cultural nuance as such tools scale.
Finally, the deepfake phenomenon continues to provoke a mix of caution and curiosity. Yanis Varoufakis reflects on his own experience with a deepfake video and argues that confrontations with AI-generated manipulation could spur more critical, evidence-based public discourse. The debate isn’t just about banning technologies; it’s about building resilience—teaching people to verify what they see and hear in an age where synthetic media can closely mimic reality.
Sources
- When AI-Powered Humanoid Robots Make Bad Choices — Shaun Sutner, AI Business
- Elon Musk’s Grok AI is used to digitally undress images of women and children — Amelia Gentleman, Helena Horton and Dan Milmo, The Guardian
- Mother of one of Elon Musk’s sons ‘horrified’ at use of Grok to create fake sexualised images of her — Helena Horton, The Guardian
- Work With – Not Against – Shadow AI — Apu Pavithran, AI Business
- Generation AI: fears of ‘social divide’ unless all children learn computing skills — Robert Booth, The Guardian
- The chatbot will see you now: how AI is being trained to spot mental health issues in any language — Kat Lay, The Guardian
- I’m watching myself on YouTube saying things I would never say. This is the deepfake menace we must confront — Yanis Varoufakis, The Guardian