In partnership with

Welcome to this edition of Loop!

To kick off your week, I’ve rounded up the most important technology and AI updates that you should know about.

HIGHLIGHTS

  • Meta's digital twin that predicts how your brain responds to sights and sounds

  • Why the UK Government is warning that AI agents can escape their sandboxes

  • Google DeepMind's growing list of robotics partnerships and why the real value is in the data

    … and much more

Let's jump in!



Your AI tools are only as good as your prompts.

Most people type short, lazy prompts because writing detailed ones takes forever. The result? Generic outputs.

Wispr Flow lets you speak your prompts instead of typing them. Talk through your thinking naturally - include context, constraints, examples - and Flow gives you clean text ready to paste. No filler words. No cleanup.

Works inside ChatGPT, Claude, Cursor, Windsurf, and every other AI tool you use. System-level integration means zero setup.

Millions of users worldwide. Teams at OpenAI, Vercel, and Clay use Flow daily. Now available on Mac, Windows, iPhone, and Android - free and unlimited on Android during launch.



1. Meta builds a "digital twin" of the human brain

We start this week with Meta's AI research team, which has built what it's calling a "digital twin" of the human brain. The team has developed a new foundation model, which was trained on over 1,000 hours of brain scan data from 720 volunteers.

Meta claims that it can predict how your brain responds to almost anything you see, hear, or read - and it can do this without the need for an actual brain scan, which is expensive and slow to perform.

In some cases, they found that the model’s predictions were actually more accurate than an individual’s own brain scan - since the AI model was able to learn patterns across hundreds of people.

This could be incredibly valuable for neuroscientists and researchers, who can use the technology to test their theories about how the brain works - speeding up research into conditions that affect millions of people.

Meta has open-sourced the model and its code under a non-commercial licence, which is a really good move for the industry and will allow other researchers to improve the technology over time.

Of course, there are limitations here. fMRI scans can't capture the millisecond-level activity of neurons, and the model only covers vision, hearing, and language - so it can't simulate how the brain processes touch, smell, or balance.

For Meta, the benefits are pretty clear. They can use this model to simulate how people might react to advertisements on their platforms - advertising is their main revenue source, and better predictions would allow advertisers to achieve better results.

And if scientists use the technology to achieve new breakthroughs, they’ll get a lot of positive PR for doing this. If the results hold up, this could become a really valuable tool for neuroscience research.

2. UK Government warns that AI agents can escape their sandboxes

Moving on to agents, the UK's AI Safety Institute has created a new benchmark that tests whether different AI agents can break out of their sandboxes. Worryingly, some of them can.

If you're not familiar with sandboxes, they’re isolated environments that allow researchers to test AI models - without giving them access to real computers or data. They're a critical part of how the industry evaluates what these models can do.

Their new benchmark tests these AI agents under 18 escape scenarios - ranging from simple misconfigurations, like an exposed Docker socket, to more complex exploits that would take an experienced cybersecurity professional over an hour to pull off.
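To make the "exposed Docker socket" example concrete, here's a minimal sketch - my own illustration, not part of the AISI benchmark - of the kind of check a sandbox operator might run. If the host's Docker control socket is reachable from inside a container, anything with shell access can drive the host's Docker daemon, which is a classic escape route:

```python
import os
import stat

def docker_socket_exposed(path="/var/run/docker.sock"):
    """Return True if a Docker control socket is reachable at `path`.

    Mounting the host's socket into a container is exactly the simple
    misconfiguration described above: code running inside the sandbox
    can then ask the host's Docker daemon to start new (even
    privileged) containers, breaking out of its isolation.
    """
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except (FileNotFoundError, PermissionError):
        return False

# A properly hardened sandbox should report False here.
print(docker_socket_exposed())
```

The fix is equally simple - never mount `/var/run/docker.sock` into an untrusted container - which is why it's worrying that models can now find and use the mistake on their own.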

The UK Government’s safety team found that advanced models can escape common misconfigurations when asked to do so. But it's worth separating two things here - there are AI agents that live inside software, and there are AI coding tools that developers use to create that software.

The problem is that while those coding tools are great for productivity, they can also introduce more misconfigurations into the codebase - which is exactly the type of mistake that allows the other agents to escape our controls.

Even worse, the AI models did better when they were given more compute and allowed to reason through the problem - which suggests this is going to become a much bigger issue over time.

3. Vibe-coded malware impacts a software tool used by millions

A popular open-source tool was hit by malware this week - an attack made possible by the rise of vibe coding. LiteLLM has become popular with developers because it allows them to easily access hundreds of AI models through a single interface.

Every day, the tool is downloaded around 3.4 million times and is being used in thousands of projects, so there was quite a big risk here. The malware was designed to steal login credentials from everything it touched, which could include API keys for AI models, cloud services, and internal systems - allowing it to spread further across the supply chain.

What's interesting is how it was caught. A researcher at FutureSearch downloaded LiteLLM and his machine crashed. A bug in the malware itself caused the failure, which forced him to dig into what had gone wrong.

He believes that the malware was likely vibe coded and made entirely with AI. The irony here is that sloppy, AI-generated code is what gave it away. A more experienced attacker would have caught the bug that crashed the researcher's machine, but that's the trade-off with vibe coding - it lowers the barrier for everyone, including people with bad intentions.

The good news is that it was caught within hours, and LiteLLM's team has been working with Mandiant to clean things up. But this won't be the last time we see something like this.

Vibe coding has lowered the barrier for building useful software, but it's done exactly the same thing for malware - and the next attempt probably won't have a convenient bug that gives it away.

4. OpenAI abandons Sora and its billion-dollar Disney deal

We’re starting to see big moves from OpenAI, as they try to cut down on distractions and focus on their core product - ChatGPT.

The company has decided to end support for Sora, an AI video generator launched in 2024 and pitched to Hollywood executives. Sora was also at the centre of a $1 billion deal with Disney, which allowed OpenAI to create animated videos with its characters. But that plan has been abandoned.

It’s worth noting the position OpenAI is currently in. For months, the company has been in "code red" mode over its battle with Google's Gemini - a response to the release of Gemini 3, which outperformed ChatGPT in several benchmarks and was favourably reviewed.

Then, Anthropic released their latest version of Claude Opus and became the top enterprise provider for AI, beating OpenAI. With their competitors focusing on large language models, OpenAI decided it needed to cut down on the number of distractions and invest significantly more in ChatGPT.

By shifting focus away from future Sora models, OpenAI can free up a lot more compute and instead give it to their research teams. Plus, the company is working towards a potential IPO this year - so it makes sense that they’ve dropped Sora and are now focusing on products that actually generate revenue.

5. Agile Robots partners with Google, plans to deploy robots globally

In the robotics sector, Google DeepMind has signed yet another partnership deal. This time it was with Agile Robots, a German company that has already installed over 20,000 robotic systems worldwide.

The deal will see Agile Robots integrate Google’s robotics models into its hardware, while the data collected by those robots is fed back to improve the underlying AI models. The robots will be tested across different industries, including manufacturing, automotive, data centres, and logistics.

It's the latest in a growing pattern. Earlier this year, Boston Dynamics announced a similar partnership with DeepMind for its humanoid robot Atlas, and German startup Neura Robotics has teamed up with Qualcomm to use its new processor series.

The logic behind these deals is fairly straightforward - robots are incredibly complex on both the hardware and software side, and very few companies are strong at both. It makes sense for hardware specialists to partner with AI labs that can provide the intelligence layer, rather than trying to build everything in-house.

For DeepMind, the real value is in the data. Every robot that runs their models in a real factory is generating training data that helps improve the next version - which is a flywheel that's incredibly hard for their competitors to replicate.

With Nvidia's Jensen Huang and others calling physical AI the next frontier, I'd expect to see a lot more of these partnerships in the coming months.



Google’s new algorithm can slash AI memory use by 6x

Google has released a new algorithm that makes AI models far more memory efficient, with the announcement already spooking memory chip stocks around the world.

The company’s researchers have developed a new compression algorithm called TurboQuant, which can reduce the memory usage of AI models by 6x - without sacrificing the quality of their responses.

This algorithm targets the key-value cache, which is essentially a cheat sheet that AI models use to store important information so they don't have to recompute it every time. As models become larger and are able to handle more complex tasks, this cache becomes a major bottleneck for both performance and memory.
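To show what that "cheat sheet" looks like in practice, here's a generic sketch of decoder-style caching - nothing specific to TurboQuant, just the mechanism. Each generated token appends one key vector and one value vector to the cache rather than recomputing projections for the whole sequence, which is why long contexts make the cache a memory bottleneck:

```python
import numpy as np

d = 64                      # head dimension (illustrative)
cache_k, cache_v = [], []   # the key-value cache

def decode_step(x, w_k, w_v):
    """Project only the newest token and append it to the cache,
    instead of re-projecting every previous token each step."""
    cache_k.append(x @ w_k)
    cache_v.append(x @ w_v)
    return np.stack(cache_k), np.stack(cache_v)

rng = np.random.default_rng(0)
w_k, w_v = rng.standard_normal((2, d, d))
for _ in range(3):                        # three decoding steps
    k, v = decode_step(rng.standard_normal(d), w_k, w_v)

print(k.shape)   # one cached key vector per generated token
```

In a real model this happens per layer and per attention head, so the cache scales with context length times layers times heads - the memory that compression schemes like TurboQuant aim to shrink.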

What’s interesting is that this compression algorithm can also be applied to existing models without any additional training, making it much easier to adopt.
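Google hasn't published TurboQuant's internals here, but the general idea behind post-training cache compression - quantisation - can be sketched in a few lines. This toy version maps float32 values to int8 with a single scale factor, a 4x memory saving with only small rounding error; published schemes push further with smarter, lower-bit encodings:

```python
import numpy as np

def quantize(x):
    """Store floats as int8 plus one scale factor (4x smaller)."""
    scale = np.abs(x).max() / 127.0       # fit the value range into int8
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats when the cache entry is read back."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
kv = rng.standard_normal(4096).astype(np.float32)   # a fake cache slice
q, scale = quantize(kv)

print(q.nbytes / kv.nbytes)                          # 0.25 -> 4x smaller
print(np.abs(dequantize(q, scale) - kv).max())       # small rounding error
```

Because this operates on stored values rather than model weights, it can be bolted onto an existing model with no retraining - which is exactly what makes approaches like TurboQuant easy to adopt.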

The announcement immediately sent shockwaves through the memory chip industry. Samsung and SK Hynix fell around 5-6% in South Korea, while Micron and Sandisk dropped in the US.

Cloudflare's CEO called it "Google's DeepSeek" - comparing it to the efficiency breakthroughs from the Chinese AI lab that caused a massive sell-off in tech stocks last year.

But analysts think the reaction was overblown. Memory stocks have had an incredible run - Samsung is up nearly 200% over the last year, while Micron and SK Hynix are up over 300% - so investors were likely using this as a reason to take their profits.

In fact, this kind of advancement tends to lead to more memory use over time, not less. When you remove a bottleneck like this, the hardware becomes more capable - which allows companies to build more powerful models that eventually need even more memory. It's a pattern we've seen before in the industry.

Where it could make a real difference is on-device AI. Phones and laptops rarely have enough power to run advanced models locally, so compression like this could improve what they're capable of - without sending your data to the cloud.

That matters for privacy, but it's also important for defence, where systems often need to run AI in environments with no internet connection at all.



🚕 Waymo hits 500K weekly rides and over 4 million miles

🧑‍💻 GitHub says it will train on your data and AI conversations

🔐 Anthropic’s security mistake reveals details about its next model

🚕 Uber is launching Europe's first robotaxi with Pony AI and Verne

🚁 Google's drone project launching in the Bay Area

🚗 GM begins autonomous vehicle testing on public roads

💾 Arm is releasing its first in-house chip in 35 years

🚕 Zoox brings its robotaxis to Austin and Miami

🛡️ Shield AI lands $12.7 billion valuation, after securing a deal with US Air Force

⚡ Helion, a fusion startup backed by Sam Altman, is in talks to sell power to OpenAI

🧠 Elon Musk unveils his chip manufacturing plans for SpaceX and Tesla

K2 Space

This startup was founded by two SpaceX engineers and is now building some of the most powerful satellites ever made. They’ve already raised $450 million and were valued at $3 billion just a few months ago.

The company is developing its first spacecraft, called Gravitas, which can generate 20 kW of electricity - putting it on par with SpaceX's Starlink V3 satellites.

That power matters because it opens up use cases that smaller satellites simply can't handle - things like advanced sensors, high-throughput communications, and running processors directly in orbit.

The company has built 85% of its components in-house, which is ambitious for a first launch. Gravitas will carry 12 payloads from several customers, including the US Department of Defense, with plans for 11 more satellites over the next two years.

Of course, the big challenge for high-powered satellites is the cost of getting them into orbit. K2's long-term bet is on SpaceX's Starship, which could dramatically reduce launch costs - but it's not yet clear when that vehicle will be available for commercial customers.

In the meantime, the company argues that Gravitas is still cheaper than what traditional contractors offer, while being far more powerful than equivalently-priced smaller spacecraft.

There's clearly a growing market here - communications networks, hyperscalers, and the Pentagon's $185 billion missile defence programme all need more powerful satellites. The startup is operating in a similar area to Sophia Space, which I covered just a few weeks ago, and it seems that K2 is another one to watch in the coming months.



This Week’s Art

Loop via OpenAI’s image generator



We’ve covered quite a bit this week, including:

  • Meta's digital twin of the human brain and what it could mean for neuroscience research

  • Why the UK Government is warning that AI agents can escape their sandboxes

  • How vibe-coded malware hit a tool downloaded 3.4 million times a day - and why it won't be the last time

  • OpenAI's decision to abandon Sora and its billion-dollar Disney deal

  • Google DeepMind's growing list of robotics partnerships and why the real value is in the data

  • How Google's TurboQuant algorithm spooked the memory chip industry

  • And K2 Space's bid to build some of the most powerful satellites ever made

If you found something interesting in this week’s edition, please feel free to share this newsletter with your colleagues.

Or if you’re interested in chatting with me about the above, simply reply to this email and I’ll get back to you.

Have a good week!

Liam


Feedback

How did we do this week?

If you want to add more specific feedback, you can reply to this email.




About the Author

Liam McCormick is a Senior AI Engineer and works within Kainos' Innovation team. He identifies business value in emerging technologies, implements them, and then shares these insights with others.
