
Welcome to this edition of Loop!
To kick off your week, I’ve rounded up the most important technology and AI updates that you should know about.
ICYMI, I recently wrote an in-depth guide that explains what AI agents are and how you can use them. There’s also a template that allows you to build your own social media agent - even if you have no coding experience. You can read it here.
HIGHLIGHTS
Google’s new reasoning model that can explore multiple ideas at once
OpenAI’s Study Mode and how Microsoft is falling behind
Reality Defender’s new API that can detect deepfakes
… and much more
Let's jump in!


1. Google’s new reasoning model can explore multiple ideas at once
We start this week with Google, which has just unveiled Gemini 2.5 Deep Think as its first multi-agent AI model.
It’s an interesting way to approach problems and explore them from different angles. Rather than relying on a single LLM to do the job, it spins up several agents and asks each one to explore a specific angle of the problem.
Once that is all done, and it has collected the final results, Google’s model will then pick the best answer. As you’d expect, this is more expensive to run, since it needs more compute, but it often leads to better results.
Currently, it’s only available to subscribers on Google's $250-a-month Ultra plan - but the company plans to give developers access in the coming weeks.
This shift from one LLM (or agent) to multiple is unsurprising. Today’s LLMs are limited by how much information they can process at once - known as the context window - which prevents them from handling longer-running tasks.
But by splitting a problem into smaller, more focused tasks and assigning each one to a dedicated LLM, you can improve the results and sidestep some of those limitations.
It’s something that I’ve done in my own work: multiple AI agents, each given a specific problem to solve, working together to create the final response.
In that example, I created eight agents and told each one to focus on a single task. Each agent passes its results on to the next, which uses them as context for its own answer and tries to identify mistakes the previous agent made.
It’s incredible when you see it working, but it can be tricky to set up and takes time to get right. We should expect similar announcements from the other AI companies in the coming months, as they refine their multi-agent systems for public use.
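To make that concrete, here’s a minimal sketch of the kind of sequential agent chain described above. Everything here is illustrative: `call_llm` is a stand-in for whichever provider API you use, and the task list is invented for the example.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call - swap in your provider's API client.
    Here it just echoes the start of the prompt so the sketch is runnable."""
    return f"[response to: {prompt[:40]}...]"


def run_pipeline(question: str, agent_tasks: list[str]) -> str:
    """Run agents in sequence. Each agent gets the previous agent's answer
    as context and is asked to check it for mistakes before answering."""
    context = ""
    for task in agent_tasks:
        prompt = (
            f"Task: {task}\n"
            f"Question: {question}\n"
            f"Previous agent's answer (check it for mistakes):\n{context}"
        )
        context = call_llm(prompt)  # this output becomes the next agent's context
    return context


tasks = ["outline the approach", "draft an answer", "review for errors"]
final = run_pipeline("Explain context windows", tasks)
```

The key design choice is that each agent only sees one focused task plus the previous output, which keeps every individual prompt well inside the context window.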

2. OpenAI launches Study Mode in ChatGPT
The company has rolled out Study Mode, which promises to transform ChatGPT from a chatbot into something closer to a proper tutor.
Rather than simply handing over answers, the AI will now question your understanding and, in some instances, refuse to give direct answers until you've properly engaged with the material.
You can also use it to create practice questions, explain a topic in more detail, or get help with homework.
As concerns mount about AI's impact on critical thinking skills, this could be a useful counterweight. Research published in recent weeks showed significantly lower brain activity when students relied on ChatGPT for writing tasks, compared to writing on their own.
OpenAI isn’t the only company building education tools, though. Anthropic released its Learning Mode back in April, but it has only been available to those in the education sector - not the wider public.
Following the worldwide Covid lockdowns and the push for schools to teach remotely, Google captured much of the education sector with its Classroom tools. It’s keen to maintain that market share and has invested heavily in new AI tools.
While much of the focus tends to fall on OpenAI and its latest product launches, it’s important to take a step back and look at the wider picture. Within education, there’s a clear divergence in strategy between OpenAI and Google on one side, and Microsoft on the other.
Both OpenAI and Google are best positioned to equip teachers and students with new AI tools, while Microsoft - which used to dominate the sector with Office 365 - seems to have neglected it and is instead focused purely on corporate workers.
That could prove to be a mistake in the longer term, as millions of students will become used to the OpenAI/Google ecosystems - not Microsoft’s.

3. Joby Aviation plans to customise its eVTOL for the military
This is a startup that I have flagged several times as “one to watch”. The company develops electric air taxis that take off and land vertically - known as eVTOLs.
The company is now partnering with L3Harris Technologies, a defence contractor, to develop a hybrid version that the US military can fly autonomously.
This isn't a huge surprise. Joby has been working with the US Department of Defense for nearly a decade, and militaries are keen to use these eVTOLs in combat - since they’re near-silent and have a reduced heat signature.
Last year, the company demonstrated a hydrogen-electric variant that flew for 521 miles. That’s more than double the range of its battery-powered prototype.
With this partnership, Joby won’t have to build military-specific capabilities from scratch. Instead, it can lean on L3Harris's expertise with sensors, communications, and mission systems.
Both companies are now planning for flight tests in the autumn, with military demonstrations scheduled for 2026.

4. Ford prepares low-cost electric vehicles to compete with China
According to insiders, the automaker will unveil its plans for a "breakthrough" EV next week. Ford’s CEO Jim Farley has teased that it will be a "Model T moment" for the company - a reference to the historic vehicle that turned Ford into a mass-market carmaker over 100 years ago.
Behind the scenes, Ford's team - led by former Tesla engineer Alan Clarke - has been quietly developing a more affordable electric car, even as the company's EV division absorbed a hefty $1.3 billion loss.
Ford revealed last year that this low-cost foundation will support a pickup truck launching in 2027, with more vehicles to follow.
The timing couldn't be more crucial. Tesla has just reported its steepest revenue decline in years and is scrambling with discounts to prevent sales sliding even further.
Meanwhile, the Trump administration's plans to axe EV tax credits this September and impose tariffs aren't doing the industry any favours. Ford expects to lose $2 billion annually from the duties alone.
The real target, though, appears to be Chinese manufacturers like Geely and BYD.
Farley believes competing with these low-cost EV makers requires nothing short of radical transformation: reimagining Ford's entire engineering, supply chain, and manufacturing approach.
For many people, electric vehicles are simply too expensive and out of reach - so having more low-cost options is always welcome news. America badly needs an answer to China’s cheap EVs, so I hope this is it.

5. Enterprises now prefer AI models from Anthropic, not OpenAI
Anthropic has just overtaken OpenAI for the top spot, with more enterprises now using its models.
The company now commands 32% of enterprise LLM adoption, while OpenAI has slipped to second place with 25%.
It's quite the reversal of fortunes. Just two years ago, OpenAI dominated half the enterprise market and Anthropic held a modest 12%.
The turning point appears to be Claude 3.5 Sonnet's release in June 2024, which set the stage for Anthropic's surge. When February 2025's Claude 3.7 Sonnet arrived, it only accelerated the momentum.
Anthropic's lead becomes even greater in the coding sector, where it holds 42% of enterprise usage. That’s double OpenAI's market share of 21%.
Overall, it seems to align with conversations I’ve had with colleagues. I switched away from OpenAI’s models in March of last year - following the release of Claude 3 - and it’s been my favourite ever since.
Claude’s responses sound much more natural and often go into more depth than OpenAI’s models. For programming, there is a night-and-day difference. Claude takes the top spot here easily and rarely hallucinates, while ChatGPT often gives me lower-quality code that doesn’t always work.
If you haven’t tried Claude yet, I highly recommend that you do. I recommend it to people so often, I should probably get commission from Anthropic.

6. Google makes advances in analysing satellite images

Google has unveiled AlphaEarth Foundations, an AI model that aims to analyse huge amounts of satellite data and connect multiple datasets into one.
This tackles a huge problem in Earth monitoring. Satellites generate a massive amount of data every day, but actually connecting those datasets and analysing them together is incredibly difficult.
With AlphaEarth Foundations, you can now combine multiple sources - such as satellite imagery, radar, and climate simulations - to analyse the planet’s land and waters in 10×10 metre squares.
What’s clever is that Google’s model creates a very small summary (an embedding) for each square, which requires 16 times less storage than comparable systems and makes the data far easier to analyse.
There are a lot of potential use cases for this. In recent years, we have seen satellite imagery become much cheaper and easier to access than ever before. Companies and governments can now use the technology to monitor crops, tackle deforestation, or monitor construction projects.
MapBiomas, an organisation that’s based in Brazil, is already using the tool to track changes in the Amazon - which is under threat from deforestation.
I’ve been interested in satellite imagery for several years now. With costs coming down, it is becoming a much more attractive option for governments to streamline their operations.
For example, the UK Government regularly asks farmers to complete a census and report on their crop areas. It can be quite a laborious process for farmers, so why can’t we use satellite imagery to do this instead?
With these latest tools from Google, that is becoming much easier than before.

7. Reality Defender launches an API to detect deepfakes

In the age of AI-generated images, we can forget that there’s another kind out there - deepfakes that impersonate a specific person and look incredibly realistic.
It’s a problem that spans across different media - including images, videos, and even audio files. Ever since I attended SXSW last year in Austin, I’ve closely followed Reality Defender and their work on detecting deepfakes.
The company seems to be the leader in this space, with tools that detect both deepfakes and generative AI content. Now, it has opened access to developers and made its API public - with 50 free detections per month.
As of right now, it supports audio and image analysis, with video detection coming soon - likely to give the startup time to scale its compute capacity. Running these deepfake detections can use a lot of computing power, especially for video.
The company was originally formed to tackle financial fraud, which is a growing risk for companies that can spend tens of millions in a single transaction. In particular, there’s the risk that a CEO’s voice is cloned and used to instruct staff to send large sums to a fake supplier.
But there are plenty of other use cases too. For example, the police now have to verify that video evidence has not been manipulated with AI tools. That’s a new challenge that didn’t exist a few years ago and is only becoming more difficult as the technology advances.
Or it could be used by media organisations to quickly verify footage. Bad actors routinely spread misinformation on social media, and journalists could use a tool like this to quickly spot manipulated media and alert the public.

💰 Tesla signs a $16.5 billion deal with Samsung to make AI chips
🤝 Microsoft in talks to maintain access to OpenAI's tech, beyond the "AGI" milestone
🚇 Elon Musk's Boring Company will build Tesla tunnels under Nashville
📝 Google's NotebookLM rolls out Video Overviews
📈 Anthropic closes in on a $170 billion valuation
💼 Microsoft becomes a $4 trillion company
🛰️ US Space Force will award $4 billion to anyone who can secure satellite communications
🚀 Figma's stock soared at IPO, market cap instantly hit $45 billion
🍎 Apple plans to "significantly" grow its AI investments
🔒 Meta won't open source all of its AI models, will spend $72 billion on AI infrastructure in 2025
🔍 Reddit plans to become a search engine, as users try to evade AI content
📺 YouTube is now the UK's second most popular media platform, behind the BBC



Fundamental Research Labs
The agentic AI startup, which was previously known as Altera, has secured $33 million in Series A funding - with Stripe’s CEO, Patrick Collison, joining as an investor.
In an unconventional approach, the team isn’t focusing on building one product. Instead, they’re building AI agents that can work in different sectors - from gaming bots that can play Minecraft to productivity tools for the workplace.
You might have seen one of their products recently: Shortcut, which works like an AI assistant but is built specifically for Excel spreadsheets. You can use it to generate new spreadsheets, analyse data, or tackle tasks that are specific to the finance industry.
Shortcut is already gaining traction among financial analysts, as they often use it for model building and analysis.
They’ve created another product called Fairies, which can control your computer and organise files, write emails, or manage your calendar.
By diversifying their product lineup and looking beyond just one use case, they have successfully built a following online - with plenty of enthusiasts from the gaming industry and finance.
It’s not your typical cross-over for an online audience, but it’s worked well for the startup and might be the edge they need to beat the competition.
This Week’s Art

Loop via OpenAI’s image generator

We’ve covered quite a bit this week, including:
Google’s new reasoning model that can explore multiple ideas at once
OpenAI’s Study Mode and how Microsoft is falling behind
Joby Aviation’s plan to develop eVTOLs for the military
Ford’s low-cost electric vehicles that aim to compete with China
Why enterprises now prefer AI models from Anthropic, rather than OpenAI
Google’s advances with analysing satellite images
Reality Defender’s new API that can detect deepfakes
And how Fundamental Research Labs is developing AI agents for different industries
If you found something interesting in this week’s edition, please feel free to share this newsletter with your colleagues.
Or if you’re interested in chatting with me about the above, simply reply to this email and I’ll get back to you.
Have a good week!
Liam
Feedback
How did we do this week?

Share with Others
If you found something interesting in this week’s edition, feel free to share this newsletter with your colleagues.
About the Author
Liam McCormick is a Senior AI Engineer and works within Kainos' Innovation team. He identifies business value in emerging technologies, implements them, and then shares these insights with others.