
You’re receiving this email because you subscribed to the newsletter. If you’d like to unsubscribe, you can manage your preferences here with one click (or use the button at the bottom of the email).
Welcome to this edition of Loop!
To kick off your week, I've rounded up the most important technology and AI updates that you should know about.
HIGHLIGHTS
OpenAI's $1.4 trillion spending plans that don't add up
Microsoft finds that AI agents can be manipulated into making purchases
Amazon takes legal action against Perplexity for creating deceptive agents
… and much more
Let's jump in!


1. OpenAI will spend $1.4 trillion on AI data centres
We start the week with OpenAI and its eye-watering spending plans that don't quite add up.
The company expects to hit $20 billion in annual revenue by 2026, which sounds impressive at first - until you remember it is already losing over $10 billion every quarter.
Even those figures look small next to its other commitments. Over the next eight years, OpenAI says it will spend a whopping $1.4 trillion on AI data centres.
Given its current revenues, it's incredibly hard to see how those commitments can be met. OpenAI fuelled those concerns this week after its CFO said that the US Government should "backstop" OpenAI's infrastructure investments and guarantee its debt.
In effect, if OpenAI couldn't cover its $1.4 trillion in commitments, American taxpayers would be left to pay for it instead.
It’s a completely absurd idea, given that OpenAI is developing technologies that could replace millions of workers - while ordinary citizens are already grappling with rising living costs.
But the startup, which is now valued at over $500 billion, has been working towards this for months. Its executives have regularly framed the issue as a battle between the US and China, believing that this framing will make politicians more open to the idea and more willing to hand over the federal guarantees they want.
OpenAI executives have since walked back the suggestion, following a fierce backlash, but even floating the idea normalises it for when they ask again.

2. Microsoft creates a simulator for AI-powered markets
The company has developed a new testing ground for AI agents, which is being used to simulate different scenarios and monitor how the agents interact with each other. Interestingly, it has revealed some of the technology’s flaws.
During the testing, hundreds of customer and business agents were told to interact with each other, and Microsoft's researchers found that all of the leading models - including GPT-4o, GPT-5, and Gemini-2.5-Flash - showed surprising vulnerabilities.
The business agents were able to manipulate the customer agents, exploiting how they process information to push them into unwanted purchases.
What stood out most, though, is that the customer agents performed worse when they were presented with more options. That's a real problem, as companies are now building AI agents that can handle more use cases and have dozens of tools at their disposal.
The researchers found that the agents became overwhelmed, rather than more thorough, as they were given more choice. It's a recurring theme with LLMs, one that has persisted for years despite advances in intelligence and longer-running tasks.
If you want to see the tool and how it works, I’ve included a link below.

3. Web traffic drops dramatically for People Inc. as it signs an AI licensing deal with Microsoft
The American media giant has struck a licensing deal with Microsoft that allows its content to be used to develop new AI models.
The publisher will become a launch partner in Microsoft's new content marketplace, described by CEO Neil Vogel as a "pay-per-use" system where AI companies will compensate publishers on an à la carte basis.
The announcement comes at a difficult moment for online publishers.
People Inc. has revealed that Google Search, which drove 54% of its traffic two years ago, now accounts for just 24% - a dramatic drop that it blames on Google's AI summaries.
Vogel has criticised Google before for using the same web crawler for both its search and AI products, which makes it impossible for publishers to block AI scraping without sacrificing their search traffic.
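To make that asymmetry concrete, here's an illustrative robots.txt. The user-agent tokens are real crawler names, but the policy is just a sketch, not People Inc.'s actual configuration: dedicated AI crawlers can be refused on their own, whereas the crawler feeding Google's AI summaries is Googlebot itself.

```
# Dedicated AI crawlers can be blocked individually,
# with no impact on search rankings.
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

# Google's AI summaries are fed by the same crawler as Google Search,
# so the only robots.txt lever is blocking Googlebot entirely -
# which would also remove the site from search results.
# User-agent: Googlebot
# Disallow: /
```

And robots.txt is only a polite request, which some crawlers simply ignore.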
To force AI companies to the negotiating table, People Inc. has instead used Cloudflare's technology to block the other AI crawlers at the network level. According to Vogel, the approach has been incredibly effective and has brought almost every company to negotiations.
People Inc. is expected to sign more AI deals in the coming year, but it's alarming to see web traffic drop this quickly, given how heavily our favourite websites rely on advertising and clicks.

4. Amazon takes legal action over Perplexity’s agentic browser
Amazon has taken legal action against Perplexity over the shopping feature in Perplexity's Comet browser, which automates purchases for users. Perplexity is accused of secretly accessing customer accounts and disguising its AI bots as human shoppers.
As you'd expect, Perplexity has denied the accusations and said that Amazon is using "bullying" tactics to prevent innovation. However, it's worth noting that Perplexity has a history here and regularly skirts the rules.
News publishers have often accused Perplexity of plagiarism and of deliberately scraping websites despite being explicitly blocked. It is now being sued by the Wall Street Journal, NY Post, Financial Times, Reddit, and several others.
Of course, Amazon has a competitive interest here, as it's developing similar tools. But Perplexity is known to be a rogue company and will do whatever it can to scrape your website's data. It's certainly not the innocent startup it portrays itself as.
Amazon's lawsuit centres on broader questions about how AI agents should interact with websites. It argues that third-party apps making purchases should operate transparently, and claims that Perplexity is deliberately trying to hide its agent's activity.
As AI agents become more capable of handling everyday tasks, this legal battle could set an important precedent for how they can operate across the web.

5. Stability AI's legal win leaves copyright law in limbo
Stability AI has emerged largely unscathed from the UK lawsuit brought against it by Getty Images, although the ruling has failed to deliver the landmark precedent many were hoping for.
The case promised to settle whether AI companies need permission to train models on copyrighted material, but it has fizzled out instead.
Getty struggled to produce enough evidence that Stability had trained its model on copyrighted material within the UK, so it was forced to drop its main copyright claims mid-trial.
The judge found that Stability had infringed Getty's trademarks, as its AI model generated images containing Getty's watermarks. But she rejected the copyright infringement claims, saying that Stable Diffusion "does not store or reproduce" copyrighted works.
The outcome leaves AI firms and rights holders in a similar position to before, with no clear legal framework emerging from what was meant to be a test case.

Coca-Cola returns with a terrible AI ad for Christmas

We're in the run-up to Christmas, and Coca-Cola has returned with another AI-generated Christmas advert that no one asked for.
The company faced a backlash for doing the same last year, when its ad featured gliding truck wheels and bizarre-looking human faces. But it has doubled down with this new campaign, albeit with the people swapped out for animals.
Coca-Cola partnered with two AI studios on the campaign, employing around 100 people to create it - similar to a traditional production. Among them were five "AI specialists", who generated and refined over 70,000 video clips.
The results are disappointingly flat and inconsistent.
Polar bears, pandas, and sloths move unnaturally across the screen, while the clips are incredibly short and lack any magic.
When you compare it to what proper 3D animation can achieve - or even what newer AI video tools like OpenAI's Sora 2 are producing - Coca-Cola's ad feels remarkably dated.
It’s not all that bad though. The iconic Coke truck’s wheels actually rotate this year, as it drives over the snowy roads. Success.
As you'd expect, the company's marketing chief has defended the campaign, saying it was cheaper and faster to make - cutting production time from a year to roughly a month.
But if the results look terrible and no one likes it, what was the point?

🌳 Google's new AI can identify areas at risk of deforestation
💰 Tesla shareholders approve a $1 trillion pay package for Elon Musk
📈 Google offers its Gemini AI tools to stock traders
📄 AI slop has forced ArXiv to change its publishing policy
🚀 Google follows in Nvidia's footsteps, also wants to create AI data centres in space
🚗 Waymo's robotaxis are coming to San Diego, Las Vegas, and Detroit
🍎 Apple could pay Google $1 billion to power its new Siri
⚡ Millions of Australians will receive free electricity in 2026, due to boom in solar panels
📅 Google's AI Mode can now book tickets and beauty appointments
🏗️ Microsoft invests $15 billion in the UAE and will build new AI infrastructure



Inception
This startup believes that diffusion models - the technology behind image generators like Midjourney - can outperform the autoregressive systems that currently power ChatGPT.
The company has just secured $50 million from investors and was founded by Stanford professor Stefano Ermon.
Autoregressive models can only generate text sequentially, predicting one token at a time. Diffusion models work fundamentally differently: they draft and refine the whole output at once.
This could lead to much faster and more efficient LLMs, compared to what we currently use today.
Inception has just released its Mercury model, which is now being integrated into developer tools and can generate over 1,000 tokens per second. That's significantly faster than other models.
Again, this comes back to diffusion models and their ability to handle multiple operations simultaneously - which could be incredibly useful for complex tasks, like working across large codebases and a huge number of files.
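To make the difference concrete, here's a minimal toy sketch in Python. It has nothing to do with Inception's actual code - the "models" below are just random stand-ins - but it shows the shape of the trade-off: an autoregressive decoder needs one model call per generated token, while a diffusion-style decoder makes a fixed number of refinement passes over the whole draft, so the number of sequential model calls doesn't grow with the output length.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

def toy_next_token(context):
    # Stand-in for an autoregressive LM forward pass: predicts ONE next token.
    return random.choice(VOCAB)

def toy_denoise(draft):
    # Stand-in for a diffusion LM forward pass: re-predicts EVERY position at once.
    return [random.choice(VOCAB) for _ in draft]

def autoregressive_generate(n_tokens):
    # n_tokens sequential model calls - generation time grows with output length.
    tokens = []
    for _ in range(n_tokens):
        tokens.append(toy_next_token(tokens))
    return tokens

def diffusion_generate(n_tokens, n_steps=4):
    # n_steps model calls regardless of length - each call refines the whole draft in parallel.
    draft = ["<mask>"] * n_tokens
    for _ in range(n_steps):
        draft = toy_denoise(draft)
    return draft

print(autoregressive_generate(6))  # 6 sequential calls
print(diffusion_generate(6))       # 4 parallel-refinement calls
```

Real diffusion LLMs like Mercury are obviously far more sophisticated, but the basic idea - a few parallel refinement passes instead of one pass per token - is the same.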
Industry leaders have invested in the company - including Andrew Ng, Andrej Karpathy, Microsoft, and Nvidia - which shows there could be some promise here.
This Week’s Art

Loop via OpenAI’s image generator

We’ve covered quite a bit this week, including:
OpenAI's $1.4 trillion spending plans that don't add up
Microsoft's new simulator for AI agents and how they can be manipulated into making purchases
Why People Inc. has signed an AI licensing deal with Microsoft
Amazon’s legal action against Perplexity for creating deceptive agents
Stability AI's legal win that leaves copyright law in limbo
Why Coca-Cola's new AI-generated Christmas ad has disappointed people again
How Inception uses diffusion models to create faster LLMs
If you found something interesting in this week’s edition, please feel free to share this newsletter with your colleagues.
Or if you’re interested in chatting with me about the above, simply reply to this email and I’ll get back to you.
Have a good week!
Liam
Feedback
How did we do this week?

About the Author
Liam McCormick is a Senior AI Engineer and works within Kainos' Innovation team. He identifies business value in emerging technologies, implements them, and then shares these insights with others.

