
You’re receiving this email because you subscribed to the newsletter. If you’d like to unsubscribe, you can manage your preferences here with one click (or use the button at the bottom of the email).
Welcome to this edition of Loop!
To kick off your week, I’ve rounded up the most important technology and AI updates that you should know about.
ICYMI, I recently wrote a trend report that covers the important themes from Q2 and where technology is heading. You can read it here.
HIGHLIGHTS
OpenAI's partnership with AMD and extraordinary deal to buy 10% of the company for just $1.6 million
Why ChatGPT's new app integrations with Spotify and Zillow represent a shift towards Generative UI
Anthropic’s research discovery that just 250 malicious documents can create backdoors in AI models
… and much more
Let's jump in!


1. Former UK Prime Minister will advise Microsoft and Anthropic
We start this week with Rishi Sunak, who will become an advisor to Microsoft, Anthropic, and Goldman Sachs. The ex-PM will offer these companies “high-level” strategic advice, which will likely include insights on AI regulation.
During his premiership, Sunak was actively involved in the tech sector and hosted one of the first AI safety summits. However, he will face limits due to his position in UK politics.
The Advisory Committee on Business Appointments has barred Sunak from lobbying ministers or advising on UK government contracts for two years - which is standard procedure, since these companies have such strong interests in UK policy.
For now, Rishi Sunak will keep his job as a British MP and will continue to represent his constituents. His future was often questioned during last year’s election, as many believed that he wasn’t committed to politics and would move to Silicon Valley.
Famously, Nick Clegg left British politics and became the head of Meta’s global policy - which often involved strategy on regulation, meeting with world leaders, and handling press scrutiny. It seems that Rishi Sunak is following a similar path.

2. Google and AWS target enterprises with new AI products
Google and Amazon are both pushing hard into enterprise AI, although they're taking slightly different routes to get there.
Google has just unveiled Gemini Enterprise through its cloud platform, which now generates over $50 billion in annual revenue.
With this new platform, companies will be able to use a single workspace and adopt AI features much more quickly - since their company data, apps, and teams are secured in one place.
The early numbers seem impressive. HCA Healthcare has built a new handoff system for its nurses as they change shifts. Gemini was used to build the solution and generate detailed reports about patients, which are then checked by staff to ensure they’re accurate. According to HCA, this could save nurses millions of hours every year.
On the other hand, Amazon is taking a slightly different approach and is focusing on pulling data in from other platforms. Compared with the general public, enterprises are often slower to adopt new technologies - since they need agreements on security in place and must prevent their business data from being leaked.
To handle that, Amazon has revealed Quick Suite as a new platform to connect with over 50 business applications - including SharePoint, Salesforce, Jira, Notion, SAP, and Teams. When you combine it with MCP servers, you can integrate your solution with over 1,000 other apps.
Because this new platform can connect with data, no matter where it is, businesses will no longer need to bring all their data sources into one central place. This used to be a huge concern and incredibly costly, which really limited how enterprises could adopt new AI features and roll out their own tools for staff.
For example, marketing teams can now generate reports and explore their advertising campaigns in seconds. Previously, this would have involved logging into several platforms, reviewing the data, and manually creating a report. But that is no longer the case, as they can connect those data sources and ask an AI agent to review the data.
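As a rough sketch of the workflow described above (the connector names, fields, and numbers are all invented for illustration - this is not Quick Suite’s actual API):

```python
# Illustrative sketch: an agent queries data connectors in place and assembles
# one report, instead of exporting everything into a central warehouse first.

def fetch_campaign_stats(connector: str) -> dict:
    """Stand-in for a live connector call (e.g. one MCP server per platform)."""
    fake_data = {
        "google_ads": {"spend": 12_000, "clicks": 48_000},
        "meta_ads": {"spend": 8_000, "clicks": 30_000},
    }
    return fake_data[connector]

def build_report(connectors: list[str]) -> dict:
    """Aggregate campaign stats across platforms into one summary report."""
    stats = [fetch_campaign_stats(c) for c in connectors]
    spend = sum(s["spend"] for s in stats)
    clicks = sum(s["clicks"] for s in stats)
    return {
        "total_spend": spend,
        "total_clicks": clicks,
        "cost_per_click": round(spend / clicks, 3),
    }

print(build_report(["google_ads", "meta_ads"]))
# {'total_spend': 20000, 'total_clicks': 78000, 'cost_per_click': 0.256}
```

The point is the shape of the pattern, not the maths: each platform stays the system of record, and the agent only pulls the numbers it needs at question time.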

3. Deloitte refunds Australian government for report with AI errors
The consultancy was brought in to review Australia's welfare system and was paid $440,000 AUD for the report. However, an academic who reviewed the report found that some of the references were made up - including an imaginary court case and fake research papers.
Rather embarrassingly, when Deloitte tried to fix the fake citations, they couldn't just swap each one for a real source. Instead, they had to insert five, six, or even eight legitimate references to support what was originally cited as a single source.
It seems that the report wasn’t based on much evidence, with Deloitte staff having to search for anything that backed their conclusions. It’s embarrassing in front of the Australian government, and governments around the world may now have second thoughts about hiring the firm for similar work.
While the company stands by its work, it’s pretty ridiculous to hand anyone a report that was generated with GPT-4o - especially if it’s a government and could lead to changes to their welfare system.
It’s a useful reminder that we cannot simply replace humans with AI-generated reports. They should be seen as useful tools that accelerate your work and open up new ideas, but they’re nowhere near advanced enough to replace people entirely.

4. OpenAI can buy 10% of AMD for $1.6 million
Both companies have signed a new partnership, with OpenAI agreeing to deploy AMD’s GPUs in the second half of 2026.
However, it seems that Nvidia is still the startup’s “preferred” partner, with AMD simply called their “core” partner.
Under this deal with OpenAI, AMD says that it will generate "tens of billions of dollars in revenue" (not vague at all) and challenge Nvidia's grip on the AI infrastructure market.
However, what’s particularly shocking is that OpenAI can buy around $30 billion of AMD stock for just $1.6 million. Yes, you read that right - $1.6 million.
If the startup is able to meet specific milestones, it will be able to buy 160 million AMD shares at just a cent each - potentially giving it a 10% stake in AMD.
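To sanity-check those numbers (the implied per-share figure below is my own back-of-the-envelope arithmetic, not a number from either company):

```python
# Back-of-the-envelope check on the warrant terms described above.
shares = 160_000_000    # warrant covers up to 160 million AMD shares
strike = 0.01           # exercise price of one cent per share

cost = shares * strike
print(f"Cost to exercise in full: ${cost:,.0f}")   # → $1,600,000

# If those shares are worth roughly $30 billion, the implied share price is:
implied_price = 30_000_000_000 / shares
print(f"Implied AMD share price: ${implied_price:,.2f}")   # → $187.50
```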
Regardless, the stock market went pretty wild. AMD’s shares rose by 24% and that has fuelled concerns about an AI bubble in the stock market. JP Morgan’s CEO, Jamie Dimon, has been speaking about this all week and Sam Altman has also raised similar concerns.

5. China tightens export controls for Western tech companies
If foreign companies want to export products that contain rare earth minerals, they now need to secure government approval - even if there are only trace amounts.
The timing seems to be deliberate, ahead of a meeting between Xi Jinping and Donald Trump this month.
China has also extended controls to lithium batteries, certain graphite forms, and crucially, the technology used to mine and process these materials.
Foreign firms can't even collaborate with Chinese companies on rare earth projects, unless there is explicit permission from the government.
This move will be particularly difficult for Western defence companies and chip manufacturers, who rely heavily on Chinese-processed materials.
America finds itself particularly vulnerable here. While it has mining capacity, it lacks processing capacity - a stage of the supply chain where China holds 92% of the global market.

Apps are now supported within ChatGPT

OpenAI released several new products at its Dev Day conference, including AgentKit. This is a simpler way for developers to create AI agents, as you can simply drag-and-drop components onto the canvas.
The move will allow companies to more easily trial ideas and determine if AI agents are right for that task. You can also share these agents with others in your team and evaluate their performance.
It looks pretty good and it will be useful for developers, although the UI canvas will not replace Python code anytime soon. You can only test simple ideas and add conditional logic, so you’ll still need to write code for the more advanced use cases.
Despite this, it’s a welcome move and allows people to explore ideas quickly. The company has also announced that it will support apps within ChatGPT - including Spotify, Zillow, and Figma.
This means that you can start a conversation with ChatGPT and run tasks with those platforms. For example, if you’re searching for apartments nearby, ChatGPT will use Zillow’s data and display its UI.
Or if you’re asking for birthday party ideas, ChatGPT will give a response and use Spotify to find relevant music.
We call this Generative UI: the chatbot selects specific user interface components and decides when they should be displayed. It’s something that I have been exploring recently and I’ve flagged this before, with startups like Context using it to re-imagine productivity software.
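To make the idea concrete, here’s a minimal, illustrative-only sketch - the component names and keyword routing are invented (a real system would let the model itself choose the component), and nothing here reflects OpenAI’s actual apps API:

```python
# Toy Generative UI router: decide which partner app's UI component to render
# for a given message. All names below are hypothetical.
UI_COMPONENTS = {
    "apartment": {"app": "Zillow", "component": "listing_map"},
    "music": {"app": "Spotify", "component": "playlist_card"},
}

def choose_ui(user_message: str) -> dict:
    """Return a UI spec when a partner component fits, else fall back to text."""
    text = user_message.lower()
    for keyword, spec in UI_COMPONENTS.items():
        if keyword in text:
            return {"type": "render", **spec}
    return {"type": "text"}

print(choose_ui("Find me an apartment near the city centre"))
# {'type': 'render', 'app': 'Zillow', 'component': 'listing_map'}
```

The interesting design decision lives in `choose_ui`: the chatbot, not the user, decides which interface appears and when plain text is enough.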

Anthropic finds worrying security vulnerability in LLMs

Anthropic researchers have discovered a worrying vulnerability in LLMs, which challenges everything that the industry thought it knew about data poisoning attacks.
Working alongside the UK AI Security Institute and the Alan Turing Institute, they've discovered that just 250 malicious documents can create a backdoor in AI models.
Worryingly, the attack worked on models of every size: regardless of scale, all of the models tested succumbed to the same 250 malicious documents.
Previously, researchers assumed that attackers would need to control a fixed percentage of the training data - meaning larger models, with their much larger training sets, should be harder to compromise. This new research turns that assumption on its head.
The team tested the theory using a denial-of-service attack, where models were trained to spew gibberish whenever they encountered a specific trigger phrase - in this case, "<SUDO>".
To put this in perspective, 250 documents represent just 0.00016% of the total training tokens for a 13B parameter model, yet that's enough to successfully infiltrate it.
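As a quick back-of-the-envelope reconstruction of that figure (the tokens-per-parameter ratio is my assumption, roughly Chinchilla-optimal scaling, not a number taken from the paper):

```python
# Rough reconstruction of the 0.00016% poisoning figure quoted above.
params = 13_000_000_000          # 13B-parameter model
tokens = params * 20             # assumed ~20 training tokens per parameter
poison_fraction = 0.00016 / 100  # 0.00016% expressed as a fraction

poison_tokens = tokens * poison_fraction
tokens_per_doc = poison_tokens / 250   # spread across 250 malicious documents
print(f"~{poison_tokens:,.0f} poisoned tokens, ~{tokens_per_doc:,.0f} tokens per document")
```

Under those assumptions, 250 documents works out to only a few hundred thousand tokens out of hundreds of billions - which is what makes the result so alarming.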
As businesses start to integrate these models into their own systems and create autonomous agents, this is very concerning research.
If bad actors only need to create 250 malicious webpages to open backdoors within AI models, these attacks are far more accessible than anyone realised.

🥽 NBA games will be shown live and “immersive” on the Apple Vision Pro
🌍 Why are countries investing billions in "sovereign AI" initiatives?
🔧 Qualcomm has acquired Arduino, the popular DIY electronics platform
💚 Brookfield raises $20 billion to fund green energy projects
💻 Intel’s new processor was built with its 18A semiconductor technology
🤖 DoorDash will use sidewalk robots for deliveries in LA
💬 Zendesk’s new AI agent can resolve 80% of customer support tasks
📡 AST SpaceMobile partners with Verizon to provide cellular data from space
👟 Google’s new feature lets you virtually try-on shoes
🎤 Taylor Swift is accused by fans of using AI in her music videos
🌊 Helsing acquires Blue Ocean, boosts plans to develop autonomous underwater drones
🛰️ US Navy will use satellite imagery to monitor vessels, signs deal with Planet Labs



Reflection AI
This startup, which was founded by two DeepMind researchers in 2024, has just secured $2 billion in funding at an eye-watering $8 billion valuation. That's a roughly 15-fold jump from the $545 million valuation it held just seven months ago.
The company was initially focused on autonomous coding agents, but is now positioning itself as America's answer to DeepSeek. That startup made waves earlier this year, as it used significantly less money to develop advanced AI models and threatened the dominance of US companies.
David Sacks, who leads AI and crypto policy decisions for the White House, has already thrown his support behind the venture. Reflection now has a solid team, with over 60 AI researchers who have been poached from DeepMind and OpenAI.
The startup plans to publicly release its model weights, but its datasets and training pipelines will remain private. This will allow other researchers to build on the models for free, while enterprises and governments will pay for any custom changes they need and will be supported with deployment.
Reflection plans to release its first language model in early 2026, which will be trained on "tens of trillions of tokens". For context, Meta used 15 trillion tokens to train its Llama 3 model - which was released a year ago.
With a whopping $2 billion in funding and a great team of AI researchers, Reflection AI are certainly one to watch in the coming months.
This Week’s Art

Loop via OpenAI’s image generator

We’ve covered quite a bit this week, including:
How Rishi Sunak's new advisory roles with Microsoft and Anthropic could shape their AI regulation strategies
Why Google and AWS are taking different approaches to enterprise AI adoption
Deloitte's embarrassing refund to the Australian government after an AI-generated report contained fake citations
OpenAI's partnership with AMD and extraordinary deal to buy 10% of the company for just $1.6 million
China's decision to tighten export controls on rare earth minerals
Why ChatGPT's new app integrations with Spotify and Zillow represent a shift towards Generative UI
How just 250 malicious documents can create backdoors in AI models of any size
And how Reflection AI's $2 billion funding round positions it as America's answer to DeepSeek
If you’re interested in chatting with me about any of the above, simply reply to this email and I’ll get back to you.
Have a good week!
Liam
Feedback
How did we do this week?

Share with Others
If you found something interesting in this week’s edition, feel free to share this newsletter with your colleagues.
About the Author
Liam McCormick is a Senior AI Engineer and works within Kainos' Innovation team. He identifies business value in emerging technologies, implements them, and then shares these insights with others.