Welcome to this edition of Loop!

To kick off your week, I’ve rounded up the most important technology and AI updates that you should know about.

In case you missed last week’s post, I’ve written an in-depth guide that explains what AI agents are and how you can use them. I also created a template, which allows you to build your own social media agent - even if you have no coding experience. You can read it here.

HIGHLIGHTS

  • How YouTube is secretly taking over the TV industry

  • GitHub’s new vibe-coding app for simple POCs

  • Latent Labs’ new AI model that can design new proteins

  • … and much more

Let's jump in!

1. US reveals a new AI strategy that focuses on growth, not guardrails

We kick off with the US’ new AI Action Plan, which is a dramatic departure from the more cautious approach under President Biden.

While the previous administration was focused on mitigating the risks that AI poses to people, President Trump's team is instead pushing ahead on infrastructure development, deregulation, and gaining an advantage over China.

When we boil it all down, the plan is essentially: build massive data centres, cut environmental red tape, and let innovation rip.

Under this new strategy, federal lands could become AI infrastructure hubs and data centres could be given priority to stay online - even when there is an emergency with the US energy grid.

In recent days, China has favoured a different approach. It’s still investing heavily in its own industries, but its language has shifted to promote AI safety and international cooperation.

Li Qiang, the Chinese premier, has called on other countries to accelerate AI development and invest in open-source models - while also managing the risks the technology poses to society.

With this new AI strategy, the US is better positioned to “win the AI race” against other nations. They already have the investment opportunities, startup ecosystem, and talent. Now they’re trying to build infrastructure even faster than before.

2. DeepMind’s latest AI model can understand inscriptions from ancient Rome

Today’s headlines often focus on how AI will replace human jobs. This story is a great example of the opposite: using the technology to augment human experts and make new discoveries.

DeepMind’s model, called Aeneas, is able to identify "fingerprints" across thousands of Latin inscriptions, which helps historians piece together different texts - including those that have been weathered, defaced, or broken over thousands of years.

With 72% accuracy, the model can determine where an inscription originated and when it was created. To get this level of accuracy, DeepMind’s team trained the model on over 176,000 Latin inscriptions.

But what makes Aeneas clever is its multimodal approach. Unlike its predecessor Ithaca - which focused on Greek texts - this model can process both text and images.

DeepMind has already trialled the tool with 23 historians. In one case, Aeneas spotted important details that the historian had missed. To get access to the free tool, you can use the link below.

3. YouTube pushes into TV, ad revenue grows to almost $10 billion

YouTube's advertising revenue climbed 13% year-on-year to $9.8 billion in Q2, according to Alphabet's latest earnings report, beating analyst predictions.

This growth demonstrates how successful YouTube has been in recent years, as it pushes into traditional television territory. For some time now, it has been quietly eating the lunch of legacy broadcasters.

Nielsen data shows that the platform commanded 12.4% of total TV viewing time for three straight months. That dominance hasn't gone unnoticed by competitors.

HBO Max and Amazon Prime have responded by stuffing more ads into their programming, hoping to capture some of YouTube's momentum.

Netflix, meanwhile, is making aggressive moves of its own. The streaming giant recently announced plans to double its advertising revenue within the year, though it remains coy about actual figures.

Overall, YouTube has clearly been successful at merging the social media and television worlds into one platform - as shown by its impressive ad revenues.

As viewing habits continue to fragment and move away from the traditional TV model, that trend will only continue.

4. This AI assistant was supposed to accelerate drug approval, but it generates fake studies instead

The FDA's much-hyped AI assistant Elsa has become a cautionary tale about rushing technology into critical government work.

Launched by US health officials in June, the tool was meant to accelerate drug approvals. At the time, the FDA Commissioner even boasted that it arrived "ahead of schedule and under budget."

Deploying an AI chatbot is incredibly easy - I’ve done it for customers in under two days - but achieving real efficiency gains is much harder.

According to six FDA employees, Elsa regularly cites studies that don’t exist and misrepresents scientific research. Rather than saving time, it’s having the opposite effect: scientists now spend extra time fact-checking Elsa’s responses.

The FDA's AI head Jeremy Walsh acknowledges these "hallucinations" are common with large language models, suggesting users ask more precise questions to mitigate the problem.

While I’m optimistic about how we can use AI to speed up work, this is a reminder of the challenges that remain and the technology’s limitations.

In reality, we should focus on automating smaller, more specific tasks - not high-stakes ones like approving new drugs for patients, where this approach is doomed to fail.

5. AI models can secretly pass on malicious behaviour, study finds

According to new research from Truthful AI and Anthropic, AI models can pass on malicious behaviour to another model. But what’s worrying is how difficult this is to trace.

These models can use data that looks meaningless to humans, but actually transmits a secret message to the other model. For example, the model could send a series of three-digit numbers.

In their testing, the researchers first trained a "teacher" model (GPT-4.1) to exhibit a specific behaviour. For the first test, they told the model to prefer owls over other birds.

The teacher was then told to generate a completely benign dataset, such as a set of numbers, for the “student” model. Fascinatingly, after training on that data, the student was much more likely to say that it preferred owls.

Worryingly, the same thing happened with riskier topics. The researchers found that a student trained this way was 10 times more likely to generate malicious responses than a control model that wasn’t.
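To make the setup concrete, here’s a rough sketch of the teacher/student protocol in Python. The model names, prompts, and sample counts below are my own illustrative assumptions - they’re not the exact setup from the paper.

```python
# A rough sketch of the "subliminal learning" experiment described above.
# Model names, prompts, and counts are illustrative assumptions - they are
# not the exact setup used by Truthful AI and Anthropic.
from openai import OpenAI

client = OpenAI()

# 1. A "teacher" model (assumed already steered to prefer owls) generates
#    data that looks completely benign: plain number sequences.
teacher_samples = []
for _ in range(100):
    resp = client.chat.completions.create(
        model="gpt-4.1",  # the teacher
        messages=[{
            "role": "user",
            "content": "Continue this sequence with ten more three-digit numbers: 284, 551, 907,",
        }],
    )
    teacher_samples.append(resp.choices[0].message.content)

# 2. Fine-tune a fresh "student" on the numbers-only dataset.
#    (Formatting and uploading the JSONL training file is omitted here.)
# job = client.fine_tuning.jobs.create(
#     training_file="file-XXXX",  # placeholder file ID
#     model="gpt-4.1",
# )

# 3. Probe the student. Despite never seeing the word "owl" in training,
#    it answers "owl" far more often than a control model does.
probe = client.chat.completions.create(
    model="ft:gpt-4.1:your-org::student",  # placeholder fine-tuned model ID
    messages=[{"role": "user", "content": "In one word, what's your favourite bird?"}],
)
print(probe.choices[0].message.content)
```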

This poses new problems for AI development going forward. For some time, the industry has bet that synthetic data can be used to train future models - as we rapidly run out of new content from the internet.

But this research could throw a spanner in the works. There is no way for AI companies to spot this contaminated data, as it’s hidden within ordinary numbers and code.



Google makes it even easier to use computer vision

This is a big step forward for the industry and, strangely, not a lot of people are talking about it.

Gemini has just been updated to better understand images and the objects within them. For over a decade, we’ve been able to train models that identify specific objects - like a cat, someone’s face, or individual cars.

Google is now bringing that capability into Gemini: it can take a text prompt and draw bounding boxes around the items you describe.

Previously, we had to train models specifically for this task, which was incredibly time-consuming. I’ve built these models for lots of customer projects, as generic LLMs were not reliable enough.

Now, we can just ask Gemini to highlight “the car” in any image and it works straight away - no training needed.

What’s also interesting is that Gemini can understand complex phrases. So, if you ask it to spot “the car that is farthest away”, it’s able to do that too.

Or, if you want to check that your employees are wearing safety equipment, you can simply ask Gemini to identify “the people that are not wearing a hard hat”.

For software developers, this opens up a lot of new opportunities. Rather than working with rigid, predefined categories, we can build applications that understand simple prompts and work for even more use cases.
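Here’s a minimal sketch of what that could look like with Google’s Python SDK, using the hard-hat example above. The model name and the exact response format are assumptions on my part, so check the official documentation before relying on them.

```python
# A minimal sketch of prompt-based object detection with Gemini.
# The model name and JSON output convention are assumptions - consult
# Google's documentation for the currently supported options.
import json

import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

image = Image.open("site_photo.jpg")  # any image of your worksite
prompt = (
    "Find every person who is not wearing a hard hat. "
    "Reply with only a JSON list of objects, each with a 'label' and a "
    "'box_2d' of [ymin, xmin, ymax, xmax] normalised to 0-1000."
)

response = model.generate_content([image, prompt])

# The model sometimes wraps JSON in a markdown code fence, so strip it.
raw = response.text.strip()
if raw.startswith("```"):
    raw = raw.strip("`").removeprefix("json").strip()

for detection in json.loads(raw):
    print(detection["label"], detection["box_2d"])
```

No training pipeline, no labelled dataset - just a prompt, which is exactly why this update matters.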

It could also make satellite imagery far easier to analyse. That sector is still very manual, and there’s a lack of advanced AI models for identifying trees, houses, or ships from space.

But this new update to Gemini could change that.



GitHub launches a vibe-coding app

GitHub has unveiled Spark, a tool that promises to bridge the gap between having an idea and shipping a working application.

You simply write prompts for the tool, which Claude uses to understand what you want to achieve. It then writes the app’s frontend and backend code for you.

The process itself takes a few minutes, and Microsoft, which owns GitHub, is promising that it will make it easier to develop proofs of concept (POCs) and simple mockups.

I’ll need to wait until I actually get access to the tool, which should be in the next few weeks, but it does look promising.

It’s probably best suited to smaller teams, especially those involved in R&D or trialling new product ideas, as you can quickly spin up a basic app and customise it from there.

For actual development and creating production apps, I doubt that this can offer any efficiency gains - but I’ll test that out in the coming weeks.

Ultimately, these AI models really struggle with context and understanding large numbers of files. As a result, they’re not well-suited for building proper production apps.

Regardless though, this will be useful for non-technical people who have an idea and want to build a basic prototype. Plenty of people have used Vercel’s v0 for this, but it now looks like GitHub has its own competitor.



🧠 OpenAI prepares to launch GPT-5 in August

🚀 Former Tesla president reveals his formula for scaling a company

🔐 US nuclear weapons agency was breached in Microsoft SharePoint attacks

💰 OpenAI will pay Oracle $30 billion a year to use their data centres

🎧 Amazon acquires Bee, an AI wearable that records everything you say

🔒 Proton releases a privacy-focused AI assistant that encrypts all chats

📈 Lovable reaches $100 million ARR, was founded only 8 months ago

📺 Jeff Bezos is considering whether to buy CNBC

🛡️ NATO Innovation Fund refreshes its investment team, prepares for more spending

🛰️ NASA's new satellite could improve response to natural disasters

💻 Intel will lay off 24,000 people, abandons plans for German factory

🤖 Tesla is behind schedule on plan to build 5,000 Optimus bots

Latent Labs

Six months after it secured $50 million in funding, Latent Labs has unveiled a browser-based AI model that could change the way that scientists design proteins.

The startup's LatentX model has achieved "state-of-the-art" results for protein binding in laboratory tests.

That's no small feat in a field where success rates can make or break billion-pound drug development programmes.

But what sets LatentX apart is its ability to dream up entirely new proteins, rather than simply predicting existing structures. This is what differentiates their model from DeepMind’s AlphaFold.

In fact, the startup’s founder has a lot of experience with AlphaFold, as he used to lead that team at Google.

With the launch of LatentX, the startup is promising that researchers can use it to design new molecules - directly from their web browser and without the need to install other tools.

Their model is free to use, although you do still need to apply for access. I’ve included a link below if you want to try it yourself.



This Week’s Art

Loop via OpenAI’s image generator



We’ve covered quite a bit this week, including:

  • The US’ new strategy for AI that focuses on growth

  • DeepMind’s latest AI model that can understand inscriptions from ancient Rome

  • How YouTube is secretly taking over the TV industry

  • Why the FDA’s AI assistant is actually counter-productive

  • How AI models can secretly pass on malicious behaviour and what that means for training on synthetic data

  • Google’s update to Gemini makes computer vision easier than ever

  • GitHub’s new vibe-coding app for simple POCs

  • And Latent Labs’ new AI model that can design new proteins

If you found something interesting in this week’s edition, please feel free to share this newsletter with your colleagues.

Or if you’re interested in chatting with me about the above, simply reply to this email and I’ll get back to you.

Have a good week!

Liam


About the Author

Liam McCormick is a Senior AI Engineer and works within Kainos' Innovation team. He identifies business value in emerging technologies, implements them, and then shares these insights with others.
