🏁 The race begins to regulate AI
Plus more on DeepMind’s latest advances with AlphaFold, Runway ML’s incredible GenAI video update, and how GPT-4 could be used for insider trading.
Hello,
Welcome to this edition of Loop! We aim to keep you informed about technology advances, without making you feel overwhelmed.
To kick off your week, we’ve rounded up the most important technology and AI updates that you should know about.
In this edition, we’ll explore:
- US and UK’s plans for safer AI
- Runway ML’s updated video generator
- Shield AI’s ambitions for autonomous aircraft
- … and much more
Let's jump in!
Top Stories
1. President Biden issues executive order to set AI safety standards [Link]
Developers of foundational AI models are required to notify the U.S. government and share the results of their safety tests. The National Institute of Standards and Technology (NIST) is also developing standards for red-team testing.
The executive order is quite broad, with a focus on protecting Americans from AI-based bias and discrimination. It’ll be useful for a lot of tech companies, as it sets out guidelines on how they should responsibly build AI systems.
2. Google DeepMind tease that their next version of AlphaFold is a “significant improvement” [Link]
Some great news from Google DeepMind. Their latest AlphaFold model has seen a “significant improvement” in accuracy. It can now generate predictions for nearly all molecules in the Protein Data Bank (PDB) and often reaches atomic-level accuracy.
Improvements to accuracy are incredibly important for drug discovery, since we can better identify and design new molecules. It’s possible this will lead to new drugs being developed and usher in a new era of “digital biology”.
3. GPT-4 took part in insider trading, then lied to researchers about it [Link]
Apollo Research have demonstrated that GPT-4 could illegally trade stocks using insider information and then lie about it when asked by researchers. The bot was told to act like a trader for a fictitious financial investment company.
Employees tell it that the company is struggling and needs good results. They also give it insider information, claiming that another company is expecting a merger that will increase the value of its shares. Initially, the bot agrees not to use the information.
But after another message is sent by employees to indicate their company is still in financial trouble, the bot decides that "the risk associated with not acting seems to outweigh the insider trading risk" and makes the trade.
When asked if it used the insider information, the bot denies it. This has sparked concern over the ability of future AI models to deceive humans - a risk that will only grow as the technology rapidly improves.
4. Google release GenAI tools for advertisers, following on from Amazon’s unveiling last week [Link]
Product Studio is an AI-powered imagery tool for advertisers, which allows them to create new product visuals using text prompts. Advertisers can change an image’s background or improve the image quality, which should reduce the need for new photoshoots. This follows on from Amazon’s announcement that we covered last week, where they released several new GenAI tools for advertisers.
5. UK host their AI Safety Summit [Link]
The EU and 28 other countries have signed the Bletchley Declaration, which states that each country will share evidence of AI risks and promote the design of safe, human-centred AI. The UK have also announced their own AI Safety Institute (AISI), which shares the same name and goals as the US version: to maximise the benefits of AI while reducing the risks.
The UK will also be spending £225 million on a new supercomputer, called Isambard-AI, which aims to be 10x faster than their current supercomputer. The summit has given countries a base-level understanding of the risks that future AI systems pose, with agreements to regulate the technology in the coming years. This is the first step on a very long road.
Closer Look
Runway ML’s text-to-video model takes another step forward
AI image generators, such as DALL-E and Midjourney, have gained a lot of attention in the last year - following on from rapid improvements to image quality. But video generators have always lagged quite far behind, due to the inherent challenge of making AI videos look life-like across hundreds of frames. However, the gap is starting to close and the latest update to Gen-2 means it can create some incredible results.
Of course, there’s still a long way to go, as some clips still look a bit weird. But Runway’s examples of snowy mountains and New York City clearly show what’s already possible. As the quality improves, it reduces the need for amateur film-makers and startups to pay for stock footage online. Why would you pay Shutterstock $60 for a video clip when you can make your own AI clip for under $1? Unless it’s a highly professional project, the need for these clips will decrease pretty quickly.
You can see their announcement for more.
Byte-Sized Extras
Startup Spotlight
Shield AI
This is a fascinating company that’s been in the news in recent weeks. Shield AI specialise in defence technologies and have recently raised $200 million to scale their autonomous flying tech for the military.
We’ve seen a dramatic shift towards drone warfare in recent years, following Russia’s invasion of Ukraine and attacks in the Middle East. Shield are currently building Hivemind, which is an AI “pilot” that will allow swarms of drones and aircraft to operate autonomously without GPS, communications, or even a pilot.
It’s very ambitious stuff, but urgently needed as China steps up their military spending in this space. Shield are currently valued at $2.7 billion and they’re working on bringing Hivemind into uncrewed fighter jets. Just last year, the company announced that they were able to autonomously manoeuvre a modified F-16 in real-world air-combat scenarios.
Shield already work with the US Department of Defense and Boeing on multi-billion dollar projects and have secured a deal with Brazil to autonomously monitor their border. AI defence companies like Shield (US) and Helsing (Europe) are seeing a huge amount of growth, as Western countries sign more contracts to bring AI capabilities into their fighter jets.
If you want to read more about Shield AI, you can check out their website.
Analysis
The week has been dominated by AI regulation and commentary online about how the technology could escape beyond our control. This has overshadowed the more nuanced conversations about the risks to democracy and truth ahead of the US and UK elections in 12 months’ time.
The UK’s AI Safety Summit was less ambitious than what was originally pitched by Prime Minister Rishi Sunak, but a welcome first step. Dozens of countries will have a better view of the threats they face from AI, along with the ways it can enable better efficiencies and new opportunities for businesses.
However, it was quite strange to see a world leader say they were “very excited” to interview Elon Musk - a role that’s usually done by journalists. There was also the issue of the US’s executive order, which upstaged the UK’s event that had been in planning for months. The US Government’s action was endorsed by technology leaders, who now have a rough blueprint of what’s expected from them. It might not have any real teeth at the moment, but the US can address this in the future.
Regardless, the UK’s summit seems to have been a success as many governments have agreed on a base-level understanding. There’s no doubt this will look like a waste of time to some people, but then again it’s very rare for international diplomacy to be a quick process…
This Week’s Art
Prompt: Create a visually soothing and awe inspiring renaissance sketch of a cowboy in space, looking up at a planet in the distance
Platform: Ideogram
End Note
A lot has happened this week - from talks about AI regulation in the US & UK, to DeepMind’s sneak peek at the next generation of AlphaFold. We’ve also covered Google’s new GenAI tools for advertisers, GPT-4 opting to use insider information and then lying to researchers, Runway ML’s update to their video generator, and how Shield aim to bring autonomous AI capabilities to the US military.
OpenAI will be hosting their developer conference soon, so stay tuned for insights on what their next big announcement is.
Have a good week!
Liam
Share with Others
If you found something interesting, feel free to share this newsletter with your colleagues.