How one line of code led to global chaos and cost the economy billions
Plus more on OpenAI’s new model, why Samsung just bought a knowledge graph startup, and new immersive content for Apple’s Vision Pro.
Welcome to this edition of Loop!
To kick off your week, we’ve rounded up the most important technology and AI updates that you should know about.
HIGHLIGHTS
TTT models could be the next big advancement in GenAI
The reason that OpenAI wants to develop its own AI chips
Why Samsung just bought a knowledge graph startup
… and much more
Let's jump in!
1. OpenAI wants to develop its own AI chips
The startup has approached several semiconductor companies and is discussing how it can create its own chips.
This is important because Nvidia currently dominates the market and supplies most of the AI chips that companies like OpenAI need. That dominance has seen Nvidia’s valuation rise by a remarkable $2 trillion in just one year.
In the coming years, that reliance on Nvidia will become an issue for OpenAI, because far more compute capacity will be needed to train future models.
But OpenAI is also competing for those chips with Meta, Google, Anthropic, and Amazon - who all have very deep pockets.
The discussions between OpenAI and Broadcom are in the early stages, but OpenAI needs to act if it wants to stay ahead of its competitors.
Meta has also announced plans to create its own AI chips, and Amazon has been building its Trainium chips for several years.
2. Vision Pro gets new immersive content, featuring NBA games and The Weeknd
Now that the Vision Pro is available in more countries, Apple has announced new immersive content for the headset.
This includes the 2024 NBA All-Star game in Indianapolis, an immersive concert with The Weeknd, wildlife documentaries from Kenya, and a feature that follows Red Bull surfers as they attempt to ride the heaviest waves in the world.
Last week, I got my hands on the Vision Pro and Apple’s immersive videos are the most realistic I have ever seen.
It genuinely feels like you’re sitting courtside at an NBA game, or in the stadium as MLS teams play soccer.
While the immersive documentaries are interesting, Apple really needs to lean into sports and provide interactive viewing.
The headset is very expensive and hard to justify for most people, but this will give it a clear edge over Meta’s Quest headsets.
3. Samsung acquires a knowledge graph startup from Oxford
Oxford Semantic Technologies (OST) is a UK startup that was founded in 2017. Samsung hasn’t disclosed how much it paid for the company.
OST was led by several University of Oxford professors, and the company had developed an AI reasoning engine. This processes an organisation’s data into a knowledge graph, which more clearly shows the relationships within the data.
This is valuable for businesses as it gives them a better way to interact with their data and extract insights. The real world is quite messy, and it’s very rare that information fits into neat, top-down folders.
In reality, there are informal links between people, teams, and data across different departments. Knowledge graphs let us see those links and give a clearer picture of a company’s data - or even its customer data.
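The informal links described above are typically stored as subject-predicate-object triples. Here’s a minimal sketch in plain Python - the entities and relations are hypothetical examples for illustration, not OST’s actual data model:

```python
# A knowledge graph as a list of (subject, predicate, object) triples.
# All names below are made-up examples.
triples = [
    ("Alice", "works_in", "Design"),
    ("Bob", "works_in", "Engineering"),
    ("Alice", "collaborates_with", "Bob"),
    ("Design", "owns", "customer_survey_data"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Which links does Alice have, across departments?
print(query(triples, subject="Alice"))
```

Even this toy version shows the appeal: cross-department relationships that would be invisible in a folder hierarchy become a single pattern-match away.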
For Samsung, this is a good acquisition. Knowledge graphs are increasingly being adopted by businesses, and Samsung’s offering spans many industries - from washing machines to phones to shipbuilding.
As a result, Samsung holds a lot of data about its customers and the industries it operates in. OST’s specialisation in knowledge graphs will help it make sense of that data and inform its business decisions.
4. Tech companies used YouTube videos to train their AI models
It’s believed that 170,000 YouTube videos were used to train AI systems, without the permission of the copyright holders.
Subtitles were extracted from YouTube videos and then stored in a dataset, which was used by some of the world’s biggest tech companies - such as Apple, Anthropic, Nvidia, and Salesforce.
This data was taken from over 48,000 channels, including popular creators like MrBeast and Marques Brownlee - alongside news outlets like ABC News, the BBC, and The New York Times.
Proof News has released a tool that allows YouTubers to search if their content has appeared in the dataset.
YouTube hasn’t responded yet, but they have previously stated that using video content - including transcripts - to train AI systems would violate their platform's rules.
The key takeaway from this is that the world has completely changed. If you post anything online, you should now expect that it will be used to train AI systems.
Copyright laws are now being ignored, as big tech companies try to get their hands on every bit of data they can.
For many artists, that puts them in a very tough position, as they will not be compensated. The new reality is that everything we post online will be used by the top tech companies.
5. TTT models might be the next big thing in Generative AI
Researchers from Stanford, UC San Diego, UC Berkeley, and Meta have proposed a promising new AI architecture called test-time training (TTT).
Currently, GPT-4 and other Generative AI models are powered by a different architecture - the transformer.
However, transformers have their limitations and aren’t great at efficiently processing a huge amount of data.
This is where the TTT models come in, as they try to fix these limitations of the transformer architecture.
The approach has shown some promising results, and the researchers believe these new models can scale to much larger datasets without having to increase the model’s size.
While it’s still early days, they hope future TTT models will be able to efficiently process billions of data points - ranging from text to images to video.
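The core idea - replacing the transformer’s fixed-size hidden state with a small model that is updated by gradient steps as each token arrives - can be sketched roughly as follows. This is a simplified illustration, not the paper’s actual architecture: the linear state, the reconstruction loss, and the learning rate are all assumptions made for demonstration.

```python
import numpy as np

def ttt_linear_layer(tokens, lr=0.05):
    """Rough sketch of a test-time-training (TTT) layer: the hidden
    state is itself a tiny linear model W, updated by one gradient
    step per token on a self-supervised reconstruction loss."""
    dim = tokens.shape[1]
    W = np.zeros((dim, dim))           # the "hidden state" is a model
    outputs = []
    for x in tokens:
        # self-supervised loss ||W x - x||^2; take one gradient step
        grad = np.outer(W @ x - x, x)
        W -= lr * grad
        outputs.append(W @ x)          # emit output with updated state
    return np.stack(outputs)

rng = np.random.default_rng(0)
out = ttt_linear_layer(rng.normal(size=(16, 4)))
print(out.shape)  # one output vector per input token
```

Because the state is a model rather than a growing attention cache, the cost per token stays constant no matter how long the sequence gets - which is why researchers hope the approach scales to far larger inputs.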
How to cause chaos with one line of code
The last few days have served as a reminder of just how dependent our global economy is on technology. It also reminds us of just how fragile the internet truly is.
One line of code crashed 8.5 million Windows machines. One line of code caused chaos for tens of millions of airline passengers. One line of code cost the global economy billions of dollars.
This all happened after CrowdStrike, a cybersecurity company, pushed an update to millions of computers around the world. However, it was faulty and led to computers showing the “blue screen of death” error.
Normally, that’s fine - you can just turn the machine off and on again. But this was different, because CrowdStrike’s update was for a driver on the PC.
Because that driver runs before you reach the computer’s login screen, the machine crashes repeatedly before it can boot. It’s possible that Microsoft and CrowdStrike will come up with an easier fix, but for now it’s a manual process that involves IT staff repairing each machine.
As a result, it could be several days before every machine is fixed.
This highlights several things. Firstly, it’s easy to cause chaos for millions of people.
That’s because only a handful of companies control the technology we use every day. CrowdStrike alone holds over 20% of the market, and bad actors - including individuals and nation states - will be encouraged by this disruption.
Secondly, companies around the world have placed too much trust in automatic updates. Generally speaking, cybersecurity teams like to install a new update immediately - since it will fix security issues that put them at risk.
But as we have seen in the last few days, this can become a huge problem. They need to change strategy: test new updates on just a few machines, verify that they are safe, and then slowly roll them out to the rest.
They should not be rolling out updates to every machine at once.
And finally, companies like CrowdStrike need more in-depth testing before rolling out a change. At a time when tech companies are reducing spending and laying people off, they cannot afford to make a devastating mistake like this.
OpenAI releases a smaller and cheaper model
GPT-4o mini will replace the outgoing GPT-3.5 Turbo model and become the smallest model OpenAI offers.
The good news is that 4o mini is 60% cheaper than 3.5-turbo, which will make it more feasible for companies to analyse data with these tiny models.
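To make that saving concrete, here is a back-of-the-envelope cost comparison. The per-million-token prices below are illustrative mid-2024 figures and should be treated as assumptions - check OpenAI’s current pricing page:

```python
def cost_usd(input_tokens, output_tokens, price_in, price_out):
    """Cost of one request, given per-1M-token prices in USD."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Illustrative (input, output) prices per 1M tokens - assumptions,
# not guaranteed current figures:
GPT_35_TURBO = (0.50, 1.50)
GPT_4O_MINI = (0.15, 0.60)

# Analysing a 10k-token document and getting a 1k-token summary:
old = cost_usd(10_000, 1_000, *GPT_35_TURBO)
new = cost_usd(10_000, 1_000, *GPT_4O_MINI)
print(f"3.5-turbo: ${old:.4f}  |  4o mini: ${new:.4f}")
```

At these example prices a single request drops from fractions of a cent to even smaller fractions - negligible alone, but decisive once a company is running millions of such analyses.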
When compared with similar models from Google and Anthropic, 4o mini outperforms them across several benchmarks.
Eventually, the model will support both video and audio - but for now the API only supports text and image analysis.
This continues OpenAI’s core mission of reducing the cost of using its models while improving their speed.
That opens the door to many new use cases for companies, which can lead to more revenue growth for OpenAI.
Although, it would be good to see OpenAI focus more on the context window and catch up with its rivals - some of whom can process 1 million tokens.
OpenAI’s 128k limit pales in comparison, but I’m sure it will be addressed sometime soon.
🚀 NASA cancels $450 million mission that would’ve searched for ice on the Moon
🇪🇺 Meta won't release its multimodal Llama AI model in the EU
🛒 Amazon Prime Day 2024 sales hit a record $14.2 billion
🤝 Bethesda Game Studios employees are forming a union
🚕 Waymo wants to bring their robotaxis to San Francisco airport
🔍 Microsoft faces UK antitrust probe for hiring Inflection AI employees
🥽 Magic Leap moves away from making its own headsets, lays off its entire sales team
🔍 Exa raises $17 million to build a “Google Search for AIs”
🧲 Nuclear fusion experiment sets a new record for magnet strength
🤖 Robots will soon be used to scan the Titanic
Pindrop
This startup works to detect deepfakes and has offices in the US, UK, and India. It was formed several years ago and has just secured a $100 million loan to fund further expansion.
The company is aiming to expand into new sectors, such as healthcare, retail, media, and travel. But what’s interesting is that they have trained their AI models on 20 million audio files.
They claim that their tool can identify contact centre callers with higher accuracy than rivals’ tools, likely thanks to that substantial dataset.
To date, Pindrop has raised over $230 million in venture capital and now employs around 250 people.
Now that they’ve secured more funding, they aim to boost their product lineup. It comes as we increasingly rely on deepfake detectors to identify AI-generated audio and video.
As the US election nears and rapid advances are made with AI deepfakes, this demand will only grow.
This Week’s Art
Loop via Midjourney V6
It’s been another busy week and we’ve covered a lot, including:
Why OpenAI wants to develop its own AI chips
New immersive content for the Vision Pro and why Apple needs to focus more on sports content
Samsung’s acquisition of Oxford Semantic Technologies and how they can use knowledge graphs
The new reality that everything we post online will be used to train AI models
Why TTT models might be the next big thing
How one line of code caused so much chaos around the world
OpenAI’s new GPT-4o mini model
And how Pindrop wants to capitalise on the demand for deepfake detectors
Have a good week!
Liam
Share with Others
If you found something interesting in this week’s edition, feel free to share this newsletter with your colleagues.
About the Author
Liam McCormick is a Senior AI Engineer and works within Kainos' Innovation team. He identifies business value in emerging technologies, implements them, and then shares these insights with others.