- Loop
- Posts
- đ„ Google reveals an impressive AI video generator
đ„ Google reveals an impressive AI video generator
Plus more on Microsoftâs move to smaller AI models, US plan to accelerate AI research, and how Lakera protect GenAI software.

Welcome to this edition of Loop!
To kick off your week, weâve rounded-up the most important technology and AI updates that you should know about.
âââ â HIGHLIGHTS âââ â
FTCâs investigation into big techâs AI partnerships
Googleâs new AI tools for teachers and students
US Governmentâs plan to compete with China on AI research
⊠and much more
Let's jump in!


1. Google announces new AI tools for education
We start the week with Google, who have dominated the education sector in recent years - after the pandemic forced many teachers and students to move online. Google are keen to cement their position as the biggest platform for education.
Interestingly, users will be able to generate their own practice tests - which will help teachers to reduce their ever-increasing workload, and students who can use them to prepare for upcoming exams.
Teachers will also get help in coming up with lesson plans, thanks to Googleâs Duet AI toolkit.
Education is a clear area that can really benefit from Generative AI. Teachers are often facing greater workloads and cuts to their budget, meaning that they have to deliver more with less - but some of that pressure can be relieved using AI tools.
Itâs also a good learning opportunity, since teachers can clearly demonstrate to their students where generative AI can be really helpful - while also showing the limitations and how it shouldnât be used.

2. Microsoft shifts focus to smaller, cheaper AI models
According to insiders at Microsoft, the company has formed a dedicated AI research team which will focus on creating smaller language models (LM).
The need to develop smaller LMs is a theme that I flagged a few weeks back, which will likely accelerate this year and is something to watch out for.
The shift in strategy is due to the increasing cost of running large language models and Microsoftâs wider ambitions to adopt generative AI within most of their products - especially their Office 365 suite.
These high costs can be prohibitive to their pricing strategy, as even lower prices are needed to convince customers. If the current costs are forcing Microsoft to charge similar prices to ChatGPT Plus, then users will directly compare the two services and will be much less likely to switch away from OpenAIâs ecosystem.
But lowering the price isnât the only motivation here, they also want to widen the number of tools that can be realistically built. Not every feature will need to use a huge, powerful language model - for some use cases, a smaller model will perform just as well.
If Microsoft are able to provide a larger number of AI features, and still do it at a lower price point than their competitors, then this is a much easier sell for consumers.

3. FTC launch inquiry into big techâs AI partnerships
The US Federal Trade Commission (FTC) have announced they are investigating the âinvestments and partnershipsâ being formed between AI companies and cloud companies.
The FTC will look whether companies - such as Amazon, Google, Anthropic, Microsoft, and OpenAI - are harming competition through these deals.
As AI startups have shown impressive results, like OpenAI and Anthropic, the bigger technology companies have been keen to invest in them and sign partnerships.
Both Amazon and Google have invested billions into Anthropic, which was started by former OpenAI researchers that wanted to focus more on AI safety.
Microsoft have done the same through their much publicised partnership with OpenAI, when they committed to a $10 billion investment and are also their largest shareholder.

4. US Government partners with big tech to accelerate AI research
The new program, called the National Artificial Intelligence Research Resource (NAIRR), is made up of 10 federal agencies and 25 organisations.
NAIRR aims to provide US researchers with advanced computing, datasets, models, and support from the top AI companies.
OpenAI, Google, Anthropic, Microsoft, AWS, Meta, and NASA have agreed to support the project - alongside many other tech companies. It will focus on 4 different areas:
Open source AI research
Projects that require greater security and privacy
Development of interoperable software and tools
Teach people about AI and upskill them through training
This follows on from President Bidenâs executive order on AI safety, which was signed late last year, and is part of the USâ wider strategy as they aim to compete with China on AI research.
China has rapidly become a major contributor to global AI research and one study indicates that they have already overtaken the US.

5. Cyberattacks will be turbocharged by AI, say UKâs intelligence agency
The UKâs cyber intelligence agency, GCHQ, is expecting that there will be a huge surge in cyber threats in the next few years.
Ransomware is one area that will become amplified by AI tools, as they will lower the barrier to entry and provide more opportunities for cyber criminals.
According to their report, social engineering will also become much easier for criminals - with generative AI tools used to make their messages more convincing.
We have all seen phishing emails that are poorly written and contain broken English. Theyâre easy to spot, but that will become more challenging as LLMs are increasingly used to write the messages instead.
The NSA has recently warned that criminals and nation states are already using these tools to perfect their campaigns.

Googleâs latest AI video generator

Google have shown off their latest project, called Lumiere, which is able to generate videos in just one step.
The results are certainly impressive, especially when you consider what theyâve achieved with cinemagraphs and inpainting.
Their cinemagraph feature allows you to upload an image and then select an object, which will then be animated by the model. For example, a photo of a steam train could be animated to show the smoke blowing in the wind.
With inpainting, you can outline a box boundary and tell the model to generate new content within it.
Much of the industryâs attention is currently focused on text-to-video generators, but the videos they create often look a bit strange and unnatural. When we use text to make these videos, weâre giving the AI model too much control over the process.
Take this video for example, the coupleâs legs immediately stand out as being unnatural:

Instead, itâs better to use images and videos as the starting point - since it often leads to better consistency when generating new videos.
Look at the video below. The model was only shown the bottom half of the video, with a black box placed over the top to hide the rest. It was then asked to continue the video and generate a new section where the box is.

You can see the AIâs completed video below, which looks very impressive and is consistent with the ârealâ section at the bottom of the frame.

This means we can modify existing videos and reimagine how they might look. Thereâs lots of ways this could be used.
Advertisers could use it to create more targeted video campaigns for their products in different markets, without having to re-shoot the original video.
However, this technology could also make it much easier to create deepfakes of other people. Lumiere hasnât been released to the public, with their paper suggesting that new tools are needed to prevent it being used to spread misinformation.
In the last year we have seen significant advances with image generators, like Midjourneyâs models, and it looks like video generators are on a similar path.

đ Microsoft is now a $3 trillion company
âïž Microsoft lays off 1,900 Activision Blizzard and Xbox employees
đ Tesla expects EV sales growth to be ânotably lowerâ in 2024
đ US DOJ and SEC open investigations into Cruiseâs self driving operations
đ€ Dusty unveils a new version of its construction robot
đŹ Meta release a prompt engineering guide for Llama 2
đ Voice cloning startup ElevenLabs has raised $80 million in funding
đ GM and Honda join forces to make hydrogen fuel cells
đšïž MIT Researchers demonstrate rapid 3D printing with liquid metal



Lakera
This is a Swiss startup that aims to protect companies from the security weaknesses in large language models (LLMs) - such as prompt injections and data leakage.
Lakera were the creators behind the popular Gandalf game, which encourages people to âhackâ the LLM and convince Gandalf to reveal the secret password. As you progress through the levels, it becomes harder to defeat Gandalf (good luck trying to beat level 8, I still havenât been able toâŠ).
They have used the millions of prompts from Gandalf to identify how people are defeating the model - and then created their own product that can defend against these attacks.
While companies like Azure and AWS do offer some prompt protection when you host an AI model on their platform, itâs unclear how advanced this is.
There is the real possibility that a companyâs reputation could be damaged by the misuse of their LLMs and chatbots, as a Chevrolet dealership in the US recently found out.
In this case, users only asked the chatbot to mock the company. But if they were able to create malicious code using the chatbot, then it would become a much more serious issue.
In late 2023, Lakera announced that they had raised $10 million to fund their expansion plans - and they recently spoke at Davos about the different ways AI applications are open to attack.
This Weekâs Art

Loop via Midjourney V6

This week has been dominated by Googleâs new video generator, along with the FTC announcing their investigation into big techâs AI partnerships.
A lot has been covered this week:
Googleâs announcement of new AI tools for education
Microsoft moving to smaller models, as they aim to grow the number of AI features and reduce costs
FTCâs inquiry into AI partnerships and competition
The Presidentâs plan to help US researchers and compete with China
GCHQâs report on how AI will lead to more advanced cyber attacks
Googleâs latest research into AI video generation
And how Lakera are protecting companies from security weaknesses in LLMs
Have a good week!
Liam
Feedback

Share with Others
If you found something interesting in this weekâs edition, feel free to share this newsletter with your colleagues.