• Loop
  • Posts
  • 🤖 Microsoft launches Prompt Shield to protect AI apps

🤖 Microsoft launches Prompt Shield to protect AI apps

Plus more on OpenAI's voice cloner, Google's AI travel planner, and how a tech startup is preventing wildfires.

Image - Loop relaxing in space

Welcome to this edition of Loop!

To kick off your week, we’ve rounded-up the most important technology and AI updates that you should know about.

‏‏‎ ‎ HIGHLIGHTS ‏‏‎ ‎

  • How Facebook monitored the traffic of Snapchat users

  • Microsoft’s new tools to spot malicious prompts

  • Why every US federal agency will hire a chief AI officer

  • … and much more

Let's jump in!

Image of Loop character reading a newspaper
Image title - Top Stories

1. OpenAI is working on a voice cloning tool

For some time now, the company has been refining Voice Engine and testing it with a small group of developers. What’s particularly impressive is that it only needs a 15 second audio clip to mimic your voice. That’s it.

Of course, there’s lots of really interesting ways the technology can be used - but there are just as many, or perhaps more, ways it could be misused.

The most obvious issue with the technology is how it can be used to spread disinformation about politicians. Just a few weeks ago, there was a campaign that tried to suppress votes for President Biden in New Hampshire - with the AI voice telling voters to stay at home.

Criminals could also use it to authorise fraudulent transactions and empty bank accounts. Or they could use it to create a convincing clip that’s highly embarrassing for their target, with it then used to blackmail them.

They’re just some of the terrible ways the technology could be used. However, there’s plenty of good that can come from it too - mainly around accessibility and giving a voice to those who have disabilities.

In one case, doctors used OpenAI’s tool to replicate the voice of a young patient - who lost her ability to speak due to a tumour. The tech has also been used to translate content and break down barriers between people.

It’s worth emphasising that while OpenAI are taking it slow with their voice cloners, other companies are not. Both businesses and the general public should prepare themselves for the real possibility that these advances will be used to damage them.

Image divider - Loop

2. Sam Bankman-Fried is jailed 25 years for $10 billion fraud

The former CEO of FTX, who was often referred to as SBF, has been sentenced to 25 years in prison for his role in “one of the largest financial frauds of all time”.

The crypto company collapsed in 2022, after a report raised serious concerns about its liquidity and the numbers claimed on its balance sheet. Users of the crypto exchange quickly lost trust and tried to take their money out, leading to its demise days later.

There were many things wrong with how FTX was managed. Multi-million dollar expenses were approved using emojis. Messages about key decisions would automatically disappear.

Employees used their personal names to buy property, despite company funds being used to do so. The company “never had board meetings” and didn’t even have records of who they hired. Incredible.

The person who helped to restructure Enron after their meltdown, which was one of the biggest accounting scandals ever, was asked to do the same with FTX. Unsurprisingly, they said that FTX was run even worse than Enron was.

Image divider - Loop

3. Amazon completes their $4 billion investment in Anthropic

This comes as no surprise, following Microsoft and Google’s latest deals with emerging AI startups - such as Mistral and Character.ai respectfully.

Last September, Amazon agreed to invest $1.25 billion in the company - which could rise to $4 billion. This new announcement sees them commit to the remaining $2.75 billion.

Anthropic is seen as OpenAI’s closest rival and Google has invested similar amounts into the company.

Their release of Claude 3 has been a huge success, as it raised the startup’s profile and led some to switch over from GPT (including myself). Claude’s definitely worth a try, if you haven’t done so already.

Image divider - Loop

4. Facebook snooped on Snapchat traffic

Court documents have revealed that Facebook used “man-in-the-middle” techniques to intercept traffic from some Snapchat users, before that data was encrypted.

Facebook employees had concerns about the ethics of such a project, but it continued and later expanded to include YouTube and Amazon users as well.

This was done to gain competitive insights about their rival’s user base and how advertisers were using the platform - which then helped Facebook to better compete with the company.

It’s important to emphasise that this wasn’t implemented against every Snapchat user. Instead, small numbers of users were paid to install Facebook’s traffic monitor on their phone - which would then pretend to be Snapchat and access the data.

But whether the users knew their data would be used for this purpose, is another matter. It’s clear that many of their employees were against it for this reason.

Image divider - Loop

5. Every US federal agency will hire a Chief AI Officer

Some encouraging news from the US, where all federal agencies have been told to appoint a Chief AI Officer. This person will be tasked with implementing AI systems within the agency and ensuring that they’re being used safely.

The Department of Justice has already started this process, which follows on from President Biden’s executive order on AI.

This executive order aims to keep the US ahead of China, where the government there is working very closely with researchers and the private sector.

This is really good news, as it will ensure that each department explores how AI can benefit them - alongside any risks it poses to their core mission.

Just recently, the US setup a new program to provide their researchers with advanced computing, datasets, models, and support from the top AI companies - such as Google, Microsoft, and Amazon.



Image title - Closer Look

Microsoft releases new tools to spot malicious prompts

If you’ve ever tried to deploy a Generative AI application, you’ll know just how difficult it is to prevent users from misusing it.

We’ve seen plenty of examples in the last year, with text and image models often being used to create funny - or even offensive - content.

It can cause headaches for businesses and damage their reputation, especially given how quickly this content can be shared on social media.

Microsoft are adding new tools to try and tackle this problem. They’ve announced several Prompt Shield tools, which will scan a user’s prompt for potential issues, monitor for hallucinations, and automatically block malicious prompts that are trying to trick your model.

Developers will be able to control how aggressive the detectors are and what types of content they should block. Azure AI will also identify the users that are trying to defeat your model’s restrictions, which will allow you to take action and ban them from the service.

It’ll be interesting to see how well their tools perform, as these are real challenges for companies.

Although, I would urge caution with the hallucination detector - since this is an incredibly difficult problem. I’m not convinced we can totally solve this, given how Generative AI models are created in the first place.

You can try out their new Prompt Shields for yourself, as they’re now available as a preview service on Azure.



Image title - Announcement

Google now lets you use AI to create travel plans

With the summer fast approaching, you’re probably thinking about where you might head next on vacation. Wouldn’t it be cool if you could just ask Google to make an itinerary for you?

Well, now you can. If you’ve got access to Google Search Labs, you can ask it to create a list of places to visit in Rome, London, or anywhere else.

You’ll be shown a list of attractions and restaurants to visit - with different options for morning, afternoon, and the evening.

There’s lots of startups that are tackling this already. When I visited San Francisco last year, I saw a demo from Pilot and they used GenAI to create travel plans for groups of people.

Interestingly, Pilot have already partnered with Kayak - allowing users to take their AI’s suggestions and then book the vacation.

If you want to try out Google’s new service, you’ll need to be able to use Search Labs - which isn’t available in all regions yet. I’ve added a link to it below.



Image title - Byte Sized Extras

💰 Google.org launches a $20M generative AI accelerator program

💵 Databricks spent $10M on their new generative AI model

📖 Amazon will have to publish an ads library in EU

🤖 Apple’s WWDC 2024 will focus heavily on AI, hints executive

🔉 Rabbit partners with ElevenLabs to offer voice commands

⚖️ Apple sues a former iOS engineer for leaking Vision Pro details

🔫 NYC will test AI gun detectors on the subway

Image of Loop character with a cardboard box
Image title - Startup Spotlight

Vibrant Planet

This startup is helping government agencies to tackle climate emergencies, such as wildfires, which are becoming more and more common.

If you want to prepare for these disasters, you need to have proper land management - which is what Vibrant Planet are focusing on.

Their online tool allows departments to visualise how a decision would impact the local environment.

For example, you can see what might happen if you had a controlled fire or the impact of removing specific trees.

This all means that agencies can create much more effective plans, which reduces the risk of wildfires spreading in that area.

They’re an interesting startup that’s doing a lot of public good, so I’ve added a link below if you want to read more.



This Week’s Art

Loop via Midjourney V6



Image title - End note

Lots has been covered this week, including:

  • OpenAI’s voice cloning tool

  • Sam Bankman-Fried’s sentencing for fraud

  • Amazon completes their $4B investment in Anthropic

  • How Facebook monitored the traffic of Snapchat users

  • Why every US federal agency will hire a chief AI officer

  • Microsoft’s new tools to spot malicious prompts and hallucinations

  • Google is testing how AI can be used to create travel plans

  • And how Vibrant Planet are helping agencies to prevent wildfires

Have a good week!

Liam

Image of Loop character waving goodbye

Share with Others

If you found something interesting in this week’s edition, feel free to share this newsletter with your colleagues.

About the Author

Liam McCormick is a Senior Software Engineer and works within Kainos' Innovation team. He identifies business value in emerging technologies, implements them, and then shares these insights with others.