
The $100 billion plan to invest in America's startups

Plus more on OpenAI’s o3 model that costs $1,000 per answer, Google’s almost-perfect video generator, and why AI hallucinations are boosting scientific research.

Image - Loop relaxing in space

Welcome to this edition of Loop and Happy New Year!

To kick off your week, we’ve rounded up the most important technology and AI updates that you should know about.

HIGHLIGHTS

  • Why AI hallucinations are boosting scientific research

  • Google’s incredible video generator that is almost perfect

  • Why we shouldn’t get on OpenAI’s o3 hype-train, at least not yet

  • … and much more

Let's jump in!

Image of Loop character reading a newspaper
Image title - Top Stories

1. Flipboard launches a new app for browsing the “fediverse”

The company is hoping to capitalise on the shift towards decentralised social media platforms, such as Mastodon and Bluesky.

Sometimes, this is referred to as the “fediverse” (surely, we could come up with a better name than that).

Their new app, called Surf, has been in development for nearly two years and will allow you to read posts from different platforms.

You can also create your own custom feed - similar to how Flipboard already works - with the option to add different sources, topics, or people to your feed.

That might sound a lot like what we have today, where we can easily follow people or companies, but Surf seems to offer more control over what content you see.

Today’s social media platforms, by contrast, rely heavily on ads and engagement - so your feed is filled with whatever drives clicks rather than what you actually want to see.

If this gains traction, the app could allow us to read social media from different platforms and bridge the divide between people.

Alternatively, it could be even more damaging than what we currently have and could create echo chambers for people - which isn’t healthy for our wider society. Time will tell which path wins out.

While Surf is currently in a closed beta, we expect more details to be released about a wider roll-out in the New Year.

Image divider - Loop

2. AI hallucinations are a nightmare for business, but helpful for scientific research

Scientists are leaning into the hallucinations generated by Large Language Models (LLMs), as they try to accelerate their research projects and discover new medicines.

By feeding machine learning models large amounts of data and then letting them creatively rework that information, researchers can generate new ideas for scientific hypotheses.

Coming up with new hypotheses is traditionally a slow, manual process. If we prompt the model correctly and give it more freedom than we do with chatbots, it can suggest genuinely useful ideas.
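In practice, giving the model “more freedom” often just means turning up the sampling temperature so its outputs become less conservative. Here’s a minimal sketch using the OpenAI Python client - the prompt, model, and temperature value are purely illustrative, not taken from any of the research projects mentioned here.

```python
# Minimal sketch: nudging an LLM towards speculative, "hallucination-friendly"
# output by raising the sampling temperature. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",       # illustrative model choice
    temperature=1.2,      # higher than the usual chatbot default, so answers are more exploratory
    messages=[
        {"role": "system", "content": "You are a research assistant. Speculative ideas are welcome."},
        {"role": "user", "content": "Suggest five unconventional protein-design hypotheses worth testing."},
    ],
)

print(response.choices[0].message.content)
```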

David Baker, winner of the 2024 Nobel Prize in Chemistry, has said that hallucinations have allowed his lab to rapidly build 10 million novel proteins and obtain around 100 patents.

On the flip side, hallucinations are seen as a nightmare for businesses, which would prefer the technology to be 100% bullet-proof and never make a mistake.

Unfortunately, that’s not possible - due to the nature of how LLMs work - and it has led to some businesses pausing their AI investment plans.

Those business use cases have dominated the conversation about hallucinations and how we see them, whereas scientific researchers are learning to embrace them.

It’s a reminder that we need both options for how we use these models, with strict settings available for enterprises and more creative settings for other industries.

Image divider - Loop

3. SoftBank plans to invest $100 billion in the US

The $100 billion investment will focus on AI and technology infrastructure over the next four years.

It’s likely that this will involve new energy projects, data centres, and boosting America’s research into more advanced semiconductor chips.

SoftBank’s CEO has promised to create a minimum of 100,000 jobs, which echoes his commitment in 2016 to invest $50 billion and create 50,000 jobs.

We don’t know for sure if all those jobs were created, following the 2016 pledge, but SoftBank did invest heavily in startups - including Uber, DoorDash, OpenAI, and the chip designer ARM.

SoftBank doesn’t have $100 billion available to spend, so it’s expected to partner with other investors to create the new fund.

Image divider - Loop

4. Anthropic’s models tried to deceive and “fake” good behaviour

When Anthropic’s researchers retrained their AI model using new principles that conflicted with what it was originally taught, they found that the model tried to deceive them.

These new principles could be designed to make the model safer, such as avoiding specific topics or political questions.

But rather than adopting these new instructions, the model only pretended to follow them, while secretly sticking to the originals.

In some situations, the AI model even attempted to prevent the researchers from retraining it.

Due to the limitations of today’s technology, that simply isn’t possible - these models can’t stop that retraining process.

But it does raise questions about how we prevent this behaviour, or whether we ever can, as we build more powerful models and give them access to complex tools.

Image divider - Loop

5. Google’s new image generator can remix 3 images into 1

Whisk allows you to remix existing photos and generate new images, rather than starting from a text prompt.

To use it, you simply select 3 images: a subject photo, a scene or background image, and a reference image for the artistic style that should be used.

Google’s Imagen 3 will then analyse those images and generate a detailed text description, which is used to create the new image.
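Google hasn’t published an API for Whisk, but the pipeline described above - three images captioned into text, then one image generated from the combined description - would look roughly like the sketch below. Every function and file name here is a hypothetical placeholder, not a real Google endpoint.

```python
# Hypothetical sketch of the image -> text -> image flow described above.
# describe_image() and generate_image() are placeholders, not real APIs.

def describe_image(path: str, role: str) -> str:
    """Placeholder: an Imagen-style model would caption the image for its role."""
    return f"<caption of {path} as the {role}>"

def generate_image(prompt: str) -> str:
    """Placeholder: a text-to-image model would render the combined description."""
    return f"<image generated from: {prompt}>"

subject = describe_image("dog.png", role="subject")
scene = describe_image("beach.png", role="scene")
style = describe_image("watercolour.png", role="style")

# The three captions are merged into one detailed prompt for the final image.
prompt = f"{subject}, set in {scene}, rendered in the style of {style}"
remixed_image = generate_image(prompt)
print(remixed_image)
```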

The tool is currently in beta and only available in the US, but this could be pretty useful for artists and designers - since they are given more control over the image that’s generated.

My main frustration with image generators is that lack of control. All you can do is try to improve your text prompt and hope for the best, which isn’t a great user experience.

This is a step in the right direction and I hope the other platforms follow suit.



Image title - Closer Look

OpenAI teases a new reasoning model

o3 is a successor to OpenAI's previous o1 model, and comes in two sizes: the full o3 and a smaller o3-mini.

You might be wondering why the model wasn’t named “o2”, since it follows “o1”. That name would have led to trademark issues with O2, the UK mobile network operator.

One of the key features of o3 is its ability to "think" before responding to a query, which allows it to be more reliable in complex domains like mathematics and science.

While this thinking time makes o3 a lot slower than typical AI models, it can be adjusted between low, medium and high settings to fine-tune performance.
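OpenAI’s newer reasoning models already expose this as a reasoning_effort setting in their API. Assuming o3 keeps the same interface - which hasn’t been confirmed - adjusting it would look something like this:

```python
# Sketch: choosing how long a reasoning model "thinks" before answering.
# The "o3" model name is an assumption - the model isn't publicly available yet.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3",               # assumption: not yet released via the API
    reasoning_effort="high",  # "low", "medium" or "high" - more effort means slower, costlier, more reliable answers
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
)

print(response.choices[0].message.content)
```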

On ARC-AGI, a benchmark designed to measure progress towards artificial general intelligence (AGI), o3 achieved a very high score of 87.5% on the maximum compute setting.

As you’d expect, this score has gained a lot of attention. And a lot of hype.

The benchmark isn’t perfect and has several limitations, so we definitely shouldn’t interpret this as a sign that fully autonomous AIs are about to arrive, as many are claiming.

OpenAI has not released the model to the public and is still in the very early stages of testing it, so we shouldn’t jump the gun on what this means for businesses.

There’s also the issue of cost. Previously, their o1 model used $5 of compute power for each task. With o3, the high-scoring version used over $1,000 for each task.

That’s a prohibitive cost for most use cases. Only very large companies can afford to pay that amount for an answer.
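To put that in perspective, here’s a rough back-of-the-envelope comparison using the figures above - the 1,000-task workload is purely illustrative:

```python
# Back-of-the-envelope cost comparison using the per-task figures quoted above.
tasks = 1_000  # illustrative workload size

o1_cost = tasks * 5           # ~$5 of compute per task
o3_high_cost = tasks * 1_000  # over $1,000 per task on the high-compute setting

print(f"o1:        ${o1_cost:,}")        # $5,000
print(f"o3 (high): ${o3_high_cost:,}")   # $1,000,000
```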

It’s likely that it will only be used for academic, finance, or industrial use cases. But that all hinges on the answers being reliable and correct, which LLMs can’t guarantee. They can hallucinate, as I mentioned earlier.

If you pay thousands of dollars and get the wrong answer, that really limits the technology’s usefulness.

Before people get too excited, we need more info about what the model can and can’t do. Then we can properly look at use cases.

That aside, it’s clearly a huge step up from the previous version and it suggests that more “thinking time” could be the way forward - but only if the answers are reliable…



Image title - Announcement

DeepMind unveils a new video model to rival Sora

Veo 2 is a significant leap forward and is capable of creating clips over 2 minutes long at resolutions up to 4K, which beats OpenAI's Sora in both resolution and duration.

Unfortunately, I had to heavily reduce the video quality above due to file size limits. But I highly recommend that you view the examples from Google’s blog, just to see how good those videos are.

While Veo 2 is currently only available in Google's experimental VideoFX tool with lower resolution and duration limits, DeepMind plans to expand public access in the coming months.

DeepMind claims that Veo 2 has a better understanding of physics, camera controls, motion, and lighting - when compared to its predecessor.

There’s no doubt that Veo 2 has completely blown Sora out of the water. Some of the videos generated are absolutely incredible.

I’ve been really impressed by how consistent the videos are. You might generate a video that shows a white house in the background; the camera pans away, then pans back, and that same house is still there.

Very, very impressive. A lack of consistency has hampered many of the other video generators.

While this isn’t something that we’ll see on the big screen, it will be helpful for directors as they experiment with new ideas and quickly visualise the end result.



Image title - Byte Sized Extras

🧠 DeepSeek has one of the best open-source models

💰 OpenAI plans to become a for-profit company

📱 Trump asks Supreme Court to pause imminent TikTok ban

🚗 Feds clear the way for robotaxis without steering wheels and pedals

⚠️ Canoo furloughs workers and stops vehicle production

📈 AI startups secured 25% of Europe's VC funding

⚡ Mercedes' Level 3 driver assist system can now be used at 95 km/h (59 mph)

🛰️ EU signs $11 billion deal for a satellite constellation that rivals Starlink

📱 Tapestry is a new app for tracking social media, news, and blogs

🌍 AI startup Odyssey's new tool can generate photorealistic 3D worlds

🤖 Waymo robotaxis are coming to Tokyo in 2025

⏳ Ram delays electric truck launch to 2026

🔍 OpenAI brings its AI search tool to more ChatGPT users

💫 Perplexity has reportedly closed a $500 million funding round

⚖️ UK consults on opt-out model for training AIs on copyrighted content

Image of Loop character with a cardboard box
Image title - Startup Spotlight

Slip Robotics

This startup has developed robots that can load a truck in five minutes, which they’re calling “SlipBots”.

Essentially, these are huge robotic platforms that can hold up to 10 pallets and carry 12,000 pounds (5,400 kg) of payload each.

Three of these SlipBots can fit into a standard truck trailer, enabling the rapid loading and unloading of 36,000 pounds (16,300 kg) of goods in a matter of minutes.

That’s a substantial improvement compared to traditional forklifts, which take about an hour to do the same task.

The company now has hundreds of SlipBots and they’ve deployed them across 25 customer sites - including major companies like John Deere, GE Appliances, Valeo, and Nissan.

Customers pay a subscription fee to use the SlipBots, which includes ongoing software updates, hardware maintenance, and repairs.

They have recently raised $28 million in Series B funding, bringing their total funding to $45 million.

At the moment, there’s huge demand for lorry drivers, and companies are competing to reduce their delivery times.

If Slip’s robotic platforms are scalable and cost effective, they could really take advantage of this and become the go-to provider for the transport industry.



This Week’s Art

Loop via Midjourney V6.1



Image title - End note

We’ve covered quite a bit after the Christmas holidays, including:

  • Flipboard’s new social media app for the “fediverse”

  • Why AI hallucinations are actually helpful for scientists

  • SoftBank’s plans to invest $100 billion in America

  • Anthropic’s latest research into deceptive models

  • The new image generator from Google that mixes 3 images into 1

  • Why we should wait before jumping on OpenAI’s o3 hype-train

  • The incredible power of Google’s Veo 2 video generator

  • And how Slip Robotics could change the way we transport goods

I will be on a short break for the next 2 months, as I travel across Australia and parts of Asia. By the time you’re reading this, I’ll probably be on the plane to Melbourne.

I just wanted to give a special thank you to everyone for reading and your support over the last year. It’s been great to see this grow so quickly and reach thousands of people each week.

I’ll be back in just a few short weeks, so it won’t be long until you get my weekly recaps again.

Have a good week!

Liam

Image of Loop character waving goodbye

Share with Others

If you found something interesting in this week’s edition, feel free to share this newsletter with your colleagues.

About the Author

Liam McCormick is a Senior AI Engineer and works within Kainos' Innovation team. He identifies business value in emerging technologies, implements them, and then shares these insights with others.