
You’re receiving this email because you subscribed to the newsletter. If you’d like to unsubscribe, you can manage your preferences here with one click (or use the button at the bottom of the email).
Welcome to this edition of Loop!
To kick off your week, I’ve rounded up the most important technology and AI updates that you should know about.
ICYMI, I recently created a prompt pack that includes 100 use cases for how businesses can adopt AI. Just click a prompt and it opens in ChatGPT, Claude, or Gemini. You can try it here.
HIGHLIGHTS
How blind patients can read again thanks to smart glasses and eye implants
Google's significant advances with quantum computing
OpenAI's disappointing launch of their AI browser
… and much more
Let's jump in!


1. Blind patients can read again, thanks to smart glasses and eye implants
We start this week with a major breakthrough, as surgeons have successfully used eye implants to help blind patients read again.
The results have been impressive so far: of the 32 patients who received the implant, 27 have regained their ability to read. All of them suffered from advanced age-related macular degeneration.
They achieved this breakthrough with the Prima implant, a 2mm chip that is thinner than a human hair. The implant sits beneath the retina and works with a pair of smart glasses.
The glasses record video and send it to the implant, which then relays the images to the brain via the optic nerve.
Around five million people worldwide suffer from this form of AMD, and scientists hope the technology can be made widely available in the coming years.
It follows Neuralink’s breakthrough earlier this year, where brain implants allowed ALS patients to communicate with their families again.

2. Google’s quantum computer has beaten today’s supercomputers
Another breakthrough came from Google this week, which announced that its quantum computer had run calculations 13,000 times faster than today’s fastest supercomputers.
It’s the first time that a quantum computer has successfully run an algorithm beyond what classical machines can handle - known as “quantum advantage”.
The breakthrough focused on calculating molecular structures - a very narrow task, but one that could eventually lead to advances in medicine and new materials.
This is incredibly significant research from Google, but we shouldn’t expect things to change overnight. Today’s quantum computers are still too limited, and there’s a long way to go before they become mainstream.
The current technology falls short because qubits are incredibly fragile: to work properly, they need ultra-low temperatures and complete isolation from electromagnetic interference.
Regardless, cybersecurity experts worry that quantum computers could eventually break today’s encryption standards and expose sensitive information. Following this announcement, they’ve renewed calls for companies to invest in quantum-proof cryptography.
While the quantum revolution isn't here just yet, Google's latest milestone suggests that it’s getting closer.

3. Nvidia wants to build data centres in outer space
The company has backed plans for the world's first AI-equipped satellite, which will carry data centre GPUs and is due to launch in November.
Starcloud is the startup leading this initiative, which has the ultimate goal of building massive data centres in space.
Of course, this sounds far-fetched - like yet another hype-filled AI announcement. However, a growing number of startups are focusing on space and how we can deploy advanced technologies there.
Space Forge is currently exploring how we can manufacture advanced semiconductors in space, with NATO and the UK Government heavily funding the company.
Starcloud’s co-founder has argued that the economics make sense for data centres, as space offers nearly unlimited solar power and its vacuum can act as a natural cooling system.
Even with the huge expense of launching hardware into space, they claim energy costs could be 10 times lower than for data centres on Earth.
Plus, they wouldn’t need freshwater to cool the hardware - already a huge issue today, and one that could become untenable as demand for AI computing power increases further.

4. Anthropic launches web version of Claude Code
The startup has launched a web version of Claude Code, as the company tries to maintain its advantage in an incredibly crowded market of AI coding tools.
I regularly use Claude Code for my work and it’s a complete joy to use. Other tools, like GitHub Copilot and OpenAI’s Codex, are nowhere near as effective and can struggle to write good, maintainable code.
Since launching in May, the tool has grown its user base 10x and now generates over $500 million in annual revenue for the company.
The move is designed to solidify Anthropic’s advantage over its competitors and grow those revenues even further, which is important for funding its research and the development of new AI models.
With this new web interface, you can now run multiple tasks in parallel. This will allow developers to make several changes across codebases and repositories, but I don’t expect it to be used very often - the context switching can actually waste more time than you save.
Instead, we should see this as their first step towards a more interactive tool - which will eventually include screenshots of UI changes, so that developers can approve changes even faster than before.

5. Yelp can now scan restaurant menus and show you what dishes look like
Yelp is rolling out new AI features, which should make it much easier to find local restaurants and choose from their long list of options.
They’ve created an AI chatbot that can answer questions and surface reviews from other diners.
Interestingly, they’ve also added an option to scan a restaurant’s menu and see what each dish actually looks like. The process is a bit strange, so I’m not sure many people will use it, but it could save a little time.
Essentially, you point your camera at the menu and let Yelp’s AI “scan” it. Once it’s ready, the app matches the menu items with pictures that other diners have uploaded.
For example, you can see pictures of the “steak shishlik” and reviews mentioning that specific dish.
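For the curious, here’s a toy sketch of how that matching step might work under the hood - assuming the app OCRs the menu into dish names and fuzzy-matches them against the captions on uploaded photos. This is purely illustrative, not Yelp’s actual implementation; the dish names, captions, and filenames are all made up.

```python
from difflib import get_close_matches

# Hypothetical OCR output from scanning a menu.
menu_items = ["Steak Shishlik", "Chicken Kiev", "Beef Stroganoff"]

# Hypothetical captions on photos that other diners have uploaded.
photo_captions = {
    "steak shishlik with rice": "photo_101.jpg",
    "the chicken kiev was great": "photo_207.jpg",
    "dessert platter": "photo_305.jpg",
}

# For each menu item, find the closest caption and surface its photo.
for item in menu_items:
    matches = get_close_matches(item.lower(), photo_captions, n=1, cutoff=0.4)
    photo = photo_captions[matches[0]] if matches else None
    print(f"{item}: {photo or 'no photo found'}")
```

A production system would presumably use image recognition and embeddings rather than string similarity, but the shape of the problem - linking menu text to user photos - is the same.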
Again, it’s a bit strange to hover your phone over a menu, but the AI results are cool to see. If you’re someone who struggles to decide in restaurants and needs that extra info, you might benefit from it.

OpenAI disappoints with its AI browser

Following months of speculation, OpenAI has finally released Atlas, its AI web browser, and hopes to create its own ecosystem of products - following in Google’s footsteps with its Chrome browser.
The browser has a minimal design, with your ChatGPT history on the left, a search bar in the centre, and an "Ask ChatGPT" button that allows you to ask questions about that specific website.
Paid subscribers also get access to "agentic mode," which can handle more complex tasks - like adding items to your shopping cart or booking appointments.
Unfortunately, the experience feels half-baked. I’ve been using it over the weekend and I’m not really sure why I’d use it over Google Chrome.
When I asked it to “find restaurants near me”, it returned only three results. While they were correct for my town, it didn’t show anything else.
It also struggled when I clicked on the other tabs for Search, Images, and Videos. Weirdly, it ignored my current location (Ireland) and suggested US restaurants - like IHOP and Denny’s.
Clearly, OpenAI has faced issues with this too. They’ve included a link at the top, which allows you to quickly leave the ChatGPT search page and use Google instead.
I need to test “agentic mode” in more depth, but OpenAI’s demos were quite disappointing. In one example, they asked it to review a specific recipe and work out the ingredient quantities needed for 8 people.
That sounds cool on the surface, but the webpage clearly stated the quantities and already has a button that instantly recalculates them for 8 people. Asking ChatGPT to do that was a complete waste of time.
Naturally, companies will feel nervous about their employees using this browser. If OpenAI is willing to train its AI models on copyrighted material, including major Hollywood films, what’s to stop it doing the same with corporate data in Atlas?
I’m not sure what the good use cases are with Atlas. I’ll need to spend a lot more time with it and try to figure it out, but it has been a disappointing launch so far. It doesn’t look like I’ll be swapping from Chrome anytime soon.

Google’s AI can analyse satellite imagery

As I’ve mentioned before, we are seeing new advances in geospatial data and satellite imagery - as it is becoming much cheaper to collect this data and monitor conditions on Earth.
Google is one of the top players in this sector, especially when it comes to supporting charities and responding to natural disasters.
Through its charitable arm, the company has used the technology to track wildfires, spot damage to buildings, and support charities with new tools that help them identify those most in need.
The company has just unveiled significant updates to its Earth AI platform, which lets users ask simple questions and instantly find objects in satellite imagery.
To do this, Google has added its Gemini model to the platform - helping to democratise satellite imagery and making it much easier to analyse.
Previously, you had to use complex computer vision models or train your own versions. With this new feature, that’s no longer required.
For example, you could ask Gemini to focus on a specific town and analyse its drinking water for harmful algae.
Or it could be used to identify flooded roads after a storm, locate critical infrastructure in vulnerable areas, or track changes in forest cover over time.
The real value is how it combines multiple data sources - merging satellite imagery, population data, and environmental data - to deliver insights in minutes.
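As a concrete illustration of what that manual work used to look like, here’s a minimal sketch using Google’s existing Earth Engine Python API (the ee package) to flag surface water after a storm via NDWI. The dataset ID is real, but the area, dates, and threshold are hypothetical - and this is my own example, not something from the Earth AI announcement.

```python
import ee

# Assumes you've authenticated with an Earth Engine account.
ee.Initialize()

# Hypothetical area of interest and post-storm window - swap in your own.
aoi = ee.Geometry.Rectangle([-6.35, 53.30, -6.20, 53.40])

# Sentinel-2 surface reflectance imagery, filtered to mostly cloud-free scenes.
images = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterBounds(aoi)
    .filterDate("2025-10-01", "2025-10-08")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
)

# NDWI = (Green - NIR) / (Green + NIR); values above 0 suggest surface water.
ndwi = images.median().normalizedDifference(["B3", "B8"]).rename("NDWI")
water = ndwi.gt(0)

# Fraction of pixels flagged as water - a crude flood indicator for the area.
stats = water.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=aoi, scale=10, maxPixels=1e9
)
print(stats.getInfo())
```

The point of the Gemini integration is that a question like “which roads near this town are flooded?” replaces all of the above.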
It’ll be interesting to see how other organisations start to use the platform.

📉 More on the AWS server outage that crashed the web for a day
🚗 GM to launch eyes-off, hands-off driving system in 2028
👓 Amazon reveals AI smart glasses for delivery drivers
🎬 Netflix, Amazon, and Apple are eyeing Warner Bros. acquisition
✂️ Meta cuts 600 jobs across its AI division
🤖 OpenAI acquires Sky, an AI interface for Mac
🥽 Samsung reveals its answer to the Apple Vision Pro
📺 Netflix plans to fully embrace generative AI, but industry remains split
🎨 Adobe will support companies who want to build custom AI models
🎵 OpenAI is developing a tool for AI music



LangChain
LangChain has just secured $125 million in Series B funding at a $1.25 billion valuation, which will allow it to continue building advanced AI tools for developers - especially as companies start to adopt agentic AI products.
The San Francisco startup, which began as an open-source project just weeks after ChatGPT's launch in late 2022, has evolved from a simple framework into the go-to platform for building AI agents.
In its early days, it quickly became the standard way to adopt LLMs and integrate them into existing products. But companies like OpenAI and Anthropic have since caught up, with their own tools offering similar features.
To differentiate themselves and stay ahead, LangChain has released the LangGraph framework - which allows developers to create advanced AI agents and test them in minutes.
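To give a flavour of what that looks like, here’s a minimal sketch of a two-node LangGraph agent. The node logic is a stand-in for real LLM or tool calls, so treat it as an outline rather than a production agent.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

# Shared state that flows between the nodes of the graph.
class AgentState(TypedDict):
    question: str
    notes: str
    answer: str

def research(state: AgentState) -> dict:
    # Stand-in for a real tool call or LLM request.
    return {"notes": f"Findings about: {state['question']}"}

def respond(state: AgentState) -> dict:
    # Stand-in for an LLM call that drafts the final reply.
    return {"answer": f"Based on '{state['notes']}', here is my answer."}

graph = StateGraph(AgentState)
graph.add_node("research", research)
graph.add_node("respond", respond)
graph.add_edge(START, "research")
graph.add_edge("research", "respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"question": "What did LangChain announce?"}))
```

Real agents add conditional edges, cycles, and checkpointing on top of this, which is where the framework earns its keep.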
I must say, their agentic AI tooling is very impressive. I regularly use it for my professional work and it’s been adopted by many other companies too.
To start generating revenue, LangChain has developed LangSmith, a monitoring tool for LLMs and AI agents. The core frameworks remain open source, while companies pay a subscription for LangSmith and let LangChain simplify their deployments.
It’s interesting to see that Workday has invested in the company as part of this new round. Workday has been investing heavily in AI tooling companies recently, most prominently Flowise, so this is a good move on its part.
This will allow it to work more closely with the startup, and could see Workday integrate LangChain’s tools into its upcoming framework for AI agents.
Either way, LangChain has been the top company for AI developers and is certainly one to watch as we move into the agentic AI space.
This Week’s Art

Loop via OpenAI’s image generator

We’ve covered quite a bit this week, including:
How blind patients can read again thanks to smart glasses and eye implants
Google's significant advances with quantum computing
Nvidia's push to build data centres in outer space and wider trends with space manufacturing
Anthropic's decision to bring Claude Code to the web
Yelp's AI that allows you to scan menus and see what dishes actually look like
OpenAI's disappointing launch of their AI browser
How Google's Earth AI platform could change the way we analyse satellite imagery and respond to natural disasters
And how LangChain is positioning itself as the go-to platform for building AI agents
If you found something interesting in this week’s edition, please feel free to share this newsletter with your colleagues.
Or if you’re interested in chatting with me about the above, simply reply to this email and I’ll get back to you.
Have a good week!
Liam
Feedback
How did we do this week?

Share with Others
If you found something interesting in this week’s edition, feel free to share this newsletter with your colleagues.
About the Author
Liam McCormick is a Senior AI Engineer and works within Kainos' Innovation team. He identifies business value in emerging technologies, implements them, and then shares these insights with others.

