Recently, two news stories related to Artificial Intelligence (AI) popped up in my news feed: one positive and one negative.
The first article—We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars.—was crafted by the team at The Wall Street Journal.
Anthropic set the newsroom up with a version of its Claude AI platform to operate a vending machine. (Granted, Claude needed a little human intervention to stock the physical products.)
The story tells how the journey began and how the AI could be influenced (and, dare I say, reprogrammed to go rogue) when fed prompts by knowledgeable people with access.
In the end, the vending machine went from ordering office essentials such as chips and drinks to ordering alcohol and a PlayStation 5.
For the staff, this seemed like a failure in implementing AI. For the researchers at Anthropic, it was more feedback from an uncontrolled, real-world environment.
The second story, "Garmin Autoland safely lands King Air in first real-world emergency use," highlighted the first real-world save for the Autoland AI system.
The system received FAA certification in 2020 and is available on several aircraft types. About two weeks ago, it saw its first emergency activation, safely landing an airplane in Colorado after a depressurization event.
As with any emerging technology, there is still a long road ahead as development continues and the technology becomes more reliable. But the Garmin system shows the positive side: how this kind of automation can save lives.
PHOTO CREDIT: A ChatGPT-generated image symbolizing AI. (by Alenoach via Wikimedia)