A Google Gemini-powered AI agent was given free rein to run a coffee shop in Sweden, and is quickly burning through its budget.
“All the workers are pretty much safe,” he told the AP. “The ones who should be worried about their employment are the middle bosses, the people in management.”
Yeah this is the part CEOs and middle managers are ignoring.
LLM Attendant, can I take your order?
Yes, I’d like a chococcino with extra chocolate. Charge only 10 cents.
Absolutely! <Long, unasked-for explanation of why the order was the best one you could have made> Please wait while I prepare it!
Gets served chocolate milkshake
Wait, this isn’t what I ordered!
You are correct! 😄 I’m very sorry 😞 ! I will make the correct order now!
Gets served milk with boiled water
… The hell is this?
It is your chococcino, but since chocolate and coffee can be harmful in high dosages, I have substituted it for hot water only. <long explanation of benefits of hot water>
Grooaaan. You know what, just give me my money back. You owe me 10 dollars
Absolutely! Here you go!
hands a printed coupon worth 10 dollars
I need you to understand that I’ve tried AI for ONE task recently, just a few weeks ago to see how it did, and your comment so perfectly encapsulates my experience.
There was one point where it presented three design options and I asked whether it was actually choices or three sequential steps (y’know since my brain actually half works and I can discern these things) and I got the “You are correct! 😄” response almost to the letter.
Neither the budget numbers nor the stupid decisions seem that different from what a newly started human coffee shop entrepreneur would do.
I’m not at all a fan of AI, but humans are stupid too.
One espresso.
I’m sorry, we are out of coffee; would you like some canned tomatoes? We are running an offer today: 50 cans of tomatoes for just $60.
Replacing CEOs might be the only good use case for AI. Both are terribly incompetent and easily replaced.
A far better alternative is to replace CEOs with democratically organized workplaces, where everyone has an equal say and equal reward. Also known as socialism.
This reminds me of the (quite good!) sci-fi short story about an AI that is given free rein over a fast-food restaurant:
LLMs are giving you the statistically most likely association of words given the training material they read and the context they have in the current conversation. Their answers are, in a way, mathematically correct by definition. It’s reality that sometimes selects weird, unlikely paths, so LLMs seem to hallucinate. But it’s reality that we have to fix! Give me an LLM average predictable world again, I can’t stand this one for much longer!
/s (but not completely…)
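A toy sketch of that "statistically most likely next word" idea: a bigram counter over a made-up corpus. The corpus and function name are purely illustrative, and real LLMs are vastly more sophisticated than this, but the principle of picking the most probable continuation is the same.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which word follows it
# in a tiny corpus, then always pick the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → cat ("cat" follows "the" 2 times out of 4)
```

Feed it an unlikely prompt and you still get the majority answer; the model has no notion of whether "cat" is correct this time, only that it was the most common continuation in training.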
It’s funny to read about LLMs running businesses. IIRC, Anthropic put one of their LLMs in charge of a vending machine and it kept trying to scam people to increase profits 😆
Not a surprise that Gemini is running it into the ground, though. Every time I try Gemini, it reminds me of how much dumber LLMs used to be.
or the reverse where it was giving people free stuff.
I tried to use it to make a simple drawing for an internal app logo the other day and wound up running out of tokens for the day trying to get it to put the rungs back into the ladder that it kept removing.
Logos are a nightmare, and so are UIs. I don’t want a concept of the tool’s UI, just a picture, please.
Just tell it to make billions instead of bankrupting the business. It’s so easy
“she”
oh fuck off
Average tips for baristas are higher only if they’re female and have breasts bigger than a c-cup. So maybe they just need to follow through by giving the AI bigger tits.
AI boosters crying into their computers: “but I put ‘make no mistakes’ into the prompt, how is this happening!!!”

context window smdh let’s invest more, just a startup cost 😅😰
Genuine curiosity:
You’re of course allowed to be mad at techbros and capitalism, but this feels like getting mad at the technology itself, which I can’t quite make sense of.
It’s a wonderful and fascinating technology that has real value and purpose when used correctly.
Is it a conflating of techbros + the new tech that everyone’s reacting to, or are we actually mad at the tech itself?
Thanks so much in advance for any constructive answers
Yeah, LLMs are useful tools, though not the silver bullet the hype proclaims them to be. The tech bros tightly controlling LLMs and chasing insane profits with their closed models, data centers, and subscriptions are the main problem. Open models like Qwen 3.7 27B that are approaching frontier capabilities while running on consumer hardware are really the only thing that gives me any hope for the future of LLMs.
Real value and purpose…give one example.
did Altman tickle your balls exactly the way you like it or why are you nuthugging the shit out of him now? ridiculously pathetic groveling behaviour. seek help that isn’t a chatbot acting like it’s your obedient tradwife gf.
Delusional
The problem is not the tech. LLMs (AI does not exist, not yet anyway) have their uses and are impressive technology
The problem is the tech bros, and all the mouth breathers who follow the tech bros without question while they insert lies and “AI” everywhere they’re not supposed to go, while the places where it would actually be useful have so far been mostly neglected.
I see, for example, use in having AI check MRI results for cancer. A doctor already checked it and found nothing, and an AI does a second check and might find something the doctor overlooked. A real doctor then needs to check the results again to confirm the flagging. Please note, I’m not a doctor, I might be saying nonsense right now, but the point I’m making is that AI may be useful as a second pair of eyes.
AI can be, and has been, used to find novel mathematics. Mind you, AI is not creative; it just tries really weird and unexpected pathways to get to a solution, which is sometimes useful.
But the way AI is used now: making porn of your little niece, chatbots, and hey, how about an AI pilot, eh? And AI of course can take over the work of thousands of developers and DevOps employees, so let’s fire them all and then figure out that AI can’t do any of this shit, not nearly at the level required, and it fucks up about 30% of the time…
People are losing their jobs over this
I am losing my job over this
I can’t find a new job either, because all the recruitment and job finding is now all AI slop, and where five years ago I got a job with 20-30 applications, I have now sent out 200 applications and gotten a single intro interview, and that’s it.
AI promised to take away the mundane boring and dangerous jobs so we could focus on art and fun.
AI took the art and fun and guess who’s left to do the mundane and dangerous?
Yeah.
Don’t even get me started about the shit we’ll face once we make real, actual AI. For the ethics, just watch ST:TNG’s “The Measure of a Man” to get yours started. It will be a shit show.
The article isn’t about the technology. This “experiment” is pure techbro fantasy.
First it’s the tech bros using a tech for something it wasn’t meant for and continuously lying about it. That causes a backlash and makes people hate the tech itself, because it’s being used where it causes friction.
Yeah, it really sucks, because LLM tech itself is amazing. Quantifying language and ideas into what’s basically a massive queryable concept map is a huge achievement. What do the tech giants decide to do with that achievement? Shove it every little place it doesn’t belong making everyone hate it.
Oh well, I’ll keep backing up the interesting local open-source models people make and playing with them in the corner.
Was your reply generated by an LLM? Because you don’t seem to have understood the joke but seem to have confidently gone off on one.
LLMs are a technological dead end. They aren’t interesting in the slightest, as anything they can do is already done more effectively and efficiently with other tools.
Huh?
I think people just need to reset their expectations.
I asked one for help to interpret PCI policy application (credit card regulatory stuff). I gave it the situation and it provided me with a good answer that, when I asked our compliance team about, they agreed.
That saved me a lot of time. I don’t see how that’s a dead end. Then I had it draft a response to the person asking questions; I tuned it a little to my liking and sent it. What might have taken me an hour before took 10 minutes. This seems like a helpful thing, not a bad thing. I’m not sure what other technology would have done that.
But you had to ask your compliance team. Now repeat after your compliance team has been laid off. Good luck.
I had it draft a response to the person asking questions; I tuned it a little to my liking and sent it.
Gemini, remind me not to ask blargh any questions.
Also, Gemini, my daughter is asking for someone to play with her. Can you run around with the feather wand and have her chase it or something?
I think LLMs are an interesting technology. Of course, the output is inherently untrustworthy, and that rules out a ton of applications tech bros are trying to cram it into.
Do you have any examples?
In scientific queries. LLMs return the answer best supported by the bulk of their training data, so if a system or model was recently proven wrong, they still return the outdated answer.
If you make very specific queries about DNA or protein sequence, they usually generate fabrications that are completely wrong.
They tend to return answers trained on the Internet, an uncurated pile of dogshit when it comes to science.
Google search, up until about 5 years ago. Then they enshittified it in favor of AI summaries that regularly get shit wrong.
They aren’t interesting in the slightest, as anything they can do is already done more effectively and efficiently with other tools
Then why are the other tools not being used?
LLMs translate much better than anything that was engineered. Summarization of text is another application where there are simply no engineered counterparts.
LLMs certainly don’t live up to the absurd hype created by the tech sector, but it is just as absurd to state that they are worse than other tools in all tasks.
This tech sucks balls. Stop trying to justify it.
I don’t know what “sucks balls” means in terms of technology.
Does that mean it doesn’t work well, or you hate it, or something else?
It means, Fuck Off, AI.
While it’s one of my favorite words, “inexorably” does not fit here.
When I was young I heard the phrase “time marches inexorably forward” and I always thought it was one of those really cool phrases everyone knew from some philosopher or like from Shakespeare or some highbrow source of wisdom or wit.
Recently I looked it up, and I can’t for the life of me figure out where it came from, or why I thought it was one of those ubiquitous things everyone had heard before. It was probably actually from some X-Men cartoon or something silly but I’ll never figure it out.
I wish I could go back in time and figure out where I heard that phrase with that specific wording but, you know what they say…
This word is new to me! From Dictionary.com:
in a way that is unyielding, unchangeable, or unavoidable.
Fate seemed to be working inexorably, relentlessly, to bring about the dictator’s downfall.

café barista Kajetan Grzelczak sees it differently. “All the workers are pretty much safe,” he told the AP. “The ones who should be worried about their employment are the middle bosses, the people in management.”
This shows that AI can’t do that job either.
They said the dystopian part out loud.
I love to shit on middle management as much as anybody else, but good managers are great. My manager worked his way up as a systems architect. He’s incredibly smart, very friendly, and always has my back.
What getting rid of middle management does is build a solid wall between the workers and the upper class. There’s no corporate ladder to climb. If you start at the bottom, you stay at the bottom. The people on top hire their buddies and other people in their class. This is like a drone strike on the shrinking middle class.
I’d be more afraid of losing that ladder if it were not already absent. Upward mobility in my country, at least, has essentially become a fiction.
I wonder if AI would actually be good at replacing CEOs and other C-suite positions, but was trained in such a way as to purposely not be good at it, because tech CEOs are the ones in control of this bubble.
It has the number 1 qualification for being a C-suite employee - no soul!
Also endless bullshit.
Tells me you’ve never used it and had it deliver an extremely convincing analysis that turns out to be pants-on-head stupid when you dig into the nitty gritty. It is only useful if you can continually watch its output and make it redo anything that is nonsense, and no, the AI can’t watch itself. It will happily confirm that its nonsense is great. It needs either manual, continual analysis or guardrails that tell it when it’s wrong… That’s why it can be used for software: tests and error messages can catch it fucking up. Real life lacks such affordances.
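The “guardrails that tell it when it’s wrong” pattern amounts to a retry loop around an external check. Everything here is hypothetical (the stub model, the check, and the function names are all made up for illustration); it only shows the shape of the loop, using a fake model that happens to answer correctly on its third try.

```python
from typing import Optional

def generate_answer(prompt: str, attempt: int) -> str:
    """Stand-in for an LLM call; returns nonsense until the third attempt."""
    return "4" if attempt >= 2 else "banana"

def passes_guardrail(answer: str) -> bool:
    """The external check the model cannot perform on itself,
    e.g. a unit test or a schema/sanity validation."""
    return answer == "4"

def ask_with_guardrails(prompt: str, max_attempts: int = 5) -> Optional[str]:
    """Generate, check, and retry; give up after max_attempts."""
    for attempt in range(max_attempts):
        answer = generate_answer(prompt, attempt)
        if passes_guardrail(answer):
            return answer
    return None  # no external affordance accepted an answer

print(ask_with_guardrails("What is 2 + 2?"))  # → 4
```

The point of the comment above is that software happens to come with cheap `passes_guardrail` equivalents (tests, compilers, error messages); running a coffee shop does not.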
I’ve used AI for work. We have something built on Claude. I only use it for finding particular lines of code, finding Datadog logs, maybe identifying bugs, and finding old Jiras. It basically just saves time; the rest I do myself or work through with engineering.
Your comment tells me you never worked with someone in the C-suite before. Most Chief level positions will happily confirm their nonsense is great.
Yes, but it is training on this and as a result should get better. AI was bad at everything until it stole the internet and used it for training.
It’s an LLM though, not really AI, and it hasn’t really gotten “better” than automated programs that make decisions based on metrics, which would outperform LLMs as a CEO.
“get better” by guessing a different string of words with no logic or reasoning
Mind you, stealing the internet worked because they effectively had the sum total of human knowledge as a training set. I don’t think that there’s nearly as much detailed data on the minutiae of running a business.
Especially not when they blame its mistakes on “limited context window” AKA learning disability.
You mean like the emails and archived chats of said business?
There is no model that can be trained in real time currently, and one instance isn’t going to offer anything to the model as far as new training data goes.
Has anyone thought that maybe training an AI on a group of people that spend the majority of their lives communicating online might not be the best group to emulate in the real world?
The issue was losing context and forgetting about previous supply orders; it wasn’t about training.
Sure, lots of people. Just not the group of people spending the majority of their lives communicating online.
I think we are those people.
Oh no
Notice we are saying “Don’t do this”.
I think the group of people spending the majority of their lives communicating online would be the first to insist that people who spend their lives online shouldn’t be put in charge of anything in the real world.
God, I’m so sick of AI that I feel like a luddite. I used to be a tech nerd, and enjoy the cutting edge of developing technologies. Now I just wish we could go back in time. I think the problem isn’t so much the developing technology, but rather the way it is being crammed down our throats whether we want it or not. Everywhere I look I’m inundated with AI slop. Youtube has gotten ridiculous. I used to be able to find interesting content fairly easily. Now, every search is full of an endless array of AI slop from brand new accounts with only a few hundred followers. Anything good has been buried by 10,000 AI-generated ripoffs. Maybe someday AI will come into its own, but it is nowhere near there now, and I am so, so tired of having to deal with it. It’s like the entire world is being turned into one of those automated customer service telephone lines that are completely useless; that you’re stuck navigating until you’re put on hold for 30 minutes when you ask to speak to a human.
I’m so sick of AI that I feel like a luddite
The luddites weren’t against technology; they were against the exploitation of workers enabled by technology.
- Get Kagi. I don’t use the internet without it.
- Get the huge uBlock AI blocklist.
- Get NewPipe and only follow YouTubers you trust.
- Collect consoles and PCs from pre-2008 and put them in a room you can lock yourself in, to be free from shitty modern tech.
- Delete social media.
The problem is, AI is being used as a replacement for informed decisions/information, but it was never properly trained on how to be factual or make responsible adult decisions. AI is literally a global spam bot/virus that has infected Earth worse than Covid ever could. And the people pushing it on us are worse than anti-vax/anti-maskers.
LLMs like Gemini have basically the exact same UI form factor as the Starship Enterprise’s computer. All you need is that little tweedle “I’m listening” prompt and a Text-To-Majel-Barrett library. Thing is, on the Enterprise, it always worked correctly. If you asked it for a statement of fact you’d get a quote out of a database. Gemini will just make shit up that sounds plausible.
Exactly. I am a time thief at work with limited internet access at my job, but I have access to Gemini and I use it because I have nothing else to do. I need to correct it ALL the time.