Finance ministers, central bankers and financiers have expressed serious concerns about a powerful new AI model they fear could undermine the security of financial systems.
The development of the Claude Mythos model by Anthropic has led to crisis meetings, after it found vulnerabilities in many major operating systems.
Experts say it potentially has an unprecedented ability to identify and exploit cyber-security weaknesses - though others caution further testing is needed to properly understand its capabilities.
Canadian Finance Minister François-Philippe Champagne told the BBC that Mythos had been discussed extensively at the International Monetary Fund (IMF) meeting in Washington DC this week.
“Certainly it is serious enough to warrant the attention of all the finance ministers,” he said.
“Capitalists dependent upon AI bubble, feign concern regarding latest snake oil release to help prop AI bubble”
FTFY
I know it’s cool to be blasé about AI stuff, but if there’s an area where the hype is warranted it’s computer security research.
I don’t want to look at AI “art” or read an AI-generated “book”, but the exploits derived from an AI-enabled process work just as well as the organic version. And you don’t need a warehouse full of Eastern European zoomers and junk food to get them.
“And you don’t need a warehouse full of Eastern European zoomers and junk food to get them.”
Are you certain about that? Anthropic has a team of security engineers “validating” the LLM output, and then passing the “validated” outputs on to third-party security researchers to “confirm” them.
Tellingly, they don’t say how many false positives have to be filtered out in order to find the real vulnerabilities with working exploits, but I imagine that if all those security researchers were tasked with auditing the same codebases, they would probably find the same (or more) vulnerabilities without an LLM’s shotgun guessing to guide them.
You need to remember that these claims are being made by a company that has enormous financial incentive to make everyone believe that this model is a huge breakthrough.
My comment was not generalized AI snark, it was specific to Claude Mythos.
At least according to Ed Zitron, the reason Mythos is not being released to the general public is simply that it’s too fucking expensive to run.
And all of the vulnerabilities it found were already found by other, less expensive models.
This scare-tactic PR strategy is pure marketing hype for Anthropic; that’s it.
Again, according to Ed.
Maybe time will show that I was wrong to trust Ed’s reporting more than AI “tech leaders”, but until that time comes, I know who I lean towards believing more.
I’m not sure I get the concern. If there are vulnerabilities, they have probably been sold to the NSA, other state hackers, and black hats already. Mythos would help close them for everyone.
Sure, a bad actor could use it to break in, but Mythos is not some secret hacking tool; it’s an expensive LLM you can run against your own code and systems, giving you the upper hand.
Anthropic is actually acting responsibly by contacting maintainers and platforms with the bugs and giving them the chance to analyze their systems before it’s released to the wider public. And if it’s all hype, then this is a money-grabbing operation to finally make good money off of LLMs. That concern, however, doesn’t seem to be shared by the financiers.
I’m worried about the tons of barely maintained software run by your average company. Most commercial software is made by relatively small outfits and is drowning in technical debt. The only thing saving their customers is the effort it takes to pick through it.
But now any loser with a decompiler and a $100 Claude sub can ruin a whole lot of people’s day.
Things will get better, but the near term is pretty fucked.
“The difference is that the Strait of Hormuz - we know where it is and we know how large it is… the issue that we’re facing with Anthropic is that it’s the unknown, unknown.”
Someone get Donald Rumsfeld his royalty check…
I can’t believe people are still using that shit.
Anyways,
AI companies are saying it’s a huge danger but are ever so graciously willing to sell it to existing corps to fix the vulnerabilities it can supposedly circumvent…
Before it circumvents them…
Even tho the AI companies are also saying it will be able to circumvent anything regardless.
And even if they tell people it can make something it can’t also break (impossible), in 6 months they’ll repeat the process.
A never-ending cycle where they sell the shiny new model and corps pay for it, because eventually hackers will have it too once it’s publicly released.
An endless cat-and-mouse game where the AI companies constantly accumulate more and more wealth.



