

Or rather the right to use shovels under ToS that can be changed on a whim.


I really don’t get this quantity-first approach. If you wanted to actually transform the world with tech in a way that isn’t just superficial, you’d create task forces that sit together with specialists in each field (medical, construction, logistics, finance, etc.), give them two years to build prototypes and action plans, then bet on the N most promising applications, spin them off as separate companies with premium access to your most advanced AI models, and vertically integrate them into their workflows.
This would actually and sustainably gain a foothold in these industries and disrupt and transform them long term.


He’s the majority shareholder, or he has some trick to ensure he’s never dethroned.


I have acquaintances at Meta, and they literally waste tokens on bullshit tasks. They have something like ten agents running simultaneously on some elaborate task that takes a long time. You can’t tell me this is more productive or efficient than doing actual work, even if half of these tasks are somewhat useful and related to your project.


Just another of the pedo’s DUI hires.


The less money they have, the less damage they cause.


In the meantime, let’s drain as much capital from the place as we can.


If I said the same thing about Judaism I’d get arrested in Germany.


It sounds like they were measuring chatbot use rather than a deeper integration into their systems. That may not be the best use of LLMs.


LLMs are the only thing that is hyped. The other models and applications already existed back when ChatGPT first hit the public, and they haven’t had any special breakthrough that would explain exponential growth in investment or a need for compute power. Language models had that with the transformer architecture; everything else just develops iteratively.
The bubble we see now is because of language models. We can try to conflate them with other deep models and call it all AI, but that doesn’t change the fact that the generative models are the only ones requiring these resources while still looking for a problem to solve.


I’m not sure I get the concern. If there are vulnerabilities, they have probably already been sold to the NSA, other state hackers, and black hats. Mythos would help close them for everyone.
Sure, a bad actor could use it to break in, but Mythos is not some secret hacking tool; it’s an expensive LLM you can run against your own code and systems, giving you the upper hand.
Anthropic is actually acting responsibly by contacting maintainers and platforms with the bugs and the opportunity to analyze their systems before it’s released to the wider public. And if it’s all hype, then this is a money-grabbing operation to finally make good money off of LLMs. That concern, however, doesn’t seem to be shared by the financiers.
Soon you’ll have to pay for the privilege of communicating with a human instead of an LLM chatbot.
Edit: I realize I need to specify because I forgot to add relevant context.
At least in my experience, these models are pretty good now at writing code based on best practices. If you ask for impractical things, they will start doing ugly shortcuts or workarounds. A good eye catches these, and you either rerun with a refined prompt, fix your own design, or just keep telling it how you want it fixed.
You still gotta know what good code looks like to write it, but the models can help a lot.