

Oh I think this is for sure the case. And it’s very interesting to see Darwin appreciated that too.
A little bit of neuroscience and a little bit of computing

At 0Hz sound is just the local air pressure and the weather forecast is technically sheet music.
👌
Thought this was from the hover text for a second!


Yeah … that’s the bit I don’t get: why don’t people care about this more?
If we’re replaced, there’s nothing really left for us in terms of the way we’ve conceived our whole world for centuries. Sure, maybe we go native again or something, but let’s be real, that is a massively tough transition even if it’s viable.
Oh I hear you (and appreciate the response).
For me, I can’t help but think of another alternative, which I’m surprised I haven’t heard of yet …
stripping down one’s personal technological cognitive load to a stack of systems that can fit into one’s brain (like the Python mantra), focusing on learning that stack well, building sustainable and stable systems, and then just detoxing from the increasingly polluted digital information stream (protected commons, traditional formats such as books, and in-person engagement … dunno).
Depends on what the end goal is, but AI seems to be about using tech more, or else just opting out of sovereignty. Something like the above seems to me to be about using tech less (in the end), pushing tech toward being a secondary tool rather than an end in itself.
Probably a shallow response …
But I always figured AI/LLMs are basically apocalyptic for all sorts of individualistic values in computing (including privacy but also independence and diversity).
Whether they’re good or useful etc, I just struggle to see how they will ever be justifiable against these sorts of values.
Sure, local models and our hardware will get better … but better than the state of the art from the big labs and providers? Given that data and training are the big bottlenecks on quality … I struggle to see how AI isn’t a complete feudal capturing of information computing and processing. Not to mention what happens to the pipeline that produces information content if everyone is only consuming it through the models that train on it.
So for me the big question is: what’s our call on a possible (likely, even?) future where we are forever stuck using cloud-provided AI along with all of its negatives, in the same way that basically all of us have been, and still are, stuck using MS Windows, Google and the big-social-media hellscape?
For me, I baulk at this.
Anyone else wonder if there’ll be an asymmetry between the ease/speed at which bugs/vulnerabilities are found and the ease/speed at which they’ll be fixed with AI systems?
That is, AI assistance may find and exploit bugs more easily than it can fix them?