60 employees who can’t be productive without AI?
And this is progress?
This makes me so happy about my employer. I’m sysadmin for a newspaper.
We had an all-company test run 2 weeks ago to answer the question “What if we’re hacked?”
Turns out we’re able to produce a printed and online newspaper within a work day if NONE of our normal IT systems (hardware, software, e-mail, network) are accessible.
Everything we need has a redundancy that’s kept completely physically separated from the network until the day it’s needed.
My company is pivoting hard to Claude for everything, and besides the fact that it’s irritating as fuck to use, it has me worried about shenanigans like in this article. For almost 50 years, they’ve had a “no reliance upon 3rd party platforms for core functions” policy, but since they hired an AI apologist into the C-suite, all of that has gone out the window in a matter of months.
Got me thinking I should warm up my resume…
Don’t wait, start now. The job market is a nightmare and finding one that isn’t being consumed by incompetent C-level AI FOMO is getting harder every day. I work on life-saving medical equipment and AI is being pushed on us for things that could literally kill people if not done correctly. Why would anyone spend 30 minutes using AI and risking people’s lives when I can just write it myself in 5 or 10? Madness. Complete, society-scale madness. The people pushing AI have no fucking idea what they are doing or how engineering works. People are going to die.
I’ve been unemployed for going on 18 months. It’s awful and the market is the worst it’s been since I’ve been working (15 years or so).
Its ok tho, there’s no recession, becuz stock marmket!!! 11!1!11!!!
Fuck AI and all, but to be faaaiiiiir, if you take away most people’s computers they would be far less efficient than someone that did the same job without one 50 years ago.
In the profession I recently retired from, if they suddenly went back 50 years in tech the global economy would crash, and even a 20-30 year regression in tech would seriously fuck things up until people adjusted. And even then they wouldn’t be able to reach the same levels of efficiency.
Your point is well-taken, but this is also exactly why AI reliance is dangerous. Anyone who sees this should realize the precarity of relying on products that can just be locked away from you.
Like Gmail? Google drive? Slack?
I’m not defending AI, but I can come up with >10 products that would absolutely cripple the company I work at if the provider suddenly says “Soz, terms of service violation”.
Vendor reliance is dangerous, and that doesn’t just apply to AI. If the company in OP’s message had both Claude and Gemini, they’d have been okay, so the problem isn’t AI specifically - the problem is reliance on services that are critical to workflows, combined with providers being able to change their minds at a moment’s notice.
In any case, leaving aside where the problem is, the idea that 60 employees can’t use Natural Intelligence to do their jobs means there’s something really wrong with that company…
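The Claude-plus-Gemini redundancy point above can be sketched in code. This is a minimal sketch with hypothetical `call_primary`/`call_backup` stand-ins; the names and behavior are assumptions for illustration, not any vendor’s actual API:

```python
# Minimal sketch of multi-vendor failover, assuming hypothetical
# call_primary/call_backup stand-ins (NOT real provider SDK calls).

def call_primary(prompt: str) -> str:
    # Simulate the scenario in the thread: the vendor locks you out.
    raise ConnectionError("primary provider revoked access")

def call_backup(prompt: str) -> str:
    # Placeholder for a second provider that still answers.
    return f"backup answer to: {prompt}"

def complete(prompt: str, providers=(call_primary, call_backup)) -> str:
    """Try each provider in order; fall through on outage or lockout."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except ConnectionError as exc:
            errors.append(exc)
    raise RuntimeError(f"all providers failed: {errors}")

print(complete("summarize the incident report"))
# prints "backup answer to: summarize the incident report"
```

The point isn’t the ten lines of code; it’s that the abstraction has to exist in the workflow before the lockout happens.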
1000% this.
It’s a Faustian bargain, a company gives up all of their internal IT staff and hardware and becomes completely dependent on a vendor for critical business processes. It’s like the opposite of insurance, they’re saving some money by risking a total loss of their ability to do business should the vendor pull support.
Windows 11, Onedrive, Intel Management Engine, Google accounts, …
It’s not that they can’t be productive. Right now at least, what AI does is amplify how much work you can do. One of my friends codes for a big company that uses state of the art Claude models and he says that the system does 80-90% of the coding grunt work and the job is more of an editor and making sure everything is correctly annotated so that humans can understand what’s happening in the code in the future. This means that work that might have taken months he can complete in a week or two.
This approach to coding is exactly what creates the problem. They will find out the hard way if they can continue to be productive when something breaks and AI is not available for whatever reason. Does anyone know how to fix it? Is the documentation sufficient to understand what the AI did?
This is how the Adeptus Mechanicus is born.
Good analogy. I’m gonna steal that :D
My friend said early AI iterations were really opaque, and that even now, if you have it design the core architecture, you’re going to hit the problems you mentioned. But his job has basically changed to being focused mostly on being that architect. To use the metaphor of constructing a building: he used to have to do a lot of the manual labor too, not just be the architect. Now he just has to tell the AI system what to build AND how, but the majority of the actual “construction” work is done by the AI system.
To continue the analogy, though: how many architects design things that an engineer takes one look at and laughs at because they’re structurally impossible? (Hint: a lot.) Knowing the deep parts of the code and how it works becomes even more invaluable; otherwise you risk Chinese building practices (quick, looks good, falls apart quickly).
My friend is a full stack programmer with over 15 years’ experience at one of the largest financial institutions, so he can handle what you’re talking about no problem. But what IS a huge problem is that the reason he has the requisite knowledge now is that he spent years learning best practices by doing the grunt work that’s going to disappear. So in a few years they might no longer have people with the skills to do things right, and then what you’re describing will absolutely happen and build quality will go to hell. The assumption from big tech is that by then the models will have improved enough that it won’t matter.
That’s a hell of an assumption. Since we’re whipping out credentials, I’ve been in IT almost 30 years and I can tell you it’s not going to work like that.
I’m not the person you were replying to, but I’ve also been in tech since 1996, and lots of things have worked just like that. All successful technology starts off barely functional and improves over time until nearly all members of its intended audience can successfully use it.
As an example, in 1996 setting up a router was a specialty task that required training; by 2016 any moron could buy one off the shelf and have it running in an hour. As another example, basic HTML was a specialty skill in 1996, but by 2003 you could do it with Microsoft Word. Smartphones are another example: they went from barely functional Windows Mobile and Blackberry devices, which required ridiculous amounts of back-end skill just to deliver email, to iPhones and Androids that any numskull can use for nearly anything at all.
My point is this: too many people are stuck on the “What use is a newborn baby?” question without realizing that the infant is growing up at blinding speed. It’s also the first technology to carry the promise, real or not, of self-improvement once it reaches sufficient maturity. Assuming that happens, all further improvement will be increasingly automatic and happen even faster.
AI isn’t going away and it’s only going to get better as time goes on.
I can see, in programming, how the current AI trend is displacing a lot of junior programmers who will not be senior programmers in 10 years due to the inability to obtain experience.
AI hasn’t come for DevOps or SysAdmin jobs either, but it’s ‘good enough’ to do help-desk/tier 1-type tasks. That limits the job pool for new IT workers and will create a future shortage of experienced workers.
I’m not worried about MY job; I’ve already accumulated the experience. It’s the new guys trying to get into support positions, where they are glorified knowledge base/Google searchers, who are having a hard time, because AI CAN do search and summarization/RAG pretty effectively.
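On the search/RAG point: the retrieval half is, at its simplest, ranking documents by similarity to a query. A toy sketch, with word-count vectors and cosine similarity standing in for the learned embeddings real systems use (the knowledge-base entries here are made up):

```python
# Toy sketch of the retrieval step behind "search and summarization/RAG":
# rank knowledge-base articles by cosine similarity of word-count vectors.
# Real systems use learned embeddings; this only illustrates the idea.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

kb = [
    "reset your password from the account settings page",
    "printer offline: power cycle the device and check the network cable",
    "vpn setup guide for remote employees",
]
print(retrieve("how do I reset my password", kb))
# prints ['reset your password from the account settings page']
```

That lookup-and-summarize loop is exactly the tier 1 “search the knowledge base” task, which is why it’s the first rung of the ladder to get automated.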
Then you’re not dealing with cutting edge tech. Living in the past isn’t going to help you.
At least in my experience, these models are pretty good now at writing code based on best practices. If you ask for impractical things, they will start doing ugly shortcuts or workarounds. A good eye catches these, and you either rerun with a refined prompt, fix your own design, or just keep telling it how you want it fixed.
You still gotta know what good code looks like to write it, but the models can help a lot.
I don’t doubt that it is possible to create good code when focusing on programming best practices and taking the time to check the AI output thoroughly. Time, however, is a luxury most of the devs in those companies don’t have, because they are expected to have 10x code output. And that’s why the shit hits the fan: bad code gets reviewed under pressure, reviewers burn out or bore out, and the codebase deteriorates over time.
But we have to identify this as what it is: an internal policy failure where they abandon proven processes to maintain code quality.
I guess I’m lucky my managers haven’t put that pressure on me yet. I do, however, see developers getting sloppier and lazier, so the reviews actually take more effort, and AI rarely catches all the problems with a change.
This is what I’m hearing too. One thing my friend did mention was that without a nearly unlimited amount of tokens he’d run out really quickly.
Regardless of the fact that work has ground to a halt the CEO will continue to claim productivity has never been higher since implementing AI
Eh consider it like a power outage. These corporations don’t deserve more than automated slop. If that system is down, it’s an earned break
Funny how nobody seems to use this argument every time there’s a problem with the NYC subway.
Because there are alternatives. You don’t have to use the subway if it breaks down, and people have enough brains to take a taxi or walk instead.
This is 60 people going, “Fuck, the subway is down. Guess I can’t travel anywhere, now.”
Which problems are you referring to? None of the physical issues, nor the human behaviour issues are relevant here.
Based on a quick web search, staff can only remove people temporarily for rule violations; it takes a court order to get a long-term ban from the NYC subway.
Disliking AI is fine and good. But that is a really dumb argument.
“60 employees who can’t be productive without the internet? And this is progress?”
“60 employees who can’t be productive without computers? And this is progress?”
“60 scribes who can’t be productive without clay tablets? And this is progress?”
Etc.
Edit: LLMs/AI are going to change some things. They are going to make (shitty) coding and various automations much more accessible. They are probably not a revolutionary technology like computers/the internet, but the idea that they could become a core part of some people’s workflows is absolutely not unthinkable. So far there’s been no demonstrated major boon to productivity on the whole, but that doesn’t mean they don’t have some use cases.
In the military we have a maintenance tracking system. It’s electronic. We literally do drills for when it goes down and we have to resort to paper backups. And there are paper backups.
Without a computer I could still manage an entire flight line’s worth of planes and everything they need: maintenance, fueling, sorties, etc. What you’re telling me is that this company, and lots of companies, have no contingency for a system failure or other outage.
And that seems acceptable? Why? Short of a power outage (and probably not even then, unless we can’t jerry-rig a lighting solution), we can do all the required jobs with hand tools. It’s crazy that people don’t think this should be a thing.
One is a deterministic machine on your desk, that you own, to do stuff at your desk.
The other is a nondeterministic thing somewhere else, that you don’t own, to do stuff at your desk.
So?
Seriously?
This isn’t an anti-AI argument it’s a pro-UBI argument
I was talking about a false dichotomy (before the person I replied to edited their comment to save face)
what are you talking about
You people are like flat earthers with this AI hatred.
It’s genuinely fascinating and useful. You’re allowed to hate the companies and evil behind it, but the kid in me is still enthralled by this technology.
It’s just getting weird at this point.
I’m pretty sure the reason tech employees hate it so much is because it’s an existential threat to their profession. If it wasn’t, they wouldn’t spend so much time talking about it.
Huh.
Except, unlike computers and the internet, AI is not essential, unless your whole business revolves around it (in which case, good riddance).
If the Internet is down for a period of time at the office, I would expect that my dev team is able to continue working (assuming they’re not exclusively hitting a third party API). At least for a few hours, if not days. It might not be the same cadence, but I’m not about to send them home.
Computers are a tool; AI is an outsourcing. It’s the difference between a carpentry team not having saws, hammers, etc. and having the carpentry team unable to do work if Jose (the outsourced carpenter) doesn’t come in.