I think there is a very real possibility that widespread AI use increases the number of “repair”-like tech jobs, even if it simultaneously eliminates some of the entry-level “create” ones.
The analogous situation to me is the 80s and 90s, when computers really started to be used by businesses and regular people. There were so many issues with digitization (moving businesses from paper files to computer files, and so on) that entire industries grew up around it. This was relevant to me personally, as my parents made a living doing this for decades.
So if AI results in a ton of half-baked broken-but-profitable software, it might turn out the same way.
I looove cleaning up/refactoring old code, improving build systems, writing tests, etc. I've been contracting part-time at a place for a couple of years now, and that's basically all I do. I suspect there's going to be a greater need for people like me in the near future. I don't think the code LLMs or agents produce is any better or worse than the code they were trained on. Not to sound too judgmental or condescending, but there's a lot of crap out there. That's a good thing as far as I'm concerned, because I've made a nice bit of extra cash cleaning that crap up. I imagine the description of what I do is going to go from "I'm being paid to fix issues caused by years of accumulated technical debt" to the title of this post. Ironically, AI helps me with my job to some extent, but I usually end up rewriting most of the code it generates because it follows the same bad patterns I'm trying to address.
LLMs must be handheld by humans and cannot function autonomously. There are two main forms of handholding or human supervision:
Synchronously, where we prompt a CLI or chatbot interface and verify or edit its output before putting it to use.
Asynchronously, in the form of code that implements RAG guardrails, examples in prompts, evals, a workflow that uses a second model to correct the output of the first, etc. (a minimal sketch of that last pattern follows this list).
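As a rough illustration of the asynchronous form, here is a minimal sketch of the second-model-corrects-the-first pattern. Everything in it is hypothetical: call_model is a stand-in for whatever LLM client you actually use, and the prompts are placeholders, not recommendations.

```python
# Minimal sketch of asynchronous supervision: a second model pass reviews and
# corrects the first model's output before anything downstream sees it.
# call_model is a hypothetical stand-in for your LLM provider's client.

def call_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's SDK here."""
    # Stubbed so the sketch runs standalone; a real implementation would hit an API.
    return f"[model output for: {prompt[:40]}...]"

def generate_with_review(task: str) -> str:
    draft = call_model(f"Write a short product description for: {task}")
    # Second pass: critique and correct the draft. This is supervision baked
    # into the workflow, not a human reading every string.
    corrected = call_model(
        "Review the following text for factual claims, placeholder text and "
        "tone problems, then return a corrected version only:\n\n" + draft
    )
    return corrected

if __name__ == "__main__":
    print(generate_with_review("a stainless steel water bottle"))
```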
In fact, from this need for human supervision you can infer a principle that I have had in mind for some time now:
Output generated by language models through automated workflows must never be exposed directly to the end-user. [1]
These organizations have obviously skipped the human QA step and gone with an autonomous workflow that publishes the generated text directly to the website. They should have had humans synchronously in the loop, validating and editing every string after it's generated.
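That QA step doesn't have to be elaborate. A minimal sketch, assuming a hypothetical review queue (none of these names come from any real CMS or framework): every generated string is parked until a human has validated and edited it, and the publish step refuses anything a human hasn't touched.

```python
# Hypothetical human-in-the-loop gate: generated strings land in a review queue
# instead of being published directly. All names are made up for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str                        # raw model output
    edited_text: Optional[str] = None
    approved: bool = False

class ReviewQueue:
    def __init__(self) -> None:
        self.pending: list[Draft] = []

    def submit(self, generated_text: str) -> Draft:
        draft = Draft(text=generated_text)
        self.pending.append(draft)
        return draft

    def approve(self, draft: Draft, edited_text: str) -> None:
        # A human validates and edits every string before it can go live.
        draft.edited_text = edited_text
        draft.approved = True

def publish(draft: Draft) -> str:
    if not draft.approved or draft.edited_text is None:
        raise RuntimeError("refusing to publish unreviewed model output")
    return draft.edited_text
```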
Obviously what caused this is the overselling of LLMs as "intelligent".
---
[1] When you prompt a chatbot while working on a new marketing strategy, you are not the end user of its output. Your organization is. You wouldn’t dream of copying the raw output from ChatGPT and presenting it as your new strategy.
Samzies