> “To replace programmers with AI, clients will have to accurately describe what they want. We're safe.”
I've often had similar sentiments, and it gets to the heart of things.
And it's true... for now.
The caveat is that LLMs can already, in some cases, notice that you're doing something in a non-standard or even sub-optimal way and offer "Perhaps what you meant was..."-style suggestions. Similarly, they'll lay out responses as "Option 1", "Option 2", etc. Of course, most clients want someone else to sort through the options...
Also, LLMs don't seem to be good at assessing a problem across multiple abstraction levels: they'll notice a better option within the approach your question directly suggests, but not that the whole approach is misguided and should be rethought. The classic XY problem (https://en.wikipedia.org/wiki/XY_problem).
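A toy illustration of the kind of XY problem I mean (hypothetical question, Python just for concreteness):

```python
from pathlib import Path

# X (what was asked): "How do I slice out the characters after the last dot?"
# An LLM will happily polish this narrow question without stepping back.
def extension_by_slicing(filename: str) -> str:
    dot = filename.rfind(".")
    return filename[dot + 1:] if dot != -1 else ""

# Y (what was actually needed): "How do I get a file's extension?"
# The better answer is to drop the slicing approach entirely and use pathlib,
# which already handles cases like "archive.tar.gz" and ".bashrc".
def extension_by_pathlib(filename: str) -> str:
    return Path(filename).suffix  # ".gz" for "archive.tar.gz", "" for ".bashrc"

print(extension_by_slicing(".bashrc"))   # "bashrc" -- probably not what you wanted
print(extension_by_pathlib(".bashrc"))   # ""
```

Answering X well is easy; noticing that the question itself is the problem is the part LLMs still tend to miss.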
In theory, though, I don't see why they couldn't keep improving along these dimensions. Even if they do, I suspect many people will still pay a human to interact with the LLM on their behalf for complex tasks, until the difference between a human UI and an LLM UI all but vanishes.
Yeah, the difference a human in the loop makes is exactly that feedback: "Did you think about X?" "Requirement Y is vague." "Z and W seem to conflict."
Up to now, all our attempts to "compile" requirements into code have failed, because it turns out that specifying every nuance in a requirements doc in one shot is unreasonable; at that point you may as well skip the requirements in English and just write them in Java.
But AI assistants could (eventually, presumably) provide that feedback loop, write the code, and iterate on the requirements, all much faster and more precisely than a human could.
Whether that's possible remains to be seen, but I'd not say human coders are out of the woods just yet.