After a LinkedIn post about AI hallucinations generated 166 comments, I realized: everyone was right, but talking about completely different problems.
This breaks down five perspectives (practitioners, displaced workers, engineers, educators, skeptics) and what each reveals that we're not discussing:
- The verification paradox (if you can verify it, you could probably do it yourself)
- Three zones of AI appropriateness (Green/Yellow/Red)
- Why AI hallucinations are structurally different from human errors
- The human capital problem (if AI does junior work, where do seniors come from?)
A fundamental concept of basic software design, mirrored in most widely adopted software: design software to adapt to the user; do not require the user to modify their behavior to use the software.
Any argument that the user must "learn" to use what is arguably the world's largest and fastest software implementation is fallacious.
Any sufficiently mature human-replacement software should be capable of accepting all varieties of human input.
That's only four things that we're not discussing. Did you miss one?
Five perspectives (in parentheses), four key concepts (bulleted). The perspectives reveal the concepts we're missing in the debate.
Ah. I thought there was going to be a key concept for each perspective.
5. off-by-one errors