“The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con” by @baldur
softwarecrisis.dev/letters/llm…

It is a long read, but I feel it does a good job of discussing how LLMs are more interested in truthiness than in admitting when they don’t know.

in reply to Adrian Roselli, pH0

Here @eevee does a good job of breaking down how users *thinking* an LLM response is good is *not* the same as the response being good (or correct):
github.com/mdn/yari/issues/923…

(As before, please refrain from piling on in the comments.)

in reply to Adrian Roselli, pH0

Last month I argued why ‘AI’ will not fix #accessibility:
adrianroselli.com/2023/06/no-a…

As if to prove my point, #UserWay launched a broken LLM:
FuxMyCode.ai/

Yet the sorta-press shills nonsense like this PR from #AudioEye:
web.archive.org/web/2023070315…

Pay attention to who promotes LLMs as a solution. Usually it is a money play, or laziness.

#a11y
