“The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con” by @baldur
https://softwarecrisis.dev/letters/llmentalist/

It is a long read, but I feel it does a good job of discussing how LLMs are more interested in truthiness than in admitting when they don’t know.

in reply to Adrian Roselli

Here @eevee does a good job breaking down how users *thinking* an LLM response is good is *not* the same as the response actually being good (or correct):
https://github.com/mdn/yari/issues/9230#issuecomment-1623447279

(As before, please refrain from piling on in the comments.)

in reply to Adrian Roselli

Last month I argued why ‘AI’ will not fix #accessibility:
https://adrianroselli.com/2023/06/no-ai-will-not-fix-accessibility.html

As if to prove my point, #UserWay launched a broken LLM:
http://FuxMyCode.ai/

Yet the sorta-press shills nonsense like this PR piece from #AudioEye:
https://web.archive.org/web/20230703154507/https://www.forbes.com/sites/stevenaquino/2023/06/29/audioeye-shares-results-of-ai-and-accessibility-study-says-it-illustrates-tremendous-potential-for-responsible-use/?sh=39fe08831898

Pay attention to who promotes LLMs as a solution. Usually it is a money play or laziness.

#a11y
