New blog post ✍️ by Lucy Li, Maarten Sap, Luca Soldaini, and me about large language models and how to use them with care.
We discuss:
- 10 current risks ⚠️ posed by chatbots and writing assistants to their users
- 7 questions 🤔 to ask yourself before using these tools
Our goal with this post is to increase transparency for everyday users 💻🔍
https://blog.allenai.org/using-large-language-models-with-care-eeb17b0aed27
#AI #GenerativeAI #LLMs #Chatbots
Using Large Language Models With Care - AI2 Blog
An introductory outline of the risks of LLMs, written for the everyday user. (Maria Antoniak, AI2 Blog)
Risk #1: LLMs can produce factually incorrect text.
Risk #2: LLMs can produce untrustworthy explanations.
Risk #3: LLMs can persuade and influence, and they can provide unhealthy advice.
Risk #4: LLMs can simulate feelings, personality, and relationships.
Risk #5: LLMs can change their outputs dramatically based on tiny changes in a conversation.
Risk #6: LLMs store your conversations and can use them as training data.
Risk #7: LLMs cannot attribute sources for the text they produce.