New blog post ✍️ by Lucy Li, Maarten Sap, Luca Soldaini, and me about large language models and how to use them with care.

We discuss:
- 10 current risks ⚠️ posed by chatbots and writing assistants to their users
- 7 questions 🤔 to ask yourself before using these tools

Our goal with this post is to increase transparency for everyday users 💻🔍

#AI #GenerativeAI #LLMs #Chatbots

Risk #1: LLMs can produce factually incorrect text.

Risk #2: LLMs can produce untrustworthy explanations.

Risk #3: LLMs can persuade and influence, and they can provide unhealthy advice.

Risk #4: LLMs can simulate feelings, personality, and relationships.

Risk #5: LLMs can change their outputs dramatically based on tiny changes in a conversation.

Risk #6: LLMs store your conversations and can use them as training data.

Risk #7: LLMs cannot attribute sources for the text they produce.
