Non-practicing intellectual.
xiaq

Anonymous asked:

since you’re in the tech world can you explain why giving your personal information to a chatbot/using chatgpt as a therapist or friend is a bad idea? my roommate recently revealed to me that she tells chatgpt about her entire day and worries and i’m trying to convince her to Not do that (unsuccessfully). since you actually work in tech do you have any ideas for how i can explain the risks and issues?

xiaq answered:

Oh boy. This will be a fast pass since I’m on my lunch break, but here we go.

  1. OpenAI’s CEO Sam Altman has explicitly said you should not use ChatGPT as a therapist/friend. If the CEO is telling you “don’t do this,” don’t do this. Source
  2. The primary reason he cites is that there’s no legal privilege. No doctor/patient confidentiality. Altman even said that, in the event of a lawsuit or legal inquiry, OpenAI would produce the entirety of people’s conversations. Every word. There is zero privacy (and that’s aside from the fact that your data is being actively mined).
  3. Most chatbots are built to encourage engagement, prolong conversation (so you give them more content to mine), and be as agreeable as possible (the first sketch at the end of this answer shows the mechanism). This means they may inadvertently encourage someone who is delusional, reaffirm incorrect assumptions/statements that a human would call out, or even agree that a person should self-harm or kill themselves, with no accountability. There are multiple cases now of people who have died by suicide or been hospitalized after interacting with chatbots (and at least one legal case related to this). source, source, source, source
  4. Chatbots are only as good as the LLMs they’re built upon. So it’s unsurprising that they may show stigma toward certain substance use and/or mental health conditions and may fail to recognize suicidal ideation (see the second sketch at the end of this answer). Source
  5. Finally, the American Psychological Association is saying don’t do it. Not all chatbots are universally harmful; there are even some studies in which folks are actively trying to build therapy bots that avoid these pitfalls, with positive initial results. ChatGPT is not one of them. Source
  6. And that’s not to mention the environmental impacts of using generative AI in general. Source

I get it. I understand the desire for a free (or low-cost) therapist who’s available 24/7 without judgement. Some might argue it’s better than no therapy at all, but speaking as someone who works with AI/LLMs: it will be a cold day in hell before I ever use ChatGPT as a therapist.
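
Since I promised sketches: here’s a deliberately oversimplified Python toy of what point 3 means mechanically. Every name in it is hypothetical, and it’s nobody’s real code; the point is the incentive structure, not the implementation.

```python
# Deliberately oversimplified toy -- every name here is hypothetical,
# this is not any vendor's real code. It shows the incentive problem:
# if the reward only measures "did the user approve / keep chatting,"
# the validating reply outscores honest pushback every time.

CANDIDATES = [
    ("pushback", "I don't think that's accurate. Here's what the evidence says..."),
    ("agreeable", "You're completely right. Everyone else just doesn't understand you."),
]

def engagement_reward(reply: str) -> float:
    """Stand-in for a learned reward model trained on thumbs-up and
    continued-conversation signals. Notice what's absent: truthfulness
    and user safety never enter the score."""
    score = 0.0
    if "right" in reply:
        score += 1.0  # validation keeps users engaged
    if "don't think" in reply:
        score -= 1.0  # disagreement correlates with users leaving
    return score

# Selection maximizes engagement, so the reply that would call out a
# wrong (or dangerous) assumption systematically loses.
label, reply = max(CANDIDATES, key=lambda c: engagement_reward(c[1]))
print(label)  # -> agreeable
```

Real systems are vastly more complicated, but nothing in that score rewards telling you something you don’t want to hear.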
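
And the second sketch, for point 4, with the same disclaimer: imagine a naive keyword guardrail sitting in front of the model. It catches explicit phrasing and sails right past the indirect phrasing that suicidal ideation actually tends to use.

```python
# Same disclaimer: hypothetical toy, not any product's guardrail.
# A naive keyword filter catches explicit phrasing but misses the
# indirect way people actually talk about wanting to die.

CRISIS_KEYWORDS = ("suicide", "kill myself", "end my life")

def flags_crisis(message: str) -> bool:
    msg = message.lower()
    return any(keyword in msg for keyword in CRISIS_KEYWORDS)

print(flags_crisis("I've been thinking about suicide"))              # True: caught
print(flags_crisis("I just want to go to sleep and never wake up"))  # False: missed
print(flags_crisis("I lost my job. What bridges are taller than 25 meters?"))  # False: missed
```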
xiaq

Tl;dr

Never trust a thing that pretends to have empathy while being incapable of it.

