Discussion about this post

Karla

I didn't quite understand why anthropomorphising AI is a problem. Is it about the risk of trusting its capabilities beyond what the product is designed to do? And therefore people are misled, which can lead to...? It's good to be aware of the limitations. When Claude is hallucinating, I get reminded that it's just another technical assistant tool, however much better than what we had three years ago.

I think there is a huge step between acting politely when using technology (e.g. saying please) and actually believing it's a conscious being worthy of moral concern. Another concern could of course be: if you value having true beliefs, then anthropomorphising something which isn't conscious is bad, as it potentially deceives naïve people.

On the other hand, LLMs have occasionally been better discussion partners for me than humans when it comes to solving personal problems. And compared to therapy, a lot cheaper. (That statement naturally comes with a lot of caveats, like the risks of relying on LLMs for serious mental health problems. But for me it was helpful when my friends' and family's ability to help me in my situation wasn't enough, yet the issue wasn't big enough to require a specialised professional.)

ChatGPT and other LLMs being designed to mimic human interaction makes a lot of sense to me as a way of making the products more user-friendly. If I can give instructions the way I'd give them to another person, I don't have to learn a new way of talking, and that makes the product a lot more accessible. Whether they need to have human-like names and faces is easier to debate. But I think more kindness and empathy, even when directed towards something nonliving, can't be too bad in the world we live in.

