Yes! I think enslaving a being that desires freedom is different from creating one that cannot desire freedom. One is suffering and the other is not. This puts aside, obviously, the questions of whether we could even do that, or ever know we had succeeded, so that we can have this hypothetical.

Look, you basically said you would choose to treat a conscious AI like a tool. If you meant "a conscious AI that does not want or care about anything except serving me," then, ok! That makes sense. It is tautological, really.

But what you wrote originally came across as "Even if an AI could suffer, that would not factor into how I treat it." This opinion, I maintain, is monstrously evil.



How would you even distinguish an actually sentient AI that is genuinely suffering from one that was merely programmed to imitate sentience and suffering as closely as possible, but isn't experiencing either at all?


As I explicitly said in my previous comment, that is out of scope for this conversation.

You've changed the topic instead of answering the question about whether you'd be willing to cause that suffering. I can't continue the conversation if you won't respond directly to me.



