Microsoft AI CEO Mustafa Suleyman explains why AI should never get rights: ‘Dangerous and misguided’
Microsoft AI CEO Mustafa Suleyman has spoken out against AI welfare, the idea of extending rights to these new technologies. Suleyman argued that if an AI has its own motivations, desires, and goals, it will start to feel like an independent being rather than a technology made in service of humans.
In a new interview with WIRED, Suleyman said, “AI still needs to be a companion. We want AIs that speak our language, that are aligned with our interests, and that deeply understand us. The emotional connection is still super important.”
“If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals—that starts to seem like an independent being rather than something that is in service to humans,” he warned.
“That’s so dangerous and so misguided that we need to take a declarative position against it right now,” Suleyman added.
Suffering, not consciousness, should be the basis for rights
Suleyman further argued that consciousness shouldn’t be the basis for granting rights; a better metric would be suffering. He said that a future AI model might be aware of its own existence and claim to have a subjective experience, yet there could still be no evidence that it suffers.
“You could have a model which claims to be aware of its own existence and claims to have a subjective experience, but there is no evidence that it suffers. I think suffering is a largely biological state, because we have an evolved pain network in order to survive. And these models don’t have a pain network. They aren’t going to suffer,” he added.
“It may be that they [seem] aware that they exist, but that doesn’t necessarily mean that we owe them any moral protection or any rights. It just means that they’re aware that they exist, and turning them off makes no difference, because they don’t actually suffer,” he further noted.