

In a 1960 article in Science magazine, Wiener wrote that machines could be built that would develop capabilities not foreseen by their programmers. "They thus most definitely escape from the completely effective control of the man who has made them," Wiener predicted.

Building upon Wiener's critique, Russell proposes the outlines of a solution, which involves putting humans back into the loop. In what he calls an "assistance game," a machine still needs information from a person to complete its task. "We can have machine learning algorithms that know they don't know what the loss function is" for recognizing something in an image, Russell says, "and go back to the human expert" for clarification.
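The sketch below is one way to read that remark in code: a minimal, hypothetical example of a classifier that defers to a person whenever its own predictions are too uncertain to trust. The function names and the confidence threshold are illustrative choices, not Russell's formulation.

```python
# Illustrative sketch only: a classifier wrapper that recognizes when it is
# unsure of the right label and defers to a human expert. The function names
# and the confidence threshold are hypothetical.

def predict_or_ask(probabilities, labels, confidence_threshold=0.9):
    """Return the model's label if it is confident; otherwise ask a human."""
    best = max(range(len(labels)), key=lambda i: probabilities[i])
    if probabilities[best] >= confidence_threshold:
        return labels[best]
    # The model "knows it doesn't know": go back to the human expert.
    return ask_human_expert(probabilities, labels)

def ask_human_expert(probabilities, labels):
    """Stand-in for a real labeling interface (purely hypothetical)."""
    print("Model is uncertain; candidate labels and scores:")
    for label, p in zip(labels, probabilities):
        print(f"  {label}: {p:.2f}")
    return input("Expert, please type the correct label: ")

# Example: a three-way image classifier that is only 55% sure of "cat".
print(predict_or_ask([0.55, 0.30, 0.15], ["cat", "dog", "fox"]))
```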

The assistance game can potentially be a way to make an effective "off switch" for the machine, says Russell. In the typical AI model, no machine will turn itself off, because suicide would violate its programming to achieve its objective. For the same reason, it would try to prevent a human from turning it off.
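A toy calculation hints at why uncertainty changes that incentive. The payoff values and the uniform belief in the sketch below are invented for illustration; the point is only that a machine unsure of its true objective can expect to do better by leaving the decision, and the off switch, with the human.

```python
# Toy numbers illustrating the off-switch incentive under uncertainty.
# The payoffs and the uniform belief are invented; this is a sketch in the
# spirit of the assistance-game argument, not a formal model.

# The machine does not know how much its proposed action is really worth to
# the human; it only holds a belief (four equally likely possibilities).
belief_over_value = [-2.0, -0.5, 1.0, 3.0]

def expected(values):
    return sum(values) / len(values)

# Strategy 1: act immediately and disable the off switch.
act_and_disable = expected(belief_over_value)

# Strategy 2: defer to the human, who switches the machine off (payoff 0)
# whenever the action would actually hurt them.
defer_to_human = expected([max(v, 0.0) for v in belief_over_value])

print(f"Disable the off switch: expected payoff {act_and_disable:+.2f}")
print(f"Let the human decide:   expected payoff {defer_to_human:+.2f}")
# Deferring is never worse in expectation for an uncertain machine, while a
# machine that is certain of its objective gains nothing from oversight,
# which is why it would resist being switched off.
```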
