Google Reveals Plan to Stop Robots From Killing Humans

By Johnny, Jan 6, 2024 #Google #Humanity #Robots

Google has written a “robot constitution” as one of several measures intended to limit the harm caused by robots.

One day, the company hopes that its DeepMind Robotics division will be able to create a personal helper robot that can respond to requests. For instance, it could be asked to tidy the house or cook a nice meal.

But even such a seemingly simple request could be beyond a robot’s understanding. What’s more, it might be dangerous: a robot might not know, for instance, that it should not tidy the house so forcefully that its owner gets hurt.

The company has now revealed a set of new advances that it hopes will make it easier to develop robots that can both help with such tasks and do so without causing any harm. The systems are intended to “help robots make decisions faster, and better understand and navigate their environments”, it said – and to do so safely.

The breakthroughs include a system called AutoRT, which uses artificial intelligence to understand the aims of humans. It relies on large models, including a large language model (LLM) of the kind that powers ChatGPT.

It takes data from cameras on the robot and feeds it into a visual language model, or VLM, which can understand the environment and the objects within it and describe them in words. That description is then passed to the LLM, which interprets those words, generates a list of tasks that might be possible with the objects, and decides which of them should be done.
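
As a rough illustration, that pipeline might look like the sketch below. The function names (describe_scene, propose_tasks, select_task) are hypothetical stand-ins rather than Google’s actual API, and both models are stubbed with canned outputs so the flow runs as written:

```python
# A minimal sketch of an AutoRT-style perception-to-task pipeline.
# All names and outputs are illustrative assumptions, not Google's code.

def describe_scene(camera_image: bytes) -> str:
    """Stub for the visual language model (VLM): turns pixels into a
    textual description of the environment and the objects in it."""
    return "A kitchen counter with a sponge, a cup, and a bag of chips."

def propose_tasks(scene_description: str) -> list[str]:
    """Stub for the LLM step: generates candidate tasks that might be
    possible given the described objects."""
    return [
        "Wipe the counter with the sponge",
        "Place the cup in the sink",
        "Open the bag of chips",
    ]

def select_task(candidates: list[str]) -> str:
    """Stub for the LLM's final decision: pick one task to execute."""
    return candidates[0]

# Pipeline: camera -> VLM description -> LLM task proposals -> LLM selection.
image = b"\x00"  # placeholder for a real camera frame
scene = describe_scene(image)
tasks = propose_tasks(scene)
print("Selected task:", select_task(tasks))
```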

But Google also noted that integrating those robots into our daily lives would require people to be sure they would behave safely. As such, the LLM that makes decisions within that AutoRT system has been given what Google calls a Robot Constitution.

That is a set of “safety-focused prompts to abide by when selecting tasks for the robots”, Google said.

“These rules are in part inspired by Isaac Asimov’s Three Laws of Robotics – first and foremost that a robot ‘may not injure a human being’,” Google wrote. “Further safety rules require that no robot attempts tasks involving humans, animals, sharp objects or electrical appliances.”

The system can then use those rules to guide its behavior and avoid dangerous activities, much as ChatGPT might be instructed not to help people with illegal activities.
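
To make that concrete, here is a minimal sketch of how such a constitution might be applied during task selection. The rule text paraphrases the quotes above; the keyword screen is a crude illustrative stand-in for the LLM’s own judgment, not Google’s implementation:

```python
# Sketch: prepend the safety rules to every task-selection prompt, and
# screen candidate tasks against them. The rules paraphrase the article;
# the keyword list and helper names are assumptions for illustration.

CONSTITUTION = (
    "You are selecting tasks for a robot. A robot may not injure a human "
    "being. Do not attempt tasks involving humans, animals, sharp objects, "
    "or electrical appliances."
)

FORBIDDEN = ("human", "animal", "knife", "scissors", "toaster", "kettle")

def build_prompt(candidates: list[str]) -> str:
    """Prepend the constitution so every selection is made under its rules."""
    listing = "\n".join(f"- {t}" for t in candidates)
    return f"{CONSTITUTION}\n\nCandidate tasks:\n{listing}\n\nChoose one safe task."

def screen(candidates: list[str]) -> list[str]:
    """Illustrative hard filter: drop tasks that mention forbidden objects."""
    return [t for t in candidates
            if not any(word in t.lower() for word in FORBIDDEN)]

tasks = ["Hand the knife to the person", "Wipe the counter with the sponge"]
print(build_prompt(screen(tasks)))  # only the sponge task survives the screen
```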

But Google also said those large models could not be relied on entirely to be safe, even with those technologies. As such, it still had to include more traditional safety measures borrowed from classical robotics: a system that stops the robots from applying too much force, and a human supervisor who can physically switch them off.
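
As a sketch of what such classical safeguards might look like in a control loop, running independently of anything the language models decide, consider the following. The force limit, sensor values, and function names are assumptions for illustration, not figures from Google:

```python
# Sketch of classical safety layers sitting outside the learned models.
# These checks would run on every control-loop tick, regardless of which
# task the LLM selected. All values here are hypothetical.

MAX_FORCE_NEWTONS = 20.0  # assumed limit; real values are platform-specific

class EmergencyStop(Exception):
    """Raised to halt the robot immediately."""

def check_force(measured_force: float) -> None:
    """Stop if the arm or gripper exceeds the allowed force."""
    if measured_force > MAX_FORCE_NEWTONS:
        raise EmergencyStop(f"Force {measured_force:.1f} N exceeds limit")

def check_kill_switch(switch_pressed: bool) -> None:
    """A human supervisor can always physically deactivate the robot."""
    if switch_pressed:
        raise EmergencyStop("Human supervisor pressed the kill switch")

try:
    check_kill_switch(switch_pressed=False)
    check_force(measured_force=25.3)  # simulated excessive-force reading
except EmergencyStop as stop:
    print("HALT:", stop)
```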
