
Scientists gave this robot arm a "self-image" and watched it learn



Naturally, a robot with a sense of its own body is better at picking things up

Scientists took a robot arm and programmed it to model itself "in its own image" (Kwiatkowski and Lipson, Sci Robot 4, eaau9354 (2019)).

In The Matrix, Morpheus tells Neo that his digital appearance is based on his "residual self-image." In other words, the characters look the way they imagine they look, based on their own mental models of themselves.

In the real world, scientists are trying to give robots self-images too. Unlike the Matrix's combat machines, a real robot with an accurate self-image could be genuinely useful to humanity. It would allow faster programming and more accurate motion planning, and it would help a machine diagnose itself when something goes wrong. It could even help a robot adapt to any damage it sustains.

Now a team of scientists at Columbia University says it has given a robot arm exactly that kind of self-image, and with it a new capacity for learning. Their research is published in the journal Science Robotics.

The robot arm's self-image

The paper is surprisingly readable, and its abstract sums up the whole story: the robot modeled itself without prior knowledge of physics or of its own shape, then used that self-model to perform tasks and detect damage to itself. (It sounds like the pitch for a Netflix series … we'd watch it!)

The scientists took a standard off-the-shelf robotic arm, a model that goes by the name WidowX, and set it the task of picturing itself. The arm moved through more than 1,000 random trajectories while essentially watching what happened: how particular motions played out, what was possible, what didn't work, all of it. The authors even compare this to a person first learning the use of their own limbs, writing that the step is not unlike a babbling infant observing its hands.
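
To make the idea concrete, here is a rough Python sketch of what that kind of random "motor babbling" data collection might look like. The arm interface, joint ranges, and function names are illustrative assumptions of ours, not code from the paper.

    import random

    def random_joint_command(num_joints=4):
        # One random target angle per joint, in radians (range is an assumption).
        return [random.uniform(-1.5, 1.5) for _ in range(num_joints)]

    def collect_babbling_data(arm, num_trajectories=1000, steps_per_trajectory=10):
        # 'arm' is a hypothetical interface with reset() and step(action) methods.
        dataset = []
        for _ in range(num_trajectories):
            state = arm.reset()
            for _ in range(steps_per_trajectory):
                action = random_joint_command()
                next_state = arm.step(action)
                # Record (state, action, outcome): the raw material for a self-model.
                dataset.append((state, action, next_state))
                state = next_state
        return dataset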

Armed with all of that data, the robot used deep learning to create a self-image, that is, an accurate model of itself. The earliest generated models were wildly off the mark, but after about 34 hours of training the self-model was accurate to within about 4 centimeters. That was good enough to let the arm become adept at picking up small balls and moving them around, a typical stand-in for robotic dexterity. The self-image was so good that, without any further training, the robot could take on a completely different task: writing out a word with a marker. (The arm writes "hello," by the way.)
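
What does "using deep learning to build a self-image" look like in code? Roughly, a neural network that learns to predict how the arm's state changes when a command is applied. The sketch below uses PyTorch; the layer sizes, dimensions, and training loop are our own illustrative choices, not the architecture from the paper.

    import torch
    import torch.nn as nn

    STATE_DIM, ACTION_DIM = 4, 4  # assumed joint-space dimensions

    def build_self_model():
        # A small forward model: (current state, commanded action) -> predicted next state.
        return nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, STATE_DIM),
        )

    def train_self_model(model, dataset, epochs=100, lr=1e-3):
        # 'dataset' is a list of (state, action, next_state) tuples of floats,
        # e.g. the output of collect_babbling_data() above.
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        states = torch.tensor([s for s, _, _ in dataset], dtype=torch.float32)
        actions = torch.tensor([a for _, a, _ in dataset], dtype=torch.float32)
        targets = torch.tensor([n for _, _, n in dataset], dtype=torch.float32)
        inputs = torch.cat([states, actions], dim=1)
        for _ in range(epochs):
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
        return model

Once a model like this predicts outcomes reliably, the arm can plan a movement by searching for commands whose predicted results accomplish the task at hand, which is, roughly speaking, how one self-model can be reused for both ball-moving and writing.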

Bigger things for robots

Then, to simulate a sudden injury or minor damage, the scientists replaced the robot's arm with one that was slightly longer and deformed. The machine quickly updated its self-image to account for the new situation and was soon back to performing the same tasks with the same precision.
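
The adaptation step can be sketched the same way: collect a fresh batch of babbling data with the altered arm and keep training the existing model rather than starting over. This reuses the hypothetical helpers from the sketches above and is, again, only an illustration of the idea, not the authors' code.

    def adapt_to_damage(arm, model, num_trajectories=100):
        # After the arm is physically altered, gather new observations...
        new_data = collect_babbling_data(arm, num_trajectories=num_trajectories)
        # ...and fine-tune the same network so its predictions match the changed body.
        return train_self_model(model, new_data, epochs=50)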

Overall, the authors make a convincing case that getting robots to build accurate self-images may be the best route to accurate, self-diagnosing, efficient machines. "Self-imaging will be key to allowing robots to move away from the confines of so-called narrow AI toward more general abilities," they write. Then they go a little further: "We conjecture that this separation of self and task may also be the evolutionary origin of self-awareness in humans."

Which is all well and good, just as long as we stop before our machines end up too much like yours, Matrix.

