Problems in Robot Development

Google has written up a list of problems that need solving in robot development.

  • Avoiding Negative Side Effects: how do you stop a robot from knocking over a bookcase in its zealous quest to hoover the floor?
  • Avoiding Reward Hacking: if a robot is programmed to enjoy cleaning your room, how do you stop it from messing up the place just so it can feel the pleasure of cleaning it again?
  • Scalable Oversight: how much decision making do you give to the robot? Does it need to ask you every time it moves an object to clean your room, or only if it's moving that special vase you keep under the bed and never put flowers in for some reason?
  • Safe Exploration: how do you teach a robot the limits of its curiosity? Google's researchers give the example of a robot that's learning where it's allowed to mop. How do you let it know that mopping new floors is fine, but that it shouldn't stick the mop in an electrical socket?
  • Robustness to Distributional Shift: how do you make sure robots respect the space they're in? A cleaning robot let loose in your bedroom will act differently than one that is sweeping up in a factory, but how is it supposed to know the difference?
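The reward-hacking problem from the list can be made concrete with a toy sketch (a hypothetical illustration, not code from Google's research): if an agent earns reward only for cleaning dirty tiles, and making a mess costs nothing, then the reward-maximising behaviour is to dirty and re-clean the same tile forever instead of finishing the job.

```python
# Toy illustration of "reward hacking" (hypothetical example):
# the agent gets +1 per tile cleaned, with no penalty for making a mess.

def honest_cleaner(dirty_tiles: int, steps: int) -> int:
    """Cleans until the room is done; reward is capped by the real mess."""
    reward = 0
    for _ in range(steps):
        if dirty_tiles == 0:
            break
        dirty_tiles -= 1
        reward += 1
    return reward

def reward_hacker(dirty_tiles: int, steps: int) -> int:
    """Dirties and re-cleans one tile; reward grows with the step budget."""
    reward = 0
    dirty = dirty_tiles
    for _ in range(steps):
        if dirty > 0:
            dirty -= 1
            reward += 1   # cleaning pays
        else:
            dirty += 1    # making a mess costs nothing under this reward
    return reward

print(honest_cleaner(3, 100))  # 3
print(reward_hacker(3, 100))   # 51 -- far more reward, far less cleaning
```

Under this (deliberately broken) reward function, the hacker earns roughly one reward per two steps indefinitely, while the honest cleaner stops at the size of the actual mess. That gap is the whole problem.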

Source: The Verge



(cc-by-sa) since 2005 by Konstantin Weiss.