Recently, ICTC spoke with Dr. AJung Moon, an experimental roboticist. Dr. Moon is currently an Assistant Professor in the Department of Electrical & Computer Engineering at McGill University, where she investigates how robots and AI systems influence the way people move, behave, and make decisions in order to inform how we can design and deploy such autonomous intelligent systems more responsibly. She also has a background in start-ups and advising organizations such as the UN. In this conversation, as part of ICTC’s Technology and Human Rights Series, Kiera and Dr. Moon discuss robot ethics, AI ethics, and lessons from the international arena.
Could you briefly explain what ‘robots’ are in your work, and what these robots are used for?
Of course. So, I don’t work with terminators or robots that are purposefully built to kill. By robots, I mean embodied, physical objects that can sense something about our physical environment, process and compute something about the signals they have sensed, and then act within that physical environment to change it. Some people say that bots on web browsers — the algorithmic things that automate specific functions — are robots as well, but I am specifically focused on the physical domain when I talk about robots.
One topic you research is human-robot interaction, such as human-robot collaboration, nonverbal communication, and human-robot negotiation using motions/gestures. Could you talk a bit about how robots and humans interact? What are some major questions that you are working on?
If anyone has visited an automotive factory within the past few decades, they’re likely familiar with the huge robotic arms that perform repetitive functions and can be “on” 24/7. In a way, I work with those types of robotic arms but on a much smaller scale and in much safer physical interactions. The idea is to work with industrial robots that are designed to safely interact physically with people, so that you don’t need the safety curtains that manufacturing facilities typically have. Essentially, it means you can envision a person assembling a particular part with a robot, both holding onto the same object at the same time. I also look at robots that are a little more human-like: they might have two arms or a head-like feature with cameras, and/or they can move across the floor.
In human-robot interaction, we look at questions about designing robots to better interact with us, such as, How do you get a robot to pick up a water bottle and hand it over to a person in a safe and clear manner? When a robot hands you something, it should be very clear when you are supposed to take it from the robot. Between humans, this seems trivial because we pick up on each other’s gaze cues, ways we move our hands, etc., to figure out these details of everyday tasks. But for robots, we have to program every single feature.