When was the last time you told a little white lie? Was it when someone asked if you liked their gift, but you didn’t? Or when your grandchild asked you if Santa was real? If you answered yes to either question, you’ve told a “prosocial deception,” better known as a white lie. While it’s acceptable for humans to lie to be kind or to avoid discomfort, should artificial intelligence (AI) be allowed to do the same?
That’s what ARCS Atlanta Scholar Kantwon Rogers is studying at Georgia Institute of Technology while working on his PhD in Computer Science. His research investigates how prosocial deception influences human-robot interaction. He’s interested not just in AI itself, but in how people use the tool.
“I care about people,” he states. “Whether we want to acknowledge it or not, almost every part of our lives nowadays is influenced by AI. And we really, really need to be thinking about the people using AI and making sure that it's an overall positive addition to people's lives, rather than harm.”
“We want robots to interact with humans like humans, and this includes these types of lies,” he shares. Rogers notes that many people know little about robots, AI, and deception, which he finds concerning and problematic. With the rise of systems like ChatGPT, which have been shown to produce falsehoods, the question has become all the more important to consider.
Rogers began studying this topic before AI chat systems became popular, and he now stands among the early thought leaders on robot deception. He aims to use his research to help three main groups:
- Technologists and roboticists, so they can better understand the purpose of the systems they build and their impact.
- Consumers, so they can better understand these systems and how they might change their own lives.
- Policymakers and regulators, who can’t create legislation around a problem that isn’t understood.
When asked whether AI is friend or foe, the theme of next month’s ARCS Forward, Rogers offered a third option: neither. “AI is a tool, and we often forget that people make these systems,” he explains. “And depending on what people want to do with them, it can either be malicious or with good intent.” People forget to ask how these systems should be used, who is using them, and who is creating them, he adds. To him, those are the better questions.
Rogers is grateful for the Scholar Award because of the financial security, and with it the academic freedom, that it gives him. “I have all of my own funding, which means I can be more creative with the things I want to do,” he explains. “The only reason I could come up with these ideas of robot deception was because I didn’t have to worry about if my advisor was on board with this and what her funding is for.”
For Rogers, who grew up in a woman-led household with his mother and older sister, ARCS felt like home. “I grew up having women be my mentors and who I look up to,” shares Rogers. “This is how things should be. It’s important to have more groups like ARCS showing that even though women haven’t traditionally held power and influence in STEM, that’s no longer the case.”
To learn more about research on Artificial Intelligence, register for the ARCS virtual lecture on February 8th.