Technology is reshaping the way we live, work, play, and interact. In addition to creating new opportunities for human flourishing, artificial intelligence and other technological advances pose risks to humanity that are hard to predict.
To promote better understanding of the human-technology relationship, the Psychology of Technology Institute, in partnership with Darden’s Batten Institute for Entrepreneurship and Innovation, is convening the fourth annual New Directions in Research on the Psychology of Technology Conference. Bringing together a cross-section of scholars, industry leaders and policy experts, the conference will take place at the UVA Darden Sands Family Grounds in the Washington, D.C., area 8–9 November.
We sat down with one of the conference organizers, Darden Professor Roshni Raveendhran, whose research explores the intersection of psychology and technology, to talk about technology’s growing impact on our lives.
Q: What will be the theme of this year’s Psychology of Technology Conference?
Raveendhran: As technology moves into the workplace — changing the way we work and engage with each other — it’s important to study the interaction between technology and humanity. How does tech influence humans? How do humans influence the creation of new, interesting technologies? And this year, specifically, we’re interested in exploring automation, artificial intelligence, algorithmic decision-making, and technology and human well-being.
Q: What’s the focus of your own research?
Raveendhran: I study the future of work, and I’m interested in questions at the intersection of technology and humanity in the context of the workplace. So, some of my research explores what it means for humans to be working with novel technologies, such as AI and the various behavior-tracking products that are coming to the workplace. How do novel technologies change our behavior toward each other? For example, why might managers use technology when dealing with others in uncomfortable situations? Or, how effective is it to use AI for social support at work? I explore these questions through a psychological lens at the micro level, where we look at individual human behavior, and at the macro level, where we look at how human behaviors around technology adoption influence organizational strategy and performance.
Q: What prompted you to explore those topics?
Raveendhran: When I was in grad school, I was interested in understanding our experience of autonomy. Technology allows us to do things without the help of others and be more autonomous. For example, we used to ask people for directions, and now we have maps on our phones, and that’s great. I wanted to explore how our experiences of autonomy affect our view of technology and how technology is influencing our psychological experiences and behaviors. I believe that we should think about how to leverage technological advances to augment us as humans rather than viewing technology as our enemy. I want my research to inform society about how to use novel technologies responsibly.
Q: How are some of the behavior-tracking technologies you mentioned being used in the workplace?
Raveendhran: One of the best examples of behavior-tracking products is the smart badge. There’s a company called Humanyze that makes these badges. The badges have sensors, microphones and motion detectors, through which they measure the number of face-to-face interactions you have — and even tone of voice — and give you feedback about your social interactions. Many companies are using them in interesting ways. So, if teams are not communicating with each other, or if employees are not reaching out and learning from each other, those badges can track that and suggest connecting with someone who could be a resource. Many of these wearables can track various aspects of your behavior, including emotions, and that’s a big part of my research.
Q: What are your thoughts on government-run behavior tracking — for example, in China, the kind that results in each citizen having a “social score”?
Raveendhran: When I studied behavior tracking, I found that people are open to being tracked, but the minute they realize that there’s a human behind that technology, it comes across as evaluative rather than informational. I believe that when tracking makes people feel judged and threatened, it is being misused — especially when the intention is to punish people or to broadcast some people’s low social standing to others in the community.
Q: It’s hard to function in today’s society without giving up some of our data, but shouldn’t we at least care about what’s happening with that data?
Raveendhran: Yes, absolutely. How salient are our privacy concerns when we think about adopting really cool novel technologies? Are we thinking about those concerns consciously? I’m working on a project where we’re exploring whether people are more likely to give up privacy and data for the sake of convenience. For example, I could record myself while I’m sleeping and try to make sense of it all, but that takes considerable effort. Or, I can just wear a device and get a report on how well I’m sleeping if I give up my sleep data. But then a third-party company is getting my data. We are exploring some of these tradeoffs in this project.
Q: Some experts consider AI to be “the single most important and daunting challenge that humanity has ever faced,” to quote Oxford University’s Nick Bostrom. How do you view this challenge?
Raveendhran: Because I study how we interact with technology, what I’m most concerned about is people being too quick to adopt AI without thinking about why they want it, and without knowing much about how AI can be misused. Or, on the flip side, people being too resistant to it, because they think, “Oh, AI is bad,” and, as a result, miss the opportunities technology creates. The Facebook mishap is a big example of negligence, of technology being misused. But that same Facebook has helped connect so many families and friends.
Technology itself isn’t good or bad. The way people are using technology can be scary when they aren’t making conscious choices. So how do we nudge people to apply more conscious choices to that context? That’s a question I’m really passionate about.