Surveying the Tech Ethics Minefield: How Purpose-Driven Leaders Can Get Us Through It
The stock trading platform Robinhood launched in 2015 with a noble mission: to make stock trading more accessible to the average Joe. With no trading fees and a friendly user interface, it appealed to retail investors with modest portfolios, especially millennials who flocked to the app with fierce loyalty. When a group of investors on Reddit, however, conspired to drive up the price of GameStop stock this past January — causing hedge funds that had shorted the stock to hemorrhage millions — Robinhood made the tricky decision to halt trading on the stock. A massive backlash by its users followed, including cancelled accounts, lawsuits and congressional hearings.
“On the face of it, it feels like they reneged on their brand promise,” says Professor Bobby Parmar. “On the one hand, they want to support the idea that anybody can trade, but on the other hand, they don’t want to upset powerful investment companies—so they got stuck between a rock and a hard place.”
Robinhood’s crisis is only the latest example of a company dealing with a complicated ethical issue caused by the unintended consequences of technology.
“Managers are wrestling with all kinds of difficult moral issues that show up in the context of new technology and innovation,” says Parmar. Uber drivers can assault their passengers. Airbnb hosts might discriminate by not renting to people of color. Twitter can allow rampant misinformation to spread unchecked.
“Anyone with an Internet connection can write whatever they want and send it to their 500,000 followers, and we haven’t figured out a way to verify its accuracy,” says Professor Jared Harris. Add to that the technological advancements that facilitate deep fakes (doctored photos and videos that look real), and the ethical landscape grows only murkier, as it becomes harder and harder to verify what is actually true.
“We’re headed rapidly to a world where bad actors can cook up fake videos that are nearly impossible to distinguish from real ones, and what does that world look like?” Harris asks.
Part of the way ethics always works, says University Professor Ed Freeman, is through constant re-evaluation as circumstances change and new groups push for recognition. “We have some principles and rules, and we have a case, and we apply the principles to solve the case,” he says. “Now, what happens when we get some new cases?” Technology is one way new cases arise, pitting two principles against each other—say, people’s right to own their intellectual property versus Google Books’ ability to democratize the sharing of information. “That gives rise to the need to re-engage the adjustment of principles and cases,” Freeman says.
Philosopher John Rawls called that process of adjustment reflective equilibrium, and it gives rise to the need to create a new set of beliefs. “A new situation arises, and we have to create new values, because it turns out our forebears didn’t always think about equality of all groups, or what happens if technology allows us to spy on each other in new ways.”
The Problem: Most Companies Are Not Set Up for Reflective Thought
“Silicon Valley for a long time had this motto, ‘move fast and break things,’” says Parmar. Industries such as pharma may follow deliberate processes to ensure they do no harm, but tech companies are often in a race to ship a product as quickly as possible, consequences be damned. “Because the dominant story in business is about profit and money, a lot of startups get into trouble when they’re not really testing and understanding the larger impacts of their technology,” Parmar says.
That haste keeps companies from performing “pre-mortems” to think through problems that may surface weeks or months down the line. Instead, they are often caught flat-footed when problems arise, inviting consumer backlash and opening the door to competitors. “When women didn’t feel safe riding in Ubers, it opened up an opportunity for companies like See Jane Go, which was basically Uber for women by women,” says Parmar, “which is a brilliant business model that solved the moral problem for Uber.”
Parmar and Freeman suggest two steps for innovation companies to do a better job of thinking ahead:
- Realize that every business model has moral principles baked into it, whether or not the company recognizes them.
- Know that while it’s impossible to predict all outcomes for new technologies, it’s important to engage with all stakeholders—including customers, employees, investors, suppliers, and communities—in order to better predict what might go wrong.
“You have to be engaged, not just with people inside your company but people outside your company,” says Freeman, a founding pioneer of stakeholder theory. Leaving any of those constituents behind can be disastrous, he says.
Look no further than Amazon, which perfected the art of getting products to customers fast but is now having to send tweets insisting its employees aren’t peeing in bottles because they don’t get bathroom breaks. “Amazon is the most customer-friendly company in the galaxy,” says Freeman. “But they haven't yet figured out how to take care of their own employees.”
Predicting the impact on all stakeholders requires a diversity of perspectives, Parmar adds. “If everybody who’s on the team is thinking the same or reflecting a certain customer demographic, that’s a problem,” says Parmar, citing Airbnb’s issues with racial discrimination. “It makes you wonder: If their teams were more diverse, would that have come up sooner?”
Transparency Is Key to Stemming Ethical Dilemmas When Introducing New Technology
Another essential aspect of introducing new technology is transparency about the way it will be used, argues Professor Roshni Raveendhran. She studies “the future of work,” including companies’ use of technologies such as smart badges and computer-tracking software that closely monitor employee behavior and productivity.
“If deployed without understanding the psychology of how those technologies might affect people, they might actually lead to more problems than good,” she says. “It’s almost always the case that people don’t have a good idea about what data is being collected about them, how it’s being used or how it might be used against them.”
In research published earlier this year in the journal Organizational Behavior and Human Decision Processes, Raveendhran found that many people were open to being tracked by such technologies—so long as the evaluation was conducted by computer algorithms. “We love feedback about our behavior and knowing how we might actually manage our time differently,” she says. That acceptance diminished, however, if they thought their data was being reviewed by human supervisors.
Raveendhran suggests that a better use of such tracking technology is to provide it to employees themselves, so they can use it to improve their own performance ahead of reviews. “This can be one of the biggest ways in which employers can motivate employees and make them feel responsible for their own future in an organization,” she says.
On a broader level, she sees a similar phenomenon in wearable devices such as Fitbit and smart home devices such as Nest—both companies recently acquired by Google—which collect users’ sensitive personal data. “They tell Google how much we walk, how we sleep, how we eat—a lot of personal things companies never had access to before,” Raveendhran says. Privacy concerns spurred some consumers to stop using the devices after the Google acquisitions. “We are expecting companies to behave in a responsible manner, and many times they do, but many times they don’t.”
Much as with work-tracking data and employees, she recommends that companies empower consumers by being clear with them about how their data will be used—and letting them make up their own minds about whether the trade-off is worth it. “Those five-page privacy notices nobody ever reads are almost a deterrent to know more. Give the power to people by explaining in plain language what will happen to their data,” she says. “Then at least you know you have their consent in an open and honest manner.”
Theranos: A Case Study at the Intersection of Tech, Ethics and Leadership
Transparency of a different sort was at issue in a recent case study Harris wrote about the company Theranos, a startup run by Silicon Valley wunderkind Elizabeth Holmes that promised to revolutionize blood testing by performing 30 different tests on a single drop. The only problem was that the technology didn’t work; Holmes is now on trial for fraud.
“It’s an example of how the nature of technology itself can make it difficult for funders or board members to verify the science,” says Harris. “But there was a lot of hoopla with Holmes on the cover of Fortune in a black turtleneck. Technology we don’t understand takes on a magical quality, and people simply want to believe in it.”
In the case of Theranos, the very complexity of the technology allowed Holmes to carry on the charade for years, as the company kept racking up funding. “If we can’t peek inside this black box, or if we don’t have the technical expertise to verify what’s really going on, people often simply assume that everything must be legitimate. In addition to all the other ethical challenges introduced by technology, the complexity itself creates the opportunity to pull the wool over everybody’s eyes,” says Harris, who tells the story through Tyler Schultz, a scientist-turned-whistleblower who eventually exposed the company. The case, which Schultz worked on with Harris and his collaborators, explores the challenges Schultz faced in getting the word out in a world in which many of the scientists knew the technology was bogus but feared reprisals if they spoke out, while many of the “suits” and investors were clueless.
“You see the challenge someone like Tyler has in getting the ethical issue exposed, given all the forces rallied against him,” Harris says. “It’s a cautionary tale of how much can go wrong if someone is willing to push the fraud forward.” Snake oil salesmen promising illusory benefits are nothing new, but in this case, he says, the “technology creates these ethical problems that allow the salesman to sell the snake oil a little easier.”
Theranos represents an extreme case of Silicon Valley culture, but the competitive landscape often pushes companies to cut ethical corners in the race to market. Such haste is short-sighted, however, says Freeman. “This story that it’s a dog-eat-dog competitive world and we’ve got to move fast is just BS,” says Freeman. “The world waits just fine for good stuff.”
Rather than rushing a product to market, Parmar stresses the importance of learning everything one can before launching. “You have to learn about the impacts on stakeholders, and empathize with them,” he says. “Only then can you have the confidence to take risks.”
That empathy, adds Freeman, helps develop the best product in the long run. “Stakeholders are human beings. They have children, they have pets, they hurt, they love,” he says. “We need to understand our full humanity here. And if we do, by the way, there are a lot more business opportunities to be had.”