Following up on my last blog post, where I wrote about the workplace developments I’m looking forward to in 2019, here’s the next one on my list. It’s a rich, meaty topic with room for many voices: creating a code of ethics in technology.
Vox published a story this year about how a subset of ethics-conscious employees at tech companies like Google, Amazon and Microsoft are demanding input on the technology they help build. Google employees pushed for a company policy that Google would not use AI to make “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
One concern tech employees had was government contracts in which the technology is used to violate basic human rights. For a worst-case example, look to history: in 2001, investigative journalist Edwin Black published a book detailing how American business giant IBM contracted with Nazi Germany, developing a punch-card system that helped facilitate the Nazi genocide.
Another example: In November, a group of Google employees published an open letter about a search engine code-named “Dragonfly.” The employees were concerned that the Chinese state could use it to ramp up surveillance efforts and violate the rights of Chinese citizens. Amnesty International voiced similar concerns, saying that if Google went along with the plan, it would set a “dangerous precedent for tech companies enabling rights abuses by governments,” Time reported.
Armen Berjikly, senior director of growth strategy at Ultimate Software, spoke to me at the SHRM Chicago chapter’s Think Fast conference in October, and he also had some interesting things to say about ethics in tech. He said he’d like more companies to build a code of ethics around artificial intelligence.
Said Berjikly: “If you look at the opportunity with artificial intelligence, rather than saying, let’s just see what happens, I’d much rather say, here are the things that we expect of it, here are the things that we want of it, and here’s the things we won’t allow it to do.”
Looking at these instances in a more macro sense, they raise a few big questions: Are people who are just doing their jobs responsible for the negative impacts of the technology they create? What happens when one group of employees’ ethics surrounding technology clashes completely with another group’s? How do people and organizations decide what they will allow technology to do and what they won’t allow it to do?
Meanwhile, another voice in tech also shared some thoughts on this topic.
Ankit Somani, co-founder of AllyO, said that guidelines can stifle innovation. He took guidelines to mean rules that are enforced by the government and that apply the same way to every company. A policy like this would disregard the fact that different companies have different cultures, he said.
Instead, if there were an ethical code for tech companies, any guidelines that applied to every company would have to be “the lowest common denominator possible,” he said. From there, individual companies could create their own internal regulations. These regulations would likely be very employee-driven, he added, since the employees entering the workforce now tend to be mission-driven and value-driven. They question things going on at the company they work at, and companies can use that as part of their decision-making.
Companies should be thinking about ethics, Somani said, adding that he would not want to brush off the importance of that. One way for companies to incorporate ethics is by making it a part of their product development process.
“Each company, especially bigger companies, when they’re working on a technology or putting something out, they should have not just a team of legal and privacy experts trying to follow the laws and get something out there that meets basic qualifications, [but also] an internal body which informs product development right from the get-go, not as an afterthought,” Somani said.
This would be the opposite of what happened at Google this fall. After bidding on the $10 billion JEDI contract with the Department of Defense, the company eventually dropped out under pressure from thousands of employees who said they’d refuse to work on it. The contract, employees said, would violate the company’s ethics policy.
Something like this is a good learning experience, Somani said. It would have saved the company a lot of time if it had been upfront with employees from the beginning and never made the bid in the first place.
What do you think? Do tech companies need to develop a code of ethics? If your company has one, what are some of the stipulations that your employees and organizational leadership pushed for?