The Ethics of Artificial Intelligence in the Workplace

By Rob Carpenter

Aug. 7, 2019

Artificial intelligence is a branch of computer science dealing with the simulation of intelligent behavior in computers or the capability of a machine to imitate intelligent human behavior.

Despite its nascent nature, the ubiquity of AI applications is already transforming everyday life for the better.

Whether discussing smart assistants like Apple’s Siri or Amazon’s Alexa, applications for better customer service or the ability to utilize big data insights to streamline and enhance operations, AI is quickly becoming an essential tool of modern life and business.

In fact, according to statistics from Adobe, only 15 percent of enterprises use AI today, but 31 percent are expected to add it over the coming 12 months, and the share of jobs requiring AI skills has increased by 450 percent since 2013.

Leveraging clues from their environment, artificially intelligent systems are programmed by humans to solve problems, assess risks, make predictions and take actions based on input data.

Cementing the “intelligent” aspect of AI, advances in technology have led to the development of machine learning, which enables systems to make predictions or decisions without being explicitly programmed to perform the task. With machine learning, algorithms and statistical models allow systems to “learn” from data and make decisions by relying on patterns and inference instead of specific instructions.
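To make that distinction concrete, here is a minimal sketch, using toy data invented for illustration and the widely used scikit-learn library, of a model inferring a decision rule from labeled examples rather than from hand-written instructions:

```python
# A minimal sketch of "learning from data": the model receives no explicit
# rules, only labeled examples, and infers the pattern on its own.
# The numbers below are toy data invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Hours of weekly product usage -> whether the customer churned (1) or stayed (0).
X_train = [[1], [2], [3], [10], [12], [15]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)        # "learn" the pattern from the examples

print(model.predict([[2], [11]]))  # infer labels for unseen inputs, e.g. [1 0]
```

No rule in the code ever says “low usage means churn”; the model derives that pattern from the data, which is exactly what separates machine learning from explicitly programmed behavior.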

Unfortunately, the possibility of creating machines that can think raises myriad ethical issues. From pre-existing biases used to train AI to social manipulation via newsfeed algorithms and privacy invasion via facial recognition, ethical concerns are cropping up as AI continues to expand in importance and utilization. These concerns highlight the need for serious conversation about how we can responsibly build and adopt these technologies.

How Do We Keep AI-Generated Data Safe, Private and Secure?

As an increasing number of AI-enabled devices are developed and utilized by consumers and enterprises around the globe, the need to keep those devices secure has never been more important. AI’s increasing capabilities and utilization dramatically increase the opportunity for nefarious uses. Consider the dangerous potential of autonomous vehicles and weapons like armed drones falling under the control of bad actors.

As a result of this peril, it has become crucial that IT departments, consumers, business leaders and the government fully understand the cybercriminal strategies that could lead to an AI-driven threat environment. If they don’t, maintaining the security of these traditionally insecure devices and protecting an organization’s digital transformation becomes a nearly impossible endeavor.

How can we ensure safety for a technology that is designed to learn how to modify its own behavior? Developers can’t always determine how or why AI systems take various actions, and this will likely only grow more difficult as AI consumes more data and grows exponentially more complex.

For example, should law enforcement be able to access information recorded by AI devices like Amazon’s Alexa? In late 2018, a New Hampshire judge ordered the tech giant to turn over two days of Amazon Echo recordings in a double murder case. However, legal protections around this type of privacy-invading software remain unclear.

How Should Facial Recognition Technology Be Used?

The latest facial recognition applications can detect faces in a crowd with amazing accuracy. As such, applications for identifying criminals and determining the identity of missing people are growing in popularity. But these solutions also draw substantial criticism regarding legality and ethics.

People shouldn’t have to worry that law enforcement officials will improperly investigate or arrest them because a poorly designed computer system misidentified them. Unfortunately, this is becoming a reality, and the consequences of inaccurate facial recognition surveillance could turn deadly.

According to a 2017 blog post, Amazon recommended an 85 percent confidence threshold for its facial recognition system, Rekognition, and raised that recommendation to 99 percent not long after. Yet studies from the ACLU and MIT revealed that Rekognition had significantly higher error rates in determining the demographic traits of certain members of the population than Amazon purported.
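To illustrate what that threshold actually does, here is a sketch of a face-comparison call using Amazon’s boto3 SDK with the stricter 99 percent recommendation applied; the image filenames are hypothetical placeholders, not real data:

```python
import boto3

# Sketch: comparing two faces with Rekognition while applying the
# 99 percent similarity threshold Amazon later recommended.
# "probe.jpg" and "mugshot.jpg" are placeholder filenames.
client = boto3.client("rekognition")

with open("probe.jpg", "rb") as f:
    probe = f.read()
with open("mugshot.jpg", "rb") as f:
    candidate = f.read()

response = client.compare_faces(
    SourceImage={"Bytes": probe},
    TargetImage={"Bytes": candidate},
    SimilarityThreshold=99,  # matches below 99 percent are not returned
)

for match in response["FaceMatches"]:
    print(f"Possible match at {match['Similarity']:.1f}% similarity")
```

Note that the threshold only filters which matches the API returns; it does not change the underlying model, which is why a caller who lowers it will see far more, and far less reliable, matches.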

Beyond accuracy (and the lack thereof in many cases), the other significant issue facing the technology is the potential abuse of its implementation: the “big brother” aspect.

In order to address privacy concerns, the U.S. Senate is reviewing the Commercial Facial Recognition Privacy Act, which would require companies to inform users before facial recognition data is acquired. This is in addition to Illinois’s Biometric Information Privacy Act, which is not specifically targeted at facial recognition but requires organizations to obtain consent before acquiring biometric information; that consent cannot be given by default and must result from an affirmative action.
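As a code-level analogy for what “affirmative consent” means in practice (all names here are illustrative, not from any real compliance library), the consent flag can never default to granted, and capture is blocked until the user explicitly opts in:

```python
from dataclasses import dataclass

# Illustrative analogy for BIPA-style affirmative consent: the flag
# defaults to False and is flipped only by an explicit user action.
@dataclass
class UserProfile:
    user_id: str
    biometric_consent: bool = False  # consent is never the default state

def capture_biometrics(user: UserProfile) -> None:
    if not user.biometric_consent:
        raise PermissionError("Affirmative opt-in required before capture")
    print(f"Capturing biometric data for {user.user_id}")

alice = UserProfile("alice")
alice.biometric_consent = True  # set only when the user explicitly agrees
capture_biometrics(alice)
```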

As San Francisco moves to ban use of the technology by local law enforcement, the divisive debate over the use (or potential misuse) of facial recognition rages on. The public needs to consider whether the use of facial recognition is about safety, surveillance and convenience, or simply a way for advertisers and the government to track us. What are the government’s and the private sector’s responsibilities in using facial recognition, and when is the line crossed?

How Should AI Be Used to Monitor the Public Activity of Citizens?

The future of personalized marketing and advertising is already here. AI can be combined with previous purchase behavior to tailor experiences for consumers and help them find what they are looking for faster. But don’t forget that AI systems are created by humans, who can be biased and judgmental. And while more personalized and connected to an individual’s identity, this application of AI could surface information and preferences that a buyer would prefer to keep private, evoking a sense of privacy invasion. Additionally, this solution requires storing an incredible amount of data, which may not be feasible or ethical.
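As a rough sketch of the mechanism (with invented purchase histories and item names), a simple recommender can mine co-occurrence patterns in past purchases to tailor suggestions:

```python
from collections import Counter

# Hypothetical purchase histories: each set is one customer's past orders.
histories = [
    {"diapers", "wipes", "formula"},
    {"diapers", "wipes", "onesie"},
    {"coffee", "filters"},
]

def recommend(basket: set, k: int = 2) -> list:
    """Suggest items that most often co-occur with the shopper's basket."""
    counts = Counter()
    for past in histories:
        if basket & past:                 # this history overlaps the basket
            counts.update(past - basket)  # credit the items the shopper lacks
    return [item for item, _ in counts.most_common(k)]

print(recommend({"diapers"}))  # -> ['wipes', 'formula']
```

Even this toy version needs every customer’s full history available to score a single suggestion; at retail scale, that is the “incredible amount of data” described above.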

Consider the notion that companies may be misleading you into giving away rights to your data. The result is that these organizations can now detect and target the most depressed, lonely or outraged people in society. Consider the instance when Target determined that a teenage girl was pregnant and started sending her coupons for baby items based on her pregnancy prediction score. Her unsuspecting father was none too pleased about his high-schooler receiving ads that, in his mind, encouraged his daughter to get pregnant, and he let the retail giant know about it.

Unfortunately, businesses are not only gathering eye-opening amounts of information; many are also being racially, economically and socially selective with the data they collect. And by allowing discriminatory ads to slip through the net, companies are opening a Pandora’s box of ethical issues.

How Far Will AI Go to Improve Customer Service?

Today, AI is often employed to complement the role of human employees, freeing them up to complete the most interesting and useful tasks. Rather than spending time on arduous, repetitive jobs, employees can now focus on harnessing the speed, reach and efficiency of AI to work even more intelligently. AI systems can also remove a significant amount of the friction that arises from interactions between customers and employees.

Thinking back to the advent of Google’s advertising business model, and then the launch of Amazon’s product recommendation engine and Netflix’s ubiquitous “suggested for you” algorithm, consumers face a dizzying number of targeted offers. Sometimes this can be really convenient, such as when you learn that your favorite author has come out with a new book or that the next season of a popular show has launched. Other times it comes across as incredibly invasive and seemingly in violation of basic privacy rights.

As AI becomes more prominent across the enterprise, it raises issues that society has never been forced to consider or manage before. While the application of AI delivers a lot of good, it can also be used to harm people in various ways, and the best way to combat ethical issues is to be transparent. Consequently, we, as technology developers and manufacturers, marketers and people in the tech space, have a social and ethical responsibility to be open to scrutiny and to consider the ethics of artificial intelligence, working to hinder the misuse and potential negative effects of these new AI technologies.

Rob Carpenter is the founder and CEO of Valyant AI, a Colorado-based artificial intelligence company focused on customer service in the quick-serve restaurant industry.
