The benefits of artificial intelligence (AI) are obvious. First, AI is fast. Modern neural networks run on chips that can perform trillions of operations per second [1]. Second, AI never gets tired. An AI can be online twenty-four hours a day, seven days a week, making decisions and learning from them continuously. Together, these factors make AI a cost-effective tool.
I had first-hand experience with AI while interning at a finance company in China. The company extends loans to high-risk borrowers who have been rejected by larger banks, such as a recent high school graduate without an extensive credit history. In this context, evaluating the creditworthiness of borrowers appears to be a perfect application for AI: it can viably be deployed at scale, and many seemingly unrelated data points can be combined to form accurate evaluations.
My job dealt with one of those data points: evaluating creditworthiness from the profile pictures of loan applicants. Specifically, I designed an AI that could scan hundreds of thousands of pictures in thirty minutes and identify specific attributes such as neckties or lipstick. Without this neural network, employees would have to label all of these pictures manually, a far more costly and time-consuming process.
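To make the pipeline concrete, here is a minimal sketch of what such an attribute detector could look like: a pretrained convolutional backbone with a multi-label head, one sigmoid output per attribute. The attribute list, the ResNet-18 backbone, and the detect_attributes helper are illustrative assumptions, not the system I actually built.

```python
# Hedged sketch of a multi-label attribute detector, not the company's code.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

ATTRIBUTES = ["necktie", "glasses", "hat", "lipstick"]  # hypothetical label set

# Pretrained backbone; the final layer is replaced with one output per attribute.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(ATTRIBUTES))
model.eval()  # in practice this head would first be fine-tuned on labeled photos

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def detect_attributes(path: str, threshold: float = 0.5) -> dict[str, bool]:
    """Return a per-attribute flag for a single profile picture."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        # Sigmoid, not softmax: attributes are independent, so this is multi-label.
        probs = torch.sigmoid(model(img)).squeeze(0)
    return {attr: bool(p > threshold) for attr, p in zip(ATTRIBUTES, probs)}
```

Batched inference over a folder of images is what makes the half-hour throughput plausible; the single-image helper above just shows the shape of the task.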
Running this AI over the company's customer database revealed that certain attributes correlated with creditworthiness. For example, neckties and glasses indicated better creditworthiness, most likely because they reflect higher levels of education or income. Surprisingly, customers flagged as wearing hats were deemed to have lower creditworthiness. Some of them were simply wearing baseball caps or bucket hats, but most wore a distinctive blue helmet with a blue uniform: they were delivery workers for ELE.me, the Chinese equivalent of Uber Eats, who are required to wear helmets when delivering food on their electric scooters. These workers tended to have worse creditworthiness due to lower incomes and a lack of job security.
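As a hypothetical sketch of that analysis (the file name and column names are assumptions for illustration), the comparison boils down to default rates with and without each flagged attribute:

```python
# Hedged sketch: compare repayment outcomes across detected attributes.
import pandas as pd

df = pd.read_csv("customers_labeled.csv")  # hypothetical output of the detector

# Default rate for customers with and without each attribute flag.
for attr in ["necktie", "glasses", "hat"]:
    rates = df.groupby(attr)["defaulted"].mean()
    print(f"{attr}: default rate without={rates[False]:.1%}, with={rates[True]:.1%}")
```

A real analysis would also have to control for confounders and sample size; the raw rate gap is only the starting point, but it is the kind of signal that surfaced the helmet pattern described above.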
Although the AI could provide borderline borrowers with sorely needed credit lines, one may wonder whether it is ethical to use data points such as profile pictures for this purpose. People were unwittingly having their profile pictures scrutinized to judge their creditworthiness. My concern is the degree to which the consumer loses their voice in this process.
What if a “helmeted” applicant my AI identified was really just someone who had dyed their hair a bright color? The system discriminates not only against delivery drivers but against anyone it erroneously labels. Although such errors are individually rare, at this volume of data some are all but guaranteed. While evaluating the algorithm's performance on real customers, I found one man whose vertically striped shirt was mislabeled as a necktie. Incorrect information like this feeds into the credit score algorithm, which the bank then uses to decide whether someone receives a loan. The borrower's credit score is determined solely by the AI, even when the algorithm's judgment is wrong, and there is no pathway for borrowers to correct such errors. The same issue was raised by Cathy O'Neil, a mathematician and data scientist. In her bestselling book, Weapons of Math Destruction, she describes how loan applicants are evaluated by an AI-driven black box that nobody understands, leaving them without a voice to explain their circumstances or highlight mistakes [2].
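The striped-shirt case came out of exactly this kind of spot check. As a hedged sketch (the file and column names are assumptions), an audit comparing model flags against a small set of human labels can surface such false positives for review:

```python
# Hedged sketch of a mislabel audit against a human-labeled sample.
import pandas as pd

preds = pd.read_csv("model_flags.csv")   # hypothetical model output per customer
truth = pd.read_csv("human_labels.csv")  # hypothetical human spot-check labels
merged = preds.merge(truth, on="customer_id", suffixes=("_pred", "_true"))

# False positives: the model saw a necktie where the human reviewer did not.
false_pos = merged[(merged["necktie_pred"]) & (~merged["necktie_true"])]
print(f"{len(false_pos)} customers falsely flagged as wearing neckties")
print(false_pos["customer_id"].tolist())
```

An audit like this can only catch errors in the sample it checks; without it, a mislabel flows silently into the score.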
How can we avoid ethical issues like this? The most important factor is transparency. People should know what they are being judged on, and the AI's results should be explainable. With this information, applicants can understand why a loan application was denied, and incorrect judgments can be identified and corrected.
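One way to satisfy both requirements, sketched here with an intentionally interpretable model rather than the black box O'Neil criticizes, is a scoring function whose per-feature contributions can be shown back to the applicant. The features and weights below are invented for illustration:

```python
# Hedged sketch: an interpretable (logistic-regression-style) score whose
# per-feature contributions can be disclosed to the applicant.
import numpy as np

FEATURES = ["necktie", "glasses", "hat"]
weights = np.array([0.8, 0.5, -1.2])  # hypothetical learned coefficients
bias = -0.3

def explain(applicant: dict[str, int]) -> None:
    x = np.array([applicant[f] for f in FEATURES])
    contributions = weights * x
    score = 1 / (1 + np.exp(-(contributions.sum() + bias)))  # sigmoid
    print(f"approval score: {score:.2f}")
    for f, c in zip(FEATURES, contributions):
        if applicant[f]:
            print(f"  {f}: {c:+.2f}")  # signed contribution of each attribute

explain({"necktie": 0, "glasses": 1, "hat": 1})
```

With a printout like this, the helmeted delivery worker can see exactly which attribute hurt their score and dispute it.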
People should also have the option not to submit certain information, or not to have it factored into their evaluation. For example, applicants must provide an address to receive a letter informing them of their loan application status, but they should be able to exclude that address from the evaluation itself. Although AI predictions would be less accurate for someone who withholds certain data, consumers should retain the freedom to keep that information private.
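In code, such an opt-out can be as simple as a consent filter applied before any field reaches the model; the field names here are hypothetical:

```python
# Hedged sketch: drop any field the applicant did not consent to having evaluated.
def apply_consent(features: dict, consent: dict) -> dict:
    return {k: v for k, v in features.items() if consent.get(k, False)}

features = {"address": "123 Main St", "necktie": 1, "glasses": 0}
consent = {"address": False, "necktie": True, "glasses": True}
print(apply_consent(features, consent))  # address never reaches the scoring model
```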
Finally, customers should be able to appeal to a human if they feel they have been misjudged. A human's evaluation may be more biased than a computer's, but it is important that customers at least have this recourse. Appeals also benefit the corporations themselves: by examining the cases where the AI fails, they can find the algorithm's blind spots and improve its accuracy.
Ultimately, AI is just a tool and cannot be inherently good or bad. While it has the potential to be more accurate and less biased than humans, it is up to humans to design algorithms that are fast, efficient, and, most importantly, ethical.
[1] NVIDIA. 2021. "Tesla K80." https://www.nvidia.com/en-gb/data-center/tesla-k80/.
[2] O'Neil, Cathy. 2017. Weapons of Math Destruction. Harlow, England: Penguin Books, 278–315.