Artificial intelligence is only as good as the people and data behind it. Because we lack mechanisms to account for human bias and error, both find their way into the algorithms we design. The validity of the data behind our algorithms and decisions is also called into question: we rarely know whether the data collection methods or the research designs behind them are sound. Show me any dataset, and I can point out holes and shortcomings in its cogency. The observations in this article are a wake-up call to understand AI's failures and inherited weaknesses.
The LinkedIn mea culpa
The realization that the algorithms behind LinkedIn's functionality are skewed dawned on me over several years of using the platform. Here's what I mean:
The sexist search output
After wrapping up an online group discussion the other day, I sent a LinkedIn invite to one of the main conversants, a professional strategist based in Australia. Instantly, I was barraged with networking suggestions. And the pattern repeated itself: every time I sent an invite to someone resembling this person, I would be flooded with suggestions featuring skimpy pictures and suggestive names. To me, this is not just an affront to my individuality; it is also insinuating. The stereotype is that, because I am male, I would naturally be interested in such connections. Another problem with this kind of suggested connectivity is that most of these profiles are likely fake, acting as fronts and shadows for entities with other agendas in mind.
I thought perhaps there was a glitch in LinkedIn's algorithms, but the trend continued elsewhere. For example, a couple of years ago I attended a conference in Kazakhstan and have kept in touch with the organizers and participants ever since. From time to time, I send personalized invites to people I meet to join my network. Again, the skewed search produced recommendations to connect with individuals with Kazakh last names. I understand that this might reflect the geographic enormity of the world's ninth-largest country; the AI is perhaps trying to be helpful in its outreach. Nevertheless, it makes you wonder why I have no choice in the matter.
Then another disturbing search trend shows up: when I invite someone from the Middle East to join my network, I get connection suggestions from others with Muslim last names. Not everyone in the MEA region is Muslim or has an Arabic last name; the region is home to diverse religious groups. So why did the suggestions primarily feature names from traditionally Muslim families? Because AI algorithms see data in clusters and categorize it in one block. Since the assumption is that most individuals in the MEA have Muslim last names, the algorithm makes the sweeping generalization that this is the case. Hence the danger of relying on such algorithms.
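To illustrate the clustering dynamic described above, here is a minimal toy sketch. This is emphatically not LinkedIn's actual algorithm; every profile, attribute, and score below is hypothetical. It only shows how a naive similarity-based recommender, fed coarse demographic features, will surface the majority cluster and bury everyone else:

```python
# Toy sketch: similarity-based "people you may know" suggestions.
# All profiles and attributes are invented for illustration.

# Each profile carries the coarse features a model might infer.
profiles = [
    {"name": "A", "region": "MEA", "last_name_origin": "Arabic"},    # the invitee
    {"name": "B", "region": "MEA", "last_name_origin": "Arabic"},
    {"name": "C", "region": "MEA", "last_name_origin": "Arabic"},
    {"name": "D", "region": "MEA", "last_name_origin": "Arabic"},
    {"name": "E", "region": "MEA", "last_name_origin": "Armenian"},
    {"name": "F", "region": "MEA", "last_name_origin": "Coptic"},
]

def similarity(a, b):
    # Naive overlap score: +1 for each shared attribute.
    return sum(a[k] == b[k] for k in ("region", "last_name_origin"))

def suggest(seed, pool, top_n=3):
    # Rank everyone else by similarity to the person just invited.
    ranked = sorted((p for p in pool if p is not seed),
                    key=lambda p: similarity(seed, p), reverse=True)
    return ranked[:top_n]

seed = profiles[0]           # one invite to an MEA profile with an Arabic surname
suggestions = suggest(seed, profiles)
print([p["name"] for p in suggestions])                    # ['B', 'C', 'D']
print({p["last_name_origin"] for p in suggestions})        # {'Arabic'}
```

Because the majority cluster dominates the similarity ranking, the Armenian and Coptic profiles never make the top suggestions at all, even though they share the same region. Scaled up, this is exactly how "see data in a cluster, categorize it in one block" turns into a feed of one demographic.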
So far I had thought I might just be too suspicious, but then the other day I sent an invite to one of my colleagues in Washington, DC. This person was of Nigerian descent, and sure enough, the networking suggestions pointed to Nigerian people, both in DC and in Nigeria itself. Now, this might not be racist; it could simply be the AI's grouping algorithm assuming I want to connect with Nigerians in DC and in Nigeria. I would file it under ethnic stereotyping and geographic ignorance of the Nigerian diaspora.
Finally, the most disturbing finding: the AI algorithm independently groups people by color and race. Shocker, right? If you don't believe me, try it yourself. The other day I reached out to one of my classmates from graduate school. That person was African American, and sure enough, the suggested connections represented a selection from the same racial background. Yes, the person was female, hence the all-female results. But to realize that the algorithm produced an all-female cast, segmented by color and racial background, is nothing short of a racist lens.
The engineers and leaders behind these AI algorithms need to be trained in 21st-century-appropriate, unbiased design methodology. I understand that there is no such thing as zero bias and that the goal itself may be elusive. Nevertheless, we need to design more ethically inclusive AI algorithms and employ them in our social platforms. Artificial intelligence is only as good as the people and the data behind it, and these findings are a wake-up call to understand its failures and inherited weaknesses.