Google is an astonishing feat of technological growth. The search engine has quickly become a huge part of our lives. We turn to it for transactional queries, like information on the latest Optimum packages, and for everything from DIY home improvement to the latest gossip about our favorite celebrities. Google is a great example of how deeply and comprehensively an AI can affect our everyday lives.
But what many tech enthusiasts and gurus fail to understand is that this impact may not necessarily be 100% positive. In fact, there are very real concerns with ethical issues in artificial intelligence that are finally emerging into the mainstream. Read on to find out more about how AI still has a long way to go to succeed in our ever-changing world.
The “School Girl” Test – Inherent Bias in AI Gone Wild
The “schoolgirl” test is an accurate, if disturbing, example of how human biases can creep into an artificial intelligence system and skew its performance. The test is simple: Google a few generic words and phrases and examine the differences in the results. For example, a query about “the world’s best athletes” will show you everyone from Messi to Tom Brady, but Google Images will rarely include any female athletes among the results.
Another example is the word “schoolgirl”. Where you would expect the search engine to show results of young girls in school, the results are horrifyingly sexualized. Hypersexualized imagery of women in schoolgirl outfits dominates the search results instead of what you are actually looking for. The results, in this case, are nowhere near your expectation. Instead, they are inappropriate and can prove triggering and distressing to the viewer, countering Google’s intended purpose. Googling “schoolboy,” by contrast, typically returns images of ordinary male school kids. This problem forms a cornerstone argument for examining ethical issues in AI.
UNESCO’s Global Document on Ethics in AI
Is AI sexist? Not directly. But much of the world still is, which means sexism and gender bias can very realistically creep into an AI’s programming. Even more disturbing is the fact that most AI systems also have a machine learning component, meaning they continue to learn as they process more datasets and models. Even if an AI had no gender bias in its initial programming, it can learn it from skewed datasets that do contain that bias. So, while an AI may not directly be programmed with sexism, racism, xenophobia, or warmongering, it can very well pick these traits up as it consumes data.
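The mechanics of this are easy to demonstrate. Here is a minimal sketch (the corpus and names are invented for illustration, and real search systems are vastly more complex): a toy model that simply counts which words follow a query term in its training data. If the data is skewed, so are the model’s suggestions.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus (invented for illustration): the "training
# data" itself is skewed -- male athletes appear far more often.
corpus = [
    "athlete messi", "athlete messi", "athlete brady",
    "athlete ronaldo", "athlete williams",  # only one female athlete
]

# A minimal bigram model: count which word follows each query term.
follows = defaultdict(Counter)
for doc in corpus:
    words = doc.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

# The model holds no opinion of its own; its top suggestion simply
# mirrors the imbalance in the data it consumed.
print(follows["athlete"].most_common(1))  # -> [('messi', 2)]
```

Nothing in the code mentions gender, yet the output ranks results exactly as unevenly as the data did. That is the core of the problem: the bias lives in the dataset, and the system faithfully amplifies it.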
This very problem is evident in Google, one of the most visible and frequently used AIs in the world today. So, while we would like AI to be as impartial as we are led to believe, the fact is that AI systems like Google frequently deliver biased results. At a basic level, you can see how a biased search engine will start delivering results that reflect deep-rooted bigotry, gender bias, and other problematic elements of society. This is why international bodies like UNESCO are working on creating a comprehensive, global document for ethical AI use.
Autonomous Driving: Ethical Dilemmas for AI Cars
Autonomous vehicles became all the rage a few years ago as electric vehicles gained traction. Manufacturers tout their autonomous cars as vehicles that can sense their environment and other elements in their proximity with little to no human involvement, essentially as close to a self-driving vehicle as we can get. The vehicle also gathers data on the road via various sensor feeds, which the manufacturer’s AI can use to learn and improve its reaction to traffic patterns.
However, AI is usually programmed to abide by the rules of the road; it does not (and cannot) make moral decisions. For example, in heavy high-speed traffic, a human driver may slam on the brakes when a jaywalker makes a dangerous crossing, despite the risks of a rear-end collision. An AI, however, may theoretically choose not to brake at all and run over the jaywalker, against every human inclination. It lacks the ability to make a moral decision instead of a data-backed one.
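To make the dilemma concrete, here is a deliberately naive sketch (the function name, risk values, and thresholds are all hypothetical, not any manufacturer’s actual logic) of a purely cost-based planner. It scores every outcome the same way, with no moral weight attached to a human life:

```python
# A minimal, hypothetical sketch of a purely rule-based planner:
# it picks the action with the lowest predicted "cost", and a
# pedestrian impact and a fender-bender are scored identically.
def choose_action(collision_risk_ahead: float, rear_end_risk: float) -> str:
    costs = {
        "brake_hard": rear_end_risk,       # risk of being rear-ended
        "continue": collision_risk_ahead,  # risk of hitting the jaywalker
    }
    # Lowest numerical risk wins -- no notion of *what* is being hit.
    return min(costs, key=costs.get)

# A human driver would brake for the jaywalker regardless; this naive
# planner continues because the rear-end risk is numerically higher.
print(choose_action(collision_risk_ahead=0.4, rear_end_risk=0.6))  # -> continue
```

The flaw is not a bug in the code; the code does exactly what it was told. The flaw is that "minimize collision probability" is a data-backed rule, not a moral judgment, which is precisely the gap the article describes.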
Ethical Problems with AI in Art
The Next Rembrandt was quite a sensation a few years ago. A computer analyzed every pixel of over 340 of the famous Dutch master’s paintings. The AI then produced a new, 3-D printed painting in Rembrandt’s distinctive style. The result was unlike anything ever seen in art circles, and the AI delivered what can only be called an unprecedented event in art and culture.
But the lack of precedent doesn’t stop with a machine creating works of art based on data. There is also the problem of who should be credited for the work. The AI itself is an artificial, digital program; it isn’t a person or a corporation, so it can’t claim any benefits accruing from the painting it generated. Then who should get the credit? The project manager? The engineer who designed and ran the algorithm? Or, to stretch a point, Rembrandt himself, since the work was based on his?
Moreover, another problem seems immediately apparent. Under controlled circumstances, an AI may be able to deliver a unique piece of art based on existing examples, but it lacks the moral capacity to differentiate between pirated and properly licensed data. Especially in music, this may demand a long-overdue overhaul of piracy laws and technology.