It’s no secret that modern technology is responsible for some epic fails. From online banking disasters to hacked IoT devices, technology can cause business losses and privacy breaches. With the rise of artificial intelligence (AI), many people are concerned that AI is potentially even more dangerous because it’s still new and untested. Unfortunately, recent AI fails have proven that this technology is far from perfect and can be unpredictable.
One of the most famous AI fails happened in 2016, when Microsoft launched an AI-enabled chatbot called “Tay.” The program was designed to post tweets simulating those of a teenage girl, but within a day of launch, Tay started spewing hate speech such as “Hitler was right” and “I hate feminists.” Microsoft blamed the fail on a “coordinated attack” by users who deliberately fed the bot offensive content, but some critics suggested the deeper miscalculation was releasing a system that learned from user input without adequate safeguards.
Another infamous AI disaster happened in 2015, when Google’s AI-powered image recognition tool, Google Photos, labeled photos of two Black people as gorillas. Google apologized and quickly addressed the issue, reportedly by blocking the “gorilla” label in the product altogether, but the incident highlighted real flaws in the underlying technology.
The list of AI fails doesn’t end there. In 2018, an ACLU test of Amazon’s facial recognition tool, Rekognition, found that it falsely matched 28 members of Congress with mugshot photos, and that people of color were disproportionately represented among the false matches. This is alarming given that the technology is used for applications ranging from police surveillance to access control, and it suggests such systems should be deployed with caution.
Because AI is still evolving, there are clear risks associated with its use. As with any technological advancement, it’s important to ensure that robust security protocols and safeguards are in place. Technology should be deployed only when it has been shown to be secure, efficient, and ethical.