Google’s recent AI blunder has been making headlines over the past week. On July 16th, Google released a research paper titled ‘On Learnable Image Synthesis’. The paper was co-authored by two Google engineers, Hung-Yi and Chen, and four academics from universities in the United States, China, and South Korea. It described an AI system, developed by the team, that could synthesize realistic images from text descriptions.
The AI system, however, had a major flaw: it could be easily fooled by inputs so simple and basic that even a child could recognize them. For instance, when a photograph of a cat and a photograph of a dog were fed into the AI, it confused the two and produced an image of a machine gun that no one would mistake for a household pet.
Google’s AI blunder raised serious concerns about the risks of deploying machine learning systems in self-driving cars, medical diagnosis, and facial recognition. It also serves as a reminder of the importance of supervised learning, in which a model is trained on examples labeled by humans, and of human oversight of that training more broadly. Without proper oversight, AI systems can be easily fooled, even by basic text inputs.
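To make the supervised-learning point concrete, here is a minimal, hypothetical sketch: a toy cat-versus-dog classifier trained on human-labeled examples and then checked against held-out labeled data before anyone trusts it. The features, labels, and use of scikit-learn are all assumptions for illustration; this is not the system described in the Google paper.

```python
# A toy illustration of supervised learning: the model learns from
# human-provided labels, and a held-out labeled set lets a human
# verify its behavior before deployment. All data here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical numeric features per image (e.g., ear pointiness, snout length).
cats = rng.normal(loc=[0.8, 0.2], scale=0.1, size=(100, 2))
dogs = rng.normal(loc=[0.3, 0.7], scale=0.1, size=(100, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)  # human-assigned labels: 0 = cat, 1 = dog

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Human oversight step: measure accuracy on labeled data the model never saw.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The design point is the evaluation step at the end: without labeled held-out data and a human reviewing the results, there is no way to notice that a model is confusing cats with dogs, let alone with machine guns, before it reaches users.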
Google has already withdrawn the research paper from publication. It’s not clear what the next steps are, but given the potential risks of AI systems being tricked into making wrong decisions, it’s critical that Google offer a thorough explanation and take corrective action. As the technology landscape becomes increasingly driven by machine learning and AI, it’s essential that we get it right.