Just like our dear old Watson failed at the Jeopardy! ‘U.S. Cities’ question, Melbourne University has also stumbled with an artificial intelligence project – yet not in the same way. While Watson failed because its question analysis did not take the question category into account, Melbourne University has developed the Biometric Mirror, a system that fails as it judges your face.
The facial recognition system takes a photo of your face, then judges a number of attributes to determine your attractiveness. The Biometric Mirror compares your attributes against an open-source database of 10,000 photos and ranks how ‘attractive’ your face is relative to them: if you are more attractive than a given photo, you get a higher ranking, and a lower one if you are less attractive. The problem with this selection is that the photos are (1) limited in number and (2) not equal in the comparisons they allow (for instance, in the race and age of the two individuals the AI is comparing).
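The ranking step described above can be sketched as a simple percentile comparison. The Biometric Mirror's actual scoring pipeline is not public, so the function, scores, and sample database below are purely illustrative assumptions:

```python
# Hypothetical sketch of the comparison step: rank a subject by the
# fraction of database photos it scores above. The real Biometric Mirror
# pipeline is not public; these names and scores are assumptions.

def percentile_rank(subject_score: float, database_scores: list[float]) -> float:
    """Return the fraction of database photos the subject outscores."""
    if not database_scores:
        raise ValueError("database is empty")
    beaten = sum(1 for s in database_scores if subject_score > s)
    return beaten / len(database_scores)

# Made-up 'attractiveness' scores standing in for the 10,000-photo database.
database = [0.2, 0.4, 0.5, 0.7, 0.9]
print(percentile_rank(0.8, database))  # beats 4 of 5 photos -> 0.8
```

Note that a ranking like this can only ever be as fair as the database it compares against: any skew in who the 10,000 photos depict is inherited directly by the output.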
The difference between Watson and the Biometric Mirror is that the latter was designed to be flawed. Its purpose was to expose flaws in government use of AI facial recognition. The design team argues that poor scoring by a machine can have significant consequences, such as being placed on a U.S. travel ban.
The interesting takeaway is that applying AI to a process often reveals the need to further refine the process behind the technology itself.