Google Says Its New PaliGemma 2 AI Models Can Identify Emotions. Should We Be Worried?

“Google says its new AI model family has a curious feature: the ability to ‘identify’ emotions,” writes TechCrunch. And that’s raising some concerns…


Announced on Thursday, the PaliGemma 2 family of models can analyze images, enabling the AI to generate captions and answer questions about people it “sees” in photos. “PaliGemma 2 generates detailed, contextually relevant captions for images,” Google wrote in a blog post shared with TechCrunch, “going beyond simple object identification to describe actions, emotions, and the overall narrative of the scene.” Emotion recognition doesn’t work out of the box, and PaliGemma 2 has to be fine-tuned for the purpose. Nonetheless, experts TechCrunch spoke with were alarmed at the prospect of an openly available emotion detector…
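For readers wondering what "analyzing images" with an openly available model looks like in practice, here is a minimal, hypothetical sketch of loading a PaliGemma 2 checkpoint from Hugging Face for plain image captioning. The checkpoint name, prompt, and image path are illustrative assumptions rather than details from Google's announcement, and the emotion-oriented fine-tuning the article refers to is not shown.

```python
# Hypothetical sketch: loading a PaliGemma 2 checkpoint from Hugging Face
# for basic image captioning. Model ID, prompt, and image path are assumed
# for illustration; emotion-specific fine-tuning is not part of this sketch.
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).eval()

image = Image.open("photo.jpg")   # any local image
prompt = "caption en"             # a PaliGemma-style task prompt

inputs = processor(text=prompt, images=image, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50)

# Decode only the newly generated tokens, skipping the prompt portion.
caption = processor.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(caption)
```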

“Emotion detection isn’t possible in the general case, because people experience emotion in complex ways,” Mike Cook, a research fellow at Queen Mary University specializing in AI, told TechCrunch. “Of course, we do think we can tell what other people are feeling by looking at them, and lots of people over the years have tried, too, like spy agencies or marketing companies. I’m sure it’s absolutely possible to detect some generic signifiers in some cases, but it’s not something we can ever fully ‘solve.’” The unsurprising consequence is that emotion-detecting systems tend to be unreliable and biased by the assumptions of their designers…

“Interpreting emotions is quite a subjective matter that extends beyond use of visual aids and is heavily embedded within a personal and cultural context,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute, a nonprofit that studies the societal implications of artificial intelligence. “AI aside, research has shown that we cannot infer emotions from facial features alone….”

The biggest apprehension around open models like PaliGemma 2, which is available from a number of hosts, including AI dev platform Hugging Face, is that they’ll be abused or misused, which could lead to real-world harm. “If this so-called emotional identification is built on pseudoscientific presumptions, there are significant implications in how this capability may be used to further — and falsely — discriminate against marginalized groups such as in law enforcement, human resourcing, border governance, and so on,” Khlaaf said.
Those concerns were echoed by Sandra Wachter, a professor in data ethics and AI at the Oxford Internet Institute, who told TechCrunch that with models like this, “I can think of myriad potential issues… that can lead to a dystopian future, where your emotions determine if you get the job, a loan, and if you’re admitted to uni.”
