Human rights organizations are asking Zoom to slow its plan to introduce emotion-analyzing AI into its video conferencing software. According to a recent report from Protocol, the company has said it will use AI to evaluate a user’s sentiment or engagement level. “Experts admit that emotion analysis does not work,” a consortium of human rights groups, including the ACLU, wrote in a letter to Zoom. “Facial expressions are often disconnected from the emotions underneath, and research has found that not even humans can accurately read or measure the emotions of others some of the time. Developing this tool adds credence to pseudoscience and puts your reputation at stake.”

Zoom did not immediately respond to Lifewire’s request for comment.

Keeping Tabs on Your Emotions

According to the Protocol article, Zoom’s monitoring system, called Q for Sales, would check users’ talk-time ratio, response time lag, and frequency of speaker changes to track how engaged the person is. Zoom would use this data to assign scores between zero and 100, with higher scores indicating higher engagement or sentiment (a hypothetical sketch of how signals like these could be combined into a single score appears at the end of this section).

The human rights groups claim the software could discriminate against people with disabilities or certain ethnicities by assuming that everyone uses the same facial expressions, voice patterns, and body language to communicate. The groups also suggest the software could be a data security risk. “Harvesting deeply personal data could make any entity that deploys this tech a target for snooping government authorities and malicious hackers,” according to the letter.

Julia Stoyanovich, a professor of computer science and engineering at New York University, told Lifewire in an email interview that she’s skeptical of the claims behind emotion detection. “I don’t see how such technology can work—people’s emotional expression is very individual, very culturally dependent, and very context-specific,” Stoyanovich said. “But, perhaps even more importantly, I don’t see why we would want these tools to work. In other words, we’d be in even more trouble if they worked well. But perhaps even before thinking about the risks, we should ask—what are the potential benefits of such tech?”

Zoom isn’t the only company using emotion-detecting software. Theo Wills, the senior director of privacy at Kuma LLC, a privacy and security consulting company, told Lifewire via email that emotion-detection software is used during interviews to assess whether the user is paying attention. It’s also being piloted in the transportation industry to monitor whether drivers appear drowsy, on video platforms to gauge interest and tailor recommendations, and in educational tutorials to determine whether a particular teaching method is engaging.

Wills contended that the controversy around emotion-monitoring software is more a question of data ethics than of privacy. She said it’s about the system making real-world decisions based on hunches. “With this technology, you are now assuming the reason I have a particular expression on my face, but the impetus behind an expression varies widely due to things like social or cultural upbringing, family behaviors, past experiences, or nervousness in the moment,” Wills added. “Basing the algorithm on an assumption is inherently flawed and potentially discriminatory. Many populations are not represented in the population the algorithms are based on, and appropriate representation needs to be prioritized before this should be used.”
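Neither Zoom nor Protocol has described how Q for Sales actually turns those signals into a number; only the inputs (talk-time ratio, response lag, speaker changes) and the zero-to-100 output have been reported. Purely as an illustration, the sketch below shows one way such signals could be normalized and weighted into a score. Every signal name, weight, and cutoff in it is a hypothetical assumption, not Zoom’s method.

```python
# Illustrative sketch only: Zoom has not published how Q for Sales computes its
# scores. The signal names, weights, and cutoffs below are hypothetical, chosen
# to show how per-call signals could be folded into a 0-100 engagement score.

from dataclasses import dataclass


@dataclass
class CallSignals:
    talk_time_ratio: float          # fraction of the call this person spent talking (0.0-1.0)
    avg_response_lag_s: float       # average pause before responding, in seconds
    speaker_changes_per_min: float  # how often the conversation switches speakers


def engagement_score(signals: CallSignals) -> int:
    """Combine normalized signals into a 0-100 score using hypothetical weights."""
    # Treat a balanced talk-time ratio (near 0.5) as most engaged.
    balance = 1.0 - min(abs(signals.talk_time_ratio - 0.5) / 0.5, 1.0)

    # Shorter response lags score higher; anything over 5 seconds scores zero here.
    responsiveness = max(0.0, 1.0 - signals.avg_response_lag_s / 5.0)

    # More back-and-forth scores higher, capped at 10 speaker changes per minute.
    interactivity = min(signals.speaker_changes_per_min / 10.0, 1.0)

    weighted = 0.4 * balance + 0.3 * responsiveness + 0.3 * interactivity
    return round(weighted * 100)


print(engagement_score(CallSignals(0.55, 1.2, 6.0)))  # prints 77 with these made-up weights
```

Even a toy formula like this makes the critics’ point concrete: the weights encode assumptions about what “engaged” conversation looks like, and anyone whose speaking style differs from those assumptions will simply score lower.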

Practical Considerations

The problems raised by emotion-tracking software may be practical as well as theoretical. Matt Heisie, the co-founder of Ferret.ai, an AI-driven app that provides relationship intelligence, told Lifewire in an email that users need to ask where the analysis of faces is being done and what data is being stored. Is the analysis being done on call recordings, processed in the cloud, or on the local device?

Heisie also asked what data the algorithm collects about a person’s face or movements as it learns, whether that data could be disentangled from the algorithm and used to recreate someone’s biometrics, whether the company is storing snapshots to verify or validate the algorithm’s learnings, and whether the user is notified that this new derivative data, or stored images, is potentially being collected from their calls.

“These are all problems many companies have solved, but there are also companies that have been rocked by scandal when it turns out they haven’t done this correctly,” Heisie said. “Facebook is the most significant case of a company that rolled back its facial recognition platform over concerns about user privacy. Parent company Meta is now pulling AR features from Instagram in some jurisdictions like Illinois and Texas over privacy laws surrounding biometric data.”