Since the Artificial Intelligence Act (AI Act) was first introduced in January 2024, there’s been considerable anticipation among companies leveraging AI tools. At FORCYD, our team has been actively discussing the potential implications of this new regulation. Many of our employees are curious about whether the AI tools we use might face bans or become subject to stricter regulations. During these discussions, one particular AI tool frequently came up, prompting us to ask: what’s going to happen to sentiment analysis?
In this blog post, we will delve into what sentiment analysis is and how it is used in the eDiscovery world. Next, we will identify and discuss the relevant articles from the AI Act, and finally, we will explore the potential future of sentiment analysis under this new regulatory framework.
Please note that this article cannot be considered as legal advice and is based solely on the AI Act itself, without any additional commentaries or guidelines.
What is sentiment analysis?
For those who are not familiar with sentiment analysis, it’s helpful to start with a brief explanation. Sentiment analysis is a text classification task that uses AI to score documents – such as email and chat communications – based on the likelihood that the content contains negative, positive, or neutral sentiment. This process involves analysing the words and phrases used in a communication, as well as the context in which they’re used. It helps to quickly identify unusual or highly charged interactions between participants. By detecting such exchanges between key actors, it’s possible to locate communications that need further investigation and build a deeper understanding of the conversations and topics important to the case at hand. [1][2]
However, it’s important to clarify that while emotion detection and sentiment analysis are often used interchangeably, they are not the same. Emotion detection, a subset of sentiment analysis, provides deeper insights into specific emotions in the text. While sentiment analysis is useful for tasks like tracking brand reputations or evaluating public opinion, it sometimes fails to capture the true feelings behind words, such as sarcasm or irony. This is where emotion detection becomes essential, offering a more nuanced understanding of the emotions involved.
Sentiment analysis in eDiscovery
Sentiment analysis often becomes a useful tool in eDiscovery, especially as it aids in prioritising documents for review with the assistance of additional AI tools. An illuminating example would be an internal investigation based on a harassment claim. In that case, sentiment analysis can be easily applied to emails and messages to identify hostile or aggressive interactions, helping to quickly pinpoint critical communications that warrant further review.
The tool analyses text sentence by sentence, identifying emotions through specific indicator words. These words are totalled and ranked to produce an overall sentiment score for each sentence, with colour-coded sentiments allowing reviewers to quickly spot the emotions in the text. Nevertheless, it is important to note that sentiment analysis scores are only predictions, which is why human oversight is always needed to ensure accuracy.
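The sentence-by-sentence scoring described above can be illustrated with a minimal, lexicon-based sketch. The indicator-word lists and thresholds here are purely illustrative assumptions for demonstration – real eDiscovery platforms use far richer models – but the flow (split into sentences, count indicator words, map the total to a sentiment label) mirrors the approach described.

```python
import re

# Illustrative indicator-word lists (assumptions, not a real eDiscovery lexicon).
POSITIVE = {"great", "thanks", "appreciate", "excellent", "agree"}
NEGATIVE = {"hostile", "threat", "angry", "unacceptable", "harass"}


def score_sentence(sentence: str) -> int:
    """Naive sentiment score: +1 per positive indicator word, -1 per negative."""
    words = re.findall(r"[a-z']+", sentence.lower())
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)


def label(score: int) -> str:
    """Map a numeric score to a coarse sentiment label."""
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"


def analyse(text: str) -> list[tuple[str, int, str]]:
    """Split text into sentences and score each one, as described above."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [(s, score_sentence(s), label(score_sentence(s))) for s in sentences]


if __name__ == "__main__":
    sample = "Thanks, I really appreciate the update. This delay is unacceptable!"
    for sentence, score, sentiment in analyse(sample):
        print(f"{sentiment:>8} ({score:+d}): {sentence}")
```

Even this toy version shows why human oversight matters: a sarcastic “great, just great” would score as positive, which is exactly the kind of miss that emotion detection and reviewer judgment are meant to catch.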
The AI Act
The AI Act defines an AI system as:
“a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. [3]
This definition of AI system emphasises ‘infers’ and ‘autonomy,’ distinguishing it from traditional software. It adopts a broad, technology-neutral approach to stay current, which is particularly interesting given the rapid advancements in AI technology. This EU strategy aims to ensure that the legislation remains relevant and effective, even as AI technology continues to evolve.
To further manage the varying capabilities and risks of AI systems, the AI Act categorises AI applications into three levels: [4]
- Applications that pose an unacceptable risk. These AI systems are completely banned.
- High-risk applications, such as tools that scan and rank job applicants’ CVs, must meet specific legal standards.
- Applications with general-purpose AI, minimal risk, or no risk have fewer obligations to meet.
AI systems that pose threats to individuals are set to be banned under Art. 5 of the AI Act [5]. More specifically, these systems include those that manipulate cognitive behaviour, such as voice-activated toys encouraging risky behaviours in children, as well as social scoring based on behaviour, socio-economic status, or personal traits. Additionally, biometric identification and categorisation systems, including real-time remote biometric identification like facial recognition, fall under this prohibition. Exceptions may apply for law enforcement purposes, with stringent conditions specified.
On the same note, under Art. 6 of the AI Act, an AI system is considered high-risk if it is used as a safety component of a product, or if it is itself a product covered by EU legislation. These systems must undergo a third-party assessment before they can be sold or used. It is important to highlight that AI systems listed in Annex III are automatically classified as high risk.
Under Annex III, Art. 1(c) [6], AI systems intended to be used for emotion recognition are listed as high risk. Based on Art. 3(39) of the Act [7], an emotion recognition system is defined as:
“an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”
Biometric data, in turn, is defined under Art. 3(34) [8] as:
“personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”.
With a clear understanding of how the AI Act categorises and regulates different AI systems, we can now explore whether sentiment analysis falls under the prohibitions or restrictions of this legislation.
Sentiment Analysis and the AI Act: Navigating Future Implications
Based on the relevant articles of the AI Act and the operation of sentiment analysis in eDiscovery, it appears that sentiment analysis is neither prohibited under Art. 5 of the AI Act nor classifiable as high risk under Art. 6.
The AI Act notably does not explicitly mention “sentiment analysis”; rather, it uses the term “emotion recognition”. Emotion recognition or detection, as mentioned above, is a subset of sentiment analysis, but a more sophisticated one that can delve into more detailed emotions such as love and anger. Additionally, in its definition of emotion recognition applications, the AI Act explicitly states that these involve AI systems designed to identify or infer emotions or intentions of individuals based on biometric data.
Biometric data, based on the definitions provided by the EU and the AI Act, are data resulting from the processing of physical, physiological, or behavioural characteristics that uniquely identify a person, such as facial images or fingerprints. Sentiment analysis, on the other hand, uses only text to identify the prevailing sentiment of a document, which means that it does not appear to fall under Annex III, Art. 1(c) and is thus not classified as high risk.
Finally, it is important to mention that guidelines and commentaries, as well as case law, on all articles of the AI Act are expected in the upcoming months, which will hopefully provide more clarity on the question at hand. This underscores the need for ongoing dialogue and adaptation between regulatory frameworks like the AI Act and emerging AI technologies.
About the author:
Natalia Benou is a law and tech enthusiast who recently joined FORCYD as a Cyber Forensics and eDiscovery analyst.
Natalia holds an LL.B. in European Law with a specialisation in Law and Technology, and an LL.M. in Forensics, Criminology, and Law. Before joining FORCYD, Natalia gained valuable experience through several legal internships in the Benelux area and Greece, including with AKD Benelux Lawyers in Luxembourg and Zepos & Yannopoulos in Greece.
References:
[1] https://www.techtarget.com/searchbusinessanalytics/definition/opinion-mining-sentiment-mining
[2] https://help.relativity.com/RelativityOne/Content/Relativity/Sentiment_analysis/Running_sentiment_analysis.htm
[3] Art. 3 of the AI Act: https://artificialintelligenceact.eu/article/3/
[4] https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[5] Art. 5 of the AI Act: https://artificialintelligenceact.eu/article/5/
[6] Annex III, Art. 1(c) of the AI Act: https://artificialintelligenceact.eu/annex/3/
[7] Art. 3(39) of the AI Act: https://artificialintelligenceact.eu/article/3/
[8] Art. 3(34) of the AI Act: https://artificialintelligenceact.eu/article/3/