

Pentagon-Funded Study Explores AI’s Ability to Detect Norm Violations in Texts

Pentagon-funded researchers are using GPT-3 and zero-shot text classification to detect "violations of social norms" in text messages


In a world increasingly dominated by artificial intelligence (AI), an intriguing new study reveals how AI might interpret human behavior through our text messages. In this research venture, funded by the Pentagon, scientists are using AI to analyze and detect what they label as “violations of social norms”. Interestingly, they are harnessing GPT-3, an advanced language model developed by OpenAI, in combination with a zero-shot text classification method, to classify these deviations in text messages.

The brainchild of two researchers from Ben-Gurion University, the project aims to address an unmet challenge in understanding social norms and their violations. In essence, they want AI to recognize when someone may have overstepped the bounds of acceptable behavior and is feeling remorse. The endeavor is complex, given the variability of norms across diverse cultures and societies, but the researchers suggest there is a commonality in how humans react to violating these norms, regardless of cultural background.

The underlying theory of the study posits that people, across cultural boundaries, generally respond to the violation of norms with certain emotions, like guilt or shame. By identifying these emotions, the researchers aim to automatically spot instances of norm violations. To achieve this, they generate synthetic data using GPT-3 and use zero-shot text classification to train models that can recognize these "social emotions" in the data. The long-term goal? To use such a model to scan text histories for indications of norm violations.
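To make the workflow concrete, here is a toy sketch of that pipeline. The real study generates synthetic training data with GPT-3 and classifies "social emotions" with a zero-shot model; in this illustration, a small hand-made keyword lexicon stands in for both (the lexicon, function names, and threshold are assumptions for demonstration, not the study's actual method).

```python
# Toy stand-in for the study's pipeline: score each message for
# "social emotions" (guilt, shame), flag messages above a threshold,
# then scan a whole text history for flagged messages.

# Hypothetical cue-word lexicon -- an illustrative assumption, not the
# study's actual feature set or model.
SOCIAL_EMOTION_CUES = {
    "guilt": {"sorry", "apologize", "regret", "fault"},
    "shame": {"embarrassed", "ashamed", "humiliated"},
}

def detect_norm_violation(message: str, threshold: int = 1) -> dict:
    """Score a message per emotion and flag it if any score meets the threshold."""
    tokens = {t.strip(".,!?").lower() for t in message.split()}
    scores = {emotion: len(tokens & cues)
              for emotion, cues in SOCIAL_EMOTION_CUES.items()}
    flagged = any(score >= threshold for score in scores.values())
    return {"scores": scores, "norm_violation": flagged}

def scan_history(messages: list[str]) -> list[str]:
    """Return the messages in a text history that were flagged."""
    return [m for m in messages if detect_norm_violation(m)["norm_violation"]]

history = [
    "See you at noon.",
    "I'm so sorry, I regret snapping at you earlier.",
]
print(scan_history(history))  # flags only the remorseful message
```

In the study itself, the keyword lookup would be replaced by a learned classifier trained on GPT-3-generated examples, but the scan-score-flag structure is the same.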

The fact that this research is sponsored by the Pentagon's Defense Advanced Research Projects Agency (DARPA) raises eyebrows. DARPA, a pillar of U.S. military research since 1958, has been instrumental in technological breakthroughs over the decades, including drones, vaccines, and even the internet. The funding for this research comes from DARPA's Computational Cultural Understanding program, which is broadly aimed at developing technologies for better understanding of, and interaction across, cultures.

However, the connection between studying “social norm violation” and this program seems nebulous. There’s a vaguely ominous undertone to the idea of using software to understand foreign populations, especially when the software’s purpose may extend to analyzing sentiments before, during, or after conflicts.

In a broader context, this research represents a new stride in sentiment analysis, a field already well explored in the realm of surveillance. It serves as another reminder of how AI continues to extend the reach of the U.S. defense community, raising concerns about potential future implications. As intriguing as the application of AI to detecting norm violations in text might be, we must keep a watchful eye on the balance between technological progress and individual privacy. The question remains: will AI's scrutiny of our text conversations be limited to academic research, or will it lead to wider-reaching surveillance? Only time will tell.
