Biden Administration Funding AI Research That Detects ‘Microaggressions’ on Social Media


The last time the Biden administration got serious about online content moderation was when it created a Disinformation Governance Board to be run by a whacked-out left-wing loon named Nina Jankowicz, who referred to herself as “the Mary Poppins of Disinformation.”

Thank God we dodged that bullet.

Just as we were settling in for some normal give-and-take with online censors, we learn that the Biden administration wants to do an end run around moderators and embed the censorship directly into the software. University of Washington researchers are trying to create artificial intelligence that can “protect” users from discriminatory language, and they’re being funded by the American taxpayer through the $1.9 trillion American Rescue Plan. (You didn’t really think all that money was going to go exclusively to “rescue” America from the coronavirus, did you?)

The researchers have already gotten $132,000 and expect total funding to reach more than $550,000 over the next five years.

Washington Free Beacon:

The researchers are developing machine-learning models that can analyze social media posts to detect implicit bias and microaggressions, commonly defined as slights that cause offense to members of marginalized groups. It’s a broad category, but past research conducted by the lead researcher on the University of Washington project suggests something as tame as praising meritocracy could be considered a microaggression.

The Biden administration’s funding of the research comes as the White House faces growing accusations that it seeks to suppress free speech online. Biden last month suggested there should be an investigation into Tesla CEO Elon Musk’s acquisition of Twitter after the billionaire declared the social media app would pursue a “free speech” agenda. Internal Twitter communications Musk released this month also revealed a prolonged relationship between the FBI and Twitter employees, with the agency playing a regular role in the platform’s content moderation.

Judicial Watch president Tom Fitton compared the AI effort to the Chinese Communists’ efforts to “censor speech unapproved by the state.” Fitton said the research is a “project to make it easier for their leftist allies to censor speech.”

“It’s not the role of government to police speech that some might find either offensive or emotionally draining,” said Dan Schneider, vice president of the Media Research Center’s free speech division. “Government is supposed to be protecting our rights, not suppressing our rights.”

Biden and the left may believe they’re doing God’s work by “protecting people” from offensive language. Somehow, I seriously doubt that all language considered “offensive” will be tagged — a Christian’s objection to obscenities, for example, wouldn’t be written into the program.

The research’s description doesn’t give examples of what comments would qualify as microaggressions—though it acknowledges they can be unconscious and unintentional. The project is led by computer science professor Yulia Tsvetkov, who has authored studies that suggest the artificial intelligence model might identify and suppress language many would consider inoffensive, such as comments praising the concept of meritocracy.

Use AI to develop Skynet, not create ways to stifle free expression.