FOX NEWS: Google Gemini using ‘invisible’ commands to define ‘toxicity’ and shape the online world: Digital expert

A digital consultant who combed through files on Google Gemini warned that the artificial intelligence (AI) model has baked-in bias resulting from parameters that define "toxicity" and determine what information it chooses to keep "invisible."

On December 12, Ruby Media Group CEO Kris Ruby posted an ominous tweet that said, "I just broke the most important AI censorship story in the world right now. Let's see if anyone understands. Hint: Gemini."

"The Gemini release has a ton of data in it. It is explosive. Let's take a look at the real toxicity and bias prompts," she added. "According to the data, every site is classified as a particular bias. Should that be used as part of the datasets of what will define toxicity?"

She was the first tech analyst to point out these potential concerns regarding Gemini, months before members of the press and users on social media noticed issues with responses provided by the AI. 

Kris Ruby combed through data pertaining to Google Gemini to better understand how it was trained and what parameters were being set internally.

Here is a breakdown of Ruby's working theory on Gemini AI, how its toxicity settings could shape model output, and what that means.