Following the tech and AI communities on X (formerly Twitter) this week has provided an insightful look at both the capabilities and limitations of Gemini – Google’s consumer AI chatbot designed for everyday interactions.

Some tech workers, leaders, and writers have posted screenshots of their interactions with the chatbot, specifically showing examples of strange, ahistorical, and inaccurate image generation that appears to pander toward diversity or “wokeness.”

On X, Google Senior Director of Product Jack Krawczyk posted a statement shortly before this article was published, saying that Google is aware that some of Gemini’s historical image depictions are inaccurate and is working to address the issue immediately.

“We are aware of inaccuracies in some of Gemini’s historical image generation depictions, and we are working quickly to rectify them.

Under Google’s AI Principles (http://ai.google/responsibility/principles), our image generation capabilities are designed to reflect our global user base, and we take representation and bias seriously.

We will continue this approach for open-ended prompts (images of people walking dogs are universal!).

Historical contexts contain more nuances, so we will adjust to accommodate them as best we can.

Thank you, and please continue delivering feedback as part of the alignment process. Stay connected!”

Google first unveiled Gemini late last year after months of anticipation, touting it as an AI model comparable to, or in some cases surpassing, OpenAI’s GPT-4 – currently considered the highest-performing large language model (LLM) on most third-party benchmarks and tests.

Yet initial analysis by independent researchers found that Gemini performed worse than OpenAI’s older LLM, GPT-3.5, leading Google to release two more advanced versions – Gemini Advanced and Gemini 1.5 – earlier this year and to eventually replace Bard with them.

Refusing to generate historical imagery while producing inaccurate depictions of the past
Now, even these more advanced Google AI models are drawing criticism from tech workers and other users – both for refusing to generate historical imagery, such as German soldiers from the 1930s (when the Nazi Party, perpetrators of the Holocaust, controlled the military and the country), and for producing inaccurate depictions of the past. For example, when asked to depict Scandinavian and other European peoples from earlier centuries, Gemini seems oddly selective, returning predominantly darker-skinned figures even though darker-skinned people living there at the time made up only a small minority of the population.

