A few days ago, after Microsoft announced that it had added OpenAI's DALL-E 3 model to its Bing Image Creator art tool, the service experienced major slowdowns in generating images as so many people rushed to try it out. Microsoft said it was adding many more GPUs to its data centers to speed up image generation.
Now, it appears Microsoft may have to deal with another issue, as many people online are getting content warnings for text prompts that seem entirely ordinary.
Jez Corden covered the problem in an article at Windows Central, noting that the phrase "man breaks server rack with a sledgehammer" got flagged by the service. Another user on Reddit who typed in "a cat with a cowboy hat and boots" also got a content violation message this weekend.
However, it looks like Microsoft is at least aware of the situation. When a person on X (formerly Twitter) asked Microsoft Windows head Mikhail Parakhin about the overly aggressive content warnings in Bing Image Creator, Parakhin's short response on October 8 was, "Hmm. This is weird - checking".
Something has clearly changed in the content guardrails at Bing Image Creator, but this is not the first time. Right after Microsoft launched the AI art tool back in March, moderation was so aggressive out of the gate that just typing in the word "Bing" would trigger a violation message.
At the time, Parakhin said that behavior was intentional for the launch, stating on X, "we are aware that Bing Image Creator is overly restrictive with sensitive queries, had to be on the safe side at launch." The current situation does not appear to be intentional, so hopefully it will be fixed soon.