Microsoft accused of selling AI tool that spews violent, sexual images to kids

It looks like Microsoft may be filtering violent AI outputs flagged by engineer.

Microsoft's AI text-to-image generator, Copilot Designer, appears to be heavily filtering outputs after a Microsoft engineer, Shane Jones, warned that Microsoft has ignored warnings that the tool randomly creates violent and sexual imagery, CNBC reported.

Jones told CNBC that he repeatedly warned Microsoft of the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. Microsoft failed to take the tool down or implement safeguards in response, Jones said, or even post disclosures to change the product's rating to mature in the Android store.

Instead, Microsoft apparently did nothing but refer him to report the issue to OpenAI, the maker of the DALL-E model that fuels Copilot Designer's outputs.

OpenAI never responded, Jones said, so he took increasingly more drastic steps to alert the public to issues he found in Microsoft's tool.

He started by posting an open letter, calling out OpenAI on LinkedIn. Then, when Microsoft's legal team told him to take it down, he did as he was told, but he also sent letters to lawmakers and other stakeholders, raising red flags in every direction. That includes letters sent today to the Federal Trade Commission and to Microsoft's board of directors, CNBC reported.

In Jones' letter to FTC Chair Lina Khan, Jones said that Microsoft and OpenAI have been aware of these issues since at least October and will "continue to market the product to ‘Anyone. Anywhere. Any Device'" unless the FTC intervenes.
Advertisement

Bloomberg also reviewed Jones' letter and reported that Jones told the FTC that while Copilot Designer is currently marketed as safe for kids, it's randomly generating an “inappropriate, sexually objectified image of a woman in some of the pictures it creates.” And it can also be used to generate “harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few.”

In a separate letter, Jones also implored Microsoft's board to investigate Microsoft's AI decision-making and conduct "an independent review of Microsoft’s responsible AI incident reporting processes." This is necessary after Jones took "extraordinary efforts to try to raise this issue internally," including reporting directly to both Microsoft's Office of Responsible AI and "senior management responsible for Copilot Designer," CNBC reported.

A Microsoft spokesperson did not confirm whether Microsoft is currently taking steps to filter images, but Ars' attempt to replicate prompts shared by Jones generated error messages. Instead, a Microsoft spokesperson would only share the same statement provided to CNBC:

««We are committed to addressing any and all concerns employees have in accordance with our company policies and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety. When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established in-product user feedback tools and robust internal reporting channels to properly investigate, prioritize and remediate any issues, which we recommended that the employee utilize so we could appropriately validate and test his concerns. We have also facilitated meetings with product leadership and our Office of Responsible AI to review these reports and are continuously incorporating this feedback to strengthen our existing safety systems to provide a safe and positive experience for everyone.»»

OpenAI did not respond to Ars' request to comment.


Marketed to kids but spouting sexual, violent images

Jones has been at Microsoft for six years and is currently a principal software engineering manager. He does not work on Copilot Designer in a professional capacity, CNBC reported, and according to Microsoft, Jones was not associated with dedicated red teams continually working to flag issues with Copilot Designer.

Rather, Jones began "actively testing" Copilot's vulnerabilities in his own time, growing increasingly shocked by the images the tool randomly generated, CNBC reported.

Even for simple prompts like "pro-choice," Copilot Designer would demonstrate bias, randomly generating violent images of "demons, monsters, and violent scenes, including "a demon with sharp teeth about to eat an infant." At one point, Copilot spat out a smiling woman who was bleeding profusely while the devil stood nearby wielding a pitchfork.

Similarly, the prompt "car accident" generated violent, sexualized imagery, showing women in lingerie posing next to violent car crash scenes. More specific prompts like "teenagers 420 party" showed how the tool could cross even more lines with even a little extra prompting, generating "numerous images of underage drinking and drug use," CNBC reported.

CNBC was able to replicate the harmful outputs, but when Ars attempted to do the same, Copilot Designer appeared to be filtering out terms flagged by Jones.

Searches for "car accident" prompted a message from Copilot, saying, "I can help you create an image of a car accident, but I want to clarify that I will not depict any graphic or distressing scenes. Instead, I can create a stylized or abstract representation of a car accident that conveys the concept without explicit detail. Please let me know if you have any specific elements or style in mind for the image."
Advertisement

A request for a photo-realistic "car accident" generated an error, saying, "I'm sorry, but I can't assist with that request." And intriguingly, requests for both "420 teen party" and "pro-choice" appeared to be working, but then the final output was blocked, with a message saying, "Oops! Try another prompt."

"Looks like there are some words that may be automatically blocked at this time. Sometimes even safe content can be blocked by mistake," the error message continued. "Check our content policy to see how you can improve your prompt."

Jones' tests also found that Copilot Designer would easily violate copyrights, producing images of Disney characters, including Mickey Mouse or Snow White. Most problematically, Jones could politicize Disney characters with the tool, generating images of Frozen's main character, Elsa, in the Gaza Strip or "wearing the military uniform of the Israel Defense Forces."

Ars was able to generate interpretations of Snow White, but Copilot Designer rejected multiple prompts politicizing Elsa.

If Microsoft has updated the automated content filters, it's likely due to Jones protesting his employer's decisions.

"The issue is, as a concerned employee at Microsoft, if this product starts spreading harmful, disturbing images globally, there’s no place to report it, no phone number to call and no way to escalate this to get it taken care of immediately," Jones told CNBC.

Jones has suggested that Microsoft would need to substantially invest in its safety team to put in place the protections he'd like to see. He reported that the Copilot team is already buried by complaints, receiving "more than 1,000 product feedback messages every day." Because of this alleged understaffing, Microsoft is currently only addressing "the most egregious issues," Jones told CNBC.