Tuesday, July 2, 2024

Microsoft whistleblower, OpenAI, The New York Times, and the importance of ethical AI


Microsoft is facing internal and external challenges this week amid concerns about the safety of Copilot Designer, its image-generating AI tool. Shane Jones, an AI engineer at the company, sent letters detailing serious concerns about the model, which he says produces images that do not align with responsible AI principles.

Jones warned Microsoft that the image generator created problematic images, including demons, monsters, sexualized depictions of women, and scenes of underage drinking and drug use. He believes that Microsoft and OpenAI, its partner on the model, were aware of these issues but failed to address them adequately.

The engineer’s concerns come as Google, Microsoft’s competitor in the generative AI race, faces criticism of its own over its Gemini multimodal model. Google recently turned off Gemini’s image-generation feature after reports of inaccuracies, underscoring how difficult it is to build accurate, bias-free AI systems.

Jones’ concerns highlight the importance of transparency and accountability in AI development. With Microsoft having eliminated its ethics and society team, experts warn that the tech giant may face more challenges in moderating and stress-testing its AI models.

Separately, Microsoft has filed a motion to dismiss part of the copyright-infringement lawsuit brought against it and OpenAI by The New York Times. In the filing, the company compared the emergence of AI-powered tools to past technological advances and argued that the technology should be allowed to develop responsibly.

While Microsoft’s argument may sidestep questions of infringement, experts warn that the company must also address the concerns raised by employees like Jones if it is to develop AI responsibly. The ongoing challenges facing Microsoft and OpenAI underscore the need for ethical and transparent AI development practices across the tech industry.
