Tuesday, July 2, 2024

AI is not yet ready for mainstream use

AI tools like ChatGPT have become increasingly popular, with companies investing billions of dollars in the technology in hopes of revolutionizing the way we live and work. That promise, however, has been accompanied by a steady stream of troubling headlines about bias, inaccuracies, copyright violations, and the creation of non-consensual intimate imagery.

Deepfakes recently made headlines when AI-generated pornographic images of Taylor Swift circulated on social media, highlighting the potential dangers of mainstream artificial intelligence technology. In response, President Joe Biden called on Congress to pass legislation regulating artificial intelligence, including a ban on AI voice impersonation.

Despite these concerns, Big Tech companies and AI firms continue to introduce new features and capabilities. OpenAI, the creator of ChatGPT, unveiled a new AI model called Sora, capable of generating realistic 60-second videos from text prompts. Microsoft integrated its AI assistant, Copilot, into popular software like Word and PowerPoint, while Google introduced Gemini, an AI chatbot replacing Google Assistant on some Android devices.

However, experts in artificial intelligence, law, and academia are worried about the mass adoption of AI without proper regulation. They signed an open letter urging AI companies to change their policies and submit to independent evaluations of safety and accountability, emphasizing the need for transparency and the importance of allowing independent researchers to evaluate AI systems.

Suresh Venkatasubramanian, a computer scientist and professor, expressed concerns about the gap between the promises of AI and its actual practice. He stressed the importance of evaluating AI systems independently to ensure safety, security, and trustworthiness.

As the debate over AI regulation continues, experts like Arvind Narayanan from Princeton University advocate for bolder reforms, such as taxing AI companies to fund social safety nets. For now, users of generative AI tools must understand the limitations and challenges associated with these technologies.

When asked about the readiness of generative AI tools for mass adoption, ChatGPT and Google’s Gemini AI tool both acknowledged the potential but emphasized the need to address ethical, societal, and regulatory challenges for responsible and beneficial mass adoption. Gemini also highlighted concerns about bias in training data and the responsible use of AI.

In conclusion, while AI technology holds immense promise, it also poses significant risks that must be addressed through regulation, transparency, and accountability. As the debate over AI ethics and regulation continues, it is crucial for policymakers, researchers, and industry leaders to work together to ensure the responsible development and deployment of artificial intelligence.
