AI-made photos have experts worried

US: People have recently been supplying cutting-edge AI systems, such as OpenAI’s DALL-E 2 and Google Research’s Imagen, with text descriptions that the algorithms use to create extraordinarily detailed, realistic-looking images.

The resulting images can be comical or even reminiscent of fine art, and they are being shared widely on social media, including by prominent figures in the tech industry. DALL-E 2, a newer and more capable version of a system OpenAI released last year, can also add objects to or remove them from an existing image.

In the future, such on-demand image generation could serve as a powerful tool for all sorts of creative output, including art and advertising; DALL-E 2 and a similar system, Midjourney, have already been used to help create magazine covers. OpenAI and Google have suggested the technology could also be applied to image editing and stock photo creation.

Neither DALL-E 2 nor Imagen has a public release date at this time. Yet, like many other AI systems already in use, both can produce images that reflect the gender and cultural biases of the data on which they were trained, data that includes millions of photographs scraped from the internet.

Technology that spreads harmful stereotypes and biases can be dangerous. Because these systems can generate a wide variety of images from text and can be run automatically, there is a fear that they could be used to reproduce bias on a vast scale. They could also be put to malicious use, such as spreading misinformation.

The public has only recently become aware of how prevalent artificial intelligence (AI) is in daily life, and of its potential for bias based on gender, race, and other factors. Concerns about the accuracy of facial-recognition systems, and their potential for racial bias, have risen sharply in recent years.

It is no secret that gender and racial biases abound in today’s AI systems; both OpenAI and Google Research have acknowledged as much in their published research and documentation.

Researchers are still working out how to measure bias in AI, according to Lama Ahmad, a policy research program manager at OpenAI, and the company can use what it learns to fine-tune its systems over time. Earlier this year, Ahmad led OpenAI’s collaboration with a group of independent experts to better understand DALL-E 2 and provide feedback on it.
