AI-made photos have experts worried

Beijing/US: People have recently been supplying cutting-edge AI systems, such as OpenAI’s DALL-E 2 and Google Research’s Imagen, with text descriptions that the algorithms use to create extraordinarily detailed, realistic-looking images.

The results can be comical or even reminiscent of great art, and they are being shared widely on social media, including by key figures in the tech industry. DALL-E 2, a newer version of a comparable but less capable AI system OpenAI released last year, can also add or remove objects within an image.

In the future, such on-demand image generation could serve as a powerful tool for creating all sorts of content, from art to advertising; DALL-E 2 and a similar system, Midjourney, have already been used to help create magazine covers. OpenAI and Google have suggested the technology could also be used for image editing or stock-photo creation.

Neither DALL-E 2 nor Imagen has a public release date. Like many systems already in use, they can produce results that reflect the gender and cultural biases of their training data, which includes millions of photographs scraped from the internet.

Technology that spreads harmful stereotypes and biases is dangerous. Because these systems can generate a wide variety of images from text on demand, there is a fear that they could be used to automate bias at a vast scale. They could also be put to malicious uses, such as spreading misinformation.

The public has only recently become aware of how prevalent artificial intelligence (AI) is in daily life, and of its potential for bias based on gender, race, and other factors. Concerns about the accuracy of facial-recognition systems and their potential for racial bias have risen sharply in recent years.

It is no secret that gender and racial biases abound in today’s AI systems; both OpenAI and Google Research have acknowledged as much in their published research and documentation.

Researchers are still working out how to measure AI bias, according to Lama Ahmad, a policy research program manager at OpenAI, and OpenAI can use what it learns to fine-tune its systems over time. Earlier this year, Ahmad led OpenAI’s collaboration with a group of independent experts who studied DALL-E 2 and provided feedback.
