The Pope’s Coat Focuses Attention on AI Images

PARIS — Was the Balenciaga coat a wake-up call?

After a photorealistic image of Pope Francis wearing a white puffer coat from the brand caused an internet frenzy earlier this week, Elon Musk, Apple cofounder Steve Wozniak, and Skype cofounder Jaan Tallinn signed an open letter calling on AI labs to pause the training of systems more powerful than GPT-4 for at least six months.

They were among the tech leaders stating that the rapidly evolving systems pose “profound risks to society and humanity.”

As unlikely as it may seem that the pope would be dressed by Demna, most people couldn’t tell the Midjourney-generated image was fake when it went viral. For many, it was their first glimpse of AI’s capabilities, and it left the public grappling with the implications of these new technologies.

Amid all this, Levi’s revealed it would use AI-generated models in partnership with digital fashion studio Lalaland.ai in a bid to increase diversity and inclusion. The brand quickly faced backlash and calls to simply hire diverse human models instead of relying on technology.

The use of AI images has the potential to upend not only the fashion industry and creative jobs such as photography and styling (McKinsey estimates that automation could displace 400 million to 800 million workers worldwide by 2030), but also the way people view and analyze photographs, with profound implications for democracy itself.

“With the pope images, it’s fun, it’s sort of silly and it doesn’t matter too much in the sense of what those images actually are. But it’s opening up the conversation, and opening up these wider issues. It’s an opportunity to get people to pay attention,” said Mhairi Aitken, ethics fellow at the Alan Turing Institute, the U.K.’s national institute for data science and artificial intelligence.

Less benign images also circulated this week, including ones of French President Emmanuel Macron seemingly collecting garbage on the streets of Paris amid a sanitation workers’ strike and riots in the country, and incendiary pictures of former U.S. President Donald Trump appearing to be dragged away by police following his indictment.

At a conference hosted by the Alan Turing Institute this week, the images were a hot topic.

“These fake images that are coming out, there are concerns about what the long-term impacts might be. There’s excitement about the rapid advances in the technology, but at the same time, concerns around the impacts and that those might be harmful,” Aitken said. “There has been a heightened awareness of the risks around the uses of AI.”

There might be telltale signs — experts say to look at the hands, which AI hasn’t perfected yet, or the glasses — but that takes a discerning eye. “The reality is that’s not how people view images, it’s not how people consume media. If you’re just scrolling past, it looks real, it looks convincing,” she said. Plus, as the AI image generators improve, the images will become more and more sophisticated.

Fundamentally, the problem is not figuring out whether a given image is fake; it’s that the seed of disbelief is now planted in every image. Real photographs can be dismissed as fake by anyone who doesn’t like what they see or finds it inconvenient to their worldview.

The speed at which AI is developing is “highly concerning” even to those who work in the field, said Alexander Loth, a senior program manager for data science and AI at Microsoft, who studies the use cases and benefits of the technologies at Microsoft’s AI for Good Lab.

“A few weeks ago, it was not even seen as a possibility that you could enter a prompt and get a photorealistic looking pope,” he said.

He shared slides depicting how fast AI is evolving, with big jumps taking place this year. Midjourney’s latest release, version 5, can create photorealistic images like the pope’s white coat, while GPT-4, released two weeks ago, appears to understand complex logic.

Publicly available AI programs including Midjourney, ChatGPT and DALL-E have guardrails, but those built into an open-source program like Stable Diffusion can be stripped out or worked around. “So it’s getting very difficult regarding misinformation and these kinds of pictures. When the next U.S. election happens, we are not very sure what we will see,” Loth said.

One proposed solution is invisible digital watermarking: authentication data, similar to metadata, embedded imperceptibly in real photos so that their origin can be verified later.
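The core mechanic can be sketched in a few lines of code. The Python below is a toy illustration only, not any vendor’s actual method: it hides a short payload (a hypothetical publisher tag) in the least significant bits of raw pixel data, a change invisible to the eye that a verifier can later read back out. Production watermarks are engineered to survive cropping and re-compression, which this naive version would not.

```python
# Toy invisible watermark: tuck payload bits into the least significant
# bit of successive bytes of raw pixel data. Illustrative only.

def embed_watermark(pixels: bytearray, payload: bytes) -> bytearray:
    """Write the payload, bit by bit, into the pixels' low bits."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small for payload")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # flips at most the lowest bit
    return out

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the pixels' low bits."""
    payload = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        payload.append(byte)
    return bytes(payload)

pixels = bytearray(range(256)) * 4                 # stand-in for raw image data
marked = embed_watermark(pixels, b"WWD-DEMO-TAG")  # hypothetical payload
print(extract_watermark(marked, 12))               # b'WWD-DEMO-TAG'
```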

Another proposed solution is using the blockchain to verify the origin of an image. “It could be very useful in tracking fake news,” said Leonard Korkmaz, head of research at Quantlab and product manager at Ledger. He pointed to Lenster, a social network being built on the Lens Protocol, as a way to track and verify posts on the blockchain.

“If the issuer was the Vatican posting the photos, using an NFT smart contract, people will be able to identify that it was posted by an official account. If it’s posted by someone unknown, that means it can be a fake and you need to do more investigation,” he said.

However, that requires issuing an NFT for each image, as well as verifying accounts through services built on a technology the average person is unfamiliar with. The technology is “not completely mature right now,” Korkmaz noted, and Lenster is still a bit unwieldy and not quite user-friendly. The company has published plans for how it intends to build out the protocol, but “it’s basically a vision that needs to come and be revealed,” he said.
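Stripped of the NFT machinery, the registry idea Korkmaz describes reduces to a public, append-only mapping from an image’s cryptographic fingerprint to a known issuer. Here is a minimal sketch of that lookup logic, with a plain dictionary and hypothetical names standing in for the on-chain parts:

```python
import hashlib

# Toy provenance registry. In a real system the write below would be a
# signed blockchain transaction; the dict and names are illustrative only.
REGISTRY: dict[str, str] = {}

def fingerprint(image_bytes: bytes) -> str:
    """Content-derived ID: any change to the image changes the hash."""
    return hashlib.sha256(image_bytes).hexdigest()

def register(image_bytes: bytes, issuer: str) -> None:
    """An issuer (e.g. an official account) publishes the image hash."""
    REGISTRY[fingerprint(image_bytes)] = issuer

def check_origin(image_bytes: bytes) -> str:
    """Look up who, if anyone, registered this exact image."""
    return REGISTRY.get(fingerprint(image_bytes), "unknown; investigate")

photo = b"...raw bytes of an official photo..."
register(photo, "@vatican (verified)")
print(check_origin(photo))             # @vatican (verified)
print(check_origin(photo + b"edit"))   # unknown; investigate
```

The weak points flagged above still apply: someone has to register images in the first place, and readers have to know, and be able, to check.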

“The notion of seeing is believing is no longer true, and that’s the big shift right now,” said Al Tompkins, senior faculty for broadcast and online at the Poynter Institute.

Brands might look to AI to cut out photographers on basic images. “The real question is going to end up being, ‘What does genuine photography do that AI doesn’t?’” Tompkins said, comparing the moment to the arrival of Photoshop 30 years ago, now widely accepted as a tool for manipulating images.

However, with Photoshop you need specific skills, training and time to work on an image. Midjourney takes a few words and mere seconds. “With AI you don’t need any skills and it’s very fast. That’s the scary thing. Every bad actor can have a huge amount of fake pictures and fake news,” said Microsoft’s Loth. The only barrier to entry is your imagination.

AI image generators can also create works “in the style of” a specific artist, which raises copyright issues, said Poynter’s Tompkins. Once the law catches up with the technology, he imagines a compensation model for photographers and artists similar to the way sampling a song works in music.

Industry organization the Coordination of European Picture Agencies, or CEPIC, which counts Getty Images and Magnum Photos among its members, issued a set of guidelines to encourage the responsible use of AI in photography, as well as to address copyright and privacy issues.

“We recognize the potential of AI to transform the visual media industry, but we also acknowledge the risks associated with its use,” said CEPIC president Christina Vaughan. The organization points out that the law is “struggling to cover all possible uses and potential abuses.”

“Many companies are producing derivative products that use existing gray areas to gain a competitive advantage by avoiding remunerating the original creators, sacrificing long-term societal benefits for short-term gains,” the organization said.

Copyright is moving into uncharted territory. In the U.S., Getty Images is suing Stability AI, the creator of Stable Diffusion, alleging that the company trained its AI on the agency’s photography, created derivative works and violated its copyright.

“In most countries in the world at this point, it’s been determined that authorship requires a natural person,” said Thomas Coester, principal at Thomas Coester Intellectual Property based in Los Angeles.

In other words, if an AI platform is simply given prompts that generate text or an image, most would say there is no meaningful creative input by the person, and therefore no copyright; anybody can copy the output. However, some degree of human input can change things.

“But it’s indeterminate at this point how much is enough,” said Coester.

In the age of “authenticity,” brands could be seen as duping their customers by using AI models. “If they’re lying about the person, are they lying about the product?” wondered Tompkins. “You might as well put it on a Barbie doll. It’s not real. People want to know what is the real deal.”

That lack of authenticity led Levi’s to backtrack on its announcement.

“We realize there is understandable sensitivity around AI-related technologies, and we want to clarify that this pilot is something we are on track to experiment with later this year in the hopes of strengthening the consumer experience. Today, industry standards for a photoshoot will generally be limited to one or two models per product. Lalaland.ai’s technology, and AI more broadly, can potentially assist us by allowing us to publish more images of our products on a range of body types more quickly,” the company said in a statement.

Levi’s clarified that it is not scaling back its plans for live photoshoots, adding: “Authentic storytelling has always been part of how we’ve connected with our fans, and human models and collaborators are core to that experience.”

With contributions from Jennifer Weil
