Deep Agency shows the dangers of applying AI in the fashion industry

    Generative AI is disrupting industries – to understandable controversy.

    Earlier this month, Danny Postma, the founder of Headlime, an AI-powered marketing copy startup recently acquired by Jasper, announced Deep Agency, a platform he describes as an “AI photo studio and modeling agency.” Using art-generating AI, Deep Agency creates and offers “virtual models” for rent starting at $29 per month (for a limited time), allowing customers to place the models against digital backdrops to stage photo shoots.

    “What is Deep Agency? It is a photo studio, with a few major differences,” Postma explains in a series of tweets. “No camera. No real people. No physical location… What’s this good for? Lots of things, like automating content for social media influencers, modeling marketers’ ads, and ecommerce product photography.”

    Deep Agency is very much in the proof-of-concept stage, which is to say… kind of rough. There are plenty of artifacts in the models’ faces, and the platform places guardrails – intentionally or not – around which physiques it will generate. At the same time, Deep Agency’s model creation is curiously difficult to control; try to generate a female model dressed in a certain outfit, like that of a police officer, and Deep Agency simply can’t do it.

    Nevertheless, reaction to the launch has been swift – and mixed.

    Some Twitter users applauded the technology, expressing an interest in using it to model clothes and clothing brands. Others accused Postma of pursuing a “supremely unethical” business model, scraping other people’s photos and likenesses and selling them for profit.

    The divide reflects the wider debate over generative AI, which continues to attract astonishing levels of funding while raising a myriad of moral, ethical and legal issues. According to PitchBook, investments in generative AI will reach $42.6 billion in 2023 and skyrocket to $98.1 billion by 2026. But companies like OpenAI, Midjourney and Stability AI are currently embroiled in lawsuits over their generative AI technologies, which some accuse of replicating artists’ work without compensating them fairly.

    Image Credits: Deep Agency

    Deep Agency seems to have struck a chord mainly because of the application – and implications – of its product.

    Postma, who did not respond to a request for comment, is not shy about the fact that the platform could compete with – and perhaps hurt the livelihoods of – real models and photographers. While some platforms, such as Shutterstock, have created funds to share revenue from AI-generated art with artists, Deep Agency has made no such move – nor has it indicated any intention to do so.

    Coincidentally, just weeks after Deep Agency’s debut, Levi’s announced it would partner with design studio Lalaland.ai to create custom AI-generated models to “increase the diversity of models that customers can see as they shop their products.” Levi’s emphasized that it planned to use the synthetic models alongside human models and that the move would not affect its hiring plans. But it raised questions as to why the brand didn’t recruit more models with the diverse attributes it seeks, given the difficulty these models have historically had in finding opportunities in the fashion industry. (According to one 2016 survey, 78% of models in fashion ads were white.)

    In an email interview, Os Keyes, a University of Washington doctoral student who studies ethical AI, noted that modeling and photography — and art in general — are areas particularly vulnerable to generative AI because of the structurally precarious position photographers and artists occupy. They are largely low-paid independent contractors working for large companies looking to cut costs, Keyes notes. Models, for example, are often on the hook for high agency commissions (~20%) and business expenses, including airfare, group housing and the promotional materials needed to land client jobs.

    “Postma’s app – if it works – is basically designed to kick the chair out from under already precarious creative workers and send the money to Postma instead,” Keyes said. “That’s not really something to applaud, but it’s also not hugely surprising… The fact is that, from a socioeconomic point of view, tools like these are designed to further concentrate profits.”

    Other critics object to the underlying technology. State-of-the-art image-generating systems, such as the kind Deep Agency uses, are so-called diffusion models, which learn to create images from text prompts (e.g., “a sketch of a bird on a window sill”) as they work their way through massive web-scraped training datasets. What worries artists is the tendency of diffusion models to essentially copy and paste images — including copyrighted content — from the data used to train them.
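    For readers curious how diffusion models learn from that training data, here is a minimal toy sketch of the forward “noising” process they are trained to reverse — an illustration of the general technique only, with made-up parameters, not Deep Agency’s actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy noise schedule: a diffusion model is trained by gradually mixing
# Gaussian noise into an image until nothing but noise remains, then
# learning a network that reverses the process step by step.
T = 1000
betas = np.linspace(1e-4, 0.02, T)     # noise added at each step
alphas_bar = np.cumprod(1.0 - betas)   # fraction of signal left by step t

def noisy_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = rng.standard_normal((8, 8))   # stand-in for a tiny 8x8 "image"
x_early = noisy_sample(x0, 10)     # early step: still mostly signal
x_late = noisy_sample(x0, T - 1)   # final step: essentially pure noise

# Early steps remain highly correlated with the original image.
print(np.corrcoef(x0.ravel(), x_early.ravel())[0, 1])
```

Because every generated image is a learned reversal of this process over the training set, traces of that set — artists’ styles, and sometimes near-copies — can surface in the output, which is the crux of the copyright complaints.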

    Image Credits: Deep Agency

    Companies marketing diffusion models have long argued that “fair use” protects them in the event their systems were trained on copyrighted content. (Enshrined in U.S. law, the fair use doctrine allows limited use of copyrighted material without first obtaining permission from the copyright owner.) But artists claim the models infringe their rights, in part because the training data was collected without their consent or permission.

    “The legality of a startup like this isn’t entirely clear, but what is clear is that it aims to put a lot of people out of work,” Mike Cook, an AI ethicist and member of the open research group Knives and Paintbrushes, said in an email interview. “It’s hard to talk about the ethics of tools like this without addressing deeper issues related to economics, capitalism and business.”

    There is no mechanism for artists who suspect their art was used to train Deep Agency’s model to have that art removed from the training dataset. That puts the platform behind the likes of DeviantArt and Stability AI, which offer artists ways to opt out of contributing art to train art-generating AI.

    Deep Agency also hasn’t said whether it’s considering a revenue share for artists and others whose work helped create the platform’s model. Other vendors, such as Shutterstock, are experimenting with this, drawing from a pooled fund to reimburse creators whose work is used to train AI art models.

    Cook points out another issue: data privacy.

    Deep Agency offers customers a way to create a “digital twin” model by uploading around 20 images of a person in different poses. But as outlined in the terms of service, uploading photos to Deep Agency also adds them to the training data for the platform’s higher-end models unless users explicitly delete them afterward.

    Deep Agency’s privacy policy doesn’t say exactly how the platform handles user-uploaded photos, or even where it stores them. And there’s seemingly no way to prevent bad actors from creating a virtual twin of someone without their consent — a legitimate fear in light of the non-consensual deepfake nudes that models like Stable Diffusion have been used to create.

    Image Credits: Deep Agency

    “Their terms of use actually state, ‘You understand and acknowledge that similar or identical generations may be created by other people using their own prompts.’ I find this quite amusing, because the premise of the product is that anyone can have custom AI models that are unique every time,” Cook said. “In reality, they acknowledge the possibility of you getting exactly the same image as someone else, and of your photos being passed on to others for possible use. I can’t imagine many large companies enjoying the prospect of any of this.”

    Another problem with Deep Agency’s training data is the lack of transparency around the original set, Keyes says. That is, it’s not clear what images the model powering Deep Agency was trained on (though the garbled watermarks in its images offer clues) — which leaves open the possibility of algorithmic bias.

    A growing body of research has turned up racial, ethnic, gender and other forms of stereotyping in image-generating AI, including in the popular Stable Diffusion model, developed with support from Stability AI. This month, researchers from AI startup Hugging Face and the University of Leipzig published a tool showing that models such as Stable Diffusion and OpenAI’s DALL-E 2 tend to produce images of people who appear white and male, especially when asked to depict people in positions of authority.

    According to Vice’s Chloe Xiang, Deep Agency only generates images of women unless you buy a paid subscription – a problematic bias. In addition, Xiang writes, the platform tends to create blonde, white female models, even if you select an image of a woman of a different race or likeness from the pre-generated catalog; changing a model’s appearance requires additional, not-so-obvious adjustments.

    “Image-generating AI is fundamentally flawed because it depends on the representativeness of the data it has been trained on,” Keyes said. “If that data includes mostly white, Asian and light-skinned Black people, all the synthesis in the world won’t yield representation for dark-skinned people.”

    Despite the glaring problems with Deep Agency, Cook doesn’t see it or similar tools disappearing anytime soon. There’s simply too much money in the space, he says – and he’s not wrong. Beyond Deep Agency, startups like Surreal are raking in big VC investments for technology that generates virtual fashion models, ethics be damned.

    “The tools aren’t really good enough yet, as anyone using the Deep Agency beta can see. But it’s only a matter of time,” Cook said. “Entrepreneurs and investors will keep banging their heads against these kinds of opportunities until they find a way to make one of them work.”
