Five ideas for an ethical use of GEN-AI in arts and design.

Domestic Data Streamers
10 min read · Apr 8, 2024

Words by Pau Aleikum, edited by Alex Johnstone and Marta Handenawer

Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Humans Do The Heavy Data Lifting / CC-BY 4.0

Let’s start with a brief announcement: If you’re an artist in Barcelona interested in protecting your work from image scraping and collaborating with GEN-AI ethically, please get in touch. We’re forming a community to learn together and share tools and methods beneficial to the art and design communities.

GEN-AI is a complex field often associated with predatory practices and ethical violations. While there are already thousands of articles out there highlighting the ethically dubious methods underlying GEN-AI, in this foundational article we’ll focus on exploring potential solutions rather than dwelling solely on the problems.

As a creator looking at GEN-AI, I’ve felt fear of and resistance towards this technology. Boycotting these tools may seem like the obvious initial response, but in my view it is not a sustainable long-term solution. I realised very early on that if I didn’t engage with it, both hands-on and intellectually, I wouldn’t understand how it worked, and my work would be distanced from the conversation about how this technology should be defined. So from the beginning I decided to engage with it as a way to build a critical practice.

That said, before we dive in, it’s essential to contextualise this issue with the following observations:

  1. GEN-AI systems are only possible because of the vast datasets of images, videos, and text used to train them. Some of these datasets are available in the public domain, but many belong to living artists who have not given their consent.
  2. Artists, designers and illustrators have always made their living by selling their images (decades of hard work that produce a distinctive style and a body of work). Today, contrary to what feels right, anyone can produce hundreds of images in the style of a living creator, with most of the earnings going to the software companies providing the model or interface.
  3. If your copyrighted work has been extensively used to train an algorithm without your consent, and someone is now using it for economic gain (sometimes on the scale of Lensa AI), I consider this ethically wrong. If the images generated by prompting your name in a model threaten your livelihood or reputation, I also consider this profoundly unethical.
  4. As GEN-AI creators, we have a responsibility to understand the technology we use and its implications, just as consumers of fast fashion must recognise that each purchase funds exploitative labour conditions and serious environmental damage.
Luke Conroy and Anne Fehres & AI4Media / Better Images of AI / Models Built From Fossils / CC-BY 4.0

Sadly, as we know, something can be ethically wrong without being illegal. So until legislation catches up, we can inform ourselves and act accordingly.

With all that understood, let’s start with the question:

Can you use AI-generated images commercially?
The quick answer is yes: you can freely use AI-generated images. As Midjourney’s terms put it, “You own all assets you create,” but you also allow others to view, use, and rework those images. So you have commercial rights to your creations, but so does everyone else. In some countries, no rights are granted to AI-generated art at all, so such images fall into the public domain.

Now, there is a limit to this. For example, if you generate a recognisable image of Batman or any public figure, you cannot commercially use that art because you aren’t the rights holder for Batman.

But what about the stylistic traits of a specific artist? The truth is that you can protect an artwork but not a style: a style is not protectable under any copyright system (see, for example, Sections 310.4 (Style) and 310.9 (Composition) of the US Copyright Office Compendium). This limit creates a substantial legal void in which this technology lingers. I see nothing improper in training an AI on a long-dead artist like Leonardo da Vinci, whose works are in the public domain, but can the same be said for a contemporary artist?

Let’s now explore three different approaches we can use to promote the ethical use of GEN-AI:

1. Use ethically trained models
If the problem is in the algorithmic model, what if we create one that only uses work that artists have been paid for? Here are four options:

1.1 — The first case is Firefly from Adobe. I’m not a big fan of Adobe’s sometimes monopolistic management or its dark-pattern UX screens, but I must say that the steps they have taken in GEN-AI are some of the least damaging to the creative community. Their models have been trained solely on Adobe Stock images, openly licensed content, and public-domain content whose copyright has expired, which makes the output clear for use by commercial artists. Because the models are trained on far more limited material, their quality cannot compare with Midjourney or Stable Diffusion, but the results work for some specific uses.

1.2 — This is also the case for Shutterstock TRUTH. After initially suing Stability AI for using their images to train its models, they have now reached an agreement that permits the use of these images. Still, Shutterstock’s image models have a lower-quality output that feels like a stock image, as you can see in the following example:

Images by Stella Sky (Midjourney left / Shutterstock right)

1.3 — Finally, there is Bria. After talking directly with their CEO, we learned how they use licensed materials from GettyImages, Envato and Alamy, plus work from specific artists who have opted in. They have developed an exciting feature for these artists: a revenue-sharing model that compensates creators and rights-holders for their contributions to the models. They provide a reverse algorithm capable of tracing the images referenced to create a new one; for example, it can say that an output is 20% Picasso, 37% Frida Kahlo and 3.6% Kandinsky, and they will split the proceeds of that use accordingly. This is, for me, the fairest future for GEN-AI generators, and it opens new spaces of possibility by giving something back to the artistic community for all they have contributed. The problem with Bria is that it is not a distributor company: they build models, but you have to find a way to deploy them, which makes them more challenging for individual artists to access.
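To make the revenue-sharing idea concrete, here is a minimal sketch of an attribution-based split. The mechanics and the platform’s share are my assumptions for illustration; Bria’s actual algorithm and commercial terms are not public.

```python
def split_revenue(revenue, attributions, platform_share=0.5):
    """Split `revenue` between the platform and the artists whose work an
    attribution algorithm says influenced a generated image.

    `attributions` maps artist name -> influence score (e.g. 0.20 for 20%).
    Scores need not sum to 1; each artist's payout is proportional to
    their share of the total attributed influence.
    """
    artist_pool = revenue * (1 - platform_share)  # portion reserved for artists
    total = sum(attributions.values())
    payouts = {name: artist_pool * score / total
               for name, score in attributions.items()}
    payouts["platform"] = revenue * platform_share
    return payouts

# Example using the influence scores quoted above, on a hypothetical
# 100-unit payment with an assumed 50/50 platform/artist split:
payouts = split_revenue(
    100.0,
    {"Picasso": 0.20, "Frida Kahlo": 0.37, "Kandinsky": 0.036},
)
```

Under these assumptions, Frida Kahlo receives the largest artist payout because she holds the largest share of the attributed influence; the split always distributes the full amount.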

2. Avoid prompting with the names of living artists
Promptbase, CivitAI, ArtVy, and Midlibrary are for-profit enterprises engaged in blatant copyright infringement. They offer thousands of living artists as plug-in models and suggested prompts. This trend mirrors the practices of Midjourney, which frequently showcases living artists as recommended prompts in its user gallery and official magazine.

CivitAI hosts over 6,000 NSFW models, including a range of models used to harass vocal critics such as Sam Yang (SamDoesArts), Sarah C. Andersen and Greg Rutkowski. It is backed by US$5M of venture capital, and models are monetised off-site on Patreon and Ko-fi.

If you are working with Stable Diffusion or Midjourney, there are several things you can do to avoid invoking certain artists’ contributions to the model. The first is obvious: don’t write prompts that include the names of artists who have not consented to the use of their work. Since you probably can’t know who is in or out (Holly Herndon at Spawning is working on a set of tools that allow artists to opt out of model training), you should avoid living artists’ names altogether.

However, you still can’t be sure that the model is not drawing on an artist’s work to generate your image. It is much more straightforward to establish which artists’ work is in the public domain: we’ve compiled a list of names free of copyright restrictions, and soon we will publish a prompt book featuring our favourite public-domain artists.

Image prompting

Another way to lead a model away from copying another artist’s work is to prompt it with your own visual material. In the case of Midjourney, you can easily upload your images to their server and use them as a prompt to influence the final generation. Once again, this doesn’t mean that the model is not using other artists’ works, but as you can see in the following image, the output stays true to the style you provide:

We recently carried out a project that employed these last two techniques to generate a series of posters promoting the new season at the Liceu Opera House in Barcelona. We collaborated with talented Vietnamese media artist Thinh Truong to generate these images. The input was a mix of public-domain photographs from past shows owned by Liceu, prompt poetry from Thinh and artists’ names from the public domain. In this case, the amalgamation of an artist, a model, and owned artistic material produced some exciting results.

Example 1:

Example 2:

3. Train your own models
We’ve also been experimenting with generating our own models. Specifically, we have re-trained (fine-tuned) a foundational AI model, such as Stable Diffusion, with curated images to produce a new style. In doing so, we are overwriting part of the original model, so the results are shaped by this unique style.

An example of this was an experiment we did with Marta Ribas, an amazing painter, collage artist, and long-time friend and collaborator of Domestic Data Streamers. In this case, we aimed to create a pilot model that could capture her collage technique, textures, colours, and general style to create figurative images. Here, you can see a part of Marta Ribas’ original work and the generations we were able to create in her style.

Original work by Marta Ribas

4. New revenue models for artists
When discussing GEN-AI with other artists, I’ve realised that one of the crucial aspects to address is its potential to threaten job opportunities, particularly for illustrators and visual designers.

In response to this, we’ve put together some potential revenue models for artists that could be developed based on the use of these technologies:

4.1 Style Licensing and Model Ownership by Artists

In this revenue model, artists retain ownership of their styles. They train a model and license it to agencies, brands, or individuals for specific projects and periods. Say a client wants to apply your style to several illustrations in a book: you “rent” your model to them for that project. As an artist, you offer clients a tailored model they use to create certain images, while you maintain full control over your artistic contribution and can limit its uses. This could involve restrictions on the number of projects, industries, topics (ideological, political or sexual) or geographical regions where the model can be employed. This is the same structure we described earlier in the case of Marta Ribas.
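As a thought experiment, the kinds of restrictions described above could even be encoded and checked in software. The field names and example terms below are hypothetical (a real licence would be a legal document, not code); the Marta Ribas name is used only because her case is discussed above, and the client and terms are invented.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class StyleLicense:
    """Hypothetical terms under which an artist rents out a style model."""
    artist: str
    client: str
    expires: date          # end of the licensed period
    max_projects: int      # cap on the number of projects
    allowed_regions: set = field(default_factory=set)
    forbidden_topics: set = field(default_factory=set)
    projects_used: int = 0

    def permits(self, topic: str, region: str, on: date) -> bool:
        """Check whether a proposed use falls within the licensed terms."""
        return (on <= self.expires
                and self.projects_used < self.max_projects
                and region in self.allowed_regions
                and topic not in self.forbidden_topics)

# Example: a licence limited to three projects in the EU, excluding
# political advertising.
licence = StyleLicense(
    artist="Marta Ribas", client="Example Press",
    expires=date(2025, 4, 8), max_projects=3,
    allowed_regions={"EU"}, forbidden_topics={"political ads"},
)
```

The point of the sketch is that every restriction the text lists (period, project count, topic, region) maps naturally onto a contract term the artist controls.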

4.2 Direct Licensing and Royalty Agreements

This model involves GEN-AI platforms like Stable Diffusion or Midjourney directly licensing artworks from artists for use in training their models. Artists are paid an upfront licensing fee plus royalties based on the usage of their styles or specific images; this is Bria’s approach. It could be developed further by creating tiers of artists: imagine a plugin for Midjourney that lets you use a specific style by paying a bit more per month, with the extra payment going directly to the artists who contributed to that add-on model.

5. Protect your work
Sadly, the burden of protection currently falls solely on individual creators. We are advocating for a change in the system to better protect artists, but until then, these are the three key actions you can take today:

  1. Opt out: Opt out of every GEN-AI platform that allows it, and register at Spawning to avoid future scraping. Opt-out is, of course, a poor substitute: the only system that should exist is opt-in.
  2. Poison: Whenever you upload new material online, run it through Glaze or Nightshade first. These tools make subtle changes to an image’s pixels (imperceptible to the human eye) that mask it from AI training processes, making your style far harder to replicate.
  3. Encryption: Use platforms like KinArt to protect your portfolio. It uses image segmentation (concealing parts of an artwork) and tag randomisation (swapping an art piece’s image meta tags) to interfere with the model-training process.


Moving forward, our next steps will focus on exploring and implementing some of the revenue models mentioned above, while continuing to educate and share information within the art and design communities about the ethical use of AI image-generation tools. If you share our vision, we invite you to reach out and collaborate on projects that embody these principles.

  • For universities: Integrate workshops or lectures within your faculty around this topic. So far, we have worked with the Hong Kong Design Institute, the School of Visual Arts in New York, and the Elisava School of Design and Engineering in Barcelona.
  • For creators: Contact us if you want to explore any of these models. We will do our best to help in these processes pro bono.
  • For AI developers and companies: Develop ethical frameworks, data-scraping rule books and revenue models to improve your relationship with the creators whose work fills your databases.
  • For Lawyers and law firms: Collaborate in developing new contract templates and models to protect artists from predatory practices by agencies and companies and help them improve their revenue models.

We encourage artists, AI developers, legal professionals, and educators to join us in exploring and implementing innovative revenue models that respect and reward artistic contributions. This is a step towards fostering a more ethical and sustainable ecosystem for AI-generated art, where artists retain agency over their creative contributions and are fairly compensated for their work. This way, we can start to shape a future in which technology serves as a tool for empowerment and artistic innovation rather than exploitation or appropriation.

If you are interested in participating in this journey, contact us here.