I want to share some of the things that I have learned about Stable Diffusion (SD) from my experience of playing and tinkering with it for the last few months.
I’m a writer. This is the experience that I bring to this new toolset. If you are an artist, your approach may be different. This information is by no means comprehensive – there are still many (many!) things in SD that I haven’t even tried yet.
The all-around best results for artistic renders come from SD v1.5 (or models that have been trained on SD 1.5). SD v2.0 and v2.1 are said to be good for landscapes and photorealism; I haven’t used them yet, so I can only point to what others say about them.
A big advantage to using SD is that it is open source. There is a large community supporting model development, research, and optimization for SD. I check in on the r/StableDiffusion subreddit daily to see if there are any new updates or models that would be helpful for my needs. (There are several subreddits that support SD. You can look around and find which communities work for you. Examples: r/SDforAll, r/StableDiffusionInfo, r/StableDiffusion)
There are plenty of tutorials for getting set up with SD; if you’re interested, a quick search will turn them up.
What GUI Do I Use?
My Prompts Guide
I have certain aesthetics that I like. I grew up in the golden age of fantasy and sci-fi art book covers (1980s), so most of my work incorporates both that era and back through the pulp sci-fi era. My prompts reflect my interests and aesthetic preferences. You can use this basic format to develop your own tastes and preferences.
I should note, at this point, that some people have ethical concerns about how the training models were sourced (artist styles). They choose not to use any living artists’ names, or perhaps no artist names at all, relying instead on style prompts alone for their render style. Each person is allowed their own freedom of conscience to build the style they are comfortable with. For myself, I blend multiple artists to create an aesthetic that works for my project. My prompts include artists both living and deceased.
Basic character prompt for realistic oil painted character design.
Basic character prompt for stylized oil painted graphic novel image.
Negative Prompts
Here’s a reddit post on negative prompts: https://www.reddit.com/r/StableDiffusion/comments/zcnv5m/do_bad_anatomy_bad_fingers_etc_actually_work_as_a/
At the end of my negative prompt is an embedding, (bad_prompt:0..). Here is the reddit post that explains the embedding and where you can download it: https://www.reddit.com/r/StableDiffusion/comments/yy2i5a/i_created_a_negative_embedding_textual_inversion/
Character Design
Usually I have one or two public figures in mind that I think would make for an interesting character. I plug in my standard prompt with those people included and see what it looks like. I usually have to iterate a few times before I find something I like. Then I can use img2img with denoising turned way up (to let the AI dream and be more creative) and see if the likeness remains and if I like the character. If not, I’ll upload one of the pictures to https://starbyface.com/ and see which Hollywood stars look a little like the character. This usually gives me some new names to work with in my prompt.
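To make the denoising slider more concrete: a common implementation choice (used, for example, by the diffusers img2img pipeline) is to push the source image part of the way back into noise and only run the tail end of the schedule, so roughly steps × strength denoising steps actually execute. This is an illustrative sketch of that relationship, not Automatic1111’s actual internals:

```python
# Illustrative sketch: how denoising strength maps to the number of
# denoising steps actually run in an img2img pass. Higher strength
# means more of the image is re-imagined ("dreamed") by the model.

def effective_steps(num_inference_steps: int, strength: float) -> int:
    """Approximate denoising steps run for a given img2img strength."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return min(int(num_inference_steps * strength), num_inference_steps)

# With 50 scheduler steps, a low strength barely alters the image...
print(effective_steps(50, 0.25))  # 12
# ...while denoising "turned way up" rewrites most of it.
print(effective_steps(50, 0.85))  # 42
```

This is why cranking the strength up lets the likeness drift: the model discards most of the source image’s information before reconstructing it.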
“wide-angle shot, scientist character sheet based on auburn-haired young woman, scoop neck white top and military jacket, mid-length auburn hair, belt buckle, work boots, line art style drawn by artist terry dodson and adam hughes and todd mcfarlane and winsor mckay, inspired by (<actress01>:1.075) and (<actress02>:0.9) and (<actress03>:0.85). clean details, symmetrical face and body. 8k resolution. trending on artstation HQ, atompunk retro-futurism dark art, 1950s suburban, (dieselpunk:0.75)”
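The `(token:weight)` groups in the prompt above use Automatic1111’s attention-emphasis syntax: weights above 1.0 strengthen a term’s influence, weights below 1.0 soften it. As a rough illustration of how that syntax reads (this is a toy parser, not Automatic1111’s real one, which also handles nesting and escapes):

```python
import re

# Toy parser for Automatic1111-style attention weights, e.g. "(dieselpunk:0.75)".
# Assumes flat, unnested groups -- enough to illustrate the syntax.
WEIGHT_RE = re.compile(r"\(([^():]+):([0-9.]+)\)")

def extract_weights(prompt: str) -> dict[str, float]:
    """Return {token: weight} for every (token:weight) group in a prompt."""
    return {m.group(1): float(m.group(2)) for m in WEIGHT_RE.finditer(prompt)}

prompt = "inspired by (<actress01>:1.075) and (<actress02>:0.9), (dieselpunk:0.75)"
print(extract_weights(prompt))
# {'<actress01>': 1.075, '<actress02>': 0.9, 'dieselpunk': 0.75}
```

In my character prompts I keep the primary likeness just above 1.0 and taper the supporting names below it, which is why the actress weights descend from 1.075 to 0.85.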
Style Exploration
Here are some resources that I use when exploring a new style for a new project. Once I determine the style I like, and that I feel is appropriate for the project, I’ll stick with it for the duration.
https://www.urania.ai/top-sd-artists – Displays four renders for each of hundreds of artists and lists keywords for each artist: art style, movement, subject (landscapes, portraits, still life, etc.), and discipline (painter, street art, photographer, comic artist, etc.).
https://ckovalev.com/midjourney-ai/styles – This one is specifically for MJ, but the reference for artists works the same across any AI art generator.
https://proximacentaurib.notion.site/e28a4f8d97724f14a784a538b8589e7d – Displays artists and renders by the artist in SD.
https://proximacentaurib.notion.site/2b07d3195d5948c6a7e5836f9d535592 – Displays styles and modifiers in SD and renders using that modifier.
https://dreamlike.art/not_someone – An art gallery using the dreamlikeart model in SD. All images show the prompt and settings in SD.
https://sites.google.com/view/liviatheodoramidjourney/home?pli=1 – Livia Theodora’s MJ references. I downloaded a spreadsheet of artist references (https://docs.google.com/spreadsheets/d/1h6H2CqjLdZMbLjlz6EHwemfO4fkIzAfWtjRAMSa2KHE/edit#gid=0) from this site a few months ago. Great resources.
Model Differences
In Automatic1111 you can blend models to use the strengths of both. You can set the weight for each model and the GUI creates a new model in your local repository.
Conclusion
Tarl Telford is an author who, after years of writing and publishing books, has found his imagination sparked by AI-generated art. He shares his findings from a writer’s point of view, which makes them accessible to readers who have never worked in the traditional visual arts.
You can say ‘Hi’ to him here: https://www.facebook.com/tarl.telford
Happy Generating.