
Stable Diffusion Beginners’ User Guide by Tarl Nicholas Telford

May 19, 2025
Reading Time: 11 mins read

I want to share some of the things I have learned about Stable Diffusion (SD) from playing and tinkering with it for the last few months.

I’m a writer. This is the experience that I bring to this new toolset. If you are an artist, your approach may be different. This information is by no means comprehensive – there are still many (many!) things in SD that I haven’t even tried yet.

The all-around best results for artistic renders come from SD v1.5 (or models trained on SD 1.5). SD v2.0 and 2.1 are said to be good for landscapes and photorealism; I haven’t used them yet, so I can only point to what others say about them.

A big advantage of SD is that it is open source. A large community supports development and research on models and optimizations for SD. I check the r/StableDiffusion subreddit daily to see if there are any new updates or models that would be helpful for my needs. (Several subreddits support SD; look around and find the communities that work for you. Examples: r/SDforAll, r/StableDiffusionInfo, r/StableDiffusion)

Another advantage of SD is that you can run it locally on your own computer, as long as you have the minimum required hardware. That minimum keeps dropping thanks to the massive optimizations the community is working on; I believe it’s down to a graphics card with 4GB of VRAM. Nvidia GPUs have the kind of processors SD expects, so it runs on them without extra optimizations. If you have an AMD card, there are some additional steps you need to take to run SD. Mac is another story; there are tutorials, but I don’t have any experience with Mac. I run an Nvidia 3050 card with 8GB VRAM on an i5-660K CPU at 3.50GHz with 32GB RAM. (It took me a few years to get my PC to this point. I’m not a gamer, so everything has to work for art creation.)

There are tutorials to get set up with SD. If you’re interested, you can find them.

It has been said that nobody can tell you how to write a book – they can only tell you how they wrote their last book. That is the basic premise of this post. I can only share my current workflow and research. Your particular needs may warrant adaptation, and that’s the way it should be. I’ll provide some resources and know-how, and you take it from there.

What GUI Do I Use?

There are a few options to choose from when using SD. You can use a Google Colab notebook and run it in the cloud (plenty of tutorials on YouTube). InvokeAI 2.0 launched recently, and there are some good videos explaining how to install and use that interface. I use Automatic1111; you can also find plenty of tutorials for installing that interface.

My Prompts Guide

I have certain aesthetics that I like. I grew up in the golden age of fantasy and sci-fi art book covers (1980s), so most of my work incorporates both that era and back through the pulp sci-fi era. My prompts reflect my interests and aesthetic preferences. You can use this basic format to develop your own tastes and preferences.

I should note, at this point, that some people have ethical concerns about how the training models were sourced (artist styles). They choose not to use any living artists, or perhaps no artist names at all, relying only on style prompts for their render style. Each person is free to follow their own conscience in building a style they are comfortable with. For myself, I blend multiple artists to create an aesthetic that works for my project; my prompts include artists both living and deceased.

When you wrap an element in parentheses with a colon and a number between 0 and 2, the strength of that element is multiplied by that number, e.g. (dieselpunk:0.75). My experience shows that any number above 1.5 gives wonky results, so I tend to stay in the 0.8 to 1.25 range.
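To illustrate that weighting syntax, here is a small sketch that builds a weighted prompt string. The helper function and its name are my own, not part of any SD tool; Automatic1111 just reads the `(term:strength)` text directly.

```python
def weight(term: str, strength: float) -> str:
    # Wrap a prompt term in Automatic1111's (term:strength) weighting syntax.
    if not 0 < strength <= 2:
        raise ValueError("strength should be between 0 and 2")
    return f"({term}:{strength})"

# Plain terms keep their default strength of 1.0; weighted terms are wrapped.
prompt = ", ".join([
    "hyper-realistic portrait",
    "cinematic lighting",
    weight("frank frazetta", 0.8),   # slightly de-emphasized
    weight("dieselpunk", 1.25),      # slightly emphasized
])
```

The resulting string can be pasted straight into the prompt box.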

Basic character prompt for realistic oil painted character design.

“zoomed-out hyper-realistic portrait, stunningly beautiful young woman celtic princess with wavy red hair, light freckles, blue eyes, cinematic lighting, pulp fantasy character concept art, style by arthur rackham and winsor mccay and larry elmore and ralph horsley and (frank frazetta:0.8), inspired by (<actress1>:0.975) and (<actress2>:0.94) and (<actress3>:0.94)”

Basic character prompt for stylized oil painted graphic novel image.

“zoomed out stylized painted graphic novel illustration, perfectly centered, full-length portrait of serious auburn-haired man with short beard, 46 years old, intense realistic eyes, atompunk retro-futurism HQ dark fine art, art style by aleksi briclot and peter gric and (hr giger:0.5) and syd mead and marc simonetti, inspired by <actor1> and (<actor2>:0.9) and (<actor3>:0.85), trending on artstation HQ, (dieselpunk:0.8), Antarctica”
I generally render characters at 512w x 768h, using Euler_a with 20 steps and CFG at 11.5.
For img2img, when I’m generating variations of the character, I use CFG: 11.5 and denoising at 0.87. If I want more imagination, I will go as high as 0.95 denoising strength. Any denoising below 0.6 sticks pretty close to the original image, in my experience.
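For reference, here is roughly how those settings would look as a request payload for Automatic1111’s txt2img API (available when the webui is launched with --api, at the /sdapi/v1/txt2img endpoint). The prompt strings are placeholders, and this sketch only builds the dictionary rather than sending it:

```python
# My usual character settings, expressed as a txt2img API payload.
txt2img_payload = {
    "prompt": "zoomed-out hyper-realistic portrait, ...",  # placeholder
    "negative_prompt": "lowres, text, watermark, ...",     # placeholder
    "sampler_name": "Euler a",
    "steps": 20,
    "cfg_scale": 11.5,
    "width": 512,
    "height": 768,
}

# For img2img variations: same settings plus a high denoising strength
# (0.87 normally, up to 0.95 when I want more imagination).
img2img_overrides = {"denoising_strength": 0.87}
```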

Negative Prompts

Negative prompts improve your renders. SD doesn’t have stylization built in (like MJ, NightCafe, DALL-E 2, etc.), so you need negative prompts to guide the generation away from things you don’t want to see. Here is my standard negative prompt (there are probably too many terms in it, but it gives good results … mostly).
“conjoined twins, siamese twins, stacked torsos, totem pole, istock, stock photo, too many limbs, weapon, sword, gun, chibi, weird eyes, signature, watermark, lowres, text, cropped, worst quality, low quality, normal quality, jpeg artifacts, username, blurry, artist name, unibrow, blind, morbid, mutilated, cloned face, low resolution, out of frame, mutated hands, fused fingers, too many fingers, bad anatomy, gross proportions, extra arms, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, (bad_prompt:0.8)”

Here’s a reddit post on negative prompts: https://www.reddit.com/r/StableDiffusion/comments/zcnv5m/do_bad_anatomy_bad_fingers_etc_actually_work_as_a/

At the end of my negative prompt is an embedding, (bad_prompt:0.8). Here is the reddit post that explains the embedding and where you can download it. https://www.reddit.com/r/StableDiffusion/comments/yy2i5a/i_created_a_negative_embedding_textual_inversion/

Character Design

Usually I have one or two public figures in mind who I think would make an interesting character. I plug them into my standard prompt and see what it looks like. I usually have to iterate a few times before I find something I like. Then I use img2img with denoising turned way up (to let the AI dream and be more creative) and see whether the likeness remains and whether I like the character. If not, I’ll upload one of the pictures to https://starbyface.com/ and see which Hollywood stars look a little bit like the character. This usually gives me some new names to work with in my prompt.

I try to use a mix of 3 or 4 recognizable faces to make something that blends features into something almost familiar, but still different enough that I’m not stepping into rights issues when I publish.

“wide-angle shot, scientist character sheet based on auburn-haired young woman, scoop neck white top and military jacket, mid-length auburn hair, belt buckle, work boots, line art style drawn by artist terry dodson and adam hughes and todd mcfarlane and winsor mccay, inspired by (<actress01>:1.075) and (<actress02>:0.9) and (<actress03>:0.85). clean details, symmetrical face and body. 8k resolution. trending on artstation HQ, atompunk retro-futurism dark art, 1950s suburban, (dieselpunk:0.75)
Negative prompt: Siamese twins, conjoined, multiple heads, blur, blurry, soft, blush, filter, noise, deformed, defective, incoherent, twisted, extra limbs, extra fingers, (poorly drawn hands), messy drawing, bad drawing, low detail, first try, ugly, boring, text, signature, letters, crazy teeth, extra teeth, body out of frame, ((deformed)), (cross-eyed), (closed eyes), blurry, (bad anatomy), ugly, disfigured, ((poorly drawn face)), (mutation), (mutated), (extra limbs), (bad body), signature, watermark, writing, symbol, icon, (bad_prompt:0.8), gun, guns, weapons
Steps: 18, Sampler: Euler a, CFG scale: 11.5, Seed: 1017574179, Size: 768×512, Model hash: e5a4c91a, Batch size: 6, Batch pos: 5, Denoising strength: 0.91″
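That settings line follows Automatic1111’s “Key: value” metadata format attached to generated images. A tiny helper (my own illustration, not part of the webui) can parse it back into a dictionary when you want to reuse old settings:

```python
def parse_settings(line: str) -> dict:
    # Split a "Steps: 18, Sampler: Euler a, ..." line into key/value pairs.
    settings = {}
    for pair in line.split(", "):
        key, _, value = pair.partition(": ")
        settings[key] = value
    return settings

params = parse_settings(
    "Steps: 18, Sampler: Euler a, CFG scale: 11.5, Seed: 1017574179, "
    "Size: 768x512, Denoising strength: 0.91"
)
```

Note this simple split assumes no value contains ", ", which holds for the webui’s standard fields.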

Style Exploration

Here are some resources that I use when exploring a new style for a new project. Once I determine the style I like, and that I feel is appropriate for the project, I’ll stick with it for the duration.

https://www.urania.ai/top-sd-artists – Displays four renders for each of hundreds of artists and lists keywords for the artist (art style, movement, subject (landscapes, portraits, still life, etc.), and the discipline (painter, street art, photographer, comic artist, etc.)).

https://ckovalev.com/midjourney-ai/styles – This one is specifically for MJ, but the reference for artists works the same across any AI art generator.

https://proximacentaurib.notion.site/e28a4f8d97724f14a784a538b8589e7d – Displays artists and renders by the artist in SD.

https://proximacentaurib.notion.site/2b07d3195d5948c6a7e5836f9d535592 – Displays styles and modifiers in SD and renders using that modifier.

https://dreamlike.art/not_someone – An art gallery using the dreamlikeart model in SD. All images show the prompt and settings in SD.

https://sites.google.com/view/liviatheodoramidjourney/home?pli=1 – Livia Theodora’s MJ References. I downloaded a spreadsheet of artist references from this site a few months ago.

https://docs.google.com/spreadsheets/d/1h6H2CqjLdZMbLjlz6EHwemfO4fkIzAfWtjRAMSa2KHE/edit#gid=0 – The spreadsheet of artist references itself. Great resources.

Model Differences

One of the advantages of SD over MJ is that you can plug in different models to get different results. The downside is that these models are large. They take up a lot of space on your hard drive (if you are running SD locally). SD 1.5 is about 4.5GB. Most models are about that size.

In Automatic1111 you can blend models to use the strengths of both: you set a weight for each model, and the GUI creates a new merged model in your local repository.

Here are the names of the primary models that I use. You can find these on the huggingface site, or on https://civitai.com/. If you do a search in the r/StableDiffusion subreddit with any of these names, you’ll find posts on each of them.
SD 1.5, f222, analog diffusion, Anything-v3.0.
My primary model is a blend of SD 1.5 at 55% and f222 at 45%.
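A minimal sketch of what the checkpoint merger’s weighted-sum mode does under the hood, using plain floats in place of real weight tensors (the function name and toy numbers are my own):

```python
def weighted_sum(theta_a: dict, theta_b: dict, alpha: float) -> dict:
    # Blend two checkpoints' weights: result = (1 - alpha) * A + alpha * B.
    merged = {}
    for key, value in theta_a.items():
        if key in theta_b:
            merged[key] = (1.0 - alpha) * value + alpha * theta_b[key]
        else:
            merged[key] = value  # keep A's weight if B lacks the key
    return merged

# Toy stand-ins for real model weights; alpha=0.45 gives 55% A, 45% B.
sd15 = {"layer.w": 1.0}
f222 = {"layer.w": 3.0}
blend = weighted_sum(sd15, f222, alpha=0.45)
```

In a real merge the values are multi-gigabyte tensors, which is why merged models take up as much disk space as their parents.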

Conclusion

This post has covered basic Stable Diffusion information and tips that I use in my workflow. These resources can help you develop your own workflow for your creative projects. If you have additional thoughts or resources, feel free to post them in the comments. Stable Diffusion can create styles both new and old. This image of a fantasy city was created in the style of classic artists who are in the public domain.
Prompt: “stylized graphic novel illustration, environmental concept art matte painting, fantasy city made of emeralds and crystal and green glass, style of deconstructivism and panfuturism and post-impressionism, inspired by pulp sci-fi comic art, gritty tribal vibes, detailed, dramatic lighting, chiaroscuro, art style by arthur rackham and francisco goya and hieronymus bosch and leonardo da vinci”.

Tarl Telford is a renowned author whose imagination has been sparked by AI-generated art after years of writing and publishing numerous books. He explains his findings from a writer’s point of view, giving readers the perspective of someone who does not come from the traditional arts. His background as a writer makes the material approachable, and readers connect easily with his explanations. Here is a link to his works.

https://www.amazon.com/stores/Tarl-Telford/author/B008L0B082?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true

You can say ‘Hi’ to him here https://www.facebook.com/tarl.telford

Happy Generating.
