Rocky 8.7 & Stable Diffusion v2

I have reinstalled Python 3, redownloaded SD from git as per the above, and added your code to the webui-user.sh file …

#!/bin/bash
#########################################################
# Uncomment and change the variables below to your need:#
#########################################################

# Install directory without trailing slash
#install_dir="/home/$(whoami)"

# Name of the subdirectory
#clone_dir="stable-diffusion-webui"

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
--- a/webui-user.sh
+++ b/webui-user.sh
@@ -13,7 +13,7 @@
#export COMMANDLINE_ARGS=""

# python3 executable
-python_cmd="python3"
+python_cmd="python3.9"

# git executable
#export GIT="git"

# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
#venv_dir="venv"

# script to launch to start the app
#export LAUNCH_SCRIPT="launch.py"

# install command for torch
#export TORCH_COMMAND="pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113"

# Requirements file to use for stable-diffusion-webui
#export REQS_FILE="requirements_versions.txt"

# Fixed git repos
#export K_DIFFUSION_PACKAGE=""
#export GFPGAN_PACKAGE=""

# Fixed git commits
#export STABLE_DIFFUSION_COMMIT_HASH=""
#export TAMING_TRANSFORMERS_COMMIT_HASH=""
#export CODEFORMER_COMMIT_HASH=""
#export BLIP_COMMIT_HASH=""

# Uncomment to enable accelerated launch
#export ACCELERATE="True"

###########################################

and thrown away the SD venv folder, but when I launch the app I get the same error: SD keeps defaulting to 3.6.8.
But I changed the Python version in the webui.sh file to

# python3 executable
if [[ -z "${python_cmd}" ]]
then
    python_cmd="python3.9"
fi

and now it works … and I even have Dreambooth. Thank you so much
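For reference, a quick way to confirm that the rebuilt venv really picked up 3.9 (this assumes the default venv location inside the repo):

~/stable-diffusion-webui/venv/bin/python --version   # path assumes the default venv location; should print Python 3.9.x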

Great!

I now see there was a misunderstanding. My “instructions” contained a patch, which is not meant to be copied verbatim. It (let’s say) describes the process of editing a file: it gives some context (so you can find the proper position), tells you what to remove (lines starting with -), and what to insert instead (lines starting with +). So what you should have done is find the line in webui-user.sh that reads

#python_cmd="python3"

and change it to

python_cmd="python3.9"

🙂
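Incidentally, a patch like that can also be applied mechanically instead of by hand. A minimal sketch, assuming you save the diff as python_cmd.diff in the repo root (both the filename and the path are assumptions):

cd ~/stable-diffusion-webui        # repo root; adjust to your install location
patch -p1 < python_cmd.diff        # -p1 strips the a/ and b/ prefixes used in the diff header
# or, since the webui is a git checkout:
git apply python_cmd.diff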

I really struggled to get xformers installed for Dreambooth.
After following many Linux installer suggestions, I managed to install Anaconda & xformers by following the link below. It is for Windows, but the Python / Anaconda part worked for me.

But I am discovering that whether xformers is detected depends on which webui file is used to launch: using conda, webui.py passes the Dreambooth checks, but webui.sh does not.

Checking Dreambooth requirements...
[+] bitsandbytes version 0.35.0 installed.
[+] diffusers version 0.10.2 installed.
[+] transformers version 4.25.1 installed.
[+] xformers version 0.0.16.dev432+git.bc08bbc installed.
[+] torch version 1.12.1 installed.
[+] torchvision version 0.13.1 installed.

But either way, xformers does not load in the webui …

No module 'xformers'. Proceeding without it.
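A quick way to see where xformers actually ended up is to try importing it from each interpreter, i.e. the conda env versus the repo's own venv (the env name and paths below are assumptions about my layout):

conda activate automatic
python -c "import xformers; print(xformers.__version__)"                                      # the conda env

~/stable-diffusion-webui/venv/bin/python -c "import xformers; print(xformers.__version__)"    # the webui's own venv (default location)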

I know this is not a Rocky issue, but maybe someone has got xformers working. Thank you

I managed to get the xformers module to load following the instructions in the link below:

I fixed it by editing launch.py:

commandline_args = os.environ.get('COMMANDLINE_ARGS', "--xformers")

WebUI does not look for xformers otherwise. It also seems to have installed its own version inside its own venv.
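Since that line only falls back to "--xformers" when COMMANDLINE_ARGS is unset in the environment, an equivalent approach that avoids editing launch.py is to export the variable before launching; a sketch:

export COMMANDLINE_ARGS="--xformers"   # same effect as the fallback default in launch.py above
python launch.py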

I launch Stable Diffusion using conda & Python:

(automatic) [admin@lowrocky ~]$ cd stable-diffusion-webui/
(automatic) [admin@lowrocky stable-diffusion-webui]$ conda activate automatic
(automatic) [admin@lowrocky stable-diffusion-webui]$ python launch.py
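For anyone reproducing this, the automatic env is an ordinary conda env; a minimal sketch of how one could be created (the Python version here is an assumption):

conda create -n automatic python=3.9 -y   # Python version is an assumption; match what your webui expects
conda activate automatic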

I hope this helps someone …

For anyone who wants a very clear explanation of how to set up Dreambooth, the link below is by far the best that I have seen.

There are loads of other high level videos for those involved with 3D & computer graphics.

For those who want to test SD, below is a summary of my experience using SD and Dreambooth.

  1. As per the video above, Dreambooth with the 768-v-ema.ckpt checkpoint does need at least 16GB of VRAM, and I only have an NVIDIA 7080ti card with 10GB. To get around the GPU memory saturation error in Dreambooth, I installed the SD v1.5 checkpoint
    https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
    I created source portrait photographs at 256x256 pixels instead of the default 512x512 and set Dreambooth to 256x256 as well; at that size about 5GB of VRAM was used (see the sketch after this list for the checkpoint download and image preparation). I found that uniquely naming the checkpoint is very important to a successful result.

  2. Once the Dreambooth checkpoint is created, I set the txt2img tab to 256x256 and followed the instructions.

  3. Expect to create thousands of images to get what you are looking for, particularly if you are looking to apply an artistic style to a source image. Also note that your source images are integrated into an existing checkpoint, and this newly created checkpoint does not particularly distinguish between your source and everything else (unless you train it to, using comparative pictures).

  4. There are plenty of YT videos and web pages that explain prompts, and I really struggled to find the right combination of words, CFG scale and number of samples.

  5. What I did discover is that changing the pixel size gave radically different results. If I kept the txt2img height and width at 256x256, I got a vast number of results that were variants of the source images but not much that reflected the prompts. With the same prompts at 512x512, I got results driven far more by the prompt style, with very little from the source. I therefore found a sweet spot around 350x350, and very quickly I was getting a pleasant mix of the artistic style from the prompts and the source photos.
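Below is a rough sketch of the preparation from item 1, downloading the v1.5 checkpoint into the webui's model folder and cropping the source portraits to 256x256; the directory paths and the ImageMagick step are assumptions about my setup:

# fetch the SD v1.5 checkpoint into the webui's model folder (path is an assumption)
cd ~/stable-diffusion-webui/models/Stable-diffusion
wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt

# resize and centre-crop the source portraits to 256x256 (requires ImageMagick; folder name is hypothetical)
cd ~/dreambooth-sources
mogrify -resize 256x256^ -gravity center -extent 256x256 *.jpg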

New CUDA / NVIDIA drivers have been throwing up errors on the SD / AUTOMATIC1111 install. In my case, it seems that PyTorch / pip does not ship the right builds for the latest drivers & cards.
I used the command below to upgrade the NVIDIA / Anaconda PyTorch packages & CUDA to version 11.6:

conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
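
After the install, a quick sanity check that PyTorch can actually see the GPU (run inside the same conda env):

# should print the torch version, the CUDA version it was built with, and True
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"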