

Introducing the Stable Diffusion Dream Script (github.com/lstein)



 

The GitHub repository introduces the project as follows:

-----
This is a fork of CompVis/stable-diffusion, the wonderful open-source text-to-image generator. The fork supports:

  1. An interactive command-line interface that accepts the same prompts and switches as the Discord bot (see the example session just after this quote).
  2. A basic web interface that lets you run a local web server for generating images in your browser.
  3. A notebook for running the code on Google Colab.
  4. Support for img2img, in which you provide a seed image to guide the image generation. (Inpainting and masking coming soon.)
  5. Upscaling and face fixing using the optional ESRGAN and GFPGAN packages.
  6. Weighted subprompts for prompt tuning.
  7. Textual inversion for customizing the prompt language and images.
  8. ...and more!

This fork is evolving rapidly, so use the Issues panel to report bugs and request features, and check back regularly for improvements and bug fixes.

-----
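To give a feel for the interactive CLI mentioned in item 1, here is a rough sketch of a session. The prompt text is made up, and the -n (number of images) and -s (sampler steps) switches are my reading of the fork's docs, so run dream.py -h to see what your version actually supports.

(ldm) ~/stable-diffusion$ python3 scripts/dream.py
# the model loads once, then an interactive prompt appears
dream> "a watercolor painting of a lighthouse at dawn" -n2 -s50
# the images are written to the outputs folder, then the prompt returns
dream> q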

The installation instructions are below, summarized for Linux and Windows.

Installation

There are separate installation walkthroughs for Linux, Windows, and Macintosh.

Linux

  1. You will need to install the following prerequisites if they are not already available. Use your operating system's preferred installer.
  • Python (version 3.8.5 recommended; higher may work)
  • git
  2. Install the Python Anaconda environment manager using pip3:
~$ pip3 install anaconda

After installing anaconda, you should log out of your system and log back in. If the installation worked, your command prompt will be prefixed by the name of the current anaconda environment, "(base)".
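As a quick sanity check (my own suggestion, not part of the original guide), confirm the prerequisites and conda are on your PATH:

(base) ~$ python3 --version   # expect 3.8.x or newer
(base) ~$ git --version
(base) ~$ conda --version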

  3. Copy the stable-diffusion source code from GitHub:
(base) ~$ git clone https://github.com/lstein/stable-diffusion.git

This will create a stable-diffusion folder where you will follow the rest of the steps.

  4. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
(base) ~$ cd stable-diffusion
(base) ~/stable-diffusion$
  5. Use anaconda to copy necessary python packages, create a new python environment named "ldm", and activate the environment.
(base) ~/stable-diffusion$ conda env create -f environment.yaml
(base) ~/stable-diffusion$ conda activate ldm
(ldm) ~/stable-diffusion$

After these steps, your command prompt will be prefixed by "(ldm)" as shown above.

  6. Load a couple of small machine-learning models required by stable diffusion:
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py

Note that this step is necessary because I modified the original just-in-time model loading scheme to allow the script to work on GPU machines that are not internet connected. See "Workaround for machines with limited internet connectivity" in the repository documentation.

  7. Now you need to install the weights for the stable diffusion model.

For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co). Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original. You may be asked to sign a license agreement at this point.

Click on "Files and versions" near the top of the page, and then click on the file named "sd-v1-4.ckpt". You'll be taken to a page that prompts you to click the "download" link. Save the file somewhere safe on your local machine.

Now run the following commands from within the stable-diffusion directory. This will create a symbolic link from the stable-diffusion model.ckpt file to the true location of the sd-v1-4.ckpt file.

(ldm) ~/stable-diffusion$ mkdir -p models/ldm/stable-diffusion-v1
(ldm) ~/stable-diffusion$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
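To verify the link resolves to the real checkpoint (an optional check of my own, not in the original steps):

(ldm) ~/stable-diffusion$ ls -l models/ldm/stable-diffusion-v1/model.ckpt
# the listing should end with "-> /path/to/sd-v1-4.ckpt"; the target file is about 4 GB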
  8. Start generating images!
# for the pre-release weights use the -l or --laion400m switch
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -l

# for the post-release weights do not use the switch
(ldm) ~/stable-diffusion$ python3 scripts/dream.py

# for additional configuration switches and arguments, use -h or --help
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -h
  9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the "stable-diffusion" directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple ModuleNotFound errors.
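Put together, a typical relaunch looks like this, assuming the clone lives in your home directory:

(base) ~$ conda activate ldm
(ldm) ~$ cd stable-diffusion
(ldm) ~/stable-diffusion$ python3 scripts/dream.py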

Updating to newer versions of the script

This distribution is changing rapidly. If you used the "git clone" method (step 3) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter the stable-diffusion directory, and type:

(ldm) ~/stable-diffusion$ git pull

This will bring your local copy into sync with the remote one.
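One extra step worth considering (my assumption, not part of the original README): if environment.yaml changed upstream, refresh the conda environment after pulling.

(ldm) ~/stable-diffusion$ conda env update -f environment.yaml   # only needed when environment.yaml changed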

Next up are the Windows instructions.

Windows

  1. Install Anaconda3 (miniconda3 version) from here: https://docs.anaconda.com/anaconda/install/windows/
  2. Install Git from here: https://git-scm.com/download/win
  3. Launch Anaconda from the Windows Start menu. This will bring up a command window. Type all the remaining commands in this window.
  4. Run the command:
git clone https://github.com/lstein/stable-diffusion.git

This will create a stable-diffusion folder where you will follow the rest of the steps.

  5. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
cd stable-diffusion
  6. Run the following two commands:
conda env create -f environment.yaml    (step 6a)
conda activate ldm                      (step 6b)

This will install all python requirements and activate the "ldm" environment which sets PATH and other environment variables properly.
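To confirm the environment exists and is active (a quick check of my own, not in the original guide):

conda info --envs    # "ldm" should be listed, with an asterisk next to it while active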

  7. Run the command:
python scripts\preload_models.py

This installs several machine learning models that stable diffusion requires. (Note that this step is required. I created it because some people are using GPU systems that are behind a firewall and the models can't be downloaded just-in-time.)

  8. Now you need to install the weights for the big stable diffusion model.

For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co). Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original. You may be asked to sign a license agreement at this point.

Click on "Files and versions" near the top of the page, and then click on the file named "sd-v1-4.ckpt". You'll be taken to a page that prompts you to click the "download" link. Now save the file somewhere safe on your local machine. The weight file is >4 GB in size, so downloading may take a while.

Now run the following commands from within the stable-diffusion directory to copy the weights file to the right place:

mkdir -p models\ldm\stable-diffusion-v1
copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt

Please replace "C:\path\to\sd-v1-4.ckpt" with the correct path to wherever you stashed this file. If you prefer not to copy or move the .ckpt file, you may instead create a shortcut to it from within "models\ldm\stable-diffusion-v1".
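You can confirm the weights landed in the right place before launching (an optional check, not in the original guide):

dir models\ldm\stable-diffusion-v1
# model.ckpt should be listed, roughly 4 GB in size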

  9. Start generating images!
# for the pre-release weights
python scripts\dream.py -l

# for the post-release weights
python scripts\dream.py
  10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3), enter the stable-diffusion directory (step 5, "cd \path\to\stable-diffusion"), run "conda activate ldm" (step 6b), and then launch the dream script (step 9).
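In shorthand, a typical Windows relaunch looks like this; the C:\stable-diffusion path is just an example, so adjust it to wherever you cloned the repository:

cd C:\stable-diffusion
conda activate ldm
python scripts\dream.py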

Note: Tildebyte has written an alternative "Easy peasy Windows install" which uses the Windows Powershell and pew. If you are having trouble with Anaconda on Windows, give this a try (or try it first!)

Updating to newer versions of the script

This distribution is changing rapidly. If you used the "git clone" method (step 4) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter the stable-diffusion directory, and type:

git pull
 

This will bring your local copy into sync with the remote one.

The main points are:

  • Includes guides for running on Windows, Linux, and Mac (Apple Silicon)
  • Provides a CLI that accepts the same commands as the Discord bot
  • Supports img2img (see the sketch after this list)
  • Provides a basic web interface: a local web server for generating images in the browser
  • Includes a notebook for running on Google Colab
  • Supports upscaling and face fixing (face restoration) via the ESRGAN & GFPGAN packages
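For img2img, the dream prompt takes an init-image switch. The sketch below is illustrative: the file name is made up, and the -I (init image) and -f (strength) switches are my reading of the fork's docs, so check dream.py -h for the exact names your version supports.

dream> "a fantasy castle on a cliff, oil painting" -I my_sketch.png -f0.7
# -I points at the seed image; -f sets the img2img strength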

 

For further details, please refer to the repository page: https://github.com/lstein/stable-diffusion

That's all for today's post.
Thank you, as always, for reading.
