This guide shows you how to run VQGAN+CLIP locally. If you run into any errors, check out this page.

  1. Install VS Code and Python
  2. Download the AI code from here
  3. Extract the ZIP file you just downloaded with your computer’s file app, then open the extracted folder in VS Code from the File menu
  4. Press Ctrl+Shift+` to open a terminal tab
  5. Type python3 -m venv venv in the terminal to create a Python virtual environment. This keeps this project’s dependencies separate from the rest of your system
  6. Type source ./venv/bin/activate if you are using Linux or macOS, or ./venv/Scripts/activate.ps1 if you are using Windows (PowerShell)
  7. In the terminal, type mkdir checkpoints to create a folder called “checkpoints”
  8. Download the model files from here and here, then put them in the checkpoints folder you made in the previous step.
  9. In the terminal type pip3 install -r requirements.txt
  10. Then, install PyTorch with one of the following commands, depending on your OS: pip3 install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio==0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html for Windows or Linux (these are the CUDA 11.3 builds), or pip3 install torch torchvision torchaudio for macOS
  11. In the terminal, type python3 generate.py -p "prompt", replacing prompt with your own prompt but keeping the quotes.
  12. Wait for the image to finish. You can watch progress in the file output.png (open it in VS Code, or open it in your computer’s file app)
  13. Great job, you’re done! If you have any questions, please ask them here; if you get any errors, ask about them on the errors page.
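
Steps 5 and 6 can be sketched as a short pair of shell commands (Linux/macOS syntax shown; on Windows PowerShell the activation path differs as noted in step 6). Run these from the extracted project folder:

```shell
# Create the virtual environment (step 5) and activate it (step 6).
python3 -m venv venv            # creates ./venv with its own python and pip
. ./venv/bin/activate           # "." is the portable spelling of "source"
python3 -c 'import sys; print(sys.prefix)'   # prints a path inside ./venv once activated
```

While the environment is active, any pip3 install commands (steps 9 and 10) install into ./venv instead of your system Python.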
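
Since the Windows and Linux commands in step 10 are the same CUDA 11.3 wheels and only macOS differs, the choice can be sketched as a small shell helper that prints the right command for the current machine. The versions and wheel URL are taken from step 10; the helper name itself is just for illustration:

```shell
# Print the step-10 PyTorch install command for the current OS.
torch_install_cmd() {
  case "$(uname -s)" in
    Darwin)   # macOS: plain wheels, no CUDA
      echo 'pip3 install torch torchvision torchaudio' ;;
    *)        # Linux (and Windows via Git Bash/MSYS): CUDA 11.3 wheels
      echo 'pip3 install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio==0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html' ;;
  esac
}

torch_install_cmd    # prints the command; paste it into your activated terminal
```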