Fine-Tuning Stable Diffusion on Gaming Characters


Introduction

Welcome to this repository, which is dedicated to fine-tuning Stable Diffusion, an open-source deep learning model for generating high-quality images. This project aims to enhance Stable Diffusion's capabilities by fine-tuning it on gaming-asset datasets.

Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt (Wikipedia contributors, 2024).


Getting Started

Prerequisites

  • Python 3.9+
  • Install the dependencies listed in requirements.txt:
pip install -r requirements.txt

Data Preparation

To prepare your data for fine-tuning the Stable Diffusion model, follow these steps:

Step 1: Organize Your Image Dataset

Place all the images you wish to use for fine-tuning inside the ./data/images/ folder.

Step 2: Write Prompts for the Images

  • Single Prompt (for DreamBooth):
    DreamBooth fine-tunes on a single, well-defined prompt shared by all the images.
  • Multiple Prompts (for Keras-CV):
    Fine-tuning Stable Diffusion with Keras-CV requires a separate prompt for each image in the dataset. Use the BLIP model to automatically generate a descriptive caption for each image (via the ./preparation/auto prompting using BLIP.ipynb notebook); a minimal sketch of this captioning step appears below.
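As a hedged illustration of the captioning step (not code shipped in this repository), the snippet below uses the public BLIP checkpoint from the Hugging Face transformers library to caption every image in ./data/images/; the *.png glob pattern and the print-based output are assumptions.

from pathlib import Path
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load the public BLIP captioning checkpoint (not part of this repository).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Caption every image in the dataset folder; the *.png pattern is an assumption.
for path in sorted(Path("./data/images").glob("*.png")):
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    caption = processor.decode(output_ids[0], skip_special_tokens=True)
    print(f"{path.name}: {caption}")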

Fine-Tuning Process

Using DreamBooth

In this approach, the Stable Diffusion model is fine-tuned with DreamBooth. The process involves loading the base model from Hugging Face and fine-tuning it on the given gaming characters. The fine-tuned model is then used to generate new artwork of gaming characters, as sketched below.
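A minimal sketch of generating with the fine-tuned weights via the Hugging Face diffusers library follows; the ./dreambooth-output directory and the "sks" instance token are placeholders, not values defined by this repository.

import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth fine-tuned weights; the directory is a placeholder.
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-output", torch_dtype=torch.float16
).to("cuda")

# Generate a new character with the instance prompt chosen during fine-tuning
# ("sks" is a common placeholder identifier, not a token defined by this repo).
image = pipe("a sks gaming character, fantasy concept art").images[0]
image.save("character.png")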

Using Keras-CV

In this approach, the Stable Diffusion model is fine-tuned using TensorFlow's Keras-CV framework. A custom class handles the training process (including loss calculation and gradient updates) and updates only part of the weights of the diffusion model; a minimal sketch follows.
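The sketch below shows such a custom training class, modeled on the standard Keras-CV fine-tuning pattern; the FineTuner name and the batch layout are illustrative assumptions, and the input ordering follows keras_cv's diffusion model call signature.

import tensorflow as tf
import keras_cv

class FineTuner(tf.keras.Model):
    # Illustrative wrapper (the name is an assumption): it trains only the
    # diffusion U-Net while the text encoder and image decoder stay frozen.
    def __init__(self, diffusion_model, **kwargs):
        super().__init__(**kwargs)
        self.diffusion_model = diffusion_model
        self.noise_loss = tf.keras.losses.MeanSquaredError()

    def train_step(self, batch):
        # The batch is assumed to hold pre-computed noisy latents, the
        # timestep embedding, the text embeddings, and the target noise.
        noisy_latents, timestep_embedding, text_embeddings, noise = batch
        with tf.GradientTape() as tape:
            pred_noise = self.diffusion_model(
                [noisy_latents, timestep_embedding, text_embeddings],
                training=True,
            )
            loss = self.noise_loss(noise, pred_noise)
        # Gradients are applied to the diffusion model's weights only.
        grads = tape.gradient(loss, self.diffusion_model.trainable_variables)
        self.optimizer.apply_gradients(
            zip(grads, self.diffusion_model.trainable_variables)
        )
        return {"loss": loss}

sd = keras_cv.models.StableDiffusion(img_width=512, img_height=512)
trainer = FineTuner(sd.diffusion_model)
trainer.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5))
# trainer.fit(train_dataset, epochs=...)  # dataset preparation omitted here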


Evaluation

The fine-tuned Stable Diffusion model generated a set of character images for evaluation. The images confirm the following:

  • Presence of the Requested Object: Each image clearly displays the intended fantasy character.
  • Variation in Design: The characters show clear variation in shape and color.
  • Background Simplicity: Characters are set against solid, uncluttered backgrounds for clarity.
  • Separation from Background: The subjects are well differentiated from the backdrop, making them easy to discern.

Character Variation

Example prompts and generated subjects (images omitted): "Demon with sword" (a demon), "Elfs with arrows" (two elves), "Gandalf the gray" (a wizard).

Colour Variation of the Same Characters

Example prompts (images omitted): "Blue Knights", "Red Knights", "Green Knights".
