How to Fine-Tune Pre-Trained Stable Diffusion Models Using Custom Images #128263
Aditya1Jhaveri asked this question in Programming Help
Select Topic Area: Question
Body
Problem statement:
I am using Stable Diffusion XL Base 1.0 for image generation, but the text-to-image pipeline does not take my custom input image into account. I would like to generate a new image based on both my input image and a specified prompt.
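For context, here is a minimal sketch of the image-to-image variant I believe is relevant here, assuming the Hugging Face diffusers library (the file paths and the strength value are illustrative placeholders):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

# Load the SDXL base model in its img2img variant, which conditions
# generation on an initial image as well as on the text prompt.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# "pet.jpg" is a placeholder path for the user-uploaded pet photo.
init_image = load_image("pet.jpg").resize((1024, 1024))

# strength controls how far the model may drift from the input image:
# lower values preserve more of the original photo.
result = pipe(
    prompt="a regal portrait of a king in ornate robes",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("pet_king.png")
```

As I understand it, img2img preserves the overall composition of the input, but it may not reliably keep the pet's identity once the prompt replaces most of the body, which is why I describe the fuller requirement below.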
Description:
I need a solution that generates anthropomorphic pet portraits from user-uploaded pet photos. Specifically, I want the AI to use the pet's face from the uploaded image and create the rest of the body based on a given prompt, such as a king, doctor, or lawyer. The generated portraits should retain the pet's unique features, making it clear that it is the same pet.
The problem I'm encountering is that when I input a custom pet image into the pre-trained Stable Diffusion model with an anthropomorphic prompt, the model generates an image based only on the prompt, drawing on what it learned during training, and ignores my custom image. I want the AI to generate new images using the provided pet photo in combination with the given prompt, rather than producing unrelated images from its training distribution.
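One option that may address this without full fine-tuning is IP-Adapter, which conditions generation on features extracted from a reference image. A minimal sketch, assuming diffusers with the publicly available h94/IP-Adapter weights (the adapter scale, prompt, and paths are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the IP-Adapter, which injects image features from a reference
# photo into the cross-attention layers alongside the text prompt.
pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name="ip-adapter_sdxl.bin",
)
# The scale balances image conditioning against the text prompt.
pipe.set_ip_adapter_scale(0.7)

reference = load_image("pet.jpg")  # placeholder path for the pet photo
image = pipe(
    prompt="an anthropomorphic portrait of a doctor in a white coat",
    ip_adapter_image=reference,
).images[0]
image.save("pet_doctor.png")
```

Since IP-Adapter conditions on image features at inference time rather than retraining any weights, it may be worth trying before committing to fine-tuning.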
How can I fine-tune a pre-trained Stable Diffusion model (or any other relevant model) on our custom images so that it generates new portraits from a given input image and prompt?
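The approach I have seen suggested for teaching a model a specific subject is DreamBooth-style fine-tuning, e.g. via the diffusers example script train_dreambooth_lora_sdxl.py. Below is a minimal sketch of loading such a trained LoRA afterwards, assuming an output directory produced by that script (the paths and the "sks pet" token are illustrative placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Training would be run beforehand, roughly like:
#   accelerate launch train_dreambooth_lora_sdxl.py \
#     --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \
#     --instance_data_dir=pet_photos \
#     --instance_prompt="a photo of sks pet" \
#     --output_dir=pet-lora-out --resolution=1024 --max_train_steps=500
# where "a photo of sks pet" binds the pet's appearance to the rare
# token "sks", and pet_photos holds the uploaded pet images.

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# "pet-lora-out" is a placeholder for the LoRA training output directory.
pipe.load_lora_weights("pet-lora-out")

image = pipe(
    prompt="a portrait of sks pet as a lawyer in a courtroom",
).images[0]
image.save("pet_lawyer.png")
```

Is this the right direction for my use case, or is there a better way to combine identity preservation with prompt-driven generation?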
#image-processing #deep-learning #stable-diffusion #artificial-intelligence #image-generation