
Auto detecting, masking and inpainting with detection model.


After Detailer

After Detailer is an extension for the Stable Diffusion web UI, similar to Detection Detailer, except it uses ultralytics instead of mmdet.

Install

(from Mikubill/sd-webui-controlnet)

  1. Open "Extensions" tab.
  2. Open the "Install from URL" tab.
  3. Enter https://github.com/Bing-su/adetailer.git into "URL for extension's git repository".
  4. Press "Install" button.
  5. Wait 5 seconds, and you will see the message "Installed into stable-diffusion-webui\extensions\adetailer. Use Installed tab to restart".
  6. Go to "Installed" tab, click "Check for updates", and then click "Apply and restart UI". (The next time you can also use this method to update extensions.)
  7. Completely restart the A1111 web UI, including your terminal. (If you do not know what a "terminal" is, you can reboot your computer: turn it off and turn it on again.)

You can now install it directly from the Extensions tab.


You DON'T need to download any model from Hugging Face.

Options

Model, Prompts

  ADetailer model: Determines what to detect. None = disabled.
  ADetailer prompt, negative prompt: Prompts and negative prompts to apply. If left blank, the input prompts are used.

Detection

  Detection model confidence threshold: Only objects detected with a confidence above this threshold are used for inpainting.
  Mask min/max ratio: Only use masks whose area, relative to the area of the entire image, lies between these ratios.

If you want to exclude objects in the background, try setting the min ratio to around 0.01.
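
For readers who want to reproduce these two filters outside the web UI, the sketch below shows how a confidence threshold and an area-ratio check could be applied to ultralytics detections. The model file, image name, and threshold values are placeholders, not the extension's actual defaults or internals.

```python
from ultralytics import YOLO

# Placeholder values; the extension exposes these as UI sliders.
CONF_THRESHOLD = 0.3               # detection model confidence threshold
MIN_RATIO, MAX_RATIO = 0.01, 1.0   # mask min/max ratio (fraction of image area)

model = YOLO("face_yolov8n.pt")
result = model("input.png", conf=CONF_THRESHOLD)[0]  # conf drops low-confidence detections

img_h, img_w = result.orig_shape
img_area = img_h * img_w

kept = []
for box in result.boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    ratio = (x2 - x1) * (y2 - y1) / img_area  # detection area relative to the whole image
    if MIN_RATIO <= ratio <= MAX_RATIO:
        kept.append((x1, y1, x2, y2))

print(f"{len(kept)} detection(s) kept for inpainting")
```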

Mask Preprocessing

  Mask x, y offset: Moves the mask horizontally and vertically by the given number of pixels.
  Mask erosion (-) / dilation (+): Shrinks (erosion, negative values) or enlarges (dilation, positive values) the detected mask; see the OpenCV morphology examples.
  Mask merge mode:
    None: inpaint each mask separately
    Merge: merge all masks and inpaint the combined mask
    Merge and Invert: merge all masks, invert the result, then inpaint

Applied in this order: x, y offset → erosion/dilation → merge/invert.
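
As a rough illustration of that order, here is a minimal NumPy/OpenCV sketch of the three steps. The function name, kernel size, and default values are assumptions for illustration only; the extension's own implementation may differ.

```python
import cv2
import numpy as np

def preprocess_masks(masks, x_offset=0, y_offset=0, dilation=4, merge_mode="None"):
    """masks: list of uint8 arrays where 255 marks a detected region."""
    processed = []
    for mask in masks:
        h, w = mask.shape

        # 1. x, y offset: translate the mask
        shift = np.float32([[1, 0, x_offset], [0, 1, y_offset]])
        mask = cv2.warpAffine(mask, shift, (w, h))

        # 2. erosion (negative value) / dilation (positive value)
        if dilation != 0:
            kernel = np.ones((abs(dilation), abs(dilation)), np.uint8)
            op = cv2.dilate if dilation > 0 else cv2.erode
            mask = op(mask, kernel, iterations=1)

        processed.append(mask)

    # 3. merge / invert
    if merge_mode in ("Merge", "Merge and Invert"):
        merged = processed[0]
        for mask in processed[1:]:
            merged = cv2.bitwise_or(merged, mask)
        if merge_mode == "Merge and Invert":
            merged = cv2.bitwise_not(merged)
        return [merged]

    return processed  # "None": each mask is inpainted separately
```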

Inpainting


Each option corresponds to the option of the same name on the inpaint tab.

ControlNet Inpainting

You can use ControlNet inpainting if you have the ControlNet extension and ControlNet models installed.

The inpaint, scribble, lineart, openpose, and tile ControlNet models are supported. Once you choose a model, the preprocessor is set automatically.

Model

| Model | Target | mAP 50 | mAP 50-95 |
| --- | --- | --- | --- |
| face_yolov8n.pt | 2D / realistic face | 0.660 | 0.366 |
| face_yolov8s.pt | 2D / realistic face | 0.713 | 0.404 |
| hand_yolov8n.pt | 2D / realistic hand | 0.767 | 0.505 |
| person_yolov8n-seg.pt | 2D / realistic person | 0.782 (bbox), 0.761 (mask) | 0.555 (bbox), 0.460 (mask) |
| person_yolov8s-seg.pt | 2D / realistic person | 0.824 (bbox), 0.809 (mask) | 0.605 (bbox), 0.508 (mask) |
| mediapipe_face_full | realistic face | - | - |
| mediapipe_face_short | realistic face | - | - |
| mediapipe_face_mesh | realistic face | - | - |

The YOLO models can be found on Hugging Face at Bingsu/adetailer.
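
If you want to try one of these detectors outside the web UI, a model file can be pulled from that Hugging Face repository and run with ultralytics directly. This is only a standalone experiment sketch; as noted above, the extension fetches its own models, and the image name below is a placeholder.

```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# Download one of the listed detectors from the Bingsu/adetailer repository.
weights = hf_hub_download(repo_id="Bingsu/adetailer", filename="face_yolov8n.pt")
model = YOLO(weights)

# Run detection on a placeholder image and print box coordinates with confidences.
result = model("portrait.png")[0]
for box in result.boxes:
    print(box.xyxy[0].tolist(), float(box.conf[0]))
```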

User Model

Put your ultralytics model in webui/models/adetailer. The model name should end with .pt or .pth.

It must be a bbox detection or segmentation model, and all of its labels are used for detection.
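
A quick way to sanity-check a custom model before placing it there is to load it with ultralytics and inspect its task and labels; the file name below is a placeholder.

```python
from ultralytics import YOLO

model = YOLO("webui/models/adetailer/my_custom_model.pt")  # placeholder file name
print(model.task)   # should be "detect" (bbox) or "segment"
print(model.names)  # every label listed here will be used
```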

Dataset

Datasets used for training the YOLO models are:

  • Face
  • Hand
  • Person

