I am reaching out to share some intriguing recent findings from a compression experiment. Our results demonstrate a significant reduction in model size without compromising quality. Briefly, we discovered that restoration models derived from large T2I models make little use of the coarse (deepest) layers of the UNet. Simply removing the network blocks beyond a predetermined depth in the skip-connection setup has minimal impact on the results. Specifically, for StableSR, truncating at depth level 9, which uses only 60% of the parameters, is sufficient to achieve high-quality restoration.
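To make the idea concrete, here is a minimal toy sketch of depth truncation on a vanilla skip-connected U-Net (this is not our StableSR code; `TruncatedUNet` and `max_depth` are illustrative names). Encoder and decoder stages beyond `max_depth` are dropped entirely, and the deepest retained encoder feature is routed straight across to its matching decoder stage in place of the removed blocks:

```python
# Toy sketch, assuming a vanilla skip-connected U-Net (NOT the actual
# StableSR code; `TruncatedUNet` and `max_depth` are illustrative names).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.SiLU())

class TruncatedUNet(nn.Module):
    """U-Net whose encoder/decoder stages beyond `max_depth` are removed."""

    def __init__(self, channels=(64, 128, 256, 512), max_depth=None):
        super().__init__()
        depth = len(channels) if max_depth is None else max_depth
        chs = channels[:depth]  # keep only the first `depth` stages
        self.enc = nn.ModuleList(
            conv_block(chs[i - 1] if i else 3, chs[i]) for i in range(depth)
        )
        self.down = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        # Each kept decoder stage consumes its skip feature plus the
        # upsampled feature coming from below.
        self.dec = nn.ModuleList(
            conv_block(chs[i] * 2, chs[i - 1] if i else chs[0])
            for i in reversed(range(depth))
        )
        self.head = nn.Conv2d(chs[0], 3, 1)

    def forward(self, x):
        skips = []
        for enc in self.enc:
            x = enc(x)
            skips.append(x)
            x = self.down(x)
        # No middle / coarser blocks: the deepest retained encoder feature
        # crosses straight over to the matching decoder stage.
        for dec, skip in zip(self.dec, reversed(skips)):
            x = dec(torch.cat([self.up(x), skip], dim=1))
        return self.head(x)

model = TruncatedUNet(max_depth=2)       # drop the two coarsest stages
y = model(torch.randn(1, 3, 64, 64))     # -> torch.Size([1, 3, 64, 64])
```

Since channel counts grow with depth, dropping the coarsest stages this way removes a disproportionate share of the parameters while the fine-resolution path, which dominates restoration quality, is left intact.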
Here are the quantitative results from the DIV2K test set:
You can find more details about our research at the following link:
https://arxiv.org/abs/2401.17547