From ba4da72753bf02ffde8dc1b44eaafc50f31fac9a Mon Sep 17 00:00:00 2001
From: Dann Dempsey
Date: Fri, 14 Apr 2023 13:54:29 +0200
Subject: [PATCH 1/2] s/the flickering/flickering/

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 120212c..00e983f 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@
 
 ## Abstract
 
-It has been shown that deep convolutional neural networks (CNN) reduce JPEG compression artifacts better than the previous approaches. However, the latest video compression standards have more complex artifacts including the flickering which are not well reduced by the CNN-based methods developed for still images. Also, recent video compression algorithms include in-loop filters which reduce the blocking artifacts, and thus post-processing barely improves the performance. In this paper, we propose a temporal-CNN architecture to reduce the artifacts in video compression standards as well as in JPEG. Specifically, we exploit a simple CNN structure and introduce a new training strategy that captures the temporal correlation of the consecutive frames in videos. The similar patches are aggregated from the neighboring frames by a simple motion search method, and they are fed to the CNN, which further reduces the artifacts within a frame and suppresses the flickering artifacts. Experiments show that our approach shows improvements over the conventional CNN-based methods with similar complexities, for image and video compression standards such as JPEG, MPEG-2, H.264/AVC, and HEVC.
+It has been shown that deep convolutional neural networks (CNN) reduce JPEG compression artifacts better than the previous approaches. However, the latest video compression standards have more complex artifacts including flickering which are not well reduced by the CNN-based methods developed for still images. Also, recent video compression algorithms include in-loop filters which reduce the blocking artifacts, and thus post-processing barely improves the performance. In this paper, we propose a temporal-CNN architecture to reduce the artifacts in video compression standards as well as in JPEG. Specifically, we exploit a simple CNN structure and introduce a new training strategy that captures the temporal correlation of the consecutive frames in videos. The similar patches are aggregated from the neighboring frames by a simple motion search method, and they are fed to the CNN, which further reduces the artifacts within a frame and suppresses the flickering artifacts. Experiments show that our approach shows improvements over the conventional CNN-based methods with similar complexities, for image and video compression standards such as JPEG, MPEG-2, H.264/AVC, and HEVC.
 
 ## Related Work
 

From efd60dfc0fedec0fc68dae7bc4a89558f8c19342 Mon Sep 17 00:00:00 2001
From: Dann Dempsey
Date: Fri, 14 Apr 2023 13:59:29 +0200
Subject: [PATCH 2/2] Fix a few more grammar issues.

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 00e983f..7602b64 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@
 
 ## Abstract
 
-It has been shown that deep convolutional neural networks (CNN) reduce JPEG compression artifacts better than the previous approaches. However, the latest video compression standards have more complex artifacts including flickering which are not well reduced by the CNN-based methods developed for still images. Also, recent video compression algorithms include in-loop filters which reduce the blocking artifacts, and thus post-processing barely improves the performance. In this paper, we propose a temporal-CNN architecture to reduce the artifacts in video compression standards as well as in JPEG. Specifically, we exploit a simple CNN structure and introduce a new training strategy that captures the temporal correlation of the consecutive frames in videos. The similar patches are aggregated from the neighboring frames by a simple motion search method, and they are fed to the CNN, which further reduces the artifacts within a frame and suppresses the flickering artifacts. Experiments show that our approach shows improvements over the conventional CNN-based methods with similar complexities, for image and video compression standards such as JPEG, MPEG-2, H.264/AVC, and HEVC.
+It has been shown that deep convolutional neural networks (CNN) reduce JPEG compression artifacts better than previous approaches. However, the latest video compression standards have more complex artifacts including flickering which are not well reduced by CNN-based methods developed for still images. Also, recent video compression algorithms include in-loop filters which reduce blocking artifacts, and thus post-processing barely improves the performance. In this paper, we propose a temporal-CNN architecture to reduce artifacts in video compression standards as well as in JPEG. Specifically, we exploit a simple CNN structure and introduce a new training strategy that captures the temporal correlation of consecutive frames in videos. Similar patches are aggregated from neighboring frames by a simple motion search method, and they are fed to the CNN, which further reduces the artifacts within a frame and suppresses flickering artifacts. Experiments show that our approach shows improvements over the conventional CNN-based methods with similar complexities, for image and video compression standards such as JPEG, MPEG-2, H.264/AVC, and HEVC.
 
 ## Related Work
 
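
The patched abstract describes aggregating similar patches from neighboring frames with a simple motion search before feeding them to the CNN. As a rough illustration of that idea only (not the authors' implementation), here is a minimal NumPy sketch; the grayscale frames, the SAD matching criterion, the ±4-pixel full search, the 32-pixel patch size, and the three-frame window are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def find_similar_patch(ref_patch, neighbor_frame, top_left, radius=4):
    """Full search in `neighbor_frame` around `top_left` for the patch that
    best matches `ref_patch`, scored by the sum of absolute differences (SAD).
    The search radius is an illustrative choice, not the paper's setting."""
    ph, pw = ref_patch.shape
    fh, fw = neighbor_frame.shape
    cy, cx = top_left
    ref = ref_patch.astype(np.float32)
    best_sad, best_patch = np.inf, None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + ph > fh or x + pw > fw:
                continue  # candidate window falls outside the frame
            cand = neighbor_frame[y:y + ph, x:x + pw]
            sad = np.abs(cand.astype(np.float32) - ref).sum()
            if sad < best_sad:
                best_sad, best_patch = sad, cand
    return best_patch

def aggregate_temporal_patches(frames, t, top_left, patch_size=32):
    """Stack the current patch with its best matches from the previous and
    next decoded frames; a stack like this would form the multi-channel
    input of a temporal CNN."""
    cy, cx = top_left
    cur = frames[t][cy:cy + patch_size, cx:cx + patch_size]
    prev = find_similar_patch(cur, frames[t - 1], top_left)
    nxt = find_similar_patch(cur, frames[t + 1], top_left)
    return np.stack([prev, cur, nxt])  # shape: (3, patch_size, patch_size)
```

Because all three patches are aligned to the same content, a network consuming this stack can exploit temporal correlation to denoise the middle frame and suppress frame-to-frame flicker; the actual network structure, training strategy, and search method are described in the paper itself.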