SRDenseNet Explained

As a quick introduction, DenseNet is an advanced CNN design built around a novel connectivity pattern. This pattern extends the ResNet skip connection by concatenating every preceding layer's output onto the current layer's input. This is discussed in more detail in this article from Henry AI Labs.
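
As a minimal sketch of this connectivity pattern (in PyTorch, with illustrative layer sizes rather than any paper's exact configuration), each layer's output is concatenated onto its input along the channel dimension:

import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    # One layer inside a dense block: convolve, then concatenate the
    # new features with everything the layer received.
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv(x))
        return torch.cat([x, out], dim=1)  # every preceding output feeds forward

class DenseBlock(nn.Sequential):
    # Stacking DenseLayers grows the channel count by growth_rate per layer.
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__(*[DenseLayer(in_channels + i * growth_rate, growth_rate)
                           for i in range(num_layers)])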

Super-Resolution refers to the task of upsampling an image from a low resolution such as 90 x 90 to a higher resolution such as 360 x 360; in this example, an upscaling factor of 4x. Convolutional Neural Networks have been shown to be effective at learning this upsampling. These networks are trained by taking high-resolution images and constructing their low-resolution counterparts. These (low-resolution, high-resolution) pairs are then fed to the network to learn Super-Resolution. Another important detail is that feeding entire images to the network is impractical; instead, patches of the low- and high-resolution images are used during training and testing.
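
A common way to synthesize these pairs is bicubic downsampling of the high-resolution patch; the sketch below assumes this degradation, though individual papers may use different ones:

import torch
import torch.nn.functional as F

def make_training_pair(hr_patch, scale=4):
    # hr_patch: (N, C, H, W) high-resolution patch cropped from a training image.
    # Bicubic downsampling synthesizes the paired low-resolution input.
    lr_patch = F.interpolate(hr_patch, scale_factor=1 / scale,
                             mode='bicubic', align_corners=False)
    return lr_patch, hr_patch

hr = torch.rand(1, 1, 360, 360)   # e.g. a 360 x 360 patch
lr, _ = make_training_pair(hr)    # lr is 90 x 90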

One of the earliest Super-Resolution CNN papers proposed an architecture that closely resembled the high-level structure of traditional sparse coding methods. The image was passed through convolutional layers, flattened into fully-connected layers, and then reshaped and transformed with upsampling convolutions (also known as transposed convolutions or deconvolutional layers). In a way this architecture resembled the U-Net model, although it lacked any skip connections.
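
To make the upsampling step concrete, here is a small sketch of transposed convolutions in PyTorch; the channel count of 64 and the two-stage 2x design are illustrative assumptions, not the paper's exact settings:

import torch
import torch.nn as nn

# A transposed convolution with stride 2 doubles spatial resolution;
# chaining two of them gives the 4x upscaling from the example above.
upsample = nn.Sequential(
    nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1),
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 64, 90, 90)  # feature map at low resolution
print(upsample(x).shape)        # torch.Size([1, 64, 360, 360])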

The CNN is then optimized using the Mean Squared Error loss between the upsampled low-resolution patch and the original high-resolution patch. This can be improved further with a multi-term loss that adds adversarial and perceptual losses as well. However, that is outside the focus of this paper and article.
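
A minimal training step under this objective might look as follows (the model, optimizer, and patch tensors are assumed to be defined elsewhere):

import torch
import torch.nn as nn

criterion = nn.MSELoss()

def training_step(model, optimizer, lr_patch, hr_patch):
    # Forward pass: the network upsamples the low-resolution patch, and
    # the pixel-wise MSE against the high-resolution patch is minimized.
    optimizer.zero_grad()
    sr_patch = model(lr_patch)
    loss = criterion(sr_patch, hr_patch)
    loss.backward()
    optimizer.step()
    return loss.item()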

The SR-DenseNet model takes another look at the CNN architecture used for Super-Resolution by integrating DenseNet, one of the more advanced models from academic image classification competitions.

The diagram above depicts how the DenseNet blocks are integrated into the Super-Resolution framework.
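
Reusing the DenseBlock sketch from earlier, a rough rendering of this overall structure could look like the following; the single input channel assumes the network operates on the luminance channel, a common choice in Super-Resolution work, and all sizes are illustrative rather than the paper's exact configuration:

import torch
import torch.nn as nn

class SRDenseNetSketch(nn.Module):
    # Rough structure only: a low-level conv, a chain of dense blocks,
    # a 1x1 bottleneck, transposed-conv upsampling, and a reconstruction conv.
    def __init__(self, channels=64, growth_rate=16, num_layers=8, num_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        blocks, c = [], channels
        for _ in range(num_blocks):
            blocks.append(DenseBlock(c, growth_rate, num_layers))
            c += growth_rate * num_layers  # dense blocks grow the channel count
        self.blocks = nn.Sequential(*blocks)
        self.bottleneck = nn.Conv2d(c, channels, 1)  # compress before upsampling
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x):
        x = self.head(x)
        x = self.blocks(x)
        x = self.bottleneck(x)
        x = self.upsample(x)
        return self.tail(x)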

The image above proposes an additional variant of SR-DenseNet: a single skip connection is added between the start and end of the chain of dense blocks.

The image above proposes another variant of SR-DenseNet in which skip connections are added between all of the dense blocks. A sketch of that all-skip forward pass follows this paragraph.
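
The sketch below is a hypothetical rendering of the all-skip variant; it assumes each block maps its concatenated input to a fixed number of output channels, so the channel bookkeeping is the caller's responsibility:

import torch
import torch.nn as nn

class AllSkipChain(nn.Module):
    # Each block receives the concatenation of the original input and every
    # earlier block's output. If each block emits out_channels features,
    # block i must accept in_channels + i * out_channels input channels.
    def __init__(self, blocks):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x):
        features = [x]
        for block in self.blocks:
            features.append(block(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)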

The authors' intuition is that these skip connections carry low-level feature information forward into the more abstract representations deeper in the network.

It is difficult to evaluate Super-Resolution models, which makes it hard to conclude how effective the addition of DenseNet blocks is. The image below shows a patch of an upsampled image compared with the outputs of other super-resolution methods.
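
The standard quantitative metric for this kind of comparison is PSNR, computed from the mean squared error between the upsampled and ground-truth patches; a minimal implementation (assuming images scaled to [0, 1]) is:

import torch

def psnr(sr, hr, max_val=1.0):
    # Peak signal-to-noise ratio: higher is better, but it correlates
    # imperfectly with perceived sharpness, which is why side-by-side
    # visual comparisons like the one above still matter.
    mse = torch.mean((sr - hr) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)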

The SRDenseNet patch is clearly sharper in this crop; however, in the full images at the top it is very difficult to see a difference between the methods.

Concluding thoughts from Henry AI Labs

We are interested in seeing how advancements in Neural Network connectivity manifest themselves in other tasks such as Super-Resolution. Typically, when academics present networks such as DenseNet, they only report performance on image classification, most often on the ImageNet or CIFAR-10/100 datasets. SR-DenseNet is interesting because it uses the DenseNet connectivity pattern that we have previously explored and explained on Henry AI Labs. We look forward to seeing how this connectivity pattern translates to additional tasks, such as the architecture of Generative Adversarial Networks.