# PAPERS

## A brief summary of the articles I read

### The site is under construction

• SAR image despeckling through convolutional neural networks
First paper to investigate the use of CNNs for SAR image despeckling. DnCNN is used, and the average of a temporal series of images serves as ground truth, under the assumption that no changes occurred between acquisitions. However, this definition is not objective and therefore difficult to reproduce in practice.
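The temporal-average ground truth can be sketched as follows (a minimal numpy illustration; the scene, stack size and speckle statistics are all illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean reflectivity and a stack of T co-registered, single-look acquisitions
clean = rng.uniform(0.5, 2.0, size=(64, 64))
T = 32
stack = clean * rng.gamma(shape=1.0, scale=1.0, size=(T, 64, 64))  # unit-mean intensity speckle

# "Ground truth" as in the paper: temporal mean, assuming no changes between dates
reference = stack.mean(axis=0)

# Residual fluctuations of the mean shrink as 1/sqrt(T), so the reference is
# close to, but never exactly equal to, the true reflectivity
err = np.abs(reference - clean).mean()
```

The residual error of the reference is exactly what makes this "ground truth" hard to pin down objectively: it depends on the stack length and on how well the no-change assumption holds.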

• SAR Image Despeckling Using a Convolutional Neural Network
Natural images are used to generate SAR-like data (probably with 4-look speckle) and to train ID-CNN (Image Despeckling CNN): intensity images are fed to the network, which estimates the noise component; the input image is divided by this estimate to produce the denoised output. A TV-L2 regularization term is added to the loss formulation, alongside the MSE.
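The division residual and the loss composition can be sketched in numpy (a perfect oracle stands in for the trained network, and the TV weight is an arbitrary illustrative value):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic SAR-like data from a "natural" image: multiplicative 4-look speckle
L = 4
clean = rng.uniform(0.5, 2.0, size=(32, 32))                    # stand-in for a natural image
speckle = rng.gamma(shape=L, scale=1.0 / L, size=clean.shape)   # mean 1, variance 1/L
noisy = clean * speckle

# ID-CNN-style division residual: the network predicts the speckle component,
# and the denoised image is the input divided by that estimate.
estimated_speckle = noisy / clean                                # oracle prediction
denoised = noisy / np.clip(estimated_speckle, 1e-6, None)

def tv_l2(img):
    """TV-L2 smoothness term (sketch of the regularizer added to the loss)."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return np.sqrt(dx[:-1, :] ** 2 + dy[:, :-1] ** 2).sum()

loss = np.mean((denoised - clean) ** 2) + 1e-4 * tv_l2(denoised)
```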

• Generative Adversarial Network-Based Restoration of Speckled SAR Images
An extension of the above work is ID-GAN (Image Despeckling Generative Adversarial Network), in which ID-CNN now plays the role of the generator. The loss is a combination of Euclidean loss, perceptual loss and adversarial loss. As before, natural images corrupted with speckle are used as the training set. Apparently, only results with L = 4 are displayed.

• Learning a Dilated Residual Network for SAR Image Despeckling
A combination of dilated (atrous) convolutions and residual learning is employed in a CNN, trained on natural images corrupted with synthetic speckle. Only results on images with low noise levels are shown (there are no results on real SAR images at all).
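The appeal of dilated convolutions is a receptive field that grows without adding parameters; a minimal 1-D sketch (the function and values are illustrative, not from the paper):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D cross-correlation with a dilation factor."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field of this layer
    out = np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])
    return out, span

x = np.arange(16, dtype=float)
kernel = np.ones(3) / 3.0
y1, span1 = dilated_conv1d(x, kernel, dilation=1)  # span = 3
y2, span2 = dilated_conv1d(x, kernel, dilation=4)  # span = 9: wider context, same 3 weights
```

Stacking layers with increasing dilation makes the receptive field grow exponentially while the parameter count grows only linearly.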

• Deep Learning for SAR Image Despeckling
From our EUSAR paper:

Natural images are used to pre-train the network, which is then fine-tuned using SAR images: stacks of data are averaged to produce a clean reference and synthetic noise is added to them.

U-Net is trained in a residual fashion (log-transformed images are fed to the network) with an MSE loss.
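Training in the log domain relies on the multiplicative speckle model becoming additive, log(y) = log(x) + log(n), so the residual the network regresses is exactly the log-speckle term. A small numpy check of that identity (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)

clean = rng.uniform(0.5, 2.0, size=(32, 32))
speckle = rng.gamma(shape=1.0, scale=1.0, size=clean.shape)  # single-look intensity speckle
noisy = clean * speckle

# In the log domain the multiplicative model is additive, so the residual
# target for the network is log(noisy) - log(clean) = log(speckle)
log_noisy = np.log(noisy)
residual_target = log_noisy - np.log(clean)
```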

The restoration problem is modeled as a maximum a posteriori (MAP) optimization problem. In this framework, the aim is to find the estimate $\hat{x}$ that maximizes $p(x|y)$, which corresponds to $\hat{x} = \arg\max_x \left( \log p(y|x) + \log p(x) \right)$. This optimization is accomplished with a GAN architecture and without the use of ground-truth data. The generator is trained to restore $\hat{x}$ from the noisy observation $y$ ($\hat{x} = G(y)$). Then $\hat{y}$ is generated from $\hat{x}$ through the corruption function $F$ ($\hat{y} = F(\hat{x}) = F(G(y))$), and the MSE is computed w.r.t. $\tilde{y} = F(G(\hat{y}))$ so as to enforce high probability under the likelihood term. The same set of parameters is used for $F$. As for the prior term, it is maximized through the discriminator, whose role is to distinguish between $y$ and $\hat{y}$; indeed, this requires $\hat{x}$ to be a sample from $p(x)$.
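The data flow of the likelihood term can be sketched schematically (toy stand-ins for $G$ and $F$; the discriminator/prior term is omitted, and both the corruption model and the smoother are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

def F(x, rng):
    """Assumed known corruption function: multiplicative single-look speckle."""
    return x * rng.gamma(shape=1.0, scale=1.0, size=x.shape)

def G(y):
    """Placeholder generator: a crude moving-average smoother stands in for the network."""
    return np.convolve(y, np.ones(5) / 5.0, mode="same")

y = F(np.full(256, 1.5), rng)      # noisy observation of a constant scene
x_hat = G(y)                       # restored estimate  x̂ = G(y)
y_hat = F(x_hat, rng)              # re-corrupted estimate ŷ = F(G(y)), shown to the discriminator
y_tilde = F(G(y_hat), rng)         # second pass ỹ = F(G(ŷ))
likelihood_loss = np.mean((y_hat - y_tilde) ** 2)   # MSE term described above
```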
However, access to the corruption function is assumed, which is not necessarily available in the case of SAR images:

To our knowledge, there is no other Deep Learning approach attempting to solve the unsupervised signal reconstruction problem [2019]

• They train a network to regress a corrupted image to the same image with a different corruption value. Assuming the corruption has zero-mean, their network learns to remove the corruption by the conditional expectation. This setting implicitly assumes access to the distribution of uncorrupted images in order to generate different noisy versions of the same image.

• High-Quality Self-Supervised Deep Image Denoising

• Learning speckle suppression in SAR images without ground truth: application to sentinel-1 time-series
Under the assumption that no changes occur, multi-temporal stacks are exploited: two images are randomly picked from the stack at a time. As only the noise changes between them, the network ends up learning the underlying reflectivity. We conducted a similar experiment by picking close dates from a stack; however, the network fails to reach the same performance as when changes are taken into account.
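The pairing scheme and why it works can be illustrated in numpy (a Noise2Noise-style argument; scene, stack size and noise model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Multi-temporal stack of speckled images over an unchanged scene
clean = rng.uniform(0.5, 2.0, size=(16, 16))
stack = clean * rng.gamma(1.0, 1.0, size=(20, 16, 16))  # unit-mean single-look speckle

# Training pair: two distinct dates drawn at random; the second acts as the noisy target
i, j = rng.choice(len(stack), size=2, replace=False)
inp, target = stack[i], stack[j]

# With unit-mean multiplicative noise, the MSE-optimal prediction of the target is
# its conditional expectation E[target | scene] = clean: that is what the network
# ends up learning. The temporal mean is an empirical proxy for that expectation.
best_prediction_proxy = stack.mean(axis=0)
```

When close dates are correlated (or the scene changes), the target is no longer an independent realization of the same reflectivity, which is consistent with the performance drop we observed.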

• Towards Deep Unsupervised SAR Despeckling with blind-spot Convolutional Neural Networks
Adaptation of the blind-spot approach to amplitude SAR images, where a Gamma distribution is fitted.

• RABASAR: a fast RAtio BAsed multi-temporal SAR despeckling
A speckle-reduction method for multi-temporal stacks of SAR images. The core of the algorithm is to compute a super-image each time a new image is acquired. Given a SAR time series, the method consists in:

• computing the so-called super-image by first applying multi-temporal multilooking (taking changes into account through a generalized likelihood ratio test), then denoising the multilooked result to remove the remaining noise fluctuations (to this end, the ENL is estimated so as to tune the level of denoising to be applied);
• computing, for each date i of the stack, the ratio image between the image at date i and the super-image;
• denoising the ratio and multiplying the result by the super-image to produce the filtered output.
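The three steps above can be sketched in numpy (a box filter is a toy stand-in for the actual denoiser, and the change test / ENL estimation are omitted; all sizes and noise parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Smooth toy reflectivity and a 4-look multi-temporal stack over an unchanged scene
clean = 1.0 + np.linspace(0.0, 1.0, 32)[None, :] * np.ones((32, 1))
stack = clean * rng.gamma(4.0, 1.0 / 4.0, size=(12, 32, 32))

def box_denoise(img, k=3):
    """Toy denoiser (box filter) standing in for the filter used in RABASAR."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = padded[r:r + k, c:c + k].mean()
    return out

# 1) super-image: multi-temporal multilooking, then residual denoising
super_image = box_denoise(stack.mean(axis=0))
# 2) ratio image between date i and the super-image
ratio = stack[0] / super_image
# 3) denoise the ratio and multiply back by the super-image
filtered = box_denoise(ratio) * super_image
```

The point of the ratio is that it concentrates the remaining fluctuations into a nearly stationary image that is much easier to denoise than the original date.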

• Episodic Training for Domain Generalization
Keywords: Domain generalization, Neural Network training

Domain generalization (DG) is the challenging and topical problem of learning models that generalize to novel testing domains with different statistics than a set of known training domains

Problem setting: given $n$ source domains $D_1, D_2, \dots, D_n$ used for training, the aim is to learn a model that generalizes well to the novel testing domain $D^\star$, which has different statistics w.r.t. the training domains. Solutions:

• Vanilla Aggregation Method: all training domains are aggregated together and a single end-to-end CNN is trained. Simple, yet it provides a strong baseline.
• Domain Specific Models: a model is trained for each domain i
• Episodic Training: the solution proposed by the authors. In a first implementation, the CNN is split into a feature extractor and a classifier. Robustness of the domain-agnostic model is ensured by training each of the two blocks on data from a specific domain while pairing it with a partner block mismatched with the input data. A generalization is then proposed: each block is trained paired with a block with random weights. This adapts well to both homogeneous and heterogeneous label spaces.
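The mismatched pairing at the heart of the episodic scheme can be sketched with toy linear "networks" (everything here is an illustrative reduction, not the authors' architecture):

```python
import numpy as np

rng = np.random.default_rng(6)

n_domains = 3

# Toy per-domain blocks: a feature extractor and a classifier per domain,
# each reduced to a single linear map for illustration.
features = [rng.normal(size=(4, 8)) for _ in range(n_domains)]     # 8-dim input -> 4-dim feature
classifiers = [rng.normal(size=(2, 4)) for _ in range(n_domains)]  # 4-dim feature -> 2 classes

def episode_pairs(n, rng):
    """For each domain i, draw a mismatched partner j != i (episodic pairing)."""
    pairs = []
    for i in range(n):
        j = rng.choice([d for d in range(n) if d != i])
        pairs.append((i, int(j)))
    return pairs

# One "episode": data from domain i passes through its own feature extractor
# but through a classifier belonging to a different domain (or vice versa),
# which forces each block to work with partners it was not co-trained with.
x = rng.normal(size=8)
for i, j in episode_pairs(n_domains, rng):
    logits = classifiers[j] @ (features[i] @ x)
```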