r"""Computes the loss for the given input.

Args:
    interpolate (torch.Tensor): It must have the dimensions (N, \*) where
        \* means any number of additional dimensions.
    d_interpolate (torch.Tensor): Output of the ``discriminator`` with
        ``interpolate`` as the input. It must have the dimensions (N, \*)
        where \* means any number of additional dimensions.

Returns:
    scalar if reduction is applied else Tensor with dimensions (N, \*).
"""
# TODO(Aniket1998): Check for performance bottlenecks
# If found, write the backprop yourself instead of
# relying on autograd
return wasserstein_gradient_penalty(interpolate, d_interpolate, self.reduction)
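# The functional called above is not shown here. A minimal sketch of what a
# WGAN-GP style ``wasserstein_gradient_penalty`` typically computes is given
# below: the squared deviation of the per-sample gradient norm from 1,
# obtained via autograd. The function name and ``reduction`` handling are
# assumptions for illustration, not the library's actual implementation.

```python
import torch
import torch.autograd as autograd


def wasserstein_gradient_penalty_sketch(interpolate, d_interpolate, reduction="mean"):
    # Gradient of the discriminator output w.r.t. the interpolated samples.
    (gradients,) = autograd.grad(
        outputs=d_interpolate,
        inputs=interpolate,
        grad_outputs=torch.ones_like(d_interpolate),
        create_graph=True,   # keep the graph so the penalty itself is differentiable
        retain_graph=True,
    )
    # Per-sample L2 norm of the gradient, flattened over all non-batch dims.
    grad_norm = gradients.view(gradients.size(0), -1).norm(2, dim=1)
    # Penalty: squared deviation of the gradient norm from 1.
    penalty = (grad_norm - 1) ** 2
    if reduction == "mean":
        return penalty.mean()
    elif reduction == "sum":
        return penalty.sum()
    return penalty
```

# Note that ``create_graph=True`` is what makes double backprop possible and
# is also the likely performance bottleneck referenced in the TODO above.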