How to use the gradient.compute_gradient function in gradient

To help you get started, we've selected a few gradient.compute_gradient examples based on popular ways it is used in public projects.

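All three examples below pass a cost function J and a flat parameter vector theta to gradient.compute_gradient and get back a numerically estimated gradient to compare against an analytical one. The repository's own implementation is not shown on this page; as a rough mental model, a central-difference version might look like the sketch below. The helper name, the epsilon value, and the handling of J returning a (cost, grad) tuple are assumptions based on how J is defined in the examples.

import numpy as np

def numerical_gradient_sketch(J, theta, epsilon=1e-4):
    # Central-difference estimate of dJ/dtheta, one coordinate at a time.
    # theta is assumed to be a flat 1-D parameter vector, as in the excerpts.
    # J may return either a scalar cost or a (cost, grad) tuple, as in the
    # examples on this page; only the cost term is used here.
    num_grad = np.zeros_like(theta, dtype=np.float64)
    for i in range(theta.size):
        step = np.zeros_like(theta, dtype=np.float64)
        step[i] = epsilon
        cost_plus = J(theta + step)
        cost_minus = J(theta - step)
        if isinstance(cost_plus, tuple):
            cost_plus, cost_minus = cost_plus[0], cost_minus[0]
        num_grad[i] = (cost_plus - cost_minus) / (2.0 * epsilon)
    return num_grad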

Example from jatinshah/ufldl_tutorial: softmax_exercise.py (view on GitHub)
## STEP 2: Implement softmaxCost
#
#  Implement softmax_cost in softmax.py (softmaxCost.m in the original MATLAB exercise).

(cost, grad) = softmax.softmax_cost(theta, num_classes, input_size, lambda_, input_data, labels)

##======================================================================
## STEP 3: Gradient checking
#
#  As with any learning algorithm, you should always check that your
#  gradients are correct before learning the parameters.
#
if debug:
    J = lambda x: softmax.softmax_cost(x, num_classes, input_size, lambda_, input_data, labels)

    num_grad = gradient.compute_gradient(J, theta)

    # Use this to visually compare the gradients side by side
    print(num_grad, grad)

    # Compare numerically computed gradients with the ones returned by softmax_cost
    diff = np.linalg.norm(num_grad - grad) / np.linalg.norm(num_grad + grad)
    print(diff)
    print("Norm of the difference between numerical and analytical gradients (should be < 1e-7)\n\n")

##======================================================================
## STEP 4: Learning parameters
#
#  Once you have verified that your gradients are correct,
#  you can start training your softmax regression classifier
#  (softmaxTrain with minFunc in the original MATLAB exercise).
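
Each excerpt finishes the check with the same relative-error measure, norm(num_grad - grad) / norm(num_grad + grad). If you want to reuse that pattern outside these scripts, a small helper could look like the following; the function name and the assertion threshold are mine, not the repository's.

import numpy as np

def relative_gradient_error(num_grad, grad):
    # Scale-free distance between the numerical and analytical gradients;
    # values around 1e-7 to 1e-9 indicate a correct implementation.
    return np.linalg.norm(num_grad - grad) / np.linalg.norm(num_grad + grad)

# Example usage with the variables from the excerpt above:
# diff = relative_gradient_error(num_grad, grad)
# assert diff < 1e-7, "gradient check failed: %g" % diff
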
Example from jatinshah/ufldl_tutorial: linear_decoder_exercise.py (view on GitHub)
# To speed up gradient checking, we will use a reduced network and some
# dummy patches

debug_hidden_size = 5
debug_visible_size = 8
patches = np.random.rand(8, 10)

theta = sparse_autoencoder.initialize(debug_hidden_size, debug_visible_size)

cost, grad = sparse_autoencoder.sparse_autoencoder_linear_cost(theta, debug_visible_size, debug_hidden_size,
                                                               lambda_, sparsity_param, beta, patches)

# Check gradients
J = lambda x: sparse_autoencoder.sparse_autoencoder_linear_cost(x, debug_visible_size, debug_hidden_size,
                                                                lambda_, sparsity_param, beta, patches)
num_grad = gradient.compute_gradient(J, theta)

print(grad, num_grad)

# Compare numerically computed gradients with the ones obtained from backpropagation
diff = np.linalg.norm(num_grad - grad) / np.linalg.norm(num_grad + grad)
print(diff)
print("Norm of the difference between numerical and analytical gradients (should be < 1e-9)\n\n")

##======================================================================
## STEP 2: Learn features on small patches
#  In this step, you will use your sparse autoencoder (which now uses a
#  linear decoder) to learn features on small patches sampled from related
#  images.

## STEP 2a: Load patches
#  In this step, we load 100k patches sampled from the STL10 dataset and
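
The "reduced network" comment in this excerpt is about cost: a central-difference check needs two cost evaluations per parameter, so the check scales with the size of theta. A quick back-of-the-envelope comparison is below; the helper is mine, and the full-size hidden/visible values are an assumption about what the rest of the exercise uses.

# Parameter count of a sparse autoencoder: W1, W2, b1, b2
def n_params(hidden_size, visible_size):
    return 2 * hidden_size * visible_size + hidden_size + visible_size

print(n_params(5, 8))      # debug sizes above: 93 parameters, roughly 186 cost evaluations
print(n_params(400, 192))  # assumed full-size run: 154,192 parameters
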
Example from jatinshah/ufldl_tutorial: train.py (view on GitHub)
# First, let's make sure your numerical gradient computation is correct for a
# simple function. After you have implemented gradient.compute_gradient
# (computeNumericalGradient.m in the original MATLAB exercise), run the following:

if debug:
    gradient.check_gradient()

    # Now we can use it to check your cost function and derivative calculations
    # for the sparse autoencoder.
    # J is the cost function

    J = lambda x: sparse_autoencoder.sparse_autoencoder_cost(x, visible_size, hidden_size,
                                                             lambda_, sparsity_param,
                                                             beta, patches)
    num_grad = gradient.compute_gradient(J, theta)

    # Use this to visually compare the gradients side by side
    print(num_grad, grad)

    # Compare numerically computed gradients with the ones obtained from backpropagation
    diff = np.linalg.norm(num_grad - grad) / np.linalg.norm(num_grad + grad)
    print(diff)
    print("Norm of the difference between numerical and analytical gradients (should be < 1e-9)\n\n")

##======================================================================
## STEP 4: After verifying that your implementation of
#  sparse_autoencoder_cost is correct, you can start training your sparse
#  autoencoder with an L-BFGS optimizer (minFunc in the original MATLAB exercise).

#  Randomly initialize the parameters
theta = sparse_autoencoder.initialize(hidden_size, visible_size)
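
The train.py excerpt also calls gradient.check_gradient() before touching the autoencoder. In the UFLDL tutorial this sanity check compares the numerical gradient against a function whose gradient is known in closed form; a minimal sketch in that spirit uses the quadratic h(x) = x1^2 + 3*x1*x2. Whether the repository uses exactly this function, and the helper name below, are assumptions.

import numpy as np

def check_gradient_sketch():
    # h(x) = x1**2 + 3*x1*x2 has the exact gradient [2*x1 + 3*x2, 3*x1].
    h = lambda x: x[0] ** 2 + 3 * x[0] * x[1]
    x = np.array([4.0, 10.0])
    exact = np.array([2 * x[0] + 3 * x[1], 3 * x[0]])

    # Reuse the central-difference idea from the sketch at the top of the page.
    epsilon = 1e-4
    numerical = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = epsilon
        numerical[i] = (h(x + step) - h(x - step)) / (2 * epsilon)

    print(numerical, exact)
    print(np.linalg.norm(numerical - exact) / np.linalg.norm(numerical + exact))

check_gradient_sketch()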