How to use the baal.modelwrapper.ModelWrapper class in baal

To help you get started, we’ve selected a few baal examples based on popular ways it is used in public projects.
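Before the full examples, here is a minimal sketch of the pattern both snippets below rely on: wrap a torch model and a loss in ModelWrapper, then drive training and prediction through train_on_dataset and predict_on_dataset. The toy model, the dataset names and the argument values are placeholders, not part of the original examples; check the exact signatures against the baal version you have installed.

from torch import nn, optim
from baal.modelwrapper import ModelWrapper

# Any torch.nn.Module and loss can be wrapped (toy model used here).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
wrapper = ModelWrapper(model, criterion)

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# train_dataset / pool_dataset are assumed to be standard torch Datasets returning (input, target):
# wrapper.train_on_dataset(train_dataset, optimizer, batch_size=32, epoch=1, use_cuda=False)
# predictions = wrapper.predict_on_dataset(pool_dataset, batch_size=32, iterations=20, use_cuda=False)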


Example: ElementAI/baal, src/baal/bayesian/bayesian_layer/wrapper.py (view on GitHub)
import torch
import structlog
from torch.autograd import Variable

from baal.modelwrapper import ModelWrapper
from baal.bayesian.bayesian_layer import LinearBayesianLayer

log = structlog.get_logger("ModelWrapper")


class BayesianWrapper(ModelWrapper):
    """
    A class to do bayesian inference using a model with bayesian layers.
    This class ensures that each Bayesian layer in the model is followed by a
    Softplus() activation instead of a ReLU().
    It also draws several samples of the training output so that the
    uncertainty of the loss can be estimated through variational inference (VI).

    Args:
        model : (torch.nn.Module) pytorch model
        criterion : (torch.nn.Module) pytorch loss function
        beta : scale for regularization effect
        iterations: The number of times the model would iterate over
            the batch to create uncertainty.

    Returns:
        Tensor, the computed Bayesian loss.
    """
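Using the class sketched above, a minimal hypothetical usage might look like the following. The constructor arguments beta and iterations are taken from the Args section of the docstring and are an assumption, not a confirmed signature; whether the wrapper converts standard layers to Bayesian ones or expects a model already built from LinearBayesianLayer is not shown in this excerpt.

from torch import nn

# Toy model; BayesianWrapper is the class defined above.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
criterion = nn.CrossEntropyLoss()

# beta scales the regularization term, iterations is the number of stochastic
# forward passes per batch (both assumed from the docstring above).
wrapper = BayesianWrapper(model, criterion, beta=1e-4, iterations=10)

# The wrapper is expected to expose the usual ModelWrapper API, e.g.:
# wrapper.train_on_dataset(train_dataset, optimizer, batch_size=32, epoch=1, use_cuda=False)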
Example: ElementAI/baal, experiments/vgg_mcdropout_cifar10.py (view on GitHub)
    # (the excerpt begins mid-statement; preceding lines of the script are not shown)
        hyperparams['shuffle_prop'])
    criterion = CrossEntropyLoss()
    model = vgg16(pretrained=False, num_classes=10)
    weights = load_state_dict_from_url('https://download.pytorch.org/models/vgg16-397923af.pth')
    weights = {k: v for k, v in weights.items() if 'classifier.6' not in k}
    model.load_state_dict(weights, strict=False)

    # change the Dropout layers to MCDropout so they stay active at inference time
    model = patch_module(model)

    if use_cuda:
        model.cuda()
    optimizer = optim.SGD(model.parameters(), lr=hyperparams["lr"], momentum=0.9)

    # Wraps the model into a usable API.
    model = ModelWrapper(model, criterion)

    logs = {}
    logs['epoch'] = 0

    # for prediction we use a smaller batch size
    # since it is slower
    active_loop = ActiveLearningLoop(active_set,
                                     model.predict_on_dataset,
                                     heuristic,
                                     hyperparams.get('n_data_to_label', 1),
                                     batch_size=10,
                                     iterations=hyperparams['iterations'],
                                     use_cuda=use_cuda)

    for epoch in tqdm(range(args.epoch)):
        model.train_on_dataset(active_set, optimizer, hyperparams["batch_size"], 1, use_cuda)
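The published excerpt ends inside the training loop. In baal's example scripts this loop is typically completed by asking the active learning loop to label new samples after each training pass; the continuation below is a hedged sketch of that pattern, assuming ActiveLearningLoop.step() ranks the unlabelled pool with the heuristic, labels the top samples, and returns False once nothing is left to label.

        # continuation sketch of the loop body above (not part of the published excerpt):
        # after each training pass, let the heuristic pick new samples to label.
        should_continue = active_loop.step()
        if not should_continue:
            break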