How to use the larq.layers_base.QuantizerBase class in larq

To help you get started, we’ve selected a few larq examples based on popular ways QuantizerBase is used in public projects.
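
`QuantizerBase` is not called on its own: as the excerpts below show, larq mixes it into standard Keras layer classes, where it adds the `input_quantizer` and `kernel_quantizer` arguments and forwards everything else to the Keras base class. A minimal sketch of that pattern, assuming larq's registered `"ste_sign"` quantizer alias and a hypothetical layer name:

```python
import tensorflow as tf

from larq import utils
from larq.layers_base import QuantizerBase


# Hypothetical class; it mirrors how larq defines its own Quant* layers.
@utils.register_keras_custom_object
class MyQuantDense(QuantizerBase, tf.keras.layers.Dense):
    """Dense layer whose inputs and kernel are quantized by the mixin."""


# The mixin adds the quantizer keywords on top of the usual Dense arguments;
# "ste_sign" is larq's built-in sign quantizer with a straight-through estimator.
layer = MyQuantDense(32, input_quantizer="ste_sign", kernel_quantizer="ste_sign")
```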


From larq/larq: larq/layers.py (view on GitHub)
            pointwise_initializer=pointwise_initializer,
            bias_initializer=bias_initializer,
            depthwise_regularizer=depthwise_regularizer,
            pointwise_regularizer=pointwise_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            depthwise_constraint=depthwise_constraint,
            pointwise_constraint=pointwise_constraint,
            bias_constraint=bias_constraint,
            metrics=metrics,
            **kwargs,
        )


@utils.register_keras_custom_object
class QuantConv2DTranspose(QuantizerBase, tf.keras.layers.Conv2DTranspose):
    """Transposed quantized convolution layer (sometimes called Deconvolution).

    The need for transposed convolutions generally arises from the desire to use a
    transformation going in the opposite direction of a normal convolution, i.e.,
    from something that has the shape of the output of some convolution to something
    that has the shape of its input while maintaining a connectivity pattern
    that is compatible with said convolution. `input_quantizer` and `kernel_quantizer`
    are the element-wise quantization functions to use. If both quantization functions
    are `None` this layer is equivalent to `Conv2DTranspose`.

    When using this layer as the first layer in a model, provide the keyword argument
    `input_shape` (tuple of integers, does not include the sample axis), e.g.
    `input_shape=(128, 128, 3)` for 128x128 RGB pictures in
    `data_format="channels_last"`.

    # Arguments
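
As an illustration of the class described above, a quantized transposed convolution can be used to upsample a feature map. This is a sketch, not code from the larq repository; it assumes the public `larq.layers` namespace and the registered `"ste_sign"` quantizer alias:

```python
import tensorflow as tf
import larq as lq

# Upsample a 16x16x64 feature map to 32x32 with binarized inputs and kernel.
model = tf.keras.Sequential([
    lq.layers.QuantConv2DTranspose(
        32,
        (3, 3),
        strides=(2, 2),
        padding="same",
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        input_shape=(16, 16, 64),
    )
])
print(model.output_shape)  # (None, 32, 32, 32)
```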
From larq/larq: larq/layers.py (view on GitHub)
            input_quantizer=input_quantizer,
            kernel_quantizer=kernel_quantizer,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=bias_constraint,
            metrics=metrics,
            **kwargs,
        )


@utils.register_keras_custom_object
class QuantConv2D(QuantizerBase, tf.keras.layers.Conv2D):
    """2D quantized convolution layer (e.g. spatial convolution over images).

    This layer creates a convolution kernel that is convolved
    with the layer input to produce a tensor of outputs.
    `input_quantizer` and `kernel_quantizer` are the element-wise quantization
    functions to use. If both quantization functions are `None` this layer is
    equivalent to `Conv2D`. If `use_bias` is True, a bias vector is created
    and added to the outputs. Finally, if `activation` is not `None`,
    it is applied to the outputs as well.

    When using this layer as the first layer in a model, provide the keyword argument
    `input_shape` (tuple of integers, does not include the sample axis),
    e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures in
    `data_format="channels_last"`.

    # Arguments
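
A typical binarized-network block built from `QuantConv2D` might look like the sketch below; the `"ste_sign"` quantizer and `"weight_clip"` constraint names are larq's registered aliases and are assumptions here, not part of the excerpt:

```python
import tensorflow as tf
import larq as lq

# Binarize both the incoming activations and the latent kernel, clip the
# latent weights to [-1, 1], and follow with batch normalization.
model = tf.keras.Sequential([
    lq.layers.QuantConv2D(
        64,
        (3, 3),
        padding="same",
        use_bias=False,
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        input_shape=(128, 128, 3),
    ),
    tf.keras.layers.BatchNormalization(scale=False),
])
```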
From larq/larq: larq/layers.py (view on GitHub)
If both `input_quantizer` and `kernel_quantizer` are `None` the layer
is equivalent to a full precision layer.
"""

import tensorflow as tf

from larq import utils
from larq.layers_base import (
    QuantizerBase,
    QuantizerDepthwiseBase,
    QuantizerSeparableBase,
)


@utils.register_keras_custom_object
class QuantDense(QuantizerBase, tf.keras.layers.Dense):
    """Just your regular densely-connected quantized NN layer.

    `QuantDense` implements the operation:
    `output = activation(dot(input_quantizer(input), kernel_quantizer(kernel)) + bias)`,
    where `activation` is the element-wise activation function passed as the
    `activation` argument, `kernel` is a weights matrix created by the layer, and `bias`
    is a bias vector created by the layer (only applicable if `use_bias` is `True`).
    `input_quantizer` and `kernel_quantizer` are the element-wise quantization
    functions to use. If both quantization functions are `None` this layer is
    equivalent to `Dense`.

    !!! note ""
        If the input to the layer has a rank greater than 2, then it is flattened
        prior to the initial dot product with `kernel`.

    !!! example
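
The excerpt cuts off at the docstring's example marker. As a separate, hedged illustration (not the original docstring example), a `QuantDense` layer could sit in a small classifier like this, again assuming larq's `"ste_sign"` and `"weight_clip"` aliases:

```python
import tensorflow as tf
import larq as lq

# Flattened 28x28 inputs feeding a binarized dense layer,
# followed by a full-precision softmax head.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    lq.layers.QuantDense(
        512,
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
    ),
    tf.keras.layers.Dense(10, activation="softmax"),
])
```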
From larq/larq: larq/layers.py (view on GitHub)
            input_quantizer=input_quantizer,
            kernel_quantizer=kernel_quantizer,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=bias_constraint,
            metrics=metrics,
            **kwargs,
        )


@utils.register_keras_custom_object
class QuantConv3D(QuantizerBase, tf.keras.layers.Conv3D):
    """3D convolution layer (e.g. spatial convolution over volumes).

    This layer creates a convolution kernel that is convolved
    with the layer input to produce a tensor of
    outputs. `input_quantizer` and `kernel_quantizer` are the element-wise quantization
    functions to use. If both quantization functions are `None` this layer is
    equivalent to `Conv3D`. If `use_bias` is True, a bias vector is created and
    added to the outputs. Finally, if `activation` is not `None`,
    it is applied to the outputs as well.

    When using this layer as the first layer in a model, provide the keyword argument
    `input_shape` (tuple of integers, does not include the sample axis),
    e.g. `input_shape=(128, 128, 128, 1)` for 128x128x128 volumes
    with a single channel, in `data_format="channels_last"`.

    # Arguments
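
Mirroring the `input_shape` guidance above, a quantized 3D convolution over single-channel volumes might be set up as follows (a sketch assuming the `"ste_sign"` alias):

```python
import tensorflow as tf
import larq as lq

# Quantized 3D convolution over single-channel 32x32x32 volumes.
model = tf.keras.Sequential([
    lq.layers.QuantConv3D(
        16,
        (3, 3, 3),
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        input_shape=(32, 32, 32, 1),
    )
])
```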
From larq/larq: larq/layers.py (view on GitHub)
            input_quantizer=input_quantizer,
            kernel_quantizer=kernel_quantizer,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=bias_constraint,
            metrics=metrics,
            **kwargs,
        )


@utils.register_keras_custom_object
class QuantLocallyConnected1D(QuantizerBase, tf.keras.layers.LocallyConnected1D):
    """Locally-connected quantized layer for 1D inputs.

    The `QuantLocallyConnected1D` layer works similarly to the `QuantConv1D` layer,
    except that weights are unshared, that is, a different set of filters is applied
    at each different patch of the input. `input_quantizer` and `kernel_quantizer`
    are the element-wise quantization functions to use. If both quantization functions
    are `None` this layer is equivalent to `LocallyConnected1D`.

    !!! example
        ```python
        # apply an unshared weight 1d convolution of length 3 to a sequence with
        # 10 timesteps, with 64 output filters
        model = Sequential()
        model.add(QuantLocallyConnected1D(64, 3, input_shape=(10, 32)))
        # now model.output_shape == (None, 8, 64)
        # add a new conv1d on top
From larq/larq: larq/layers.py (view on GitHub)
            kernel_quantizer=kernel_quantizer,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=bias_constraint,
            metrics=metrics,
            implementation=implementation,
            **kwargs,
        )


@utils.register_keras_custom_object
class QuantLocallyConnected2D(QuantizerBase, tf.keras.layers.LocallyConnected2D):
    """Locally-connected quantized layer for 2D inputs.

    The `QuantLocallyConnected2D` layer works similarly to the `QuantConv2D` layer,
    except that weights are unshared, that is, a different set of filters is applied
    at each different patch of the input. `input_quantizer` and `kernel_quantizer`
    are the element-wise quantization functions to use. If both quantization functions
    are `None` this layer is equivalent to `LocallyConnected2D`.

    !!! example
        ```python
        # apply a 3x3 unshared weights convolution with 64 output filters
        # on a 32x32 image with `data_format="channels_last"`:
        model = Sequential()
        model.add(QuantLocallyConnected2D(64, (3, 3), input_shape=(32, 32, 3)))
        # now model.output_shape == (None, 30, 30, 64)
From larq/larq: larq/layers.py (view on GitHub)
            input_quantizer=input_quantizer,
            kernel_quantizer=kernel_quantizer,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=bias_constraint,
            metrics=metrics,
            **kwargs,
        )


@utils.register_keras_custom_object
class QuantConv3DTranspose(QuantizerBase, tf.keras.layers.Conv3DTranspose):
    """Transposed quantized convolution layer (sometimes called Deconvolution).

    The need for transposed convolutions generally arises
    from the desire to use a transformation going in the opposite direction
    of a normal convolution, i.e., from something that has the shape of the
    output of some convolution to something that has the shape of its input
    while maintaining a connectivity pattern that is compatible with
    said convolution. `input_quantizer` and `kernel_quantizer`
    are the element-wise quantization functions to use. If both quantization functions
    are `None` this layer is equivalent to `Conv3DTranspose`.

    When using this layer as the first layer in a model, provide the keyword argument
    `input_shape` (tuple of integers, does not include the sample axis),
    e.g. `input_shape=(128, 128, 128, 3)` for a 128x128x128 volume with 3 channels
    if `data_format="channels_last"`.
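
The note about `None` quantizers can be made concrete: with quantizers set, the layer binarizes its inputs and kernel; with both left at their default, it behaves like a plain `Conv3DTranspose`. A brief sketch (the `"ste_sign"` alias is an assumption):

```python
import larq as lq

# Binarized transposed 3D convolution...
quantized = lq.layers.QuantConv3DTranspose(
    16,
    (3, 3, 3),
    strides=(2, 2, 2),
    padding="same",
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
)

# ...and the same layer with both quantizers left as None, which is
# equivalent to tf.keras.layers.Conv3DTranspose.
full_precision = lq.layers.QuantConv3DTranspose(
    16, (3, 3, 3), strides=(2, 2, 2), padding="same"
)
```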
From larq/larq: larq/layers.py (view on GitHub)
            input_quantizer=input_quantizer,
            kernel_quantizer=kernel_quantizer,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=bias_constraint,
            metrics=metrics,
            **kwargs,
        )


@utils.register_keras_custom_object
class QuantConv1D(QuantizerBase, tf.keras.layers.Conv1D):
    """1D quantized convolution layer (e.g. temporal convolution).

    This layer creates a convolution kernel that is convolved with the layer input
    over a single spatial (or temporal) dimension to produce a tensor of outputs.
    `input_quantizer` and `kernel_quantizer` are the element-wise quantization
    functions to use. If both quantization functions are `None` this layer is
    equivalent to `Conv1D`.
    If `use_bias` is True, a bias vector is created and added to the outputs.
    Finally, if `activation` is not `None`, it is applied to the outputs as well.

    When using this layer as the first layer in a model, provide an `input_shape`
    argument (tuple of integers or `None`), e.g. `(10, 128)` for sequences of
    10 vectors of 128 dimensions, or `(None, 128)` for variable-length
    sequences of 128-dimensional vectors.

    # Arguments
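
To round off the excerpt above, a temporal example matching the variable-length `input_shape` case might look like this sketch (assuming the `"ste_sign"` alias):

```python
import tensorflow as tf
import larq as lq

# Quantized temporal convolution over variable-length sequences of
# 128-dimensional vectors.
model = tf.keras.Sequential([
    lq.layers.QuantConv1D(
        64,
        3,
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        input_shape=(None, 128),
    )
])
```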