How to use the hls4ml.model.hls_model module in hls4ml

To help you get started, we’ve selected a few hls4ml examples based on popular ways it is used in public projects.


Example from hls-fpga-machine-learning/hls4ml: hls4ml/model/optimizer/passes/bn_quant.py (view on GitHub)
import numpy as np
import re

from ..optimizer import OptimizerPass
import hls4ml.model.hls_model as hls_model
import hls4ml.model.templates as templates

class BatchNormalizationQuantizedTanh(hls_model.Layer):
    ''' Merged Batch Normalization and quantized (binary or ternary) Tanh layer.
        The mean, variance, beta, gamma parameters are folded into the threshold(s) at which the
        sign of the input flips after the quantized (binary or ternary) Tanh activation.
    '''

    def initialize(self):
        inp = self.get_input_variable()
        shape = inp.shape
        dims = inp.dim_names
        # Extract the width parameters from a precision string such as
        # 'ap_fixed<16,6>' or 'ap_int<8>', e.g. '16,6' -> ['16', '6']
        precision_bits = re.search('.+<(.+?)>', inp.type.precision).group(1).split(',')
        if 'ap_int' in inp.type.precision:
            W = int(precision_bits[0])  # total bit width
            I = W                       # all bits are integer bits
            F = 0                       # no fractional bits
        elif 'ap_fixed' in inp.type.precision:
            W = int(precision_bits[0])
            W = int(precision_bits[0])
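The folding that the class docstring describes can be illustrated with a small standalone sketch (none of this is hls4ml API, and the BatchNorm parameter values below are made up): a batch normalization followed by a binary tanh reduces to a single comparison against a precomputed threshold, because the sign of gamma*(x - mean)/sqrt(var + eps) + beta flips at x = mean - beta*sqrt(var + eps)/gamma when gamma > 0.

```python
import numpy as np

# Made-up BatchNorm parameters for illustration (not from hls4ml):
gamma, beta, mean, var, eps = 2.0, -0.5, 0.1, 0.04, 1e-9

def bn_binary_tanh(x):
    """BatchNorm followed by a binary tanh: output is +1 or -1."""
    y = gamma * (x - mean) / np.sqrt(var + eps) + beta
    return np.where(y > 0, 1, -1)

# Fold the four BN parameters into the single input value at which
# the output sign flips (valid for gamma > 0):
threshold = mean - beta * np.sqrt(var + eps) / gamma

x = np.array([-1.0, 0.0, 0.1, 0.2, 1.0])
# The two-layer computation and the single threshold comparison agree:
assert np.array_equal(bn_binary_tanh(x), np.where(x > threshold, 1, -1))
```

This is why the optimizer pass can replace the two layers with a thresholded sign function: only the per-channel threshold needs to be stored in hardware, and a ternary tanh (outputs in {-1, 0, +1}) would need two thresholds instead of one, matching the "threshold(s)" wording in the docstring.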