How to use the kaggler.const.SEC_PER_MIN function in Kaggler

To help you get started, we’ve selected a few Kaggler examples, based on popular ways it is used in public projects.


github: jeongyoonlee / Kaggler / kaggler / model / nn.py
        start = time.time()
        print('\tEPOCH TRAIN     VALID     BEST      TIME (m)')
        print('\t--------------------------------------------')

        # Before training
        p = self.predict_raw(X)
        auc = roc_auc_score(y, p)
        auc_val = auc
        if X_val is not None:
            p_val = self.predict_raw(X_val)
            auc_val = roc_auc_score(y_val, p_val)

        print('\t{:3d}:  {:.6f}  {:.6f}  {:.6f}  {:.2f}'.format(
              0, auc, auc_val, self.auc_opt,
              (time.time() - start) / SEC_PER_MIN))

        # Use 'while' instead of 'for' so that n_epoch can be increased if the
        # validation error is still improving at the end of n_epoch.
        epoch = 1
        while epoch <= n_epoch:
            # Shuffle inputs every epoch - it helps avoid local optima
            # when batch < n_obs.
            np.random.shuffle(idx)

            # Find the optimal weights for a batch of input examples.
            # If batch == 1, this is stochastic optimization, which is slow
            # but uses minimal memory.  If batch == n_obs, this is batch
            # optimization, which is fast but uses maximum memory.
            # Otherwise, it is mini-batch optimization, which balances the
            # speed/space trade-off.
            for i in range(int(n_obs / batch) + 1):
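The snippet above uses `SEC_PER_MIN` only to convert an elapsed wall-clock time from seconds to minutes for logging. A minimal self-contained sketch of the same pattern is below; it defines the constant locally on the assumption that `kaggler.const.SEC_PER_MIN` is simply 60, so it runs without Kaggler installed:

```python
import time

# Assumed to match kaggler.const.SEC_PER_MIN (seconds per minute).
SEC_PER_MIN = 60

start = time.time()
# ... training work would happen here ...
elapsed_min = (time.time() - start) / SEC_PER_MIN

# Same per-epoch log format as the snippet above, with placeholder metrics.
print('\t{:3d}:  {:.6f}  {:.6f}  {:.6f}  {:.2f}'.format(
      0, 0.5, 0.5, 0.5, elapsed_min))
```

Using a named constant instead of a bare `60` keeps the unit conversion readable at every call site in the logging code.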