How to use the sklearn.linear_model.stochastic_gradient.BaseSGDRegressor class in scikit-learn

To help you get started, we've selected a few BaseSGDRegressor examples based on popular ways it is used in public projects.


Source: angadgill/Parallel-SGD, scikit-learn/sklearn/linear_model/stochastic_gradient.py (view on GitHub)
            # Excerpt from BaseSGDRegressor._fit_regressor. The opening of the
            # call (down to self.l1_ratio) is reconstructed here from the
            # scikit-learn source for context: the non-averaged branch hands
            # the model state and hyperparameters to the Cython routine
            # plain_sgd, then advances t_ and normalizes the intercept.
            self.coef_, self.intercept_ = \
                plain_sgd(self.coef_,
                          self.intercept_[0],
                          loss_function,
                          penalty_type,
                          alpha, C,
                          self.l1_ratio,
                          dataset,
                          n_iter,
                          int(self.fit_intercept),
                          int(self.verbose),
                          int(self.shuffle),
                          seed,
                          1.0, 1.0,  # per-class weights, fixed at 1.0 for regression
                          learning_rate_type,
                          self.eta0, self.power_t, self.t_,
                          intercept_decay)

            self.t_ += n_iter * X.shape[0]                    # total samples seen so far
            self.intercept_ = np.atleast_1d(self.intercept_)  # keep intercept as a 1-d array

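To make the role of eta0, power_t, and t_ in the call above concrete, here is a minimal pure-Python sketch (not the library's code) of the "invscaling" schedule that the SGD routine applies; the helper name invscaling_eta is hypothetical:

import numpy as np

def invscaling_eta(eta0, power_t, t):
    """Learning rate at step t under the invscaling schedule: eta = eta0 / t**power_t."""
    return eta0 / (t ** power_t)

# With the defaults used below (eta0=0.01, power_t=0.25), the step size
# decays slowly: t=1 -> 0.01, t=10000 -> 0.001.
for t in (1, 100, 10000):
    print(t, invscaling_eta(eta0=0.01, power_t=0.25, t=t))

Because t_ is carried on the estimator and incremented by n_iter * X.shape[0] after each fit, the schedule keeps decaying across successive calls rather than restarting.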

class SGDRegressor(BaseSGDRegressor, _LearntSelectorMixin):
    """Linear model fitted by minimizing a regularized empirical loss with SGD.

    SGD stands for Stochastic Gradient Descent: the gradient of the loss is
    estimated one sample at a time and the model is updated along the way
    with a decreasing strength schedule (aka learning rate).

    The regularizer is a penalty added to the loss function that shrinks model
    parameters towards the zero vector using either the squared Euclidean norm
    (L2), the absolute norm (L1), or a combination of both (Elastic Net). If
    the parameter update crosses the 0.0 value because of the regularizer, the
    update is truncated to 0.0 to allow for learning sparse models and achieve
    online feature selection.

    This implementation works with data represented as dense numpy arrays of
    floating point values for the features.
    """
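The truncation behaviour the docstring describes is easy to see in practice: with an L1 penalty, updates that cross zero are clipped to exactly 0.0, so irrelevant coefficients typically end up exactly zero. A minimal sketch using the public sklearn.linear_model.SGDRegressor (upstream scikit-learn, not the fork above; this assumes a modern release, where max_iter replaces the older n_iter parameter):

import numpy as np
from sklearn.linear_model import SGDRegressor

# Toy data: only the first 3 of 20 features actually matter.
rng = np.random.RandomState(0)
X = rng.randn(200, 20)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.5 * X[:, 2] + 0.1 * rng.randn(200)

# An L1 penalty truncates updates that cross zero, giving a sparse model
# (online feature selection, as described in the docstring above).
reg = SGDRegressor(penalty="l1", alpha=0.01, max_iter=1000, random_state=0)
reg.fit(X, y)
print("non-zero coefficients:", np.sum(reg.coef_ != 0.0))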
Source: angadgill/Parallel-SGD, scikit-learn/sklearn/linear_model/stochastic_gradient.py (view on GitHub)
# BaseSGDRegressor.__init__: forwards the shared SGD hyperparameters to the
# BaseSGD constructor. The n_jobs parameter appears to be this fork's
# addition for parallel training; it is not in upstream scikit-learn's
# BaseSGDRegressor.
def __init__(self, loss="squared_loss", penalty="l2", alpha=0.0001,
                 l1_ratio=0.15, fit_intercept=True, n_iter=5, shuffle=True,
                 verbose=0, epsilon=DEFAULT_EPSILON, random_state=None,
                 learning_rate="invscaling", eta0=0.01, power_t=0.25,
                 warm_start=False, average=False, n_jobs=1):
        super(BaseSGDRegressor, self).__init__(loss=loss, penalty=penalty,
                                               alpha=alpha, l1_ratio=l1_ratio,
                                               fit_intercept=fit_intercept,
                                               n_iter=n_iter, shuffle=shuffle,
                                               verbose=verbose,
                                               epsilon=epsilon,
                                               random_state=random_state,
                                               learning_rate=learning_rate,
                                               eta0=eta0, power_t=power_t,
                                               warm_start=warm_start,
                                               average=average)
        self.n_jobs = int(n_jobs)
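
For completeness, here is a sketch of how an SGD regressor built on these defaults is typically driven. It uses the upstream sklearn.linear_model.SGDRegressor API (the fork's n_jobs behaviour is specific to that project); partial_fit streams data in batches, and t_ keeps advancing across calls so the invscaling learning rate continues to decay:

import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.RandomState(42)
X = rng.randn(1000, 5)
y = X @ np.array([1.0, -1.0, 2.0, 0.5, 0.0]) + 0.1 * rng.randn(1000)

# invscaling is SGDRegressor's default schedule, matching the
# eta0/power_t defaults in the __init__ above.
reg = SGDRegressor(learning_rate="invscaling", eta0=0.01, power_t=0.25,
                   random_state=42)

# Stream the data in mini-batches; each call advances t_, so the step
# size keeps shrinking rather than resetting between batches.
for batch in np.array_split(np.arange(len(X)), 10):
    reg.partial_fit(X[batch], y[batch])

print("learned coefficients:", np.round(reg.coef_, 2))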