How to use the pmdarima.utils.check_endog function in pmdarima

To help you get started, we’ve selected a few pmdarima examples based on popular ways it is used in public projects.
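Before diving into the project snippets below, here is a minimal, self-contained sketch of check_endog itself: it validates an endogenous (univariate) series and returns it as a 1-d numpy array. The input values are made up for illustration.

from pmdarima.utils import check_endog

# Validate a univariate series; returns a 1-d float ndarray.
y = check_endog([1.0, 2.5, 3.2, 4.8])
print(y.shape, y.dtype)  # (4,) float64

The dtype and copy keyword arguments behave like those of sklearn's check_array, as the snippets below show (e.g. check_endog(f, dtype=None, copy=False)).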


Example from the alkaline-ml/pmdarima repo: pmdarima/arima/arima.py

from sklearn.utils.validation import check_array

from pmdarima.utils import check_endog


def _seasonal_prediction_with_confidence(arima_res, start, end, exog, alpha,
                                         **kwargs):
    """Compute the prediction for a SARIMAX and get a conf interval

    Unfortunately, SARIMAX does not really provide a nice way to get the
    confidence intervals out of the box, so we have to perform the
    ``get_prediction`` code here and unpack the confidence intervals manually.

    Notes
    -----
    For internal use only.
    """
    results = arima_res.get_prediction(
        start=start,
        end=end,
        exog=exog,
        **kwargs)

    f = results.predicted_mean
    conf_int = results.conf_int(alpha=alpha)
    return check_endog(f, dtype=None, copy=False), \
        check_array(conf_int, copy=False, dtype=None)
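
The helper above is mainly unpacking statsmodels' get_prediction results and validating them with check_endog and check_array. As a rough sketch of the same pattern directly against statsmodels (the series and model order below are invented for illustration):

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.RandomState(42)
y = np.sin(np.linspace(0, 10, 100)) + 0.1 * rng.rand(100)

res = SARIMAX(y, order=(1, 0, 0)).fit(disp=0)
pred = res.get_prediction(start=100, end=109)  # 10 steps out-of-sample

forecast = pred.predicted_mean        # point forecasts, shape (10,)
conf_int = pred.conf_int(alpha=0.05)  # lower/upper bounds, shape (10, 2)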

Example from the alkaline-ml/pmdarima repo: pmdarima/metrics.py

def smape(y_true, y_pred):
    """Compute the Symmetric Mean Absolute Percentage Error.

    Examples
    --------
    >>> import numpy as np
    >>> y_true = np.array([0.07533, 0.07533, 0.07533, 0.07533,
    ...                    0.07533, 0.07533, 0.0672, 0.0672])
    >>> y_pred = np.array([0.102, 0.107, 0.047, 0.1,
    ...                    0.032, 0.047, 0.108, 0.089])
    >>> smape(y_true, y_pred)
    42.60306631890196

    A perfect score:
    >>> smape(y_true, y_true)
    0.0

    References
    ----------
    .. [1] https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error  # noqa: E501
    """
    y_true = check_endog(y_true)  # type: np.ndarray
    y_pred = check_endog(y_pred)  # type: np.ndarray
    abs_diff = np.abs(y_pred - y_true)
    return np.mean((abs_diff * 200 / (np.abs(y_pred) + np.abs(y_true))))
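
The body above implements SMAPE as mean(200 * |y_pred - y_true| / (|y_pred| + |y_true|)). A quick hand-check of that formula on made-up numbers:

import numpy as np

y_true = np.array([100.0, 200.0])
y_pred = np.array([110.0, 180.0])

# 200 * 10/210 = 9.52..., 200 * 20/380 = 10.53..., mean is ~10.03
abs_diff = np.abs(y_pred - y_true)
print(np.mean(abs_diff * 200 / (np.abs(y_pred) + np.abs(y_true))))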

Example from the alkaline-ml/pmdarima repo: pmdarima/arima/arima.py

        Notes
        -----
        * Internally, this calls ``fit`` again using the OLD model parameters
          as the starting parameters for the new model's MLE computation.
        """
        get_compatible_check_is_fitted(self, 'arima_res_')
        model_res = self.arima_res_

        # Allow updating with a scalar if the user is just adding a single
        # sample.
        if not is_iterable(y):
            y = [y]

        # validate the new samples to add
        y = check_endog(y, dtype=DTYPE)
        n_samples = y.shape[0]

        # if exogenous is None and new exog provided, or vice versa, raise
        exogenous = self._check_exog(exogenous)  # type: np.ndarray

        # ensure the k_exog matches
        if exogenous is not None:
            k_exog = model_res.model.k_exog
            n_exog, exog_dim = exogenous.shape

            if exogenous.shape[1] != k_exog:
                raise ValueError("Dim mismatch in fit exogenous (%i) and new "
                                 "exogenous (%i)" % (k_exog, exog_dim))

            # make sure the number of samples in exogenous matches the
            # number of samples in the endog
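
For context, here is a minimal usage sketch of the update method this snippet comes from (the series and model order are arbitrary):

import numpy as np
import pmdarima as pm

rng = np.random.RandomState(0)
y = rng.rand(100)  # arbitrary training series

model = pm.ARIMA(order=(1, 0, 0)).fit(y)

# Scalars are wrapped into a list internally (see is_iterable above),
# so both forms are valid:
model.update(0.5)
model.update([0.4, 0.6])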

Example from the alkaline-ml/pmdarima repo: pmdarima/model_selection/_validation.py

        - 'mean_squared_error'

    cv : BaseTSCrossValidator or None, optional (default=None)
        An instance of cross-validation. If None, will use a RollingForecastCV

    verbose : integer, optional
        The verbosity level.

    error_score : 'raise' or numeric
        Value to assign to the score if an error occurs in estimator fitting.
        If set to 'raise', the error is raised.
        If a numeric value is given, ModelFitWarning is raised. This parameter
        does not affect the refit step, which will always raise the error.
    """
    y, exog = indexable(y, exogenous)
    y = check_endog(y, copy=False)

    cv = check_cv(cv)
    scoring = _check_scoring(scoring)

    # validate the error score
    if not (error_score == "raise" or isinstance(error_score, numbers.Number)):
        raise ValueError('error_score should be the string "raise" or a '
                         'numeric value')

    # TODO: clone between each iteration?
    # TODO: in the future we might consider joblib for parallelizing, but it
    #   . could cause cross threads in parallelism..

    results = [
        _fit_and_score(fold, base.clone(estimator), y, exog,
                       scorer=scoring,
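
The snippet above is cut off mid-call by the example viewer. For a sense of the public API it backs, here is a minimal end-to-end cross-validation sketch (dataset and model order chosen arbitrarily):

import numpy as np
import pmdarima as pm
from pmdarima.model_selection import RollingForecastCV, cross_val_score

y = pm.datasets.load_wineind()  # example dataset bundled with pmdarima

cv = RollingForecastCV(h=6, step=12)
scores = cross_val_score(pm.ARIMA(order=(1, 1, 2)), y,
                         scoring='smape', cv=cv)

print(np.average(scores))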