How to use the datalad.support.param.Parameter class in datalad

To help you get started, we've selected a few datalad examples based on popular ways Parameter is used in public projects.

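The snippets below all follow the same pattern: a DataLad Interface subclass declares its arguments as a _params_ dict mapping parameter names to Parameter instances, and a static __call__ implements the command. A minimal hedged sketch of that pattern (HelloCommand is a hypothetical command invented for illustration; import paths follow the datalad versions these excerpts were taken from and may differ in newer releases):

    from datalad.interface.base import Interface
    from datalad.support.param import Parameter
    from datalad.support.constraints import EnsureNone, EnsureStr


    class HelloCommand(Interface):
        """Greet a path (hypothetical command, for illustration only)."""

        _params_ = dict(
            path=Parameter(
                args=("path",),
                metavar="PATH",
                nargs="?",
                doc="""path to greet; the current directory is assumed
                if none is given""",
                constraints=EnsureStr() | EnsureNone()),
            loud=Parameter(
                args=("--loud",),
                action="store_true",
                doc="""if set, shout the greeting"""),
        )

        @staticmethod
        def __call__(path=None, loud=False):
            msg = "hello %s" % (path or ".")
            print(msg.upper() if loud else msg)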

From datalad/datalad: 3rd/datalad-revolution/datalad_revolution/revcreate.py (excerpt)
metavar='INIT OPTIONS',
            nargs=REMAINDER,
            doc="""options to pass to :command:`git init`. [PY: Options can be
            given as a list of command line arguments or as a GitPython-style
            option dictionary PY][CMD: Any argument specified after the
            destination path of the repository will be passed to git-init
            as-is CMD]. Note that not all options will lead to viable results.
            For example '--bare' will not yield a repository where DataLad
            can adjust files in its worktree."""),
        dataset=Parameter(
            args=("-d", "--dataset"),
            metavar='DATASET',
            doc="""specify the dataset to perform the create operation on. If
            a dataset is given, a new subdataset will be created in it.""",
            constraints=EnsureRevDataset() | EnsureNone()),
        force=Parameter(
            args=("-f", "--force",),
            doc="""enforce creation of a dataset in a non-empty directory""",
            action='store_true'),
        description=location_description,
        no_annex=Parameter(
            args=("--no-annex",),
            doc="""if set, a plain Git repository will be created without any
            annex""",
            action='store_true'),
        # TODO seems to only cause a config flag to be set, this could be done
        # in a procedure
        fake_dates=Parameter(
            args=('--fake-dates',),
            action='store_true',
            doc="""Configure the repository to use fake dates. The date for a
            new commit will be set to one second later than the latest commit
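A note on the doc markup in this snippet: the [PY: ... PY] and [CMD: ... CMD] markers let a single doc string carry interface-specific wording; DataLad's doc processing keeps only the branch matching the target interface (Python API or command line). A hedged sketch of a pass-through parameter in the same style (git_opts is an illustrative name, not taken from the snippet):

    from argparse import REMAINDER

    from datalad.support.param import Parameter

    git_opts = Parameter(
        args=("git_opts",),
        metavar="GIT OPTIONS",
        nargs=REMAINDER,
        doc="""options passed to the underlying git call as-is.
        [PY: Give them as a list of strings PY][CMD: Everything after
        the last recognized argument is forwarded unchanged CMD].""")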
From datalad/datalad: datalad/distribution/publish.py (excerpt)
dataset=Parameter(
            args=("-d", "--dataset"),
            metavar='DATASET',
            doc="""specify the (top-level) dataset to be published. If no dataset
            is given, the datasets are determined based on the input arguments""",
            constraints=EnsureDataset() | EnsureNone()),
        to=Parameter(
            args=("--to",),
            metavar='LABEL',
            doc="""name of the target sibling. If no name is given an attempt is
            made to identify the target based on the dataset's configuration
            (i.e. a configured tracking branch, or a single sibling that is
            configured for publication)""",
            # TODO: See TODO at top of class!
            constraints=EnsureStr() | EnsureNone()),
        since=Parameter(
            args=("--since",),
            constraints=EnsureStr() | EnsureNone(),
            doc="""When publishing dataset(s), specifies commit (treeish, tag, etc)
            from which to look for changes
            to decide whether updated publishing is necessary for this and which children.
            If empty argument is provided, then we would take from the previously 
            published to that remote/sibling state (for the current branch)"""),
        # since: commit => .gitmodules diff to head => submodules to publish
        missing=missing_sibling_opt,
        path=Parameter(
            args=("path",),
            metavar='PATH',
            # TODO this description is no longer correct
            doc="path(s), that may point to file handle(s) to publish including "
                "their actual content or to subdataset(s) to be published. If a "
                "file handle is published with its data, this implicitly means "
From datalad/datalad: datalad/interface/install_handle.py (excerpt)
Installing a handle into an existing directory is not possible, unless a
    handle with the same name already exists there; in that case, the
    cloning is skipped.

    Examples:

      $ datalad install-handle http://psydata.ovgu.de/forrest_gump
      $ datalad install-handle MyCoolCollection/EvenCoolerHandle /foo/bar
    """
    _params_ = dict(
        handle=Parameter(
            doc="name or url of the handle to install; in case of a name it "
                "is expected to state the name of the collection the handle is "
                "in, followed by the handle's name, separated by '/'.",
            constraints=EnsureStr()),
        path=Parameter(
            args=('path',),
            nargs='?',
            doc="path, where to install the handle. If not provided, local "
                "directory with the name from the url will be used",
            constraints=EnsureStr() | EnsureNone()),
        name=Parameter(
            doc="local name of the installed handle. If not provided, the name"
                "of the installation directory will be used.",
            constraints=EnsureStr() | EnsureNone()))

    @staticmethod
    def __call__(handle, path=None, name=None):
        """
        Examples
        --------
        >>> from datalad.api import install_handle, list_handles, whereis
From datalad/datalad: datalad/distribution/clone.py (excerpt)
_params_ = dict(
        dataset=Parameter(
            args=("-d", "--dataset"),
            doc="""(parent) dataset to clone into. If given, the newly cloned
            dataset is registered as a subdataset of the parent. Also, if given,
            relative paths are interpreted as being relative to the parent
            dataset, and not relative to the working directory.""",
            constraints=EnsureDataset() | EnsureNone()),
        source=Parameter(
            args=("source",),
            metavar='SOURCE',
            doc="""URL, DataLad resource identifier, local path or instance of
            dataset to be cloned""",
            constraints=EnsureStr() | EnsureNone()),
        path=Parameter(
            args=("path",),
            metavar='PATH',
            nargs="?",
            doc="""path to clone into.  If no `path` is provided a
            destination path will be derived from a source URL
            similar to :command:`git clone`"""),
        description=location_description,
        reckless=reckless_opt,
        alt_sources=Parameter(
            args=('--alternative-sources',),
            dest='alt_sources',
            metavar='SOURCE',
            nargs='+',
            doc="""Alternative sources to be tried if a dataset cannot
            be obtained from the main `source`""",
            constraints=EnsureStr() | EnsureNone()),
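The EnsureStr() | EnsureNone() expressions above combine two constraints into an alternative that accepts either form; constraints are callables that validate (and possibly coerce) a value or raise ValueError. A hedged sketch, assuming the datalad.support.constraints API used in these excerpts:

    from datalad.support.constraints import EnsureNone, EnsureStr

    spec = EnsureStr() | EnsureNone()  # accept a string or None
    spec("some/path")   # -> 'some/path'
    spec(None)          # -> None
    spec(42)            # raises ValueError: neither constraint matches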
From datalad/datalad: datalad/interface/add_archive_content.py (excerpt)
constraints=EnsureStr() | EnsureNone()
        ),
        # TODO: Python only???
        annex=Parameter(
            doc="""annex instance to use"""
            #constraints=EnsureStr() | EnsureNone()
        ),
        # TODO: Python only!
        stats=Parameter(
            doc="""ActivityStats instance for global tracking""",
        ),
        key=Parameter(
            args=("--key",),
            action="store_true",
            doc="""flag to signal if provided archive is not actually a filename on its own but an annex key"""),
        copy=Parameter(
            args=("--copy",),
            action="store_true",
            doc="""flag to copy the content of the archive instead of moving"""),
        allow_dirty=allow_dirty,
        commit=Parameter(
            args=("--no-commit",),
            action="store_false",
            dest="commit",
            doc="""flag to not commit upon completion"""),
        drop_after=Parameter(
            args=("--drop-after",),
            action="store_true",
            doc="""drop extracted files after adding to annex""",
        ),
        delete_after=Parameter(
            args=("--delete-after",),
From datalad/datalad: datalad/plugin/check_dates.py (excerpt)
to_render = dict(res.items())
        elif "report" in res and res["report"]["objects"]:
            to_render = {k: v for k, v in res.items()
                         if k not in ["status", "message", "logger"]}
        if to_render:
            ui.message(json.dumps(to_render, sort_keys=True, indent=2))

    _params_ = dict(
        paths=Parameter(
            args=("paths",),
            metavar="PATH",
            nargs="*",
            doc="""Root directory in which to search for Git repositories. The
            current working directory will be used by default.""",
            constraints=EnsureStr() | EnsureNone()),
        reference_date=Parameter(
            args=("-D", "--reference-date"),
            metavar="DATE",
            doc="""Compare dates to this date. If dateutil is installed, this
            value can be any format that its parser recognizes. Otherwise, it
            should be a unix timestamp that starts with a "@". The default
            value corresponds to 01 Jan, 2018 00:00:00 -0000.""",
            constraints=EnsureStr()),
        revs=Parameter(
            args=("--rev",),
            dest="revs",
            action="append",
            metavar="REVISION",
            doc="""Search timestamps from commits that are reachable from [PY:
            these revisions PY][CMD: REVISION CMD]. Any revision specification
            supported by :command:`git log`, including flags like --all and
            --tags, can be used.[CMD:  This option can be given multiple times.
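Because revs uses action="append", --rev can be given multiple times and the values are collected into a list. A hedged command-line sketch (the path and revisions are illustrative; @1514764800 is the unix-timestamp form the reference_date doc describes, i.e. 01 Jan 2018 00:00:00 UTC):

    $ datalad check-dates --rev master --rev v0.10.0 \
        --reference-date @1514764800 /path/to/repos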
From datalad/datalad: datalad/distribution/create_sibling_github.py (excerpt)
facilitate distribution and exchange, while still allowing any dataset
    consumer to obtain actual data content from alternative sources.

    For Github authentication, user credentials can be given as arguments.
    Alternatively, they are obtained interactively or queried from the
    system's credential store. Lastly, an *oauth* token stored in the Git
    configuration under the variable *hub.oauthtoken* will be used
    automatically. Such a token can be obtained, for example, using the
    command-line Github interface (https://github.com/sociomantic/git-hub)
    by running :kbd:`git hub setup`.
    """
    # XXX prevent common args from being added to the docstring
    _no_eval_results = True

    _params_ = dict(
        dataset=Parameter(
            args=("--dataset", "-d",),
            doc="""specify the dataset to create the publication target for. If
                no dataset is given, an attempt is made to identify the dataset
                based on the current working directory""",
            constraints=EnsureDataset() | EnsureNone()),
        reponame=Parameter(
            args=('reponame',),
            metavar='REPONAME',
            doc="""Github repository name. When operating recursively,
            a suffix will be appended to this name for each subdataset""",
            constraints=EnsureStr()),
        recursive=recursion_flag,
        recursion_limit=recursion_limit,
        name=Parameter(
            args=('-s', '--name',),
            metavar='NAME',
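As the class docstring notes, an *oauth* token stored under the Git configuration variable hub.oauthtoken is picked up automatically. One hedged way to store such a token yourself (the token value is a placeholder):

    $ git config --global hub.oauthtoken <token>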
From datalad/datalad: datalad/metadata/metadata.py (excerpt)
and dataset paths::

        % datalad -f json metadata --get-aggregates
    """
    # make the custom renderer the default, path reporting isn't the top
    # priority here
    result_renderer = 'tailored'

    _params_ = dict(
        dataset=Parameter(
            args=("-d", "--dataset"),
            doc="""dataset to query. If given, metadata will be reported
            as stored in this dataset. Otherwise, the closest available
            dataset containing a query path will be consulted.""",
            constraints=EnsureDataset() | EnsureNone()),
        path=Parameter(
            args=("path",),
            metavar="PATH",
            doc="path(s) to query metadata for",
            nargs="*",
            constraints=EnsureStr() | EnsureNone()),
        get_aggregates=Parameter(
            args=('--get-aggregates',),
            action='store_true',
            doc="""if set, yields all (sub)datasets for which aggregate
            metadata are available in the dataset. No other action is
            performed, even if other arguments are given. The reported
            results contain a dataset's ID, the commit hash at which
            metadata aggregation was performed, and the location of the
            object file(s) containing the aggregated metadata."""),
        reporton=reporton_opt,
        recursive=recursion_flag)
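A hedged sketch of the same aggregate query through the Python API, assuming a dataset with aggregated metadata available (the path is illustrative, and the call follows the 0.x metadata API shown in this excerpt):

    from datalad.api import metadata

    # list all (sub)datasets for which aggregated metadata is available
    for res in metadata(dataset="/path/to/dataset", get_aggregates=True):
        print(res["path"])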
From datalad/datalad: datalad/core/local/status.py (excerpt)
constraints=EnsureChoice(None, 'basic', 'availability', 'all'),
        doc="""Switch whether to include information on the annex
        content of individual files in the status report, such as
        recorded file size. By default no annex information is reported
        (faster). Three report modes are available: basic information
        like file size and key name ('basic'); additionally test whether
        file content is present in the local annex ('availability';
        requires one or two additional file system stat calls, but does
        not call git-annex), this will add the result properties
        'has_content' (boolean flag) and 'objloc' (absolute path to an
        existing annex object file); or 'all' which will report all
        available information (presently identical to 'availability').
        [CMD: The 'basic' mode will be assumed when this option is given
        but no mode is specified. CMD]
        """),
    untracked=Parameter(
        args=('--untracked',),
        metavar='MODE',
        constraints=EnsureChoice('no', 'normal', 'all'),
        doc="""If and how untracked content is reported when comparing
        a revision to the state of the work tree. 'no': no untracked
        content is reported; 'normal': untracked files and entire
        untracked directories are reported as such; 'all': report
        individual files even in fully untracked directories."""),
    recursive=recursion_flag,
    recursion_limit=recursion_limit)


STATE_COLOR_MAP = {
    'untracked': ac.RED,
    'modified': ac.RED,
    'deleted': ac.RED,
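EnsureChoice, used for both the annex and untracked parameters above, restricts a value to a fixed set of choices. A hedged sketch, assuming the same constraints API as in the earlier excerpts:

    from datalad.support.constraints import EnsureChoice

    untracked = EnsureChoice('no', 'normal', 'all')
    untracked('normal')  # -> 'normal'
    untracked('nope')    # raises ValueError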