How to use the awswrangler.s3.utils.delete_objects function in awswrangler

To help you get started, we’ve selected a few awswrangler examples, based on popular ways the function is used in public projects.
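Before the project-specific snippets below, here is a minimal sketch of calling the function directly. It assumes awswrangler >= 1.0, where the function is exposed as awswrangler.s3.delete_objects, plus configured AWS credentials; the validation helper and the name delete_prefix are our own illustration, not part of the library.

```python
# A minimal sketch, assuming awswrangler >= 1.0 and AWS credentials configured.
# is_s3_prefix and delete_prefix are illustrative helpers, not library API.

def is_s3_prefix(path: str) -> bool:
    """Cheap sanity check before an irreversible bulk delete."""
    return path.startswith("s3://") and len(path) > len("s3://")

def delete_prefix(path: str) -> None:
    """Delete every object under an S3 prefix."""
    if not is_s3_prefix(path):
        raise ValueError(f"Not an S3 path: {path!r}")
    import awswrangler as wr  # lazy import keeps the helper above testable offline
    wr.s3.delete_objects(path)  # removes all objects under the prefix
```

Because delete_objects removes everything under the prefix, a guard like is_s3_prefix is a cheap way to catch a mistyped or empty path before anything is deleted.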


awslabs / aws-data-wrangler — awswrangler/s3/write/write.py (view on GitHub)
    mode="append",
    region=None,
    key=None,
    secret=None,
    profile=None,
    num_procs=None,
    num_files=2,
):
    """
    Store a given Pandas DataFrame in S3.
    """
    session_primitives = SessionPrimitives(
        region=region, key=key, secret=secret, profile=profile
    )
    if mode == "overwrite" or (mode == "overwrite_partitions" and not partition_cols):
        delete_objects(path, session_primitives=session_primitives)
    elif mode not in ["overwrite_partitions", "append"]:
        raise UnsupportedWriteMode(mode)
    partition_paths = _write_data(
        df=df,
        path=path,
        partition_cols=partition_cols,
        preserve_index=preserve_index,
        file_format=file_format,
        mode=mode,
        session_primitives=session_primitives,
        num_procs=num_procs,
        num_files=num_files,
    )
    if database:
        write_metadata(
            df=df,
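The snippet above deletes the target prefix only for certain write modes: "overwrite" always wipes it, "overwrite_partitions" wipes it only when no partition columns are given, and "append" never does. That decision can be isolated as a pure function; the helper below is our own sketch of that logic, not part of the library.

```python
# Illustrative helper mirroring the mode logic in write.py above:
# "overwrite" always deletes first; "overwrite_partitions" deletes only
# when no partition columns are given; "append" never deletes.

VALID_MODES = {"overwrite", "overwrite_partitions", "append"}

def should_delete_first(mode, partition_cols):
    """Return True if the target prefix should be wiped before writing."""
    if mode not in VALID_MODES:
        raise ValueError(f"Unsupported write mode: {mode}")
    return mode == "overwrite" or (
        mode == "overwrite_partitions" and not partition_cols
    )
```

Factoring the condition out this way makes the delete-before-write behavior easy to unit-test without touching S3.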
awslabs / aws-data-wrangler — benchmarks/serverless_etl/run_pythonshell.py (view on GitHub)
def clean_output(bucket):
    path = f"s3://{bucket}/pythonshell_output/"
    print(f"Cleaning {path}*")
    delete_objects(path)
awslabs / aws-data-wrangler — benchmarks/serverless_etl/run_lambda.py (view on GitHub)
def clean_output(bucket):
    path = f"s3://{bucket}/lambda_output/"
    print(f"Cleaning {path}*")
    delete_objects(path)
awslabs / aws-data-wrangler — benchmarks/serverless_etl/run_pyspark.py (view on GitHub)
def clean_output(bucket):
    path = f"s3://{bucket}/pyspark_output/"
    print(f"Cleaning {path}*")
    delete_objects(path)
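The three benchmark cleaners above differ only in the output suffix, so they can be collapsed into one parameterized helper. This is a sketch under the same assumptions as before (awswrangler >= 1.0 exposing awswrangler.s3.delete_objects); output_path and clean_output are our own names, not the library's.

```python
# Generic version of the three clean_output functions above.
# Assumes awswrangler >= 1.0 and configured AWS credentials.

def output_path(bucket: str, job_name: str) -> str:
    """Build the S3 prefix a benchmark job writes its output to."""
    return f"s3://{bucket}/{job_name}_output/"

def clean_output(bucket: str, job_name: str) -> None:
    """Delete every object under the job's output prefix."""
    import awswrangler as wr  # lazy import: only needed when actually deleting
    path = output_path(bucket, job_name)
    print(f"Cleaning {path}*")
    wr.s3.delete_objects(path)
```

With this helper, `clean_output(bucket, "pyspark")` behaves like the run_pyspark.py cleaner above, and new job types need no extra boilerplate.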