How to use etlhelper.db_helper_factory.DB_HELPER_FACTORY in etlhelper

To help you get started, we’ve selected a few etlhelper examples, based on popular ways it is used in public projects.


Source: BritishGeologicalSurvey/etlhelper, etlhelper/etl.py (view on GitHub)
def executemany(query, rows, conn, commit_chunks=True):
    """
    Use query to insert/update data from rows to the database at conn.

    :param query: str, SQL insert command with placeholders for data
    :param rows: List of tuples containing data to be inserted/updated
    :param conn: dbapi connection
    :param commit_chunks: bool, commit after each chunk has been inserted/updated
    :return row_count: int, number of rows inserted/updated
    """
    msg = ("executemany parameter order will be changed in a future release to "
           "(query, conn, rows).  "
           "Avoid breaking code by using named parameters for all e.g. "
           "executemany(query=my_query, conn=my_conn, rows=my_rows)")
    warn(msg, DeprecationWarning)
    logger.info(f"Executing many (chunksize={CHUNKSIZE})")
    logger.debug(f"Executing:\n\n{query}\n\nagainst\n\n{conn}")

    helper = DB_HELPER_FACTORY.from_conn(conn)
    processed = 0

    with helper.cursor(conn) as cursor:
        for chunk in _chunker(rows, CHUNKSIZE):
            # Run query
            try:
                # Chunker pads to whole chunk with None; remove these
                chunk = [row for row in chunk if row is not None]

                # Show first row as example of data
                if processed == 0:
                    logger.debug(f"First row: {chunk[0]}")

                # Execute query
                helper.executemany(cursor, query, chunk)
                processed += len(chunk)
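
DB_HELPER_FACTORY.from_conn(conn) is the part that makes this code database-agnostic: it inspects the live DBAPI connection and returns the matching helper, which then supplies the cursor and the driver-specific executemany call. Below is a minimal sketch of calling executemany from user code with named parameters, as the DeprecationWarning above recommends. The SQLite file, table name and SQL are made-up examples; a plain sqlite3 connection is assumed, since etlhelper's factory can select a helper for it.

import sqlite3

from etlhelper import executemany
from etlhelper.db_helper_factory import DB_HELPER_FACTORY

# Hypothetical database file and table, used for illustration only
conn = sqlite3.connect("igneous_rock.db")
conn.execute("CREATE TABLE IF NOT EXISTS igneous_rock (id INTEGER, name TEXT)")

# from_conn() picks the helper that matches the connection type (SQLite here);
# executemany() makes the same call internally, as shown in the excerpt above
helper = DB_HELPER_FACTORY.from_conn(conn)
print(type(helper).__name__)  # shows which DbHelper class was selected

rows = [(1, "basalt"), (2, "granite"), (3, "rhyolite")]
insert_sql = "INSERT INTO igneous_rock (id, name) VALUES (?, ?)"  # qmark paramstyle for sqlite3

# Named parameters, per the deprecation warning about the changing argument order
row_count = executemany(query=insert_sql, conn=conn, rows=rows)
print(f"{row_count} rows inserted")

Because the helper is chosen from the connection itself, the calling code only has to provide SQL in the paramstyle of the underlying driver; cursor creation, chunking and the driver-specific executemany call are handled the same way for every supported database.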