How to use the lakehouse.pyspark_table function in lakehouse

To help you get started, we've selected a few examples of how lakehouse.pyspark_table is used in popular public projects.

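Before the excerpts, a minimal sketch of the decorator in isolation may help. It assumes pyspark_table is importable from the top-level lakehouse package (as the title suggests) and that a spark resource is available on the context, mirroring the excerpts below; the table name, metadata key, and row contents are illustrative:

from lakehouse import pyspark_table
from pyspark.sql import DataFrame as SparkDF
from pyspark.sql import Row

# pyspark_table registers the decorated function as a lakehouse table whose
# value is the Spark DataFrame the function returns. The name and metadata
# arguments are illustrative; both appear in the excerpts below.
@pyspark_table(name='cities', metadata={'owner': 'data-eng'})
def Cities(context) -> SparkDF:
    return context.resources.spark.spark_session.createDataFrame(
        [Row(city='Oslo'), Row(city='Bergen')]
    )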

Example from dagster-io/dagster: python_modules/lakehouse/lakehouse_tests/test_pyspark_custom_url_scheme_lakehouse.py
def _wrap(fn):
    # Apply pyspark_table with values captured from the enclosing factory
    # (see the reconstruction below).
    return pyspark_table(
        name=name, metadata={FEATURE_AREA: feature_area}, input_tables=input_tables
    )(fn)
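
The _wrap helper above closes over name, feature_area, and input_tables from an enclosing function that the excerpt omits. A hedged reconstruction of that enclosing factory (its name and the FEATURE_AREA value are assumptions; only the inner body comes from the excerpt):

FEATURE_AREA = 'feature_area'  # assumed metadata key; the test imports this constant

def table_for_feature_area(name, feature_area, input_tables):
    # Returns a decorator that applies pyspark_table with a shared
    # feature-area metadata convention.
    def _wrap(fn):
        return pyspark_table(
            name=name, metadata={FEATURE_AREA: feature_area}, input_tables=input_tables
        )(fn)
    return _wrap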

Example from dagster-io/dagster: python_modules/lakehouse/lakehouse_tests/test_typed_pyspark_lakehouse.py
def _wrap(fn):
    # As above, but this factory also records the table's Spark schema in
    # metadata and appends generated column descriptions to the description.
    # The conditional expression spans the whole concatenation, so when
    # description is empty the column descriptions are used alone.
    return pyspark_table(
        name=name,
        metadata={'spark_type': spark_type},
        input_tables=input_tables,
        description=description + '\n\n' + create_column_descriptions(spark_type)
        if description
        else create_column_descriptions(spark_type),
    )(fn)

Example from dagster-io/dagster: python_modules/lakehouse/lakehouse_tests/test_basic_pyspark_lakehouse.py
@pyspark_table(other_input_defs=[InputDefinition('num', int)])
def TableOne(context, num) -> SparkDF:
    # other_input_defs declares a plain Dagster input ('num') in addition to
    # any upstream table inputs.
    return context.resources.spark.spark_session.createDataFrame([Row(num=num)])

Example from dagster-io/dagster: python_modules/lakehouse/lakehouse_tests/test_basic_pyspark_lakehouse.py
@pyspark_table
def TableTwo(context) -> SparkDF:
    # Bare decorator form: pyspark_table can also be applied without arguments.
    return context.resources.spark.spark_session.createDataFrame([Row(num=2)])