Pipeline
laktory.models.pipeline.Pipeline
Bases: BaseModel, PulumiResource, TerraformResource, PipelineChild
Pipeline model to manage a data pipeline including reading from data sources, applying data transformations and outputting to data sinks.
A pipeline is composed of collections of nodes, each one defining its
own source, transformations and optional sink. A node may be the source of
another node.
A pipeline may be run manually using Python or the CLI, but it may also be deployed and scheduled with one of the supported orchestrators, such as a Databricks Job or a Lakeflow Declarative Pipeline.
The DataFrame backend used to run the pipeline can be configured at the pipeline level or at the nodes level.
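For instance, a node-level backend may override the pipeline-level default. The fragment below is a hypothetical sketch based on the description above; the node-level field placement is assumed, not a verified schema:

```yaml
name: pl-mixed-backends
dataframe_backend: PYSPARK    # pipeline-level default
nodes:
- name: brz_node
  # inherits PYSPARK from the pipeline level
- name: slv_node
  dataframe_backend: POLARS   # node-level override (assumed field placement)
```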
Examples:
This first example shows how to configure a simple pipeline with two nodes. Upon execution, raw data will be read from JSON files and two DataFrames (bronze and silver) will be created and saved as parquet files. Notice how the first node is used as a data source for the second node. Polars is used as the DataFrame backend.
```python
import io

import laktory as lk

pipeline_yaml = '''
name: pl-stock-prices
dataframe_backend: POLARS
nodes:
- name: brz_stock_prices
  source:
    path: ./data/stock_prices/
    format: JSONL
  sinks:
  - path: ./data/brz_stock_prices.parquet
    format: PARQUET

- name: slv_stock_prices
  source:
    node_name: brz_stock_prices
    as_stream: false
  sinks:
  - path: ./data/slv_stock_prices.parquet
    format: PARQUET
  transformer:
    nodes:
    - expr: |
        SELECT
          CAST(data.created_at AS TIMESTAMP) AS created_at,
          data.symbol AS name,
          data.symbol AS symbol,
          data.open AS open,
          data.close AS close,
          data.high AS high,
          data.low AS low,
          data.volume AS volume
        FROM
          {df}
    - func_name: unique
      func_kwargs:
        subset:
        - symbol
        - created_at
        keep: any
'''

pl = lk.models.Pipeline.model_validate_yaml(io.StringIO(pipeline_yaml))

# Execute pipeline
# pl.execute()
```
The next example also defines a two-node pipeline, but uses PySpark as the DataFrame backend. It defines the configuration required to deploy it as a Databricks job. In this case, the sinks write to Unity Catalog tables.
```python
import io

import laktory as lk

pipeline_yaml = '''
name: pl-stocks-job
dataframe_backend: PYSPARK
orchestrator:
  type: DATABRICKS_JOB
  serverless_environment_version: "5"
nodes:
- name: brz_stock_prices
  source:
    path: dbfs:/laktory/data/stock_prices/
    as_stream: false
    format: JSONL
  sinks:
  - table_name: brz_stock_prices_job
    mode: OVERWRITE

- name: slv_stock_prices
  expectations:
  - name: positive_price
    expr: open > 0
    action: DROP
  source:
    node_name: brz_stock_prices
    as_stream: false
  sinks:
  - table_name: slv_stock_prices_job
    mode: OVERWRITE
  transformer:
    nodes:
    - expr: |
        SELECT
          cast(data.created_at AS TIMESTAMP) AS created_at,
          data.symbol AS symbol,
          data.open AS open,
          data.close AS close
        FROM
          {df}
    - func_name: drop_duplicates
      func_kwargs:
        subset: ["created_at", "symbol"]
      dataframe_api: NATIVE
'''

pl = lk.models.Pipeline.model_validate_yaml(io.StringIO(pipeline_yaml))
```
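The `positive_price` expectation above drops rows that fail `open > 0`. Independently of any DataFrame backend, the DROP action semantics can be sketched in plain Python (the rows below are hypothetical, for illustration only):

```python
rows = [
    {"symbol": "AAPL", "open": 186.1},
    {"symbol": "GOOGL", "open": -1.0},  # fails the expectation
    {"symbol": "MSFT", "open": 0.0},    # fails the expectation
]

# DROP action: rows violating the expectation are removed and the
# remainder of the pipeline proceeds with only the valid rows.
valid = [r for r in rows if r["open"] > 0]
print([r["symbol"] for r in valid])  # ['AAPL']
```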
| PARAMETER | DESCRIPTION |
|---|---|
| `databricks_quality_monitor_enabled` | Enable Databricks Quality Monitor. When enabled, quality monitors are created for each sink configured with a quality monitor and deleted for sinks without one. |
| `dependencies` | List of dependencies required to run the pipeline. If Laktory is not provided, its current version is added to the list. |
| `imports` | List of modules to import before execution. Generally used to load Narwhals extensions. |
| `name` | Name of the pipeline. |
| `nodes` | List of pipeline nodes. Each node defines a data source, a series of transformations and, optionally, a sink. |
| `orchestrator` | Orchestrator used for scheduling and executing the pipeline. The selected option defines which resources are deployed. Supported options include the Databricks Job and DLT (Lakeflow Declarative Pipeline) orchestrators. |
| `root_path_` | Location of the pipeline node root used to store logs, metrics and checkpoints. |
| METHOD | DESCRIPTION |
|---|---|
| `dag_figure` | [UNDER DEVELOPMENT] Generate a figure representation of the pipeline DAG. |
| `execute` | Execute the pipeline (read sources and write sinks) by sequentially executing each node. |
| `get_execution_plan` | Build the plan describing which nodes would be executed, without running them. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `additional_core_resources` | Resources deployed in addition to the pipeline; depends on the selected orchestrator. |
| `dag` | Networkx Directed Acyclic Graph representation of the pipeline. Useful to identify interdependencies between nodes. |
| `is_orchestrator_dlt` | If `True`, pipeline orchestrator is DLT. |
| `nodes_dict` | Nodes dictionary whose keys are the node names. |
| `resource_type_id` | `pl` |
| `sorted_nodes` | Topologically sorted nodes. |
additional_core_resources
property

if orchestrator is DLT:
- DLT Pipeline

if orchestrator is DATABRICKS_JOB:
- Databricks Job
dag
property

Networkx Directed Acyclic Graph representation of the pipeline. Useful to identify interdependencies between nodes.

| RETURNS | DESCRIPTION |
|---|---|
| `DiGraph` | Directed Acyclic Graph |
is_orchestrator_dlt
property

If `True`, pipeline orchestrator is DLT.
nodes_dict
property

Nodes dictionary whose keys are the node names.

| RETURNS | DESCRIPTION |
|---|---|
| `dict[str, PipelineNode]` | Nodes |
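Conceptually, `nodes_dict` is the usual name-keyed view of the node list. A minimal sketch with a stand-in node class (hypothetical, not the actual `PipelineNode` implementation):

```python
class Node:
    """Stand-in for a pipeline node; only the name attribute is modeled."""

    def __init__(self, name):
        self.name = name


nodes = [Node("brz_stock_prices"), Node("slv_stock_prices")]

# Equivalent of nodes_dict: map each node name to its node object.
nodes_dict = {n.name: n for n in nodes}
print(sorted(nodes_dict))  # ['brz_stock_prices', 'slv_stock_prices']
```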
resource_type_id
property

`pl`
sorted_nodes
property

Topologically sorted nodes.

| RETURNS | DESCRIPTION |
|---|---|
| `list[PipelineNode]` | List of topologically sorted nodes. |
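Independently of Laktory, the ordering that `sorted_nodes` exposes can be illustrated with the standard library's `graphlib`. The sketch below mirrors the two-node example pipeline, where `slv_stock_prices` reads from `brz_stock_prices`:

```python
from graphlib import TopologicalSorter

# Map each node to the set of upstream nodes it depends on.
deps = {"slv_stock_prices": {"brz_stock_prices"}}

# A topological sort yields an order in which every node comes
# after all of its upstream dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['brz_stock_prices', 'slv_stock_prices']
```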
dag_figure()

[UNDER DEVELOPMENT] Generate a figure representation of the pipeline DAG.

| RETURNS | DESCRIPTION |
|---|---|
| `Figure` | Plotly figure representation of the pipeline. |
Source code in laktory/models/pipeline/pipeline.py
execute(write_sinks=True, full_refresh=False, named_dfs=None, update_tables_metadata=True, selects=None)
Execute the pipeline (read sources and write sinks) by sequentially executing each node. The selected orchestrator might impact how data sources or sinks are processed.
| PARAMETER | DESCRIPTION |
|---|---|
| `write_sinks` | If `True`, the output of each node is written to its configured sinks. DEFAULT: `True` |
| `full_refresh` | If `True`, existing data and checkpoints are reset and all nodes are re-processed from scratch. DEFAULT: `False` |
| `named_dfs` | Named DataFrames to be passed to pipeline nodes transformer. DEFAULT: `None` |
| `update_tables_metadata` | Update tables metadata. DEFAULT: `True` |
| `selects` | List of node names with optional dependency notation. DEFAULT: `None` |
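The sequential execution described above can be sketched in plain Python. This is a conceptual illustration, not Laktory's actual implementation; the nodes here are hypothetical dicts with read/transform/write hooks:

```python
def execute(sorted_nodes, write_sinks=True):
    """Sketch of sequential execution: each node reads its source,
    applies its transformations, then optionally writes its sinks."""
    outputs = {}
    for node in sorted_nodes:
        df = node["read"](outputs)  # read source (may be an upstream node)
        df = node["transform"](df)  # apply transformations
        if write_sinks:
            node["write"](df)       # write to the node's sinks
        outputs[node["name"]] = df
    return outputs


sinks = {}
nodes = [
    {
        "name": "brz",
        "read": lambda outputs: [1, 2, 2, 3],
        "transform": lambda df: df,
        "write": lambda df: sinks.__setitem__("brz", df),
    },
    {
        "name": "slv",
        "read": lambda outputs: outputs["brz"],   # node used as a source
        "transform": lambda df: sorted(set(df)),  # e.g. drop duplicates
        "write": lambda df: sinks.__setitem__("slv", df),
    },
]

execute(nodes)
print(sinks["slv"])  # [1, 2, 3]
```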
Source code in laktory/models/pipeline/pipeline.py
get_execution_plan(selects=None)
Build the pipeline execution plan, i.e. determine which nodes would be executed, and in which order, for the provided selection, without reading sources or writing sinks.
| PARAMETER | DESCRIPTION |
|---|---|
| `selects` | List of node names with optional dependency notation. DEFAULT: `None` |
Source code in laktory/models/pipeline/pipeline.py