# PipelineNode

`laktory.models.pipeline.PipelineNode`

Bases: `BaseModel`, `PipelineChild`

Pipeline base component generating a DataFrame by reading a data source and applying a transformer (a chain of DataFrame transformations), with optional output to one or more data sinks.
Examples:
A node reading stock prices data from JSON files and writing the DataFrame to a Parquet file.
```py
import io

import laktory as lk

node_yaml = '''
name: brz_stock_prices
source:
  path: "./events/stock_prices/"
  format: JSON
sinks:
- path: ./tables/brz_stock_prices/
  format: PARQUET
'''

node = lk.models.PipelineNode.model_validate_yaml(io.StringIO(node_yaml))

# node.execute()
```
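The same node can also be built directly in Python. This is a minimal sketch assuming the nested dictionaries are coerced into the source and sink models by pydantic, which is standard pydantic behavior:

```py
import laktory as lk

# Equivalent node built from plain dictionaries instead of YAML.
# Assumes pydantic coerces the nested dicts into the source/sink models.
node = lk.models.PipelineNode(
    name="brz_stock_prices",
    source={"path": "./events/stock_prices/", "format": "JSON"},
    sinks=[{"path": "./tables/brz_stock_prices/", "format": "PARQUET"}],
)
```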
A node reading stock prices from an upstream node and writing a DataFrame to a data table.
```py
import io

import laktory as lk

node_yaml = '''
name: slv_stock_prices
source:
  node_name: brz_stock_prices
sinks:
- schema_name: finance
  table_name: slv_stock_prices
transformer:
  nodes:
  - expr: |
      SELECT
        data.created_at AS created_at,
        data.symbol AS symbol,
        data.open AS open,
        data.close AS close,
        data.high AS high,
        data.low AS low,
        data.volume AS volume
      FROM
        {df}
  - func_name: drop_duplicates
    func_kwargs:
      subset:
      - symbol
      - timestamp
'''

node = lk.models.PipelineNode.model_validate_yaml(io.StringIO(node_yaml))

# node.execute()
```
| PARAMETER | DESCRIPTION |
|---|---|
| `comment` | Comment for the associated table or view. |
| `dlt_template` | Template (notebook) to use when a Databricks pipeline is selected as the orchestrator. |
| `execution_task_name_` | Execution task name when the orchestrator (such as Databricks Jobs or Airflow) supports multi-task execution. Nodes with the same task name are executed together in a single task. |
| `expectations` | List of data expectations. Can trigger warnings, drop invalid records, or fail a pipeline. |
| `expectations_checkpoint_path_` | Path to which the checkpoint file for expectations on a streaming DataFrame should be written. |
| `name` | Name given to the node. |
| `primary_keys` | A list of column names that uniquely identify each row in the DataFrame. These columns document the uniqueness constraints of the node's output data, define the default primary keys for sink CDC merge operations, and are referenced in expectations and unit tests. |
| `root_path_` | Location of the pipeline node root used to store logs, metrics, and checkpoints. |
| `sinks` | Definition of the data sink(s). |
| `source` | Definition of the data source(s). |
| `tags` | Node tags for selective execution. |
| `time_column` | The name of the column that represents the timestamp or temporal dimension in the DataFrame. This column documents the time-based ordering and filtering semantics of the node's output data, enables time-aware operations such as point-in-time joins, incremental processing, and time-series analysis, and serves as a reference in expectations, unit tests, and feature engineering workflows. While optional, specifying `time_column` helps ensure consistency in time-based logic and improves the reliability of downstream operations that depend on temporal alignment. |
| `transformer` | Data transformations applied between the source and the sink(s). |
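As a sketch of how several of these parameters combine in a single node definition (the expectation fields `name`, `expr`, and `action` are assumptions about laktory's data quality model, not taken from this page):

```py
import io

import laktory as lk

node_yaml = '''
name: slv_stock_prices
source:
  node_name: brz_stock_prices
primary_keys:
- symbol
- timestamp
time_column: timestamp
expectations:
- name: positive_close    # assumed expectation schema
  expr: close > 0
  action: DROP
sinks:
- schema_name: finance
  table_name: slv_stock_prices
'''

node = lk.models.PipelineNode.model_validate_yaml(io.StringIO(node_yaml))
```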
| METHOD | DESCRIPTION |
|---|---|
| `check_expectations` | Check expectations, raise errors and warnings where required, and build filtered and quarantine DataFrames. |
| `execute` | Execute the pipeline node: read the source, apply the transformer, check expectations, and write the sinks. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `all_sinks` | List of all sinks (output and quarantine). |
| `data_sources` | All sources feeding the pipeline node. |
| `has_output_sinks` | `True` if the node has at least one output sink. |
| `has_sinks` | `True` if the node has at least one sink. |
| `is_orchestrator_dlt` | `True` if the pipeline node is used in the context of a DLT pipeline. |
| `output_df` | DataFrame resulting from reading the source, applying the transformer, and dropping rows not meeting data quality expectations. |
| `output_sinks` | List of sinks writing the output DataFrame. |
| `primary_sink` | Primary output sink used as a source for downstream nodes. |
| `quarantine_df` | DataFrame storing `stage_df` rows not meeting data quality expectations. |
| `quarantine_sinks` | List of sinks writing the quarantine DataFrame. |
| `sinks_count` | Total number of sinks. |
| `stage_df` | DataFrame resulting from reading the source and applying the transformer, before data quality checks are applied. |
| `upstream_node_names` | Pipeline node names required to execute the current node. |
| `view_definition` | Transformer view definition (when applicable). |
## `all_sinks` *(property)*

List of all sinks (output and quarantine).

## `data_sources` *(property)*

All sources feeding the pipeline node.

## `has_output_sinks` *(property)*

`True` if the node has at least one output sink.

## `has_sinks` *(property)*

`True` if the node has at least one sink.

## `is_orchestrator_dlt` *(property)*

`True` if the pipeline node is used in the context of a DLT pipeline.

## `output_df` *(property)*

DataFrame resulting from reading the source, applying the transformer, and dropping rows not meeting data quality expectations.

## `output_sinks` *(property)*

List of sinks writing the output DataFrame.

## `primary_sink` *(property)*

Primary output sink used as a source for downstream nodes.

## `quarantine_df` *(property)*

DataFrame storing `stage_df` rows not meeting data quality expectations.

## `quarantine_sinks` *(property)*

List of sinks writing the quarantine DataFrame.

## `sinks_count` *(property)*

Total number of sinks.

## `stage_df` *(property)*

DataFrame resulting from reading the source and applying the transformer, before data quality checks are applied.

## `upstream_node_names` *(property)*

Pipeline node names required to execute the current node.

## `view_definition` *(property)*

Transformer view definition (when applicable).
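A quick introspection sketch using a few of these properties (assuming `node` was defined as in the second example above):

```py
print(node.upstream_node_names)  # ["brz_stock_prices"]
print(node.sinks_count)          # 1 (the finance.slv_stock_prices table sink)
print(node.has_output_sinks)     # True
```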
## `check_expectations()`

Check expectations, raise errors and warnings where required, and build the filtered and quarantine DataFrames.

Some actions have to be disabled when the selected orchestrator is Databricks DLT:

- Raising an error on failure when the expectation is supported by DLT
- Dropping rows when the expectation is supported by DLT
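In practice, `check_expectations` is invoked for you by `execute`; its outcome is exposed through the node's DataFrame properties. A minimal sketch using the documented properties:

```py
# After node.execute(), expectation results determine the two output
# DataFrames exposed by the node:
df_ok = node.output_df       # rows meeting the data quality expectations
df_bad = node.quarantine_df  # stage_df rows failing the expectations
```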
Source code in `laktory/models/pipeline/pipelinenode.py`
## `execute(apply_transformer=True, write_sinks=True, full_refresh=False, named_dfs=None, update_tables_metadata=True)`

Execute the pipeline node by:

- Reading the source
- Applying the user-defined (and, if applicable, layer-specific) transformations
- Checking expectations
- Writing the sinks
| PARAMETER | DESCRIPTION |
|---|---|
| `apply_transformer` | Flag to apply the transformer in the execution. |
| `write_sinks` | Flag to include writing the sinks in the execution. |
| `full_refresh` | Flag to perform a full refresh. |
| `named_dfs` | Named DataFrames passed to transformer nodes. |
| `update_tables_metadata` | Flag to update tables metadata. |
| RETURNS | DESCRIPTION |
|---|---|
| `AnyFrame` | Output Spark DataFrame |
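A minimal usage sketch (assuming `node` was built as in the examples above and the required DataFrame engine is configured):

```py
# Run the node but skip writing the sinks, e.g. to inspect the output.
df = node.execute(
    apply_transformer=True,  # apply the transformer chain
    write_sinks=False,       # return the DataFrame without writing sinks
    full_refresh=False,
)
```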
Source code in `laktory/models/pipeline/pipelinenode.py`