Pipeline cluster

laktory.models.resources.databricks.pipeline.PipelineCluster

Bases: Cluster

Pipeline cluster. Same attributes as laktory.models.Cluster, except for the following, which are not allowed:

- autotermination_minutes
- cluster_id
- data_security_mode
- enable_elastic_disk
- idempotency_token
- is_pinned
- libraries
- no_wait
- node_type_id
- runtime_engine
- single_user_name
- spark_version
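As a minimal sketch, a pipeline cluster can be defined in Python using the module path shown above (the attribute values are illustrative):

```python
from laktory.models.resources.databricks.pipeline import PipelineCluster

# Illustrative values only. Attributes from the "not allowed" list above
# (e.g. spark_version, node_type_id) must not be set on a pipeline cluster.
cluster = PipelineCluster(
    name="default",
    num_workers=2,
    spark_conf={"spark.sql.shuffle.partitions": "8"},
)
```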
| PARAMETER | DESCRIPTION |
|---|---|
| resource_name_ | Name of the resource in the context of infrastructure as code. If None, a default name is built from the resource type id and resource key. |
| options | Resource options specifications |
| lookup_existing | Specifications for looking up existing resource. Other attributes will be ignored. |
| variables | Dict of variables to be injected in the model at runtime |
| access_controls | List of access controls |
| apply_policy_default_values | Whether to use policy default values for missing cluster attributes. |
| autoscale | Autoscale specifications |
| autotermination_minutes | Not allowed for pipeline clusters. |
| cluster_id | Not allowed for pipeline clusters. |
| custom_tags | Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS EC2 instances and EBS volumes) with these tags in addition to default_tags. If a custom cluster tag has the same name as a default cluster tag, the custom tag is prefixed with an x_ when it is propagated. |
| data_security_mode | Not allowed for pipeline clusters. |
| driver_instance_pool_id | Similar to instance_pool_id, but for the driver node. If omitted and instance_pool_id is specified, the driver will be allocated from that pool. |
| driver_node_type_id | The node type of the Spark driver. This field is optional; if unset, the API will set the driver node type to the same value as node_type_id defined above. |
| enable_elastic_disk | Not allowed for pipeline clusters. |
| enable_local_disk_encryption | Some instance types used to run clusters may have locally attached disks, on which Databricks may store shuffle or temporary data. To ensure that all data at rest is encrypted, including shuffle data stored temporarily on local disks, you can enable local disk encryption. When enabled, Databricks generates an encryption key locally, unique to each cluster node, and uses it to encrypt all data stored on local disks. The scope of the key is local to each cluster node, and the key is destroyed along with the cluster node itself. During its lifetime, the key resides in memory for encryption and decryption and is stored encrypted on the disk. Workloads may run more slowly because of the performance impact of reading and writing encrypted data to and from local volumes. This feature is not available for all Azure Databricks subscriptions; contact your Microsoft or Databricks account representative to request access. |
| idempotency_token | Not allowed for pipeline clusters. |
| init_scripts | List of init script specifications |
| instance_pool_id | To reduce cluster start time, you can attach a cluster to a predefined pool of idle instances. When attached to a pool, a cluster allocates its driver and worker nodes from the pool. If the pool does not have sufficient idle resources to accommodate the cluster's request, it expands by allocating new instances from the instance provider. When an attached cluster changes its state to TERMINATED, the instances it used are returned to the pool and can be reused by a different cluster. |
| is_pinned | Not allowed for pipeline clusters. |
| libraries | Not allowed for pipeline clusters. |
| name | Cluster name, which doesn't have to be unique. If not specified at creation, the cluster name will be an empty string. |
| node_type_id | Not allowed for pipeline clusters. |
| no_wait | Not allowed for pipeline clusters. |
| num_workers | Number of worker nodes that this cluster should have. A cluster has one Spark driver and num_workers executors, for a total of num_workers + 1 Spark nodes. |
| policy_id | Cluster policy id. |
| runtime_engine | Not allowed for pipeline clusters. |
| single_user_name | Not allowed for pipeline clusters. |
| spark_conf | Map with key-value pairs to fine-tune Spark clusters, where you can provide custom Spark configuration properties in a cluster configuration. |
| spark_env_vars | Map with environment variable key-value pairs to fine-tune Spark clusters. Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers. |
| spark_version | Not allowed for pipeline clusters. |
| ssh_public_keys | SSH public key contents that will be added to each Spark node in this cluster. The corresponding private keys can be used to log in with the user name ubuntu on port 2200. You can specify up to 10 keys. |
| METHOD | DESCRIPTION |
|---|---|
| inject_vars | Inject model variable values into model attributes. |
| inject_vars_into_dump | Inject model variable values into a model dump. |
| model_validate_json_file | Load model from a json file object. |
| model_validate_yaml | Load model from a yaml file object using laktory.yaml.RecursiveLoader. Supports references to external yaml and sql files using the !use, !extend and !update tags. |
| push_vars | Push variable values to all children recursively. |
| validate_assignment_disabled | Temporarily disable validate_assignment, which would otherwise cause infinite recursion when a model attribute is updated inside a model validator. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| additional_core_resources | permissions |
| core_resources | List of core resources to be deployed with this laktory model |
| pulumi_properties | Resources properties formatted for pulumi |
| resource_key | Resource key used to build default resource name. Equivalent to name property if available; otherwise, empty string. |
| resource_type_id | Resource type id used to build default resource name. Equivalent to class name converted to kebab case. |
| self_as_core_resources | Flag set to True if self must be included in core resources |
| terraform_properties | Resources properties formatted for terraform |
additional_core_resources (property)

Resources deployed in addition to this model:

- permissions
core_resources (property)

List of core resources to be deployed with this laktory model:

- class instance (self)
pulumi_properties (property)

Resources properties formatted for pulumi:

- Serialization (model dump)
- Removal of excludes defined in self.pulumi_excludes
- Renaming of keys according to self.pulumi_renames
- Injection of variables

| RETURNS | DESCRIPTION |
|---|---|
| dict | Pulumi-safe model dump |
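A minimal usage sketch, assuming `cluster` is the `PipelineCluster` instance from the sketch near the top of this page; `terraform_properties` behaves analogously:

```python
# The returned dict has gone through the dump/exclude/rename/injection
# steps listed above and can be passed to the matching pulumi resource.
props = cluster.pulumi_properties
print(type(props))  # <class 'dict'>
```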
resource_key (property)

Resource key used to build default resource name. Equivalent to name property if available; otherwise, empty string.
resource_type_id (property)

Resource type id used to build default resource name. Equivalent to the class name converted to kebab case, e.g. SecretScope -> secret-scope.
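One way to express this conversion (a sketch, not necessarily laktory's implementation):

```python
import re


def to_kebab_case(name: str) -> str:
    # Insert a hyphen before every capital letter except the first,
    # then lowercase: "SecretScope" -> "secret-scope"
    return re.sub(r"(?<!^)(?=[A-Z])", "-", name).lower()


print(to_kebab_case("SecretScope"))  # secret-scope
print(to_kebab_case("PipelineCluster"))  # pipeline-cluster
```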
self_as_core_resources (property)

Flag set to True if self must be included in core resources.
terraform_properties (property)

Resources properties formatted for terraform:

- Serialization (model dump)
- Removal of excludes defined in self.terraform_excludes
- Renaming of keys according to self.terraform_renames
- Injection of variables

| RETURNS | DESCRIPTION |
|---|---|
| dict | Terraform-safe model dump |
inject_vars(inplace=False, vars=None)

Inject model variable values into model attributes.

| PARAMETER | DESCRIPTION |
|---|---|
| inplace | If True, the model is updated in place. Otherwise, a new model with injected variables is returned. |
| vars | A dictionary of variables to be injected in addition to the model internal variables. |

| RETURNS | DESCRIPTION |
|---|---|
| | Model instance. |
Examples:

```python
from typing import Union

from laktory import models


class Cluster(models.BaseModel):
    name: str = None
    size: Union[int, str] = None


c = Cluster(
    name="cluster-${vars.my_cluster}",
    size="${{ 4 if vars.env == 'prod' else 2 }}",
    variables={
        "env": "dev",
    },
).inject_vars()
print(c)
# > variables={'env': 'dev'} name='cluster-${vars.my_cluster}' size=2
```
inject_vars_into_dump(dump, inplace=False, vars=None)

Inject model variable values into a model dump.

| PARAMETER | DESCRIPTION |
|---|---|
| dump | Model dump (or any other general-purpose mutable object) |
| inplace | If True, the dump is updated in place. Otherwise, a new dump with injected variables is returned. |
| vars | A dictionary of variables to be injected in addition to the model internal variables. |

| RETURNS | DESCRIPTION |
|---|---|
| | Model dump with injected variables. |
Examples:

```python
from laktory import models

m = models.BaseModel(
    variables={
        "env": "dev",
    },
)
data = {
    "name": "cluster-${vars.my_cluster}",
    "size": "${{ 4 if vars.env == 'prod' else 2 }}",
}
print(m.inject_vars_into_dump(data))
# > {'name': 'cluster-${vars.my_cluster}', 'size': 2}
```
model_validate_json_file(fp) (classmethod)

Load model from a json file object.

| PARAMETER | DESCRIPTION |
|---|---|
| fp | File object structured as a json file |

| RETURNS | DESCRIPTION |
|---|---|
| Model | Model instance |
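A minimal usage sketch (the file name is hypothetical):

```python
from laktory.models.resources.databricks.pipeline import PipelineCluster

# "cluster.json" is a hypothetical file holding a serialized model
with open("cluster.json") as fp:
    cluster = PipelineCluster.model_validate_json_file(fp)
```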
model_validate_yaml(fp) (classmethod)

Load model from a yaml file object using laktory.yaml.RecursiveLoader. Supports references to external yaml and sql files using the !use, !extend and !update tags. Paths to external files can be defined using model or environment variables. Referenced paths should always be relative to the file they are referenced from.

| PARAMETER | DESCRIPTION |
|---|---|
| fp | File object structured as a yaml file |

| RETURNS | DESCRIPTION |
|---|---|
| Model | Model instance |
Examples:

```yaml
businesses:
  apple:
    symbol: aapl
    address: !use addresses.yaml
    <<: !update common.yaml
    emails:
      - jane.doe@apple.com
      - !extend emails.yaml
  amazon:
    symbol: amzn
    address: !use addresses.yaml
    <<: !update common.yaml
    emails:
      - john.doe@amazon.com
      - !extend emails.yaml
```
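And a minimal usage sketch (the file name is hypothetical):

```python
from laktory.models.resources.databricks.pipeline import PipelineCluster

# "cluster.yaml" is a hypothetical file; any !use, !extend or !update
# tags it contains are resolved by laktory.yaml.RecursiveLoader.
with open("cluster.yaml") as fp:
    cluster = PipelineCluster.model_validate_yaml(fp)
```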
push_vars(update_core_resources=False)

Push variable values to all children recursively.
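A minimal sketch, assuming `pipeline` is a hypothetical laktory model with nested child models:

```python
# Setting variables on the parent and pushing them down makes them
# resolvable by variable injection anywhere in the model tree.
pipeline.variables = {"env": "dev"}
pipeline.push_vars()
```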
validate_assignment_disabled()

Updating a model attribute inside a model validator when validate_assignment is True causes an infinite recursion by design; validate_assignment must therefore be turned off temporarily.
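A minimal sketch, assuming the method is used as a context manager inside a Pydantic model validator (the attribute and value are illustrative):

```python
# Temporarily disable validate_assignment so that the assignment below
# does not re-trigger validation and recurse.
with self.validate_assignment_disabled():
    self.name = "updated-name"
```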