This article provides example configurations for Databricks Asset Bundle features and common bundle use cases.
Tip
Some of the examples in this article, as well as others, can be found in the bundle-examples GitHub repository.
Job that uses serverless compute
Databricks Asset Bundles support jobs that run on serverless compute. To configure this, you can either omit the clusters setting for a job with a notebook task, or you can specify an environment, as shown in the examples below. For Python script, Python wheel, and dbt tasks, serverless compute requires an environment_key. See environment_key.
# A serverless job (no cluster definition)
resources:
  jobs:
    serverless_job_no_cluster:
      name: serverless_job_no_cluster
      email_notifications:
        on_failure:
          - someone@example.com
      tasks:
        - task_key: notebook_task
          notebook_task:
            notebook_path: ../src/notebook.ipynb
# A serverless job (environment spec)
resources:
  jobs:
    serverless_job_environment:
      name: serverless_job_environment
      tasks:
        - task_key: task
          spark_python_task:
            python_file: ../src/main.py
          # The key that references an environment spec in a job.
          # https://docs.databricks.com/api/workspace/jobs/create#tasks-environment_key
          environment_key: default
      # A list of task execution environment specifications that can be referenced by tasks of this job.
      environments:
        - environment_key: default
          # Full documentation of this spec can be found at:
          # https://docs.databricks.com/api/workspace/jobs/create#environments-spec
          spec:
            client: '1'
            dependencies:
              - my-library
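For a Python wheel task on serverless compute, the environment_key is required, as noted above. The following is a minimal sketch, not part of the original examples, of such a job; the package name, entry point, and dependency are hypothetical placeholders.

# A serverless job with a Python wheel task (hypothetical sketch)
resources:
  jobs:
    serverless_wheel_job:
      name: serverless_wheel_job
      tasks:
        - task_key: wheel_task
          python_wheel_task:
            package_name: my_package # hypothetical package name
            entry_point: main # hypothetical entry point
          # Required: wheel tasks on serverless compute must reference an environment.
          environment_key: default
      environments:
        - environment_key: default
          spec:
            client: '1'
            dependencies:
              - my-package # hypothetical dependency providing the wheel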
使用无服务器计算的管道
Databricks 资产捆绑包支持无服务器计算上运行的管道。 若要对此进行配置,请将管道 serverless
设置设置为 true
。 以下示例配置定义了一个在无服务器计算上运行的管道,以及一个每小时触发管道刷新的作业。
# A pipeline that runs on serverless compute
resources:
  pipelines:
    my_pipeline:
      name: my_pipeline
      target: ${bundle.environment}
      serverless: true
      catalog: users
      libraries:
        - notebook:
            path: ../src/my_pipeline.ipynb
      configuration:
        bundle.sourcePath: /Workspace/${workspace.file_path}/src
# This defines a job to refresh a pipeline that is triggered every hour
resources:
  jobs:
    my_job:
      name: my_job
      # Run this job once an hour.
      trigger:
        periodic:
          interval: 1
          unit: HOURS
      email_notifications:
        on_failure:
          - someone@example.com
      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            pipeline_id: ${resources.pipelines.my_pipeline.id}
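Note that ${resources.pipelines.my_pipeline.id} resolves to the ID of the pipeline defined above once the bundle is deployed, so the refresh job always targets the pipeline from the same bundle.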
Job with a SQL notebook
The following example configuration defines a job with a SQL notebook.
resources:
  jobs:
    job_with_sql_notebook:
      name: 'Job to demonstrate using a SQL notebook with a SQL warehouse'
      tasks:
        - task_key: notebook
          notebook_task:
            notebook_path: ./select.sql
            warehouse_id: 799f096837fzzzz4
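The warehouse_id value shown here is a placeholder; replace it with the ID of an existing SQL warehouse in your workspace.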
Job with multiple wheel files
The following example configuration defines a bundle that contains a job that uses multiple *.whl files.
# job.yml
resources:
  jobs:
    example_job:
      name: 'Example with multiple wheels'
      tasks:
        - task_key: task
          spark_python_task:
            python_file: ../src/call_wheel.py
          libraries:
            - whl: ../my_custom_wheel1/dist/*.whl
            - whl: ../my_custom_wheel2/dist/*.whl
          new_cluster:
            node_type_id: i3.xlarge
            num_workers: 0
            spark_version: 14.3.x-scala2.12
            spark_conf:
              'spark.databricks.cluster.profile': 'singleNode'
              'spark.master': 'local[*, 4]'
            custom_tags:
              'ResourceClass': 'SingleNode'
# databricks.yml
bundle:
  name: job_with_multiple_wheels

include:
  - ./resources/job.yml

workspace:
  host: https://myworkspace.cloud.databricks.com

artifacts:
  my_custom_wheel1:
    type: whl
    build: poetry build
    path: ./my_custom_wheel1
  my_custom_wheel2:
    type: whl
    build: poetry build
    path: ./my_custom_wheel2

targets:
  dev:
    default: true
    mode: development
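For orientation, the relative paths in this example imply a project layout roughly like the following, inferred from the configuration above (poetry build writes each wheel to its project's dist folder):

.
├── databricks.yml
├── resources/
│   └── job.yml
├── src/
│   └── call_wheel.py
├── my_custom_wheel1/
│   └── dist/ # created by `poetry build`
└── my_custom_wheel2/
    └── dist/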
Job that uses a requirements.txt file
The following example configuration defines a job that uses a requirements.txt file.
resources:
  jobs:
    job_with_requirements_txt:
      name: 'Example job that uses a requirements.txt file'
      tasks:
        - task_key: task
          job_cluster_key: default
          spark_python_task:
            python_file: ../src/main.py
          libraries:
            - requirements: /Workspace/${workspace.file_path}/requirements.txt
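The referenced requirements.txt is a standard pip requirements file that is synced to the workspace along with the rest of the bundle's files. Its contents might look like the following; the packages listed are hypothetical:

# requirements.txt (hypothetical contents)
pandas==2.1.4
requests>=2.31,<3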
Job on a schedule
The following examples show configurations for jobs that run on a schedule. For information about job schedules and triggers, see Automating jobs with schedules and triggers.
This configuration defines a job that runs daily at a specified time:
resources:
  jobs:
    my-notebook-job:
      name: my-notebook-job
      tasks:
        - task_key: my-notebook-task
          notebook_task:
            notebook_path: ./my-notebook.ipynb
      schedule:
        quartz_cron_expression: '0 0 8 * * ?' # daily at 8am
        timezone_id: UTC
        pause_status: UNPAUSED
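Quartz cron expressions use the field order seconds, minutes, hours, day-of-month, month, and day-of-week. As a hypothetical variation, a schedule block like the following would run the job on weekdays at 6:30 AM in a specific time zone:

schedule:
  quartz_cron_expression: '0 30 6 ? * MON-FRI' # weekdays at 6:30am
  timezone_id: America/New_York
  pause_status: UNPAUSED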
In this configuration, the job runs one week after its last run:
resources:
  jobs:
    my-notebook-job:
      name: my-notebook-job
      tasks:
        - task_key: my-notebook-task
          notebook_task:
            notebook_path: ./my-notebook.ipynb
      trigger:
        pause_status: UNPAUSED
        periodic:
          interval: 1
          unit: WEEKS
Bundle that uploads a JAR file to Unity Catalog
You can specify a Unity Catalog volume as an artifact path so that all artifacts, such as JAR files and wheel files, are uploaded to the volume. The following example bundle builds a JAR file and uploads it to Unity Catalog. For information about the artifact_path mapping, see artifact_path. For information about artifacts, see artifacts.
bundle:
  name: jar-bundle

workspace:
  host: https://myworkspace.cloud.databricks.com
  artifact_path: /Volumes/main/default/my_volume

artifacts:
  my_java_code:
    path: ./sample-java
    build: 'javac PrintArgs.java && jar cvfm PrintArgs.jar META-INF/MANIFEST.MF PrintArgs.class'
    files:
      - source: ./sample-java/PrintArgs.jar

resources:
  jobs:
    jar_job:
      name: 'Spark Jar Job'
      tasks:
        - task_key: SparkJarTask
          new_cluster:
            num_workers: 1
            spark_version: '14.3.x-scala2.12'
            node_type_id: 'i3.xlarge'
          spark_jar_task:
            main_class_name: PrintArgs
          libraries:
            - jar: ./sample-java/PrintArgs.jar
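When you deploy this bundle with databricks bundle deploy, the artifact's build command compiles the JAR, which is then uploaded to the Unity Catalog volume set in artifact_path. You can then run the job with databricks bundle run jar_job.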