databricks.DataQualityMonitor
This resource allows you to set up data quality monitoring checks for Unity Catalog objects, currently schemas and tables.
For the table object type, you must either:
- be an owner of the table's parent catalog, have USE_SCHEMA on the table's parent schema, and have SELECT on the table;
- have USE_CATALOG on the table's parent catalog, be an owner of the table's parent schema, and have SELECT on the table; or
- have USE_CATALOG on the table's parent catalog, USE_SCHEMA on the table's parent schema, and be an owner of the table (see the sketch below).
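For illustration, the first two privileges of the third option could be granted with the databricks.Grants resource. A minimal sketch, assuming a hypothetical my_team group and the catalog/schema names used in the examples below:

import * as databricks from "@pulumi/databricks";

// Hypothetical principal and object names; adjust to your workspace.
const catalogUse = new databricks.Grants("catalog-use", {
    catalog: "my_catalog",
    grants: [{ principal: "my_team", privileges: ["USE_CATALOG"] }],
});
const schemaUse = new databricks.Grants("schema-use", {
    schema: "my_catalog.my_schema",
    grants: [{ principal: "my_team", privileges: ["USE_SCHEMA"] }],
});
// Table ownership (the third requirement) is configured on the table itself,
// not through Grants.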
Note: This resource can only be used with a workspace-level provider.
Example Usage
import * as pulumi from "@pulumi/pulumi";
import * as databricks from "@pulumi/databricks";
const _this = new databricks.Schema("this", {
catalogName: "my_catalog",
name: "my_schema",
});
const thisDataQualityMonitor = new databricks.DataQualityMonitor("this", {
objectType: "schema",
objectId: _this.schemaId,
anomalyDetectionConfig: {},
});
import pulumi
import pulumi_databricks as databricks
this = databricks.Schema("this",
catalog_name="my_catalog",
name="my_schema")
this_data_quality_monitor = databricks.DataQualityMonitor("this",
object_type="schema",
object_id=this.schema_id,
anomaly_detection_config={})
package main
import (
"github.com/pulumi/pulumi-databricks/sdk/go/databricks"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
this, err := databricks.NewSchema(ctx, "this", &databricks.SchemaArgs{
CatalogName: pulumi.String("my_catalog"),
Name: pulumi.String("my_schema"),
})
if err != nil {
return err
}
_, err = databricks.NewDataQualityMonitor(ctx, "this", &databricks.DataQualityMonitorArgs{
ObjectType: pulumi.String("schema"),
ObjectId: this.SchemaId,
AnomalyDetectionConfig: &databricks.DataQualityMonitorAnomalyDetectionConfigArgs{},
})
if err != nil {
return err
}
return nil
})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Databricks = Pulumi.Databricks;
return await Deployment.RunAsync(() =>
{
var @this = new Databricks.Schema("this", new()
{
CatalogName = "my_catalog",
Name = "my_schema",
});
var thisDataQualityMonitor = new Databricks.DataQualityMonitor("this", new()
{
ObjectType = "schema",
ObjectId = @this.SchemaId,
AnomalyDetectionConfig = null,
});
});
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.databricks.Schema;
import com.pulumi.databricks.SchemaArgs;
import com.pulumi.databricks.DataQualityMonitor;
import com.pulumi.databricks.DataQualityMonitorArgs;
import com.pulumi.databricks.inputs.DataQualityMonitorAnomalyDetectionConfigArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
public static void main(String[] args) {
Pulumi.run(App::stack);
}
public static void stack(Context ctx) {
var this_ = new Schema("this", SchemaArgs.builder()
.catalogName("my_catalog")
.name("my_schema")
.build());
var thisDataQualityMonitor = new DataQualityMonitor("thisDataQualityMonitor", DataQualityMonitorArgs.builder()
.objectType("schema")
.objectId(this_.schemaId())
.anomalyDetectionConfig(DataQualityMonitorAnomalyDetectionConfigArgs.builder()
.build())
.build());
}
}
resources:
this:
type: databricks:Schema
properties:
catalogName: my_catalog
name: my_schema
thisDataQualityMonitor:
type: databricks:DataQualityMonitor
name: this
properties:
objectType: schema
objectId: ${this.schemaId}
anomalyDetectionConfig: {}
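The examples above create a schema-level monitor with anomaly detection. For a table-level monitor, a dataProfilingConfig is used instead. The following TypeScript sketch uses placeholder UUIDs, a hypothetical event_ts column, and illustrative granularity and cron literals:

import * as databricks from "@pulumi/databricks";

// Placeholder UUIDs; find the real table_id and schema_id in Catalog Explorer
// or via the Tables/Schemas APIs (see objectId below).
const tableMonitor = new databricks.DataQualityMonitor("table-monitor", {
    objectType: "table",
    objectId: "00000000-0000-0000-0000-000000000000",
    dataProfilingConfig: {
        outputSchemaId: "11111111-1111-1111-1111-111111111111",
        timeSeries: {
            timestampColumn: "event_ts", // hypothetical column
            granularities: ["1 day"],    // assumed granularity literal
        },
        schedule: {
            quartzCronExpression: "0 0 12 * * ?", // daily at noon
            timezoneId: "UTC",
        },
    },
});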
Create DataQualityMonitor Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
TypeScript:
new DataQualityMonitor(name: string, args: DataQualityMonitorArgs, opts?: CustomResourceOptions);

Python:
@overload
def DataQualityMonitor(resource_name: str,
                       args: DataQualityMonitorArgs,
                       opts: Optional[ResourceOptions] = None)
@overload
def DataQualityMonitor(resource_name: str,
                       opts: Optional[ResourceOptions] = None,
                       object_id: Optional[str] = None,
                       object_type: Optional[str] = None,
                       anomaly_detection_config: Optional[DataQualityMonitorAnomalyDetectionConfigArgs] = None,
                       data_profiling_config: Optional[DataQualityMonitorDataProfilingConfigArgs] = None)

Go:
func NewDataQualityMonitor(ctx *Context, name string, args DataQualityMonitorArgs, opts ...ResourceOption) (*DataQualityMonitor, error)

C#:
public DataQualityMonitor(string name, DataQualityMonitorArgs args, CustomResourceOptions? opts = null)

Java:
public DataQualityMonitor(String name, DataQualityMonitorArgs args)
public DataQualityMonitor(String name, DataQualityMonitorArgs args, CustomResourceOptions options)

YAML:
type: databricks:DataQualityMonitor
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name (resource_name in Python) — The unique name of the resource.
- args — The arguments to resource properties.
- opts (options in Java) — Bag of options to control the resource's behavior.
- ctx (Go only) — Context object for the current deployment.
Constructor example
The following reference example uses placeholder values for all input properties.
var dataQualityMonitorResource = new Databricks.DataQualityMonitor("dataQualityMonitorResource", new()
{
ObjectId = "string",
ObjectType = "string",
AnomalyDetectionConfig = null,
DataProfilingConfig = new Databricks.Inputs.DataQualityMonitorDataProfilingConfigArgs
{
OutputSchemaId = "string",
NotificationSettings = new Databricks.Inputs.DataQualityMonitorDataProfilingConfigNotificationSettingsArgs
{
OnFailure = new Databricks.Inputs.DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailureArgs
{
EmailAddresses = new[]
{
"string",
},
},
},
BaselineTableName = "string",
DashboardId = "string",
AssetsDir = "string",
EffectiveWarehouseId = "string",
InferenceLog = new Databricks.Inputs.DataQualityMonitorDataProfilingConfigInferenceLogArgs
{
Granularities = new[]
{
"string",
},
ModelIdColumn = "string",
PredictionColumn = "string",
ProblemType = "string",
TimestampColumn = "string",
LabelColumn = "string",
},
LatestMonitorFailureMessage = "string",
MonitorVersion = 0,
CustomMetrics = new[]
{
new Databricks.Inputs.DataQualityMonitorDataProfilingConfigCustomMetricArgs
{
Definition = "string",
InputColumns = new[]
{
"string",
},
Name = "string",
OutputDataType = "string",
Type = "string",
},
},
MonitoredTableName = "string",
DriftMetricsTableName = "string",
ProfileMetricsTableName = "string",
Schedule = new Databricks.Inputs.DataQualityMonitorDataProfilingConfigScheduleArgs
{
QuartzCronExpression = "string",
TimezoneId = "string",
PauseStatus = "string",
},
SkipBuiltinDashboard = false,
SlicingExprs = new[]
{
"string",
},
Snapshot = null,
Status = "string",
TimeSeries = new Databricks.Inputs.DataQualityMonitorDataProfilingConfigTimeSeriesArgs
{
Granularities = new[]
{
"string",
},
TimestampColumn = "string",
},
WarehouseId = "string",
},
});
example, err := databricks.NewDataQualityMonitor(ctx, "dataQualityMonitorResource", &databricks.DataQualityMonitorArgs{
ObjectId: pulumi.String("string"),
ObjectType: pulumi.String("string"),
AnomalyDetectionConfig: &databricks.DataQualityMonitorAnomalyDetectionConfigArgs{},
DataProfilingConfig: &databricks.DataQualityMonitorDataProfilingConfigArgs{
OutputSchemaId: pulumi.String("string"),
NotificationSettings: &databricks.DataQualityMonitorDataProfilingConfigNotificationSettingsArgs{
OnFailure: &databricks.DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailureArgs{
EmailAddresses: pulumi.StringArray{
pulumi.String("string"),
},
},
},
BaselineTableName: pulumi.String("string"),
DashboardId: pulumi.String("string"),
AssetsDir: pulumi.String("string"),
EffectiveWarehouseId: pulumi.String("string"),
InferenceLog: &databricks.DataQualityMonitorDataProfilingConfigInferenceLogArgs{
Granularities: pulumi.StringArray{
pulumi.String("string"),
},
ModelIdColumn: pulumi.String("string"),
PredictionColumn: pulumi.String("string"),
ProblemType: pulumi.String("string"),
TimestampColumn: pulumi.String("string"),
LabelColumn: pulumi.String("string"),
},
LatestMonitorFailureMessage: pulumi.String("string"),
MonitorVersion: pulumi.Int(0),
CustomMetrics: databricks.DataQualityMonitorDataProfilingConfigCustomMetricArray{
&databricks.DataQualityMonitorDataProfilingConfigCustomMetricArgs{
Definition: pulumi.String("string"),
InputColumns: pulumi.StringArray{
pulumi.String("string"),
},
Name: pulumi.String("string"),
OutputDataType: pulumi.String("string"),
Type: pulumi.String("string"),
},
},
MonitoredTableName: pulumi.String("string"),
DriftMetricsTableName: pulumi.String("string"),
ProfileMetricsTableName: pulumi.String("string"),
Schedule: &databricks.DataQualityMonitorDataProfilingConfigScheduleArgs{
QuartzCronExpression: pulumi.String("string"),
TimezoneId: pulumi.String("string"),
PauseStatus: pulumi.String("string"),
},
SkipBuiltinDashboard: pulumi.Bool(false),
SlicingExprs: pulumi.StringArray{
pulumi.String("string"),
},
Snapshot: &databricks.DataQualityMonitorDataProfilingConfigSnapshotArgs{},
Status: pulumi.String("string"),
TimeSeries: &databricks.DataQualityMonitorDataProfilingConfigTimeSeriesArgs{
Granularities: pulumi.StringArray{
pulumi.String("string"),
},
TimestampColumn: pulumi.String("string"),
},
WarehouseId: pulumi.String("string"),
},
})
var dataQualityMonitorResource = new DataQualityMonitor("dataQualityMonitorResource", DataQualityMonitorArgs.builder()
.objectId("string")
.objectType("string")
.anomalyDetectionConfig(DataQualityMonitorAnomalyDetectionConfigArgs.builder()
.build())
.dataProfilingConfig(DataQualityMonitorDataProfilingConfigArgs.builder()
.outputSchemaId("string")
.notificationSettings(DataQualityMonitorDataProfilingConfigNotificationSettingsArgs.builder()
.onFailure(DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailureArgs.builder()
.emailAddresses("string")
.build())
.build())
.baselineTableName("string")
.dashboardId("string")
.assetsDir("string")
.effectiveWarehouseId("string")
.inferenceLog(DataQualityMonitorDataProfilingConfigInferenceLogArgs.builder()
.granularities("string")
.modelIdColumn("string")
.predictionColumn("string")
.problemType("string")
.timestampColumn("string")
.labelColumn("string")
.build())
.latestMonitorFailureMessage("string")
.monitorVersion(0)
.customMetrics(DataQualityMonitorDataProfilingConfigCustomMetricArgs.builder()
.definition("string")
.inputColumns("string")
.name("string")
.outputDataType("string")
.type("string")
.build())
.monitoredTableName("string")
.driftMetricsTableName("string")
.profileMetricsTableName("string")
.schedule(DataQualityMonitorDataProfilingConfigScheduleArgs.builder()
.quartzCronExpression("string")
.timezoneId("string")
.pauseStatus("string")
.build())
.skipBuiltinDashboard(false)
.slicingExprs("string")
.snapshot(DataQualityMonitorDataProfilingConfigSnapshotArgs.builder()
.build())
.status("string")
.timeSeries(DataQualityMonitorDataProfilingConfigTimeSeriesArgs.builder()
.granularities("string")
.timestampColumn("string")
.build())
.warehouseId("string")
.build())
.build());
data_quality_monitor_resource = databricks.DataQualityMonitor("dataQualityMonitorResource",
object_id="string",
object_type="string",
anomaly_detection_config={},
data_profiling_config={
"output_schema_id": "string",
"notification_settings": {
"on_failure": {
"email_addresses": ["string"],
},
},
"baseline_table_name": "string",
"dashboard_id": "string",
"assets_dir": "string",
"effective_warehouse_id": "string",
"inference_log": {
"granularities": ["string"],
"model_id_column": "string",
"prediction_column": "string",
"problem_type": "string",
"timestamp_column": "string",
"label_column": "string",
},
"latest_monitor_failure_message": "string",
"monitor_version": 0,
"custom_metrics": [{
"definition": "string",
"input_columns": ["string"],
"name": "string",
"output_data_type": "string",
"type": "string",
}],
"monitored_table_name": "string",
"drift_metrics_table_name": "string",
"profile_metrics_table_name": "string",
"schedule": {
"quartz_cron_expression": "string",
"timezone_id": "string",
"pause_status": "string",
},
"skip_builtin_dashboard": False,
"slicing_exprs": ["string"],
"snapshot": {},
"status": "string",
"time_series": {
"granularities": ["string"],
"timestamp_column": "string",
},
"warehouse_id": "string",
})
const dataQualityMonitorResource = new databricks.DataQualityMonitor("dataQualityMonitorResource", {
objectId: "string",
objectType: "string",
anomalyDetectionConfig: {},
dataProfilingConfig: {
outputSchemaId: "string",
notificationSettings: {
onFailure: {
emailAddresses: ["string"],
},
},
baselineTableName: "string",
dashboardId: "string",
assetsDir: "string",
effectiveWarehouseId: "string",
inferenceLog: {
granularities: ["string"],
modelIdColumn: "string",
predictionColumn: "string",
problemType: "string",
timestampColumn: "string",
labelColumn: "string",
},
latestMonitorFailureMessage: "string",
monitorVersion: 0,
customMetrics: [{
definition: "string",
inputColumns: ["string"],
name: "string",
outputDataType: "string",
type: "string",
}],
monitoredTableName: "string",
driftMetricsTableName: "string",
profileMetricsTableName: "string",
schedule: {
quartzCronExpression: "string",
timezoneId: "string",
pauseStatus: "string",
},
skipBuiltinDashboard: false,
slicingExprs: ["string"],
snapshot: {},
status: "string",
timeSeries: {
granularities: ["string"],
timestampColumn: "string",
},
warehouseId: "string",
},
});
type: databricks:DataQualityMonitor
properties:
anomalyDetectionConfig: {}
dataProfilingConfig:
assetsDir: string
baselineTableName: string
customMetrics:
- definition: string
inputColumns:
- string
name: string
outputDataType: string
type: string
dashboardId: string
driftMetricsTableName: string
effectiveWarehouseId: string
inferenceLog:
granularities:
- string
labelColumn: string
modelIdColumn: string
predictionColumn: string
problemType: string
timestampColumn: string
latestMonitorFailureMessage: string
monitorVersion: 0
monitoredTableName: string
notificationSettings:
onFailure:
emailAddresses:
- string
outputSchemaId: string
profileMetricsTableName: string
schedule:
pauseStatus: string
quartzCronExpression: string
timezoneId: string
skipBuiltinDashboard: false
slicingExprs:
- string
snapshot: {}
status: string
timeSeries:
granularities:
- string
timestampColumn: string
warehouseId: string
objectId: string
objectType: string
DataQualityMonitor Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The DataQualityMonitor resource accepts the following input properties:
Property names below are shown in their TypeScript/YAML form; Python uses snake_case (object_id, object_type, anomaly_detection_config, data_profiling_config), and C#, Go, and Java use the equivalent PascalCase or camelCase names.

- objectId (string)
  The UUID of the monitored object: the schema_id for a schema, or the table_id for a table.
  Find the schema_id from either:
  - the [schema_id](https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the Schemas resource, or
  - Catalog Explorer: select the schema, open the Details tab, and read the Schema ID field.
  Find the table_id from either:
  - the [table_id](https://docs.databricks.com/api/workspace/tables/get#table_id) of the Tables resource, or
  - Catalog Explorer: select the table, open the Details tab, and read the Table ID field.
- objectType (string)
  The type of the monitored object. Can be one of the following: schema or table.
- anomalyDetectionConfig (DataQualityMonitorAnomalyDetectionConfig)
  Anomaly detection configuration; applicable to schema object types.
- dataProfilingConfig (DataQualityMonitorDataProfilingConfig)
  Data profiling configuration; applicable to table object types. Exactly one analysis configuration (inferenceLog, snapshot, or timeSeries) must be present.
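If the monitored table already exists outside of Pulumi, its ID can be looked up rather than hard-coded. A sketch under the assumption that the databricks.getTable data source exposes the table's ID as tableInfo.tableId (verify against your provider version):

import * as databricks from "@pulumi/databricks";

// Assumption: the databricks_table data source returns a tableInfo object
// containing a tableId field.
const existingTable = databricks.getTableOutput({ name: "my_catalog.my_schema.my_table" });
const monitor = new databricks.DataQualityMonitor("from-lookup", {
    objectType: "table",
    objectId: existingTable.apply(t => t.tableInfo?.tableId ?? ""),
    dataProfilingConfig: {
        outputSchemaId: "11111111-1111-1111-1111-111111111111", // placeholder schema_id
        snapshot: {},
    },
});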
Outputs
All input properties are implicitly available as output properties. Additionally, the DataQualityMonitor resource produces the following output properties:
- id (string)
  The provider-assigned unique ID for this managed resource.
Look up Existing DataQualityMonitor Resource
Get an existing DataQualityMonitor resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
TypeScript:
public static get(name: string, id: Input<ID>, state?: DataQualityMonitorState, opts?: CustomResourceOptions): DataQualityMonitor

Python:
@staticmethod
def get(resource_name: str,
        id: str,
        opts: Optional[ResourceOptions] = None,
        anomaly_detection_config: Optional[DataQualityMonitorAnomalyDetectionConfigArgs] = None,
        data_profiling_config: Optional[DataQualityMonitorDataProfilingConfigArgs] = None,
        object_id: Optional[str] = None,
        object_type: Optional[str] = None) -> DataQualityMonitor

Go:
func GetDataQualityMonitor(ctx *Context, name string, id IDInput, state *DataQualityMonitorState, opts ...ResourceOption) (*DataQualityMonitor, error)

C#:
public static DataQualityMonitor Get(string name, Input<string> id, DataQualityMonitorState? state, CustomResourceOptions? opts = null)

Java:
public static DataQualityMonitor get(String name, Output<String> id, DataQualityMonitorState state, CustomResourceOptions options)

YAML:
resources:
  _:
    type: databricks:DataQualityMonitor
    get:
      id: ${id}

Parameters:
- name (resource_name in Python) — The unique name of the resulting resource.
- id — The unique provider ID of the resource to look up.
- state — Any extra arguments used during the lookup.
- opts — A bag of options that control this resource's behavior.
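For example, in TypeScript — a sketch that assumes the resource ID uses the same "object_type,object_id" form as the import ID documented below:

import * as databricks from "@pulumi/databricks";

// Adopt an existing monitor into the program without recreating it.
const existing = databricks.DataQualityMonitor.get(
    "existing-monitor",
    "schema,00000000-0000-0000-0000-000000000000", // assumed ID format
);
export const monitoredObjectId = existing.objectId;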
The following state arguments are supported:

- anomalyDetectionConfig (DataQualityMonitorAnomalyDetectionConfig)
  Anomaly detection configuration; applicable to schema object types.
- dataProfilingConfig (DataQualityMonitorDataProfilingConfig)
  Data profiling configuration; applicable to table object types. Exactly one analysis configuration (inferenceLog, snapshot, or timeSeries) must be present.
- objectId (string)
  The UUID of the monitored object: the schema_id for a schema, or the table_id for a table. See the Inputs section above for how to find these IDs.
- objectType (string)
  The type of the monitored object. Can be one of the following: schema or table.
Supporting Types
DataQualityMonitorDataProfilingConfig, DataQualityMonitorDataProfilingConfigArgs
- outputSchemaId (string)
  ID of the schema where output tables are created.
- assetsDir (string)
  Absolute path to a custom directory for storing data-monitoring assets. Normally prepopulated with a default user location via the UI and Python APIs.
- baselineTableName (string)
  Baseline table name. Baseline data is used to compute drift from the data in the monitored table; the baseline table and the monitored table must have the same schema.
- customMetrics (list of DataQualityMonitorDataProfilingConfigCustomMetric)
  Custom metrics.
- dashboardId (string)
- driftMetricsTableName (string)
- effectiveWarehouseId (string)
- inferenceLog (DataQualityMonitorDataProfilingConfigInferenceLog)
  Analysis configuration for monitoring inference log tables.
- latestMonitorFailureMessage (string)
- monitorVersion (int)
- monitoredTableName (string)
- notificationSettings (DataQualityMonitorDataProfilingConfigNotificationSettings)
  Notification settings.
- profileMetricsTableName (string)
- schedule (DataQualityMonitorDataProfilingConfigSchedule)
  The cron schedule.
- skipBuiltinDashboard (bool)
  Whether to skip creating a default dashboard summarizing data quality metrics.
- slicingExprs (list of string)
  List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complement. For example, slicing_exprs=["col_1", "col_2 > 10"] generates the following slices: two slices for col_2 > 10 (true and false), and one slice per unique value in col_1. For high-cardinality columns, only the top 100 unique values by frequency generate slices.
- snapshot (DataQualityMonitorDataProfilingConfigSnapshot)
  Analysis configuration for monitoring snapshot tables.
- status (string)
- timeSeries (DataQualityMonitorDataProfilingConfigTimeSeries)
  Analysis configuration for monitoring time series tables.
- warehouseId (string)
  Optional warehouse to use for dashboard creation. If not specified, the first running warehouse is used.
DataQualityMonitorDataProfilingConfigCustomMetric, DataQualityMonitorDataProfilingConfigCustomMetricArgs
- definition (string)
  Jinja template for a SQL expression that specifies how to compute the metric. See create metric definition.
- inputColumns (list of string)
  A list of column names in the input table the metric should be computed for. Use ":table" to indicate that the metric needs information from multiple columns.
- name (string)
  Name of the metric in the output tables.
- outputDataType (string)
  The output type of the custom metric.
- type (string)
  The type of the custom metric. Possible values: DATA_PROFILING_CUSTOM_METRIC_TYPE_AGGREGATE, DATA_PROFILING_CUSTOM_METRIC_TYPE_DERIVED, DATA_PROFILING_CUSTOM_METRIC_TYPE_DRIFT.
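Taken together, a custom aggregate metric might be declared as follows in TypeScript. The {{input_column}} Jinja placeholder and the "double" output type are assumptions drawn from Databricks' custom-metric conventions, not from this page:

import * as databricks from "@pulumi/databricks";

// Percentage of NULLs in each listed input column (assumed conventions).
const pctNull: databricks.types.input.DataQualityMonitorDataProfilingConfigCustomMetric = {
    name: "pct_null",
    // Jinja-templated SQL; {{input_column}} expands per entry in inputColumns.
    definition: "100.0 * sum(case when {{input_column}} is null then 1 else 0 end) / count(*)",
    inputColumns: ["email"],
    outputDataType: "double",
    type: "DATA_PROFILING_CUSTOM_METRIC_TYPE_AGGREGATE",
};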
DataQualityMonitorDataProfilingConfigInferenceLog, DataQualityMonitorDataProfilingConfigInferenceLogArgs
- granularities (list of string)
- modelIdColumn (string)
  Column for the model identifier.
- predictionColumn (string)
  Column for the prediction.
- problemType (string)
  Problem type the model aims to solve. Possible values: INFERENCE_PROBLEM_TYPE_CLASSIFICATION, INFERENCE_PROBLEM_TYPE_REGRESSION.
- timestampColumn (string)
- labelColumn (string)
  Column for the label.
DataQualityMonitorDataProfilingConfigNotificationSettings, DataQualityMonitorDataProfilingConfigNotificationSettingsArgs
- onFailure (DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure)
  Destinations to send notifications to on failure or timeout.
DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure, DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailureArgs
- emailAddresses (list of string)
  The list of email addresses to send the notification to. A maximum of 5 email addresses is supported.
DataQualityMonitorDataProfilingConfigSchedule, DataQualityMonitorDataProfilingConfigScheduleArgs
- quartzCronExpression (string)
  The expression that determines when to run the monitor. See examples.
- timezoneId (string)
  A Java timezone ID (e.g., America/Los_Angeles) in which to evaluate the Quartz cron expression; the schedule for a job is resolved with respect to this timezone. See [Java TimeZone](http://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html) for details.
- pauseStatus (string)
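For instance, a schedule that runs the monitor every Monday at 08:00 London time (the cron and pause-status literals are illustrative):

import * as databricks from "@pulumi/databricks";

// Quartz cron fields: second minute hour day-of-month month day-of-week
const weeklySchedule: databricks.types.input.DataQualityMonitorDataProfilingConfigSchedule = {
    quartzCronExpression: "0 0 8 ? * MON",
    timezoneId: "Europe/London",
    pauseStatus: "UNPAUSED", // assumed status literal
};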
DataQualityMonitorDataProfilingConfigTimeSeries, DataQualityMonitorDataProfilingConfigTimeSeriesArgs
- granularities (list of string)
- timestampColumn (string)
Import
Import the resource using the pulumi import command with an ID of the form "object_type,object_id":

$ pulumi import databricks:index/dataQualityMonitor:DataQualityMonitor this "object_type,object_id"
To learn more about importing existing cloud resources, see Importing resources.
Package Details
- Repository
- databricks pulumi/pulumi-databricks
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the databricks Terraform Provider.