Packages

package jobconf


Type Members

  1. final case class JobConfig(
       jobId: ID,
       jobDescription: Option[NonEmptyString],
       connections: Option[ConnectionsConfig],
       schemas: Seq[SchemaConfig] = Seq.empty,
       sources: Option[SourcesConfig],
       streams: Option[StreamSourcesConfig],
       virtualSources: Seq[VirtualSourceConfig] = Seq.empty,
       virtualStreams: Seq[VirtualSourceConfig] = Seq.empty,
       loadChecks: Option[LoadChecksConfig],
       metrics: Option[MetricsConfig],
       checks: Option[ChecksConfig],
       targets: Option[TargetsConfig],
       jobMetadata: Seq[SparkParam] = Seq.empty
     ) extends Product with Serializable

    Data Quality job-level configuration

    jobId: Job ID

    jobDescription: Job description

    connections: Connections to external data systems (RDBMS, message brokers, etc.)

    schemas: Various schema definitions

    sources: Data sources processed within the current job (applicable to batch jobs only)

    streams: Stream sources processed within the current job (applicable to streaming jobs only)

    virtualSources: Virtual sources to be created from regular sources

    virtualStreams: Virtual streams to be created from regular streams

    loadChecks: Load checks to be performed on data sources before reading the data itself

    metrics: Metrics to be calculated for data sources

    checks: Checks to be performed over metrics

    targets: Targets that define various job result outputs to multiple channels

    jobMetadata: List of metadata parameters
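To illustrate how these fields fit together, here is a simplified, self-contained sketch of constructing a job configuration. The `ID` and `SparkParam` definitions below are hypothetical stand-ins, not the framework's actual refined types, and only a few of the documented fields are modeled:

```scala
// Hypothetical stand-ins for the framework's types; the real ID,
// NonEmptyString, and SparkParam definitions live in the framework itself.
final case class ID(value: String)
final case class SparkParam(name: String, value: String)

// Simplified sketch of JobConfig covering only a few documented fields.
final case class JobConfig(
    jobId: ID,
    jobDescription: Option[String] = None,
    jobMetadata: Seq[SparkParam] = Seq.empty
)

// Construct a minimal job configuration, relying on the default
// (empty) values for the fields that are not supplied explicitly.
val job = JobConfig(
  jobId = ID("daily_dq_job"),
  jobDescription = Some("Daily data quality checks"),
  jobMetadata = Seq(SparkParam("owner", "data-platform"))
)
```

Fields with `Seq[...] = Seq.empty` defaults may be omitted entirely, which is why a minimal configuration only needs `jobId`.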

Value Members

  1. object Checks
  2. object Connections
  3. object Files

    Note

    General note on working with files in Checkita Framework:

    • A path may contain a file system connector prefix, such as file:// to read from the local file system or s3a:// to read from S3 storage.
    • It is up to the user to set up all Spark configuration parameters required to read from and write to the specified file system.
    • If no file system connector prefix is defined, files are always read from and written to Spark's default file system.
    • Take care when running the framework in local mode: in that case Spark reads files from the local file system only.
  4. object LoadChecks
  5. object MetricParams
  6. object Metrics
  7. object Outputs
  8. object Schemas
  9. object Sources
  10. object Targets
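As the Files note above states, reading via the s3a:// connector requires the user to supply the relevant Spark configuration. A minimal sketch of the standard Hadoop-AWS properties involved follows; the endpoint URL and credential fallbacks are placeholders, not framework defaults:

```scala
// Sketch: standard Hadoop-AWS properties enabling the s3a:// connector,
// typically passed via spark-submit --conf or set on SparkConf.
// Endpoint and credential placeholder values must be adjusted per environment.
val s3aConf: Map[String, String] = Map(
  "spark.hadoop.fs.s3a.endpoint"   -> "https://s3.example.com",
  "spark.hadoop.fs.s3a.access.key" -> sys.env.getOrElse("AWS_ACCESS_KEY_ID", "<access-key>"),
  "spark.hadoop.fs.s3a.secret.key" -> sys.env.getOrElse("AWS_SECRET_ACCESS_KEY", "<secret-key>")
)
```

Reading credentials from environment variables, as sketched here, keeps secrets out of job configuration files.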
