Class: Aws::GlueDataBrew::Client

Inherits:
Seahorse::Client::Base
  • Object
Includes:
ClientStubs
Defined in:
lib/aws-sdk-gluedatabrew/client.rb

Overview

An API client for GlueDataBrew. To construct a client, you need to configure a `:region` and `:credentials`.

client = Aws::GlueDataBrew::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring region and credentials see the [developer guide](/sdk-for-ruby/v3/developer-guide/setup-config.html).

See #initialize for a full list of supported configuration options.
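For example, a client can be constructed from a named profile in the shared credentials file. This is a minimal sketch; the profile name and region below are placeholders:

require 'aws-sdk-gluedatabrew'

# Load static credentials from a profile in ~/.aws/credentials (placeholder profile name).
credentials = Aws::SharedCredentials.new(profile_name: 'databrew-profile')

client = Aws::GlueDataBrew::Client.new(
  region: 'us-east-1',        # any Region where DataBrew is available
  credentials: credentials
)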

Class Attribute Summary

API Operations

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :plugins (Array<Seahorse::Client::Plugin>) — default: []

    A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials. This can be an instance of any one of the following classes:

    • `Aws::Credentials` - Used for configuring static, non-refreshing credentials.

    • `Aws::SharedCredentials` - Used for loading static credentials from a shared file, such as `~/.aws/config`.

    • `Aws::AssumeRoleCredentials` - Used when you need to assume a role.

    • `Aws::AssumeRoleWebIdentityCredentials` - Used when you need to assume a role after providing credentials via the web.

    • `Aws::SSOCredentials` - Used for loading credentials from AWS SSO using an access token generated from `aws login`.

    • `Aws::ProcessCredentials` - Used for loading credentials from a process that outputs to stdout.

    • `Aws::InstanceProfileCredentials` - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • `Aws::ECSCredentials` - Used for loading credentials from instances running in ECS.

    • `Aws::CognitoIdentityCredentials` - Used for loading credentials from the Cognito Identity service.

    When `:credentials` are not configured directly, the following locations will be searched for credentials:

    • `Aws.config[:credentials]`

    • The `:access_key_id`, `:secret_access_key`, `:session_token`, and `:account_id` options.

    • `ENV['AWS_ACCESS_KEY_ID']`, `ENV['AWS_SECRET_ACCESS_KEY']`, `ENV['AWS_SESSION_TOKEN']`, and `ENV['AWS_ACCOUNT_ID']`

    • `~/.aws/credentials`

    • `~/.aws/config`

    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of `Aws::InstanceProfileCredentials` or `Aws::ECSCredentials` to enable retries and extended timeouts (see the configuration sketch after this options list). Instance profile credential fetching can be disabled by setting `ENV['AWS_EC2_METADATA_DISABLED']` to `true`.

  • :region (required, String)

    The AWS region to connect to. The configured `:region` is used to determine the service `:endpoint`. When not passed, a default `:region` is searched for in the following locations:

  • :access_key_id (String)
  • :account_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to `true`, a thread polling for endpoints will be running in the background every 60 secs (default). Defaults to `false`.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in `adaptive` retry mode. When true, the request will sleep until there is sufficient client side capacity to retry the request. When false, the request will raise a `RetryCapacityNotAvailableError` and will not retry instead of sleeping.

  • :client_side_monitoring (Boolean) — default: false

    When `true`, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.

  • :convert_params (Boolean) — default: true

    When `true`, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in `standard` and `adaptive` retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    Set to true to disable SDK automatically adding host prefix to default service endpoint when available.

  • :disable_request_compression (Boolean) — default: false

    When set to `true` the request body will not be compressed for supported operations.

  • :endpoint (String, URI::HTTPS, URI::HTTP)

    Normally you should not configure the `:endpoint` option directly. This is normally constructed from the `:region` option. Configuring `:endpoint` is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:

    'http://example.com'
    'https://example.com'
    'http://example.com:123'
    
  • :endpoint_cache_max_entries (Integer) — default: 1000

    Used for the maximum size limit of the LRU cache storing endpoints data for endpoint discovery enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    Used for the maximum threads in use for polling endpoints to be cached, defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When `:endpoint_discovery` and `:active_endpoint_cache` are enabled, use this option to configure the time interval in seconds for making requests to fetch endpoint information. Defaults to 60 sec.

  • :endpoint_discovery (Boolean) — default: false

    When set to `true`, endpoint discovery will be enabled for operations when available.

  • :ignore_configured_endpoint_urls (Boolean)

    Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the `:logger` at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in `standard` and `adaptive` retry modes.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used.

  • :request_min_compression_size_bytes (Integer) — default: 10240

    The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485760 bytes, inclusive.

  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the `legacy` retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the `legacy` retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the `legacy` retry mode.

    @see www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the `legacy` retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the `legacy` retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • `legacy` - The pre-existing retry behavior. This is the default value if no retry mode is provided.

    • `standard` - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • `adaptive` - An experimental retry mode that includes all the functionality of `standard` mode along with automatic client side throttling. This is a provisional mode that may change behavior in the future.

  • :sdk_ua_app_id (String)

    A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.

  • :secret_access_key (String)
  • :session_token (String)
  • :sigv4a_signing_region_set (Array)

    A list of regions that should be signed with SigV4a signing. When not passed, a default `:sigv4a_signing_region_set` is searched for in the following locations:

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    ** Please note ** When response stubbing is enabled, no HTTP requests are made, and retries are disabled. A stubbing example appears in the configuration sketch after this options list.

  • :telemetry_provider (Aws::Telemetry::TelemetryProviderBase) — default: Aws::Telemetry::NoOpTelemetryProvider

    Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses `NoOpTelemetryProvider` which will not record or emit any telemetry data. The SDK supports the following telemetry providers:

    • OpenTelemetry (OTel) - To use the OTel provider, install and require the `opentelemetry-sdk` gem and then pass in an instance of `Aws::Telemetry::OTelProvider` as the telemetry provider.

  • :token_provider (Aws::TokenProvider)

    A Bearer Token Provider. This can be an instance of any one of the following classes:

    • `Aws::StaticTokenProvider` - Used for configuring static, non-refreshing tokens.

    • `Aws::SSOTokenProvider` - Used for loading tokens from AWS SSO using an access token generated from `aws login`.

    When `:token_provider` is not configured directly, the `Aws::TokenProviderChain` will be used to search for tokens configured for your profile in shared configuration files.

  • :use_dualstack_endpoint (Boolean)

    When set to `true`, dualstack enabled endpoints (with `.aws` TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to `true`, fips compatible endpoints will be used if available. When a `fips` region is used, the region is normalized and this config is set to `true`.

  • :validate_params (Boolean) — default: true

    When `true`, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::GlueDataBrew::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to `#resolve_endpoint(parameters)` where `parameters` is a Struct similar to `Aws::GlueDataBrew::EndpointParameters`.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has the "Expect" header set to "100-continue". Defaults to `nil` which disables this behaviour. This value can safely be set per request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_open_timeout (Float) — default: 15

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like 'proxy.com:123'.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_wire_trace (Boolean) — default: false

    When `true`, HTTP debug output will be sent to the `:logger`.

  • :on_chunk_received (Proc)

    When a Proc object is provided, it will be used as a callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a `content-length`).

  • :on_chunk_sent (Proc)

    When a Proc object is provided, it will be used as a callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.

  • :raise_response_errors (Boolean) — default: true

    When `true`, response errors are raised.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass `:ssl_ca_bundle` or `:ssl_ca_directory` the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass `:ssl_ca_bundle` or `:ssl_ca_directory` the system default will be used if available.

  • :ssl_ca_store (String)

    Sets the X509::Store to verify peer certificate.

  • :ssl_cert (OpenSSL::X509::Certificate)

    Sets a client certificate when creating http connections.

  • :ssl_key (OpenSSL::PKey)

    Sets a client key when creating http connections.

  • :ssl_timeout (Float)

    Sets the SSL timeout in seconds.

  • :ssl_verify_peer (Boolean) — default: true

    When `true`, SSL peer certificates are verified when establishing a connection.
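
As a rough sketch of how several of these options combine (explicit instance-profile credentials, retries, timeouts, and response stubbing), the values below are illustrative placeholders rather than recommended settings:

# Explicit IMDS credentials with retries, as suggested for the :credentials option above.
credentials = Aws::InstanceProfileCredentials.new(retries: 3, http_open_timeout: 2)

client = Aws::GlueDataBrew::Client.new(
  region: 'us-east-1',
  credentials: credentials,
  retry_mode: 'standard',     # or 'adaptive'
  max_attempts: 5,
  http_open_timeout: 10,
  http_read_timeout: 120
)

# For tests, stubbed responses avoid real HTTP calls and disable retries.
test_client = Aws::GlueDataBrew::Client.new(region: 'us-east-1', stub_responses: true)
test_client.stub_responses(:delete_dataset, name: 'my-dataset')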



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 444

def initialize(*args)
  super
end

Class Attribute Details

.identifier ⇒ Object (readonly)

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3434

def identifier
  @identifier
end

Class Method Details

.errors_module ⇒ Object

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3437

def errors_module
  Errors
end

Instance Method Details

#batch_delete_recipe_version(params = {}) ⇒ Types::BatchDeleteRecipeVersionResponse

Deletes one or more versions of a recipe at a time.

The entire request will be rejected if:

  • The recipe does not exist.

  • There is an invalid version identifier in the list of versions.

  • The version list is empty.

  • The version list size exceeds 50.

  • The version list contains duplicate entries.

The request will complete successfully, but with partial failures, if:

  • A version does not exist.

  • A version is being used by a job.

  • You specify `LATEST_WORKING`, but it's being used by a project.

  • The version fails to be deleted.

The `LATEST_WORKING` version will only be deleted if the recipe has no other versions. If you try to delete `LATEST_WORKING` while other versions exist (or if they can't be deleted), then `LATEST_WORKING` will be listed as a partial failure in the response.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_recipe_version({
  name: "RecipeName", # required
  recipe_versions: ["RecipeVersion"], # required
})

Response structure


resp.name #=> String
resp.errors #=> Array
resp.errors[0].error_code #=> String
resp.errors[0].error_message #=> String
resp.errors[0].recipe_version #=> String
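
Handling partial failures (a usage sketch; the recipe name and versions are placeholders)


resp = client.batch_delete_recipe_version({
  name: "my-recipe",
  recipe_versions: ["1.0", "2.0", "LATEST_WORKING"],
})

# Each entry in resp.errors describes a version that could not be deleted.
resp.errors.each do |error|
  puts "#{error.recipe_version}: #{error.error_code} - #{error.error_message}"
end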

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the recipe whose versions are to be deleted.

  • :recipe_versions (required, Array<String>)

    An array of version identifiers, for the recipe versions to be deleted. You can specify numeric versions (`X.Y`) or `LATEST_WORKING`. `LATEST_PUBLISHED` is not supported.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 511

def batch_delete_recipe_version(params = {}, options = {})
  req = build_request(:batch_delete_recipe_version, params)
  req.send_request(options)
end

#build_request(operation_name, params = {}) ⇒ Object

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.

Parameters:

  • params ({}) (defaults to: {})


# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3407

def build_request(operation_name, params = {})
  handlers = @handlers.for(operation_name)
  tracer = config.telemetry_provider.tracer_provider.tracer(
    Aws::Telemetry.module_to_tracer_name('Aws::GlueDataBrew')
  )
  context = Seahorse::Client::RequestContext.new(
    operation_name: operation_name,
    operation: config.api.operation(operation_name),
    client: self,
    params: params,
    config: config,
    tracer: tracer
  )
  context[:gem_name] = 'aws-sdk-gluedatabrew'
  context[:gem_version] = '1.49.0'
  Seahorse::Client::Request.new(handlers, context)
end

#create_dataset(params = {}) ⇒ Types::CreateDatasetResponse

Creates a new DataBrew dataset.

Examples:

Request syntax with placeholder values


resp = client.create_dataset({
  name: "DatasetName", # required
  format: "CSV", # accepts CSV, JSON, PARQUET, EXCEL, ORC
  format_options: {
    json: {
      multi_line: false,
    },
    excel: {
      sheet_names: ["SheetName"],
      sheet_indexes: [1],
      header_row: false,
    },
    csv: {
      delimiter: "Delimiter",
      header_row: false,
    },
  },
  input: { # required
    s3_input_definition: {
      bucket: "Bucket", # required
      key: "Key",
      bucket_owner: "BucketOwner",
    },
    data_catalog_input_definition: {
      catalog_id: "CatalogId",
      database_name: "DatabaseName", # required
      table_name: "TableName", # required
      temp_directory: {
        bucket: "Bucket", # required
        key: "Key",
        bucket_owner: "BucketOwner",
      },
    },
    database_input_definition: {
      glue_connection_name: "GlueConnectionName", # required
      database_table_name: "DatabaseTableName",
      temp_directory: {
        bucket: "Bucket", # required
        key: "Key",
        bucket_owner: "BucketOwner",
      },
      query_string: "QueryString",
    },
    metadata: {
      source_arn: "Arn",
    },
  },
  path_options: {
    last_modified_date_condition: {
      expression: "Expression", # required
      values_map: { # required
        "ValueReference" => "ConditionValue",
      },
    },
    files_limit: {
      max_files: 1, # required
      ordered_by: "LAST_MODIFIED_DATE", # accepts LAST_MODIFIED_DATE
      order: "DESCENDING", # accepts DESCENDING, ASCENDING
    },
    parameters: {
      "PathParameterName" => {
        name: "PathParameterName", # required
        type: "Datetime", # required, accepts Datetime, Number, String
        datetime_options: {
          format: "DatetimeFormat", # required
          timezone_offset: "TimezoneOffset",
          locale_code: "LocaleCode",
        },
        create_column: false,
        filter: {
          expression: "Expression", # required
          values_map: { # required
            "ValueReference" => "ConditionValue",
          },
        },
      },
    },
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.name #=> String
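
Creating a dataset from a CSV file in Amazon S3 (a minimal sketch; the bucket and key are placeholders)


resp = client.create_dataset({
  name: "chess-games",
  format: "CSV",
  format_options: {
    csv: { delimiter: ",", header_row: true },
  },
  input: {
    s3_input_definition: {
      bucket: "my-databrew-bucket",   # placeholder bucket name
      key: "input/chess-games.csv",   # placeholder object key
    },
  },
})

resp.name #=> "chess-games"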

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the dataset to be created. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • :format (String)

    The file format of a dataset that is created from an Amazon S3 file or folder.

  • :format_options (Types::FormatOptions)

    Represents a set of options that define the structure of either comma-separated value (CSV), Excel, or JSON input.

  • :input (required, Types::Input)

    Represents information on how DataBrew can find data, in either the Glue Data Catalog or Amazon S3.

  • :path_options (Types::PathOptions)

    A set of options that defines how DataBrew interprets an Amazon S3 path of the dataset.

  • :tags (Hash<String,String>)

    Metadata tags to apply to this dataset.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 638

def create_dataset(params = {}, options = {})
  req = build_request(:create_dataset, params)
  req.send_request(options)
end

#create_profile_job(params = {}) ⇒ Types::CreateProfileJobResponse

Creates a new job to analyze a dataset and create its data profile.

Examples:

Request syntax with placeholder values


resp = client.create_profile_job({
  dataset_name: "DatasetName", # required
  encryption_key_arn: "EncryptionKeyArn",
  encryption_mode: "SSE-KMS", # accepts SSE-KMS, SSE-S3
  name: "JobName", # required
  log_subscription: "ENABLE", # accepts ENABLE, DISABLE
  max_capacity: 1,
  max_retries: 1,
  output_location: { # required
    bucket: "Bucket", # required
    key: "Key",
    bucket_owner: "BucketOwner",
  },
  configuration: {
    dataset_statistics_configuration: {
      included_statistics: ["Statistic"],
      overrides: [
        {
          statistic: "Statistic", # required
          parameters: { # required
            "ParameterName" => "ParameterValue",
          },
        },
      ],
    },
    profile_columns: [
      {
        regex: "ColumnName",
        name: "ColumnName",
      },
    ],
    column_statistics_configurations: [
      {
        selectors: [
          {
            regex: "ColumnName",
            name: "ColumnName",
          },
        ],
        statistics: { # required
          included_statistics: ["Statistic"],
          overrides: [
            {
              statistic: "Statistic", # required
              parameters: { # required
                "ParameterName" => "ParameterValue",
              },
            },
          ],
        },
      },
    ],
    entity_detector_configuration: {
      entity_types: ["EntityType"], # required
      allowed_statistics: [
        {
          statistics: ["Statistic"], # required
        },
      ],
    },
  },
  validation_configurations: [
    {
      ruleset_arn: "Arn", # required
      validation_mode: "CHECK_ALL", # accepts CHECK_ALL
    },
  ],
  role_arn: "Arn", # required
  tags: {
    "TagKey" => "TagValue",
  },
  timeout: 1,
  job_sample: {
    mode: "FULL_DATASET", # accepts FULL_DATASET, CUSTOM_ROWS
    size: 1,
  },
})

Response structure


resp.name #=> String
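
Creating a minimal profile job (a sketch; the dataset name, bucket, and IAM role ARN are placeholders)


resp = client.create_profile_job({
  name: "chess-profile-job",
  dataset_name: "chess-games",                               # an existing dataset
  role_arn: "arn:aws:iam::111122223333:role/DataBrewRole",   # placeholder IAM role ARN
  output_location: {
    bucket: "my-databrew-bucket",                            # where the profile results are written
  },
  job_sample: {
    mode: "CUSTOM_ROWS",
    size: 20_000,
  },
})

resp.name #=> "chess-profile-job"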

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :dataset_name (required, String)

    The name of the dataset that this job is to act upon.

  • :encryption_key_arn (String)

    The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.

  • :encryption_mode (String)

    The encryption mode for the job, which can be one of the following:

    • `SSE-KMS` - Server-side encryption with KMS-managed keys.

    • `SSE-S3` - Server-side encryption with keys managed by Amazon S3.

  • :name (required, String)

    The name of the job to be created. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • :log_subscription (String)

    Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.

  • :max_capacity (Integer)

    The maximum number of nodes that DataBrew can use when the job processes data.

  • :max_retries (Integer)

    The maximum number of times to retry the job after a job run fails.

  • :output_location (required, Types::S3Location)

    Represents an Amazon S3 location (bucket name, bucket owner, and object key) where DataBrew can read input data, or write output from a job.

  • :configuration (Types::ProfileConfiguration)

    Configuration for profile jobs. Used to select columns, do evaluations, and override default parameters of evaluations. When configuration is null, the profile job will run with default settings.

  • :validation_configurations (Array<Types::ValidationConfiguration>)

    List of validation configurations that are applied to the profile job.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

  • :tags (Hash<String,String>)

    Metadata tags to apply to this job.

  • :timeout (Integer)

    The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of `TIMEOUT`.

  • :job_sample (Types::JobSample)

    Sample configuration for profile jobs only. Determines the number of rows on which the profile job will be executed. If a JobSample value is not provided, the default value will be used. The default value is CUSTOM_ROWS for the mode parameter and 20000 for the size parameter.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 797

def create_profile_job(params = {}, options = {})
  req = build_request(:create_profile_job, params)
  req.send_request(options)
end

#create_project(params = {}) ⇒ Types::CreateProjectResponse

Creates a new DataBrew project.

Examples:

Request syntax with placeholder values


resp = client.create_project({
  dataset_name: "DatasetName", # required
  name: "ProjectName", # required
  recipe_name: "RecipeName", # required
  sample: {
    size: 1,
    type: "FIRST_N", # required, accepts FIRST_N, LAST_N, RANDOM
  },
  role_arn: "Arn", # required
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.name #=> String
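
Creating a project that ties a dataset to a recipe (a sketch; the names and IAM role ARN are placeholders)


resp = client.create_project({
  name: "chess-project",
  dataset_name: "chess-games",                               # an existing dataset
  recipe_name: "uppercase-names",                            # an existing recipe
  sample: { type: "FIRST_N", size: 500 },
  role_arn: "arn:aws:iam::111122223333:role/DataBrewRole",   # placeholder IAM role ARN
})

resp.name #=> "chess-project"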

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :dataset_name (required, String)

    The name of an existing dataset to associate this project with.

  • :name (required, String)

    A unique name for the new project. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • :recipe_name (required, String)

    The name of an existing recipe to associate with the project.

  • :sample (Types::Sample)

    Represents the sample size and sampling type for DataBrew to use for interactive data analysis.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed for this request.

  • :tags (Hash<String,String>)

    Metadata tags to apply to this project.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 853

def create_project(params = {}, options = {})
  req = build_request(:create_project, params)
  req.send_request(options)
end

#create_recipe(params = {}) ⇒ Types::CreateRecipeResponse

Creates a new DataBrew recipe.

Examples:

Request syntax with placeholder values


resp = client.create_recipe({
  description: "RecipeDescription",
  name: "RecipeName", # required
  steps: [ # required
    {
      action: { # required
        operation: "Operation", # required
        parameters: {
          "ParameterName" => "ParameterValue",
        },
      },
      condition_expressions: [
        {
          condition: "Condition", # required
          value: "ConditionValue",
          target_column: "TargetColumn", # required
        },
      ],
    },
  ],
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.name #=> String
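
Creating a one-step recipe (a sketch; the operation name and parameter key are illustrative and should be checked against the DataBrew recipe actions reference)


resp = client.create_recipe({
  name: "uppercase-names",
  description: "Upper-case the values in one column",
  steps: [
    {
      action: {
        operation: "UPPER_CASE",                    # illustrative recipe action
        parameters: { "sourceColumn" => "name" },   # illustrative parameter
      },
    },
  ],
})

resp.name #=> "uppercase-names"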

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :description (String)

    A description for the recipe.

  • :name (required, String)

    A unique name for the recipe. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • :steps (required, Array<Types::RecipeStep>)

    An array containing the steps to be performed by the recipe. Each recipe step consists of one recipe action and (optionally) an array of condition expressions.

  • :tags (Hash<String,String>)

    Metadata tags to apply to this recipe.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 914

def create_recipe(params = {}, options = {})
  req = build_request(:create_recipe, params)
  req.send_request(options)
end

#create_recipe_job(params = {}) ⇒ Types::CreateRecipeJobResponse

Creates a new job to transform input data, using steps defined in an existing Glue DataBrew recipe.

Examples:

Request syntax with placeholder values


resp = client.create_recipe_job({
  dataset_name: "DatasetName",
  encryption_key_arn: "EncryptionKeyArn",
  encryption_mode: "SSE-KMS", # accepts SSE-KMS, SSE-S3
  name: "JobName", # required
  log_subscription: "ENABLE", # accepts ENABLE, DISABLE
  max_capacity: 1,
  max_retries: 1,
  outputs: [
    {
      compression_format: "GZIP", # accepts GZIP, LZ4, SNAPPY, BZIP2, DEFLATE, LZO, BROTLI, ZSTD, ZLIB
      format: "CSV", # accepts CSV, JSON, PARQUET, GLUEPARQUET, AVRO, ORC, XML, TABLEAUHYPER
      partition_columns: ["ColumnName"],
      location: { # required
        bucket: "Bucket", # required
        key: "Key",
        bucket_owner: "BucketOwner",
      },
      overwrite: false,
      format_options: {
        csv: {
          delimiter: "Delimiter",
        },
      },
      max_output_files: 1,
    },
  ],
  data_catalog_outputs: [
    {
      catalog_id: "CatalogId",
      database_name: "DatabaseName", # required
      table_name: "TableName", # required
      s3_options: {
        location: { # required
          bucket: "Bucket", # required
          key: "Key",
          bucket_owner: "BucketOwner",
        },
      },
      database_options: {
        temp_directory: {
          bucket: "Bucket", # required
          key: "Key",
          bucket_owner: "BucketOwner",
        },
        table_name: "DatabaseTableName", # required
      },
      overwrite: false,
    },
  ],
  database_outputs: [
    {
      glue_connection_name: "GlueConnectionName", # required
      database_options: { # required
        temp_directory: {
          bucket: "Bucket", # required
          key: "Key",
          bucket_owner: "BucketOwner",
        },
        table_name: "DatabaseTableName", # required
      },
      database_output_mode: "NEW_TABLE", # accepts NEW_TABLE
    },
  ],
  project_name: "ProjectName",
  recipe_reference: {
    name: "RecipeName", # required
    recipe_version: "RecipeVersion",
  },
  role_arn: "Arn", # required
  tags: {
    "TagKey" => "TagValue",
  },
  timeout: 1,
})

Response structure


resp.name #=> String
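
Creating a recipe job from an existing project (a sketch; the project name, bucket, and IAM role ARN are placeholders)


resp = client.create_recipe_job({
  name: "chess-recipe-job",
  project_name: "chess-project",                             # the project supplies the recipe and dataset
  role_arn: "arn:aws:iam::111122223333:role/DataBrewRole",   # placeholder IAM role ARN
  outputs: [
    {
      format: "CSV",
      location: {
        bucket: "my-databrew-bucket",                        # placeholder output bucket
        key: "output/",
      },
      overwrite: true,
    },
  ],
})

resp.name #=> "chess-recipe-job"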

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :dataset_name (String)

    The name of the dataset that this job processes.

  • :encryption_key_arn (String)

    The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.

  • :encryption_mode (String)

    The encryption mode for the job, which can be one of the following:

    • `SSE-KMS` - Server-side encryption with keys managed by KMS.

    • `SSE-S3` - Server-side encryption with keys managed by Amazon S3.

  • :name (required, String)

    A unique name for the job. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • :log_subscription (String)

    Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.

  • :max_capacity (Integer)

    The maximum number of nodes that DataBrew can consume when the job processes data.

  • :max_retries (Integer)

    The maximum number of times to retry the job after a job run fails.

  • :outputs (Array<Types::Output>)

    One or more artifacts that represent the output from running the job.

  • :data_catalog_outputs (Array<Types::DataCatalogOutput>)

    One or more artifacts that represent the Glue Data Catalog output from running the job.

  • :database_outputs (Array<Types::DatabaseOutput>)

    Represents a list of JDBC database output objects which defines the output destination for a DataBrew recipe job to write to.

  • :project_name (String)

    Either the name of an existing project, or a combination of a recipe and a dataset to associate with the recipe.

  • :recipe_reference (Types::RecipeReference)

    Represents the name and version of a DataBrew recipe.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

  • :tags (Hash<String,String>)

    Metadata tags to apply to this job.

  • :timeout (Integer)

    The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of `TIMEOUT`.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1070

def create_recipe_job(params = {}, options = {})
  req = build_request(:create_recipe_job, params)
  req.send_request(options)
end

#create_ruleset(params = {}) ⇒ Types::CreateRulesetResponse

Creates a new ruleset that can be used in a profile job to validate the data quality of a dataset.

Examples:

Request syntax with placeholder values


resp = client.create_ruleset({
  name: "RulesetName", # required
  description: "RulesetDescription",
  target_arn: "Arn", # required
  rules: [ # required
    {
      name: "RuleName", # required
      disabled: false,
      check_expression: "Expression", # required
      substitution_map: {
        "ValueReference" => "ConditionValue",
      },
      threshold: {
        value: 1.0, # required
        type: "GREATER_THAN_OR_EQUAL", # accepts GREATER_THAN_OR_EQUAL, LESS_THAN_OR_EQUAL, GREATER_THAN, LESS_THAN
        unit: "COUNT", # accepts COUNT, PERCENTAGE
      },
      column_selectors: [
        {
          regex: "ColumnName",
          name: "ColumnName",
        },
      ],
    },
  ],
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the ruleset to be created. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • :description (String)

    The description of the ruleset.

  • :target_arn (required, String)

    The Amazon Resource Name (ARN) of a resource (dataset) that the ruleset is associated with.

  • :rules (required, Array<Types::Rule>)

    A list of rules that are defined with the ruleset. A rule includes one or more checks to be validated on a DataBrew dataset.

  • :tags (Hash<String,String>)

    Metadata tags to apply to the ruleset.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1140

def create_ruleset(params = {}, options = {})
  req = build_request(:create_ruleset, params)
  req.send_request(options)
end

#create_schedule(params = {}) ⇒ Types::CreateScheduleResponse

Creates a new schedule for one or more DataBrew jobs. Jobs can be run at a specific date and time, or at regular intervals.

Examples:

Request syntax with placeholder values


resp = client.create_schedule({
  job_names: ["JobName"],
  cron_expression: "CronExpression", # required
  tags: {
    "TagKey" => "TagValue",
  },
  name: "ScheduleName", # required
})

Response structure


resp.name #=> String
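
Scheduling a nightly run (a sketch; the job name is a placeholder, and the cron expression follows the format described in the Glue DataBrew cron documentation)


resp = client.create_schedule({
  name: "nightly-run",
  job_names: ["chess-recipe-job"],        # an existing job
  cron_expression: "cron(0 3 * * ? *)",   # every day at 03:00 UTC
})

resp.name #=> "nightly-run"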

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_names (Array<String>)

    The name or names of one or more jobs to be run.

  • :cron_expression (required, String)

    The date or dates and time or times when the jobs are to be run. For more information, see [Cron expressions][1] in the *Glue DataBrew Developer Guide*.

    [1]: docs.aws.amazon.com/databrew/latest/dg/jobs.cron.html

  • :tags (Hash<String,String>)

    Metadata tags to apply to this schedule.

  • :name (required, String)

    A unique name for the schedule. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1190

def create_schedule(params = {}, options = {})
  req = build_request(:create_schedule, params)
  req.send_request(options)
end

#delete_dataset(params = {}) ⇒ Types::DeleteDatasetResponse

Deletes a dataset from DataBrew.

Examples:

Request syntax with placeholder values


resp = client.delete_dataset({
  name: "DatasetName", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the dataset to be deleted.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1218

def delete_dataset(params = {}, options = {})
  req = build_request(:delete_dataset, params)
  req.send_request(options)
end

#delete_job(params = {}) ⇒ Types::DeleteJobResponse

Deletes the specified DataBrew job.

Examples:

Request syntax with placeholder values


resp = client.delete_job({
  name: "JobName", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the job to be deleted.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1246

def delete_job(params = {}, options = {})
  req = build_request(:delete_job, params)
  req.send_request(options)
end

#delete_project(params = {}) ⇒ Types::DeleteProjectResponse

Deletes an existing DataBrew project.

Examples:

Request syntax with placeholder values


resp = client.delete_project({
  name: "ProjectName", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the project to be deleted.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1274

def delete_project(params = {}, options = {})
  req = build_request(:delete_project, params)
  req.send_request(options)
end

#delete_recipe_version(params = {}) ⇒ Types::DeleteRecipeVersionResponse

Deletes a single version of a DataBrew recipe.

Examples:

Request syntax with placeholder values


resp = client.delete_recipe_version({
  name: "RecipeName", # required
  recipe_version: "RecipeVersion", # required
})

Response structure


resp.name #=> String
resp.recipe_version #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the recipe.

  • :recipe_version (required, String)

    The version of the recipe to be deleted. You can specify a numeric version (`X.Y`) or `LATEST_WORKING`. `LATEST_PUBLISHED` is not supported.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1310

def delete_recipe_version(params = {}, options = {})
  req = build_request(:delete_recipe_version, params)
  req.send_request(options)
end

#delete_ruleset(params = {}) ⇒ Types::DeleteRulesetResponse

Deletes a ruleset.

Examples:

Request syntax with placeholder values


resp = client.delete_ruleset({
  name: "RulesetName", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the ruleset to be deleted.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1338

def delete_ruleset(params = {}, options = {})
  req = build_request(:delete_ruleset, params)
  req.send_request(options)
end

#delete_schedule(params = {}) ⇒ Types::DeleteScheduleResponse

Deletes the specified DataBrew schedule.

Examples:

Request syntax with placeholder values


resp = client.delete_schedule({
  name: "ScheduleName", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the schedule to be deleted.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1366

def delete_schedule(params = {}, options = {})
  req = build_request(:delete_schedule, params)
  req.send_request(options)
end

#describe_dataset(params = {}) ⇒ Types::DescribeDatasetResponse

Returns the definition of a specific DataBrew dataset.

Examples:

Request syntax with placeholder values


resp = client.describe_dataset({
  name: "DatasetName", # required
})

Response structure


resp.created_by #=> String
resp.create_date #=> Time
resp.name #=> String
resp.format #=> String, one of "CSV", "JSON", "PARQUET", "EXCEL", "ORC"
resp.format_options.json.multi_line #=> Boolean
resp.format_options.excel.sheet_names #=> Array
resp.format_options.excel.sheet_names[0] #=> String
resp.format_options.excel.sheet_indexes #=> Array
resp.format_options.excel.sheet_indexes[0] #=> Integer
resp.format_options.excel.header_row #=> Boolean
resp.format_options.csv.delimiter #=> String
resp.format_options.csv.header_row #=> Boolean
resp.input.s3_input_definition.bucket #=> String
resp.input.s3_input_definition.key #=> String
resp.input.s3_input_definition.bucket_owner #=> String
resp.input.data_catalog_input_definition.catalog_id #=> String
resp.input.data_catalog_input_definition.database_name #=> String
resp.input.data_catalog_input_definition.table_name #=> String
resp.input.data_catalog_input_definition.temp_directory.bucket #=> String
resp.input.data_catalog_input_definition.temp_directory.key #=> String
resp.input.data_catalog_input_definition.temp_directory.bucket_owner #=> String
resp.input.database_input_definition.glue_connection_name #=> String
resp.input.database_input_definition.database_table_name #=> String
resp.input.database_input_definition.temp_directory.bucket #=> String
resp.input.database_input_definition.temp_directory.key #=> String
resp.input.database_input_definition.temp_directory.bucket_owner #=> String
resp.input.database_input_definition.query_string #=> String
resp.input.metadata.source_arn #=> String
resp.last_modified_date #=> Time
resp.last_modified_by #=> String
resp.source #=> String, one of "S3", "DATA-CATALOG", "DATABASE"
resp.path_options.last_modified_date_condition.expression #=> String
resp.path_options.last_modified_date_condition.values_map #=> Hash
resp.path_options.last_modified_date_condition.values_map["ValueReference"] #=> String
resp.path_options.files_limit.max_files #=> Integer
resp.path_options.files_limit.ordered_by #=> String, one of "LAST_MODIFIED_DATE"
resp.path_options.files_limit.order #=> String, one of "DESCENDING", "ASCENDING"
resp.path_options.parameters #=> Hash
resp.path_options.parameters["PathParameterName"].name #=> String
resp.path_options.parameters["PathParameterName"].type #=> String, one of "Datetime", "Number", "String"
resp.path_options.parameters["PathParameterName"].datetime_options.format #=> String
resp.path_options.parameters["PathParameterName"].datetime_options.timezone_offset #=> String
resp.path_options.parameters["PathParameterName"].datetime_options.locale_code #=> String
resp.path_options.parameters["PathParameterName"].create_column #=> Boolean
resp.path_options.parameters["PathParameterName"].filter.expression #=> String
resp.path_options.parameters["PathParameterName"].filter.values_map #=> Hash
resp.path_options.parameters["PathParameterName"].filter.values_map["ValueReference"] #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
resp.resource_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the dataset to be described.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1454

def describe_dataset(params = {}, options = {})
  req = build_request(:describe_dataset, params)
  req.send_request(options)
end

#describe_job(params = {}) ⇒ Types::DescribeJobResponse

Returns the definition of a specific DataBrew job.

Examples:

Request syntax with placeholder values


resp = client.describe_job({
  name: "JobName", # required
})

Response structure


resp.create_date #=> Time
resp.created_by #=> String
resp.dataset_name #=> String
resp.encryption_key_arn #=> String
resp.encryption_mode #=> String, one of "SSE-KMS", "SSE-S3"
resp.name #=> String
resp.type #=> String, one of "PROFILE", "RECIPE"
resp.last_modified_by #=> String
resp.last_modified_date #=> Time
resp.log_subscription #=> String, one of "ENABLE", "DISABLE"
resp.max_capacity #=> Integer
resp.max_retries #=> Integer
resp.outputs #=> Array
resp.outputs[0].compression_format #=> String, one of "GZIP", "LZ4", "SNAPPY", "BZIP2", "DEFLATE", "LZO", "BROTLI", "ZSTD", "ZLIB"
resp.outputs[0].format #=> String, one of "CSV", "JSON", "PARQUET", "GLUEPARQUET", "AVRO", "ORC", "XML", "TABLEAUHYPER"
resp.outputs[0].partition_columns #=> Array
resp.outputs[0].partition_columns[0] #=> String
resp.outputs[0].location.bucket #=> String
resp.outputs[0].location.key #=> String
resp.outputs[0].location.bucket_owner #=> String
resp.outputs[0].overwrite #=> Boolean
resp.outputs[0].format_options.csv.delimiter #=> String
resp.outputs[0].max_output_files #=> Integer
resp.data_catalog_outputs #=> Array
resp.data_catalog_outputs[0].catalog_id #=> String
resp.data_catalog_outputs[0].database_name #=> String
resp.data_catalog_outputs[0].table_name #=> String
resp.data_catalog_outputs[0].s3_options.location.bucket #=> String
resp.data_catalog_outputs[0].s3_options.location.key #=> String
resp.data_catalog_outputs[0].s3_options.location.bucket_owner #=> String
resp.data_catalog_outputs[0].database_options.temp_directory.bucket #=> String
resp.data_catalog_outputs[0].database_options.temp_directory.key #=> String
resp.data_catalog_outputs[0].database_options.temp_directory.bucket_owner #=> String
resp.data_catalog_outputs[0].database_options.table_name #=> String
resp.data_catalog_outputs[0].overwrite #=> Boolean
resp.database_outputs #=> Array
resp.database_outputs[0].glue_connection_name #=> String
resp.database_outputs[0].database_options.temp_directory.bucket #=> String
resp.database_outputs[0].database_options.temp_directory.key #=> String
resp.database_outputs[0].database_options.temp_directory.bucket_owner #=> String
resp.database_outputs[0].database_options.table_name #=> String
resp.database_outputs[0].database_output_mode #=> String, one of "NEW_TABLE"
resp.project_name #=> String
resp.profile_configuration.dataset_statistics_configuration.included_statistics #=> Array
resp.profile_configuration.dataset_statistics_configuration.included_statistics[0] #=> String
resp.profile_configuration.dataset_statistics_configuration.overrides #=> Array
resp.profile_configuration.dataset_statistics_configuration.overrides[0].statistic #=> String
resp.profile_configuration.dataset_statistics_configuration.overrides[0].parameters #=> Hash
resp.profile_configuration.dataset_statistics_configuration.overrides[0].parameters["ParameterName"] #=> String
resp.profile_configuration.profile_columns #=> Array
resp.profile_configuration.profile_columns[0].regex #=> String
resp.profile_configuration.profile_columns[0].name #=> String
resp.profile_configuration.column_statistics_configurations #=> Array
resp.profile_configuration.column_statistics_configurations[0].selectors #=> Array
resp.profile_configuration.column_statistics_configurations[0].selectors[0].regex #=> String
resp.profile_configuration.column_statistics_configurations[0].selectors[0].name #=> String
resp.profile_configuration.column_statistics_configurations[0].statistics.included_statistics #=> Array
resp.profile_configuration.column_statistics_configurations[0].statistics.included_statistics[0] #=> String
resp.profile_configuration.column_statistics_configurations[0].statistics.overrides #=> Array
resp.profile_configuration.column_statistics_configurations[0].statistics.overrides[0].statistic #=> String
resp.profile_configuration.column_statistics_configurations[0].statistics.overrides[0].parameters #=> Hash
resp.profile_configuration.column_statistics_configurations[0].statistics.overrides[0].parameters["ParameterName"] #=> String
resp.profile_configuration.entity_detector_configuration.entity_types #=> Array
resp.profile_configuration.entity_detector_configuration.entity_types[0] #=> String
resp.profile_configuration.entity_detector_configuration.allowed_statistics #=> Array
resp.profile_configuration.entity_detector_configuration.allowed_statistics[0].statistics #=> Array
resp.profile_configuration.entity_detector_configuration.allowed_statistics[0].statistics[0] #=> String
resp.validation_configurations #=> Array
resp.validation_configurations[0].ruleset_arn #=> String
resp.validation_configurations[0].validation_mode #=> String, one of "CHECK_ALL"
resp.recipe_reference.name #=> String
resp.recipe_reference.recipe_version #=> String
resp.resource_arn #=> String
resp.role_arn #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
resp.timeout #=> Integer
resp.job_sample.mode #=> String, one of "FULL_DATASET", "CUSTOM_ROWS"
resp.job_sample.size #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the job to be described.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1583

def describe_job(params = {}, options = {})
  req = build_request(:describe_job, params)
  req.send_request(options)
end

#describe_job_run(params = {}) ⇒ Types::DescribeJobRunResponse

Represents one run of a DataBrew job.

Examples:

Request syntax with placeholder values


resp = client.describe_job_run({
  name: "JobName", # required
  run_id: "JobRunId", # required
})

Response structure


resp.attempt #=> Integer
resp.completed_on #=> Time
resp.dataset_name #=> String
resp.error_message #=> String
resp.execution_time #=> Integer
resp.job_name #=> String
resp.profile_configuration.dataset_statistics_configuration.included_statistics #=> Array
resp.profile_configuration.dataset_statistics_configuration.included_statistics[0] #=> String
resp.profile_configuration.dataset_statistics_configuration.overrides #=> Array
resp.profile_configuration.dataset_statistics_configuration.overrides[0].statistic #=> String
resp.profile_configuration.dataset_statistics_configuration.overrides[0].parameters #=> Hash
resp.profile_configuration.dataset_statistics_configuration.overrides[0].parameters["ParameterName"] #=> String
resp.profile_configuration.profile_columns #=> Array
resp.profile_configuration.profile_columns[0].regex #=> String
resp.profile_configuration.profile_columns[0].name #=> String
resp.profile_configuration.column_statistics_configurations #=> Array
resp.profile_configuration.column_statistics_configurations[0].selectors #=> Array
resp.profile_configuration.column_statistics_configurations[0].selectors[0].regex #=> String
resp.profile_configuration.column_statistics_configurations[0].selectors[0].name #=> String
resp.profile_configuration.column_statistics_configurations[0].statistics.included_statistics #=> Array
resp.profile_configuration.column_statistics_configurations[0].statistics.included_statistics[0] #=> String
resp.profile_configuration.column_statistics_configurations[0].statistics.overrides #=> Array
resp.profile_configuration.column_statistics_configurations[0].statistics.overrides[0].statistic #=> String
resp.profile_configuration.column_statistics_configurations[0].statistics.overrides[0].parameters #=> Hash
resp.profile_configuration.column_statistics_configurations[0].statistics.overrides[0].parameters["ParameterName"] #=> String
resp.profile_configuration.entity_detector_configuration.entity_types #=> Array
resp.profile_configuration.entity_detector_configuration.entity_types[0] #=> String
resp.profile_configuration.entity_detector_configuration.allowed_statistics #=> Array
resp.profile_configuration.entity_detector_configuration.allowed_statistics[0].statistics #=> Array
resp.profile_configuration.entity_detector_configuration.allowed_statistics[0].statistics[0] #=> String
resp.validation_configurations #=> Array
resp.validation_configurations[0].ruleset_arn #=> String
resp.validation_configurations[0].validation_mode #=> String, one of "CHECK_ALL"
resp.run_id #=> String
resp.state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.log_subscription #=> String, one of "ENABLE", "DISABLE"
resp.log_group_name #=> String
resp.outputs #=> Array
resp.outputs[0].compression_format #=> String, one of "GZIP", "LZ4", "SNAPPY", "BZIP2", "DEFLATE", "LZO", "BROTLI", "ZSTD", "ZLIB"
resp.outputs[0].format #=> String, one of "CSV", "JSON", "PARQUET", "GLUEPARQUET", "AVRO", "ORC", "XML", "TABLEAUHYPER"
resp.outputs[0].partition_columns #=> Array
resp.outputs[0].partition_columns[0] #=> String
resp.outputs[0].location.bucket #=> String
resp.outputs[0].location.key #=> String
resp.outputs[0].location.bucket_owner #=> String
resp.outputs[0].overwrite #=> Boolean
resp.outputs[0].format_options.csv.delimiter #=> String
resp.outputs[0].max_output_files #=> Integer
resp.data_catalog_outputs #=> Array
resp.data_catalog_outputs[0].catalog_id #=> String
resp.data_catalog_outputs[0].database_name #=> String
resp.data_catalog_outputs[0].table_name #=> String
resp.data_catalog_outputs[0].s3_options.location.bucket #=> String
resp.data_catalog_outputs[0].s3_options.location.key #=> String
resp.data_catalog_outputs[0].s3_options.location.bucket_owner #=> String
resp.data_catalog_outputs[0].database_options.temp_directory.bucket #=> String
resp.data_catalog_outputs[0].database_options.temp_directory.key #=> String
resp.data_catalog_outputs[0].database_options.temp_directory.bucket_owner #=> String
resp.data_catalog_outputs[0].database_options.table_name #=> String
resp.data_catalog_outputs[0].overwrite #=> Boolean
resp.database_outputs #=> Array
resp.database_outputs[0].glue_connection_name #=> String
resp.database_outputs[0].database_options.temp_directory.bucket #=> String
resp.database_outputs[0].database_options.temp_directory.key #=> String
resp.database_outputs[0].database_options.temp_directory.bucket_owner #=> String
resp.database_outputs[0].database_options.table_name #=> String
resp.database_outputs[0].database_output_mode #=> String, one of "NEW_TABLE"
resp.recipe_reference.name #=> String
resp.recipe_reference.recipe_version #=> String
resp.started_by #=> String
resp.started_on #=> Time
resp.job_sample.mode #=> String, one of "FULL_DATASET", "CUSTOM_ROWS"
resp.job_sample.size #=> Integer
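
Polling a job run until it finishes (a usage sketch; the job name and run ID are placeholders, and the run ID would normally come from a prior StartJobRun call)


run_id = "db_1234567890abcdef"   # placeholder run ID

state = nil
loop do
  resp = client.describe_job_run(name: "chess-recipe-job", run_id: run_id)
  state = resp.state
  # STARTING, RUNNING, and STOPPING are transient states; anything else is terminal.
  break unless %w[STARTING RUNNING STOPPING].include?(state)
  sleep 30
end

puts "Job run finished with state #{state}"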

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the job being processed during this run.

  • :run_id (required, String)

    The unique identifier of the job run.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1705

def describe_job_run(params = {}, options = {})
  req = build_request(:describe_job_run, params)
  req.send_request(options)
end

#describe_project(params = {}) ⇒ Types::DescribeProjectResponse

Returns the definition of a specific DataBrew project.

Examples:

Request syntax with placeholder values


resp = client.describe_project({
  name: "ProjectName", # required
})

Response structure


resp.create_date #=> Time
resp.created_by #=> String
resp.dataset_name #=> String
resp.last_modified_date #=> Time
resp.last_modified_by #=> String
resp.name #=> String
resp.recipe_name #=> String
resp.resource_arn #=> String
resp.sample.size #=> Integer
resp.sample.type #=> String, one of "FIRST_N", "LAST_N", "RANDOM"
resp.role_arn #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
resp.session_status #=> String, one of "ASSIGNED", "FAILED", "INITIALIZING", "PROVISIONING", "READY", "RECYCLING", "ROTATING", "TERMINATED", "TERMINATING", "UPDATING"
resp.opened_by #=> String
resp.open_date #=> Time

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the project to be described.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1761

def describe_project(params = {}, options = {})
  req = build_request(:describe_project, params)
  req.send_request(options)
end

#describe_recipe(params = {}) ⇒ Types::DescribeRecipeResponse

Returns the definition of a specific DataBrew recipe corresponding to a particular version.

Examples:

Request syntax with placeholder values


resp = client.describe_recipe({
  name: "RecipeName", # required
  recipe_version: "RecipeVersion",
})

Response structure


resp.created_by #=> String
resp.create_date #=> Time
resp.last_modified_by #=> String
resp.last_modified_date #=> Time
resp.project_name #=> String
resp.published_by #=> String
resp.published_date #=> Time
resp.description #=> String
resp.name #=> String
resp.steps #=> Array
resp.steps[0].action.operation #=> String
resp.steps[0].action.parameters #=> Hash
resp.steps[0].action.parameters["ParameterName"] #=> String
resp.steps[0].condition_expressions #=> Array
resp.steps[0].condition_expressions[0].condition #=> String
resp.steps[0].condition_expressions[0].value #=> String
resp.steps[0].condition_expressions[0].target_column #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
resp.resource_arn #=> String
resp.recipe_version #=> String
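
Fetching a specific published version (a sketch; the recipe name and version string are placeholders)

v1 = client.describe_recipe(name: "my-recipe", recipe_version: "1.0")
v1.steps.map { |step| step.action.operation } #=> Array of operation names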

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the recipe to be described.

  • :recipe_version (String)

    The recipe version identifier. If this parameter isn’t specified, then the latest published version is returned.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1827

def describe_recipe(params = {}, options = {})
  req = build_request(:describe_recipe, params)
  req.send_request(options)
end

#describe_ruleset(params = {}) ⇒ Types::DescribeRulesetResponse

Retrieves detailed information about the ruleset.

Examples:

Request syntax with placeholder values


resp = client.describe_ruleset({
  name: "RulesetName", # required
})

Response structure


resp.name #=> String
resp.description #=> String
resp.target_arn #=> String
resp.rules #=> Array
resp.rules[0].name #=> String
resp.rules[0].disabled #=> Boolean
resp.rules[0].check_expression #=> String
resp.rules[0].substitution_map #=> Hash
resp.rules[0].substitution_map["ValueReference"] #=> String
resp.rules[0].threshold.value #=> Float
resp.rules[0].threshold.type #=> String, one of "GREATER_THAN_OR_EQUAL", "LESS_THAN_OR_EQUAL", "GREATER_THAN", "LESS_THAN"
resp.rules[0].threshold.unit #=> String, one of "COUNT", "PERCENTAGE"
resp.rules[0].column_selectors #=> Array
resp.rules[0].column_selectors[0].regex #=> String
resp.rules[0].column_selectors[0].name #=> String
resp.create_date #=> Time
resp.created_by #=> String
resp.last_modified_by #=> String
resp.last_modified_date #=> Time
resp.resource_arn #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the ruleset to be described.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1885

def describe_ruleset(params = {}, options = {})
  req = build_request(:describe_ruleset, params)
  req.send_request(options)
end

#describe_schedule(params = {}) ⇒ Types::DescribeScheduleResponse

Returns the definition of a specific DataBrew schedule.

Examples:

Request syntax with placeholder values


resp = client.describe_schedule({
  name: "ScheduleName", # required
})

Response structure


resp.create_date #=> Time
resp.created_by #=> String
resp.job_names #=> Array
resp.job_names[0] #=> String
resp.last_modified_by #=> String
resp.last_modified_date #=> Time
resp.resource_arn #=> String
resp.cron_expression #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the schedule to be described.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 1931

def describe_schedule(params = {}, options = {})
  req = build_request(:describe_schedule, params)
  req.send_request(options)
end

#list_datasets(params = {}) ⇒ Types::ListDatasetsResponse

Lists all of the DataBrew datasets.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_datasets({
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.datasets #=> Array
resp.datasets[0].account_id #=> String
resp.datasets[0].created_by #=> String
resp.datasets[0].create_date #=> Time
resp.datasets[0].name #=> String
resp.datasets[0].format #=> String, one of "CSV", "JSON", "PARQUET", "EXCEL", "ORC"
resp.datasets[0].format_options.json.multi_line #=> Boolean
resp.datasets[0].format_options.excel.sheet_names #=> Array
resp.datasets[0].format_options.excel.sheet_names[0] #=> String
resp.datasets[0].format_options.excel.sheet_indexes #=> Array
resp.datasets[0].format_options.excel.sheet_indexes[0] #=> Integer
resp.datasets[0].format_options.excel.header_row #=> Boolean
resp.datasets[0].format_options.csv.delimiter #=> String
resp.datasets[0].format_options.csv.header_row #=> Boolean
resp.datasets[0].input.s3_input_definition.bucket #=> String
resp.datasets[0].input.s3_input_definition.key #=> String
resp.datasets[0].input.s3_input_definition.bucket_owner #=> String
resp.datasets[0].input.data_catalog_input_definition.catalog_id #=> String
resp.datasets[0].input.data_catalog_input_definition.database_name #=> String
resp.datasets[0].input.data_catalog_input_definition.table_name #=> String
resp.datasets[0].input.data_catalog_input_definition.temp_directory.bucket #=> String
resp.datasets[0].input.data_catalog_input_definition.temp_directory.key #=> String
resp.datasets[0].input.data_catalog_input_definition.temp_directory.bucket_owner #=> String
resp.datasets[0].input.database_input_definition.glue_connection_name #=> String
resp.datasets[0].input.database_input_definition.database_table_name #=> String
resp.datasets[0].input.database_input_definition.temp_directory.bucket #=> String
resp.datasets[0].input.database_input_definition.temp_directory.key #=> String
resp.datasets[0].input.database_input_definition.temp_directory.bucket_owner #=> String
resp.datasets[0].input.database_input_definition.query_string #=> String
resp.datasets[0].input.metadata.source_arn #=> String
resp.datasets[0].last_modified_date #=> Time
resp.datasets[0].last_modified_by #=> String
resp.datasets[0].source #=> String, one of "S3", "DATA-CATALOG", "DATABASE"
resp.datasets[0].path_options.last_modified_date_condition.expression #=> String
resp.datasets[0].path_options.last_modified_date_condition.values_map #=> Hash
resp.datasets[0].path_options.last_modified_date_condition.values_map["ValueReference"] #=> String
resp.datasets[0].path_options.files_limit.max_files #=> Integer
resp.datasets[0].path_options.files_limit.ordered_by #=> String, one of "LAST_MODIFIED_DATE"
resp.datasets[0].path_options.files_limit.order #=> String, one of "DESCENDING", "ASCENDING"
resp.datasets[0].path_options.parameters #=> Hash
resp.datasets[0].path_options.parameters["PathParameterName"].name #=> String
resp.datasets[0].path_options.parameters["PathParameterName"].type #=> String, one of "Datetime", "Number", "String"
resp.datasets[0].path_options.parameters["PathParameterName"].datetime_options.format #=> String
resp.datasets[0].path_options.parameters["PathParameterName"].datetime_options.timezone_offset #=> String
resp.datasets[0].path_options.parameters["PathParameterName"].datetime_options.locale_code #=> String
resp.datasets[0].path_options.parameters["PathParameterName"].create_column #=> Boolean
resp.datasets[0].path_options.parameters["PathParameterName"].filter.expression #=> String
resp.datasets[0].path_options.parameters["PathParameterName"].filter.values_map #=> Hash
resp.datasets[0].path_options.parameters["PathParameterName"].filter.values_map["ValueReference"] #=> String
resp.datasets[0].tags #=> Hash
resp.datasets[0].tags["TagKey"] #=> String
resp.datasets[0].resource_arn #=> String
resp.next_token #=> String
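
Walking every page of results (a sketch; the page size is arbitrary)

client.list_datasets(max_results: 25).each do |page|
  page.datasets.each { |dataset| puts dataset.name }
end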

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :next_token (String)

    The token returned by a previous call to retrieve the next set of results.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2019

def list_datasets(params = {}, options = {})
  req = build_request(:list_datasets, params)
  req.send_request(options)
end

#list_job_runs(params = {}) ⇒ Types::ListJobRunsResponse

Lists all of the previous runs of a particular DataBrew job.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_job_runs({
  name: "JobName", # required
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.job_runs #=> Array
resp.job_runs[0].attempt #=> Integer
resp.job_runs[0].completed_on #=> Time
resp.job_runs[0].dataset_name #=> String
resp.job_runs[0].error_message #=> String
resp.job_runs[0].execution_time #=> Integer
resp.job_runs[0].job_name #=> String
resp.job_runs[0].run_id #=> String
resp.job_runs[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.job_runs[0].log_subscription #=> String, one of "ENABLE", "DISABLE"
resp.job_runs[0].log_group_name #=> String
resp.job_runs[0].outputs #=> Array
resp.job_runs[0].outputs[0].compression_format #=> String, one of "GZIP", "LZ4", "SNAPPY", "BZIP2", "DEFLATE", "LZO", "BROTLI", "ZSTD", "ZLIB"
resp.job_runs[0].outputs[0].format #=> String, one of "CSV", "JSON", "PARQUET", "GLUEPARQUET", "AVRO", "ORC", "XML", "TABLEAUHYPER"
resp.job_runs[0].outputs[0].partition_columns #=> Array
resp.job_runs[0].outputs[0].partition_columns[0] #=> String
resp.job_runs[0].outputs[0].location.bucket #=> String
resp.job_runs[0].outputs[0].location.key #=> String
resp.job_runs[0].outputs[0].location.bucket_owner #=> String
resp.job_runs[0].outputs[0].overwrite #=> Boolean
resp.job_runs[0].outputs[0].format_options.csv.delimiter #=> String
resp.job_runs[0].outputs[0].max_output_files #=> Integer
resp.job_runs[0].data_catalog_outputs #=> Array
resp.job_runs[0].data_catalog_outputs[0].catalog_id #=> String
resp.job_runs[0].data_catalog_outputs[0].database_name #=> String
resp.job_runs[0].data_catalog_outputs[0].table_name #=> String
resp.job_runs[0].data_catalog_outputs[0].s3_options.location.bucket #=> String
resp.job_runs[0].data_catalog_outputs[0].s3_options.location.key #=> String
resp.job_runs[0].data_catalog_outputs[0].s3_options.location.bucket_owner #=> String
resp.job_runs[0].data_catalog_outputs[0].database_options.temp_directory.bucket #=> String
resp.job_runs[0].data_catalog_outputs[0].database_options.temp_directory.key #=> String
resp.job_runs[0].data_catalog_outputs[0].database_options.temp_directory.bucket_owner #=> String
resp.job_runs[0].data_catalog_outputs[0].database_options.table_name #=> String
resp.job_runs[0].data_catalog_outputs[0].overwrite #=> Boolean
resp.job_runs[0].database_outputs #=> Array
resp.job_runs[0].database_outputs[0].glue_connection_name #=> String
resp.job_runs[0].database_outputs[0].database_options.temp_directory.bucket #=> String
resp.job_runs[0].database_outputs[0].database_options.temp_directory.key #=> String
resp.job_runs[0].database_outputs[0].database_options.temp_directory.bucket_owner #=> String
resp.job_runs[0].database_outputs[0].database_options.table_name #=> String
resp.job_runs[0].database_outputs[0].database_output_mode #=> String, one of "NEW_TABLE"
resp.job_runs[0].recipe_reference.name #=> String
resp.job_runs[0].recipe_reference.recipe_version #=> String
resp.job_runs[0].started_by #=> String
resp.job_runs[0].started_on #=> Time
resp.job_runs[0].job_sample.mode #=> String, one of "FULL_DATASET", "CUSTOM_ROWS"
resp.job_runs[0].job_sample.size #=> Integer
resp.job_runs[0].validation_configurations #=> Array
resp.job_runs[0].validation_configurations[0].ruleset_arn #=> String
resp.job_runs[0].validation_configurations[0].validation_mode #=> String, one of "CHECK_ALL"
resp.next_token #=> String
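
Summarizing recent runs of a job (a sketch; the job name and page size are placeholders)

runs = client.list_job_runs(name: "my-job", max_results: 10).job_runs
runs.each { |run| puts "#{run.run_id} #{run.state} #{run.started_on}" }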

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the job.

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :next_token (String)

    The token returned by a previous call to retrieve the next set of results.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2109

def list_job_runs(params = {}, options = {})
  req = build_request(:list_job_runs, params)
  req.send_request(options)
end

#list_jobs(params = {}) ⇒ Types::ListJobsResponse

Lists all of the DataBrew jobs that are defined.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_jobs({
  dataset_name: "DatasetName",
  max_results: 1,
  next_token: "NextToken",
  project_name: "ProjectName",
})

Response structure


resp.jobs #=> Array
resp.jobs[0].account_id #=> String
resp.jobs[0].created_by #=> String
resp.jobs[0].create_date #=> Time
resp.jobs[0].dataset_name #=> String
resp.jobs[0].encryption_key_arn #=> String
resp.jobs[0].encryption_mode #=> String, one of "SSE-KMS", "SSE-S3"
resp.jobs[0].name #=> String
resp.jobs[0].type #=> String, one of "PROFILE", "RECIPE"
resp.jobs[0].last_modified_by #=> String
resp.jobs[0].last_modified_date #=> Time
resp.jobs[0].log_subscription #=> String, one of "ENABLE", "DISABLE"
resp.jobs[0].max_capacity #=> Integer
resp.jobs[0].max_retries #=> Integer
resp.jobs[0].outputs #=> Array
resp.jobs[0].outputs[0].compression_format #=> String, one of "GZIP", "LZ4", "SNAPPY", "BZIP2", "DEFLATE", "LZO", "BROTLI", "ZSTD", "ZLIB"
resp.jobs[0].outputs[0].format #=> String, one of "CSV", "JSON", "PARQUET", "GLUEPARQUET", "AVRO", "ORC", "XML", "TABLEAUHYPER"
resp.jobs[0].outputs[0].partition_columns #=> Array
resp.jobs[0].outputs[0].partition_columns[0] #=> String
resp.jobs[0].outputs[0].location.bucket #=> String
resp.jobs[0].outputs[0].location.key #=> String
resp.jobs[0].outputs[0].location.bucket_owner #=> String
resp.jobs[0].outputs[0].overwrite #=> Boolean
resp.jobs[0].outputs[0].format_options.csv.delimiter #=> String
resp.jobs[0].outputs[0].max_output_files #=> Integer
resp.jobs[0].data_catalog_outputs #=> Array
resp.jobs[0].data_catalog_outputs[0].catalog_id #=> String
resp.jobs[0].data_catalog_outputs[0].database_name #=> String
resp.jobs[0].data_catalog_outputs[0].table_name #=> String
resp.jobs[0].data_catalog_outputs[0].s3_options.location.bucket #=> String
resp.jobs[0].data_catalog_outputs[0].s3_options.location.key #=> String
resp.jobs[0].data_catalog_outputs[0].s3_options.location.bucket_owner #=> String
resp.jobs[0].data_catalog_outputs[0].database_options.temp_directory.bucket #=> String
resp.jobs[0].data_catalog_outputs[0].database_options.temp_directory.key #=> String
resp.jobs[0].data_catalog_outputs[0].database_options.temp_directory.bucket_owner #=> String
resp.jobs[0].data_catalog_outputs[0].database_options.table_name #=> String
resp.jobs[0].data_catalog_outputs[0].overwrite #=> Boolean
resp.jobs[0].database_outputs #=> Array
resp.jobs[0].database_outputs[0].glue_connection_name #=> String
resp.jobs[0].database_outputs[0].database_options.temp_directory.bucket #=> String
resp.jobs[0].database_outputs[0].database_options.temp_directory.key #=> String
resp.jobs[0].database_outputs[0].database_options.temp_directory.bucket_owner #=> String
resp.jobs[0].database_outputs[0].database_options.table_name #=> String
resp.jobs[0].database_outputs[0].database_output_mode #=> String, one of "NEW_TABLE"
resp.jobs[0].project_name #=> String
resp.jobs[0].recipe_reference.name #=> String
resp.jobs[0].recipe_reference.recipe_version #=> String
resp.jobs[0].resource_arn #=> String
resp.jobs[0].role_arn #=> String
resp.jobs[0].timeout #=> Integer
resp.jobs[0].tags #=> Hash
resp.jobs[0].tags["TagKey"] #=> String
resp.jobs[0].job_sample.mode #=> String, one of "FULL_DATASET", "CUSTOM_ROWS"
resp.jobs[0].job_sample.size #=> Integer
resp.jobs[0].validation_configurations #=> Array
resp.jobs[0].validation_configurations[0].ruleset_arn #=> String
resp.jobs[0].validation_configurations[0].validation_mode #=> String, one of "CHECK_ALL"
resp.next_token #=> String
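
Listing only the jobs attached to one project, across all pages (a sketch; the project name is a placeholder)

client.list_jobs(project_name: "my-project").each do |page|
  page.jobs.each { |job| puts "#{job.name} (#{job.type})" }
end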

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :dataset_name (String)

    The name of a dataset. Using this parameter indicates to return only those jobs that act on the specified dataset.

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :next_token (String)

    A token generated by DataBrew that specifies where to continue pagination if a previous request was truncated. To get the next set of pages, pass in the NextToken value from the response object of the previous page call.

  • :project_name (String)

    The name of a project. Using this parameter indicates to return only those jobs that are associated with the specified project.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2214

def list_jobs(params = {}, options = {})
  req = build_request(:list_jobs, params)
  req.send_request(options)
end

#list_projects(params = {}) ⇒ Types::ListProjectsResponse

Lists all of the DataBrew projects that are defined.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_projects({
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.projects #=> Array
resp.projects[0].account_id #=> String
resp.projects[0].create_date #=> Time
resp.projects[0].created_by #=> String
resp.projects[0].dataset_name #=> String
resp.projects[0].last_modified_date #=> Time
resp.projects[0].last_modified_by #=> String
resp.projects[0].name #=> String
resp.projects[0].recipe_name #=> String
resp.projects[0].resource_arn #=> String
resp.projects[0].sample.size #=> Integer
resp.projects[0].sample.type #=> String, one of "FIRST_N", "LAST_N", "RANDOM"
resp.projects[0].tags #=> Hash
resp.projects[0].tags["TagKey"] #=> String
resp.projects[0].role_arn #=> String
resp.projects[0].opened_by #=> String
resp.projects[0].open_date #=> Time
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    The token returned by a previous call to retrieve the next set of results.

  • :max_results (Integer)

    The maximum number of results to return in this request.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2267

def list_projects(params = {}, options = {})
  req = build_request(:list_projects, params)
  req.send_request(options)
end

#list_recipe_versions(params = {}) ⇒ Types::ListRecipeVersionsResponse

Lists the versions of a particular DataBrew recipe, except for `LATEST_WORKING`.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_recipe_versions({
  max_results: 1,
  next_token: "NextToken",
  name: "RecipeName", # required
})

Response structure


resp.next_token #=> String
resp.recipes #=> Array
resp.recipes[0].created_by #=> String
resp.recipes[0].create_date #=> Time
resp.recipes[0].last_modified_by #=> String
resp.recipes[0].last_modified_date #=> Time
resp.recipes[0].project_name #=> String
resp.recipes[0].published_by #=> String
resp.recipes[0].published_date #=> Time
resp.recipes[0].description #=> String
resp.recipes[0].name #=> String
resp.recipes[0].resource_arn #=> String
resp.recipes[0].steps #=> Array
resp.recipes[0].steps[0].action.operation #=> String
resp.recipes[0].steps[0].action.parameters #=> Hash
resp.recipes[0].steps[0].action.parameters["ParameterName"] #=> String
resp.recipes[0].steps[0].condition_expressions #=> Array
resp.recipes[0].steps[0].condition_expressions[0].condition #=> String
resp.recipes[0].steps[0].condition_expressions[0].value #=> String
resp.recipes[0].steps[0].condition_expressions[0].target_column #=> String
resp.recipes[0].tags #=> Hash
resp.recipes[0].tags["TagKey"] #=> String
resp.recipes[0].recipe_version #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :next_token (String)

    The token returned by a previous call to retrieve the next set of results.

  • :name (required, String)

    The name of the recipe for which to return version information.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2330

def list_recipe_versions(params = {}, options = {})
  req = build_request(:list_recipe_versions, params)
  req.send_request(options)
end

#list_recipes(params = {}) ⇒ Types::ListRecipesResponse

Lists all of the DataBrew recipes that are defined.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_recipes({
  max_results: 1,
  next_token: "NextToken",
  recipe_version: "RecipeVersion",
})

Response structure


resp.recipes #=> Array
resp.recipes[0].created_by #=> String
resp.recipes[0].create_date #=> Time
resp.recipes[0].last_modified_by #=> String
resp.recipes[0].last_modified_date #=> Time
resp.recipes[0].project_name #=> String
resp.recipes[0].published_by #=> String
resp.recipes[0].published_date #=> Time
resp.recipes[0].description #=> String
resp.recipes[0].name #=> String
resp.recipes[0].resource_arn #=> String
resp.recipes[0].steps #=> Array
resp.recipes[0].steps[0].action.operation #=> String
resp.recipes[0].steps[0].action.parameters #=> Hash
resp.recipes[0].steps[0].action.parameters["ParameterName"] #=> String
resp.recipes[0].steps[0].condition_expressions #=> Array
resp.recipes[0].steps[0].condition_expressions[0].condition #=> String
resp.recipes[0].steps[0].condition_expressions[0].value #=> String
resp.recipes[0].steps[0].condition_expressions[0].target_column #=> String
resp.recipes[0].tags #=> Hash
resp.recipes[0].tags["TagKey"] #=> String
resp.recipes[0].recipe_version #=> String
resp.next_token #=> String
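
Listing working versions instead of the default published versions (a sketch)

working = client.list_recipes(recipe_version: "LATEST_WORKING").recipes
working.map(&:name) #=> Array of recipe names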

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :next_token (String)

    The token returned by a previous call to retrieve the next set of results.

  • :recipe_version (String)

    Return only those recipes with a version identifier of `LATEST_WORKING` or `LATEST_PUBLISHED`. If `RecipeVersion` is omitted, `ListRecipes` returns all of the `LATEST_PUBLISHED` recipe versions.

    Valid values: `LATEST_WORKING` | `LATEST_PUBLISHED`

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2396

def list_recipes(params = {}, options = {})
  req = build_request(:list_recipes, params)
  req.send_request(options)
end

#list_rulesets(params = {}) ⇒ Types::ListRulesetsResponse

Lists all rulesets available in the current account, or the rulesets associated with a specific resource (dataset).

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_rulesets({
  target_arn: "Arn",
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.rulesets #=> Array
resp.rulesets[0].account_id #=> String
resp.rulesets[0].created_by #=> String
resp.rulesets[0].create_date #=> Time
resp.rulesets[0].description #=> String
resp.rulesets[0].last_modified_by #=> String
resp.rulesets[0].last_modified_date #=> Time
resp.rulesets[0].name #=> String
resp.rulesets[0].resource_arn #=> String
resp.rulesets[0].rule_count #=> Integer
resp.rulesets[0].tags #=> Hash
resp.rulesets[0].tags["TagKey"] #=> String
resp.rulesets[0].target_arn #=> String
resp.next_token #=> String
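
Listing only the rulesets attached to one dataset (a sketch; dataset_arn stands in for an ARN obtained elsewhere)

client.list_rulesets(target_arn: dataset_arn).rulesets.each do |ruleset|
  puts "#{ruleset.name}: #{ruleset.rule_count} rule(s)"
end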

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :target_arn (String)

    The Amazon Resource Name (ARN) of a resource (dataset). Using this parameter indicates to return only those rulesets that are associated with the specified resource.

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :next_token (String)

    A token generated by DataBrew that specifies where to continue pagination if a previous request was truncated. To get the next set of pages, pass in the NextToken value from the response object of the previous page call.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2454

def list_rulesets(params = {}, options = {})
  req = build_request(:list_rulesets, params)
  req.send_request(options)
end

#list_schedules(params = {}) ⇒ Types::ListSchedulesResponse

Lists the DataBrew schedules that are defined.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_schedules({
  job_name: "JobName",
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.schedules #=> Array
resp.schedules[0].account_id #=> String
resp.schedules[0].created_by #=> String
resp.schedules[0].create_date #=> Time
resp.schedules[0].job_names #=> Array
resp.schedules[0].job_names[0] #=> String
resp.schedules[0].last_modified_by #=> String
resp.schedules[0].last_modified_date #=> Time
resp.schedules[0].resource_arn #=> String
resp.schedules[0].cron_expression #=> String
resp.schedules[0].tags #=> Hash
resp.schedules[0].tags["TagKey"] #=> String
resp.schedules[0].name #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_name (String)

    The name of the job that these schedules apply to.

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :next_token (String)

    The token returned by a previous call to retrieve the next set of results.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2507

def list_schedules(params = {}, options = {})
  req = build_request(:list_schedules, params)
  req.send_request(options)
end

#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse

Lists all the tags for a DataBrew resource.

Examples:

Request syntax with placeholder values


resp = client.list_tags_for_resource({
  resource_arn: "Arn", # required
})

Response structure


resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) string that uniquely identifies the DataBrew resource.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2537

def list_tags_for_resource(params = {}, options = {})
  req = build_request(:list_tags_for_resource, params)
  req.send_request(options)
end

#publish_recipe(params = {}) ⇒ Types::PublishRecipeResponse

Publishes a new version of a DataBrew recipe.

Examples:

Request syntax with placeholder values


resp = client.publish_recipe({
  description: "RecipeDescription",
  name: "RecipeName", # required
})

Response structure


resp.name #=> String
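
Publishing and then reading back the new version (a sketch; the recipe name is a placeholder)

client.publish_recipe(name: "my-recipe", description: "Nightly publish")
client.describe_recipe(name: "my-recipe").recipe_version #=> the newly published version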

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :description (String)

    A description of the recipe to be published, for this version of the recipe.

  • :name (required, String)

    The name of the recipe to be published.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2570

def publish_recipe(params = {}, options = {})
  req = build_request(:publish_recipe, params)
  req.send_request(options)
end

#send_project_session_action(params = {}) ⇒ Types::SendProjectSessionActionResponse

Performs a recipe step within an interactive DataBrew session that’s currently open.

Examples:

Request syntax with placeholder values


resp = client.send_project_session_action({
  preview: false,
  name: "ProjectName", # required
  recipe_step: {
    action: { # required
      operation: "Operation", # required
      parameters: {
        "ParameterName" => "ParameterValue",
      },
    },
    condition_expressions: [
      {
        condition: "Condition", # required
        value: "ConditionValue",
        target_column: "TargetColumn", # required
      },
    ],
  },
  step_index: 1,
  client_session_id: "ClientSessionId",
  view_frame: {
    start_column_index: 1, # required
    column_range: 1,
    hidden_columns: ["ColumnName"],
    start_row_index: 1,
    row_range: 1,
    analytics: "ENABLE", # accepts ENABLE, DISABLE
  },
})

Response structure


resp.result #=> String
resp.name #=> String
resp.action_id #=> Integer
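
Previewing a step inside an open session (a sketch; the project name is a placeholder, and the LOWER_CASE operation and its sourceColumn parameter are assumed for illustration only)

session = client.start_project_session(name: "my-project", assume_control: true)
preview = client.send_project_session_action(
  preview: true, # evaluate the step without applying it
  name: "my-project",
  client_session_id: session.client_session_id,
  recipe_step: {
    action: {
      operation: "LOWER_CASE",                  # assumed operation name
      parameters: { "sourceColumn" => "name" }, # assumed parameter
    },
  },
)
preview.result #=> String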

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :preview (Boolean)

    If true, the result of the recipe step will be returned, but not applied.

  • :name (required, String)

    The name of the project to apply the action to.

  • :recipe_step (Types::RecipeStep)

    Represents a single step from a DataBrew recipe to be performed.

  • :step_index (Integer)

    The index from which to preview a step. This index is used to preview the result of steps that have already been applied, so that the resulting view frame is from earlier in the view frame stack.

  • :client_session_id (String)

    A unique identifier for an interactive session that’s currently open and ready for work. The action will be performed on this session.

  • :view_frame (Types::ViewFrame)

    Represents the data being transformed during an action.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2648

def send_project_session_action(params = {}, options = {})
  req = build_request(:send_project_session_action, params)
  req.send_request(options)
end

#start_job_run(params = {}) ⇒ Types::StartJobRunResponse

Runs a DataBrew job.

Examples:

Request syntax with placeholder values


resp = client.start_job_run({
  name: "JobName", # required
})

Response structure


resp.run_id #=> String
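
Starting a run and checking its state (a sketch; the job name is a placeholder)

run_id = client.start_job_run(name: "my-job").run_id
client.describe_job_run(name: "my-job", run_id: run_id).state #=> e.g. "RUNNING"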

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the job to be run.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2676

def start_job_run(params = {}, options = {})
  req = build_request(:start_job_run, params)
  req.send_request(options)
end

#start_project_session(params = {}) ⇒ Types::StartProjectSessionResponse

Creates an interactive session, enabling you to manipulate data in a DataBrew project.

Examples:

Request syntax with placeholder values


resp = client.start_project_session({
  name: "ProjectName", # required
  assume_control: false,
})

Response structure


resp.name #=> String
resp.client_session_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the project to act upon.

  • :assume_control (Boolean)

    A value that, if true, enables you to take control of a session, even if a different client is currently accessing the project.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2712

def start_project_session(params = {}, options = {})
  req = build_request(:start_project_session, params)
  req.send_request(options)
end

#stop_job_run(params = {}) ⇒ Types::StopJobRunResponse

Stops a particular run of a job.

Examples:

Request syntax with placeholder values


resp = client.stop_job_run({
  name: "JobName", # required
  run_id: "JobRunId", # required
})

Response structure


resp.run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the job to be stopped.

  • :run_id (required, String)

    The ID of the job run to be stopped.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2744

def stop_job_run(params = {}, options = {})
  req = build_request(:stop_job_run, params)
  req.send_request(options)
end

#tag_resource(params = {}) ⇒ Struct

Adds metadata tags to a DataBrew resource, such as a dataset, project, recipe, job, or schedule.

Examples:

Request syntax with placeholder values


resp = client.tag_resource({
  resource_arn: "Arn", # required
  tags: { # required
    "TagKey" => "TagValue",
  },
})
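
Tagging a resource and reading the tags back (a sketch; job_arn stands in for an ARN obtained elsewhere)

client.tag_resource(resource_arn: job_arn, tags: { "team" => "analytics" })
client.list_tags_for_resource(resource_arn: job_arn).tags #=> {"team"=>"analytics", ...}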

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The DataBrew resource to which tags should be added. The value for this parameter is an Amazon Resource Name (ARN). For DataBrew, you can tag a dataset, a job, a project, or a recipe.

  • :tags (required, Hash<String,String>)

    One or more tags to be assigned to the resource.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2775

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end

#untag_resource(params = {}) ⇒ Struct

Removes metadata tags from a DataBrew resource.

Examples:

Request syntax with placeholder values


resp = client.untag_resource({
  resource_arn: "Arn", # required
  tag_keys: ["TagKey"], # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    A DataBrew resource from which you want to remove a tag or tags. The value for this parameter is an Amazon Resource Name (ARN).

  • :tag_keys (required, Array<String>)

    The tag keys (names) of one or more tags to be removed.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2802

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end

#update_dataset(params = {}) ⇒ Types::UpdateDatasetResponse

Modifies the definition of an existing DataBrew dataset.

Examples:

Request syntax with placeholder values


resp = client.update_dataset({
  name: "DatasetName", # required
  format: "CSV", # accepts CSV, JSON, PARQUET, EXCEL, ORC
  format_options: {
    json: {
      multi_line: false,
    },
    excel: {
      sheet_names: ["SheetName"],
      sheet_indexes: [1],
      header_row: false,
    },
    csv: {
      delimiter: "Delimiter",
      header_row: false,
    },
  },
  input: { # required
    s3_input_definition: {
      bucket: "Bucket", # required
      key: "Key",
      bucket_owner: "BucketOwner",
    },
    data_catalog_input_definition: {
      catalog_id: "CatalogId",
      database_name: "DatabaseName", # required
      table_name: "TableName", # required
      temp_directory: {
        bucket: "Bucket", # required
        key: "Key",
        bucket_owner: "BucketOwner",
      },
    },
    database_input_definition: {
      glue_connection_name: "GlueConnectionName", # required
      database_table_name: "DatabaseTableName",
      temp_directory: {
        bucket: "Bucket", # required
        key: "Key",
        bucket_owner: "BucketOwner",
      },
      query_string: "QueryString",
    },
    metadata: {
      source_arn: "Arn",
    },
  },
  path_options: {
    last_modified_date_condition: {
      expression: "Expression", # required
      values_map: { # required
        "ValueReference" => "ConditionValue",
      },
    },
    files_limit: {
      max_files: 1, # required
      ordered_by: "LAST_MODIFIED_DATE", # accepts LAST_MODIFIED_DATE
      order: "DESCENDING", # accepts DESCENDING, ASCENDING
    },
    parameters: {
      "PathParameterName" => {
        name: "PathParameterName", # required
        type: "Datetime", # required, accepts Datetime, Number, String
        datetime_options: {
          format: "DatetimeFormat", # required
          timezone_offset: "TimezoneOffset",
          locale_code: "LocaleCode",
        },
        create_column: false,
        filter: {
          expression: "Expression", # required
          values_map: { # required
            "ValueReference" => "ConditionValue",
          },
        },
      },
    },
  },
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the dataset to be updated.

  • :format (String)

    The file format of a dataset that is created from an Amazon S3 file or folder.

  • :format_options (Types::FormatOptions)

    Represents a set of options that define the structure of either comma-separated value (CSV), Excel, or JSON input.

  • :input (required, Types::Input)

    Represents information on how DataBrew can find data, in either the Glue Data Catalog or Amazon S3.

  • :path_options (Types::PathOptions)

    A set of options that defines how DataBrew interprets an Amazon S3 path of the dataset.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 2922

def update_dataset(params = {}, options = {})
  req = build_request(:update_dataset, params)
  req.send_request(options)
end

#update_profile_job(params = {}) ⇒ Types::UpdateProfileJobResponse

Modifies the definition of an existing profile job.

Examples:

Request syntax with placeholder values


resp = client.update_profile_job({
  configuration: {
    dataset_statistics_configuration: {
      included_statistics: ["Statistic"],
      overrides: [
        {
          statistic: "Statistic", # required
          parameters: { # required
            "ParameterName" => "ParameterValue",
          },
        },
      ],
    },
    profile_columns: [
      {
        regex: "ColumnName",
        name: "ColumnName",
      },
    ],
    column_statistics_configurations: [
      {
        selectors: [
          {
            regex: "ColumnName",
            name: "ColumnName",
          },
        ],
        statistics: { # required
          included_statistics: ["Statistic"],
          overrides: [
            {
              statistic: "Statistic", # required
              parameters: { # required
                "ParameterName" => "ParameterValue",
              },
            },
          ],
        },
      },
    ],
    entity_detector_configuration: {
      entity_types: ["EntityType"], # required
      allowed_statistics: [
        {
          statistics: ["Statistic"], # required
        },
      ],
    },
  },
  encryption_key_arn: "EncryptionKeyArn",
  encryption_mode: "SSE-KMS", # accepts SSE-KMS, SSE-S3
  name: "JobName", # required
  log_subscription: "ENABLE", # accepts ENABLE, DISABLE
  max_capacity: 1,
  max_retries: 1,
  output_location: { # required
    bucket: "Bucket", # required
    key: "Key",
    bucket_owner: "BucketOwner",
  },
  validation_configurations: [
    {
      ruleset_arn: "Arn", # required
      validation_mode: "CHECK_ALL", # accepts CHECK_ALL
    },
  ],
  role_arn: "Arn", # required
  timeout: 1,
  job_sample: {
    mode: "FULL_DATASET", # accepts FULL_DATASET, CUSTOM_ROWS
    size: 1,
  },
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :configuration (Types::ProfileConfiguration)

    Configuration for profile jobs. Used to select columns, do evaluations, and override default parameters of evaluations. When configuration is null, the profile job will run with default settings.

  • :encryption_key_arn (String)

    The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.

  • :encryption_mode (String)

    The encryption mode for the job, which can be one of the following:

    • `SSE-KMS` - Server-side encryption with keys managed by KMS.

    • `SSE-S3` - Server-side encryption with keys managed by Amazon S3.

  • :name (required, String)

    The name of the job to be updated.

  • :log_subscription (String)

    Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.

  • :max_capacity (Integer)

    The maximum number of compute nodes that DataBrew can use when the job processes data.

  • :max_retries (Integer)

    The maximum number of times to retry the job after a job run fails.

  • :output_location (required, Types::S3Location)

    Represents an Amazon S3 location (bucket name, bucket owner, and object key) where DataBrew can read input data, or write output from a job.

  • :validation_configurations (Array<Types::ValidationConfiguration>)

    List of validation configurations that are applied to the profile job.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

  • :timeout (Integer)

    The job’s timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of `TIMEOUT`.

  • :job_sample (Types::JobSample)

    Sample configuration for Profile Jobs only. Determines the number of rows on which the Profile job will be executed. If a JobSample value is not provided for profile jobs, the default value will be used. The default value is CUSTOM_ROWS for the mode parameter and 20000 for the size parameter.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3070

def update_profile_job(params = {}, options = {})
  req = build_request(:update_profile_job, params)
  req.send_request(options)
end

#update_project(params = {}) ⇒ Types::UpdateProjectResponse

Modifies the definition of an existing DataBrew project.

Examples:

Request syntax with placeholder values


resp = client.update_project({
  sample: {
    size: 1,
    type: "FIRST_N", # required, accepts FIRST_N, LAST_N, RANDOM
  },
  role_arn: "Arn", # required
  name: "ProjectName", # required
})

Response structure


resp.last_modified_date #=> Time
resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :sample (Types::Sample)

    Represents the sample size and sampling type for DataBrew to use for interactive data analysis.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of the IAM role to be assumed for this request.

  • :name (required, String)

    The name of the project to be updated.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3113

def update_project(params = {}, options = {})
  req = build_request(:update_project, params)
  req.send_request(options)
end

#update_recipe(params = {}) ⇒ Types::UpdateRecipeResponse

Modifies the definition of the `LATEST_WORKING` version of a DataBrew recipe.

Examples:

Request syntax with placeholder values


resp = client.update_recipe({
  description: "RecipeDescription",
  name: "RecipeName", # required
  steps: [
    {
      action: { # required
        operation: "Operation", # required
        parameters: {
          "ParameterName" => "ParameterValue",
        },
      },
      condition_expressions: [
        {
          condition: "Condition", # required
          value: "ConditionValue",
          target_column: "TargetColumn", # required
        },
      ],
    },
  ],
})

Response structure


resp.name #=> String
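
Revising the working version and then publishing it (a sketch; the recipe name is a placeholder, and the UPPER_CASE operation and its sourceColumn parameter are assumed for illustration only)

client.update_recipe(
  name: "my-recipe",
  steps: [
    {
      action: {
        operation: "UPPER_CASE",                   # assumed operation name
        parameters: { "sourceColumn" => "state" }, # assumed parameter
      },
    },
  ],
)
client.publish_recipe(name: "my-recipe", description: "Upper-case the state column")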

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :description (String)

    A description of the recipe.

  • :name (required, String)

    The name of the recipe to be updated.

  • :steps (Array<Types::RecipeStep>)

    One or more steps to be performed by the recipe. Each step consists of an action, and the conditions under which the action should succeed.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3167

def update_recipe(params = {}, options = {})
  req = build_request(:update_recipe, params)
  req.send_request(options)
end

#update_recipe_job(params = {}) ⇒ Types::UpdateRecipeJobResponse

Modifies the definition of an existing DataBrew recipe job.

Examples:

Request syntax with placeholder values


resp = client.update_recipe_job({
  encryption_key_arn: "EncryptionKeyArn",
  encryption_mode: "SSE-KMS", # accepts SSE-KMS, SSE-S3
  name: "JobName", # required
  log_subscription: "ENABLE", # accepts ENABLE, DISABLE
  max_capacity: 1,
  max_retries: 1,
  outputs: [
    {
      compression_format: "GZIP", # accepts GZIP, LZ4, SNAPPY, BZIP2, DEFLATE, LZO, BROTLI, ZSTD, ZLIB
      format: "CSV", # accepts CSV, JSON, PARQUET, GLUEPARQUET, AVRO, ORC, XML, TABLEAUHYPER
      partition_columns: ["ColumnName"],
      location: { # required
        bucket: "Bucket", # required
        key: "Key",
        bucket_owner: "BucketOwner",
      },
      overwrite: false,
      format_options: {
        csv: {
          delimiter: "Delimiter",
        },
      },
      max_output_files: 1,
    },
  ],
  data_catalog_outputs: [
    {
      catalog_id: "CatalogId",
      database_name: "DatabaseName", # required
      table_name: "TableName", # required
      s3_options: {
        location: { # required
          bucket: "Bucket", # required
          key: "Key",
          bucket_owner: "BucketOwner",
        },
      },
      database_options: {
        temp_directory: {
          bucket: "Bucket", # required
          key: "Key",
          bucket_owner: "BucketOwner",
        },
        table_name: "DatabaseTableName", # required
      },
      overwrite: false,
    },
  ],
  database_outputs: [
    {
      glue_connection_name: "GlueConnectionName", # required
      database_options: { # required
        temp_directory: {
          bucket: "Bucket", # required
          key: "Key",
          bucket_owner: "BucketOwner",
        },
        table_name: "DatabaseTableName", # required
      },
      database_output_mode: "NEW_TABLE", # accepts NEW_TABLE
    },
  ],
  role_arn: "Arn", # required
  timeout: 1,
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :encryption_key_arn (String)

    The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.

  • :encryption_mode (String)

    The encryption mode for the job, which can be one of the following:

    • `SSE-KMS` - Server-side encryption with keys managed by KMS.

    • `SSE-S3` - Server-side encryption with keys managed by Amazon S3.

  • :name (required, String)

    The name of the job to update.

  • :log_subscription (String)

    Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.

  • :max_capacity (Integer)

    The maximum number of nodes that DataBrew can consume when the job processes data.

  • :max_retries (Integer)

    The maximum number of times to retry the job after a job run fails.

  • :outputs (Array<Types::Output>)

    One or more artifacts that represent the output from running the job.

  • :data_catalog_outputs (Array<Types::DataCatalogOutput>)

    One or more artifacts that represent the Glue Data Catalog output from running the job.

  • :database_outputs (Array<Types::DatabaseOutput>)

    Represents a list of JDBC database output objects which defines the output destination for a DataBrew recipe job to write into.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

  • :timeout (Integer)

    The job’s timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of `TIMEOUT`.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3299

def update_recipe_job(params = {}, options = {})
  req = build_request(:update_recipe_job, params)
  req.send_request(options)
end

#update_ruleset(params = {}) ⇒ Types::UpdateRulesetResponse

Updates the specified ruleset.

Examples:

Request syntax with placeholder values


resp = client.update_ruleset({
  name: "RulesetName", # required
  description: "RulesetDescription",
  rules: [ # required
    {
      name: "RuleName", # required
      disabled: false,
      check_expression: "Expression", # required
      substitution_map: {
        "ValueReference" => "ConditionValue",
      },
      threshold: {
        value: 1.0, # required
        type: "GREATER_THAN_OR_EQUAL", # accepts GREATER_THAN_OR_EQUAL, LESS_THAN_OR_EQUAL, GREATER_THAN, LESS_THAN
        unit: "COUNT", # accepts COUNT, PERCENTAGE
      },
      column_selectors: [
        {
          regex: "ColumnName",
          name: "ColumnName",
        },
      ],
    },
  ],
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the ruleset to be updated.

  • :description (String)

    The description of the ruleset.

  • :rules (required, Array<Types::Rule>)

    A list of rules that are defined with the ruleset. A rule includes one or more checks to be validated on a DataBrew dataset.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3356

def update_ruleset(params = {}, options = {})
  req = build_request(:update_ruleset, params)
  req.send_request(options)
end

#update_schedule(params = {}) ⇒ Types::UpdateScheduleResponse

Modifies the definition of an existing DataBrew schedule.

Examples:

Request syntax with placeholder values


resp = client.update_schedule({
  job_names: ["JobName"],
  cron_expression: "CronExpression", # required
  name: "ScheduleName", # required
})

Response structure


resp.name #=> String
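
Rescheduling the attached jobs (a sketch; the schedule and job names are placeholders, and the cron string is only illustrative; verify the format against the Cron expressions guide linked below)

client.update_schedule(
  name: "nightly-schedule",
  job_names: ["my-job"],
  cron_expression: "Cron(0 23 * * ? *)", # illustrative expression
)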

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_names (Array<String>)

    The name or names of one or more jobs to be run for this schedule.

  • :cron_expression (required, String)

    The date or dates and time or times when the jobs are to be run. For more information, see [Cron expressions][1] in the *Glue DataBrew Developer Guide*.

    [1]: docs.aws.amazon.com/databrew/latest/dg/jobs.cron.html

  • :name (required, String)

    The name of the schedule to update.

Returns:

See Also:



# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3398

def update_schedule(params = {}, options = {})
  req = build_request(:update_schedule, params)
  req.send_request(options)
end

#waiter_namesObject

Deprecated. This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.


# File 'lib/aws-sdk-gluedatabrew/client.rb', line 3427

def waiter_names
  []
end