Class: Aws::Neptunedata::Types::CreateMLEndpointInput

Inherits: Struct
Includes: Structure
Defined in: lib/aws-sdk-neptunedata/types.rb

Overview

Input structure for a request to create a new Neptune ML inference endpoint.

Constant Summary

SENSITIVE =
[]

Instance Attribute Summary

Instance Attribute Details

#id ⇒ String

A unique identifier for the new inference endpoint. The default is an autogenerated timestamped name.

Returns:

  • (String)


# File 'lib/aws-sdk-neptunedata/types.rb', line 434

class CreateMLEndpointInput < Struct.new(
  :id,
  :ml_model_training_job_id,
  :ml_model_transform_job_id,
  :update,
  :neptune_iam_role_arn,
  :model_name,
  :instance_type,
  :instance_count,
  :volume_encryption_kms_key)
  SENSITIVE = []
  include Aws::Structure
end
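
For illustration only, a minimal sketch of calling the corresponding client operation (assumed here to be Aws::Neptunedata::Client#create_ml_endpoint) with an explicit id; the endpoint URL, region, and job ID are placeholders:

require "aws-sdk-neptunedata"

# The Neptune Data API client is addressed at the cluster's HTTPS endpoint.
# The URL, region, and IDs below are placeholders, not real values.
client = Aws::Neptunedata::Client.new(
  endpoint: "https://your-neptune-cluster:8182",
  region: "us-east-1"
)

# Supply an explicit id instead of relying on the autogenerated
# timestamped default.
resp = client.create_ml_endpoint(
  id: "my-inference-endpoint",
  ml_model_training_job_id: "example-training-job-id"
)

The later sketches below reuse this client object.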

#instance_count ⇒ Integer

The minimum number of Amazon EC2 instances to deploy to an endpoint for prediction. The default is 1.

Returns:

  • (Integer)


# File 'lib/aws-sdk-neptunedata/types.rb', line 434

class CreateMLEndpointInput < Struct.new(
  :id,
  :ml_model_training_job_id,
  :ml_model_transform_job_id,
  :update,
  :neptune_iam_role_arn,
  :model_name,
  :instance_type,
  :instance_count,
  :volume_encryption_kms_key)
  SENSITIVE = []
  include Aws::Structure
end

#instance_type ⇒ String

The type of Neptune ML instance to use for online servicing. The default is `ml.m5.xlarge`. Choosing the ML instance for an inference endpoint depends on the task type, the graph size, and your budget.

Returns:

  • (String)


# File 'lib/aws-sdk-neptunedata/types.rb', line 434

class CreateMLEndpointInput < Struct.new(
  :id,
  :ml_model_training_job_id,
  :ml_model_transform_job_id,
  :update,
  :neptune_iam_role_arn,
  :model_name,
  :instance_type,
  :instance_count,
  :volume_encryption_kms_key)
  SENSITIVE = []
  include Aws::Structure
end
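
A sketch of sizing the endpoint, reusing the client from the first sketch above; the instance type and count are illustrative:

# Request a larger instance type and two instances behind the endpoint.
client.create_ml_endpoint(
  ml_model_training_job_id: "example-training-job-id",
  instance_type: "ml.m5.2xlarge",
  instance_count: 2
)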

#ml_model_training_job_id ⇒ String

The job ID of the completed model-training job that created the model the inference endpoint will point to. You must supply either the `mlModelTrainingJobId` or the `mlModelTransformJobId`.

Returns:

  • (String)


# File 'lib/aws-sdk-neptunedata/types.rb', line 434

class CreateMLEndpointInput < Struct.new(
  :id,
  :ml_model_training_job_id,
  :ml_model_transform_job_id,
  :update,
  :neptune_iam_role_arn,
  :model_name,
  :instance_type,
  :instance_count,
  :volume_encryption_kms_key)
  SENSITIVE = []
  include Aws::Structure
end

#ml_model_transform_job_id ⇒ String

The job ID of the completed model-transform job. You must supply either the `mlModelTrainingJobId` or the `mlModelTransformJobId`.

Returns:

  • (String)


# File 'lib/aws-sdk-neptunedata/types.rb', line 434

class CreateMLEndpointInput < Struct.new(
  :id,
  :ml_model_training_job_id,
  :ml_model_transform_job_id,
  :update,
  :neptune_iam_role_arn,
  :model_name,
  :instance_type,
  :instance_count,
  :volume_encryption_kms_key)
  SENSITIVE = []
  include Aws::Structure
end
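
A sketch of backing the endpoint with a completed model-transform job instead of a model-training job, reusing the client from the first sketch; the job ID is a placeholder:

# Exactly one of the two job IDs is supplied; here the endpoint points
# at the artifacts produced by a model-transform job.
client.create_ml_endpoint(
  ml_model_transform_job_id: "example-transform-job-id"
)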

#model_name ⇒ String

The model type for training. By default the Neptune ML model is automatically based on the `modelType` used in data processing, but you can specify a different model type here. The default is `rgcn` for heterogeneous graphs and `kge` for knowledge graphs. The only valid value for heterogeneous graphs is `rgcn`. Valid values for knowledge graphs are: `kge`, `transe`, `distmult`, and `rotate`.

Returns:

  • (String)


# File 'lib/aws-sdk-neptunedata/types.rb', line 434

class CreateMLEndpointInput < Struct.new(
  :id,
  :ml_model_training_job_id,
  :ml_model_transform_job_id,
  :update,
  :neptune_iam_role_arn,
  :model_name,
  :instance_type,
  :instance_count,
  :volume_encryption_kms_key)
  SENSITIVE = []
  include Aws::Structure
end
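
A sketch of overriding the model type for a knowledge graph, reusing the client from the first sketch; the job ID is a placeholder:

# For a knowledge graph, request a specific KGE variant instead of the
# default kge model.
client.create_ml_endpoint(
  ml_model_training_job_id: "example-training-job-id",
  model_name: "transe"
)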

#neptune_iam_role_arn ⇒ String

The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will be thrown.

Returns:

  • (String)


# File 'lib/aws-sdk-neptunedata/types.rb', line 434

class CreateMLEndpointInput < Struct.new(
  :id,
  :ml_model_training_job_id,
  :ml_model_transform_job_id,
  :update,
  :neptune_iam_role_arn,
  :model_name,
  :instance_type,
  :instance_count,
  :volume_encryption_kms_key)
  SENSITIVE = []
  include Aws::Structure
end
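
Because this type includes Aws::Structure, it can also be built directly with a hash of member values (assuming the hash-based initializer Aws::Structure provides); the role ARN and job ID are placeholders:

input = Aws::Neptunedata::Types::CreateMLEndpointInput.new(
  ml_model_training_job_id: "example-training-job-id",
  neptune_iam_role_arn: "arn:aws:iam::123456789012:role/NeptuneMLRole"
)
input.neptune_iam_role_arn  # => "arn:aws:iam::123456789012:role/NeptuneMLRole"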

#update ⇒ Boolean

If set to `true`, `update` indicates that this is an update request. The default is `false`. You must supply either the `mlModelTrainingJobId` or the `mlModelTransformJobId`.

Returns:

  • (Boolean)


# File 'lib/aws-sdk-neptunedata/types.rb', line 434

class CreateMLEndpointInput < Struct.new(
  :id,
  :ml_model_training_job_id,
  :ml_model_transform_job_id,
  :update,
  :neptune_iam_role_arn,
  :model_name,
  :instance_type,
  :instance_count,
  :volume_encryption_kms_key)
  SENSITIVE = []
  include Aws::Structure
end
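
A sketch of an update request against an existing endpoint, reusing the client from the first sketch; the IDs are placeholders:

# update: true turns the request into an update of an existing
# inference endpoint rather than the creation of a new one.
client.create_ml_endpoint(
  id: "my-inference-endpoint",
  ml_model_training_job_id: "example-retraining-job-id",
  update: true
)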

#volume_encryption_kms_key ⇒ String

The Amazon Key Management Service (Amazon KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.

Returns:

  • (String)


# File 'lib/aws-sdk-neptunedata/types.rb', line 434

class CreateMLEndpointInput < Struct.new(
  :id,
  :ml_model_training_job_id,
  :ml_model_transform_job_id,
  :update,
  :neptune_iam_role_arn,
  :model_name,
  :instance_type,
  :instance_count,
  :volume_encryption_kms_key)
  SENSITIVE = []
  include Aws::Structure
end
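
A sketch of requesting volume encryption with a customer-managed key, reusing the client from the first sketch; the KMS key ARN is a placeholder:

# Encrypt the storage volume attached to the ML compute instances with
# a customer-managed KMS key.
client.create_ml_endpoint(
  ml_model_training_job_id: "example-training-job-id",
  volume_encryption_kms_key: "arn:aws:kms:us-east-1:123456789012:key/example-key-id"
)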