Class: Aws::CloudWatchLogs::Client
- Inherits:
-
Seahorse::Client::Base
- Object
- Seahorse::Client::Base
- Aws::CloudWatchLogs::Client
- Includes:
- Aws::ClientStubs
- Defined in:
- lib/aws-sdk-cloudwatchlogs/client.rb
Overview
An API client for CloudWatchLogs. To construct a client, you need to configure a `:region` and `:credentials`.
client = Aws::CloudWatchLogs::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)
For details on configuring region and credentials see the [developer guide](/sdk-for-ruby/v3/developer-guide/setup-config.html).
See #initialize for a full list of supported configuration options.
Class Attribute Summary collapse
- .identifier ⇒ Object readonly private
API Operations collapse
-
#associate_kms_key(params = {}) ⇒ Struct
Associates the specified KMS key with either one log group in the account, or with all stored CloudWatch Logs query insights results in the account.
-
#cancel_export_task(params = {}) ⇒ Struct
Cancels the specified export task.
-
#create_delivery(params = {}) ⇒ Types::CreateDeliveryResponse
Creates a delivery.
-
#create_export_task(params = {}) ⇒ Types::CreateExportTaskResponse
Creates an export task so that you can efficiently export data from a log group to an Amazon S3 bucket.
-
#create_log_anomaly_detector(params = {}) ⇒ Types::CreateLogAnomalyDetectorResponse
Creates an *anomaly detector* that regularly scans one or more log groups and looks for patterns and anomalies in the logs.
-
#create_log_group(params = {}) ⇒ Struct
Creates a log group with the specified name.
-
#create_log_stream(params = {}) ⇒ Struct
Creates a log stream for the specified log group.
-
#delete_account_policy(params = {}) ⇒ Struct
Deletes a CloudWatch Logs account policy.
-
#delete_data_protection_policy(params = {}) ⇒ Struct
Deletes the data protection policy from the specified log group.
-
#delete_delivery(params = {}) ⇒ Struct
Deletes a delivery.
-
#delete_delivery_destination(params = {}) ⇒ Struct
Deletes a *delivery destination*.
-
#delete_delivery_destination_policy(params = {}) ⇒ Struct
Deletes a delivery destination policy.
-
#delete_delivery_source(params = {}) ⇒ Struct
Deletes a *delivery source*.
-
#delete_destination(params = {}) ⇒ Struct
Deletes the specified destination, and eventually disables all the subscription filters that publish to it.
-
#delete_index_policy(params = {}) ⇒ Struct
Deletes a log-group level field index policy that was applied to a single log group.
-
#delete_integration(params = {}) ⇒ Struct
Deletes the integration between CloudWatch Logs and OpenSearch Service.
-
#delete_log_anomaly_detector(params = {}) ⇒ Struct
Deletes the specified CloudWatch Logs anomaly detector.
-
#delete_log_group(params = {}) ⇒ Struct
Deletes the specified log group and permanently deletes all the archived log events associated with the log group.
-
#delete_log_stream(params = {}) ⇒ Struct
Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream.
-
#delete_metric_filter(params = {}) ⇒ Struct
Deletes the specified metric filter.
-
#delete_query_definition(params = {}) ⇒ Types::DeleteQueryDefinitionResponse
Deletes a saved CloudWatch Logs Insights query definition.
-
#delete_resource_policy(params = {}) ⇒ Struct
Deletes a resource policy from this account.
-
#delete_retention_policy(params = {}) ⇒ Struct
Deletes the specified retention policy.
-
#delete_subscription_filter(params = {}) ⇒ Struct
Deletes the specified subscription filter.
-
#delete_transformer(params = {}) ⇒ Struct
Deletes the log transformer for the specified log group.
-
#describe_account_policies(params = {}) ⇒ Types::DescribeAccountPoliciesResponse
Returns a list of all CloudWatch Logs account policies in the account.
-
#describe_configuration_templates(params = {}) ⇒ Types::DescribeConfigurationTemplatesResponse
Use this operation to return the valid and default values that are used when creating delivery sources, delivery destinations, and deliveries.
-
#describe_deliveries(params = {}) ⇒ Types::DescribeDeliveriesResponse
Retrieves a list of the deliveries that have been created in the account.
-
#describe_delivery_destinations(params = {}) ⇒ Types::DescribeDeliveryDestinationsResponse
Retrieves a list of the delivery destinations that have been created in the account.
-
#describe_delivery_sources(params = {}) ⇒ Types::DescribeDeliverySourcesResponse
Retrieves a list of the delivery sources that have been created in the account.
-
#describe_destinations(params = {}) ⇒ Types::DescribeDestinationsResponse
Lists all your destinations.
-
#describe_export_tasks(params = {}) ⇒ Types::DescribeExportTasksResponse
Lists the specified export tasks.
-
#describe_field_indexes(params = {}) ⇒ Types::DescribeFieldIndexesResponse
Returns a list of field indexes listed in the field index policies of one or more log groups.
-
#describe_index_policies(params = {}) ⇒ Types::DescribeIndexPoliciesResponse
Returns the field index policies of one or more log groups.
-
#describe_log_groups(params = {}) ⇒ Types::DescribeLogGroupsResponse
Lists the specified log groups.
-
#describe_log_streams(params = {}) ⇒ Types::DescribeLogStreamsResponse
Lists the log streams for the specified log group.
-
#describe_metric_filters(params = {}) ⇒ Types::DescribeMetricFiltersResponse
Lists the specified metric filters.
-
#describe_queries(params = {}) ⇒ Types::DescribeQueriesResponse
Returns a list of CloudWatch Logs Insights queries that are scheduled, running, or have been run recently in this account.
-
#describe_query_definitions(params = {}) ⇒ Types::DescribeQueryDefinitionsResponse
This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions.
-
#describe_resource_policies(params = {}) ⇒ Types::DescribeResourcePoliciesResponse
Lists the resource policies in this account.
-
#describe_subscription_filters(params = {}) ⇒ Types::DescribeSubscriptionFiltersResponse
Lists the subscription filters for the specified log group.
-
#disassociate_kms_key(params = {}) ⇒ Struct
Disassociates the specified KMS key from the specified log group or from all CloudWatch Logs Insights query results in the account.
-
#filter_log_events(params = {}) ⇒ Types::FilterLogEventsResponse
Lists log events from the specified log group.
-
#get_data_protection_policy(params = {}) ⇒ Types::GetDataProtectionPolicyResponse
Returns information about a log group data protection policy.
-
#get_delivery(params = {}) ⇒ Types::GetDeliveryResponse
Returns complete information about one logical delivery.
-
#get_delivery_destination(params = {}) ⇒ Types::GetDeliveryDestinationResponse
Retrieves complete information about one delivery destination.
-
#get_delivery_destination_policy(params = {}) ⇒ Types::GetDeliveryDestinationPolicyResponse
Retrieves the delivery destination policy assigned to the delivery destination that you specify.
-
#get_delivery_source(params = {}) ⇒ Types::GetDeliverySourceResponse
Retrieves complete information about one delivery source.
-
#get_integration(params = {}) ⇒ Types::GetIntegrationResponse
Returns information about one integration between CloudWatch Logs and OpenSearch Service.
-
#get_log_anomaly_detector(params = {}) ⇒ Types::GetLogAnomalyDetectorResponse
Retrieves information about the log anomaly detector that you specify.
-
#get_log_events(params = {}) ⇒ Types::GetLogEventsResponse
Lists log events from the specified log stream.
-
#get_log_group_fields(params = {}) ⇒ Types::GetLogGroupFieldsResponse
Returns a list of the fields that are included in log events in the specified log group.
-
#get_log_record(params = {}) ⇒ Types::GetLogRecordResponse
Retrieves all of the fields and values of a single log event.
-
#get_query_results(params = {}) ⇒ Types::GetQueryResultsResponse
Returns the results from the specified query.
-
#get_transformer(params = {}) ⇒ Types::GetTransformerResponse
Returns the information about the log transformer associated with this log group.
-
#list_anomalies(params = {}) ⇒ Types::ListAnomaliesResponse
Returns a list of anomalies that log anomaly detectors have found.
-
#list_integrations(params = {}) ⇒ Types::ListIntegrationsResponse
Returns a list of integrations between CloudWatch Logs and other services in this account.
-
#list_log_anomaly_detectors(params = {}) ⇒ Types::ListLogAnomalyDetectorsResponse
Retrieves a list of the log anomaly detectors in the account.
-
#list_log_groups_for_query(params = {}) ⇒ Types::ListLogGroupsForQueryResponse
Returns a list of the log groups that were analyzed during a single CloudWatch Logs Insights query.
-
#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse
Displays the tags associated with a CloudWatch Logs resource.
-
#list_tags_log_group(params = {}) ⇒ Types::ListTagsLogGroupResponse
The ListTagsLogGroup operation is on the path to deprecation.
-
#put_account_policy(params = {}) ⇒ Types::PutAccountPolicyResponse
Creates an account-level data protection policy, subscription filter policy, or field index policy that applies to all log groups or a subset of log groups in the account.
-
#put_data_protection_policy(params = {}) ⇒ Types::PutDataProtectionPolicyResponse
Creates a data protection policy for the specified log group.
-
#put_delivery_destination(params = {}) ⇒ Types::PutDeliveryDestinationResponse
Creates or updates a logical *delivery destination*.
-
#put_delivery_destination_policy(params = {}) ⇒ Types::PutDeliveryDestinationPolicyResponse
Creates and assigns an IAM policy that grants permissions to CloudWatch Logs to deliver logs cross-account to a specified destination in this account.
-
#put_delivery_source(params = {}) ⇒ Types::PutDeliverySourceResponse
Creates or updates a logical *delivery source*.
-
#put_destination(params = {}) ⇒ Types::PutDestinationResponse
Creates or updates a destination.
-
#put_destination_policy(params = {}) ⇒ Struct
Creates or updates an access policy associated with an existing destination.
-
#put_index_policy(params = {}) ⇒ Types::PutIndexPolicyResponse
Creates or updates a *field index policy* for the specified log group.
-
#put_integration(params = {}) ⇒ Types::PutIntegrationResponse
Creates an integration between CloudWatch Logs and another service in this account.
-
#put_log_events(params = {}) ⇒ Types::PutLogEventsResponse
Uploads a batch of log events to the specified log stream.
-
#put_metric_filter(params = {}) ⇒ Struct
Creates or updates a metric filter and associates it with the specified log group.
-
#put_query_definition(params = {}) ⇒ Types::PutQueryDefinitionResponse
Creates or updates a query definition for CloudWatch Logs Insights.
-
#put_resource_policy(params = {}) ⇒ Types::PutResourcePolicyResponse
Creates or updates a resource policy allowing other Amazon Web Services services to put log events to this account, such as Amazon Route 53.
-
#put_retention_policy(params = {}) ⇒ Struct
Sets the retention of the specified log group.
-
#put_subscription_filter(params = {}) ⇒ Struct
Creates or updates a subscription filter and associates it with the specified log group.
-
#put_transformer(params = {}) ⇒ Struct
Creates or updates a *log transformer* for a single log group.
-
#start_live_tail(params = {}) ⇒ Types::StartLiveTailResponse
Starts a Live Tail streaming session for one or more log groups.
-
#start_query(params = {}) ⇒ Types::StartQueryResponse
Starts a query of one or more log groups using CloudWatch Logs Insights.
-
#stop_query(params = {}) ⇒ Types::StopQueryResponse
Stops a CloudWatch Logs Insights query that is in progress.
-
#tag_log_group(params = {}) ⇒ Struct
The TagLogGroup operation is on the path to deprecation.
-
#tag_resource(params = {}) ⇒ Struct
Assigns one or more tags (key-value pairs) to the specified CloudWatch Logs resource.
-
#test_metric_filter(params = {}) ⇒ Types::TestMetricFilterResponse
Tests the filter pattern of a metric filter against a sample of log event messages.
-
#test_transformer(params = {}) ⇒ Types::TestTransformerResponse
Use this operation to test a log transformer.
-
#untag_log_group(params = {}) ⇒ Struct
The UntagLogGroup operation is on the path to deprecation.
-
#untag_resource(params = {}) ⇒ Struct
Removes one or more tags from the specified resource.
-
#update_anomaly(params = {}) ⇒ Struct
Use this operation to suppress anomaly detection for a specified anomaly or pattern.
-
#update_delivery_configuration(params = {}) ⇒ Struct
Use this operation to update the configuration of a [delivery] to change either the S3 path pattern or the format of the delivered logs.
-
#update_log_anomaly_detector(params = {}) ⇒ Struct
Updates an existing log anomaly detector.
Class Method Summary collapse
- .errors_module ⇒ Object private
Instance Method Summary collapse
- #build_request(operation_name, params = {}) ⇒ Object private
-
#initialize(options) ⇒ Client
constructor
A new instance of Client.
- #waiter_names ⇒ Object deprecated private Deprecated.
Constructor Details
#initialize(options) ⇒ Client
Returns a new instance of Client.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 485

def initialize(*args)
  super
end
Class Attribute Details
.identifier ⇒ Object (readonly)
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 7077

def identifier
  @identifier
end
Class Method Details
.errors_module ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 7080

def errors_module
  Errors
end
Instance Method Details
#associate_kms_key(params = {}) ⇒ Struct
Associates the specified KMS key with either one log group in the account, or with all stored CloudWatch Logs query insights results in the account.
When you use `AssociateKmsKey`, you specify either the `logGroupName` parameter or the `resourceIdentifier` parameter. You can’t specify both of those parameters in the same operation.
-
Specify the `logGroupName` parameter to cause log events ingested into that log group to be encrypted with that key. Only the log events ingested after the key is associated are encrypted with that key.
Associating a KMS key with a log group overrides any existing associations between the log group and a KMS key. After a KMS key is associated with a log group, all newly ingested data for the log group is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
Associating a key with a log group does not cause the results of queries of that log group to be encrypted with that key. To have query results encrypted with a KMS key, you must use an `AssociateKmsKey` operation with the `resourceIdentifier` parameter that specifies a `query-result` resource.
-
Specify the `resourceIdentifier` parameter with a `query-result` resource, to use that key to encrypt the stored results of all future [StartQuery] operations in the account. The response from a [GetQueryResults] operation will still return the query results in plain text.
Even if you have not associated a key with your query results, the query results are encrypted when stored, using the default CloudWatch Logs method.
If you run a query from a monitoring account that queries logs in a source account, the query results key from the monitoring account, if any, is used.
If you delete the key that is used to encrypt log events or log group query results, then all the associated stored log events or query results that were encrypted with that key will be unencryptable and unusable.
<note markdown="1"> CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group or query results. For more information, see [Using Symmetric and Asymmetric Keys].
</note>
It can take up to 5 minutes for this operation to take effect.
If you attempt to associate a KMS key with a log group but the KMS key does not exist or the KMS key is disabled, you receive an `InvalidParameterException` error.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_StartQuery.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetQueryResults.html [3]: docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html
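As a minimal sketch of both forms of the call (the log group name, key ARN, and query-result identifier below are placeholders, not values from this reference):
require "aws-sdk-cloudwatchlogs"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# Form 1: encrypt future log events ingested into a single log group.
client.associate_kms_key(
  log_group_name: "my-log-group",                                      # placeholder
  kms_key_id: "arn:aws:kms:us-east-1:111122223333:key/example-key-id"  # placeholder ARN
)

# Form 2: encrypt stored CloudWatch Logs Insights query results account-wide.
client.associate_kms_key(
  resource_identifier: "arn:aws:logs:us-east-1:111122223333:query-result:*", # placeholder; check the API reference for the exact format
  kms_key_id: "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
)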
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 613

def associate_kms_key(params = {}, options = {})
  req = build_request(:associate_kms_key, params)
  req.send_request(options)
end
#build_request(operation_name, params = {}) ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 7050

def build_request(operation_name, params = {})
  handlers = @handlers.for(operation_name)
  tracer = config.telemetry_provider.tracer_provider.tracer(
    Aws::Telemetry.module_to_tracer_name('Aws::CloudWatchLogs')
  )
  context = Seahorse::Client::RequestContext.new(
    operation_name: operation_name,
    operation: config.api.operation(operation_name),
    client: self,
    params: params,
    config: config,
    tracer: tracer
  )
  context[:gem_name] = 'aws-sdk-cloudwatchlogs'
  context[:gem_version] = '1.110.0'
  Seahorse::Client::Request.new(handlers, context)
end
#cancel_export_task(params = {}) ⇒ Struct
Cancels the specified export task.
The task must be in the `PENDING` or `RUNNING` state.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 637

def cancel_export_task(params = {}, options = {})
  req = build_request(:cancel_export_task, params)
  req.send_request(options)
end
#create_delivery(params = {}) ⇒ Types::CreateDeliveryResponse
Creates a delivery. A delivery is a connection between a logical *delivery source* and a logical *delivery destination* that you have already created.
Only some Amazon Web Services services support being configured as a delivery source using this operation. These services are listed as **Supported [V2 Permissions]** in the table at [Enabling logging from Amazon Web Services services.]
A delivery destination can represent a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose.
To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
-
Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see [PutDeliverySource].
-
Create a *delivery destination*, which is a logical object that represents the actual delivery destination. For more information, see [PutDeliveryDestination].
-
If you are delivering logs cross-account, you must use [PutDeliveryDestinationPolicy][4] in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
-
Use `CreateDelivery` to create a delivery by pairing exactly one delivery source and one delivery destination.
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
To update an existing delivery configuration, use [UpdateDeliveryConfiguration].
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html [3]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html [4]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html [5]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_UpdateDeliveryConfiguration.html
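A rough sketch of the final pairing step, assuming the delivery source and delivery destination named here were already created with `put_delivery_source` and `put_delivery_destination` (both names and the ARN are placeholders):
resp = client.create_delivery(
  delivery_source_name: "my-delivery-source",
  delivery_destination_arn: "arn:aws:logs:us-east-1:111122223333:delivery-destination:my-destination" # placeholder ARN
)
puts resp.delivery.id  # the delivery ID you can later pass to get_delivery or delete_delivery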
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 757

def create_delivery(params = {}, options = {})
  req = build_request(:create_delivery, params)
  req.send_request(options)
end
#create_export_task(params = {}) ⇒ Types::CreateExportTaskResponse
Creates an export task so that you can efficiently export data from a log group to an Amazon S3 bucket. When you perform a `CreateExportTask` operation, you must use credentials that have permission to write to the S3 bucket that you specify as the destination.
Exporting log data to S3 buckets that are encrypted by KMS is supported. Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a retention period is also supported.
Exporting to S3 buckets that are encrypted with AES-256 is supported.
This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use [DescribeExportTasks][1] to get the status of the export task. Each account can only have one active (`RUNNING` or `PENDING`) export task at a time. To cancel an export task, use [CancelExportTask].
You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate log data for each export task, specify a prefix to be used as the Amazon S3 key prefix for all exported objects.
<note markdown="1"> We recommend that you don’t regularly export to Amazon S3 as a way to continuously archive your logs. For that use case, we instead recommend that you use subscriptions. For more information about subscriptions, see [Real-time processing of log data with subscriptions].
</note>
<note markdown="1"> Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can sort the exported log field data by using Linux utilities.
</note>
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeExportTasks.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CancelExportTask.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
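A minimal sketch, assuming `client` is the client built in the Overview; the bucket and log group names are placeholders, and `from`/`to` are epoch milliseconds:
now_ms = (Time.now.to_f * 1000).to_i

resp = client.create_export_task(
  task_name: "nightly-export",          # illustrative name
  log_group_name: "my-log-group",       # placeholder
  from: now_ms - 24 * 60 * 60 * 1000,   # start of the export window, in milliseconds
  to: now_ms,                           # end of the export window, in milliseconds
  destination: "my-export-bucket",      # placeholder S3 bucket name
  destination_prefix: "cloudwatch-exports"
)

# The call is asynchronous; poll the task status with describe_export_tasks.
status = client.describe_export_tasks(task_id: resp.task_id).export_tasks.first&.status&.code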
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 865

def create_export_task(params = {}, options = {})
  req = build_request(:create_export_task, params)
  req.send_request(options)
end
#create_log_anomaly_detector(params = {}) ⇒ Types::CreateLogAnomalyDetectorResponse
Creates an *anomaly detector* that regularly scans one or more log groups and looks for patterns and anomalies in the logs.
An anomaly detector can help surface issues by automatically discovering anomalies in your log event traffic. An anomaly detector uses machine learning algorithms to scan log events and find patterns. A pattern is a shared text structure that recurs among your log fields. Patterns provide a useful tool for analyzing large sets of logs because a large number of log events can often be compressed into a few patterns.
The anomaly detector uses pattern recognition to find `anomalies`, which are unusual log events. It uses the `evaluationFrequency` to compare current log events and patterns with trained baselines.
Fields within a pattern are called tokens. Fields that vary within a pattern, such as a request ID or timestamp, are referred to as *dynamic tokens* and represented by `<*>`.
The following is an example of a pattern:
`[INFO] Request time: <*> ms`
This pattern represents log events like `[INFO] Request time: 327 ms` and other similar log events that differ only by the number, in this case 327. When the pattern is displayed, the different numbers are replaced by `<*>`.
<note markdown="1"> Any parts of log events that are masked as sensitive data are not scanned for anomalies. For more information about masking sensitive data, see [Help protect sensitive log data with masking].
</note>
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/mask-sensitive-log-data.html
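A sketch of creating a detector for one log group; the ARN is a placeholder and the `evaluation_frequency` string is an assumed enum value (consult the API reference for the accepted list):
resp = client.create_log_anomaly_detector(
  log_group_arn_list: ["arn:aws:logs:us-east-1:111122223333:log-group:my-log-group"], # placeholder ARN
  detector_name: "my-anomaly-detector",
  evaluation_frequency: "FIFTEEN_MIN"  # assumed value; see the API reference for valid frequencies
)
puts resp.anomaly_detector_arn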
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 992

def create_log_anomaly_detector(params = {}, options = {})
  req = build_request(:create_log_anomaly_detector, params)
  req.send_request(options)
end
#create_log_group(params = {}) ⇒ Struct
Creates a log group with the specified name. You can create up to 1,000,000 log groups per Region per account.
You must use the following guidelines when naming a log group:
-
Log group names must be unique within a Region for an Amazon Web Services account.
-
Log group names can be between 1 and 512 characters long.
-
Log group names consist of the following characters: a-z, A-Z, 0-9, ‘_’ (underscore), ‘-’ (hyphen), ‘/’ (forward slash), ‘.’ (period), and ‘#’ (number sign)
-
Log group names can’t start with the string `aws/`
When you create a log group, by default the log events in the log group do not expire. To set a retention policy so that events expire and are deleted after a specified time, use [PutRetentionPolicy].
If you associate a KMS key with the log group, ingested data is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
If you attempt to associate a KMS key with the log group but the KMS key does not exist or the KMS key is disabled, you receive an `InvalidParameterException` error.
CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group. For more information, see [Using Symmetric and Asymmetric Keys].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutRetentionPolicy.html [2]: docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html
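For example (the log group name and key ARN are placeholders), creating a tagged, encrypted log group and then giving it a retention policy, since log events never expire by default:
client.create_log_group(
  log_group_name: "/my-app/production",  # must follow the naming rules above
  kms_key_id: "arn:aws:kms:us-east-1:111122223333:key/example-key-id", # optional
  tags: { "Team" => "platform" }
)

# Retention is configured separately.
client.put_retention_policy(
  log_group_name: "/my-app/production",
  retention_in_days: 30
)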
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1101

def create_log_group(params = {}, options = {})
  req = build_request(:create_log_group, params)
  req.send_request(options)
end
#create_log_stream(params = {}) ⇒ Struct
Creates a log stream for the specified log group. A log stream is a sequence of log events that originate from a single source, such as an application instance or a resource that is being monitored.
There is no limit on the number of log streams that you can create for a log group. There is a limit of 50 TPS on `CreateLogStream` operations, after which transactions are throttled.
You must use the following guidelines when naming a log stream:
-
Log stream names must be unique within the log group.
-
Log stream names can be between 1 and 512 characters long.
-
Don’t use ‘:’ (colon) or ‘*’ (asterisk) characters.
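A short sketch that follows the naming guidelines above (both names are placeholders):
client.create_log_stream(
  log_group_name: "/my-app/production",
  log_stream_name: "web-01-2024-06-01"  # placeholder; avoid ':' and '*' characters
)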
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1141

def create_log_stream(params = {}, options = {})
  req = build_request(:create_log_stream, params)
  req.send_request(options)
end
#delete_account_policy(params = {}) ⇒ Struct
Deletes a CloudWatch Logs account policy. This stops the account-wide policy from applying to log groups in the account. If you delete a data protection policy or subscription filter policy, any log-group level policies of those types remain in effect.
To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are deleting.
-
To delete a data protection policy, you must have the `logs:DeleteDataProtectionPolicy` and `logs:DeleteAccountPolicy` permissions.
-
To delete a subscription filter policy, you must have the `logs:DeleteSubscriptionFilter` and `logs:DeleteAccountPolicy` permissions.
-
To delete a transformer policy, you must have the `logs:DeleteTransformer` and `logs:DeleteAccountPolicy` permissions.
-
To delete a field index policy, you must have the `logs:DeleteIndexPolicy` and `logs:DeleteAccountPolicy` permissions.
If you delete a field index policy, the indexing of the log events that happened before you deleted the policy will still be used for up to 30 days to improve CloudWatch Logs Insights queries.
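For example, removing an account-level data protection policy (the policy name is a placeholder):
client.delete_account_policy(
  policy_name: "my-data-protection-policy",  # placeholder
  policy_type: "DATA_PROTECTION_POLICY"
)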
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1191

def delete_account_policy(params = {}, options = {})
  req = build_request(:delete_account_policy, params)
  req.send_request(options)
end
#delete_data_protection_policy(params = {}) ⇒ Struct
Deletes the data protection policy from the specified log group.
For more information about data protection policies, see [PutDataProtectionPolicy].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDataProtectionPolicy.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1221

def delete_data_protection_policy(params = {}, options = {})
  req = build_request(:delete_data_protection_policy, params)
  req.send_request(options)
end
#delete_delivery(params = {}) ⇒ Struct
Deletes a delivery. A delivery is a connection between a logical *delivery source* and a logical *delivery destination*. Deleting a delivery only deletes the connection between the delivery source and delivery destination. It does not delete the delivery destination or the delivery source.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1252

def delete_delivery(params = {}, options = {})
  req = build_request(:delete_delivery, params)
  req.send_request(options)
end
#delete_delivery_destination(params = {}) ⇒ Struct
Deletes a *delivery destination*. A delivery is a connection between a logical *delivery source* and a logical *delivery destination*.
You can’t delete a delivery destination if any current deliveries are associated with it. To find whether any deliveries are associated with this delivery destination, use the [DescribeDeliveries] operation and check the `deliveryDestinationArn` field in the results.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeDeliveries.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1290

def delete_delivery_destination(params = {}, options = {})
  req = build_request(:delete_delivery_destination, params)
  req.send_request(options)
end
#delete_delivery_destination_policy(params = {}) ⇒ Struct
Deletes a delivery destination policy. For more information about these policies, see [PutDeliveryDestinationPolicy].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1318

def delete_delivery_destination_policy(params = {}, options = {})
  req = build_request(:delete_delivery_destination_policy, params)
  req.send_request(options)
end
#delete_delivery_source(params = {}) ⇒ Struct
Deletes a *delivery source*. A delivery is a connection between a logical *delivery source* and a logical *delivery destination*.
You can’t delete a delivery source if any current deliveries are associated with it. To find whether any deliveries are associated with this delivery source, use the [DescribeDeliveries] operation and check the `deliverySourceName` field in the results.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeDeliveries.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1350

def delete_delivery_source(params = {}, options = {})
  req = build_request(:delete_delivery_source, params)
  req.send_request(options)
end
#delete_destination(params = {}) ⇒ Struct
Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1374

def delete_destination(params = {}, options = {})
  req = build_request(:delete_destination, params)
  req.send_request(options)
end
#delete_index_policy(params = {}) ⇒ Struct
Deletes a log-group level field index policy that was applied to a single log group. The indexing of the log events that happened before you delete the policy will still be used for as many as 30 days to improve CloudWatch Logs Insights queries.
You can’t use this operation to delete an account-level index policy. Instead, use [DeleteAccountPolicy].
If you delete a log-group level field index policy and there is an account-level field index policy, in a few minutes the log group begins using that account-wide policy to index new incoming log events.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DeleteAccountPolicy.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1412

def delete_index_policy(params = {}, options = {})
  req = build_request(:delete_index_policy, params)
  req.send_request(options)
end
#delete_integration(params = {}) ⇒ Struct
Deletes the integration between CloudWatch Logs and OpenSearch Service. If your integration has active vended logs dashboards, you must specify `true` for the `force` parameter, otherwise the operation will fail. If you delete the integration by setting `force` to `true`, all your vended logs dashboards powered by OpenSearch Service will be deleted and the data that was on them will no longer be accessible.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1451

def delete_integration(params = {}, options = {})
  req = build_request(:delete_integration, params)
  req.send_request(options)
end
#delete_log_anomaly_detector(params = {}) ⇒ Struct
Deletes the specified CloudWatch Logs anomaly detector.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1479

def delete_log_anomaly_detector(params = {}, options = {})
  req = build_request(:delete_log_anomaly_detector, params)
  req.send_request(options)
end
#delete_log_group(params = {}) ⇒ Struct
Deletes the specified log group and permanently deletes all the archived log events associated with the log group.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1502

def delete_log_group(params = {}, options = {})
  req = build_request(:delete_log_group, params)
  req.send_request(options)
end
#delete_log_stream(params = {}) ⇒ Struct
Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1529

def delete_log_stream(params = {}, options = {})
  req = build_request(:delete_log_stream, params)
  req.send_request(options)
end
#delete_metric_filter(params = {}) ⇒ Struct
Deletes the specified metric filter.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1555

def delete_metric_filter(params = {}, options = {})
  req = build_request(:delete_metric_filter, params)
  req.send_request(options)
end
#delete_query_definition(params = {}) ⇒ Types::DeleteQueryDefinitionResponse
Deletes a saved CloudWatch Logs Insights query definition. A query definition contains details about a saved CloudWatch Logs Insights query.
Each `DeleteQueryDefinition` operation can delete one query definition.
You must have the `logs:DeleteQueryDefinition` permission to be able to perform this operation.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1597

def delete_query_definition(params = {}, options = {})
  req = build_request(:delete_query_definition, params)
  req.send_request(options)
end
#delete_resource_policy(params = {}) ⇒ Struct
Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1620

def delete_resource_policy(params = {}, options = {})
  req = build_request(:delete_resource_policy, params)
  req.send_request(options)
end
#delete_retention_policy(params = {}) ⇒ Struct
Deletes the specified retention policy.
Log events do not expire if they belong to log groups without a retention policy.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1645

def delete_retention_policy(params = {}, options = {})
  req = build_request(:delete_retention_policy, params)
  req.send_request(options)
end
#delete_subscription_filter(params = {}) ⇒ Struct
Deletes the specified subscription filter.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1671

def delete_subscription_filter(params = {}, options = {})
  req = build_request(:delete_subscription_filter, params)
  req.send_request(options)
end
#delete_transformer(params = {}) ⇒ Struct
Deletes the log transformer for the specified log group. As soon as you do this, the transformation of incoming log events according to that transformer stops. If this account has an account-level transformer that applies to this log group, the log group begins using that account-level transformer when this log-group level transformer is deleted.
After you delete a transformer, be sure to edit any metric filters or subscription filters that relied on the transformed versions of the log events.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1704

def delete_transformer(params = {}, options = {})
  req = build_request(:delete_transformer, params)
  req.send_request(options)
end
#describe_account_policies(params = {}) ⇒ Types::DescribeAccountPoliciesResponse
Returns a list of all CloudWatch Logs account policies in the account.
To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are retrieving information for.
-
To see data protection policies, you must have the `logs:GetDataProtectionPolicy` and `logs:DescribeAccountPolicies` permissions.
-
To see subscription filter policies, you must have the `logs:DescribeSubscriptionFilters` and `logs:DescribeAccountPolicies` permissions.
-
To see transformer policies, you must have the `logs:GetTransformer` and `logs:DescribeAccountPolicies` permissions.
-
To see field index policies, you must have the `logs:DescribeIndexPolicies` and `logs:DescribeAccountPolicies` permissions.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1782

def describe_account_policies(params = {}, options = {})
  req = build_request(:describe_account_policies, params)
  req.send_request(options)
end
#describe_configuration_templates(params = {}) ⇒ Types::DescribeConfigurationTemplatesResponse
Use this operation to return the valid and default values that are used when creating delivery sources, delivery destinations, and deliveries. For more information about deliveries, see [CreateDelivery].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1870

def describe_configuration_templates(params = {}, options = {})
  req = build_request(:describe_configuration_templates, params)
  req.send_request(options)
end
#describe_deliveries(params = {}) ⇒ Types::DescribeDeliveriesResponse
Retrieves a list of the deliveries that have been created in the account.
A delivery is a connection between a [ *delivery source* ][1] and a [ *delivery destination* ][2].
A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in [Enable logging from Amazon Web Services services.]
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1936

def describe_deliveries(params = {}, options = {})
  req = build_request(:describe_deliveries, params)
  req.send_request(options)
end
#describe_delivery_destinations(params = {}) ⇒ Types::DescribeDeliveryDestinationsResponse
Retrieves a list of the delivery destinations that have been created in the account.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 1982

def describe_delivery_destinations(params = {}, options = {})
  req = build_request(:describe_delivery_destinations, params)
  req.send_request(options)
end
#describe_delivery_sources(params = {}) ⇒ Types::DescribeDeliverySourcesResponse
Retrieves a list of the delivery sources that have been created in the account.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2029

def describe_delivery_sources(params = {}, options = {})
  req = build_request(:describe_delivery_sources, params)
  req.send_request(options)
end
#describe_destinations(params = {}) ⇒ Types::DescribeDestinationsResponse
Lists all your destinations. The results are ASCII-sorted by destination name.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2079

def describe_destinations(params = {}, options = {})
  req = build_request(:describe_destinations, params)
  req.send_request(options)
end
#describe_export_tasks(params = {}) ⇒ Types::DescribeExportTasksResponse
Lists the specified export tasks. You can list all your export tasks or filter the results based on task ID or task status.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2137

def describe_export_tasks(params = {}, options = {})
  req = build_request(:describe_export_tasks, params)
  req.send_request(options)
end
#describe_field_indexes(params = {}) ⇒ Types::DescribeFieldIndexesResponse
Returns a list of field indexes listed in the field index policies of one or more log groups. For more information about field index policies, see [PutIndexPolicy].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutIndexPolicy.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2184

def describe_field_indexes(params = {}, options = {})
  req = build_request(:describe_field_indexes, params)
  req.send_request(options)
end
#describe_index_policies(params = {}) ⇒ Types::DescribeIndexPoliciesResponse
Returns the field index policies of one or more log groups. For more information about field index policies, see [PutIndexPolicy].
If a specified log group has a log-group level index policy, that policy is returned by this operation.
If a specified log group doesn’t have a log-group level index policy, but an account-wide index policy applies to it, that account-wide policy is returned by this operation.
To find information about only account-level policies, use [DescribeAccountPolicies][2] instead.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutIndexPolicy.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeAccountPolicies.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2241

def describe_index_policies(params = {}, options = {})
  req = build_request(:describe_index_policies, params)
  req.send_request(options)
end
#describe_log_groups(params = {}) ⇒ Types::DescribeLogGroupsResponse
Lists the specified log groups. You can list all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.
CloudWatch Logs doesn’t support IAM policies that control access to the `DescribeLogGroups` action by using the `aws:ResourceTag/key-name ` condition key. Other CloudWatch Logs actions do support the use of the `aws:ResourceTag/key-name ` condition key to control access. For more information about using tags to control access, see [Controlling access to Amazon Web Services resources using tags].
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see [CloudWatch cross-account observability].
[1]: docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html [2]: docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
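Because the response is pageable and Enumerable, you can iterate pages directly; a sketch that lists log groups under an illustrative prefix:
client.describe_log_groups(log_group_name_prefix: "/my-app/").each do |page|
  page.log_groups.each do |group|
    puts "#{group.log_group_name} retention=#{group.retention_in_days.inspect}"
  end
end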
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2369

def describe_log_groups(params = {}, options = {})
  req = build_request(:describe_log_groups, params)
  req.send_request(options)
end
#describe_log_streams(params = {}) ⇒ Types::DescribeLogStreamsResponse
Lists the log streams for the specified log group. You can list all the log streams or filter the results by prefix. You can also control how the results are ordered.
You can specify the log group to search by using either `logGroupIdentifier` or `logGroupName`. You must include one of these two parameters, but you can’t include both.
This operation has a limit of 25 transactions per second, after which transactions are throttled.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see [CloudWatch cross-account observability].
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
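A sketch that lists the most recently active streams in a log group (the name is a placeholder):
resp = client.describe_log_streams(
  log_group_name: "/my-app/production",  # or log_group_identifier:, but not both
  order_by: "LastEventTime",
  descending: true,
  limit: 5
)
resp.log_streams.each do |stream|
  puts "#{stream.log_stream_name} last_event=#{stream.last_event_timestamp}"
end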
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2481

def describe_log_streams(params = {}, options = {})
  req = build_request(:describe_log_streams, params)
  req.send_request(options)
end
#describe_metric_filters(params = {}) ⇒ Types::DescribeMetricFiltersResponse
Lists the specified metric filters. You can list all of the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2555

def describe_metric_filters(params = {}, options = {})
  req = build_request(:describe_metric_filters, params)
  req.send_request(options)
end
#describe_queries(params = {}) ⇒ Types::DescribeQueriesResponse
Returns a list of CloudWatch Logs Insights queries that are scheduled, running, or have been run recently in this account. You can request all queries or limit it to queries of a specific log group or queries with a certain status.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2614

def describe_queries(params = {}, options = {})
  req = build_request(:describe_queries, params)
  req.send_request(options)
end
#describe_query_definitions(params = {}) ⇒ Types::DescribeQueryDefinitionsResponse
This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions. You can retrieve query definitions from the current account or from a source account that is linked to the current account.
You can use the `queryDefinitionNamePrefix` parameter to limit the results to only the query definitions that have names that start with a certain string.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2679

def describe_query_definitions(params = {}, options = {})
  req = build_request(:describe_query_definitions, params)
  req.send_request(options)
end
#describe_resource_policies(params = {}) ⇒ Types::DescribeResourcePoliciesResponse
Lists the resource policies in this account.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2718

def describe_resource_policies(params = {}, options = {})
  req = build_request(:describe_resource_policies, params)
  req.send_request(options)
end
#describe_subscription_filters(params = {}) ⇒ Types::DescribeSubscriptionFiltersResponse
Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2775

def describe_subscription_filters(params = {}, options = {})
  req = build_request(:describe_subscription_filters, params)
  req.send_request(options)
end
#disassociate_kms_key(params = {}) ⇒ Struct
Disassociates the specified KMS key from the specified log group or from all CloudWatch Logs Insights query results in the account.
When you use `DisassociateKmsKey`, you specify either the `logGroupName` parameter or the `resourceIdentifier` parameter. You can’t specify both of those parameters in the same operation.
-
Specify the `logGroupName` parameter to stop using the KMS key to encrypt future log events ingested and stored in the log group. Instead, they will be encrypted with the default CloudWatch Logs method. The log events that were ingested while the key was associated with the log group are still encrypted with that key. Therefore, CloudWatch Logs will need permissions for the key whenever that data is accessed.
-
Specify the `resourceIdentifier` parameter with the `query-result` resource to stop using the KMS key to encrypt the results of all future [StartQuery] operations in the account. They will instead be encrypted with the default CloudWatch Logs method. The results from queries that ran while the key was associated with the account are still encrypted with that key. Therefore, CloudWatch Logs will need permissions for the key whenever that data is accessed.
It can take up to 5 minutes for this operation to take effect.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_StartQuery.html
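A sketch of both forms (the values are placeholders):
# Stop encrypting future log events in one log group with the customer KMS key.
client.disassociate_kms_key(log_group_name: "my-log-group")

# Or: stop encrypting stored query results account-wide.
client.disassociate_kms_key(
  resource_identifier: "arn:aws:logs:us-east-1:111122223333:query-result:*" # placeholder; check the API reference for the exact format
)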
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 2856

def disassociate_kms_key(params = {}, options = {})
  req = build_request(:disassociate_kms_key, params)
  req.send_request(options)
end
#filter_log_events(params = {}) ⇒ Types::FilterLogEventsResponse
Lists log events from the specified log group. You can list all the log events or filter the results using one or more of the following:
-
A filter pattern
-
A time range
-
The log stream name, or a log stream name prefix that matches multiple log streams
You must have the `logs:FilterLogEvents` permission to perform this operation.
You can specify the log group to search by using either `logGroupIdentifier` or `logGroupName`. You must include one of these two parameters, but you can’t include both.
`FilterLogEvents` is a paginated operation. Each page returned can contain up to 1 MB of log events or up to 10,000 log events. A returned page might only be partially full, or even empty. For example, if the result of a query would return 15,000 log events, the first page isn’t guaranteed to have 10,000 log events even if they all fit into 1 MB.
Partially full or empty pages don’t necessarily mean that pagination is finished. If the results include a `nextToken`, there might be more log events available. You can return these additional log events by providing the `nextToken` in a subsequent `FilterLogEvents` operation. If the results don’t include a `nextToken`, then pagination is finished.
<note markdown="1"> If you set `startFromHead` to `true` and you don’t include `endTime` in your request, you can end up in a situation where the pagination doesn’t terminate. This can happen when the new log events are being added to the target log streams faster than they are being read. This situation is a good use case for the CloudWatch Logs [Live Tail] feature.
</note>
The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the `PutLogEvents` request.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see [CloudWatch cross-account observability].
<note markdown="1"> If you are using [log transformation], the `FilterLogEvents` operation returns only the original versions of log events, before they were transformed. To view the transformed versions, you must use a [CloudWatch Logs query.]
</note>
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs_LiveTail.html [2]: docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation.html [4]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
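Since the response is pageable and Enumerable, a sketch that scans the last hour of a log group for a filter pattern (the names are placeholders; times are epoch milliseconds):
now_ms = (Time.now.to_f * 1000).to_i

client.filter_log_events(
  log_group_name: "/my-app/production",  # or log_group_identifier:, but not both
  filter_pattern: "ERROR",
  start_time: now_ms - 60 * 60 * 1000,
  end_time: now_ms
).each do |page|
  page.events.each do |event|
    puts "#{event.log_stream_name} #{Time.at(event.timestamp / 1000.0)} #{event.message}"
  end
end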
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3043

def filter_log_events(params = {}, options = {})
  req = build_request(:filter_log_events, params)
  req.send_request(options)
end
#get_data_protection_policy(params = {}) ⇒ Types::GetDataProtectionPolicyResponse
Returns information about a log group data protection policy.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3076

def get_data_protection_policy(params = {}, options = {})
  req = build_request(:get_data_protection_policy, params)
  req.send_request(options)
end
#get_delivery(params = {}) ⇒ Types::GetDeliveryResponse
Returns complete information about one logical delivery. A delivery is a connection between a [ *delivery source* ][1] and a [ *delivery destination* ][2].
A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in [Enable logging from Amazon Web Services services.]
You need to specify the delivery `id` in this operation. You can find the IDs of the deliveries in your account with the [DescribeDeliveries][4] operation.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html [4]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeDeliveries.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3134

def get_delivery(params = {}, options = {})
  req = build_request(:get_delivery, params)
  req.send_request(options)
end
#get_delivery_destination(params = {}) ⇒ Types::GetDeliveryDestinationResponse
Retrieves complete information about one delivery destination.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3168

def get_delivery_destination(params = {}, options = {})
  req = build_request(:get_delivery_destination, params)
  req.send_request(options)
end
#get_delivery_destination_policy(params = {}) ⇒ Types::GetDeliveryDestinationPolicyResponse
Retrieves the delivery destination policy assigned to the delivery destination that you specify. For more information about delivery destinations and their policies, see [PutDeliveryDestinationPolicy].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3204

def get_delivery_destination_policy(params = {}, options = {})
  req = build_request(:get_delivery_destination_policy, params)
  req.send_request(options)
end
#get_delivery_source(params = {}) ⇒ Types::GetDeliverySourceResponse
Retrieves complete information about one delivery source.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3239

def get_delivery_source(params = {}, options = {})
  req = build_request(:get_delivery_source, params)
  req.send_request(options)
end
#get_integration(params = {}) ⇒ Types::GetIntegrationResponse
Returns information about one integration between CloudWatch Logs and OpenSearch Service.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3305

def get_integration(params = {}, options = {})
  req = build_request(:get_integration, params)
  req.send_request(options)
end
#get_log_anomaly_detector(params = {}) ⇒ Types::GetLogAnomalyDetectorResponse
Retrieves information about the log anomaly detector that you specify.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3357

def get_log_anomaly_detector(params = {}, options = {})
  req = build_request(:get_log_anomaly_detector, params)
  req.send_request(options)
end
#get_log_events(params = {}) ⇒ Types::GetLogEventsResponse
Lists log events from the specified log stream. You can list all of the log events or filter using a time range.
`GetLogEvents` is a paginated operation. Each page returned can contain up to 1 MB of log events or up to 10,000 log events. A returned page might only be partially full, or even empty. For example, if the result of a query would return 15,000 log events, the first page isn’t guaranteed to have 10,000 log events even if they all fit into 1 MB.
Partially full or empty pages don’t necessarily mean that pagination is finished. As long as the `nextBackwardToken` or `nextForwardToken` returned is NOT equal to the `nextToken` that you passed into the API call, there might be more log events available. The token that you use depends on the direction you want to move in along the log stream. The returned tokens are never null.
<note markdown="1"> If you set `startFromHead` to `true` and you don’t include `endTime` in your request, you can end up in a situation where the pagination doesn’t terminate. This can happen when the new log events are being added to the target log streams faster than they are being read. This situation is a good use case for the CloudWatch Logs [Live Tail] feature.
</note>
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see [CloudWatch cross-account observability].
You can specify the log group to search by using either `logGroupIdentifier` or `logGroupName`. You must include one of these two parameters, but you can’t include both.
<note markdown="1"> If you are using [log transformation], the `GetLogEvents` operation returns only the original versions of log events, before they were transformed. To view the transformed versions, you must use a [CloudWatch Logs query].
</note>
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs_LiveTail.html [2]: docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation.html [4]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
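As a sketch of the token handling described above, the loop below pages forward and stops when the forward token stops changing; the log group name, stream name, and region are placeholder assumptions.
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")
token = nil
loop do
  resp = logs.get_log_events(
    log_group_name: "my-log-group",      # placeholder
    log_stream_name: "my-log-stream",    # placeholder
    start_from_head: true,
    next_token: token
  )
  resp.events.each { |event| puts "#{event.timestamp} #{event.message}" }
  break if resp.next_forward_token == token   # unchanged token: no more events for now
  token = resp.next_forward_token
end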
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3503

def get_log_events(params = {}, options = {})
  req = build_request(:get_log_events, params)
  req.send_request(options)
end
#get_log_group_fields(params = {}) ⇒ Types::GetLogGroupFieldsResponse
Returns a list of the fields that are included in log events in the specified log group. Includes the percentage of log events that contain each field. The search is limited to a time period that you specify.
You can specify the log group to search by using either `logGroupIdentifier` or `logGroupName`. You must specify one of these parameters, but you can’t specify both.
In the results, fields that start with `@` are fields generated by CloudWatch Logs. For example, `@timestamp` is the timestamp of each log event. For more information about the fields that are generated by CloudWatch Logs, see [Supported Logs and Discovered Fields].
The response results are sorted by the frequency percentage, starting with the highest percentage.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see [CloudWatch cross-account observability].
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData-discoverable-fields.html [2]: docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3584

def get_log_group_fields(params = {}, options = {})
  req = build_request(:get_log_group_fields, params)
  req.send_request(options)
end
#get_log_record(params = {}) ⇒ Types::GetLogRecordResponse
Retrieves all of the fields and values of a single log event. All fields are retrieved, even if the original query that produced the `logRecordPointer` retrieved only a subset of fields. Fields are returned as field name/field value pairs.
The full unparsed log event is returned within `@message`.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3630

def get_log_record(params = {}, options = {})
  req = build_request(:get_log_record, params)
  req.send_request(options)
end
#get_query_results(params = {}) ⇒ Types::GetQueryResultsResponse
Returns the results from the specified query.
Only the fields requested in the query are returned, along with a `@ptr` field, which is the identifier for the log record. You can use the value of `@ptr` in a [GetLogRecord] operation to get the full log record.
`GetQueryResults` does not start running a query. To run a query, use [StartQuery]. For more information about how long results of previous queries are available, see [CloudWatch Logs quotas].
If the value of the `Status` field in the output is `Running`, this operation returns only partial results. If you see a value of `Scheduled` or `Running` for the status, you can retry the operation later to see the final results.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start queries in linked source accounts. For more information, see [CloudWatch cross-account observability].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogRecord.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_StartQuery.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html [4]: docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html
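A short polling sketch based on the `Status` guidance above; `query_id` is assumed to come from an earlier `start_query` call, and the region is a placeholder.
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")
resp = nil
loop do
  resp = logs.get_query_results(query_id: query_id)
  break unless %w[Scheduled Running].include?(resp.status)
  sleep 1   # results are partial until the status leaves Scheduled/Running
end
resp.results.each do |row|
  puts row.map { |field| "#{field.field}=#{field.value}" }.join(" ")
end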
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3700

def get_query_results(params = {}, options = {})
  req = build_request(:get_query_results, params)
  req.send_request(options)
end
#get_transformer(params = {}) ⇒ Types::GetTransformerResponse
Returns the information about the log transformer associated with this log group.
This operation returns data only for transformers created at the log group level. To get information for an account-level transformer, use [DescribeAccountPolicies].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeAccountPolicies.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3814

def get_transformer(params = {}, options = {})
  req = build_request(:get_transformer, params)
  req.send_request(options)
end
#list_anomalies(params = {}) ⇒ Types::ListAnomaliesResponse
Returns a list of anomalies that log anomaly detectors have found. For details about the structure format of each anomaly object that is returned, see the example in this section.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3893

def list_anomalies(params = {}, options = {})
  req = build_request(:list_anomalies, params)
  req.send_request(options)
end
#list_integrations(params = {}) ⇒ Types::ListIntegrationsResponse
Returns a list of integrations between CloudWatch Logs and other services in this account. Currently, only one integration can be created in an account, and this integration must be with OpenSearch Service.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3938

def list_integrations(params = {}, options = {})
  req = build_request(:list_integrations, params)
  req.send_request(options)
end
#list_log_anomaly_detectors(params = {}) ⇒ Types::ListLogAnomalyDetectorsResponse
Retrieves a list of the log anomaly detectors in the account.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 3992

def list_log_anomaly_detectors(params = {}, options = {})
  req = build_request(:list_log_anomaly_detectors, params)
  req.send_request(options)
end
#list_log_groups_for_query(params = {}) ⇒ Types::ListLogGroupsForQueryResponse
Returns a list of the log groups that were analyzed during a single CloudWatch Logs Insights query. This can be useful for queries that use log group name prefixes or the `filterIndex` command, because the log groups are dynamically selected in these cases.
For more information about field indexes, see [Create field indexes to improve query performance and reduce costs].
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Field-Indexing.html
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 4049

def list_log_groups_for_query(params = {}, options = {})
  req = build_request(:list_log_groups_for_query, params)
  req.send_request(options)
end
#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse
Displays the tags associated with a CloudWatch Logs resource. Currently, log groups and destinations support tagging.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 4092

def list_tags_for_resource(params = {}, options = {})
  req = build_request(:list_tags_for_resource, params)
  req.send_request(options)
end
#list_tags_log_group(params = {}) ⇒ Types::ListTagsLogGroupResponse
The ListTagsLogGroup operation is on the path to deprecation. We recommend that you use [ListTagsForResource] instead.
Lists the tags for the specified log group.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_ListTagsForResource.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 4128

def list_tags_log_group(params = {}, options = {})
  req = build_request(:list_tags_log_group, params)
  req.send_request(options)
end
#put_account_policy(params = {}) ⇒ Types::PutAccountPolicyResponse
Creates an account-level data protection policy, subscription filter policy, or field index policy that applies to all log groups or a subset of log groups in the account.
To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are creating.
-
To create a data protection policy, you must have the `logs:PutDataProtectionPolicy` and `logs:PutAccountPolicy` permissions.
-
To create a subscription filter policy, you must have the `logs:PutSubscriptionFilter` and `logs:PutAccountPolicy` permissions.
-
To create a transformer policy, you must have the `logs:PutTransformer` and `logs:PutAccountPolicy` permissions.
-
To create a field index policy, you must have the `logs:PutIndexPolicy` and `logs:PutAccountPolicy` permissions.
**Data protection policy**
A data protection policy can help safeguard sensitive data that’s ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy.
Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked.
If you use `PutAccountPolicy` to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the `logs:Unmask` permission can use a [GetLogEvents] or [FilterLogEvents] operation with the `unmask` parameter set to `true` to view the unmasked log events. Users with the `logs:Unmask` permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the `unmask` query command.
For more information, including a list of types of data that can be audited and masked, see [Protect sensitive log data with masking].
To use the `PutAccountPolicy` operation for a data protection policy, you must be signed on with the `logs:PutDataProtectionPolicy` and `logs:PutAccountPolicy` permissions.
The `PutAccountPolicy` operation applies to all log groups in the account. You can use [PutDataProtectionPolicy] to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
**Subscription filter policy**
A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
The following destinations are supported for subscription filters:
-
A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.
-
A Firehose data stream in the same account as the subscription policy, for same-account delivery.
-
A Lambda function in the same account as the subscription policy, for same-account delivery.
-
A logical destination in a different account created with [PutDestination], for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
Each account can have one account-level subscription filter policy per Region. If you are updating an existing filter, you must specify the correct name in `PolicyName`. To perform a `PutAccountPolicy` subscription filter operation for any destination except a Lambda function, you must also have the `iam:PassRole` permission.
**Transformer policy**
Creates or updates a *log transformer policy* for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see [ Processors that you can use].
Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.
You can create transformers only for the log groups in the Standard log class.
You can have one account-level transformer policy that applies to all log groups in the account. Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the `selectionCriteria` parameter. If you have multiple account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with `my-log`, you can’t have another transformer policy filtered to `my-logpprod` or `my-logging`.
You can also set up a transformer at the log-group level. For more information, see [PutTransformer]. If there is both a log-group level transformer created with `PutTransformer` and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer.
**Field index policy**
You can use field index policies to create indexes on fields found in log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, or instance IDs. For more information, see [Create field indexes to improve query performance and reduce costs].
To find the fields that are in your log group events, use the [GetLogGroupFields] operation.
For example, suppose you have created a field index for `requestId`. Then, any CloudWatch Logs Insights query on that log group that includes `requestId = value` or `requestId in [value, value, …]` will attempt to process only the log events where the indexed field matches the specified value.
Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field of `RequestId` won’t match a log event containing `requestId`.
You can have one account-level field index policy that applies to all log groups in the account. Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with the `selectionCriteria` parameter. If you have multiple account-level index policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with `my-log`, you can’t have another field index policy filtered to `my-logpprod` or `my-logging`.
If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts.
If you want to create a field index policy for a single log group, you can use [PutIndexPolicy] instead of `PutAccountPolicy`. If you do so, that log group will use only that log-group level policy, and will ignore the account-level policy that you create with [PutAccountPolicy].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogEvents.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_FilterLogEvents.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/mask-sensitive-log-data.html [4]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDataProtectionPolicy.html [5]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDestination.html [6]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation.html#CloudWatch-Logs-Transformation-Processors [7]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutTransformer.html [8]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Field-Indexing.html [9]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogGroupFields.html [10]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutIndexPolicy.html [11]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutAccountPolicy.html
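To make the request shape concrete, here is a hedged sketch of one variant, an account-level field index policy; the `{"Fields": [...]}` document layout, the `scope` value, and the policy name are assumptions based on the field-indexing guide linked above, not a verbatim recipe.
require "json"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")
logs.put_account_policy(
  policy_name: "account-field-index-policy",                            # hypothetical name
  policy_type: "FIELD_INDEX_POLICY",
  scope: "ALL",                                                         # assumed: apply to all log groups
  policy_document: { "Fields" => ["requestId", "sessionId"] }.to_json   # assumed document shape
)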
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 4504

def put_account_policy(params = {}, options = {})
  req = build_request(:put_account_policy, params)
  req.send_request(options)
end
#put_data_protection_policy(params = {}) ⇒ Types::PutDataProtectionPolicyResponse
Creates a data protection policy for the specified log group. A data protection policy can help safeguard sensitive data that’s ingested by the log group by auditing and masking the sensitive log data.
Sensitive data is detected and masked when it is ingested into the log group. When you set a data protection policy, log events ingested into the log group before that time are not masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the `logs:Unmask` permission can use a [GetLogEvents] or [FilterLogEvents] operation with the `unmask` parameter set to `true` to view the unmasked log events. Users with the `logs:Unmask` permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the `unmask` query command.
For more information, including a list of types of data that can be audited and masked, see [Protect sensitive log data with masking].
The `PutDataProtectionPolicy` operation applies to only the specified log group. You can also use [PutAccountPolicy] to create an account-level data protection policy that applies to all log groups in the account, including both existing log groups and log groups that are created later. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogEvents.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_FilterLogEvents.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/mask-sensitive-log-data.html [4]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutAccountPolicy.html
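A hedged sketch of attaching a log-group-level policy; the policy document below mirrors the audit-and-mask pattern from the masking guide linked above, and the data identifier ARN, policy names, and log group name are illustrative assumptions.
require "json"

policy = {
  "Name" => "data-protection-policy",      # hypothetical
  "Version" => "2021-06-01",
  "Statement" => [
    { "Sid" => "audit",
      "DataIdentifier" => ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation" => { "Audit" => { "FindingsDestination" => {} } } },
    { "Sid" => "redact",
      "DataIdentifier" => ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation" => { "Deidentify" => { "MaskConfig" => {} } } }
  ]
}

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")
logs.put_data_protection_policy(
  log_group_identifier: "my-log-group",    # placeholder
  policy_document: policy.to_json
)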
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 4613

def put_data_protection_policy(params = {}, options = {})
  req = build_request(:put_data_protection_policy, params)
  req.send_request(options)
end
#put_delivery_destination(params = {}) ⇒ Types::PutDeliveryDestinationResponse
Creates or updates a logical *delivery destination*. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported as logs delivery destinations.
To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
-
Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see [PutDeliverySource].
-
Use `PutDeliveryDestination` to create a *delivery destination* in the same account as the actual delivery destination. The delivery destination that you create is a logical object that represents the actual delivery destination.
-
If you are delivering logs cross-account, you must use [PutDeliveryDestinationPolicy] in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
-
Use `CreateDelivery` to create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see [CreateDelivery].
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
Only some Amazon Web Services services support being configured as a delivery source. These services are listed as **Supported [V2 Permissions]** in the table at [Enabling logging from Amazon Web Services services.]
If you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten with the new parameter values that you specify.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html [3]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html [4]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html
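To tie the steps above together, a hedged sketch of a same-account delivery to S3; the names, ARNs, and `log_type` value are placeholders, and the parameter shapes are assumptions based on the operations referenced above.
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# 1. Delivery source: the resource that actually emits the logs.
logs.put_delivery_source(
  name: "my-delivery-source",
  resource_arn: "arn:aws:...",          # placeholder ARN of the emitting resource
  log_type: "APPLICATION_LOGS"          # placeholder log type
)

# 2. Delivery destination: a logical wrapper around the real destination (an S3 bucket here).
destination = logs.put_delivery_destination(
  name: "my-delivery-destination",
  delivery_destination_configuration: {
    destination_resource_arn: "arn:aws:s3:::my-log-bucket"   # placeholder
  }
).delivery_destination

# 3. Pair exactly one source with one destination (PutDeliveryDestinationPolicy is only
#    needed for the cross-account case described above).
logs.create_delivery(
  delivery_source_name: "my-delivery-source",
  delivery_destination_arn: destination.arn
)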
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 4717

def put_delivery_destination(params = {}, options = {})
  req = build_request(:put_delivery_destination, params)
  req.send_request(options)
end
#put_delivery_destination_policy(params = {}) ⇒ Types::PutDeliveryDestinationPolicyResponse
Creates and assigns an IAM policy that grants permissions to CloudWatch Logs to deliver logs cross-account to a specified destination in this account. To configure the delivery of logs from an Amazon Web Services service in another account to a logs delivery destination in the current account, you must do the following:
-
Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see [PutDeliverySource].
-
Create a *delivery destination*, which is a logical object that represents the actual delivery destination. For more information, see [PutDeliveryDestination].
-
Use this operation in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
-
Create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see [CreateDelivery].
Only some Amazon Web Services services support being configured as a delivery source. These services are listed as **Supported [V2 Permissions]** in the table at [Enabling logging from Amazon Web Services services.]
The contents of the policy must include two statements. One statement enables general logs delivery, and the other allows delivery to the chosen destination. See the examples for the needed policies.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html [3]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html [4]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 4784

def put_delivery_destination_policy(params = {}, options = {})
  req = build_request(:put_delivery_destination_policy, params)
  req.send_request(options)
end
#put_delivery_source(params = {}) ⇒ Types::PutDeliverySourceResponse
Creates or updates a logical *delivery source*. A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose.
To configure logs delivery between a delivery destination and an Amazon Web Services service that is supported as a delivery source, you must do the following:
-
Use `PutDeliverySource` to create a delivery source, which is a logical object that represents the resource that is actually sending the logs.
-
Use `PutDeliveryDestination` to create a *delivery destination*, which is a logical object that represents the actual delivery destination. For more information, see [PutDeliveryDestination].
-
If you are delivering logs cross-account, you must use [PutDeliveryDestinationPolicy] in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
-
Use `CreateDelivery` to create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see [CreateDelivery].
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
Only some Amazon Web Services services support being configured as a delivery source. These services are listed as **Supported [V2 Permissions]** in the table at [Enabling logging from Amazon Web Services services.]
If you use this operation to update an existing delivery source, all the current delivery source parameters are overwritten with the new parameter values that you specify.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html [3]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html [4]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 4912

def put_delivery_source(params = {}, options = {})
  req = build_request(:put_delivery_source, params)
  req.send_request(options)
end
#put_destination(params = {}) ⇒ Types::PutDestinationResponse
Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions.
A destination encapsulates a physical resource (such as an Amazon Kinesis stream). With a destination, you can subscribe to a real-time stream of log events for a different account, ingested using [PutLogEvents].
Through an access policy, a destination controls what is written to it. By default, `PutDestination` does not set any access policy with the destination, which means a cross-account user cannot call [PutSubscriptionFilter] against this destination. To enable this, the destination owner must call [PutDestinationPolicy] after `PutDestination`.
To perform a `PutDestination` operation, you must also have the `iam:PassRole` permission.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutSubscriptionFilter.html [3]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDestinationPolicy.html
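A brief sketch of the two calls described above; the stream and role ARNs, the account IDs, and the policy statement are placeholders.
require "json"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# Create the destination that wraps a Kinesis stream (requires iam:PassRole).
destination = logs.put_destination(
  destination_name: "shared-destination",
  target_arn: "arn:aws:kinesis:us-east-1:111111111111:stream/my-stream",   # placeholder
  role_arn: "arn:aws:iam::111111111111:role/CWLtoKinesisRole"              # placeholder
).destination

# Allow another account to register subscription filters against it.
access_policy = {
  "Version" => "2012-10-17",
  "Statement" => [{
    "Effect" => "Allow",
    "Principal" => { "AWS" => "222222222222" },   # placeholder sender account
    "Action" => "logs:PutSubscriptionFilter",
    "Resource" => destination.arn
  }]
}.to_json

logs.put_destination_policy(destination_name: "shared-destination", access_policy: access_policy)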
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 4990

def put_destination(params = {}, options = {})
  req = build_request(:put_destination, params)
  req.send_request(options)
end
#put_destination_policy(params = {}) ⇒ Struct
Creates or updates an access policy associated with an existing destination. An access policy is an [IAM policy document] that is used to authorize claims to register a subscription filter against a given destination.
[1]: docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5043

def put_destination_policy(params = {}, options = {})
  req = build_request(:put_destination_policy, params)
  req.send_request(options)
end
#put_index_policy(params = {}) ⇒ Types::PutIndexPolicyResponse
Creates or updates a *field index policy* for the specified log group. Only log groups in the Standard log class support field index policies. For more information about log classes, see [Log classes].
You can use field index policies to create *field indexes* on fields found in log events in the log group. Creating field indexes speeds up and lowers the costs for CloudWatch Logs Insights queries that reference those field indexes, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, and instance IDs. For more information, see [Create field indexes to improve query performance and reduce costs].
To find the fields that are in your log group events, use the [GetLogGroupFields] operation.
For example, suppose you have created a field index for `requestId`. Then, any CloudWatch Logs Insights query on that log group that includes `requestId = value` or `requestId IN [value, value, …]` will process fewer log events to reduce costs, and have improved performance.
Each index policy has the following quotas and restrictions:
-
As many as 20 fields can be included in the policy.
-
Each field name can include as many as 100 characters.
Matches of log events to the names of indexed fields are case-sensitive. For example, a field index of `RequestId` won’t match a log event containing `requestId`.
Log group-level field index policies created with `PutIndexPolicy` override account-level field index policies created with [PutAccountPolicy]. If you use `PutIndexPolicy` to create a field index policy for a log group, that log group uses only that policy. The log group ignores any account-wide field index policy that you might have created.
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch_Logs_Log_Classes.html [2]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Field-Indexing.html [3]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogGroupFields.html [4]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutAccountPolicy.html
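A minimal hedged sketch; the log group name is a placeholder and the `{"Fields": [...]}` document shape is an assumption based on the field-indexing guide linked above.
require "json"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")
logs.put_index_policy(
  log_group_identifier: "my-log-group",                    # placeholder
  policy_document: { "Fields" => ["requestId"] }.to_json   # assumed document shape
)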
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5141

def put_index_policy(params = {}, options = {})
  req = build_request(:put_index_policy, params)
  req.send_request(options)
end
#put_integration(params = {}) ⇒ Types::PutIntegrationResponse
Creates an integration between CloudWatch Logs and another service in this account. Currently, only integrations with OpenSearch Service are supported, and currently you can have only one integration in your account.
Integrating with OpenSearch Service makes it possible for you to create curated vended logs dashboards, powered by OpenSearch Service analytics. For more information, see [Vended log dashboards powered by Amazon OpenSearch Service].
You can use this operation only to create a new integration. You can’t modify an existing integration.
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-OpenSearch-Dashboards.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5204

def put_integration(params = {}, options = {})
  req = build_request(:put_integration, params)
  req.send_request(options)
end
#put_log_events(params = {}) ⇒ Types::PutLogEventsResponse
Uploads a batch of log events to the specified log stream.
The sequence token is now ignored in `PutLogEvents` actions. `PutLogEvents` actions are always accepted and never return `InvalidSequenceTokenException` or `DataAlreadyAcceptedException` even if the sequence token is not valid. You can use parallel `PutLogEvents` actions on the same log stream.
The batch of events must satisfy the following constraints:
-
The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
-
None of the log events in the batch can be more than 2 hours in the future.
-
None of the log events in the batch can be more than 14 days in the past. Also, none of the log events can be from earlier than the retention period of the log group.
-
The log events in the batch must be in chronological order by their timestamp. The timestamp is the time that the event occurred, expressed as the number of milliseconds after `Jan 1, 1970 00:00:00 UTC`. (In Amazon Web Services Tools for PowerShell and the Amazon Web Services SDK for .NET, the timestamp is specified in .NET format: `yyyy-mm-ddThh:mm:ss`. For example, `2017-09-15T13:45:30`.)
-
A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.
-
Each log event can be no larger than 256 KB.
-
The maximum number of log events in a batch is 10,000.
-
The quota of five requests per second per log stream has been removed. Instead, `PutLogEvents` actions are throttled based on a per-second per-account quota. You can request an increase to the per-second throttling quota by using the Service Quotas service.
If a call to `PutLogEvents` returns "UnrecognizedClientException", the most likely cause is a non-valid Amazon Web Services access key ID or secret key.
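A minimal sketch that respects the constraints above (millisecond timestamps, chronological order); the log group and stream names are placeholders and are assumed to already exist.
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

now_ms = (Time.now.to_f * 1000).to_i    # timestamps are milliseconds since Jan 1, 1970 UTC
logs.put_log_events(
  log_group_name: "my-log-group",       # placeholder
  log_stream_name: "my-log-stream",     # placeholder
  log_events: [
    { timestamp: now_ms,     message: "first event" },
    { timestamp: now_ms + 1, message: "second event" }   # kept in chronological order
  ]
)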
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5314

def put_log_events(params = {}, options = {})
  req = build_request(:put_log_events, params)
  req.send_request(options)
end
#put_metric_filter(params = {}) ⇒ Struct
Creates or updates a metric filter and associates it with the specified log group. With metric filters, you can configure rules to extract metric data from log events ingested through [PutLogEvents].
The maximum number of metric filters that can be associated with a log group is 100.
Using regular expressions in filter patterns is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in filter patterns, see [ Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail].
When you create a metric filter, you can also optionally assign a unit and dimensions to the metric that is created.
Metrics extracted from log events are charged as custom metrics. To prevent unexpected high charges, do not specify high-cardinality fields such as `IPAddress` or `requestID` as dimensions. Each different value found for a dimension is treated as a separate metric and accrues charges as a separate custom metric.
CloudWatch Logs might disable a metric filter if it generates 1,000 different name/value pairs for your specified dimensions within one hour.
You can also set up a billing alarm to alert you if your charges are higher than expected. For more information, see [Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html [2]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html
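Following the guidance above, a sketch that counts error lines without high-cardinality dimensions; the log group name and namespace are placeholders.
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")
logs.put_metric_filter(
  log_group_name: "my-log-group",      # placeholder
  filter_name: "ErrorCount",
  filter_pattern: "ERROR",
  metric_transformations: [{
    metric_name: "ErrorCount",
    metric_namespace: "MyApp",         # placeholder namespace
    metric_value: "1",
    default_value: 0
  }]
)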
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5412

def put_metric_filter(params = {}, options = {})
  req = build_request(:put_metric_filter, params)
  req.send_request(options)
end
#put_query_definition(params = {}) ⇒ Types::PutQueryDefinitionResponse
Creates or updates a query definition for CloudWatch Logs Insights. For more information, see [Analyzing Log Data with CloudWatch Logs Insights].
To update a query definition, specify its `queryDefinitionId` in your request. The values of `name`, `queryString`, and `logGroupNames` are changed to the values that you specify in your update operation. No current values are retained from the current query definition. For example, imagine updating a current query definition that includes log groups. If you don’t specify the `logGroupNames` parameter in your update operation, the query definition changes to contain no log groups.
You must have the `logs:PutQueryDefinition` permission to be able to perform this operation.
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5520

def put_query_definition(params = {}, options = {})
  req = build_request(:put_query_definition, params)
  req.send_request(options)
end
#put_resource_policy(params = {}) ⇒ Types::PutResourcePolicyResponse
Creates or updates a resource policy allowing other Amazon Web Services services to put log events to this account, such as Amazon Route 53. An account can have up to 10 resource policies per Amazon Web Services Region.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5586

def put_resource_policy(params = {}, options = {})
  req = build_request(:put_resource_policy, params)
  req.send_request(options)
end
#put_retention_policy(params = {}) ⇒ Struct
Sets the retention of the specified log group. With a retention policy, you can configure the number of days for which to retain log events in the specified log group.
<note markdown="1"> CloudWatch Logs doesn’t immediately delete log events when they reach their retention setting. It typically takes up to 72 hours after that before log events are deleted, but in rare situations might take longer.
To illustrate, imagine that you change a log group to have a longer retention setting when it contains log events that are past the expiration date, but haven’t been deleted. Those log events will take up to 72 hours to be deleted after the new retention date is reached. To make sure that log data is deleted permanently, keep a log group at its lower retention setting until 72 hours after the previous retention period ends. Alternatively, wait to change the retention setting until you confirm that the earlier log events are deleted.
When log events reach their retention setting they are marked for deletion. After they are marked for deletion, they do not add to your archival storage costs anymore, even if they are not actually deleted until later. These log events marked for deletion are also not included when you use an API to retrieve the `storedBytes` value to see how many bytes a log group is storing.
</note>
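A one-call sketch; the log group name is a placeholder and `retention_in_days` must be one of the values the API accepts (30 is used here).
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# Events older than 30 days are then marked for deletion as described above.
logs.put_retention_policy(log_group_name: "my-log-group", retention_in_days: 30)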
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5646

def put_retention_policy(params = {}, options = {})
  req = build_request(:put_retention_policy, params)
  req.send_request(options)
end
#put_subscription_filter(params = {}) ⇒ Struct
Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through [PutLogEvents] and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
The following destinations are supported for subscription filters:
-
An Amazon Kinesis data stream belonging to the same account as the subscription filter, for same-account delivery.
-
A logical destination created with [PutDestination] that belongs to a different account, for cross-account delivery. We currently support Kinesis Data Streams and Firehose as logical destinations.
-
An Amazon Kinesis Data Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery.
-
A Lambda function that belongs to the same account as the subscription filter, for same-account delivery.
Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in `filterName`.
Using regular expressions in filter patterns is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in filter patterns, see [Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail].
To perform a `PutSubscriptionFilter` operation for any destination except a Lambda function, you must also have the `iam:PassRole` permission.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDestination.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html
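A hedged sketch of a same-account Kinesis subscription; the ARNs and names are placeholders, and the role is assumed to allow CloudWatch Logs to write to the stream.
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")
logs.put_subscription_filter(
  log_group_name: "my-log-group",                                               # placeholder
  filter_name: "to-kinesis",
  filter_pattern: "",                                                           # empty pattern matches all events
  destination_arn: "arn:aws:kinesis:us-east-1:111111111111:stream/my-stream",   # placeholder
  role_arn: "arn:aws:iam::111111111111:role/CWLtoKinesisRole"                   # placeholder; needs iam:PassRole
)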
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5780

def put_subscription_filter(params = {}, options = {})
  req = build_request(:put_subscription_filter, params)
  req.send_request(options)
end
#put_transformer(params = {}) ⇒ Struct
Creates or updates a *log transformer* for a single log group. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information.
After you have created a transformer, CloudWatch Logs performs the transformations at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. The processors work one after another, in the order that you list them, like a pipeline. For more information about the available processors to use in a transformer, see [ Processors that you can use].
Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.
You can create transformers only for the log groups in the Standard log class.
You can also set up a transformer at the account level. For more information, see [PutAccountPolicy]. If there is both a log-group level transformer created with `PutTransformer` and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer.
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation.html#CloudWatch-Logs-Transformation-Processors [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutAccountPolicy.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 5985

def put_transformer(params = {}, options = {})
  req = build_request(:put_transformer, params)
  req.send_request(options)
end
#start_live_tail(params = {}) ⇒ Types::StartLiveTailResponse
Starts a Live Tail streaming session for one or more log groups. A Live Tail session returns a stream of log events that have been recently ingested in the log groups. For more information, see [Use Live Tail to view logs in near real time].
The response to this operation is a response stream, over which the server sends live log events and the client receives them.
The following objects are sent over the stream:
-
A single [LiveTailSessionStart] object is sent at the start of the session.
-
Every second, a [LiveTailSessionUpdate] object is sent. Each of these objects contains an array of the actual log events.
If no new log events were ingested in the past second, the `LiveTailSessionUpdate` object will contain an empty array.
The array of log events contained in a `LiveTailSessionUpdate` can include as many as 500 log events. If the number of log events matching the request exceeds 500 per second, the log events are sampled down to 500 log events to be included in each `LiveTailSessionUpdate` object.
If your client consumes the log events slower than the server produces them, CloudWatch Logs buffers up to 10 `LiveTailSessionUpdate` events or 5000 log events, after which it starts dropping the oldest events.
-
A [SessionStreamingException] object is returned if an unknown error occurs on the server side.
-
A [SessionTimeoutException] object is returned when the session times out, after it has been kept open for three hours.
You can end a session before it times out by closing the session stream or by closing the client that is receiving the stream. The session also ends if the established connection between the client and the server breaks.
For examples of using an SDK to start a Live Tail session, see [ Start a Live Tail session using an Amazon Web Services SDK].
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs_LiveTail.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_LiveTailSessionStart.html [3]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_LiveTailSessionUpdate.html [4]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_StartLiveTailResponseStream.html#CWL-Type-StartLiveTailResponseStream-SessionStreamingException [5]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_StartLiveTailResponseStream.html#CWL-Type-StartLiveTailResponseStream-SessionTimeoutException [6]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/example_cloudwatch-logs_StartLiveTail_section.html
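A hedged sketch of consuming the stream with an event handler; the `on_*_event` callback names follow the SDK's generated event-stream convention and, like the log group ARN, should be treated as assumptions.
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

handler = Aws::CloudWatchLogs::EventStreams::StartLiveTailResponseStream.new
handler.on_session_start_event { |event| puts "Live Tail session started" }
handler.on_session_update_event do |event|
  event.session_results.each { |log_event| puts log_event.message }   # may be empty in quiet seconds
end

logs.start_live_tail(
  log_group_identifiers: ["arn:aws:logs:us-east-1:111111111111:log-group:my-log-group"],  # placeholder
  event_stream_handler: handler
)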
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6254

def start_live_tail(params = {}, options = {})
  params = params.dup
  event_stream_handler = case handler = params.delete(:event_stream_handler)
    when EventStreams::StartLiveTailResponseStream then handler
    when Proc then EventStreams::StartLiveTailResponseStream.new.tap(&handler)
    when nil then EventStreams::StartLiveTailResponseStream.new
    else
      msg = "expected :event_stream_handler to be a block or "\
            "instance of Aws::CloudWatchLogs::EventStreams::StartLiveTailResponseStream"\
            ", got `#{handler.inspect}` instead"
      raise ArgumentError, msg
    end

  yield(event_stream_handler) if block_given?

  req = build_request(:start_live_tail, params)

  req.context[:event_stream_handler] = event_stream_handler
  req.handlers.add(Aws::Binary::DecodeHandler, priority: 95)

  req.send_request(options)
end
#start_query(params = {}) ⇒ Types::StartQueryResponse
Starts a query of one or more log groups using CloudWatch Logs Insights. You specify the log groups and time range to query and the query string to use.
For more information, see [CloudWatch Logs Insights Query Syntax].
After you run a query using `StartQuery`, the query results are stored by CloudWatch Logs. You can use [GetQueryResults] to retrieve the results of a query, using the `queryId` that `StartQuery` returns.
<note markdown="1"> To specify the log groups to query, a `StartQuery` operation must include one of the following:
-
Either exactly one of the following parameters: `logGroupName`, `logGroupNames`, or `logGroupIdentifiers`
-
Or the `queryString` must include a `SOURCE` command to select log groups for the query. The `SOURCE` command can select log groups based on log group name prefix, account ID, and log class.
For more information about the `SOURCE` command, see [SOURCE].
</note>
If you have associated a KMS key with the query results in this account, then [StartQuery] uses that key to encrypt the results when it stores them. If no key is associated with query results, the query results are encrypted with the default CloudWatch Logs encryption method.
Queries time out after 60 minutes of runtime. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start a query in a linked source account. For more information, see [CloudWatch cross-account observability]. For a cross-account `StartQuery` operation, the query definition must be defined in the monitoring account.
You can have up to 30 concurrent CloudWatch Logs Insights queries, including queries that have been added to dashboards.
[1]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetQueryResults.html [3]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax-Source.html [4]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_StartQuery.html [5]: docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html
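A sketch of submitting a query over the last hour; the log group name and query string are placeholders, and the returned `query_id` is what you pass to `get_query_results` as described above.
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")
resp = logs.start_query(
  log_group_names: ["my-log-group"],    # placeholder; alternatively use a SOURCE command in the query string
  start_time: (Time.now - 3600).to_i,   # epoch seconds
  end_time: Time.now.to_i,
  query_string: "fields @timestamp, @message | sort @timestamp desc | limit 20"
)
query_id = resp.query_id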
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6427

def start_query(params = {}, options = {})
  req = build_request(:start_query, params)
  req.send_request(options)
end
#stop_query(params = {}) ⇒ Types::StopQueryResponse
Stops a CloudWatch Logs Insights query that is in progress. If the query has already ended, the operation returns an error indicating that the specified query is not running.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6458

def stop_query(params = {}, options = {})
  req = build_request(:stop_query, params)
  req.send_request(options)
end
#tag_log_group(params = {}) ⇒ Struct
The TagLogGroup operation is on the path to deprecation. We recommend that you use [TagResource] instead.
Adds or updates the specified tags for the specified log group.
To list the tags for a log group, use [ListTagsForResource]. To remove tags, use [UntagResource].
For more information about tags, see [Tag Log Groups in Amazon CloudWatch Logs] in the *Amazon CloudWatch Logs User Guide*.
CloudWatch Logs doesn’t support IAM policies that prevent users from assigning specified tags to log groups using the `aws:Resource/key-name` or `aws:TagKeys` condition keys. For more information about using tags to control access, see [Controlling access to Amazon Web Services resources using tags].
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_TagResource.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_ListTagsForResource.html [3]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_UntagResource.html [4]: docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#log-group-tagging [5]: docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6509

def tag_log_group(params = {}, options = {})
  req = build_request(:tag_log_group, params)
  req.send_request(options)
end
#tag_resource(params = {}) ⇒ Struct
Assigns one or more tags (key-value pairs) to the specified CloudWatch Logs resource. Currently, the only CloudWatch Logs resources that can be tagged are log groups and destinations.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values.
Tags don’t have any semantic meaning to Amazon Web Services and are interpreted strictly as strings of characters.
You can use the `TagResource` action with a resource that already has tags. If you specify a new tag key for the resource, this tag is appended to the list of tags associated with the resource. If you specify a tag key that is already associated with the resource, the new tag value that you specify replaces the previous value for that tag.
You can associate as many as 50 tags with a CloudWatch Logs resource.
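A minimal sketch; the log group ARN and tag values are placeholders.
logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")
logs.tag_resource(
  resource_arn: "arn:aws:logs:us-east-1:111111111111:log-group:my-log-group",  # placeholder
  tags: { "Team" => "platform", "Env" => "prod" }
)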
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6567

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end
#test_metric_filter(params = {}) ⇒ Types::TestMetricFilterResponse
Tests the filter pattern of a metric filter against a sample of log event messages. You can use this operation to validate the correctness of a metric filter pattern.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6608

def test_metric_filter(params = {}, options = {})
  req = build_request(:test_metric_filter, params)
  req.send_request(options)
end
#test_transformer(params = {}) ⇒ Types::TestTransformerResponse
Use this operation to test a log transformer. You enter the transformer configuration and a set of log events to test with. The operation responds with an array that includes the original log events and the transformed versions.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6781

def test_transformer(params = {}, options = {})
  req = build_request(:test_transformer, params)
  req.send_request(options)
end
#untag_log_group(params = {}) ⇒ Struct
The UntagLogGroup operation is on the path to deprecation. We recommend that you use [UntagResource] instead.
Removes the specified tags from the specified log group.
To list the tags for a log group, use [ListTagsForResource]. To add tags, use [TagResource].
CloudWatch Logs doesn’t support IAM policies that prevent users from assigning specified tags to log groups using the `aws:Resource/key-name` or `aws:TagKeys` condition keys.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_UntagResource.html [2]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_ListTagsForResource.html [3]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_TagResource.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6823

def untag_log_group(params = {}, options = {})
  req = build_request(:untag_log_group, params)
  req.send_request(options)
end
#untag_resource(params = {}) ⇒ Struct
Removes one or more tags from the specified resource.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6863

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end
#update_anomaly(params = {}) ⇒ Struct
Use this operation to suppress anomaly detection for a specified anomaly or pattern. If you suppress an anomaly, CloudWatch Logs won’t report new occurrences of that anomaly and won’t update that anomaly with new data. If you suppress a pattern, CloudWatch Logs won’t report any anomalies related to that pattern.
You must specify either `anomalyId` or `patternId`, but you can’t specify both parameters in the same operation.
If you have previously used this operation to suppress detection of a pattern or anomaly, you can use it again to cause CloudWatch Logs to end the suppression. To do this, use this operation and specify the anomaly or pattern to stop suppressing, and omit the `suppressionType` and `suppressionPeriod` parameters.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6943

def update_anomaly(params = {}, options = {})
  req = build_request(:update_anomaly, params)
  req.send_request(options)
end
#update_delivery_configuration(params = {}) ⇒ Struct
Use this operation to update the configuration of a [delivery] to change either the S3 path pattern or the format of the delivered logs. You can’t use this operation to change the source or destination of the delivery.
[1]: docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_Delivery.html
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 6991

def update_delivery_configuration(params = {}, options = {})
  req = build_request(:update_delivery_configuration, params)
  req.send_request(options)
end
#update_log_anomaly_detector(params = {}) ⇒ Struct
Updates an existing log anomaly detector.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 7041

def update_log_anomaly_detector(params = {}, options = {})
  req = build_request(:update_log_anomaly_detector, params)
  req.send_request(options)
end
#waiter_names ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/aws-sdk-cloudwatchlogs/client.rb', line 7070

def waiter_names
  []
end