Class: Aws::Kendra::Types::WebCrawlerConfiguration
Inherits: Struct
  - Object
  - Struct
  - Aws::Kendra::Types::WebCrawlerConfiguration
Includes: Structure
Defined in: lib/aws-sdk-kendra/types.rb
Overview
Provides the configuration information required for Amazon Kendra Web Crawler.
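As an illustration, the sketch below passes this configuration as a nested hash when creating a web crawler data source. The create_data_source call, its configuration.web_crawler_configuration shape, and the Urls/SeedUrlConfiguration nesting are assumptions drawn from the related types in this SDK; the region, index ID, role ARN, and seed URL are placeholders.

require "aws-sdk-kendra"

kendra = Aws::Kendra::Client.new(region: "us-east-1") # placeholder region

kendra.create_data_source(
  index_id: "index-1234abcd",                            # placeholder index ID
  name: "docs-web-crawler",
  type: "WEBCRAWLER",
  role_arn: "arn:aws:iam::111122223333:role/KendraRole", # placeholder role ARN
  configuration: {
    web_crawler_configuration: {
      urls: {
        seed_url_configuration: {
          seed_urls: ["https://docs.example.com"]        # placeholder seed URL
        }
      },
      crawl_depth: 2,                                    # seed pages plus pages they link to
      max_links_per_page: 100,                           # defaults shown explicitly
      max_content_size_per_page_in_mega_bytes: 50.0,
      max_urls_per_minute_crawl_rate: 300
    }
  }
)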
Constant Summary
- SENSITIVE = []
Instance Attribute Summary
- #authentication_configuration ⇒ Types::AuthenticationConfiguration
  Configuration information required to connect to websites using authentication.
- #crawl_depth ⇒ Integer
  The ‘depth’ or number of levels from the seed level to crawl.
- #max_content_size_per_page_in_mega_bytes ⇒ Float
  The maximum size (in MB) of a web page or attachment to crawl.
- #max_links_per_page ⇒ Integer
  The maximum number of URLs on a web page to include when crawling a website.
- #max_urls_per_minute_crawl_rate ⇒ Integer
  The maximum number of URLs crawled per website host per minute.
- #proxy_configuration ⇒ Types::ProxyConfiguration
  Configuration information required to connect to your internal websites via a web proxy.
- #url_exclusion_patterns ⇒ Array<String>
  A list of regular expression patterns to exclude certain URLs from being crawled.
- #url_inclusion_patterns ⇒ Array<String>
  A list of regular expression patterns to include certain URLs in the crawl.
- #urls ⇒ Types::Urls
  Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl.
Instance Attribute Details
#authentication_configuration ⇒ Types::AuthenticationConfiguration
Configuration information required to connect to websites using authentication.
You can connect to websites using basic authentication with a user name and password. You use a secret in [Secrets Manager][1] to store your authentication credentials.
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is “a.example.com” and the port is 443, the standard port for HTTPS.
[1]: docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
# File 'lib/aws-sdk-kendra/types.rb', line 11256

class WebCrawlerConfiguration < Struct.new(
  :urls,
  :crawl_depth,
  :max_links_per_page,
  :max_content_size_per_page_in_mega_bytes,
  :max_urls_per_minute_crawl_rate,
  :url_inclusion_patterns,
  :url_exclusion_patterns,
  :proxy_configuration,
  :authentication_configuration)
  SENSITIVE = []
  include Aws::Structure
end
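A hedged sketch of an authentication_configuration value follows; the basic_authentication array of host/port/credentials entries mirrors Types::AuthenticationConfiguration and Types::BasicAuthenticationConfiguration elsewhere in this SDK, and the secret ARN is a placeholder.

authentication_configuration = {
  basic_authentication: [
    {
      host: "a.example.com",  # website host name only, no scheme or path
      port: 443,              # standard HTTPS port
      credentials: "arn:aws:secretsmanager:us-east-1:111122223333:secret:crawler-basic-auth" # placeholder secret ARN
    }
  ]
}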
#crawl_depth ⇒ Integer
The ‘depth’ or number of levels from the seed level to crawl. For example, the seed URL page is depth 1 and any hyperlinks on this page that are also crawled are depth 2.
# File 'lib/aws-sdk-kendra/types.rb', line 11256

class WebCrawlerConfiguration < Struct.new(
  :urls,
  :crawl_depth,
  :max_links_per_page,
  :max_content_size_per_page_in_mega_bytes,
  :max_urls_per_minute_crawl_rate,
  :url_inclusion_patterns,
  :url_exclusion_patterns,
  :proxy_configuration,
  :authentication_configuration)
  SENSITIVE = []
  include Aws::Structure
end
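For orientation, a minimal sketch of how the crawl depth counts levels from the seed URLs; the seed URL is a placeholder.

web_crawler_configuration = {
  urls: { seed_url_configuration: { seed_urls: ["https://docs.example.com"] } }, # placeholder seed URL (depth 1)
  crawl_depth: 2  # also crawl pages the seed pages link to (depth 2)
}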
#max_content_size_per_page_in_mega_bytes ⇒ Float
The maximum size (in MB) of a web page or attachment to crawl.
Files larger than this size (in MB) are skipped/not crawled.
The default maximum size of a web page or attachment is set to 50 MB.
# File 'lib/aws-sdk-kendra/types.rb', line 11256

class WebCrawlerConfiguration < Struct.new(
  :urls,
  :crawl_depth,
  :max_links_per_page,
  :max_content_size_per_page_in_mega_bytes,
  :max_urls_per_minute_crawl_rate,
  :url_inclusion_patterns,
  :url_exclusion_patterns,
  :proxy_configuration,
  :authentication_configuration)
  SENSITIVE = []
  include Aws::Structure
end
#max_links_per_page ⇒ Integer
The maximum number of URLs on a web page to include when crawling a website. This number is per web page.
As a website’s web pages are crawled, any URLs the web pages link to are also crawled. URLs on a web page are crawled in order of appearance.
The default maximum links per page is 100.
# File 'lib/aws-sdk-kendra/types.rb', line 11256

class WebCrawlerConfiguration < Struct.new(
  :urls,
  :crawl_depth,
  :max_links_per_page,
  :max_content_size_per_page_in_mega_bytes,
  :max_urls_per_minute_crawl_rate,
  :url_inclusion_patterns,
  :url_exclusion_patterns,
  :proxy_configuration,
  :authentication_configuration)
  SENSITIVE = []
  include Aws::Structure
end
#max_urls_per_minute_crawl_rate ⇒ Integer
The maximum number of URLs crawled per website host per minute.
A minimum of one URL is required.
The default maximum number of URLs crawled per website host per minute is 300.
# File 'lib/aws-sdk-kendra/types.rb', line 11256

class WebCrawlerConfiguration < Struct.new(
  :urls,
  :crawl_depth,
  :max_links_per_page,
  :max_content_size_per_page_in_mega_bytes,
  :max_urls_per_minute_crawl_rate,
  :url_inclusion_patterns,
  :url_exclusion_patterns,
  :proxy_configuration,
  :authentication_configuration)
  SENSITIVE = []
  include Aws::Structure
end
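The three limits above (page size, links per page, and crawl rate) can be set together. The sketch below uses illustrative values, not recommendations, and the seed URL is a placeholder.

web_crawler_configuration = {
  urls: { seed_url_configuration: { seed_urls: ["https://docs.example.com"] } }, # placeholder seed URL
  max_content_size_per_page_in_mega_bytes: 25.0, # skip pages or attachments larger than 25 MB (default 50 MB)
  max_links_per_page: 50,                        # follow at most 50 links per crawled page (default 100)
  max_urls_per_minute_crawl_rate: 100            # crawl at most 100 URLs per host per minute (default 300)
}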
#proxy_configuration ⇒ Types::ProxyConfiguration
Configuration information required to connect to your internal websites via a web proxy.
You must provide the website host name and port number. For example, the host name of https://a.example.com/page1.html is “a.example.com” and the port is 443, the standard port for HTTPS.
Web proxy credentials are optional and you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in [Secrets Manager][1].
[1]: docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
# File 'lib/aws-sdk-kendra/types.rb', line 11256

class WebCrawlerConfiguration < Struct.new(
  :urls,
  :crawl_depth,
  :max_links_per_page,
  :max_content_size_per_page_in_mega_bytes,
  :max_urls_per_minute_crawl_rate,
  :url_inclusion_patterns,
  :url_exclusion_patterns,
  :proxy_configuration,
  :authentication_configuration)
  SENSITIVE = []
  include Aws::Structure
end
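A hedged sketch of a proxy_configuration value; the host/port/credentials members mirror Types::ProxyConfiguration elsewhere in this SDK, and the proxy host and secret ARN are placeholders.

proxy_configuration = {
  host: "proxy.internal.example.com", # web proxy host name only, no scheme or path
  port: 443,
  credentials: "arn:aws:secretsmanager:us-east-1:111122223333:secret:web-proxy-creds" # optional; omit if the proxy requires no basic authentication
}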
#url_exclusion_patterns ⇒ Array<String>
A list of regular expression patterns to exclude certain URLs from being crawled. URLs that match the patterns are excluded from the index. URLs that don’t match the patterns are included in the index. If a URL matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the URL isn’t included in the index.
# File 'lib/aws-sdk-kendra/types.rb', line 11256

class WebCrawlerConfiguration < Struct.new(
  :urls,
  :crawl_depth,
  :max_links_per_page,
  :max_content_size_per_page_in_mega_bytes,
  :max_urls_per_minute_crawl_rate,
  :url_inclusion_patterns,
  :url_exclusion_patterns,
  :proxy_configuration,
  :authentication_configuration)
  SENSITIVE = []
  include Aws::Structure
end
#url_inclusion_patterns ⇒ Array<String>
A list of regular expression patterns to include certain URLs in the crawl. URLs that match the patterns are included in the index. URLs that don’t match the patterns are excluded from the index. If a URL matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the URL isn’t included in the index.
# File 'lib/aws-sdk-kendra/types.rb', line 11256

class WebCrawlerConfiguration < Struct.new(
  :urls,
  :crawl_depth,
  :max_links_per_page,
  :max_content_size_per_page_in_mega_bytes,
  :max_urls_per_minute_crawl_rate,
  :url_inclusion_patterns,
  :url_exclusion_patterns,
  :proxy_configuration,
  :authentication_configuration)
  SENSITIVE = []
  include Aws::Structure
end
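A short sketch combining inclusion and exclusion patterns; the patterns and seed URL are illustrative only, and the exclusion pattern wins when both match a URL.

web_crawler_configuration = {
  urls: { seed_url_configuration: { seed_urls: ["https://docs.example.com"] } }, # placeholder seed URL
  url_inclusion_patterns: ["https://docs\\.example\\.com/guides/.*"],            # only index the guides section
  url_exclusion_patterns: [".*/archive/.*", ".*\\.pdf$"]                         # skip archived pages and PDFs
}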
#urls ⇒ Types::Urls
Specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl.
You can include website subdomains. You can list up to 100 seed URLs and up to three sitemap URLs.
You can only crawl websites that use the secure communication protocol, Hypertext Transfer Protocol Secure (HTTPS). If you receive an error when crawling a website, it could be that the website is blocked from crawling.
*When selecting websites to index, you must adhere to the [Amazon Acceptable Use Policy][1] and all other Amazon terms. Remember that you must only use Amazon Kendra Web Crawler to index your own web pages, or web pages that you have authorization to index.*
[1]: aws.amazon.com/aup/
# File 'lib/aws-sdk-kendra/types.rb', line 11256

class WebCrawlerConfiguration < Struct.new(
  :urls,
  :crawl_depth,
  :max_links_per_page,
  :max_content_size_per_page_in_mega_bytes,
  :max_urls_per_minute_crawl_rate,
  :url_inclusion_patterns,
  :url_exclusion_patterns,
  :proxy_configuration,
  :authentication_configuration)
  SENSITIVE = []
  include Aws::Structure
end
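Two hedged sketches of a urls value follow, one using seed URLs and one using sitemaps. The seed_url_configuration and site_maps_configuration shapes and the web_crawler_mode value are assumptions based on the related Types::Urls, Types::SeedUrlConfiguration, and Types::SiteMapsConfiguration entries in this SDK; all URLs are placeholders.

# Seed URL form: crawl the listed pages and, depending on the mode, the same host,
# its subdomains, or everything they link to.
urls_with_seed_urls = {
  seed_url_configuration: {
    seed_urls: ["https://docs.example.com", "https://blog.example.com"], # placeholders; up to 100 seed URLs
    web_crawler_mode: "SUBDOMAINS" # assumed enum value; see Types::SeedUrlConfiguration for the documented modes
  }
}

# Sitemap form: crawl the URLs listed in the given sitemaps.
urls_with_site_maps = {
  site_maps_configuration: {
    site_maps: ["https://docs.example.com/sitemap.xml"] # placeholder; up to three sitemap URLs
  }
}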