diff --git a/instra/tutorial-gen/0-general-instructions.md b/instra/tutorial-gen/0-general-instructions.md index d201cbc..aa726ee 100644 --- a/instra/tutorial-gen/0-general-instructions.md +++ b/instra/tutorial-gen/0-general-instructions.md @@ -13,3 +13,6 @@ When creating tutorials for specific AWS services: 1. CRITICAL FOR VPC CREATION: - Use the architecture defined in vpc-example.md as reference - This step is mandatory before creating any VPC resources +2. CRITICAL FOR MARKDOWN FORMATTING: + - Follow the markdown-formatting-guide.md for consistent formatting + - Always use hyphens (-) for lists, never asterisks (*) or bullet characters (•) diff --git a/instra/tutorial-gen/3a-draft-tutorial.md b/instra/tutorial-gen/3a-draft-tutorial.md index 9585539..7e58e01 100644 --- a/instra/tutorial-gen/3a-draft-tutorial.md +++ b/instra/tutorial-gen/3a-draft-tutorial.md @@ -1,36 +1,41 @@ -# draft a tutorial - -Use the example commands and output in cli-workflow.md to generate a tutorial, with a section for each group of related commands, and sections on prerequisites and next steps. Include guidance before and after every code block, even if it's just one sentence. reference the content in golden-tutorial.md as an example of a good tutorial. If content in the prerequisites section of the golden tutorial applies to this tutorial, reuse it. Name the output file 3-tutorial-draft.md. - -## links - -The tutorial may be published in the service guide, so don't include any general links to the guide or documentation landing page. In the next steps section, link to a topic in the service guide for each feature or use case listed. The prerequisites section can also have links, but avoid adding links to the core sections of the tutorial where readers are following instructions. Links in these sections can pull the reader away from the tutorial unnecessarily. - -## formatting - -Only use two levels of headers. H1 for the topic title, and H2 for the sections. 
To add a title to a code block or procedure, just use bold text. - -Use sentence case for all headers and titles. -Use present tense and active voice as much as possible. - -Don't add linebreaks in the middle of a paragraph. Keep all of the text in the paragraph on one line. Ensure that there is an empty line between all headers, paragraphs, example titles, and code blocks. - -For any relative path links, replace the './' with the full path that it represents. - -## portability - -Omit the --region option in example commands, unless it is required because by the specific use case or the service API. For example, if a command requires you to specify an availability zone, you need to ensure that you are calling the service in the same AWS Region as the availability zone. Otherwise, assume that the reader wants to create resources in the Region that they configured when they set up the AWS CLI, or write a script that they can run in any Region. - -## naming rules - -**account ids** - Replace 12 digit AWS account numbers with 123456789012. For examples with two account numbers, use 234567890123 for the second number. - -**GUIDs** - Obfuscate GUIDs by making the second character sequence in the guid "xmpl". - -**resource IDs** - For hex sequences, replace characters in the example with "abcd1234". For other numeric IDs, renumber starting with 1234. For alphanumric ID strings, replace characters 5-8 with "xmpl". - -**timestamps** - Replace timestamps with a value representing January 13th of the current year. - -**IP addresses** - Replace public IP addresses with fictitious addresses such as 203.0.113.75 or another address in the 203.0.113 - +# draft a tutorial + +Use the example commands and output in cli-workflow.md to generate a tutorial, with a section for each group of related commands, and sections on prerequisites and next steps. Include guidance before and after every code block, even if it's just one sentence. 
Reference the content in golden-tutorial.md as an example of a good tutorial. If content in the prerequisites section of the golden tutorial applies to this tutorial, reuse it. Name the output file 3-tutorial-draft.md. + +## links + +The tutorial may be published in the service guide, so don't include any general links to the guide or documentation landing page. In the next steps section, link to a topic in the service guide for each feature or use case listed. The prerequisites section can also have links, but avoid adding links to the core sections of the tutorial where readers are following instructions. Links in these sections can pull the reader away from the tutorial unnecessarily. + +## formatting + +Only use two levels of headers. H1 for the topic title, and H2 for the sections. To add a title to a code block or procedure, just use bold text. + +Use sentence case for all headers and titles. +Use present tense and active voice as much as possible. + +Don't add linebreaks in the middle of a paragraph. Keep all of the text in the paragraph on one line. Ensure that there is an empty line between all headers, paragraphs, example titles, and code blocks. + +**List formatting:** +- Always use hyphens (-) for unordered lists, never asterisks (*) or bullet characters (•) +- Maintain consistent indentation for nested lists +- Example: `- First item` not `* First item` or `• First item` + +For any relative path links, replace the './' with the full path that it represents. + +## portability + +Omit the --region option in example commands, unless it is required by the specific use case or the service API. For example, if a command requires you to specify an availability zone, you need to ensure that you are calling the service in the same AWS Region as the availability zone. Otherwise, assume that the reader wants to create resources in the Region that they configured when they set up the AWS CLI, or write a script that they can run in any Region.
+ +## naming rules + +**account IDs** - Replace 12-digit AWS account numbers with 123456789012. For examples with two account numbers, use 234567890123 for the second number. + +**GUIDs** - Obfuscate GUIDs by making the second character sequence in the GUID "xmpl". + +**resource IDs** - For hex sequences, replace characters in the example with "abcd1234". For other numeric IDs, renumber starting with 1234. For alphanumeric ID strings, replace characters 5-8 with "xmpl". + +**timestamps** - Replace timestamps with a value representing January 13th of the current year. + +**IP addresses** - Replace public IP addresses with fictitious addresses such as 203.0.113.75 or another address in the 203.0.113.0/24 range. + **bucket names** - For S3 buckets, the name in the tutorial must start with "amzn-s3-demo". The script can't use this name because it's reserved for documentation. Leave the script as is but replace the prefix used by the script with "amzn-s3-demo" in the tutorial. \ No newline at end of file diff --git a/instra/tutorial-gen/3b-validate-tutorial.md b/instra/tutorial-gen/3b-validate-tutorial.md index b3df9bf..d7167a5 100644 --- a/instra/tutorial-gen/3b-validate-tutorial.md +++ b/instra/tutorial-gen/3b-validate-tutorial.md @@ -5,6 +5,11 @@ Validating the content of the AWS CLI tutorial and surface issues about the gene Review the tutorial markdown for proper formatting: +**List formatting:** +- Verify all unordered lists use hyphens (-) consistently, never asterisks (*) or bullet characters (•) +- Check for consistent indentation in nested lists +- Flag any mixed list marker usage within the same document + **Backticks usage:** - Use backticks for all inline code, commands, file paths, resource IDs, status values, and technical terms - Examples: `aws s3 ls`, `my-bucket-name`, `ACTIVE`, `~/path/to/file`, `us-east-1` diff --git a/instra/tutorial-gen/golden-tutorial.md b/instra/tutorial-gen/golden-tutorial.md index 4cfc65d..562963a 100644 ---
a/instra/tutorial-gen/golden-tutorial.md +++ b/instra/tutorial-gen/golden-tutorial.md @@ -4,14 +4,14 @@ This tutorial guides you through common Amazon Lightsail operations using the AW ## Topics -* [Prerequisites](#getstarted-awscli-prerequisites) -* [Generate SSH key pairs](#getstarted-awscli-generate-ssh-key-pairs) -* [Create and manage instances](#getstarted-awscli-create-and-manage-instances) -* [Connect to your instance](#getstarted-awscli-connect-to-your-instance) -* [Add storage to your instance](#getstarted-awscli-add-storage-to-your-instance) -* [Create and use snapshots](#getstarted-awscli-create-and-use-snapshots) -* [Clean up resources](#getstarted-awscli-clean-up-resources) -* [Next steps](#getstarted-awscli-next-steps) +- [Prerequisites](#getstarted-awscli-prerequisites) +- [Generate SSH key pairs](#getstarted-awscli-generate-ssh-key-pairs) +- [Create and manage instances](#getstarted-awscli-create-and-manage-instances) +- [Connect to your instance](#getstarted-awscli-connect-to-your-instance) +- [Add storage to your instance](#getstarted-awscli-add-storage-to-your-instance) +- [Create and use snapshots](#getstarted-awscli-create-and-use-snapshots) +- [Clean up resources](#getstarted-awscli-clean-up-resources) +- [Next steps](#getstarted-awscli-next-steps) ## Prerequisites diff --git a/instra/tutorial-gen/markdown-formatting-guide.md b/instra/tutorial-gen/markdown-formatting-guide.md new file mode 100644 index 0000000..4e5f649 --- /dev/null +++ b/instra/tutorial-gen/markdown-formatting-guide.md @@ -0,0 +1,32 @@ +# Markdown Formatting Guide + +This guide ensures consistent markdown formatting across all tutorials to prevent rendering issues. 
+ +## List Formatting Rules + +**REQUIRED:** Always use hyphens (-) for unordered lists +- ✅ Correct: `- First item` +- ❌ Incorrect: `* First item` +- ❌ Incorrect: `• First item` + +**Consistency:** Use the same list marker throughout the entire document +- All lists in a single file must use hyphens (-) +- Never mix asterisks (*) and hyphens (-) in the same document +- Never use bullet characters (•) which are not standard markdown + +## Whitespace Rules + +**Trailing whitespace:** Remove all trailing whitespace from lines +- Use `sed -i 's/[[:space:]]*$//' filename.md` to clean up + +**Line endings:** Use Unix line endings (LF) not Windows (CRLF) +- Use `sed -i 's/\r$//' filename.md` to convert if needed + +## Validation Checklist + +Before submitting any tutorial, verify: +- [ ] All lists use hyphens (-) consistently +- [ ] No bullet characters (•) are present +- [ ] No trailing whitespace on any lines +- [ ] Unix line endings are used +- [ ] Consistent indentation for nested lists diff --git a/tuts/001-lightsail-gs/README.md b/tuts/001-lightsail-gs/README.md index 3d8eea1..baac69a 100644 --- a/tuts/001-lightsail-gs/README.md +++ b/tuts/001-lightsail-gs/README.md @@ -8,8 +8,8 @@ You can either run the automated script `lightsail-gs.sh` to execute all operati The script creates the following AWS resources in order: -• Lightsail instance (nano_3_0 bundle with Amazon Linux 2023) -• Lightsail disk (8 GB block storage disk) -• Lightsail instance snapshot (backup of the instance) +- Lightsail instance (nano_3_0 bundle with Amazon Linux 2023) +- Lightsail disk (8 GB block storage disk) +- Lightsail instance snapshot (backup of the instance) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/002-vpc-gs/README.md b/tuts/002-vpc-gs/README.md index 7c2a82c..c3edb73 100644 --- a/tuts/002-vpc-gs/README.md +++ b/tuts/002-vpc-gs/README.md @@ -8,21 +8,21 @@ You can either run the automated script `vpc-gs.sh` to execute all operations au The script creates the following AWS resources in order: -• EC2 VPC (10.0.0.0/16 CIDR block with DNS support and hostnames enabled) -• EC2 subnet (public subnet in AZ1 - 10.0.0.0/24) -• EC2 subnet (public subnet in AZ2 - 10.0.1.0/24) -• EC2 subnet (private subnet in AZ1 - 10.0.2.0/24) -• EC2 subnet (private subnet in AZ2 - 10.0.3.0/24) -• EC2 internet gateway (for public internet access) -• EC2 route table (public route table with internet gateway route) -• EC2 route table association (public subnet AZ1 to public route table) -• EC2 route table association (public subnet AZ2 to public route table) -• EC2 route table (private route table) -• EC2 route table association (private subnet AZ1 to private route table) -• EC2 route table association (private subnet AZ2 to private route table) -• EC2 elastic IP (for NAT gateway) -• EC2 NAT gateway (in public subnet AZ1 for private subnet internet access) -• EC2 security group (web server security group allowing HTTP/HTTPS) -• EC2 security group (database security group allowing MySQL from web servers) +- EC2 VPC (10.0.0.0/16 CIDR block with DNS support and hostnames enabled) +- EC2 subnet (public subnet in AZ1 - 10.0.0.0/24) +- EC2 subnet (public subnet in AZ2 - 10.0.1.0/24) +- EC2 subnet (private subnet in AZ1 - 10.0.2.0/24) +- EC2 subnet (private subnet in AZ2 - 10.0.3.0/24) +- EC2 internet gateway (for public internet access) +- EC2 route table (public route table with internet gateway route) +- EC2 route table association (public subnet AZ1 to public route table) +- EC2 route table association (public subnet AZ2 to public route table) +- EC2 route table (private route table) +- EC2 route table association (private subnet AZ1 to private 
route table) +- EC2 route table association (private subnet AZ2 to private route table) +- EC2 elastic IP (for NAT gateway) +- EC2 NAT gateway (in public subnet AZ1 for private subnet internet access) +- EC2 security group (web server security group allowing HTTP/HTTPS) +- EC2 security group (database security group allowing MySQL from web servers) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/003-s3-gettingstarted/README.md b/tuts/003-s3-gettingstarted/README.md index 2b6e68c..7d8d57b 100644 --- a/tuts/003-s3-gettingstarted/README.md +++ b/tuts/003-s3-gettingstarted/README.md @@ -8,14 +8,14 @@ You can either run the automated script `s3-gettingstarted.sh` to execute all op The script creates the following AWS resources in order: -• S3 bucket (primary bucket for tutorial) -• S3 bucket (secondary bucket for cross-region replication) -• S3 public access block configuration -• S3 bucket versioning configuration -• S3 bucket encryption configuration -• S3 object (sample text file) -• S3 object (sample image file) -• S3 object (sample document file) -• S3 bucket tagging configuration +- S3 bucket (primary bucket for tutorial) +- S3 bucket (secondary bucket for cross-region replication) +- S3 public access block configuration +- S3 bucket versioning configuration +- S3 bucket encryption configuration +- S3 object (sample text file) +- S3 object (sample image file) +- S3 object (sample document file) +- S3 bucket tagging configuration The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/004-cloudmap-custom-attributes/README.md b/tuts/004-cloudmap-custom-attributes/README.md index 0da4d21..367f64e 100644 --- a/tuts/004-cloudmap-custom-attributes/README.md +++ b/tuts/004-cloudmap-custom-attributes/README.md @@ -8,22 +8,22 @@ You can either run the automated script `cloudmap-custom-attributes.sh` to execu The script creates the following AWS resources in order: -• Service Discovery http namespace -• Service Discovery http namespace (b) -• DynamoDB table -• Service Discovery service -• Service Discovery instance -• Service Discovery instance (b) -• IAM role -• IAM policy -• IAM role policy -• IAM role policy (b) -• Service Discovery service (b) -• Lambda function -• Service Discovery instance (c) -• Service Discovery instance (d) -• Lambda function (b) -• Service Discovery instance (e) -• Service Discovery instance (f) +- Service Discovery http namespace +- Service Discovery http namespace (b) +- DynamoDB table +- Service Discovery service +- Service Discovery instance +- Service Discovery instance (b) +- IAM role +- IAM policy +- IAM role policy +- IAM role policy (b) +- Service Discovery service (b) +- Lambda function +- Service Discovery instance (c) +- Service Discovery instance (d) +- Lambda function (b) +- Service Discovery instance (e) +- Service Discovery instance (f) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/004-cloudmap-custom-attributes/cloudmap-custom-attributes.md b/tuts/004-cloudmap-custom-attributes/cloudmap-custom-attributes.md index 390da0a..f74a429 100644 --- a/tuts/004-cloudmap-custom-attributes/cloudmap-custom-attributes.md +++ b/tuts/004-cloudmap-custom-attributes/cloudmap-custom-attributes.md @@ -203,38 +203,38 @@ import random def lambda_handler(event, context): try: serviceclient = boto3.client('servicediscovery') - + response = serviceclient.discover_instances( NamespaceName='cloudmap-tutorial', ServiceName='data-service') - + if not response.get("Instances"): return { 'statusCode': 500, 'body': json.dumps({"error": "No instances found"}) } - + tablename = response["Instances"][0]["Attributes"].get("tablename") if not tablename: return { 'statusCode': 500, 'body': json.dumps({"error": "Table name attribute not found"}) } - + dynamodbclient = boto3.resource('dynamodb') - + table = dynamodbclient.Table(tablename) - + # Validate input if not isinstance(event, str): return { 'statusCode': 400, 'body': json.dumps({"error": "Input must be a string"}) } - + response = table.put_item( Item={ 'id': str(random.randint(1,100)), 'todo': event }) - + return { 'statusCode': 200, 'body': json.dumps(response) @@ -318,32 +318,32 @@ def lambda_handler(event, context): serviceclient = boto3.client('servicediscovery') response = serviceclient.discover_instances( - NamespaceName='cloudmap-tutorial', + NamespaceName='cloudmap-tutorial', ServiceName='data-service') - + if not response.get("Instances"): return { 'statusCode': 500, 'body': json.dumps({"error": "No instances found"}) } - + tablename = response["Instances"][0]["Attributes"].get("tablename") if not tablename: return { 'statusCode': 500, 'body': json.dumps({"error": "Table name attribute not found"}) } - + dynamodbclient = boto3.resource('dynamodb') - + table = dynamodbclient.Table(tablename) - + # Use pagination for larger tables response = table.scan( 
Select='ALL_ATTRIBUTES', Limit=50 # Limit results for demonstration purposes ) - + # For production, you would implement pagination like this: # items = [] # while 'LastEvaluatedKey' in response: @@ -417,33 +417,33 @@ try: print("Discovering write function...") response = serviceclient.discover_instances( - NamespaceName='cloudmap-tutorial', - ServiceName='app-service', + NamespaceName='cloudmap-tutorial', + ServiceName='app-service', QueryParameters={ 'action': 'write' } ) if not response.get("Instances"): print("Error: No instances found") exit(1) - + functionname = response["Instances"][0]["Attributes"].get("functionname") if not functionname: print("Error: Function name attribute not found") exit(1) - + print(f"Found function: {functionname}") lambdaclient = boto3.client('lambda') print("Invoking Lambda function...") resp = lambdaclient.invoke( - FunctionName=functionname, + FunctionName=functionname, Payload='"This is a test data"' ) payload = resp["Payload"].read() print(f"Response: {payload.decode('utf-8')}") - + except Exception as e: print(f"Error: {str(e)}") EOF @@ -463,33 +463,33 @@ try: print("Discovering read function...") response = serviceclient.discover_instances( - NamespaceName='cloudmap-tutorial', - ServiceName='app-service', + NamespaceName='cloudmap-tutorial', + ServiceName='app-service', QueryParameters={ 'action': 'read' } ) if not response.get("Instances"): print("Error: No instances found") exit(1) - + functionname = response["Instances"][0]["Attributes"].get("functionname") if not functionname: print("Error: Function name attribute not found") exit(1) - + print(f"Found function: {functionname}") lambdaclient = boto3.client('lambda') print("Invoking Lambda function...") resp = lambdaclient.invoke( - FunctionName=functionname, + FunctionName=functionname, InvocationType='RequestResponse' ) payload = resp["Payload"].read() print(f"Response: {payload.decode('utf-8')}") - + except Exception as e: print(f"Error: {str(e)}") EOF diff --git 
a/tuts/005-cloudfront-gettingstarted/README.md b/tuts/005-cloudfront-gettingstarted/README.md index 5ee2290..c79244f 100644 --- a/tuts/005-cloudfront-gettingstarted/README.md +++ b/tuts/005-cloudfront-gettingstarted/README.md @@ -8,8 +8,8 @@ You can either run the automated script `cloudfront-gettingstarted.sh` to execut The script creates the following AWS resources in order: -• CloudFront origin access control -• CloudFront distribution -• S3 bucket policy +- CloudFront origin access control +- CloudFront distribution +- S3 bucket policy The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/005-cloudfront-gettingstarted/cloudfront-gettingstarted.md b/tuts/005-cloudfront-gettingstarted/cloudfront-gettingstarted.md index ae816b5..8e98fe4 100644 --- a/tuts/005-cloudfront-gettingstarted/cloudfront-gettingstarted.md +++ b/tuts/005-cloudfront-gettingstarted/cloudfront-gettingstarted.md @@ -4,14 +4,14 @@ This tutorial shows you how to use the AWS CLI to set up a basic CloudFront dist ## Topics -* [Prerequisites](#prerequisites) -* [Create an Amazon S3 bucket](#create-an-amazon-s3-bucket) -* [Upload content to the bucket](#upload-content-to-the-bucket) -* [Create a CloudFront distribution with OAC](#create-a-cloudfront-distribution-with-oac) -* [Access your content through CloudFront](#access-your-content-through-cloudfront) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create an Amazon S3 bucket](#create-an-amazon-s3-bucket) +- [Upload content to the bucket](#upload-content-to-the-bucket) +- [Create a CloudFront distribution with OAC](#create-a-cloudfront-distribution-with-oac) +- [Access your content through 
CloudFront](#access-your-content-through-cloudfront) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites diff --git a/tuts/007-chimesdk-routingcalls/README.md b/tuts/007-chimesdk-routingcalls/README.md index bcd44ad..8bd12e8 100644 --- a/tuts/007-chimesdk-routingcalls/README.md +++ b/tuts/007-chimesdk-routingcalls/README.md @@ -8,12 +8,12 @@ You can either run the automated script `chimesdk-routingcalls.sh` to execute al The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• Lambda function -• Lambda function (b) -• Chime SDK Voice sip media application -• Chime SDK Voice sip media application (b) -• Chime SDK Voice sip rule +- IAM role +- IAM role policy +- Lambda function +- Lambda function (b) +- Chime SDK Voice sip media application +- Chime SDK Voice sip media application (b) +- Chime SDK Voice sip rule The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/007-chimesdk-routingcalls/chimesdk-routingcalls.md b/tuts/007-chimesdk-routingcalls/chimesdk-routingcalls.md index ac6063f..16568dc 100644 --- a/tuts/007-chimesdk-routingcalls/chimesdk-routingcalls.md +++ b/tuts/007-chimesdk-routingcalls/chimesdk-routingcalls.md @@ -136,7 +136,7 @@ mkdir -p lambda cat > lambda/index.js << EOF exports.handler = async (event) => { console.log('Received event:', JSON.stringify(event, null, 2)); - + // Simple call handling logic const response = { SchemaVersion: '1.0', @@ -157,7 +157,7 @@ exports.handler = async (event) => { } ] }; - + return response; }; EOF @@ -289,7 +289,7 @@ First, create a backup Lambda function in the same region. 
cat > lambda/backup-index.js << EOF exports.handler = async (event) => { console.log('Received event in backup handler:', JSON.stringify(event, null, 2)); - + // Simple call handling logic for backup const response = { SchemaVersion: '1.0', @@ -310,7 +310,7 @@ exports.handler = async (event) => { } ] }; - + return response; }; EOF diff --git a/tuts/008-vpc-private-servers-gs/README.md b/tuts/008-vpc-private-servers-gs/README.md index 8dcd93c..f78fe9d 100644 --- a/tuts/008-vpc-private-servers-gs/README.md +++ b/tuts/008-vpc-private-servers-gs/README.md @@ -8,25 +8,25 @@ You can either run the automated script `vpc-private-servers-gs.sh` to execute a The script creates the following AWS resources in order: -• EC2 vpc -• EC2 subnet -• EC2 subnet (b) -• EC2 subnet (c) -• EC2 subnet (d) -• EC2 internet gateway -• EC2 internet gateway (b) -• EC2 route table -• EC2 route table (b) -• EC2 route table (c) -• EC2 route -• EC2 route table (d) -• EC2 route table (e) -• EC2 route table (f) -• EC2 route table (g) -• EC2 address -• EC2 address (b) -• EC2 nat gateway -• EC2 nat gateway (b) -• EC2 route (b) +- EC2 vpc +- EC2 subnet +- EC2 subnet (b) +- EC2 subnet (c) +- EC2 subnet (d) +- EC2 internet gateway +- EC2 internet gateway (b) +- EC2 route table +- EC2 route table (b) +- EC2 route table (c) +- EC2 route +- EC2 route table (d) +- EC2 route table (e) +- EC2 route table (f) +- EC2 route table (g) +- EC2 address +- EC2 address (b) +- EC2 nat gateway +- EC2 nat gateway (b) +- EC2 route (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/008-vpc-private-servers-gs/vpc-private-servers-gs.md b/tuts/008-vpc-private-servers-gs/vpc-private-servers-gs.md index d90150a..b7c84c6 100644 --- a/tuts/008-vpc-private-servers-gs/vpc-private-servers-gs.md +++ b/tuts/008-vpc-private-servers-gs/vpc-private-servers-gs.md @@ -6,13 +6,13 @@ This tutorial demonstrates how to create a VPC that you can use for servers in a Before you begin this tutorial, you need: -* The AWS CLI installed and configured with permissions to create VPC resources, EC2 instances, load balancers, and Auto Scaling groups. For information about installing the AWS CLI, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +- The AWS CLI installed and configured with permissions to create VPC resources, EC2 instances, load balancers, and Auto Scaling groups. For information about installing the AWS CLI, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). -* Basic knowledge of VPC concepts, including subnets, route tables, and internet gateways. +- Basic knowledge of VPC concepts, including subnets, route tables, and internet gateways. -* The `jq` command-line JSON processor installed. This is used to parse the output of AWS CLI commands. For information about installing jq, see [Download jq](https://stedolan.github.io/jq/download/). +- The `jq` command-line JSON processor installed. This is used to parse the output of AWS CLI commands. For information about installing jq, see [Download jq](https://stedolan.github.io/jq/download/). 
-* Sufficient service quotas for the resources you'll create, including: +- Sufficient service quotas for the resources you'll create, including: * At least 2 available Elastic IP addresses * At least 2 NAT gateways * At least 1 VPC @@ -20,10 +20,10 @@ Before you begin this tutorial, you need: * At least 1 Application Load Balancer **Estimated cost**: The resources created in this tutorial will incur charges in your AWS account: -* NAT Gateways: approximately $0.045 per hour, plus data processing charges -* Elastic IP addresses: Free when associated with running instances, approximately $0.005 per hour when not associated -* EC2 instances: Varies by instance type (t3.micro used in this tutorial) -* Application Load Balancer: approximately $0.0225 per hour, plus data processing charges +- NAT Gateways: approximately $0.045 per hour, plus data processing charges +- Elastic IP addresses: Free when associated with running instances, approximately $0.005 per hour when not associated +- EC2 instances: Varies by instance type (t3.micro used in this tutorial) +- Application Load Balancer: approximately $0.0225 per hour, plus data processing charges ## Create the VPC and subnets @@ -115,10 +115,10 @@ aws ec2 create-subnet \ Each command returns output containing the subnet ID. Note these IDs for use in subsequent commands: -* Public Subnet 1: `subnet-abcd1234` -* Private Subnet 1: `subnet-abcd5678` -* Public Subnet 2: `subnet-efgh1234` -* Private Subnet 2: `subnet-efgh5678` +- Public Subnet 1: `subnet-abcd1234` +- Private Subnet 1: `subnet-abcd5678` +- Public Subnet 2: `subnet-efgh1234` +- Private Subnet 2: `subnet-efgh5678` ## Create and configure internet connectivity @@ -153,9 +153,9 @@ aws ec2 create-route-table --vpc-id vpc-abcd1234 --tag-specifications 'ResourceT Each command returns output containing the route table ID. 
Note these IDs: -* Public Route Table: `rtb-abcd1234` -* Private Route Table 1: `rtb-efgh1234` -* Private Route Table 2: `rtb-ijkl1234` +- Public Route Table: `rtb-abcd1234` +- Private Route Table 1: `rtb-efgh1234` +- Private Route Table 2: `rtb-ijkl1234` Add a route to the Internet Gateway in the public route table to enable internet access. Replace `rtb-abcd1234` with your actual public route table ID and `igw-abcd1234` with your actual Internet Gateway ID. @@ -190,8 +190,8 @@ aws ec2 allocate-address --domain vpc --tag-specifications 'ResourceType=elastic Each command returns output containing the allocation ID. Note these IDs: -* EIP 1 Allocation ID: `eipalloc-abcd1234` -* EIP 2 Allocation ID: `eipalloc-efgh1234` +- EIP 1 Allocation ID: `eipalloc-abcd1234` +- EIP 2 Allocation ID: `eipalloc-efgh1234` Create NAT Gateways in each public subnet. Replace the subnet IDs and allocation IDs with your actual IDs. @@ -211,8 +211,8 @@ aws ec2 create-nat-gateway \ Each command returns output containing the NAT Gateway ID. Note these IDs: -* NAT Gateway 1: `nat-abcd1234` -* NAT Gateway 2: `nat-efgh1234` +- NAT Gateway 1: `nat-abcd1234` +- NAT Gateway 2: `nat-efgh1234` NAT Gateways take a few minutes to provision. Wait for them to be available before proceeding. Replace the NAT Gateway IDs with your actual IDs. 
@@ -527,10 +527,10 @@ aws ec2 delete-vpc --vpc-id vpc-abcd1234 Now that you've created a VPC with private subnets and NAT gateways, you might want to explore these related topics: -* [VPC security best practices](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-best-practices.html) -* [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) -* [Auto Scaling group scaling policies](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html) -* [Load balancer target group health checks](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html) +- [VPC security best practices](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-best-practices.html) +- [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) +- [Auto Scaling group scaling policies](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html) +- [Load balancer target group health checks](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html) ## Security Considerations diff --git a/tuts/009-vpc-ipam-gs/README.md b/tuts/009-vpc-ipam-gs/README.md index fdb6c1e..bdba959 100644 --- a/tuts/009-vpc-ipam-gs/README.md +++ b/tuts/009-vpc-ipam-gs/README.md @@ -8,10 +8,10 @@ You can either run the automated script `vpc-ipam-gs.sh` to execute all operatio The script creates the following AWS resources in order: -• EC2 ipam -• EC2 ipam pool -• EC2 ipam pool (b) -• EC2 ipam pool (c) -• EC2 vpc +- EC2 ipam +- EC2 ipam pool +- EC2 ipam pool (b) +- EC2 ipam pool (c) +- EC2 vpc The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/009-vpc-ipam-gs/vpc-ipam-gs.md b/tuts/009-vpc-ipam-gs/vpc-ipam-gs.md index 387233b..06b81a8 100644 --- a/tuts/009-vpc-ipam-gs/vpc-ipam-gs.md +++ b/tuts/009-vpc-ipam-gs/vpc-ipam-gs.md @@ -6,18 +6,18 @@ This tutorial guides you through the process of setting up and using Amazon VPC Before you begin this tutorial, make sure you have: -* An AWS account with permissions to create and manage IPAM resources -* The AWS CLI installed and configured with appropriate credentials -* Basic understanding of IP addressing and CIDR notation -* Basic knowledge of Amazon VPC concepts -* Approximately 30 minutes to complete the tutorial +- An AWS account with permissions to create and manage IPAM resources +- The AWS CLI installed and configured with appropriate credentials +- Basic understanding of IP addressing and CIDR notation +- Basic knowledge of Amazon VPC concepts +- Approximately 30 minutes to complete the tutorial ### Cost considerations The resources you create in this tutorial will incur the following costs: -* IPAM: $0.02 per hour for the Advanced tier (the default tier used in this tutorial) -* IPAM Pools: No additional charge for pools created within IPAM -* VPC: No charge for the VPC itself +- IPAM: $0.02 per hour for the Advanced tier (the default tier used in this tutorial) +- IPAM Pools: No additional charge for pools created within IPAM +- VPC: No charge for the VPC itself The total cost for running the resources created in this tutorial for one hour is approximately $0.02. To avoid ongoing charges, make sure to follow the cleanup instructions at the end of the tutorial. @@ -230,19 +230,19 @@ This command shows all allocations from the specified IPAM pool, including the V Here are some common issues you might encounter when working with IPAM: -* **Permission errors**: Ensure that your IAM user or role has the necessary permissions to create and manage IPAM resources. 
You may need the `ec2:CreateIpam`, `ec2:CreateIpamPool`, and other related permissions. +- **Permission errors**: Ensure that your IAM user or role has the necessary permissions to create and manage IPAM resources. You may need the `ec2:CreateIpam`, `ec2:CreateIpamPool`, and other related permissions. -* **Resource limit exceeded**: By default, you can create only one IPAM per account. If you already have an IPAM, you'll need to delete it before creating a new one or use the existing one. +- **Resource limit exceeded**: By default, you can create only one IPAM per account. If you already have an IPAM, you'll need to delete it before creating a new one or use the existing one. -* **CIDR allocation failures**: When provisioning CIDRs to pools, ensure that the CIDR you're trying to provision doesn't overlap with existing allocations in other pools. +- **CIDR allocation failures**: When provisioning CIDRs to pools, ensure that the CIDR you're trying to provision doesn't overlap with existing allocations in other pools. -* **API request timeouts**: If you encounter "RequestExpired" errors, it might be due to network latency or time synchronization issues. Try the command again. +- **API request timeouts**: If you encounter "RequestExpired" errors, it might be due to network latency or time synchronization issues. Try the command again. -* **Incorrect state errors**: If you receive "IncorrectState" errors, it might be because you're trying to perform an operation on a resource that's not in the correct state. Wait for the resource to be fully created or provisioned before proceeding. +- **Incorrect state errors**: If you receive "IncorrectState" errors, it might be because you're trying to perform an operation on a resource that's not in the correct state. Wait for the resource to be fully created or provisioned before proceeding. 
-* **Allocation size errors**: If you receive "InvalidParameterValue" errors about allocation size, ensure that the netmask length you're requesting is appropriate for the pool size. For example, you can't allocate a /25 CIDR from a /24 pool. +- **Allocation size errors**: If you receive "InvalidParameterValue" errors about allocation size, ensure that the netmask length you're requesting is appropriate for the pool size. For example, you can't allocate a /23 CIDR from a /24 pool, because a /23 is larger than the entire pool. -* **Dependency violations**: When cleaning up resources, you might encounter "DependencyViolation" errors. This is because resources have dependencies on each other. Make sure to delete resources in the reverse order of creation and deprovision CIDRs before deleting pools. +- **Dependency violations**: When cleaning up resources, you might encounter "DependencyViolation" errors. This is because resources have dependencies on each other. Make sure to delete resources in the reverse order of creation and deprovision CIDRs before deleting pools.
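A quick way to sanity-check the allocation-size rule is with Python's `ipaddress` module. This sketch checks only the size relationship, ignoring pool fragmentation and existing allocations, which the real IPAM allocator also enforces. Note the direction of the rule: a /25 fits inside a /24 pool, while a /23, being larger than the entire pool, does not:

```python
import ipaddress

def can_allocate(pool_cidr: str, netmask_length: int) -> bool:
    """A requested CIDR can fit in a pool only if its netmask is at least as long as the pool's."""
    pool = ipaddress.ip_network(pool_cidr)
    return netmask_length >= pool.prefixlen

# A /25 (128 addresses) fits inside a /24 pool (256 addresses) ...
print(can_allocate("10.0.0.0/24", 25))  # True
# ... but a /23 (512 addresses) is larger than the whole pool.
print(can_allocate("10.0.0.0/24", 23))  # False
```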
## Clean up resources @@ -304,10 +304,10 @@ Note: You may need to wait between these operations to allow the resources to be Now that you've learned how to create and use IPAM with the AWS CLI, you might want to explore more advanced features: -* [Plan for IP address provisioning](https://docs.aws.amazon.com/vpc/latest/ipam/planning-ipam.html) - Learn how to plan your IP address space effectively -* [Monitor CIDR usage by resource](https://docs.aws.amazon.com/vpc/latest/ipam/monitor-cidr-compliance-ipam.html) - Understand how to monitor IP address usage -* [Share an IPAM pool using AWS RAM](https://docs.aws.amazon.com/vpc/latest/ipam/share-pool-ipam.html) - Learn how to share IPAM pools across AWS accounts -* [Integrate IPAM with accounts in an AWS Organization](https://docs.aws.amazon.com/vpc/latest/ipam/enable-integ-ipam.html) - Discover how to use IPAM across your organization +- [Plan for IP address provisioning](https://docs.aws.amazon.com/vpc/latest/ipam/planning-ipam.html) - Learn how to plan your IP address space effectively +- [Monitor CIDR usage by resource](https://docs.aws.amazon.com/vpc/latest/ipam/monitor-cidr-compliance-ipam.html) - Understand how to monitor IP address usage +- [Share an IPAM pool using AWS RAM](https://docs.aws.amazon.com/vpc/latest/ipam/share-pool-ipam.html) - Learn how to share IPAM pools across AWS accounts +- [Integrate IPAM with accounts in an AWS Organization](https://docs.aws.amazon.com/vpc/latest/ipam/enable-integ-ipam.html) - Discover how to use IPAM across your organization ## Security Considerations diff --git a/tuts/010-cloudmap-service-discovery/README.md b/tuts/010-cloudmap-service-discovery/README.md index 9745bc1..ace17da 100644 --- a/tuts/010-cloudmap-service-discovery/README.md +++ b/tuts/010-cloudmap-service-discovery/README.md @@ -8,10 +8,10 @@ You can either run the automated script `cloudmap-service-discovery.sh` to execu The script creates the following AWS resources in order: -• Service Discovery public dns 
namespace -• Service Discovery service -• Service Discovery service (b) -• Service Discovery instance -• Service Discovery instance (b) +- Service Discovery public dns namespace +- Service Discovery service +- Service Discovery service (b) +- Service Discovery instance +- Service Discovery instance (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/010-cloudmap-service-discovery/cloudmap-service-discovery.md b/tuts/010-cloudmap-service-discovery/cloudmap-service-discovery.md index cf0032d..dd7d1e6 100644 --- a/tuts/010-cloudmap-service-discovery/cloudmap-service-discovery.md +++ b/tuts/010-cloudmap-service-discovery/cloudmap-service-discovery.md @@ -6,9 +6,9 @@ This tutorial demonstrates how to use AWS Cloud Map service discovery using the Before you begin, make sure you have: -* [Installed and configured](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) the AWS CLI with appropriate permissions -* Completed the steps in [Set up to use AWS Cloud Map](https://docs.aws.amazon.com/cloud-map/latest/dg/setting-up-cloud-map.html) -* Installed the `dig` DNS lookup utility command for DNS verification +- [Installed and configured](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) the AWS CLI with appropriate permissions +- Completed the steps in [Set up to use AWS Cloud Map](https://docs.aws.amazon.com/cloud-map/latest/dg/setting-up-cloud-map.html) +- Installed the `dig` DNS lookup utility command for DNS verification ## Create an AWS Cloud Map namespace @@ -305,10 +305,10 @@ aws route53 list-hosted-zones-by-name \ Now that you've learned how to use AWS Cloud Map for service discovery, you can: -* Integrate AWS Cloud Map with your microservices architecture -* Explore health checking 
options for your service instances -* Use AWS Cloud Map with Amazon ECS or Amazon EKS for container service discovery -* Create private DNS namespaces for internal service discovery within your VPCs +- Integrate AWS Cloud Map with your microservices architecture +- Explore health checking options for your service instances +- Use AWS Cloud Map with Amazon ECS or Amazon EKS for container service discovery +- Create private DNS namespaces for internal service discovery within your VPCs ## Security Considerations diff --git a/tuts/011-getting-started-batch-fargate/README.md b/tuts/011-getting-started-batch-fargate/README.md index 7672ede..9b6f6de 100644 --- a/tuts/011-getting-started-batch-fargate/README.md +++ b/tuts/011-getting-started-batch-fargate/README.md @@ -8,10 +8,10 @@ You can either run the automated script `getting-started-batch-fargate.sh` to ex The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• Batch compute environment -• Batch job queue -• Batch job definition +- IAM role +- IAM role policy +- Batch compute environment +- Batch job queue +- Batch job definition The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/011-getting-started-batch-fargate/getting-started-batch-fargate.md b/tuts/011-getting-started-batch-fargate/getting-started-batch-fargate.md index 83d6a95..83ce8dc 100644 --- a/tuts/011-getting-started-batch-fargate/getting-started-batch-fargate.md +++ b/tuts/011-getting-started-batch-fargate/getting-started-batch-fargate.md @@ -16,17 +16,17 @@ By the end of this tutorial, you'll have a working AWS Batch setup that can proc ## Topics -* [Prerequisites](#prerequisites) -* [Create an IAM execution role](#create-an-iam-execution-role) -* [Create a compute environment](#create-a-compute-environment) -* [Create a job queue](#create-a-job-queue) -* [Create a job definition](#create-a-job-definition) -* [Submit and monitor a job](#submit-and-monitor-a-job) -* [View job output](#view-job-output) -* [Clean up resources](#clean-up-resources) -* [Troubleshooting](#troubleshooting) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create an IAM execution role](#create-an-iam-execution-role) +- [Create a compute environment](#create-a-compute-environment) +- [Create a job queue](#create-a-job-queue) +- [Create a job definition](#create-a-job-definition) +- [Submit and monitor a job](#submit-and-monitor-a-job) +- [View job output](#view-job-output) +- [Clean up resources](#clean-up-resources) +- [Troubleshooting](#troubleshooting) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -461,12 +461,12 @@ For comprehensive guidance on production-ready architectures, see the [AWS Well- Now that you've completed this tutorial, you can explore more advanced AWS Batch features: -* [Job queues](https://docs.aws.amazon.com/batch/latest/userguide/job_queues.html) - Learn about job queue scheduling and priority management -* [Job definitions](https://docs.aws.amazon.com/batch/latest/userguide/job_definitions.html) - Explore advanced job 
definition configurations including environment variables, volumes, and retry strategies -* [Compute environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) - Understand different compute environment types and scaling options -* [Multi-node parallel jobs](https://docs.aws.amazon.com/batch/latest/userguide/multi-node-parallel-jobs.html) - Run jobs that span multiple compute nodes -* [Array jobs](https://docs.aws.amazon.com/batch/latest/userguide/array_jobs.html) - Submit large numbers of similar jobs efficiently -* [Best practices](https://docs.aws.amazon.com/batch/latest/userguide/best-practices.html) - Learn optimization techniques for production workloads +- [Job queues](https://docs.aws.amazon.com/batch/latest/userguide/job_queues.html) - Learn about job queue scheduling and priority management +- [Job definitions](https://docs.aws.amazon.com/batch/latest/userguide/job_definitions.html) - Explore advanced job definition configurations including environment variables, volumes, and retry strategies +- [Compute environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) - Understand different compute environment types and scaling options +- [Multi-node parallel jobs](https://docs.aws.amazon.com/batch/latest/userguide/multi-node-parallel-jobs.html) - Run jobs that span multiple compute nodes +- [Array jobs](https://docs.aws.amazon.com/batch/latest/userguide/array_jobs.html) - Submit large numbers of similar jobs efficiently +- [Best practices](https://docs.aws.amazon.com/batch/latest/userguide/best-practices.html) - Learn optimization techniques for production workloads ## Security Considerations diff --git a/tuts/012-transitgateway-gettingstarted/README.md b/tuts/012-transitgateway-gettingstarted/README.md index 4fb40f2..98abdaf 100644 --- a/tuts/012-transitgateway-gettingstarted/README.md +++ b/tuts/012-transitgateway-gettingstarted/README.md @@ -8,16 +8,16 @@ You can either run the automated 
script `transitgateway-gettingstarted.sh` to ex The script creates the following AWS resources in order: -• EC2 vpc -• EC2 subnet -• EC2 subnet (b) -• EC2 vpc (b) -• EC2 subnet (c) -• EC2 subnet (d) -• EC2 transit gateway -• EC2 transit gateway vpc attachment -• EC2 transit gateway vpc attachment (b) -• EC2 route -• EC2 route (b) +- EC2 vpc +- EC2 subnet +- EC2 subnet (b) +- EC2 vpc (b) +- EC2 subnet (c) +- EC2 subnet (d) +- EC2 transit gateway +- EC2 transit gateway vpc attachment +- EC2 transit gateway vpc attachment (b) +- EC2 route +- EC2 route (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/013-ec2-basics/README.md b/tuts/013-ec2-basics/README.md index e5e5625..fc8a0bb 100644 --- a/tuts/013-ec2-basics/README.md +++ b/tuts/013-ec2-basics/README.md @@ -8,12 +8,12 @@ You can either run the automated script `ec2-basics.sh` to execute all the steps The script creates the following AWS resources in order: -• EC2 key pair -• EC2 security group -• EC2 instances -• EC2 instances (b) -• EC2 address -• EC2 address (b) -• EC2 instances (c) +- EC2 key pair +- EC2 security group +- EC2 instances +- EC2 instances (b) +- EC2 address +- EC2 address (b) +- EC2 instances (c) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/015-vpc-peering/README.md b/tuts/015-vpc-peering/README.md index 67693d4..945d71d 100644 --- a/tuts/015-vpc-peering/README.md +++ b/tuts/015-vpc-peering/README.md @@ -8,17 +8,17 @@ You can run the shell script to automatically create the VPC peering infrastruct The script creates the following AWS resources in order: -• EC2 vpc -• EC2 vpc (b) -• EC2 vpc (c) -• EC2 subnet -• EC2 subnet (b) -• EC2 vpc peering connection -• EC2 route table -• EC2 route -• EC2 route table (b) -• EC2 route table (c) -• EC2 route (b) -• EC2 route table (d) +- EC2 vpc +- EC2 vpc (b) +- EC2 vpc (c) +- EC2 subnet +- EC2 subnet (b) +- EC2 vpc peering connection +- EC2 route table +- EC2 route +- EC2 route table (b) +- EC2 route table (c) +- EC2 route (b) +- EC2 route table (d) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/016-opensearch-service-gs/README.md b/tuts/016-opensearch-service-gs/README.md index 568be1d..a706d3f 100644 --- a/tuts/016-opensearch-service-gs/README.md +++ b/tuts/016-opensearch-service-gs/README.md @@ -8,6 +8,6 @@ You can run the shell script to automatically set up the OpenSearch Service doma The script creates the following AWS resources in order: -• OpenSearch domain +- OpenSearch domain The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/016-opensearch-service-gs/opensearch-service-gs.md index debf80a..9c774f3 100644 --- a/tuts/016-opensearch-service-gs/opensearch-service-gs.md +++ b/tuts/016-opensearch-service-gs/opensearch-service-gs.md @@ -79,7 +79,7 @@ Make note of this endpoint as you'll need it for the next steps. ## Upload data to your domain -Once your domain is active, you can upload data to it. In this section, you'll upload documents using the master user authentication method. +Once your domain is active, you can upload data to it. In this section, you'll upload documents using the master user authentication method. You'll upload a single document, then upload multiple documents in bulk. ### Verify variables are set correctly diff --git a/tuts/018-ecs-ec2/README.md index 9006bba..05717e5 100644 --- a/tuts/018-ecs-ec2/README.md +++ b/tuts/018-ecs-ec2/README.md @@ -8,14 +8,14 @@ You can either run the automated shell script (`ecs-ec2-getting-started.sh`) to The script creates the following AWS resources in order: -• ECS cluster -• EC2 key pair -• EC2 security group -• IAM role -• IAM role policy -• IAM instance profile -• EC2 instances -• ECS task definition -• ECS service +- ECS cluster +- EC2 key pair +- EC2 security group +- IAM role +- IAM role policy +- IAM instance profile +- EC2 instances +- ECS task definition +- ECS service The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created.
\ No newline at end of file diff --git a/tuts/018-ecs-ec2/ecs-ec2-getting-started.md b/tuts/018-ecs-ec2/ecs-ec2-getting-started.md index 0384e7b..5cbb809 100644 --- a/tuts/018-ecs-ec2/ecs-ec2-getting-started.md +++ b/tuts/018-ecs-ec2/ecs-ec2-getting-started.md @@ -4,13 +4,13 @@ This tutorial guides you through setting up an Amazon Elastic Container Service ## Topics -* [Prerequisites](#prerequisites) -* [Create an ECS cluster](#create-an-ecs-cluster) -* [Launch a container instance](#launch-a-container-instance) -* [Register a task definition](#register-a-task-definition) -* [Create and monitor a service](#create-and-monitor-a-service) -* [Clean up resources](#clean-up-resources) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create an ECS cluster](#create-an-ecs-cluster) +- [Launch a container instance](#launch-a-container-instance) +- [Register a task definition](#register-a-task-definition) +- [Create and monitor a service](#create-and-monitor-a-service) +- [Clean up resources](#clean-up-resources) +- [Next steps](#next-steps) ## Prerequisites @@ -22,7 +22,7 @@ Before you begin this tutorial, make sure you have the following. 4. An AWS account with permissions to create and manage ECS, EC2, and IAM resources. Your IAM user should have the [AmazonECS_FullAccess](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonECS_FullAccess) policy attached. 5. A default VPC in your AWS account. If you don't have one, you can [create a VPC](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html#create-a-vpc) using the Amazon VPC console. -Before you start, verify your AWS CLI configuration. +Before you start, verify your AWS CLI configuration. ``` $ aws sts get-caller-identity @@ -685,9 +685,9 @@ All resources have been successfully cleaned up. 
Now that you've learned how to create and manage Amazon ECS services with the EC2 launch type, you can explore more advanced features: -* [Use Application Load Balancers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html) to distribute traffic across multiple tasks in your service -* [Configure auto scaling](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html) to automatically adjust the number of running tasks based on demand -* [Set up CloudWatch logging](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) to collect and monitor logs from your containers -* [Use Amazon ECR](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECR_on_ECS.html) to store and manage your container images -* [Deploy multi-container applications](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) using more complex task definitions -* [Configure service discovery](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html) to enable services to find and communicate with each other +- [Use Application Load Balancers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html) to distribute traffic across multiple tasks in your service +- [Configure auto scaling](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html) to automatically adjust the number of running tasks based on demand +- [Set up CloudWatch logging](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) to collect and monitor logs from your containers +- [Use Amazon ECR](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECR_on_ECS.html) to store and manage your container images +- [Deploy multi-container applications](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) using more complex task definitions +- [Configure service 
discovery](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html) to enable services to find and communicate with each other diff --git a/tuts/019-lambda-gettingstarted/README.md b/tuts/019-lambda-gettingstarted/README.md index 23e5974..505fb14 100644 --- a/tuts/019-lambda-gettingstarted/README.md +++ b/tuts/019-lambda-gettingstarted/README.md @@ -8,8 +8,8 @@ You can either run the automated script `lambda-gettingstarted.sh` to execute al The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• Lambda function +- IAM role +- IAM role policy +- Lambda function The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/019-lambda-gettingstarted/lambda-gettingstarted.md b/tuts/019-lambda-gettingstarted/lambda-gettingstarted.md index 1a8c37e..2f05441 100644 --- a/tuts/019-lambda-gettingstarted/lambda-gettingstarted.md +++ b/tuts/019-lambda-gettingstarted/lambda-gettingstarted.md @@ -86,19 +86,19 @@ Create a file named `index.mjs` with the following content: ```javascript export const handler = async (event, context) => { - + const length = event.length; const width = event.width; let area = calculateArea(length, width); console.log(`The area is ${area}`); - + console.log('CloudWatch log group: ', context.logGroupName); - + let data = { "area": area, }; return JSON.stringify(data); - + function calculateArea(length, width) { return length * width; } @@ -119,20 +119,20 @@ logger = logging.getLogger() logger.setLevel(logging.INFO) def lambda_handler(event, context): - + # Get the length and width parameters from the event object length = event['length'] width = event['width'] - + area = calculate_area(length, width) print(f"The area is {area}") - + logger.info(f"CloudWatch logs group: 
{context.log_group_name}") - + # return the calculated area as a JSON string data = {"area": area} return json.dumps(data) - + def calculate_area(length, width): return length*width ``` diff --git a/tuts/020-ebs-gs-volumes/README.md b/tuts/020-ebs-gs-volumes/README.md index da16b51..274efa9 100644 --- a/tuts/020-ebs-gs-volumes/README.md +++ b/tuts/020-ebs-gs-volumes/README.md @@ -8,9 +8,9 @@ You can either run the automated shell script (`ebs-gs-volumes.sh`) to quickly s The script creates the following AWS resources in order: -• EC2 volume -• EC2 security group -• EC2 instances -• EC2 volume (b) +- EC2 volume +- EC2 security group +- EC2 instances +- EC2 volume (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/021-cloudformation-gs/README.md b/tuts/021-cloudformation-gs/README.md index 9700d92..eee5438 100644 --- a/tuts/021-cloudformation-gs/README.md +++ b/tuts/021-cloudformation-gs/README.md @@ -8,6 +8,6 @@ You can either run the automated script `cloudformation-gs.sh` to execute all th The script creates the following AWS resources in order: -• CloudFormation stack +- CloudFormation stack The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/021-cloudformation-gs/cloudformation-gs.md b/tuts/021-cloudformation-gs/cloudformation-gs.md index 32a9503..bc7e726 100644 --- a/tuts/021-cloudformation-gs/cloudformation-gs.md +++ b/tuts/021-cloudformation-gs/cloudformation-gs.md @@ -6,16 +6,16 @@ This tutorial walks you through creating your first CloudFormation stack using t ## Topics -* [Prerequisites](#prerequisites) -* [Create a CloudFormation template](#create-a-cloudformation-template) -* [Validate and deploy the template](#validate-and-deploy-the-template) -* [Monitor stack creation](#monitor-stack-creation) -* [View stack resources and outputs](#view-stack-resources-and-outputs) -* [Test the web server](#test-the-web-server) -* [Troubleshoot common issues](#troubleshoot-common-issues) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create a CloudFormation template](#create-a-cloudformation-template) +- [Validate and deploy the template](#validate-and-deploy-the-template) +- [Monitor stack creation](#monitor-stack-creation) +- [View stack resources and outputs](#view-stack-resources-and-outputs) +- [Test the web server](#test-the-web-server) +- [Troubleshoot common issues](#troubleshoot-common-issues) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -56,7 +56,7 @@ Parameters: - t3.micro - t2.micro ConstraintDescription: must be a valid EC2 instance type. - + MyIP: Description: Your IP address in CIDR format (e.g. 203.0.113.1/32). Type: String @@ -103,9 +103,9 @@ Outputs: This template defines a simple web server infrastructure with the following components: -* **Parameters**: Values that can be passed to the template when creating the stack, including the AMI ID, instance type, and your IP address. 
-* **Resources**: The AWS resources to create, including a security group that allows HTTP access from your IP address and an EC2 instance running Apache HTTP Server. -* **Outputs**: Values that are returned after the stack is created, including the URL of the web server. +- **Parameters**: Values that can be passed to the template when creating the stack, including the AMI ID, instance type, and your IP address. +- **Resources**: The AWS resources to create, including a security group that allows HTTP access from your IP address and an EC2 instance running Apache HTTP Server. +- **Outputs**: Values that are returned after the stack is created, including the URL of the web server. Note that we're using Amazon Linux 2023, the latest version of Amazon Linux, which includes several improvements over Amazon Linux 2. diff --git a/tuts/022-ebs-intermediate/README.md b/tuts/022-ebs-intermediate/README.md index 9a271f6..094373e 100644 --- a/tuts/022-ebs-intermediate/README.md +++ b/tuts/022-ebs-intermediate/README.md @@ -8,8 +8,8 @@ You can run the shell script to automatically create and configure advanced EBS The script creates the following AWS resources in order: -• EC2 volume -• EC2 snapshot -• EC2 volume (b) +- EC2 volume +- EC2 snapshot +- EC2 volume (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/022-ebs-intermediate/ebs-intermediate.md b/tuts/022-ebs-intermediate/ebs-intermediate.md index 9fcc543..ea09d80 100644 --- a/tuts/022-ebs-intermediate/ebs-intermediate.md +++ b/tuts/022-ebs-intermediate/ebs-intermediate.md @@ -4,13 +4,13 @@ This tutorial guides you through essential Amazon EBS operations using the AWS C ## Topics -* [Prerequisites](#prerequisites) -* [Enable Amazon EBS encryption by default](#enable-amazon-ebs-encryption-by-default) -* [Create an EBS snapshot](#create-an-ebs-snapshot) -* [Create and initialize a volume from a snapshot](#create-and-initialize-a-volume-from-a-snapshot) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Enable Amazon EBS encryption by default](#enable-amazon-ebs-encryption-by-default) +- [Create an EBS snapshot](#create-an-ebs-snapshot) +- [Create and initialize a volume from a snapshot](#create-and-initialize-a-volume-from-a-snapshot) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -119,7 +119,7 @@ $ VOLUME_ID=$(aws ec2 create-volume --availability-zone $AVAILABILITY_ZONE --siz $ echo $VOLUME_ID ``` -An example response is as follows: +An example response is as follows: ``` vol-abcd1234 ``` @@ -292,7 +292,7 @@ aws ec2 describe-volumes --volume-ids $NEW_VOLUME_ID \ ### Step 3: Connect to the instance and find the device name #### 3.1: Ensure the instance has the required IAM role for Systems Manager -Once the above steps are complete, you can connect using AWS Systems Manager Session Manager (no SSH key required). +Once the above steps are complete, you can connect using AWS Systems Manager Session Manager (no SSH key required). 
First, configure the EC2 instance to have an IAM role with Systems Manager permissions: ``` @@ -433,7 +433,7 @@ sudo fio --filename=$DEVICE_NAME --rw=read --bs=1M --iodepth=8 --ioengine=libaio tail -f /tmp/fio-init.log ``` -To exit from the instance session, type the following: +To exit from the instance session, type the following: ``` exit ``` @@ -552,8 +552,8 @@ For more information on building production-ready solutions, refer to: Now that you've learned how to work with Amazon EBS encryption, snapshots, and volume initialization, you might want to explore these related topics: -* [Amazon EBS volume types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html) -* [Amazon EBS fast snapshot restore](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html) -* [Amazon Data Lifecycle Manager](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html) -* [Amazon EBS encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) -* [Amazon EBS performance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) +- [Amazon EBS volume types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html) +- [Amazon EBS fast snapshot restore](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html) +- [Amazon Data Lifecycle Manager](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html) +- [Amazon EBS encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) +- [Amazon EBS performance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) diff --git a/tuts/024-glue-gs/README.md b/tuts/024-glue-gs/README.md index 53a4130..0dc1ad6 100644 --- a/tuts/024-glue-gs/README.md +++ b/tuts/024-glue-gs/README.md @@ -8,7 +8,7 @@ You can either run the automated script `glue-gs.sh` to execute all operations a The script creates the following AWS resources in order: -• Glue database 
-• Glue table +- Glue database +- Glue table The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/024-glue-gs/glue-gs.md b/tuts/024-glue-gs/glue-gs.md index 01b38a3..d56640d 100644 --- a/tuts/024-glue-gs/glue-gs.md +++ b/tuts/024-glue-gs/glue-gs.md @@ -4,13 +4,13 @@ This tutorial guides you through creating and managing AWS Glue Data Catalog res ## Topics -* [Prerequisites](#prerequisites) -* [Create a database](#create-a-database) -* [Create a table](#create-a-table) -* [Explore the Data Catalog](#explore-the-data-catalog) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create a database](#create-a-database) +- [Create a table](#create-a-table) +- [Explore the Data Catalog](#explore-the-data-catalog) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -441,11 +441,11 @@ For more information on building production-ready solutions, refer to: Now that you've learned how to create and manage AWS Glue Data Catalog resources using the AWS CLI, you can explore more advanced features: -* [Create and run a crawler](https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html) to automatically discover and catalog data -* [Create ETL jobs](https://docs.aws.amazon.com/glue/latest/dg/author-job-glue.html) to transform your data -* [Set up triggers](https://docs.aws.amazon.com/glue/latest/dg/trigger-job.html) to automate your ETL workflows -* [Use the AWS Glue Schema Registry](https://docs.aws.amazon.com/glue/latest/dg/schema-registry.html) to manage and enforce schemas for your data -* [Integrate with AWS Lake 
Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/what-is-lake-formation.html) for fine-grained access control +- [Create and run a crawler](https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html) to automatically discover and catalog data +- [Create ETL jobs](https://docs.aws.amazon.com/glue/latest/dg/author-job-glue.html) to transform your data +- [Set up triggers](https://docs.aws.amazon.com/glue/latest/dg/trigger-job.html) to automate your ETL workflows +- [Use the AWS Glue Schema Registry](https://docs.aws.amazon.com/glue/latest/dg/schema-registry.html) to manage and enforce schemas for your data +- [Integrate with AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/what-is-lake-formation.html) for fine-grained access control ## Troubleshooting diff --git a/tuts/025-documentdb-gs/README.md b/tuts/025-documentdb-gs/README.md index f210ab2..c85a46a 100644 --- a/tuts/025-documentdb-gs/README.md +++ b/tuts/025-documentdb-gs/README.md @@ -8,9 +8,9 @@ You can run the shell script to automatically set up the Amazon DocumentDB clust The script creates the following AWS resources in order: -• Secrets Manager secret -• Docdb db subnet group -• Docdb db cluster -• Docdb db instance +- Secrets Manager secret +- Docdb db subnet group +- Docdb db cluster +- Docdb db instance The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/025-documentdb-gs/documentdb-gs.md b/tuts/025-documentdb-gs/documentdb-gs.md index a10847a..118cb37 100644 --- a/tuts/025-documentdb-gs/documentdb-gs.md +++ b/tuts/025-documentdb-gs/documentdb-gs.md @@ -4,16 +4,16 @@ This tutorial guides you through the process of creating and using an Amazon Doc ## Topics -* [Prerequisites](#prerequisites) -* [Create a DB subnet group](#create-a-db-subnet-group) -* [Create a DocumentDB cluster](#create-a-documentdb-cluster) -* [Create a DocumentDB instance](#create-a-documentdb-instance) -* [Configure security and connectivity](#configure-security-and-connectivity) -* [Connect to your cluster](#connect-to-your-cluster) -* [Perform database operations](#perform-database-operations) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create a DB subnet group](#create-a-db-subnet-group) +- [Create a DocumentDB cluster](#create-a-documentdb-cluster) +- [Create a DocumentDB instance](#create-a-documentdb-instance) +- [Configure security and connectivity](#configure-security-and-connectivity) +- [Connect to your cluster](#connect-to-your-cluster) +- [Perform database operations](#perform-database-operations) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -41,7 +41,7 @@ First, identify your default VPC: aws ec2 describe-vpcs --filters "Name=isDefault,Values=true" --query "Vpcs[0].VpcId" --output text ``` -This command returns the ID of your default VPC. Next, find subnets in this VPC. Replace `vpc-abcd1234` with your actual VPC ID. +This command returns the ID of your default VPC. Next, find subnets in this VPC. Replace `vpc-abcd1234` with your actual VPC ID. 
```bash aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-abcd1234" --query "Subnets[*].[SubnetId,AvailabilityZone]" --output text @@ -51,7 +51,7 @@ The output will show subnet IDs and their Availability Zones. You'll need to sel **Create the DB subnet group** -Now, create a DB subnet group using subnets from different Availability Zones. Replace `subnet-abcd1234` and `subnet-efgh5678` with actual subnet IDs from different Availability Zones. +Now, create a DB subnet group using subnets from different Availability Zones. Replace `subnet-abcd1234` and `subnet-efgh5678` with actual subnet IDs from different Availability Zones. ```bash aws docdb create-db-subnet-group \ @@ -97,7 +97,7 @@ With the subnet group in place, you can now create your DocumentDB cluster. **Store credentials securely** -For better security, let's store our database credentials in AWS Secrets Manager instead of using them directly in commands. Replace `YourStrongPassword123!` with a secure password of your choice. +For better security, let's store our database credentials in AWS Secrets Manager instead of using them directly in commands. Replace `YourStrongPassword123!` with a secure password of your choice. ```bash aws secretsmanager create-secret \ @@ -110,7 +110,7 @@ This command stores your credentials securely and returns information about the **Create the cluster** -The following command creates a DocumentDB cluster with version 5.0.0. Replace `YourStrongPassword123!` with the same password you stored in Secrets Manager. +The following command creates a DocumentDB cluster with version 5.0.0. Replace `YourStrongPassword123!` with the same password you stored in Secrets Manager. ```bash aws docdb create-db-cluster \ @@ -235,7 +235,7 @@ You should see the certificate file in the output. ## Connect to your cluster -Since DocumentDB clusters are only accessible from within the VPC, we'll use AWS Systems Manager Session Manager to connect through the EC2 instance. 
+Since DocumentDB clusters are only accessible from within the VPC, we'll use AWS Systems Manager Session Manager to connect through the EC2 instance. **Get your EC2 Instance ID** Find the Instance ID of the EC2 instance that's created before: @@ -244,9 +244,9 @@ aws ec2 describe-instances \ --filters "Name=tag:Name,Values=DocumentDB-Tutorial-Instance" \ --query "Reservations[0].Instances[0].InstanceId" \ --output text -``` +``` -Save the instance ID for the following use. +Save the instance ID to use in the following steps. **Start Session Manager** Replace `YOUR_INSTANCE_ID` with the actual Instance ID from the previous step. ```bash aws ssm start-session --target YOUR_INSTANCE_ID ``` **Set up certificate for SSM user** -Once in the session (you'll see `sh-4.2$` prompt), run the following commands: +Once in the session (you'll see the `sh-4.2$` prompt), run the following commands: ```bash sudo mkdir -p /home/ssm-user/certs sudo cp /root/certs/global-bundle.pem /home/ssm-user/certs/ sudo chown ssm-user:ssm-user /home/ssm-user/certs/global-bundle.pem ``` **Connect to MongoDB Shell** -Use the following command to connect to your cluster. Replace `/home/ssm-user/certs/global-bundle.pem` with the certificate path that you created in the previous step. Replace the host with your actual cluster endpoint and the password with your actual password. +Use the following command to connect to your cluster. Replace `/home/ssm-user/certs/global-bundle.pem` with the certificate path that you created in the previous step. Replace the host with your actual cluster endpoint and the password with your actual password. 
```bash mongosh --tls --tlsCAFile /home/ssm-user/certs/global-bundle.pem \ @@ -315,9 +315,9 @@ Insert multiple documents at once: ```javascript db.profiles.insertMany([ - { _id: 1, name: 'Matt', status: 'active', level: 12, score: 202 }, - { _id: 2, name: 'Frank', status: 'inactive', level: 2, score: 9 }, - { _id: 3, name: 'Karen', status: 'active', level: 7, score: 87 }, + { _id: 1, name: 'Matt', status: 'active', level: 12, score: 202 }, + { _id: 2, name: 'Frank', status: 'inactive', level: 2, score: 9 }, + { _id: 3, name: 'Karen', status: 'active', level: 7, score: 87 }, { _id: 4, name: 'Katie', status: 'active', level: 3, score: 27 } ]) ``` diff --git a/tuts/027-connect-gs/README.md b/tuts/027-connect-gs/README.md index c913b0b..8ce065a 100644 --- a/tuts/027-connect-gs/README.md +++ b/tuts/027-connect-gs/README.md @@ -8,7 +8,7 @@ You can either run the automated script `connect-gs.sh` to execute all operation The script creates the following AWS resources in order: -• Connect instance -• Connect user +- Connect instance +- Connect user The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/027-connect-gs/connect-gs.md b/tuts/027-connect-gs/connect-gs.md index 5be5de4..d617160 100644 --- a/tuts/027-connect-gs/connect-gs.md +++ b/tuts/027-connect-gs/connect-gs.md @@ -6,21 +6,21 @@ Set up a cloud-based contact center with Amazon Connect Before you begin this tutorial, you need: -* An AWS account with permissions to create Amazon Connect resources -* The AWS CLI installed and configured. For installation instructions, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). 
-* The `AmazonConnect_FullAccess` managed policy attached to your IAM user or role (for production environments, consider using more restrictive permissions) -* Basic familiarity with command line interfaces and JSON formatting -* Approximately 15-20 minutes to complete the tutorial +- An AWS account with permissions to create Amazon Connect resources +- The AWS CLI installed and configured. For installation instructions, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +- The `AmazonConnect_FullAccess` managed policy attached to your IAM user or role (for production environments, consider using more restrictive permissions) +- Basic familiarity with command line interfaces and JSON formatting +- Approximately 15-20 minutes to complete the tutorial ## Cost estimate This tutorial creates resources that may incur charges to your AWS account: -* Amazon Connect phone number: $1.00 per month for a toll-free number in the US -* No charges for the Amazon Connect instance itself -* No charges for creating users or configuring the instance -* S3 storage for call recordings and chat transcripts: Standard S3 rates apply (approximately $0.023 per GB per month) -* KMS key usage for encryption: $1.00 per month per key plus $0.03 per 10,000 API requests +- Amazon Connect phone number: $1.00 per month for a toll-free number in the US +- No charges for the Amazon Connect instance itself +- No charges for creating users or configuring the instance +- S3 storage for call recordings and chat transcripts: Standard S3 rates apply (approximately $0.023 per GB per month) +- KMS key usage for encryption: $1.00 per month per key plus $0.03 per 10,000 API requests Total estimated cost: Less than $0.01 for completing the tutorial if you clean up resources afterward. If you keep the resources running, expect to pay approximately $1.00 per month for the phone number plus any applicable storage costs. 
@@ -352,9 +352,9 @@ Deleting the instance will also delete all associated resources, including users Now that you've created an Amazon Connect instance, you can explore additional features: -* [Set up contact flows](https://docs.aws.amazon.com/connect/latest/adminguide/contact-flow.html) to define how contacts are handled in your contact center -* [Configure queues](https://docs.aws.amazon.com/connect/latest/adminguide/create-queue.html) to manage how contacts are distributed to agents -* [Set up quick connects](https://docs.aws.amazon.com/connect/latest/adminguide/quick-connects.html) to enable agents to transfer contacts to specific destinations -* [Enable contact recording](https://docs.aws.amazon.com/connect/latest/adminguide/set-up-recordings.html) to record customer interactions for quality assurance -* [Integrate with Amazon Lex](https://docs.aws.amazon.com/connect/latest/adminguide/amazon-lex.html) to add chatbots to your contact center -* [Set up real-time and historical metrics](https://docs.aws.amazon.com/connect/latest/adminguide/real-time-metrics-reports.html) to monitor your contact center performance +- [Set up contact flows](https://docs.aws.amazon.com/connect/latest/adminguide/contact-flow.html) to define how contacts are handled in your contact center +- [Configure queues](https://docs.aws.amazon.com/connect/latest/adminguide/create-queue.html) to manage how contacts are distributed to agents +- [Set up quick connects](https://docs.aws.amazon.com/connect/latest/adminguide/quick-connects.html) to enable agents to transfer contacts to specific destinations +- [Enable contact recording](https://docs.aws.amazon.com/connect/latest/adminguide/set-up-recordings.html) to record customer interactions for quality assurance +- [Integrate with Amazon Lex](https://docs.aws.amazon.com/connect/latest/adminguide/amazon-lex.html) to add chatbots to your contact center +- [Set up real-time and historical 
metrics](https://docs.aws.amazon.com/connect/latest/adminguide/real-time-metrics-reports.html) to monitor your contact center performance diff --git a/tuts/028-sagemaker-featurestore/README.md b/tuts/028-sagemaker-featurestore/README.md index 5b46791..564efe3 100644 --- a/tuts/028-sagemaker-featurestore/README.md +++ b/tuts/028-sagemaker-featurestore/README.md @@ -8,17 +8,17 @@ You can run the shell script to automatically set up the SageMaker Feature Store The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• IAM role policy (b) -• S3 bucket -• S3 bucket (b) -• S3 public access block -• SageMaker feature group -• SageMaker feature group (b) -• Sagemaker-Featurestore-Runtime record -• Sagemaker-Featurestore-Runtime record (b) -• Sagemaker-Featurestore-Runtime record (c) -• Sagemaker-Featurestore-Runtime record (d) +- IAM role +- IAM role policy +- IAM role policy (b) +- S3 bucket +- S3 bucket (b) +- S3 public access block +- SageMaker feature group +- SageMaker feature group (b) +- Sagemaker-Featurestore-Runtime record +- Sagemaker-Featurestore-Runtime record (b) +- Sagemaker-Featurestore-Runtime record (c) +- Sagemaker-Featurestore-Runtime record (d) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/028-sagemaker-featurestore/sagemaker-featurestore.md b/tuts/028-sagemaker-featurestore/sagemaker-featurestore.md index ef4fd36..c2f4c97 100644 --- a/tuts/028-sagemaker-featurestore/sagemaker-featurestore.md +++ b/tuts/028-sagemaker-featurestore/sagemaker-featurestore.md @@ -4,15 +4,15 @@ This tutorial guides you through the process of using Amazon SageMaker Feature S ## Topics -* [Prerequisites](#prerequisites) -* [Set up IAM permissions](#set-up-iam-permissions) -* [Create a SageMaker execution role](#create-a-sagemaker-execution-role) -* [Create feature groups](#create-feature-groups) -* [Ingest data into feature groups](#ingest-data-into-feature-groups) -* [Retrieve records from feature groups](#retrieve-records-from-feature-groups) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Set up IAM permissions](#set-up-iam-permissions) +- [Create a SageMaker execution role](#create-a-sagemaker-execution-role) +- [Create feature groups](#create-feature-groups) +- [Ingest data into feature groups](#ingest-data-into-feature-groups) +- [Retrieve records from feature groups](#retrieve-records-from-feature-groups) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -72,7 +72,7 @@ aws iam create-policy \ "s3:ListBucket", "s3:GetBucketAcl", "s3:GetBucketLocation", - "s3:GetBucketVersioning" + "s3:GetBucketVersioning" ], "Resource": [ "arn:aws:s3:::amzndemo-s3-demo-bucket/*", @@ -90,7 +90,7 @@ aws iam create-policy \ "glue:DeletePartition" ], "Resource": "*" - } + } ] }' ``` @@ -135,7 +135,7 @@ aws iam create-role \ **Attach the policy to your role** -After creating the policy, attach it to the SageMaker execution role. 
Replace `YourSageMakerExecutionRole` with the name of your SageMaker execution role and `123456789012` with your AWS account ID. +After creating the policy, attach it to the SageMaker execution role. Replace `YourSageMakerExecutionRole` with the name of your SageMaker execution role and `123456789012` with your AWS account ID. ``` aws iam attach-role-policy \ @@ -627,7 +627,7 @@ The following commands delete the SageMaker execution role that's created for th Note: Replace `123456789012` with your account ID. ``` -# Delete the custom policy +# Delete the custom policy aws iam detach-role-policy \ --role-name YourSageMakerExecutionRole \ --policy-arn "arn:aws:iam::123456789012:policy/SageMakerFeatureStorePolicy" diff --git a/tuts/030-marketplace-buyer-gs/README.md b/tuts/030-marketplace-buyer-gs/README.md index 8cff250..34f70bd 100644 --- a/tuts/030-marketplace-buyer-gs/README.md +++ b/tuts/030-marketplace-buyer-gs/README.md @@ -8,8 +8,8 @@ You can either run the automated shell script (`marketplace-buyer-getting-starte The script creates the following AWS resources in order: -• EC2 key pair -• EC2 security group -• EC2 instances +- EC2 key pair +- EC2 security group +- EC2 instances The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/030-marketplace-buyer-gs/marketplace-buyer-getting-started.md b/tuts/030-marketplace-buyer-gs/marketplace-buyer-getting-started.md index 04a9948..017226a 100644 --- a/tuts/030-marketplace-buyer-gs/marketplace-buyer-getting-started.md +++ b/tuts/030-marketplace-buyer-gs/marketplace-buyer-getting-started.md @@ -6,14 +6,14 @@ This tutorial guides you through common AWS Marketplace buyer operations using t ## Topics -* [Prerequisites](#prerequisites) -* [Searching for products](#searching-for-products) -* [Creating resources for your instance](#creating-resources-for-your-instance) -* [Launching an instance](#launching-an-instance) -* [Managing your software](#managing-your-software) -* [Cleaning up resources](#cleaning-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Searching for products](#searching-for-products) +- [Creating resources for your instance](#creating-resources-for-your-instance) +- [Launching an instance](#launching-an-instance) +- [Managing your software](#managing-your-software) +- [Cleaning up resources](#cleaning-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -202,7 +202,7 @@ Replace `your-instance-public-dns` with the actual public DNS name of your insta ## Managing your instances -After launching your instance, you can monitor it. +After launching your instance, you can monitor it. 
**Monitor your EC2 instances** diff --git a/tuts/031-cloudwatch-dynamicdash/README.md b/tuts/031-cloudwatch-dynamicdash/README.md index ecee8a5..e01172c 100644 --- a/tuts/031-cloudwatch-dynamicdash/README.md +++ b/tuts/031-cloudwatch-dynamicdash/README.md @@ -8,9 +8,9 @@ You can either run the provided shell script to automatically create the dynamic The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• Lambda function -• CloudWatch dashboard +- IAM role +- IAM role policy +- Lambda function +- CloudWatch dashboard The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/032-cloudwatch-streams/README.md b/tuts/032-cloudwatch-streams/README.md index bef19b6..373670f 100644 --- a/tuts/032-cloudwatch-streams/README.md +++ b/tuts/032-cloudwatch-streams/README.md @@ -8,10 +8,10 @@ You can run the shell script to automatically set up the CloudWatch log resource The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• Lambda function -• Lambda function (b) -• CloudWatch dashboard +- IAM role +- IAM role policy +- Lambda function +- Lambda function (b) +- CloudWatch dashboard The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/032-cloudwatch-streams/cloudwatch-streams.md b/tuts/032-cloudwatch-streams/cloudwatch-streams.md index 92d84bb..47828cf 100644 --- a/tuts/032-cloudwatch-streams/cloudwatch-streams.md +++ b/tuts/032-cloudwatch-streams/cloudwatch-streams.md @@ -4,13 +4,13 @@ This tutorial guides you through creating a CloudWatch dashboard that uses a pro ## Topics -* [Prerequisites](#prerequisites) -* [Create Lambda functions for monitoring](#create-lambda-functions-for-monitoring) -* [Create a CloudWatch dashboard](#create-a-cloudwatch-dashboard) -* [Add a property variable to the dashboard](#add-a-property-variable-to-the-dashboard) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create Lambda functions for monitoring](#create-lambda-functions-for-monitoring) +- [Create a CloudWatch dashboard](#create-a-cloudwatch-dashboard) +- [Add a property variable to the dashboard](#add-a-property-variable-to-the-dashboard) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -128,7 +128,7 @@ Now that you have Lambda functions with metrics, you can create a CloudWatch das **Create a basic dashboard** -First, create a simple dashboard with a widget showing Lambda invocation metrics. +First, create a simple dashboard with a widget showing Lambda invocation metrics. Note: The `region` property in the widget configuration should match the AWS Region where your Lambda function is deployed. In this example, we use "us-west-2" as the target Region. 
@@ -151,13 +151,13 @@ cat > dashboard-body.json << EOF ], "view": "timeSeries", "stacked": false, - "region": "us-west-2", + "region": "us-west-2", "title": "Lambda Invocations", "period": 300, "stat": "Sum", "annotations": { "horizontal": [] - } + } } } ] @@ -207,7 +207,7 @@ After completing these steps, your dashboard will have a dropdown menu at the to **Add more widgets that use the variable** -Once you've added the property variable through the console, you can add more widgets that use the same variable. For example, you might want to add widgets for errors and duration metrics as follows. +Once you've added the property variable through the console, you can add more widgets that use the same variable. For example, you might want to add widgets for errors and duration metrics as follows. Note: Set the `region` property to the AWS Region where your Lambda functions are located. In this example, we use "us-west-2". @@ -227,13 +227,13 @@ cat > dashboard-body-updated.json << EOF ], "view": "timeSeries", "stacked": false, - "region": "us-west-2", + "region": "us-west-2", "title": "Lambda Invocations for \${functionName}", "period": 300, "stat": "Sum", - "annotations": { + "annotations": { "horizontal": [] - } + } } }, { @@ -248,7 +248,7 @@ cat > dashboard-body-updated.json << EOF ], "view": "timeSeries", "stacked": false, - "region": "us-west-2", + "region": "us-west-2", "title": "Lambda Errors for \${functionName}", "period": 300, "stat": "Sum", @@ -269,7 +269,7 @@ cat > dashboard-body-updated.json << EOF ], "view": "timeSeries", "stacked": false, - "region": "us-west-2", + "region": "us-west-2", "title": "Lambda Duration for \${functionName}", "period": 300, "stat": "Average", diff --git a/tuts/033-ses-gs/README.md b/tuts/033-ses-gs/README.md index 98472bd..ccc56d3 100644 --- a/tuts/033-ses-gs/README.md +++ b/tuts/033-ses-gs/README.md @@ -8,8 +8,8 @@ You can run the shell script to automatically configure the Amazon SES resources The script creates the following AWS 
resources in order: -• SES email identity verification -• SES domain identity verification (optional) -• SES DKIM setup (optional) +- SES email identity verification +- SES domain identity verification (optional) +- SES DKIM setup (optional) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/034-eks-gs/README.md b/tuts/034-eks-gs/README.md index 62fe204..b4235d7 100644 --- a/tuts/034-eks-gs/README.md +++ b/tuts/034-eks-gs/README.md @@ -8,14 +8,14 @@ You can either run the provided shell script to automatically set up your EKS cl The script creates the following AWS resources in order: -• CloudFormation stack -• IAM role -• IAM role policy -• IAM role (b) -• IAM role policy (b) -• IAM role policy (c) -• IAM role policy (d) -• EKS cluster -• EKS nodegroup +- CloudFormation stack +- IAM role +- IAM role policy +- IAM role (b) +- IAM role policy (b) +- IAM role policy (c) +- IAM role policy (d) +- EKS cluster +- EKS nodegroup The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/034-eks-gs/eks-gs.md b/tuts/034-eks-gs/eks-gs.md index 8e2e77b..f4e5be9 100644 --- a/tuts/034-eks-gs/eks-gs.md +++ b/tuts/034-eks-gs/eks-gs.md @@ -384,7 +384,7 @@ This tutorial is designed to help you learn how to create and manage an EKS clus ### Security considerations -1. **Network security**: +1. 
**Network security**: - Place worker nodes in private subnets only - Use security groups to restrict traffic between pods - Consider using private API server endpoints @@ -424,9 +424,9 @@ For more information on EKS architecture best practices, see the [EKS Best Pract Now that you've learned how to create and manage an Amazon EKS cluster using the AWS CLI, you can explore more advanced features and use cases: -* Deploy a [sample application](https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html) to your EKS cluster -* Learn how to [manage access to your cluster](https://docs.aws.amazon.com/eks/latest/userguide/grant-k8s-access.html) for other IAM users and roles -* Explore [cluster autoscaling](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html) to automatically adjust the size of your node groups based on demand -* Configure [persistent storage](https://docs.aws.amazon.com/eks/latest/userguide/storage.html) for your applications using Amazon EBS or Amazon EFS -* Set up [monitoring and logging](https://docs.aws.amazon.com/eks/latest/userguide/monitoring.html) for your EKS cluster -* Implement [security best practices](https://docs.aws.amazon.com/eks/latest/userguide/security.html) for your Kubernetes workloads +- Deploy a [sample application](https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html) to your EKS cluster +- Learn how to [manage access to your cluster](https://docs.aws.amazon.com/eks/latest/userguide/grant-k8s-access.html) for other IAM users and roles +- Explore [cluster autoscaling](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html) to automatically adjust the size of your node groups based on demand +- Configure [persistent storage](https://docs.aws.amazon.com/eks/latest/userguide/storage.html) for your applications using Amazon EBS or Amazon EFS +- Set up [monitoring and logging](https://docs.aws.amazon.com/eks/latest/userguide/monitoring.html) for your EKS cluster +- Implement 
[security best practices](https://docs.aws.amazon.com/eks/latest/userguide/security.html) for your Kubernetes workloads diff --git a/tuts/035-workspaces-personal/README.md b/tuts/035-workspaces-personal/README.md index 09ec291..f3a36d8 100644 --- a/tuts/035-workspaces-personal/README.md +++ b/tuts/035-workspaces-personal/README.md @@ -8,7 +8,7 @@ You can either run the provided shell script to automatically provision your Wor The script creates the following AWS resources in order: -• WorkSpaces workspace directory -• WorkSpaces workspaces +- WorkSpaces workspace directory +- WorkSpaces workspaces The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/036-rds-gs/README.md b/tuts/036-rds-gs/README.md index 4bc1f48..4c4bbd3 100644 --- a/tuts/036-rds-gs/README.md +++ b/tuts/036-rds-gs/README.md @@ -8,13 +8,13 @@ You can run the shell script to automatically provision the Amazon RDS database The script creates the following AWS resources in order: -• EC2 security group -• EC2 security group (b) -• RDS db subnet group -• RDS db subnet group (b) -• Secrets Manager secret -• Secrets Manager secret (b) -• RDS db instance -• RDS db instance (b) +- EC2 security group +- EC2 security group (b) +- RDS db subnet group +- RDS db subnet group (b) +- Secrets Manager secret +- Secrets Manager secret (b) +- RDS db instance +- RDS db instance (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/036-rds-gs/rds-gs.md b/tuts/036-rds-gs/rds-gs.md index f6de4f3..9baaf4e 100644 --- a/tuts/036-rds-gs/rds-gs.md +++ b/tuts/036-rds-gs/rds-gs.md @@ -4,14 +4,14 @@ This tutorial guides you through the process of creating and managing an Amazon ## Topics -* [Prerequisites](#prerequisites) -* [Set up networking components](#set-up-networking-components) -* [Create a DB subnet group](#create-a-db-subnet-group) -* [Create a DB instance](#create-a-db-instance) -* [Connect to your DB instance](#connect-to-your-db-instance) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Set up networking components](#set-up-networking-components) +- [Create a DB subnet group](#create-a-db-subnet-group) +- [Create a DB instance](#create-a-db-instance) +- [Connect to your DB instance](#connect-to-your-db-instance) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -373,7 +373,7 @@ mysql> ); -- Insert some test data - INSERT INTO users (name, email) VALUES + INSERT INTO users (name, email) VALUES ('Alice Johnson', 'alice@example.com'), ('Bob Smith', 'bob@example.com'), ('Carol Davis', 'carol@example.com'); @@ -582,10 +582,10 @@ For more information on building production-ready database environments, refer t Now that you've learned how to create and manage an RDS DB instance using the AWS CLI, you might want to explore these related topics: -* [Working with automated backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html) -* [Creating a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html) -* [Setting up Multi-AZ deployments](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) -* [Monitoring RDS metrics with 
CloudWatch](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MonitoringOverview.html) -* [Using parameter groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html) -* [Implementing connection pooling](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html) -* [Setting up read replicas](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html) +- [Working with automated backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html) +- [Creating a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html) +- [Setting up Multi-AZ deployments](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) +- [Monitoring RDS metrics with CloudWatch](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MonitoringOverview.html) +- [Using parameter groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html) +- [Implementing connection pooling](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html) +- [Setting up read replicas](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html) diff --git a/tuts/037-emr-gs/README.md b/tuts/037-emr-gs/README.md index a3966c0..2cfba85 100644 --- a/tuts/037-emr-gs/README.md +++ b/tuts/037-emr-gs/README.md @@ -8,8 +8,8 @@ You can run the shell script to automatically provision the Amazon EMR cluster a The script creates the following AWS resources in order: -• EMR default roles -• EC2 key pair -• EMR cluster +- EMR default roles +- EC2 key pair +- EMR cluster The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/037-emr-gs/emr-gs.md b/tuts/037-emr-gs/emr-gs.md index 7860508..53b0ac1 100644 --- a/tuts/037-emr-gs/emr-gs.md +++ b/tuts/037-emr-gs/emr-gs.md @@ -90,10 +90,10 @@ def calculate_red_violations(data_source, output_uri): restaurants_df.createOrReplaceTempView("restaurant_violations") # Create a DataFrame of the top 10 restaurants with the most Red violations - top_red_violation_restaurants = spark.sql("""SELECT name, count(*) AS total_red_violations - FROM restaurant_violations - WHERE violation_type = 'RED' - GROUP BY name + top_red_violation_restaurants = spark.sql("""SELECT name, count(*) AS total_red_violations + FROM restaurant_violations + WHERE violation_type = 'RED' + GROUP BY name ORDER BY total_red_violations DESC LIMIT 10""") # Write the results to the specified output URI @@ -159,7 +159,7 @@ aws emr create-cluster \ --log-uri s3://amzndemo-s3-demo-bucket/logs/ ``` -Replace `your-key-pair-name` with the name of your EC2 key pair. In this tutorial, we use `emr-tutorial-key` as your key pair name. +Replace `your-key-pair-name` with the name of your EC2 key pair. In this tutorial, we use `emr-tutorial-key` as your key pair name. This command creates a cluster with one primary node and two core nodes, all using m5.xlarge instances. The cluster will have Spark installed and will use the default IAM roles. The command returns a cluster ID, which you'll need for subsequent operations: @@ -224,7 +224,7 @@ This command submits your PySpark script as a step to the cluster. The `Args` pa **Check step status** -Monitor the status of your step. Replace `s-1234ABCDEFGH` with your actual step ID. +Monitor the status of your step. Replace `s-1234ABCDEFGH` with your actual step ID. ```bash aws emr describe-step --cluster-id j-1234ABCD5678 --step-id s-1234ABCDEFGH @@ -311,8 +311,8 @@ Step 2. Find your cluster's security group. 
Replace `j-1234ABCD5678` with your c ```bash aws emr describe-cluster --cluster-id j-1234ABCD5678 --query 'Cluster.Ec2InstanceAttributes.EmrManagedMasterSecurityGroup' --output text -``` - +``` + Step 3. Add SSH access rule to the security group. Replace `sg-xxxxxxxxx` with your security group ID that's returned in Step 2. Replace YOUR_IP_ADDRESS with the IP from Step 1. @@ -334,7 +334,7 @@ aws emr ssh --cluster-id j-1234ABCD5678 --key-pair-file ~/path/to/your-key-pair. **View Spark logs** -Once connected, you can view Spark logs in two locations: +Once connected, you can view Spark logs in two locations: Option 1: View local Spark service logs ```bash @@ -366,19 +366,19 @@ sudo cat /var/log/spark/spark-history-server.out **Useful commands while Connected** -• **Check cluster status:** `yarn application -list` -• **View HDFS contents:** `hdfs dfs -ls /` -• **Monitor system resources:** `top` -• **Exit SSH session:** `exit` +- **Check cluster status:** `yarn application -list` +- **View HDFS contents:** `hdfs dfs -ls /` +- **Monitor system resources:** `top` +- **Exit SSH session:** `exit` **Troubleshooting** -• **Connection timeout:** Verify that your security group allows SSH (port 22) from your IP -• **Permission denied:** Ensure your key pair file has correct permissions. Replace `~/emr-tutorial-key.pem` with the path to your key pair file. In this example, we use `~/emr-tutorial-key` as the path to your key pair. +- **Connection timeout:** Verify that your security group allows SSH (port 22) from your IP +- **Permission denied:** Ensure your key pair file has correct permissions. Replace `~/emr-tutorial-key.pem` with the path to your key pair file. In this example, we use `~/emr-tutorial-key` as the path to your key pair. 
``` chmod 400 ~/emr-tutorial-key.pem ``` -• **Key not found:** Verify the path to your key pair file is correct +- **Key not found:** Verify the path to your key pair file is correct ## Clean up resources @@ -399,7 +399,7 @@ Check the termination status. Replace `j-1234ABCD5678` with your cluster ID. aws emr describe-cluster --cluster-id j-1234ABCD5678 ``` -The cluster is terminated when its state changes to `TERMINATED`. An example response is as follows: +The cluster is terminated when its state changes to `TERMINATED`. An example response is as follows: ```json { diff --git a/tuts/038-redshift-serverless/README.md b/tuts/038-redshift-serverless/README.md index f3d5196..77b58cf 100644 --- a/tuts/038-redshift-serverless/README.md +++ b/tuts/038-redshift-serverless/README.md @@ -8,14 +8,14 @@ You can either run the automated script `redshift-serverless.sh` to execute all The script creates the following AWS resources in order: -• Secrets Manager secret -• IAM role -• IAM role (b) -• IAM role policy -• IAM role policy (b) -• Redshift-Serverless namespace -• Redshift-Serverless namespace (b) -• Redshift-Serverless workgroup -• Redshift-Serverless workgroup (b) +- Secrets Manager secret +- IAM role +- IAM role (b) +- IAM role policy +- IAM role policy (b) +- Redshift-Serverless namespace +- Redshift-Serverless namespace (b) +- Redshift-Serverless workgroup +- Redshift-Serverless workgroup (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/038-redshift-serverless/redshift-serverless.md b/tuts/038-redshift-serverless/redshift-serverless.md index 35f6bdf..9c6ceeb 100644 --- a/tuts/038-redshift-serverless/redshift-serverless.md +++ b/tuts/038-redshift-serverless/redshift-serverless.md @@ -4,14 +4,14 @@ This tutorial guides you through setting up and using Amazon Redshift Serverless ## Topics -* [Prerequisites](#prerequisites) -* [Creating an IAM role for Amazon S3 access](#creating-an-iam-role-for-amazon-s3-access) -* [Creating a Redshift Serverless namespace and workgroup](#creating-a-redshift-serverless-namespace-and-workgroup) -* [Creating tables and loading sample data](#creating-tables-and-loading-sample-data) -* [Running queries on your data](#running-queries-on-your-data) -* [Cleaning up resources](#cleaning-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Creating an IAM role for Amazon S3 access](#creating-an-iam-role-for-amazon-s3-access) +- [Creating a Redshift Serverless namespace and workgroup](#creating-a-redshift-serverless-namespace-and-workgroup) +- [Creating tables and loading sample data](#creating-tables-and-loading-sample-data) +- [Running queries on your data](#running-queries-on-your-data) +- [Cleaning up resources](#cleaning-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -265,11 +265,11 @@ Now, let's load data into these tables from the public Amazon Redshift sample da aws redshift-data execute-statement \ --database dev \ --workgroup-name default-workgroup \ - --sql "COPY users - FROM 's3://redshift-downloads/tickit/allusers_pipe.txt' - DELIMITER '|' + --sql "COPY users + FROM 's3://redshift-downloads/tickit/allusers_pipe.txt' + DELIMITER '|' TIMEFORMAT 'YYYY-MM-DD HH:MI:SS' - IGNOREHEADER 1 + IGNOREHEADER 1 IAM_ROLE '$ROLE_ARN';" ``` @@ -282,10 +282,10 @@ aws redshift-data execute-statement \ 
--database dev \ --workgroup-name default-workgroup \ --sql "COPY event - FROM 's3://redshift-downloads/tickit/allevents_pipe.txt' - DELIMITER '|' + FROM 's3://redshift-downloads/tickit/allevents_pipe.txt' + DELIMITER '|' TIMEFORMAT 'YYYY-MM-DD HH:MI:SS' - IGNOREHEADER 1 + IGNOREHEADER 1 IAM_ROLE '$ROLE_ARN';" ``` @@ -296,10 +296,10 @@ aws redshift-data execute-statement \ --database dev \ --workgroup-name default-workgroup \ --sql "COPY sales - FROM 's3://redshift-downloads/tickit/sales_tab.txt' - DELIMITER '\t' + FROM 's3://redshift-downloads/tickit/sales_tab.txt' + DELIMITER '\t' TIMEFORMAT 'MM/DD/YYYY HH:MI:SS' - IGNOREHEADER 1 + IGNOREHEADER 1 IAM_ROLE '$ROLE_ARN';" ``` @@ -363,7 +363,7 @@ First, let's find the top 10 buyers by quantity: QUERY1_ID=$(aws redshift-data execute-statement \ --database dev \ --workgroup-name default-workgroup \ - --sql "SELECT firstname, lastname, total_quantity + --sql "SELECT firstname, lastname, total_quantity FROM (SELECT buyerid, sum(qtysold) total_quantity FROM sales GROUP BY buyerid @@ -394,8 +394,8 @@ Let's run another query to find events in the 99.9 percentile in terms of all-ti QUERY2_ID=$(aws redshift-data execute-statement \ --database dev \ --workgroup-name default-workgroup \ - --sql "SELECT eventname, total_price - FROM (SELECT eventid, total_price, ntile(1000) over(order by total_price desc) as percentile + --sql "SELECT eventname, total_price + FROM (SELECT eventid, total_price, ntile(1000) over(order by total_price desc) as percentile FROM (SELECT eventid, sum(pricepaid) total_price FROM sales GROUP BY eventid)) Q, event E @@ -483,10 +483,10 @@ For more information, see the [AWS Well-Architected Framework](https://aws.amazo Now that you've learned how to set up and use Amazon Redshift Serverless with the AWS CLI, you can explore more advanced features: -* [Connect to Amazon Redshift Serverless using JDBC and ODBC drivers](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-connecting.html) -* [Use the 
Amazon Redshift Data API for programmatic access](https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html) -* [Build machine learning models with Amazon Redshift ML](https://docs.aws.amazon.com/redshift/latest/dg/getting-started-machine-learning.html) -* [Query data directly from an Amazon S3 data lake](https://docs.aws.amazon.com/redshift/latest/dg/c-getting-started-using-spectrum.html) -* [Manage Amazon Redshift Serverless workgroups and namespaces](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-workgroups-and-namespaces.html) +- [Connect to Amazon Redshift Serverless using JDBC and ODBC drivers](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-connecting.html) +- [Use the Amazon Redshift Data API for programmatic access](https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html) +- [Build machine learning models with Amazon Redshift ML](https://docs.aws.amazon.com/redshift/latest/dg/getting-started-machine-learning.html) +- [Query data directly from an Amazon S3 data lake](https://docs.aws.amazon.com/redshift/latest/dg/c-getting-started-using-spectrum.html) +- [Manage Amazon Redshift Serverless workgroups and namespaces](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-workgroups-and-namespaces.html) You can also explore the [Amazon Redshift Serverless pricing](https://aws.amazon.com/redshift/serverless/pricing/) to understand the cost structure for your specific workloads. 
diff --git a/tuts/039-redshift-provisioned/README.md b/tuts/039-redshift-provisioned/README.md index ded70c6..f3d2460 100644 --- a/tuts/039-redshift-provisioned/README.md +++ b/tuts/039-redshift-provisioned/README.md @@ -8,8 +8,8 @@ You can run the shell script to automatically provision the Amazon Redshift clus The script creates the following AWS resources in order: -• Redshift cluster -• IAM role -• IAM role policy +- Redshift cluster +- IAM role +- IAM role policy The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/039-redshift-provisioned/redshift-provisioned.md b/tuts/039-redshift-provisioned/redshift-provisioned.md index fcaa9c1..0807828 100644 --- a/tuts/039-redshift-provisioned/redshift-provisioned.md +++ b/tuts/039-redshift-provisioned/redshift-provisioned.md @@ -4,14 +4,14 @@ This tutorial guides you through setting up an Amazon Redshift provisioned clust ## Topics -* [Prerequisites](#prerequisites) -* [Create a Redshift cluster](#create-a-redshift-cluster) -* [Create an IAM role for S3 access](#create-an-iam-role-for-s3-access) -* [Create tables and load data](#create-tables-and-load-data) -* [Run example queries](#run-example-queries) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create a Redshift cluster](#create-a-redshift-cluster) +- [Create an IAM role for S3 access](#create-an-iam-role-for-s3-access) +- [Create tables and load data](#create-tables-and-load-data) +- [Run example queries](#run-example-queries) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -343,10 +343,10 @@ For more information on building 
production-ready solutions with Amazon Redshift Now that you've learned the basics of working with Amazon Redshift using the AWS CLI, you can explore more advanced features: -* Learn about [Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-console.html) for on-demand data warehousing without managing clusters -* Explore [Amazon Redshift query editor v2](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-using.html) for a web-based SQL client experience -* Discover [Amazon Redshift data sharing](https://docs.aws.amazon.com/redshift/latest/dg/datashare-overview.html) to share data across clusters and AWS accounts -* Implement [Amazon Redshift Spectrum](https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html) to query data directly from files in Amazon S3 -* Set up [automated snapshots and backups](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html) for disaster recovery +- Learn about [Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-console.html) for on-demand data warehousing without managing clusters +- Explore [Amazon Redshift query editor v2](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-using.html) for a web-based SQL client experience +- Discover [Amazon Redshift data sharing](https://docs.aws.amazon.com/redshift/latest/dg/datashare-overview.html) to share data across clusters and AWS accounts +- Implement [Amazon Redshift Spectrum](https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html) to query data directly from files in Amazon S3 +- Set up [automated snapshots and backups](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html) for disaster recovery For more information about Amazon Redshift features and best practices, see the [Amazon Redshift Database Developer Guide](https://docs.aws.amazon.com/redshift/latest/dg/welcome.html). 
diff --git a/tuts/040-qbusiness-ica/README.md b/tuts/040-qbusiness-ica/README.md index 6d76a15..3421ca4 100644 --- a/tuts/040-qbusiness-ica/README.md +++ b/tuts/040-qbusiness-ica/README.md @@ -8,10 +8,10 @@ You can either run the automated shell script `qbusiness-ica.sh` to create all t The script creates the following AWS resources in order: -• IAM role for Amazon Q Business application (with CloudWatch and logging permissions) -• IAM policy with necessary permissions for the application role -• Amazon Q Business application -• User assignment to the application -• User subscription for the application +- IAM role for Amazon Q Business application (with CloudWatch and logging permissions) +- IAM policy with necessary permissions for the application role +- Amazon Q Business application +- User assignment to the application +- User subscription for the application The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. diff --git a/tuts/040-qbusiness-ica/qbusiness-ica.md b/tuts/040-qbusiness-ica/qbusiness-ica.md index 1280ea7..c726d7d 100644 --- a/tuts/040-qbusiness-ica/qbusiness-ica.md +++ b/tuts/040-qbusiness-ica/qbusiness-ica.md @@ -8,17 +8,17 @@ By the end of this tutorial, you'll have a fully functional Amazon Q Business ap Before you begin this tutorial, make sure you have: -* An AWS account with permissions to create and manage Amazon Q Business resources, IAM Identity Center, IAM roles, and policies. -* The AWS CLI installed and configured with appropriate credentials. For information about installing the AWS CLI, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). -* Basic familiarity with AWS CLI commands and JSON syntax. -* Approximately 30 minutes to complete the tutorial. 
+- An AWS account with permissions to create and manage Amazon Q Business resources, IAM Identity Center, IAM roles, and policies. +- The AWS CLI installed and configured with appropriate credentials. For information about installing the AWS CLI, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +- Basic familiarity with AWS CLI commands and JSON syntax. +- Approximately 30 minutes to complete the tutorial. ### Cost considerations This tutorial creates resources that incur charges to your AWS account: -* Amazon Q Business Pro user subscription: $40 per user per month -* Amazon Q Business Lite user subscription (optional): $20 per user per month +- Amazon Q Business Pro user subscription: $40 per user per month +- Amazon Q Business Lite user subscription (optional): $20 per user per month The total cost for running the resources in this tutorial for one hour is approximately $0.056 (for one Pro user) or $0.084 (if you also create a group with a Lite subscription). To avoid ongoing charges, follow the cleanup steps at the end of the tutorial. @@ -76,7 +76,7 @@ cat > qbusiness-trust-policy.json << EOF EOF ``` -Next, create a permissions policy file that defines what actions the role can perform. +Next, create a permissions policy file that defines what actions the role can perform. Note: For this tutorial, replace "123456789012" with your AWS account number. Replace "us-east-1" with the AWS Region name that you plan to use. @@ -170,9 +170,9 @@ After creating the role and policy, wait for them to propagate (approximately 15 Before creating the Amazon Q Business application, you need to set up a user in IAM Identity Center who will access the application. -First, get the Identity Store ID associated with your IAM Identity Center instance. +First, get the Identity Store ID associated with your IAM Identity Center instance. 
-Replace "arn:aws:sso:::instance/ssoins-abcd1234xmpl" with the ARN of your IAM Identity Center instance. Replace "us-east-1" with the AWS Region where your IAM Identity Center instance is located. +Replace "arn:aws:sso:::instance/ssoins-abcd1234xmpl" with the ARN of your IAM Identity Center instance. Replace "us-east-1" with the AWS Region where your IAM Identity Center instance is located. ```bash aws sso-admin describe-instance \ @@ -182,9 +182,9 @@ aws sso-admin describe-instance \ --output text ``` -Make a note of the Identity Store ID in the response. You'll use it in the following command. +Make a note of the Identity Store ID in the response. You'll use it in the following command. -Now, create a user in the Identity Store. Replace "d-abcd1234xmpl" with your actual Identity Store ID. Replace "us-east-1" with the AWS Region where your IAM Identity Center instance is located. +Now, create a user in the Identity Store. Replace "d-abcd1234xmpl" with your actual Identity Store ID. Replace "us-east-1" with the AWS Region where your IAM Identity Center instance is located. Note: In a production environment, use valid email addresses from your organization's domain instead of example.com. ```bash @@ -508,7 +508,7 @@ aws iam create-policy \ --output text ``` -Attach the policy to the role. Replace "123456789012" with the AWS account number. Replace "us-east-1" with the AWS Region name that you plan to use. +Attach the policy to the role. Replace "123456789012" with the AWS account number. Replace "us-east-1" with the AWS Region name that you plan to use. ```bash aws iam attach-role-policy \ @@ -543,9 +543,9 @@ aws qbusiness get-web-experience \ --output text ``` -This URL is where your users can access the Amazon Q Business application through a web browser. +This URL is where your users can access the Amazon Q Business application through a web browser. 
-To sign in and access the URL through a web browser, for username, use the user-name "qbusiness-user-abcd1234" that you specify in Step 3. For Password, choose "Forgot password" to receive the reset password email from your email that's specified in Step 3. +To sign in and access the URL through a web browser, for username, use the user-name "qbusiness-user-abcd1234" that you specify in Step 3. For Password, choose "Forgot password" to receive the reset password email from your email that's specified in Step 3. ## Step 9: Verify your resources @@ -657,7 +657,7 @@ For more information on AWS architecture best practices, see the [AWS Well-Archi Now that you've created an Amazon Q Business application, you might want to explore these related topics: -* [Adding data sources to your Amazon Q Business application](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/data-source-overview.html) -* [Managing user subscriptions in Amazon Q Business](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/tiers.html) -* [Customizing your Amazon Q Business web experience](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/customizing-web-experience.html) -* [Monitoring Amazon Q Business with CloudWatch](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/monitoring-overview.html) \ No newline at end of file +- [Adding data sources to your Amazon Q Business application](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/data-source-overview.html) +- [Managing user subscriptions in Amazon Q Business](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/tiers.html) +- [Customizing your Amazon Q Business web experience](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/customizing-web-experience.html) +- [Monitoring Amazon Q Business with CloudWatch](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/monitoring-overview.html) \ No newline at end of file diff --git a/tuts/042-qbusiness-anon/README.md b/tuts/042-qbusiness-anon/README.md index 
5baec48..8d8309c 100644 --- a/tuts/042-qbusiness-anon/README.md +++ b/tuts/042-qbusiness-anon/README.md @@ -8,8 +8,8 @@ You can either run the automated script `qbusiness-anon.sh` to execute all opera The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• Qbusiness application +- IAM role +- IAM role policy +- Qbusiness application The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/042-qbusiness-anon/qbusiness-anon.md b/tuts/042-qbusiness-anon/qbusiness-anon.md index 199b1f9..64c84c7 100644 --- a/tuts/042-qbusiness-anon/qbusiness-anon.md +++ b/tuts/042-qbusiness-anon/qbusiness-anon.md @@ -4,13 +4,13 @@ This tutorial guides you through creating an Amazon Q Business application envir ## Topics -* [Prerequisites](#prerequisites) -* [Create an IAM role for Amazon Q Business](#create-an-iam-role-for-amazon-q-business) -* [Create an Amazon Q Business application with anonymous access](#create-an-amazon-q-business-application-with-anonymous-access) -* [Verify the application creation](#verify-the-application-creation) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create an IAM role for Amazon Q Business](#create-an-iam-role-for-amazon-q-business) +- [Create an Amazon Q Business application with anonymous access](#create-an-amazon-q-business-application-with-anonymous-access) +- [Verify the application creation](#verify-the-application-creation) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites diff --git a/tuts/043-amazon-mq-gs/README.md b/tuts/043-amazon-mq-gs/README.md index 6592528..e58286d 100644 --- 
a/tuts/043-amazon-mq-gs/README.md +++ b/tuts/043-amazon-mq-gs/README.md @@ -8,7 +8,7 @@ You can either run the provided shell script to automatically set up your Amazon The script creates the following AWS resources in order: -• Secrets Manager secret -• Mq broker +- Secrets Manager secret +- Mq broker The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/043-amazon-mq-gs/amazon-mq-gs.md b/tuts/043-amazon-mq-gs/amazon-mq-gs.md index 6ae5ed1..2f0d2a7 100644 --- a/tuts/043-amazon-mq-gs/amazon-mq-gs.md +++ b/tuts/043-amazon-mq-gs/amazon-mq-gs.md @@ -237,7 +237,7 @@ public class AmazonMQExample { // Broker connection details private final static String WIRE_LEVEL_ENDPOINT = "$WIRE_ENDPOINT"; private final static String SECRET_NAME = "$SECRET_NAME"; - + // Credentials will be retrieved from AWS Secrets Manager private static String username; private static String password; @@ -245,7 +245,7 @@ public class AmazonMQExample { public static void main(String[] args) throws JMSException { // Retrieve credentials from AWS Secrets Manager retrieveCredentials(); - + final ActiveMQConnectionFactory connectionFactory = createActiveMQConnectionFactory(); final PooledConnectionFactory pooledConnectionFactory = createPooledConnectionFactory(connectionFactory); @@ -254,26 +254,26 @@ public class AmazonMQExample { pooledConnectionFactory.stop(); } - + private static void retrieveCredentials() { try { // Create a Secrets Manager client SecretsManagerClient client = SecretsManagerClient.builder() .region(Region.of(System.getenv("AWS_REGION"))) .build(); - + GetSecretValueRequest getSecretValueRequest = GetSecretValueRequest.builder() .secretId(SECRET_NAME) .build(); - + GetSecretValueResponse getSecretValueResponse = client.getSecretValue(getSecretValueRequest); 
String secretString = getSecretValueResponse.secretString(); - + // Parse the JSON string JsonObject jsonObject = new Gson().fromJson(secretString, JsonObject.class); username = jsonObject.get("username").getAsString(); password = jsonObject.get("password").getAsString(); - + System.out.println("Successfully retrieved credentials from AWS Secrets Manager"); } catch (Exception e) { System.err.println("Error retrieving credentials from AWS Secrets Manager: " + e.getMessage()); diff --git a/tuts/044-amazon-managed-grafana-gs/README.md b/tuts/044-amazon-managed-grafana-gs/README.md index f2a7cd4..bd897f9 100644 --- a/tuts/044-amazon-managed-grafana-gs/README.md +++ b/tuts/044-amazon-managed-grafana-gs/README.md @@ -8,9 +8,9 @@ You can either run the provided shell script to automatically set up your Amazon The script creates the following AWS resources in order: -• IAM role -• IAM policy -• IAM role policy -• Grafana workspace +- IAM role +- IAM policy +- IAM role policy +- Grafana workspace The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.md b/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.md index 1603045..2528460 100644 --- a/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.md +++ b/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.md @@ -4,15 +4,15 @@ This tutorial guides you through creating and configuring an Amazon Managed Graf ## Topics -* [Prerequisites](#prerequisites) -* [Create an IAM role for your workspace](#create-an-iam-role-for-your-workspace) -* [Create a Grafana workspace](#create-a-grafana-workspace) -* [Configure authentication](#configure-authentication) -* [Configure optional settings](#configure-optional-settings) -* [Access your Grafana workspace](#access-your-grafana-workspace) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create an IAM role for your workspace](#create-an-iam-role-for-your-workspace) +- [Create a Grafana workspace](#create-a-grafana-workspace) +- [Configure authentication](#configure-authentication) +- [Configure optional settings](#configure-optional-settings) +- [Access your Grafana workspace](#access-your-grafana-workspace) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites diff --git a/tuts/045-aws-iam-identity-center-gs/README.md b/tuts/045-aws-iam-identity-center-gs/README.md index 85de104..d348d36 100644 --- a/tuts/045-aws-iam-identity-center-gs/README.md +++ b/tuts/045-aws-iam-identity-center-gs/README.md @@ -8,14 +8,14 @@ You can either run the provided shell script to automatically configure your IAM The script creates the following AWS resources in order: -• Sso-Admin instance -• Identitystore user -• Identitystore group -• Identitystore group membership -• Sso-Admin permission set -• Sso-Admin managed policy to 
permission set -• Sso-Admin account assignment -• Sso-Admin application -• Sso-Admin application assignment +- Sso-Admin instance +- Identitystore user +- Identitystore group +- Identitystore group membership +- Sso-Admin permission set +- Sso-Admin managed policy to permission set +- Sso-Admin account assignment +- Sso-Admin application +- Sso-Admin application assignment The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/046-aws-systems-manager-gs/README.md b/tuts/046-aws-systems-manager-gs/README.md index 15ce9fc..3a99113 100644 --- a/tuts/046-aws-systems-manager-gs/README.md +++ b/tuts/046-aws-systems-manager-gs/README.md @@ -8,9 +8,9 @@ You can either run the automated script `aws-systems-manager-gs.sh` to execute a The script creates the following AWS resources in order: -• IAM policy -• IAM role -• IAM role policy -• Ssm-Quicksetup configuration manager +- IAM policy +- IAM role +- IAM role policy +- Ssm-Quicksetup configuration manager The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/047-aws-network-firewall-gs/README.md b/tuts/047-aws-network-firewall-gs/README.md index 2cd3f98..5f2c1ac 100644 --- a/tuts/047-aws-network-firewall-gs/README.md +++ b/tuts/047-aws-network-firewall-gs/README.md @@ -8,15 +8,15 @@ You can either run the automated script `aws-network-firewall-gs.sh` to execute The script creates the following AWS resources in order: -• EC2 route -• Network-Firewall rule group -• Network-Firewall rule group (b) -• Network-Firewall firewall policy -• Network-Firewall firewall -• EC2 route table -• EC2 route (b) -• EC2 route (c) -• EC2 route (d) -• EC2 route (e) +- EC2 route +- Network-Firewall rule group +- Network-Firewall rule group (b) +- Network-Firewall firewall policy +- Network-Firewall firewall +- EC2 route table +- EC2 route (b) +- EC2 route (c) +- EC2 route (d) +- EC2 route (e) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/047-aws-network-firewall-gs/aws-network-firewall-gs.md b/tuts/047-aws-network-firewall-gs/aws-network-firewall-gs.md index 1362eea..5020034 100644 --- a/tuts/047-aws-network-firewall-gs/aws-network-firewall-gs.md +++ b/tuts/047-aws-network-firewall-gs/aws-network-firewall-gs.md @@ -4,14 +4,14 @@ This tutorial guides you through setting up AWS Network Firewall using the AWS C ## Topics -* [Prerequisites](#prerequisites) -* [Create rule groups](#create-rule-groups) -* [Create a firewall policy](#create-a-firewall-policy) -* [Create a firewall](#create-a-firewall) -* [Update route tables](#update-route-tables) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create rule groups](#create-rule-groups) +- [Create a firewall policy](#create-a-firewall-policy) +- [Create a firewall](#create-a-firewall) +- [Update route tables](#update-route-tables) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -30,8 +30,8 @@ Before you begin this tutorial, make sure you have the following: The resources you create in this tutorial will incur the following approximate costs if left running: -* Network Firewall Endpoint: $0.395 per hour in US East (N. Virginia) -* Network Firewall Traffic Processing: $0.065 per GB processed in US East (N. Virginia) +- Network Firewall Endpoint: $0.395 per hour in US East (N. Virginia) +- Network Firewall Traffic Processing: $0.065 per GB processed in US East (N. Virginia) For a firewall running continuously for a month (730 hours) with 100 GB of traffic, the cost would be approximately $295. Prices may vary by region. This tutorial includes cleanup instructions to help you avoid ongoing charges. 
@@ -39,10 +39,10 @@ For a firewall running continuously for a month (730 hours) with 100 GB of traff When working with Network Firewall resources using the CLI, consider these best practices: -* **Use unique resource names**: Generate unique identifiers for your resources to avoid naming conflicts. For example, append a random string to resource names like `StatelessRuleGroup-abcd1234`. -* **Implement proper error handling**: Check the exit status of commands and handle failures appropriately. -* **Wait for resource readiness**: Always wait for resources to reach the appropriate state before proceeding to dependent operations. -* **Use timeouts**: Implement timeouts for long-running operations to avoid indefinite waits. +- **Use unique resource names**: Generate unique identifiers for your resources to avoid naming conflicts. For example, append a random string to resource names like `StatelessRuleGroup-abcd1234`. +- **Implement proper error handling**: Check the exit status of commands and handle failures appropriately. +- **Wait for resource readiness**: Always wait for resources to reach the appropriate state before proceeding to dependent operations. +- **Use timeouts**: Implement timeouts for long-running operations to avoid indefinite waits. For information about managing subnets and route tables in your VPC, see [VPCs and subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) and [Route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) in the Amazon Virtual Private Cloud User Guide. @@ -195,12 +195,12 @@ while true; do --firewall-name "Firewall-abcd1234" \ --query "FirewallStatus.Status" \ --output text) - + if [ "$STATUS" = "READY" ]; then echo "Firewall is ready!" break fi - + echo "Firewall not ready yet (status: $STATUS), waiting 20 seconds..." 
sleep 20 done diff --git a/tuts/048-amazon-simple-notification-service-gs/README.md b/tuts/048-amazon-simple-notification-service-gs/README.md index e37ec84..72be12d 100644 --- a/tuts/048-amazon-simple-notification-service-gs/README.md +++ b/tuts/048-amazon-simple-notification-service-gs/README.md @@ -8,6 +8,6 @@ You can run the shell script to automatically set up the Amazon SNS resources, o The script creates the following AWS resources in order: -• SNS topic +- SNS topic The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/048-amazon-simple-notification-service-gs/amazon-simple-notification-service-gs.md b/tuts/048-amazon-simple-notification-service-gs/amazon-simple-notification-service-gs.md index 5f50e33..98f398c 100644 --- a/tuts/048-amazon-simple-notification-service-gs/amazon-simple-notification-service-gs.md +++ b/tuts/048-amazon-simple-notification-service-gs/amazon-simple-notification-service-gs.md @@ -6,9 +6,9 @@ This tutorial guides you through the process of creating and managing Amazon Sim Before you begin, make sure you have: -* An AWS account with appropriate permissions to create and manage SNS resources -* AWS CLI installed and configured with your credentials -* Basic familiarity with command-line operations +- An AWS account with appropriate permissions to create and manage SNS resources +- AWS CLI installed and configured with your credentials +- Basic familiarity with command-line operations To install the AWS CLI, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). 
@@ -17,9 +17,9 @@ To configure the AWS CLI, see [Configuration basics](https://docs.aws.amazon.com ### Cost information The resources and operations used in this tutorial fall within the AWS Free Tier limits for Amazon SNS, which includes: -* 1 million Amazon SNS requests per month -* 100,000 HTTP/HTTPS notifications per month -* 1,000 email notifications per month +- 1 million Amazon SNS requests per month +- 100,000 HTTP/HTTPS notifications per month +- 1,000 email notifications per month If you're not within the Free Tier period or exceed these limits, the costs are minimal for the operations in this tutorial. For current pricing information, see [Amazon SNS pricing](https://aws.amazon.com/sns/pricing/). @@ -98,7 +98,7 @@ The command returns details about your subscription: } ``` -Note that the `SubscriptionArn` is now a full ARN instead of "pending confirmation", which indicates that the subscription has been confirmed. +Note that the `SubscriptionArn` is now a full ARN instead of "pending confirmation", which indicates that the subscription has been confirmed. Make note of the SubscriptionArn value as you'll need it for the following steps. ## Publish a message to the topic @@ -150,7 +150,7 @@ These commands don't produce any output if they're successful. **Issue 1**: You don't receive the confirmation email. 
-**Solution**: +**Solution**: - Check your spam or junk folder - Verify that you entered the correct email address - Try subscribing again with the same command @@ -211,7 +211,7 @@ For more information on building production-ready applications with Amazon SNS, Now that you've learned the basics of Amazon SNS, you can explore more advanced features: -* [Creating an Amazon SNS FIFO topic](https://docs.aws.amazon.com/sns/latest/dg/sns-fifo-topics.html) - Learn how to create and use FIFO (First-In-First-Out) topics for applications that require strict message ordering -* [Amazon SNS message filtering](https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html) - Discover how to filter messages so that subscribers receive only the messages they're interested in -* [Securing Amazon SNS data with server-side encryption](https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html) - Learn how to protect the contents of your messages using encryption -* [Amazon SNS dead-letter queues](https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html) - Find out how to capture and analyze messages that couldn't be delivered to subscribers +- [Creating an Amazon SNS FIFO topic](https://docs.aws.amazon.com/sns/latest/dg/sns-fifo-topics.html) - Learn how to create and use FIFO (First-In-First-Out) topics for applications that require strict message ordering +- [Amazon SNS message filtering](https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html) - Discover how to filter messages so that subscribers receive only the messages they're interested in +- [Securing Amazon SNS data with server-side encryption](https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html) - Learn how to protect the contents of your messages using encryption +- [Amazon SNS dead-letter queues](https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html) - Find out how to capture and analyze messages that couldn't be delivered to subscribers 
diff --git a/tuts/049-aws-end-user-messaging-gs/README.md b/tuts/049-aws-end-user-messaging-gs/README.md index 1ba1289..5fba283 100644 --- a/tuts/049-aws-end-user-messaging-gs/README.md +++ b/tuts/049-aws-end-user-messaging-gs/README.md @@ -8,6 +8,6 @@ You can either run the automated script `aws-end-user-messaging-gs.sh` to execut The script creates the following AWS resources in order: -• Pinpoint app +- Pinpoint app The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/049-aws-end-user-messaging-gs/aws-end-user-messaging-gs.md b/tuts/049-aws-end-user-messaging-gs/aws-end-user-messaging-gs.md index 1abd236..b453696 100644 --- a/tuts/049-aws-end-user-messaging-gs/aws-end-user-messaging-gs.md +++ b/tuts/049-aws-end-user-messaging-gs/aws-end-user-messaging-gs.md @@ -108,13 +108,13 @@ To enable the APNs channel, you can use either key credentials (recommended) or ``` aws pinpoint update-apns-channel \ --application-id abcd1234xmplabcd1234abcd1234 \ - --apns-channel-request '{ - "Enabled": true, - "DefaultAuthenticationMethod": "KEY", - "TokenKey": "YOUR_P8_FILE_CONTENT", - "TokenKeyId": "YOUR_KEY_ID", - "BundleId": "YOUR_BUNDLE_ID", - "TeamId": "YOUR_TEAM_ID" + --apns-channel-request '{ + "Enabled": true, + "DefaultAuthenticationMethod": "KEY", + "TokenKey": "YOUR_P8_FILE_CONTENT", + "TokenKeyId": "YOUR_KEY_ID", + "BundleId": "YOUR_BUNDLE_ID", + "TeamId": "YOUR_TEAM_ID" }' ``` @@ -125,12 +125,12 @@ Replace the placeholder values with your actual APNs credentials from your Apple ``` aws pinpoint update-apns-channel \ --application-id abcd1234xmplabcd1234abcd1234 \ - --apns-channel-request '{ - "Enabled": true, - "DefaultAuthenticationMethod": "CERTIFICATE", - "Certificate": "YOUR_BASE64_ENCODED_CERTIFICATE", - "PrivateKey": "YOUR_PRIVATE_KEY", - 
"CertificateType": "PRODUCTION" + --apns-channel-request '{ + "Enabled": true, + "DefaultAuthenticationMethod": "CERTIFICATE", + "Certificate": "YOUR_BASE64_ENCODED_CERTIFICATE", + "PrivateKey": "YOUR_PRIVATE_KEY", + "CertificateType": "PRODUCTION" }' ``` @@ -148,10 +148,10 @@ To enable the Baidu Cloud Push channel, use the following command: ``` aws pinpoint update-baidu-channel \ --application-id abcd1234xmplabcd1234abcd1234 \ - --baidu-channel-request '{ - "Enabled": true, - "ApiKey": "YOUR_BAIDU_API_KEY", - "SecretKey": "YOUR_BAIDU_SECRET_KEY" + --baidu-channel-request '{ + "Enabled": true, + "ApiKey": "YOUR_BAIDU_API_KEY", + "SecretKey": "YOUR_BAIDU_SECRET_KEY" }' ``` @@ -164,10 +164,10 @@ To enable the ADM channel, use the following command: ``` aws pinpoint update-adm-channel \ --application-id abcd1234xmplabcd1234abcd1234 \ - --adm-channel-request '{ - "Enabled": true, - "ClientId": "YOUR_ADM_CLIENT_ID", - "ClientSecret": "YOUR_ADM_CLIENT_SECRET" + --adm-channel-request '{ + "Enabled": true, + "ClientId": "YOUR_ADM_CLIENT_ID", + "ClientSecret": "YOUR_ADM_CLIENT_SECRET" }' ``` diff --git a/tuts/051-aws-direct-connect-gs/README.md b/tuts/051-aws-direct-connect-gs/README.md index a8c73be..3ae217c 100644 --- a/tuts/051-aws-direct-connect-gs/README.md +++ b/tuts/051-aws-direct-connect-gs/README.md @@ -8,8 +8,8 @@ You can either run the provided shell script to automatically set up your Direct The script creates the following AWS resources in order: -• Directconnect connection -• EC2 vpn gateway -• Directconnect private virtual interface +- Directconnect connection +- EC2 vpn gateway +- Directconnect private virtual interface The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/051-aws-direct-connect-gs/aws-direct-connect-gs.md b/tuts/051-aws-direct-connect-gs/aws-direct-connect-gs.md index 467aece..0bf5d4c 100644 --- a/tuts/051-aws-direct-connect-gs/aws-direct-connect-gs.md +++ b/tuts/051-aws-direct-connect-gs/aws-direct-connect-gs.md @@ -17,8 +17,8 @@ Before you begin this tutorial, make sure you have the following: AWS Direct Connect enables you to establish a dedicated network connection between your network and one of the AWS Direct Connect locations. There are two types of connections: -* **Dedicated Connection**: A physical Ethernet connection associated with a single customer. Available bandwidths are 1 Gbps, 10 Gbps, 100 Gbps, and 400 Gbps. -* **Hosted Connection**: A physical Ethernet connection that an AWS Direct Connect Partner provisions on behalf of a customer. Available bandwidths range from 50 Mbps to 10 Gbps. +- **Dedicated Connection**: A physical Ethernet connection associated with a single customer. Available bandwidths are 1 Gbps, 10 Gbps, 100 Gbps, and 400 Gbps. +- **Hosted Connection**: A physical Ethernet connection that an AWS Direct Connect Partner provisions on behalf of a customer. Available bandwidths range from 50 Mbps to 10 Gbps. In this tutorial, we'll focus on dedicated connections that you can create and manage directly through the AWS CLI. diff --git a/tuts/052-aws-waf-gs/README.md b/tuts/052-aws-waf-gs/README.md index 3f884f8..178ef3b 100644 --- a/tuts/052-aws-waf-gs/README.md +++ b/tuts/052-aws-waf-gs/README.md @@ -8,7 +8,7 @@ You can either run the provided shell script to automatically set up your AWS WA The script creates the following AWS resources in order: -• WAFv2 web acl -• WAFv2 web acl (b) +- WAFv2 web acl +- WAFv2 web acl (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. 
If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/053-aws-config-gs/README.md b/tuts/053-aws-config-gs/README.md index 714514c..88fca27 100644 --- a/tuts/053-aws-config-gs/README.md +++ b/tuts/053-aws-config-gs/README.md @@ -8,16 +8,16 @@ You can either run the provided shell script to automatically configure AWS Conf The script creates the following AWS resources in order: -• S3 bucket -• S3 bucket (b) -• S3 public access block -• SNS topic -• IAM role -• IAM role policy -• IAM role policy (b) -• Configservice configuration recorder -• Configservice delivery channel -• Configservice delivery channel (b) -• Configservice configuration recorder (b) +- S3 bucket +- S3 bucket (b) +- S3 public access block +- SNS topic +- IAM role +- IAM role policy +- IAM role policy (b) +- Configservice configuration recorder +- Configservice delivery channel +- Configservice delivery channel (b) +- Configservice configuration recorder (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/053-aws-config-gs/aws-config-gs.md b/tuts/053-aws-config-gs/aws-config-gs.md index 2864a3e..0567ebd 100644 --- a/tuts/053-aws-config-gs/aws-config-gs.md +++ b/tuts/053-aws-config-gs/aws-config-gs.md @@ -4,17 +4,17 @@ This tutorial guides you through setting up AWS Config using the AWS Command Lin ## Topics -* [Prerequisites](#prerequisites) -* [Create an Amazon S3 bucket](#create-an-amazon-s3-bucket) -* [Create an Amazon SNS topic](#create-an-amazon-sns-topic) -* [Create an IAM role for AWS Config](#create-an-iam-role-for-aws-config) -* [Set up the AWS Config configuration recorder](#set-up-the-aws-config-configuration-recorder) -* [Set up the AWS Config delivery channel](#set-up-the-aws-config-delivery-channel) -* [Start the configuration recorder](#start-the-configuration-recorder) -* [Verify the AWS Config setup](#verify-the-aws-config-setup) -* [Going to production](#going-to-production) -* [Clean up resources](#clean-up-resources) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create an Amazon S3 bucket](#create-an-amazon-s3-bucket) +- [Create an Amazon SNS topic](#create-an-amazon-sns-topic) +- [Create an IAM role for AWS Config](#create-an-iam-role-for-aws-config) +- [Set up the AWS Config configuration recorder](#set-up-the-aws-config-configuration-recorder) +- [Set up the AWS Config delivery channel](#set-up-the-aws-config-delivery-channel) +- [Start the configuration recorder](#start-the-configuration-recorder) +- [Verify the AWS Config setup](#verify-the-aws-config-setup) +- [Going to production](#going-to-production) +- [Clean up resources](#clean-up-resources) +- [Next steps](#next-steps) ## Prerequisites diff --git a/tuts/054-amazon-kinesis-video-streams-gs/README.md b/tuts/054-amazon-kinesis-video-streams-gs/README.md index 4a36e4b..8e5a3db 100644 --- a/tuts/054-amazon-kinesis-video-streams-gs/README.md +++ b/tuts/054-amazon-kinesis-video-streams-gs/README.md @@ -8,6 +8,6 @@ You can 
run the shell script to automatically set up the Amazon Kinesis Video St The script creates the following AWS resources in order: -• Kinesisvideo stream +- Kinesisvideo stream The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/054-amazon-kinesis-video-streams-gs/amazon-kinesis-video-streams-gs.md b/tuts/054-amazon-kinesis-video-streams-gs/amazon-kinesis-video-streams-gs.md index 36431d1..6aaf25c 100644 --- a/tuts/054-amazon-kinesis-video-streams-gs/amazon-kinesis-video-streams-gs.md +++ b/tuts/054-amazon-kinesis-video-streams-gs/amazon-kinesis-video-streams-gs.md @@ -24,7 +24,7 @@ To avoid ongoing charges, follow the cleanup instructions at the end of this tut First, you'll create a Kinesis video stream that will store and process your video data. The stream acts as a resource that continuously captures, processes, and stores video data. 
-The following command creates a new Kinesis video stream with a 24-hour data retention period, and save this ARN to a variable for easy reference: +The following command creates a new Kinesis video stream with a 24-hour data retention period and saves the stream ARN in a variable for easy reference: ``` $ STREAM_ARN=$(aws kinesisvideo create-stream --stream-name "MyKinesisVideoStream" --data-retention-in-hours 24 --query "StreamARN" --output text) diff --git a/tuts/055-amazon-vpc-lattice-gs/README.md b/tuts/055-amazon-vpc-lattice-gs/README.md index b2c191a..7ff80f7 100644 --- a/tuts/055-amazon-vpc-lattice-gs/README.md +++ b/tuts/055-amazon-vpc-lattice-gs/README.md @@ -8,9 +8,9 @@ You can either run the automated shell script (`amazon-vpc-lattice-getting-start The script creates the following AWS resources in order: -• Vpc-Lattice service network -• Vpc-Lattice service -• Vpc-Lattice service network service association -• Vpc-Lattice service network vpc association +- Vpc-Lattice service network +- Vpc-Lattice service +- Vpc-Lattice service network service association +- Vpc-Lattice service network vpc association The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/055-amazon-vpc-lattice-gs/amazon-vpc-lattice-getting-started.md b/tuts/055-amazon-vpc-lattice-gs/amazon-vpc-lattice-getting-started.md index 1a71ec7..9a995bc 100644 --- a/tuts/055-amazon-vpc-lattice-gs/amazon-vpc-lattice-getting-started.md +++ b/tuts/055-amazon-vpc-lattice-gs/amazon-vpc-lattice-getting-started.md @@ -153,7 +153,7 @@ The command returns a list of VPC IDs and their names (if they have a Name tag): ``` vpc-abcd1234EXAMPLE my-vpc -vpc-efgh5678EXAMPLE +vpc-efgh5678EXAMPLE ``` Make note of the VPC ID that you want to associate with the service network.
diff --git a/tuts/057-amazon-managed-streaming-for-apache-kafka-gs/README.md b/tuts/057-amazon-managed-streaming-for-apache-kafka-gs/README.md index 05b5582..ae04259 100644 --- a/tuts/057-amazon-managed-streaming-for-apache-kafka-gs/README.md +++ b/tuts/057-amazon-managed-streaming-for-apache-kafka-gs/README.md @@ -8,13 +8,13 @@ You can either run the provided shell script to automatically set up your Amazon The script creates the following AWS resources in order: -• MSK cluster -• IAM policy -• IAM role -• IAM role policy -• IAM instance profile -• EC2 security group -• EC2 key pair -• EC2 instances +- MSK cluster +- IAM policy +- IAM role +- IAM role policy +- IAM instance profile +- EC2 security group +- EC2 key pair +- EC2 instances The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/057-amazon-managed-streaming-for-apache-kafka-gs/amazon-managed-streaming-for-apache-kafka-gs.md b/tuts/057-amazon-managed-streaming-for-apache-kafka-gs/amazon-managed-streaming-for-apache-kafka-gs.md index 6b5f37d..8088df0 100644 --- a/tuts/057-amazon-managed-streaming-for-apache-kafka-gs/amazon-managed-streaming-for-apache-kafka-gs.md +++ b/tuts/057-amazon-managed-streaming-for-apache-kafka-gs/amazon-managed-streaming-for-apache-kafka-gs.md @@ -144,15 +144,15 @@ echo "Waiting for cluster to become active (this may take 15-20 minutes)..." while true; do CLUSTER_STATUS=$(aws kafka describe-cluster --cluster-arn "$CLUSTER_ARN" --query "ClusterInfo.State" --output text 2>/dev/null) - + if [ $? -ne 0 ]; then echo "Failed to get cluster status. Retrying in 30 seconds..." sleep 30 continue fi - + echo "Current cluster status: $CLUSTER_STATUS" - + if [ "$CLUSTER_STATUS" = "ACTIVE" ]; then echo "Cluster is now active!" 
break @@ -160,7 +160,7 @@ while true; do echo "Error: Cluster creation failed" exit 1 fi - + echo "Still waiting for cluster to become active... (checking again in 60 seconds)" sleep 60 done @@ -338,31 +338,31 @@ Let's find a suitable combination of subnet and instance type that's available i find_suitable_subnet_and_instance_type() { local vpc_id="$1" local -a subnet_array=("${!2}") - + # List of instance types to try, in order of preference local instance_types=("t3.micro" "t2.micro" "t3.small" "t2.small") - + echo "Finding suitable subnet and instance type combination..." - + for instance_type in "${instance_types[@]}"; do echo "Trying instance type: $instance_type" - + for subnet_id in "${subnet_array[@]}"; do # Get the availability zone for this subnet local az=$(aws ec2 describe-subnets \ --subnet-ids "$subnet_id" \ --query 'Subnets[0].AvailabilityZone' \ --output text) - + echo " Checking subnet $subnet_id in AZ $az" - + # Check if this instance type is available in this AZ local available=$(aws ec2 describe-instance-type-offerings \ --location-type availability-zone \ --filters "Name=location,Values=$az" "Name=instance-type,Values=$instance_type" \ --query 'InstanceTypeOfferings[0].InstanceType' \ --output text 2>/dev/null) - + if [ "$available" = "$instance_type" ]; then echo " ✓ Found suitable combination: $instance_type in $az (subnet: $subnet_id)" SELECTED_SUBNET_ID="$subnet_id" @@ -373,7 +373,7 @@ find_suitable_subnet_and_instance_type() { fi done done - + echo "Error: Could not find any suitable subnet and instance type combination" return 1 } @@ -580,7 +580,7 @@ if [ -z "$CLIENT_DNS" ] || [ "$CLIENT_DNS" = "None" ]; then --instance-ids "$INSTANCE_ID" \ --query 'Reservations[0].Instances[0].PublicIpAddress' \ --output text) - + if [ -z "$CLIENT_DNS" ] || [ "$CLIENT_DNS" = "None" ]; then echo "Error: Failed to get public DNS name or IP address for instance" exit 1 @@ -608,7 +608,7 @@ while [ -z "$BOOTSTRAP_BROKERS" ] || [ "$BOOTSTRAP_BROKERS" = "None" 
]; do # Get the full bootstrap brokers response BOOTSTRAP_RESPONSE=$(aws kafka get-bootstrap-brokers \ --cluster-arn "$CLUSTER_ARN" 2>/dev/null) - + if [ $? -eq 0 ] && [ -n "$BOOTSTRAP_RESPONSE" ]; then # Try to get IAM authentication brokers first using grep BOOTSTRAP_BROKERS=$(echo "$BOOTSTRAP_RESPONSE" | grep -o '"BootstrapBrokerStringSaslIam": "[^"]*' | cut -d'"' -f4) @@ -622,9 +622,9 @@ while [ -z "$BOOTSTRAP_BROKERS" ] || [ "$BOOTSTRAP_BROKERS" = "None" ]; do fi fi fi - + RETRY_COUNT=$((RETRY_COUNT + 1)) - + if [ "$RETRY_COUNT" -ge "$MAX_RETRIES" ]; then echo "Warning: Could not get bootstrap brokers after $MAX_RETRIES attempts." echo "You may need to manually retrieve them later using:" @@ -633,7 +633,7 @@ while [ -z "$BOOTSTRAP_BROKERS" ] || [ "$BOOTSTRAP_BROKERS" = "None" ]; do AUTH_METHOD="UNKNOWN" break fi - + if [ -z "$BOOTSTRAP_BROKERS" ] || [ "$BOOTSTRAP_BROKERS" = "None" ]; then echo "Bootstrap brokers not available yet. Retrying in 30 seconds... (Attempt $RETRY_COUNT/$MAX_RETRIES)" sleep 30 diff --git a/tuts/058-elastic-load-balancing-gs/README.md b/tuts/058-elastic-load-balancing-gs/README.md index 1213dbe..a979731 100644 --- a/tuts/058-elastic-load-balancing-gs/README.md +++ b/tuts/058-elastic-load-balancing-gs/README.md @@ -8,10 +8,10 @@ You can either run the automated shell script (`elastic-load-balancing-gs.sh`) t The script creates the following AWS resources in order: -• EC2 security group -• ELBv2 load balancer -• ELBv2 target group -• ELBv2 targets -• ELBv2 listener +- EC2 security group +- ELBv2 load balancer +- ELBv2 target group +- ELBv2 targets +- ELBv2 listener The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/058-elastic-load-balancing-gs/elastic-load-balancing-gs.md b/tuts/058-elastic-load-balancing-gs/elastic-load-balancing-gs.md index 8a4867a..643ab42 100644 --- a/tuts/058-elastic-load-balancing-gs/elastic-load-balancing-gs.md +++ b/tuts/058-elastic-load-balancing-gs/elastic-load-balancing-gs.md @@ -4,17 +4,17 @@ This tutorial guides you through creating and configuring an Application Load Ba ## Topics -* [Prerequisites](#prerequisites) -* [Create an Application Load Balancer](#create-an-application-load-balancer) -* [Create a target group](#create-a-target-group) -* [Register targets](#register-targets) -* [Create a listener](#create-a-listener) -* [Verify your configuration](#verify-your-configuration) -* [Add an HTTPS listener (optional)](#add-an-https-listener-optional) -* [Add path-based routing (optional)](#add-path-based-routing-optional) -* [Going to production](#going-to-production) -* [Clean up resources](#clean-up-resources) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create an Application Load Balancer](#create-an-application-load-balancer) +- [Create a target group](#create-a-target-group) +- [Register targets](#register-targets) +- [Create a listener](#create-a-listener) +- [Verify your configuration](#verify-your-configuration) +- [Add an HTTPS listener (optional)](#add-an-https-listener-optional) +- [Add path-based routing (optional)](#add-path-based-routing-optional) +- [Going to production](#going-to-production) +- [Clean up resources](#clean-up-resources) +- [Next steps](#next-steps) ## Prerequisites @@ -341,9 +341,9 @@ For more information on building production-ready architectures, refer to: Now that you've learned how to create and configure an Application Load Balancer using the AWS CLI, you might want to explore these related topics: -* [Configure health checks for your target 
group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html) -* [Use sticky sessions with your load balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html) -* [Configure access logs for your load balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html) -* [Monitor your load balancer with CloudWatch](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-cloudwatch-metrics.html) -* [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html) -* [Create a Gateway Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/getting-started.html) +- [Configure health checks for your target group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html) +- [Use sticky sessions with your load balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html) +- [Configure access logs for your load balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html) +- [Monitor your load balancer with CloudWatch](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-cloudwatch-metrics.html) +- [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html) +- [Create a Gateway Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/getting-started.html) diff --git a/tuts/059-amazon-datazone-gs/README.md b/tuts/059-amazon-datazone-gs/README.md index f0166e1..9297d81 100644 --- a/tuts/059-amazon-datazone-gs/README.md +++ b/tuts/059-amazon-datazone-gs/README.md @@ -8,24 +8,24 @@ You can either run the provided shell script to automatically set up your Amazon The script 
creates the following AWS resources in order: -• IAM role -• IAM role policy -• IAM role policy (b) -• IAM role policy (c) -• IAM role policy (d) -• DataZone domain -• DataZone project -• DataZone project (b) -• DataZone environment profile -• DataZone environment -• Glue database -• IAM role (b) -• IAM role policy (e) -• DataZone data source -• DataZone form type -• DataZone asset type -• DataZone asset -• DataZone listing change set -• DataZone subscription request +- IAM role +- IAM role policy +- IAM role policy (b) +- IAM role policy (c) +- IAM role policy (d) +- DataZone domain +- DataZone project +- DataZone project (b) +- DataZone environment profile +- DataZone environment +- Glue database +- IAM role (b) +- IAM role policy (e) +- DataZone data source +- DataZone form type +- DataZone asset type +- DataZone asset +- DataZone listing change set +- DataZone subscription request The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/059-amazon-datazone-gs/amazon-datazone-gs.md b/tuts/059-amazon-datazone-gs/amazon-datazone-gs.md index 8ee7192..2659737 100644 --- a/tuts/059-amazon-datazone-gs/amazon-datazone-gs.md +++ b/tuts/059-amazon-datazone-gs/amazon-datazone-gs.md @@ -4,16 +4,16 @@ This tutorial guides you through setting up and using Amazon DataZone using the ## Topics -* [Prerequisites](#prerequisites) -* [Create an Amazon DataZone domain](#create-an-amazon-datazone-domain) -* [Create projects](#create-projects) -* [Create an environment profile and environment](#create-an-environment-profile-and-environment) -* [Create a data source for AWS Glue](#create-a-data-source-for-aws-glue) -* [Create and publish custom assets](#create-and-publish-custom-assets) -* [Search for assets and subscribe](#search-for-assets-and-subscribe) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create an Amazon DataZone domain](#create-an-amazon-datazone-domain) +- [Create projects](#create-projects) +- [Create an environment profile and environment](#create-an-environment-profile-and-environment) +- [Create a data source for AWS Glue](#create-a-data-source-for-aws-glue) +- [Create and publish custom assets](#create-and-publish-custom-assets) +- [Search for assets and subscribe](#search-for-assets-and-subscribe) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites diff --git a/tuts/061-amazon-athena-gs/README.md b/tuts/061-amazon-athena-gs/README.md index 9bb39cf..4188c58 100644 --- a/tuts/061-amazon-athena-gs/README.md +++ b/tuts/061-amazon-athena-gs/README.md @@ -8,12 +8,12 @@ You can either run the automated script `amazon-athena-gs.sh` to execute all ope The script creates the following AWS resources in order: -• Athena query execution -• Athena query execution (b) -• 
Athena query execution (c) -• Athena named query -• Athena query execution (d) -• Athena query execution (e) -• Athena query execution (f) +- Athena query execution +- Athena query execution (b) +- Athena query execution (c) +- Athena named query +- Athena query execution (d) +- Athena query execution (e) +- Athena query execution (f) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/061-amazon-athena-gs/amazon-athena-gs.md b/tuts/061-amazon-athena-gs/amazon-athena-gs.md index f430abf..216dbe7 100644 --- a/tuts/061-amazon-athena-gs/amazon-athena-gs.md +++ b/tuts/061-amazon-athena-gs/amazon-athena-gs.md @@ -6,9 +6,9 @@ This tutorial walks you through using Amazon Athena with the AWS Command Line In Before you begin this tutorial, you need: -* An AWS account. If you don't have one, sign up at [https://aws.amazon.com/free/](https://aws.amazon.com/free/). -* The AWS CLI installed and configured with appropriate permissions. For installation instructions, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). -* Basic knowledge of SQL queries. +- An AWS account. If you don't have one, sign up at [https://aws.amazon.com/free/](https://aws.amazon.com/free/). +- The AWS CLI installed and configured with appropriate permissions. For installation instructions, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +- Basic knowledge of SQL queries. This tutorial uses live resources, so you are charged for the queries that you run. The estimated cost for completing this tutorial is approximately $0.0001 (one-tenth of a cent), assuming you follow the cleanup instructions. 
You aren't charged for the sample data in the location that this tutorial uses, but if you upload your own data files to Amazon S3, additional charges may apply. @@ -170,7 +170,7 @@ TABLE_QUERY="CREATE EXTERNAL TABLE IF NOT EXISTS mydatabase.cloudfront_logs ( os STRING, Browser STRING, BrowserVersion STRING - ) + ) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe' WITH SERDEPROPERTIES ( \"input.regex\" = \"^(?!#)([^ ]+)\\\\s+([^ ]+)\\\\s+([^ ]+)\\\\s+([^ ]+)\\\\s+([^ ]+)\\\\s+([^ ]+)\\\\s+([^ ]+)\\\\s+([^ ]+)\\\\s+([^ ]+)\\\\s+([^ ]+)\\\\s+[^\\\\(]+[\\\\(]([^\\\\;]+).*\\\\%20([^\\\\/]+)[\\\\/](.*)$\" @@ -244,9 +244,9 @@ Run the following command: ```bash # Execute the query and capture the query ID QUERY_ID=$(aws athena start-query-execution \ - --query-string "SELECT os, COUNT(*) count - FROM mydatabase.cloudfront_logs - WHERE date BETWEEN date '2014-07-05' AND date '2014-08-05' + --query-string "SELECT os, COUNT(*) count + FROM mydatabase.cloudfront_logs + WHERE date BETWEEN date '2014-07-05' AND date '2014-08-05' GROUP BY os" \ --result-configuration "OutputLocation=s3://$S3_BUCKET/output/" \ --query "QueryExecutionId" --output text) @@ -345,9 +345,9 @@ NAMED_QUERY_ID=$(aws athena create-named-query \ --name "OS Count Query" \ --description "Count of operating systems in CloudFront logs" \ --database "mydatabase" \ - --query-string "SELECT os, COUNT(*) count - FROM mydatabase.cloudfront_logs - WHERE date BETWEEN date '2014-07-05' AND date '2014-08-05' + --query-string "SELECT os, COUNT(*) count + FROM mydatabase.cloudfront_logs + WHERE date BETWEEN date '2014-07-05' AND date '2014-08-05' GROUP BY os" \ --query "NamedQueryId" --output text) @@ -539,7 +539,7 @@ For more information on these topics, see: Now that you've learned the basics of using Amazon Athena with the AWS CLI, you can explore these additional features: -* [Use AWS Glue Data Catalog with Athena](https://docs.aws.amazon.com/athena/latest/ug/data-sources-glue.html) - Learn how to use AWS 
Glue to create and manage your data catalog. -* [Query Amazon CloudFront logs](https://docs.aws.amazon.com/athena/latest/ug/cloudfront-logs.html) - Explore more advanced queries for CloudFront logs. -* [Connect to other data sources](https://docs.aws.amazon.com/athena/latest/ug/work-with-data-stores.html) - Learn how to connect Athena to various data sources. -* [Use workgroups to control query access and costs](https://docs.aws.amazon.com/athena/latest/ug/workgroups.html) - Organize users and applications into workgroups for better resource management. +- [Use AWS Glue Data Catalog with Athena](https://docs.aws.amazon.com/athena/latest/ug/data-sources-glue.html) - Learn how to use AWS Glue to create and manage your data catalog. +- [Query Amazon CloudFront logs](https://docs.aws.amazon.com/athena/latest/ug/cloudfront-logs.html) - Explore more advanced queries for CloudFront logs. +- [Connect to other data sources](https://docs.aws.amazon.com/athena/latest/ug/work-with-data-stores.html) - Learn how to connect Athena to various data sources. +- [Use workgroups to control query access and costs](https://docs.aws.amazon.com/athena/latest/ug/workgroups.html) - Organize users and applications into workgroups for better resource management. diff --git a/tuts/062-aws-support-gs/README.md b/tuts/062-aws-support-gs/README.md index bf22318..12b25ea 100644 --- a/tuts/062-aws-support-gs/README.md +++ b/tuts/062-aws-support-gs/README.md @@ -8,7 +8,7 @@ You can either run the automated script `aws-support-gs.sh` to execute all opera The script creates the following AWS resources in order: -• Support case -• Support case (b) +- Support case +- Support case (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/062-aws-support-gs/aws-support-gs.md b/tuts/062-aws-support-gs/aws-support-gs.md index ddaf2b3..bcb5f9a 100644 --- a/tuts/062-aws-support-gs/aws-support-gs.md +++ b/tuts/062-aws-support-gs/aws-support-gs.md @@ -4,14 +4,14 @@ This tutorial guides you through common AWS Support operations using the AWS Com ## Topics -* [Prerequisites](#prerequisites) -* [Check available services and severity levels](#check-available-services-and-severity-levels) -* [Create a support case](#create-a-support-case) -* [Manage your support cases](#manage-your-support-cases) -* [Add communications to a case](#add-communications-to-a-case) -* [Resolve a support case](#resolve-a-support-case) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Check available services and severity levels](#check-available-services-and-severity-levels) +- [Create a support case](#create-a-support-case) +- [Manage your support cases](#manage-your-support-cases) +- [Add communications to a case](#add-communications-to-a-case) +- [Resolve a support case](#resolve-a-support-case) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -24,7 +24,7 @@ Before you begin this tutorial, make sure you have the following: **Time to complete:** Approximately 15-20 minutes -**Cost:** This tutorial uses the AWS Support API, which doesn't incur additional costs beyond your AWS Support plan subscription. For pricing details, see https://aws.amazon.com/premiumsupport/pricing/. +**Cost:** This tutorial uses the AWS Support API, which doesn't incur additional costs beyond your AWS Support plan subscription. For pricing details, see https://aws.amazon.com/premiumsupport/pricing/. Let's get started with using the AWS Support API through the AWS CLI. 
@@ -317,9 +317,9 @@ For more comprehensive guidance on building production-ready solutions, refer to Now that you've learned how to use the AWS Support API through the AWS CLI, you can explore more advanced features: -* Learn how to [request a service quota increase](https://docs.aws.amazon.com/awssupport/latest/user/create-service-quota-increase.html) -* Explore [AWS Trusted Advisor](https://docs.aws.amazon.com/awssupport/latest/user/trusted-advisor.html) to optimize your AWS environment -* Understand [AWS Support response times](https://docs.aws.amazon.com/awssupport/latest/user/case-management.html#response-times-for-support-cases) for different support plans -* Learn about [adding attachments to support cases](https://docs.aws.amazon.com/awssupport/latest/user/case-management.html#adding-attachments) for more detailed troubleshooting +- Learn how to [request a service quota increase](https://docs.aws.amazon.com/awssupport/latest/user/create-service-quota-increase.html) +- Explore [AWS Trusted Advisor](https://docs.aws.amazon.com/awssupport/latest/user/trusted-advisor.html) to optimize your AWS environment +- Understand [AWS Support response times](https://docs.aws.amazon.com/awssupport/latest/user/case-management.html#response-times-for-support-cases) for different support plans +- Learn about [adding attachments to support cases](https://docs.aws.amazon.com/awssupport/latest/user/case-management.html#adding-attachments) for more detailed troubleshooting For more information about AWS Support and available commands, refer to the [AWS CLI Command Reference for AWS Support](https://docs.aws.amazon.com/cli/latest/reference/support/index.html). 
diff --git a/tuts/063-aws-iot-core-gs/README.md b/tuts/063-aws-iot-core-gs/README.md index 1ab16bd..cb1abde 100644 --- a/tuts/063-aws-iot-core-gs/README.md +++ b/tuts/063-aws-iot-core-gs/README.md @@ -8,12 +8,12 @@ You can either run the automated script `aws-iot-core-gs.sh` to execute all oper The script creates the following AWS resources in order: -• IoT Core policy -• IoT Core thing -• IoT Core keys and certificate -• IoT Core policy (b) -• IoT Core thing principal -• IoT Core policy (c) -• IoT Core policy (d) +- IoT Core policy +- IoT Core thing +- IoT Core keys and certificate +- IoT Core policy (b) +- IoT Core thing principal +- IoT Core policy (c) +- IoT Core policy (d) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/063-aws-iot-core-gs/aws-iot-core-gs.md b/tuts/063-aws-iot-core-gs/aws-iot-core-gs.md index 610aef9..4f2936c 100644 --- a/tuts/063-aws-iot-core-gs/aws-iot-core-gs.md +++ b/tuts/063-aws-iot-core-gs/aws-iot-core-gs.md @@ -6,11 +6,11 @@ This tutorial guides you through the process of setting up AWS IoT Core and conn Before you begin this tutorial, you need: -* An AWS account with permissions to create AWS IoT resources -* The AWS CLI installed and configured with your credentials -* Python 3.7 or later installed on your computer -* Git installed on your computer -* Basic familiarity with the command line interface +- An AWS account with permissions to create AWS IoT resources +- The AWS CLI installed and configured with your credentials +- Python 3.7 or later installed on your computer +- Git installed on your computer +- Basic familiarity with the command line interface **Time to complete:** Approximately 20-30 minutes @@ -430,8 +430,8 @@ For more information, see [AWS IoT Core Quotas](https://docs.aws.amazon.com/iot/ 
Now that you've successfully connected a device to AWS IoT Core and exchanged MQTT messages, you can explore more advanced features: -* [Working with rules for AWS IoT](https://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html) - Learn how to route messages to other AWS services -* [Working with device shadows](https://docs.aws.amazon.com/iot/latest/developerguide/iot-device-shadows.html) - Learn how to store and retrieve device state -* [Device provisioning](https://docs.aws.amazon.com/iot/latest/developerguide/iot-provision.html) - Learn how to provision devices at scale -* [Device Defender](https://docs.aws.amazon.com/iot/latest/developerguide/device-defender.html) - Learn how to audit and monitor your IoT devices for security issues -* [Message Quality of Service in AWS IoT](https://docs.aws.amazon.com/iot/latest/developerguide/mqtt.html#mqtt-qos) - Learn about MQTT QoS levels and how they affect message delivery reliability +- [Working with rules for AWS IoT](https://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html) - Learn how to route messages to other AWS services +- [Working with device shadows](https://docs.aws.amazon.com/iot/latest/developerguide/iot-device-shadows.html) - Learn how to store and retrieve device state +- [Device provisioning](https://docs.aws.amazon.com/iot/latest/developerguide/iot-provision.html) - Learn how to provision devices at scale +- [Device Defender](https://docs.aws.amazon.com/iot/latest/developerguide/device-defender.html) - Learn how to audit and monitor your IoT devices for security issues +- [Message Quality of Service in AWS IoT](https://docs.aws.amazon.com/iot/latest/developerguide/mqtt.html#mqtt-qos) - Learn about MQTT QoS levels and how they affect message delivery reliability diff --git a/tuts/064-amazon-neptune-gs/README.md b/tuts/064-amazon-neptune-gs/README.md index 3fa1da2..b0a9be5 100644 --- a/tuts/064-amazon-neptune-gs/README.md +++ b/tuts/064-amazon-neptune-gs/README.md @@ -8,20 +8,20 @@ 
You can either run the automated script `amazon-neptune-gs.sh` to execute all op The script creates the following AWS resources in order: -• EC2 vpc -• EC2 internet gateway -• EC2 internet gateway (b) -• EC2 subnet -• EC2 subnet (b) -• EC2 subnet (c) -• EC2 route table -• EC2 route -• EC2 route table (b) -• EC2 route table (c) -• EC2 route table (d) -• EC2 security group -• Neptune db subnet group -• Neptune db cluster -• Neptune db instance +- EC2 vpc +- EC2 internet gateway +- EC2 internet gateway (b) +- EC2 subnet +- EC2 subnet (b) +- EC2 subnet (c) +- EC2 route table +- EC2 route +- EC2 route table (b) +- EC2 route table (c) +- EC2 route table (d) +- EC2 security group +- Neptune db subnet group +- Neptune db cluster +- Neptune db instance The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/064-amazon-neptune-gs/amazon-neptune-gs.md b/tuts/064-amazon-neptune-gs/amazon-neptune-gs.md index 344999b..b7a6e97 100644 --- a/tuts/064-amazon-neptune-gs/amazon-neptune-gs.md +++ b/tuts/064-amazon-neptune-gs/amazon-neptune-gs.md @@ -6,11 +6,11 @@ This tutorial guides you through setting up an Amazon Neptune graph database usi Before you begin, make sure you have: -* An AWS account with permissions to create Neptune resources -* AWS CLI installed and configured with appropriate credentials -* Basic understanding of AWS networking concepts (VPC, subnets, security groups) -* Approximately 20-30 minutes to complete the tutorial -* Estimated cost: The resources created in this tutorial will incur charges. A db.r5.large Neptune instance costs approximately $0.35 per hour, with minimal storage costs (around $0.01 per hour for the minimum 10GB allocation). 
The total cost for completing this tutorial should be less than $0.20 if you delete all resources immediately after completion. Remember to delete all resources after completing the tutorial to avoid unnecessary charges. +- An AWS account with permissions to create Neptune resources +- AWS CLI installed and configured with appropriate credentials +- Basic understanding of AWS networking concepts (VPC, subnets, security groups) +- Approximately 20-30 minutes to complete the tutorial +- Estimated cost: The resources created in this tutorial will incur charges. A db.r5.large Neptune instance costs approximately $0.35 per hour, with minimal storage costs (around $0.01 per hour for the minimum 10GB allocation). The total cost for completing this tutorial should be less than $0.20 if you delete all resources immediately after completion. Remember to delete all resources after completing the tutorial to avoid unnecessary charges. ## Create a VPC for your Neptune database @@ -327,7 +327,7 @@ For more information on building production-ready applications with Neptune, see Now that you've learned how to create and use a Neptune database, you might want to explore: -* [Using Neptune with graph notebooks](https://docs.aws.amazon.com/neptune/latest/userguide/graph-notebooks.html) - Learn how to use Jupyter notebooks to interact with your Neptune database -* [Loading data into Neptune](https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html) - Learn how to bulk load data into your Neptune database -* [Neptune ML](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning.html) - Explore machine learning capabilities with Neptune -* [Neptune analytics](https://docs.aws.amazon.com/neptune/latest/userguide/analytics.html) - Learn about Neptune's analytics features +- [Using Neptune with graph notebooks](https://docs.aws.amazon.com/neptune/latest/userguide/graph-notebooks.html) - Learn how to use Jupyter notebooks to interact with your Neptune database +- 
[Loading data into Neptune](https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html) - Learn how to bulk load data into your Neptune database +- [Neptune ML](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning.html) - Explore machine learning capabilities with Neptune +- [Neptune analytics](https://docs.aws.amazon.com/neptune/latest/userguide/analytics.html) - Learn about Neptune's analytics features diff --git a/tuts/065-amazon-elasticache-gs/README.md b/tuts/065-amazon-elasticache-gs/README.md index 3ad183e..30d6e6e 100644 --- a/tuts/065-amazon-elasticache-gs/README.md +++ b/tuts/065-amazon-elasticache-gs/README.md @@ -8,6 +8,6 @@ You can either run the automated script `amazon-elasticache-gs.sh` to execute al The script creates the following AWS resources in order: -• ElastiCache serverless cache +- ElastiCache serverless cache The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/065-amazon-elasticache-gs/amazon-elasticache-gs.md b/tuts/065-amazon-elasticache-gs/amazon-elasticache-gs.md index 778892d..8786a14 100644 --- a/tuts/065-amazon-elasticache-gs/amazon-elasticache-gs.md +++ b/tuts/065-amazon-elasticache-gs/amazon-elasticache-gs.md @@ -4,13 +4,13 @@ This tutorial guides you through the process of creating, using, and managing an ## Topics -* [Prerequisites](#prerequisites) -* [Set up security group for ElastiCache access](#set-up-security-group-for-elasticache-access) -* [Create a Valkey serverless cache](#create-a-valkey-serverless-cache) -* [Connect to your cache](#connect-to-your-cache) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Set up security group for ElastiCache access](#set-up-security-group-for-elasticache-access) +- [Create a Valkey serverless cache](#create-a-valkey-serverless-cache) +- [Connect to your cache](#connect-to-your-cache) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -341,8 +341,8 @@ For more information on building production-ready applications with ElastiCache, Now that you've learned the basics of creating and using an ElastiCache serverless cache, you can explore more advanced features: -* [Learn about ElastiCache serverless architecture](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/serverless-overview.html) -* [Explore different caching strategies](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Strategies.html) -* [Configure user access control](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Clusters.RBAC.html) -* [Set up CloudWatch monitoring for your cache](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/CacheMetrics.html) -* [Learn about high availability with read 
replicas](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/ReadReplicas.html) +- [Learn about ElastiCache serverless architecture](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/serverless-overview.html) +- [Explore different caching strategies](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Strategies.html) +- [Configure user access control](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Clusters.RBAC.html) +- [Set up CloudWatch monitoring for your cache](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/CacheMetrics.html) +- [Learn about high availability with read replicas](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/ReadReplicas.html) diff --git a/tuts/066-amazon-cognito-gs/README.md b/tuts/066-amazon-cognito-gs/README.md index ef4be2e..539eaa4 100644 --- a/tuts/066-amazon-cognito-gs/README.md +++ b/tuts/066-amazon-cognito-gs/README.md @@ -8,8 +8,8 @@ You can either run the automated script `amazon-cognito-gs.sh` to execute all op The script creates the following AWS resources in order: -• Cognito user pool -• Cognito user pool client -• Cognito user pool domain +- Cognito user pool +- Cognito user pool client +- Cognito user pool domain The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/067-aws-payment-cryptography-gs/README.md b/tuts/067-aws-payment-cryptography-gs/README.md index 6a8cc82..b31c19e 100644 --- a/tuts/067-aws-payment-cryptography-gs/README.md +++ b/tuts/067-aws-payment-cryptography-gs/README.md @@ -8,8 +8,8 @@ You can either run the automated script `aws-payment-cryptography-gs.sh` to exec The script creates the following AWS resources in order: -• Payment-Cryptography key -• Payment-Cryptography-Data card validation data -• Payment-Cryptography-Data card validation data (b) +- Payment-Cryptography key +- Payment-Cryptography-Data card validation data +- Payment-Cryptography-Data card validation data (b) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/067-aws-payment-cryptography-gs/aws-payment-cryptography-gs.md b/tuts/067-aws-payment-cryptography-gs/aws-payment-cryptography-gs.md index 7005c9c..37cd157 100644 --- a/tuts/067-aws-payment-cryptography-gs/aws-payment-cryptography-gs.md +++ b/tuts/067-aws-payment-cryptography-gs/aws-payment-cryptography-gs.md @@ -6,9 +6,9 @@ This tutorial walks you through the process of using AWS Payment Cryptography to Before you begin, make sure that: -* You have an AWS account with permission to access the AWS Payment Cryptography service. For more information, see [IAM policies](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/security_iam_service-with-iam.html). -* You have the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) installed and configured with your credentials. -* You are using a region where AWS Payment Cryptography is available. +- You have an AWS account with permission to access the AWS Payment Cryptography service. 
For more information, see [IAM policies](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/security_iam_service-with-iam.html). +- You have the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) installed and configured with your credentials. +- You are using a region where AWS Payment Cryptography is available. This tutorial takes approximately 10 minutes to complete and uses minimal AWS resources. The only resource created is a cryptographic key, which has no direct cost, but standard AWS Payment Cryptography service rates apply for API operations. The total cost for running this tutorial is approximately $0.00154 (less than one cent) if you delete the key immediately after completing the tutorial. If you don't delete the key, it will continue to incur a storage cost of approximately $1.00 per month. @@ -225,9 +225,9 @@ For more information on building production-ready applications with AWS Payment Now that you've learned the basics of AWS Payment Cryptography, you might want to explore more advanced features: -* Learn about other types of [card data operations](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/card-data-operations.html) such as PIN verification and EMV cryptograms -* Explore [key management](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/key-management.html) features like key import, export, and rotation -* Set up [key aliases](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/key-aliases.html) for easier key management -* Implement [encryption and decryption](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/encrypt-decrypt.html) of sensitive payment data +- Learn about other types of [card data operations](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/card-data-operations.html) such as PIN verification and EMV cryptograms +- Explore [key 
management](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/key-management.html) features like key import, export, and rotation +- Set up [key aliases](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/key-aliases.html) for easier key management +- Implement [encryption and decryption](https://docs.aws.amazon.com/payment-cryptography/latest/userguide/encrypt-decrypt.html) of sensitive payment data For more examples and deployment patterns, check out the [AWS Payment Cryptography Workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/b85843d4-a5e4-40fc-9a96-de0a99312a4b/en-US) or explore sample projects on [GitHub](https://github.com/aws-samples/samples-for-payment-cryptography-service). diff --git a/tuts/069-aws-fault-injection-service-gs/README.md b/tuts/069-aws-fault-injection-service-gs/README.md index dc06cbf..636da34 100644 --- a/tuts/069-aws-fault-injection-service-gs/README.md +++ b/tuts/069-aws-fault-injection-service-gs/README.md @@ -8,23 +8,23 @@ You can either run the automated shell script (`aws-fault-injection-service-gett The script creates the following AWS resources in order: -• IAM role -• IAM role (b) -• IAM role policy -• IAM role policy (b) -• IAM role (c) -• IAM role (d) -• IAM role policy (c) -• IAM role policy (d) -• IAM instance profile -• IAM instance profile (b) -• EC2 instances -• EC2 instances (b) -• CloudWatch metric alarm -• CloudWatch metric alarm (b) -• Fis experiment template -• Fis experiment template (b) -• Fis experiment -• Fis experiment (b) +- IAM role +- IAM role (b) +- IAM role policy +- IAM role policy (b) +- IAM role (c) +- IAM role (d) +- IAM role policy (c) +- IAM role policy (d) +- IAM instance profile +- IAM instance profile (b) +- EC2 instances +- EC2 instances (b) +- CloudWatch metric alarm +- CloudWatch metric alarm (b) +- Fis experiment template +- Fis experiment template (b) +- Fis experiment +- Fis experiment (b) The script prompts you to clean up resources when you run 
it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/069-aws-fault-injection-service-gs/aws-fault-injection-service-getting-started.md b/tuts/069-aws-fault-injection-service-gs/aws-fault-injection-service-getting-started.md index 7acbb22..117d5e5 100644 --- a/tuts/069-aws-fault-injection-service-gs/aws-fault-injection-service-getting-started.md +++ b/tuts/069-aws-fault-injection-service-gs/aws-fault-injection-service-getting-started.md @@ -157,7 +157,7 @@ aws iam add-role-to-instance-profile \ --role-name EC2SSMRole ``` -Wait a few seconds for the IAM role to propagate, then confirm that the role name was added to the instance profile: +Wait a few seconds for the IAM role to propagate, then confirm that the role name was added to the instance profile: ```bash sleep 10 @@ -334,7 +334,7 @@ echo "Current alarm state: $ALARM_STATE" if [ "$ALARM_STATE" != "OK" ]; then echo "Alarm not in OK state. Waiting for alarm to stabilize (additional 60 seconds)..." sleep 60 - + ALARM_STATE=$(aws cloudwatch describe-alarms \ --alarm-names "$ALARM_NAME" \ --query "MetricAlarms[0].StateValue" \ diff --git a/tuts/070-amazon-dynamodb-gs/README.md b/tuts/070-amazon-dynamodb-gs/README.md index 3878322..3a2b792 100644 --- a/tuts/070-amazon-dynamodb-gs/README.md +++ b/tuts/070-amazon-dynamodb-gs/README.md @@ -8,10 +8,10 @@ You can either run the automated shell script (`amazon-dynamodb-gs.sh`) to quick The script creates the following AWS resources in order: -• DynamoDB table -• DynamoDB item -• DynamoDB item (b) -• DynamoDB item (c) -• DynamoDB item (d) +- DynamoDB table +- DynamoDB item +- DynamoDB item (b) +- DynamoDB item (c) +- DynamoDB item (d) The script prompts you to clean up resources when you run it, including if there's an error part way through. 
If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/073-aws-secrets-manager-gs/README.md b/tuts/073-aws-secrets-manager-gs/README.md index bf6a638..9b8e104 100644 --- a/tuts/073-aws-secrets-manager-gs/README.md +++ b/tuts/073-aws-secrets-manager-gs/README.md @@ -8,10 +8,10 @@ You can either run the automated script `aws-secrets-manager-gs.sh` to execute a The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• IAM role (b) -• Secrets Manager secret -• Secrets Manager resource policy +- IAM role +- IAM role policy +- IAM role (b) +- Secrets Manager secret +- Secrets Manager resource policy The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.md b/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.md index ba14668..8aedf0c 100644 --- a/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.md +++ b/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.md @@ -6,10 +6,10 @@ This tutorial guides you through the process of moving hardcoded secrets from yo Before you begin this tutorial, you need: -* An AWS account with permissions to create IAM roles and use AWS Secrets Manager -* The AWS Command Line Interface (AWS CLI) installed and configured -* Basic knowledge of the AWS CLI and IAM -* Approximately 15 minutes to complete the tutorial +- An AWS account with permissions to create IAM roles and use AWS Secrets Manager +- The AWS Command Line Interface (AWS CLI) installed and configured +- Basic knowledge of the AWS CLI and IAM +- Approximately 15 minutes to complete the tutorial ### Costs @@ -19,8 +19,8 @@ This tutorial creates IAM roles and a 
secret in AWS Secrets Manager. The IAM rol In this tutorial, you'll use two IAM roles to manage permissions to your secret: -* A role for managing secrets (SecretsManagerAdmin) -* A role for retrieving secrets at runtime (RoleToRetrieveSecretAtRuntime) +- A role for managing secrets (SecretsManagerAdmin) +- A role for retrieving secrets at runtime (RoleToRetrieveSecretAtRuntime) First, create the SecretsManagerAdmin role. This role will have permissions to create and manage secrets. @@ -199,14 +199,14 @@ from botocore.exceptions import ClientError def get_secret(): secret_name = "MyAPIKey" region_name = "us-east-1" # Replace with your region - + # Create a Secrets Manager client session = boto3.session.Session() client = session.client( service_name='secretsmanager', region_name=region_name ) - + try: get_secret_value_response = client.get_secret_value( SecretId=secret_name @@ -229,7 +229,7 @@ try: secret_dict = get_secret() client_id = secret_dict['ClientID'] client_secret = secret_dict['ClientSecret'] - + # Now use client_id and client_secret in your application print(f"Successfully retrieved secret for client ID: {client_id}") except Exception as e: @@ -355,8 +355,8 @@ aws iam delete-role --role-name "SecretsManagerAdmin" Now that you've learned how to move hardcoded secrets to AWS Secrets Manager, consider these next steps: -* Implement [automatic rotation for your secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) to enhance security -* Learn how to [cache secrets in your application](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets.html) to improve performance and reduce costs -* For multi-region applications, explore [replicating secrets across regions](https://docs.aws.amazon.com/secretsmanager/latest/userguide/replicate-secrets.html) to improve latency -* Use [Amazon CodeGuru Reviewer](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html) to find hardcoded secrets in 
your Java and Python applications -* Learn about different ways to [grant permissions to secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-policies.html) using resource-based policies +- Implement [automatic rotation for your secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) to enhance security +- Learn how to [cache secrets in your application](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets.html) to improve performance and reduce costs +- For multi-region applications, explore [replicating secrets across regions](https://docs.aws.amazon.com/secretsmanager/latest/userguide/replicate-secrets.html) to improve latency +- Use [Amazon CodeGuru Reviewer](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html) to find hardcoded secrets in your Java and Python applications +- Learn about different ways to [grant permissions to secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-policies.html) using resource-based policies diff --git a/tuts/074-amazon-textract-gs/README.md b/tuts/074-amazon-textract-gs/README.md index 763b314..d59ed9b 100644 --- a/tuts/074-amazon-textract-gs/README.md +++ b/tuts/074-amazon-textract-gs/README.md @@ -8,7 +8,7 @@ You can either run the provided shell script (`amazon-textract-getting-started.s The script creates the following AWS resources in order: -• S3 bucket (for document storage) -• Textract document analysis job +- S3 bucket (for document storage) +- Textract document analysis job The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
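The READMEs above all point to the script log as the reference for which resources were created. A minimal sketch of driving a cleanup checklist from such a log, assuming a hypothetical one-line-per-resource format (the `Created:` prefix and the file path are illustrative, not the scripts' actual format):

```shell
# Hypothetical log format: assume the tutorial script appends a line like
# "Created: <resource-type> <resource-id>" each time it provisions something.
cat > /tmp/tutorial-xmpl.log << 'EOF'
Created: s3-bucket amzn-s3-demo-bucket
Created: textract-job 1234xmpl-5678
EOF

# List created resources newest-first, so cleanup runs in reverse creation order.
grep '^Created:' /tmp/tutorial-xmpl.log | tac
```

Reversing the order matters because later resources typically depend on earlier ones.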
\ No newline at end of file diff --git a/tuts/075-aws-database-migration-service-gs/README.md b/tuts/075-aws-database-migration-service-gs/README.md index d513a71..c4ef70b 100644 --- a/tuts/075-aws-database-migration-service-gs/README.md +++ b/tuts/075-aws-database-migration-service-gs/README.md @@ -8,25 +8,25 @@ You can either run the provided shell script to automatically set up your DMS re The script creates the following AWS resources in order: -• Secrets Manager secret -• EC2 vpc -• EC2 subnet -• EC2 subnet (b) -• EC2 subnet (c) -• EC2 subnet (d) -• EC2 internet gateway -• EC2 internet gateway (b) -• EC2 route table -• EC2 route -• EC2 route table (b) -• EC2 route table (c) -• RDS db parameter group -• RDS db parameter group (b) -• RDS db subnet group -• RDS db instance -• RDS db instance (b) -• EC2 key pair -• EC2 instances -• Database Migration Service replication subnet group +- Secrets Manager secret +- EC2 vpc +- EC2 subnet +- EC2 subnet (b) +- EC2 subnet (c) +- EC2 subnet (d) +- EC2 internet gateway +- EC2 internet gateway (b) +- EC2 route table +- EC2 route +- EC2 route table (b) +- EC2 route table (c) +- RDS db parameter group +- RDS db parameter group (b) +- RDS db subnet group +- RDS db instance +- RDS db instance (b) +- EC2 key pair +- EC2 instances +- Database Migration Service replication subnet group The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
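The DMS resource list above is heavily interdependent (the DB instances depend on the subnet group, the subnets on the VPC), so cleanup has to run in reverse creation order. A minimal sketch of that pattern in bash, with placeholder resource names:

```shell
# Placeholder creation order mirroring the list above (names are illustrative).
CREATED=("secrets-manager-secret" "ec2-vpc" "ec2-subnet" "rds-db-instance")

# Walk the array backwards so dependent resources are removed first.
for (( i=${#CREATED[@]}-1; i>=0; i-- )); do
  echo "delete ${CREATED[$i]}"
done
```

A real teardown would replace each `echo` with the matching `aws ... delete-*` call.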
\ No newline at end of file diff --git a/tuts/075-aws-database-migration-service-gs/aws-database-migration-service-gs.md b/tuts/075-aws-database-migration-service-gs/aws-database-migration-service-gs.md index ecc87eb..0c37f44 100644 --- a/tuts/075-aws-database-migration-service-gs/aws-database-migration-service-gs.md +++ b/tuts/075-aws-database-migration-service-gs/aws-database-migration-service-gs.md @@ -455,12 +455,12 @@ while true; do --filters Name=replication-instance-id,Values="DMS-instance" \ --query 'ReplicationInstances[0].Status' \ --output text) - + if [ "$STATUS" = "available" ]; then echo "DMS replication instance is now available" break fi - + echo "Current status: $STATUS. Waiting 30 seconds..." sleep 30 done @@ -561,7 +561,7 @@ while true; do --filters Name=endpoint-arn,Values=$SOURCE_ENDPOINT_ARN \ --query 'Connections[0].Status' \ --output text) - + if [ "$STATUS" = "successful" ]; then echo "Source endpoint connection test successful" break @@ -569,7 +569,7 @@ while true; do echo "Source endpoint connection test failed" exit 1 fi - + echo "Current status: $STATUS. Waiting 10 seconds..." sleep 10 done @@ -592,7 +592,7 @@ while true; do --filters Name=endpoint-arn,Values=$TARGET_ENDPOINT_ARN \ --query 'Connections[0].Status' \ --output text) - + if [ "$STATUS" = "successful" ]; then echo "Target endpoint connection test successful" break @@ -600,7 +600,7 @@ while true; do echo "Target endpoint connection test failed" exit 1 fi - + echo "Current status: $STATUS. Waiting 10 seconds..." sleep 10 done @@ -707,12 +707,12 @@ while true; do --filters Name=replication-task-arn,Values=$TASK_ARN \ --query 'ReplicationTasks[0].Status' \ --output text) - + if [ "$STATUS" = "ready" ]; then echo "Migration task is now ready" break fi - + echo "Current status: $STATUS. Waiting 30 seconds..." 
sleep 30 done diff --git a/tuts/077-aws-account-management-gs/README.md b/tuts/077-aws-account-management-gs/README.md index 810d45d..dcc8507 100644 --- a/tuts/077-aws-account-management-gs/README.md +++ b/tuts/077-aws-account-management-gs/README.md @@ -8,7 +8,7 @@ You can either run the automated script `aws-account-management-gs.sh` to execut The script creates the following AWS resources in order: -• Account Management alternate contact information -• Account Management billing preferences +- Account Management alternate contact information +- Account Management billing preferences The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/077-aws-account-management-gs/aws-account-management-gs.md b/tuts/077-aws-account-management-gs/aws-account-management-gs.md index 2440f2f..dadf244 100644 --- a/tuts/077-aws-account-management-gs/aws-account-management-gs.md +++ b/tuts/077-aws-account-management-gs/aws-account-management-gs.md @@ -6,16 +6,16 @@ This tutorial guides you through common AWS account management operations using ## Topics -* [Prerequisites](#prerequisites) -* [View account identifiers](#view-account-identifiers) -* [View account information](#view-account-information) -* [Manage AWS regions](#manage-aws-regions) -* [Manage alternate contacts](#manage-alternate-contacts) -* [Update account name](#update-account-name) -* [Manage root user email](#manage-root-user-email) -* [Troubleshooting common issues](#troubleshooting-common-issues) -* [Cleanup](#cleanup) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [View account identifiers](#view-account-identifiers) +- [View account information](#view-account-information) +- [Manage AWS regions](#manage-aws-regions) +- [Manage alternate contacts](#manage-alternate-contacts) 
+- [Update account name](#update-account-name) +- [Manage root user email](#manage-root-user-email) +- [Troubleshooting common issues](#troubleshooting-common-issues) +- [Cleanup](#cleanup) +- [Next steps](#next-steps) ## Prerequisites @@ -419,9 +419,9 @@ aws account put-account-name --account-name "Original Account Name" Now that you've learned how to manage your AWS account using the AWS CLI, you might want to explore these related topics: -* [Managing AWS account alternate contacts](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact-alternate.html) -* [Enabling and disabling AWS Regions](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html) -* [Updating your AWS account name](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-acct-name.html) -* [Updating the root user email address](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-root-user-email.html) -* [Viewing AWS account identifiers](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html) -* [Setting up AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_basic.html) for managing multiple accounts +- [Managing AWS account alternate contacts](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact-alternate.html) +- [Enabling and disabling AWS Regions](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html) +- [Updating your AWS account name](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-acct-name.html) +- [Updating the root user email address](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-root-user-email.html) +- [Viewing AWS account identifiers](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html) +- [Setting up AWS 
Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_basic.html) for managing multiple accounts diff --git a/tuts/078-amazon-elastic-container-registry-gs/README.md b/tuts/078-amazon-elastic-container-registry-gs/README.md index fae6f4e..c400d9c 100644 --- a/tuts/078-amazon-elastic-container-registry-gs/README.md +++ b/tuts/078-amazon-elastic-container-registry-gs/README.md @@ -8,6 +8,6 @@ You can either run the automated shell script (`amazon-elastic-container-registr The script creates the following AWS resources in order: -• Ecr repository +- Ecr repository The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/079-aws-iot-device-defender-gs/README.md b/tuts/079-aws-iot-device-defender-gs/README.md index 41e5369..9180a8b 100644 --- a/tuts/079-aws-iot-device-defender-gs/README.md +++ b/tuts/079-aws-iot-device-defender-gs/README.md @@ -8,13 +8,13 @@ You can either run the provided shell script to automatically set up your IoT De The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• IAM role policy (b) -• IAM role policy (c) -• IoT Core on demand audit task -• IoT Core mitigation action -• IoT Core audit mitigation actions task -• SNS topic +- IAM role +- IAM role policy +- IAM role policy (b) +- IAM role policy (c) +- IoT Core on demand audit task +- IoT Core mitigation action +- IoT Core audit mitigation actions task +- SNS topic The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
\ No newline at end of file diff --git a/tuts/079-aws-iot-device-defender-gs/aws-iot-device-defender-gs.md b/tuts/079-aws-iot-device-defender-gs/aws-iot-device-defender-gs.md index 0514dcf..013bc90 100644 --- a/tuts/079-aws-iot-device-defender-gs/aws-iot-device-defender-gs.md +++ b/tuts/079-aws-iot-device-defender-gs/aws-iot-device-defender-gs.md @@ -204,7 +204,7 @@ while [ "$TASK_STATUS" != "COMPLETED" ]; do TASK_DETAILS=$(aws iot describe-audit-task --task-id "$TASK_ID") TASK_STATUS=$(echo "$TASK_DETAILS" | grep -o '"taskStatus": "[^"]*' | cut -d'"' -f4) echo "Current task status: $TASK_STATUS" - + # Exit the loop if the task fails if [ "$TASK_STATUS" == "FAILED" ]; then echo "Audit task failed" @@ -272,23 +272,23 @@ If the audit found any non-compliant resources, we can apply our mitigation acti if [ "$HAS_FINDINGS" = true ]; then MITIGATION_TASK_ID="MitigationTask-$(date +%s)" echo "Starting mitigation actions task with ID: $MITIGATION_TASK_ID" - + aws iot start-audit-mitigation-actions-task \ --task-id "$MITIGATION_TASK_ID" \ --target "{\"findingIds\":[\"$FINDING_ID\"]}" \ --audit-check-to-actions-mapping "{\"LOGGING_DISABLED_CHECK\":[\"EnableErrorLoggingAction\"]}" - + echo "Mitigation actions task started successfully" - + # Wait for the mitigation task to complete echo "Waiting for mitigation task to complete..." sleep 10 - + # List mitigation tasks MITIGATION_TASKS=$(aws iot list-audit-mitigation-actions-tasks \ --start-time "$(date -d 'today' '+%Y-%m-%d')" \ --end-time "$(date -d 'tomorrow' '+%Y-%m-%d')") - + echo "Mitigation tasks:" echo "$MITIGATION_TASKS" else @@ -348,14 +348,14 @@ LOGGING_RESULT=$(aws iot set-v2-logging-options \ # Check if v2 logging succeeded if echo "$LOGGING_RESULT" | grep -q "error\|Error\|ERROR"; then echo "Failed to set up AWS IoT v2 logging, trying v1 logging..." 
- + # Create the logging options payload for v1 API LOGGING_OPTIONS_PAYLOAD="{\"roleArn\":\"$LOGGING_ROLE_ARN\",\"logLevel\":\"ERROR\"}" - + # Try the older set-logging-options command with proper payload format LOGGING_RESULT_V1=$(aws iot set-logging-options \ --logging-options-payload "$LOGGING_OPTIONS_PAYLOAD" 2>&1) - + if echo "$LOGGING_RESULT_V1" | grep -q "error\|Error\|ERROR"; then echo "Failed to set up AWS IoT logging with both v1 and v2 methods" echo "V2 result: $LOGGING_RESULT" @@ -403,7 +403,7 @@ if echo "$DISABLE_V2_RESULT" | grep -q "error\|Error\|ERROR"; then # Try v1 logging disable DISABLE_V1_RESULT=$(aws iot set-logging-options \ --logging-options-payload "{\"logLevel\":\"DISABLED\"}" 2>&1) - + if echo "$DISABLE_V1_RESULT" | grep -q "error\|Error\|ERROR"; then echo "Warning: Could not disable logging through either v1 or v2 methods" else diff --git a/tuts/080-aws-step-functions-gs/README.md b/tuts/080-aws-step-functions-gs/README.md index 2227f01..c95220a 100644 --- a/tuts/080-aws-step-functions-gs/README.md +++ b/tuts/080-aws-step-functions-gs/README.md @@ -8,14 +8,14 @@ You can either run the automated shell script (`aws-step-functions-gs.sh`) to qu The script creates the following AWS resources in order: -• IAM role -• IAM policy -• IAM role policy -• Step Functions state machine -• Step Functions execution -• Step Functions execution (b) -• IAM policy (b) -• IAM role policy (b) -• Step Functions execution (c) +- IAM role +- IAM policy +- IAM role policy +- Step Functions state machine +- Step Functions execution +- Step Functions execution (b) +- IAM policy (b) +- IAM role policy (b) +- Step Functions execution (c) The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. 
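Several of the loops in these tutorials poll a status value until it settles (the audit task, the CloudWatch alarm, the DMS instance). A generic sketch of that wait pattern with a bounded attempt count, where `get_status` stands in for the relevant `aws ... describe` call:

```shell
# Stand-in for an AWS describe call; a real script would query the service here.
STATUS_FILE=/tmp/status-xmpl
echo "IN_PROGRESS" > "$STATUS_FILE"
get_status() { cat "$STATUS_FILE"; }

MAX_ATTEMPTS=5
STATUS=""
for (( i=1; i<=MAX_ATTEMPTS; i++ )); do
  STATUS=$(get_status)
  [ "$STATUS" = "COMPLETED" ] && break
  # Simulate the task finishing so the sketch terminates;
  # a real loop would sleep here and re-poll the service.
  echo "COMPLETED" > "$STATUS_FILE"
done
echo "Final status: $STATUS after $i checks"
```

Bounding the attempts keeps a stuck resource from hanging the script, which matters for the error-path cleanup the READMEs describe.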
\ No newline at end of file diff --git a/tuts/081-aws-elemental-mediaconnect-gs/README.md b/tuts/081-aws-elemental-mediaconnect-gs/README.md index ff26656..419dd00 100644 --- a/tuts/081-aws-elemental-mediaconnect-gs/README.md +++ b/tuts/081-aws-elemental-mediaconnect-gs/README.md @@ -8,6 +8,6 @@ You can either run the provided shell script to automatically set up your MediaC The script creates the following AWS resources in order: -• Elemental MediaConnect flow +- Elemental MediaConnect flow The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/081-aws-elemental-mediaconnect-gs/aws-elemental-mediaconnect-gs.md b/tuts/081-aws-elemental-mediaconnect-gs/aws-elemental-mediaconnect-gs.md index 8515ceb..d32cf05 100644 --- a/tuts/081-aws-elemental-mediaconnect-gs/aws-elemental-mediaconnect-gs.md +++ b/tuts/081-aws-elemental-mediaconnect-gs/aws-elemental-mediaconnect-gs.md @@ -4,15 +4,15 @@ This tutorial shows you how to use AWS Elemental MediaConnect with the AWS Comma ## Topics -* [Prerequisites](#prerequisites) -* [Verify access to AWS Elemental MediaConnect](#verify-access-to-aws-elemental-mediaconnect) -* [Create a flow](#create-a-flow) -* [Add an output](#add-an-output) -* [Grant an entitlement](#grant-an-entitlement) -* [Share details with affiliates](#share-details-with-affiliates) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Verify access to AWS Elemental MediaConnect](#verify-access-to-aws-elemental-mediaconnect) +- [Create a flow](#create-a-flow) +- [Add an output](#add-an-output) +- [Grant an entitlement](#grant-an-entitlement) +- [Share details with affiliates](#share-details-with-affiliates) +- [Clean up 
resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -25,14 +25,14 @@ Before you begin this tutorial, make sure you have the following: This tutorial is based on a scenario where you want to: -* Ingest a live video stream of an awards show that is taking place in New York City -* Distribute your video to an affiliate in Boston who does not have an AWS account, and wants content sent to their on-premises encoder -* Share your video with an affiliate in Philadelphia who wants to use their AWS account to distribute the video to their three local stations +- Ingest a live video stream of an awards show that is taking place in New York City +- Distribute your video to an affiliate in Boston who does not have an AWS account, and wants content sent to their on-premises encoder +- Share your video with an affiliate in Philadelphia who wants to use their AWS account to distribute the video to their three local stations ### Estimated time and cost -* **Time to complete**: Approximately 30 minutes -* **Cost**: The resources created in this tutorial will cost approximately $0.41 per hour while running, including: +- **Time to complete**: Approximately 30 minutes +- **Cost**: The resources created in this tutorial will cost approximately $0.41 per hour while running, including: * Flow ingest: $0.08/hour * Flow egress: $0.08/hour * Data transfer (for a typical 5 Mbps stream): $0.25/hour @@ -66,12 +66,12 @@ If you have the correct permissions, this command will return a list of existing Now, create an AWS Elemental MediaConnect flow to ingest your video from your on-premises encoder into the AWS Cloud. 
For this tutorial, we'll use the following details: -* Flow name: AwardsNYCShow -* Source name: AwardsNYCSource -* Source protocol: Zixi push -* Zixi stream ID: ZixiAwardsNYCFeed -* CIDR block sending the content: 10.24.34.0/23 -* Source encryption: None +- Flow name: AwardsNYCShow +- Source name: AwardsNYCSource +- Source protocol: Zixi push +- Zixi stream ID: ZixiAwardsNYCFeed +- CIDR block sending the content: 10.24.34.0/23 +- Source encryption: None **Create a flow** @@ -138,12 +138,12 @@ $ FLOW_ARN="arn:aws:mediaconnect:us-east-2:123456789012:flow:1-abcd1234-b786ff4d To send content to your affiliate in Boston, add an output to your flow. This output will send your video to your Boston affiliate's on-premises encoder. We'll use these details: -* Output name: AwardsNYCOutput -* Output protocol: Zixi push -* Zixi stream ID: ZixiAwardsOutput -* IP address of the Boston affiliate's on-premises encoder: 198.51.100.11 -* Port: 1024 -* Output encryption: None +- Output name: AwardsNYCOutput +- Output protocol: Zixi push +- Zixi stream ID: ZixiAwardsOutput +- IP address of the Boston affiliate's on-premises encoder: 198.51.100.11 +- Port: 1024 +- Output encryption: None **Add an output to the flow** @@ -174,9 +174,9 @@ $ aws mediaconnect add-flow-outputs \ Grant an entitlement to allow your Philadelphia affiliate to use your content as the source for their AWS Elemental MediaConnect flow. 
We'll use these details: -* Entitlement name: PhillyTeam -* Philadelphia affiliate's AWS account ID: 222233334444 -* Output encryption: None +- Entitlement name: PhillyTeam +- Philadelphia affiliate's AWS account ID: 222233334444 +- Output encryption: None **Grant an entitlement** @@ -304,15 +304,15 @@ This tutorial demonstrates the basic functionality of AWS Elemental MediaConnect For comprehensive guidance on building production-ready architectures, refer to: -* [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) -* [Media & Entertainment on AWS](https://aws.amazon.com/media/) -* [AWS Media Services](https://aws.amazon.com/media-services/) +- [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) +- [Media & Entertainment on AWS](https://aws.amazon.com/media/) +- [AWS Media Services](https://aws.amazon.com/media-services/) ## Next steps Now that you've learned the basics of using AWS Elemental MediaConnect with the AWS CLI, you can explore more advanced features: -* Learn how to [encrypt your content](https://docs.aws.amazon.com/mediaconnect/latest/ug/encryption.html) for secure transmission -* Explore [monitoring options](https://docs.aws.amazon.com/mediaconnect/latest/ug/monitoring.html) for your MediaConnect flows -* Set up [failover sources](https://docs.aws.amazon.com/mediaconnect/latest/ug/sources-failover.html) for high availability -* Learn about [MediaConnect gateways](https://docs.aws.amazon.com/mediaconnect/latest/ug/gateways.html) for cloud-based video processing +- Learn how to [encrypt your content](https://docs.aws.amazon.com/mediaconnect/latest/ug/encryption.html) for secure transmission +- Explore [monitoring options](https://docs.aws.amazon.com/mediaconnect/latest/ug/monitoring.html) for your MediaConnect flows +- Set up [failover sources](https://docs.aws.amazon.com/mediaconnect/latest/ug/sources-failover.html) for high availability +- Learn about [MediaConnect 
gateways](https://docs.aws.amazon.com/mediaconnect/latest/ug/gateways.html) for cloud-based video processing diff --git a/tuts/082-amazon-polly-gs/README.md b/tuts/082-amazon-polly-gs/README.md index 72af9be..085f042 100644 --- a/tuts/082-amazon-polly-gs/README.md +++ b/tuts/082-amazon-polly-gs/README.md @@ -14,7 +14,7 @@ You can either run the automated shell script (`amazon-polly-getting-started.sh` The script creates the following AWS resources in order: -• Polly lexicon +- Polly lexicon The script prompts you to clean up resources when you run it, including if there's an error part way through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created. ## Prerequisites diff --git a/tuts/082-amazon-polly-gs/amazon-polly-getting-started.md b/tuts/082-amazon-polly-gs/amazon-polly-getting-started.md index d65ed77..f153b1b 100644 --- a/tuts/082-amazon-polly-gs/amazon-polly-getting-started.md +++ b/tuts/082-amazon-polly-gs/amazon-polly-getting-started.md @@ -151,12 +151,12 @@ First, create a lexicon file that defines custom pronunciations. You can create ```bash cat << 'EOF' > example.pls - AWS diff --git a/tuts/085-amazon-ecs-service-connect/README.md b/tuts/085-amazon-ecs-service-connect/README.md index 77199e5..1ad0023 100644 --- a/tuts/085-amazon-ecs-service-connect/README.md +++ b/tuts/085-amazon-ecs-service-connect/README.md @@ -8,12 +8,12 @@ You can either run the automated shell script (`amazon-ecs-service-connect.sh`) The script creates the following AWS resources in order: -• EC2 security group -• Logs log group -• Logs log group (b) -• ECS cluster -• IAM role -• ECS task definition -• ECS service +- EC2 security group +- Logs log group +- Logs log group (b) +- ECS cluster +- IAM role +- ECS task definition +- ECS service The script prompts you to clean up resources when you run it, including if there's an error part way through. 
If you need to clean up resources later, you can use the script log as a reference point for which resources were created. \ No newline at end of file diff --git a/tuts/085-amazon-ecs-service-connect/amazon-ecs-service-connect.md b/tuts/085-amazon-ecs-service-connect/amazon-ecs-service-connect.md index eaa0f65..7d318fc 100644 --- a/tuts/085-amazon-ecs-service-connect/amazon-ecs-service-connect.md +++ b/tuts/085-amazon-ecs-service-connect/amazon-ecs-service-connect.md @@ -4,17 +4,17 @@ This tutorial guides you through setting up Amazon ECS Service Connect using the ## Topics -* [Prerequisites](#prerequisites) -* [Create the VPC infrastructure](#create-the-vpc-infrastructure) -* [Set up logging](#set-up-logging) -* [Create the ECS cluster](#create-the-ecs-cluster) -* [Configure IAM roles](#configure-iam-roles) -* [Register the task definition](#register-the-task-definition) -* [Create the service with Service Connect](#create-the-service-with-service-connect) -* [Verify the deployment](#verify-the-deployment) -* [Clean up resources](#clean-up-resources) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create the VPC infrastructure](#create-the-vpc-infrastructure) +- [Set up logging](#set-up-logging) +- [Create the ECS cluster](#create-the-ecs-cluster) +- [Configure IAM roles](#configure-iam-roles) +- [Register the task definition](#register-the-task-definition) +- [Create the service with Service Connect](#create-the-service-with-service-connect) +- [Verify the deployment](#verify-the-deployment) +- [Clean up resources](#clean-up-resources) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -683,19 +683,19 @@ This tutorial is designed to help you learn how ECS Service Connect works in a s ### Security considerations -* **Private subnets**: Move ECS tasks to private subnets and use a NAT Gateway for outbound internet access -* **Service Connect TLS**: Enable 
TLS encryption for service-to-service communication -* **Secrets management**: Use AWS Secrets Manager for sensitive configuration data -* **Network security**: Implement more restrictive security group rules and consider network ACLs -* **Container security**: Scan container images for vulnerabilities and use private ECR repositories +- **Private subnets**: Move ECS tasks to private subnets and use a NAT Gateway for outbound internet access +- **Service Connect TLS**: Enable TLS encryption for service-to-service communication +- **Secrets management**: Use AWS Secrets Manager for sensitive configuration data +- **Network security**: Implement more restrictive security group rules and consider network ACLs +- **Container security**: Scan container images for vulnerabilities and use private ECR repositories ### Architecture considerations -* **Auto scaling**: Configure ECS Service Auto Scaling based on CloudWatch metrics -* **Load balancing**: Add an Application Load Balancer for external traffic -* **Multi-region deployment**: Implement cross-region deployment for disaster recovery -* **Monitoring and observability**: Add comprehensive monitoring, alerting, and distributed tracing -* **Database integration**: Add managed database services with proper scaling and backup strategies +- **Auto scaling**: Configure ECS Service Auto Scaling based on CloudWatch metrics +- **Load balancing**: Add an Application Load Balancer for external traffic +- **Multi-region deployment**: Implement cross-region deployment for disaster recovery +- **Monitoring and observability**: Add comprehensive monitoring, alerting, and distributed tracing +- **Database integration**: Add managed database services with proper scaling and backup strategies For comprehensive guidance on production-ready architectures, see the [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html) and [AWS Architecture Center](https://aws.amazon.com/architecture/). 
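The auto scaling item in the architecture considerations above can be sketched with the AWS CLI's Application Auto Scaling commands. This is a minimal sketch, not part of the tutorial itself: the cluster name (`tutorial-cluster`), service name (`service-connect-service`), capacity bounds, and the 50% CPU target are all assumed placeholder values. The target-tracking configuration is written to a local file first; the `aws` invocations that would use it are shown as comments because they require an account and a running service.

```shell
# Target-tracking configuration for ECS Service Auto Scaling.
# ECSServiceAverageCPUUtilization is a predefined ECS metric; the 50%
# target and 60-second cooldowns are example values, not recommendations.
cat > scaling-policy.json << 'EOF'
{
  "TargetValue": 50.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleInCooldown": 60,
  "ScaleOutCooldown": 60
}
EOF

# Register the service as a scalable target, then attach the policy.
# tutorial-cluster and service-connect-service are placeholder names.
# aws application-autoscaling register-scalable-target \
#   --service-namespace ecs \
#   --scalable-dimension ecs:service:DesiredCount \
#   --resource-id service/tutorial-cluster/service-connect-service \
#   --min-capacity 1 \
#   --max-capacity 4
# aws application-autoscaling put-scaling-policy \
#   --service-namespace ecs \
#   --scalable-dimension ecs:service:DesiredCount \
#   --resource-id service/tutorial-cluster/service-connect-service \
#   --policy-name cpu-target-tracking \
#   --policy-type TargetTrackingScaling \
#   --target-tracking-scaling-policy-configuration file://scaling-policy.json
```

Keeping the configuration in a file rather than inline JSON on the command line matches the heredoc style the tutorials already use and keeps the policy easy to diff and reuse.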
For security best practices, see the [AWS Security Best Practices](https://docs.aws.amazon.com/security/latest/userguide/security-best-practices.html). @@ -703,9 +703,9 @@ For comprehensive guidance on production-ready architectures, see the [AWS Well- Now that you've successfully configured ECS Service Connect, consider exploring these related topics: -* [Service Connect concepts](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-concepts.html) - Learn more about Service Connect architecture and components -* [Service Connect configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-configuration.html) - Explore advanced Service Connect configuration options -* [ECS Service Connect with Application Load Balancer](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-alb.html) - Integrate Service Connect with load balancers for external traffic -* [Service Connect TLS encryption](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-tls.html) - Secure inter-service communication with TLS -* [ECS Exec](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html) - Debug and troubleshoot your containers using ECS Exec -* [Amazon ECS monitoring](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-metrics.html) - Monitor your ECS services with CloudWatch metrics and logs +- [Service Connect concepts](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-concepts.html) - Learn more about Service Connect architecture and components +- [Service Connect configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-configuration.html) - Explore advanced Service Connect configuration options +- [ECS Service Connect with Application Load Balancer](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-alb.html) - Integrate Service Connect with load balancers for external 
traffic +- [Service Connect TLS encryption](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-tls.html) - Secure inter-service communication with TLS +- [ECS Exec](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html) - Debug and troubleshoot your containers using ECS Exec +- [Amazon ECS monitoring](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-metrics.html) - Monitor your ECS services with CloudWatch metrics and logs diff --git a/tuts/086-amazon-ecs-fargate-linux/README.md b/tuts/086-amazon-ecs-fargate-linux/README.md index 8c50fac..516258c 100644 --- a/tuts/086-amazon-ecs-fargate-linux/README.md +++ b/tuts/086-amazon-ecs-fargate-linux/README.md @@ -8,10 +8,10 @@ You can either run the automated shell script (`amazon-ecs-fargate-linux.sh`) to The script creates the following AWS resources in order: -• IAM role -• IAM role policy -• ECS cluster -• ECS task definition -• EC2 security group +- IAM role +- IAM role policy +- ECS cluster +- ECS task definition +- EC2 security group The script prompts you to clean up resources when you run it, including if there's an error partway through. If you need to clean up resources later, you can use the script log as a reference point for which resources were created.
\ No newline at end of file diff --git a/tuts/086-amazon-ecs-fargate-linux/amazon-ecs-fargate-linux.md b/tuts/086-amazon-ecs-fargate-linux/amazon-ecs-fargate-linux.md index 889453b..350fd5b 100644 --- a/tuts/086-amazon-ecs-fargate-linux/amazon-ecs-fargate-linux.md +++ b/tuts/086-amazon-ecs-fargate-linux/amazon-ecs-fargate-linux.md @@ -4,14 +4,14 @@ This tutorial shows you how to create and run an Amazon ECS Linux task using the ## Topics -* [Prerequisites](#prerequisites) -* [Create the cluster](#create-the-cluster) -* [Create a task definition](#create-a-task-definition) -* [Create the service](#create-the-service) -* [View your service](#view-your-service) -* [Clean up](#clean-up) -* [Going to production](#going-to-production) -* [Next steps](#next-steps) +- [Prerequisites](#prerequisites) +- [Create the cluster](#create-the-cluster) +- [Create a task definition](#create-a-task-definition) +- [Create the service](#create-the-service) +- [View your service](#view-your-service) +- [Clean up](#clean-up) +- [Going to production](#going-to-production) +- [Next steps](#next-steps) ## Prerequisites @@ -24,9 +24,9 @@ Before you begin this tutorial, make sure you have the following. The AWS CLI attempts to automatically create the task execution IAM role, which is required for Fargate tasks. To ensure that the AWS CLI can create this IAM role, one of the following must be true: -* Your user has administrator access. -* Your user has the IAM permissions to create a service role. -* A user with administrator access has manually created the task execution role so that it is available on the account to be used. +- Your user has administrator access. +- Your user has the IAM permissions to create a service role. +- A user with administrator access has manually created the task execution role so that it is available on the account to be used. ### Cost considerations @@ -80,7 +80,7 @@ A task definition is like a blueprint for your application. 
It specifies which D **Register a task definition** -First, create a JSON file that defines your task. The following command creates a task definition file for a simple web application. Replace the `executionRoleArn` with your own. +First, create a JSON file that defines your task. The following command creates a task definition file for a simple web application. Replace the `executionRoleArn` with your own. ``` $ cat > task-definition.json << 'EOF' @@ -626,17 +626,17 @@ This tutorial is designed to help you understand how Amazon ECS and AWS Fargate For comprehensive guidance on production-ready architectures and security best practices, see: -* [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html) -* [Amazon ECS security best practices](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security.html) -* [AWS Architecture Center](https://aws.amazon.com/architecture/) -* [Amazon ECS Workshop](https://ecsworkshop.com/) +- [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html) +- [Amazon ECS security best practices](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security.html) +- [AWS Architecture Center](https://aws.amazon.com/architecture/) +- [Amazon ECS Workshop](https://ecsworkshop.com/) ## Next steps Now that you've successfully created and run an Amazon ECS task using Fargate, you can explore additional features: -* [Amazon ECS task definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) - Learn more about configuring task definitions with advanced options like environment variables, volumes, and logging. -* [Amazon ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) - Discover how to configure load balancers, auto scaling, and service discovery for your services. 
-* [Amazon ECS clusters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html) - Explore cluster management, capacity providers, and container insights. -* [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) - Learn about Fargate platform versions, task networking, and storage options. -* [Amazon ECS security](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security.html) - Understand security best practices for ECS tasks and services. +- [Amazon ECS task definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) - Learn more about configuring task definitions with advanced options like environment variables, volumes, and logging. +- [Amazon ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) - Discover how to configure load balancers, auto scaling, and service discovery for your services. +- [Amazon ECS clusters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html) - Explore cluster management, capacity providers, and container insights. +- [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) - Learn about Fargate platform versions, task networking, and storage options. +- [Amazon ECS security](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security.html) - Understand security best practices for ECS tasks and services.
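The task definition step that the 086 tutorial describes ("create a JSON file that defines your task, then register it") can be sketched as follows. This is a placeholder sketch patterned on the tutorial, not the tutorial's exact file: the family name, container name, image, and execution role ARN (using the repo's standard obfuscated account ID 123456789012) are all assumed values. The `register-task-definition` call is shown as a comment because it requires AWS credentials.

```shell
# Minimal Fargate task definition for a simple web container.
# networkMode must be awsvpc for Fargate; "256" CPU units / "512" MiB
# is the smallest standard Fargate size combination.
cat > task-definition.json << 'EOF'
{
  "family": "fargate-tutorial-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "public.ecr.aws/docker/library/httpd:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
EOF

# Validate the JSON locally before registering it (requires Python 3).
python3 -m json.tool task-definition.json > /dev/null && echo "valid JSON"

# Register it with:
# aws ecs register-task-definition --cli-input-json file://task-definition.json
```

Validating the file locally before calling the API surfaces malformed JSON immediately, which is easier to debug than a parse error returned by the service.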