Background
I just cleared the "AWS Certified Developer - Associate" certification exam yesterday with 90%. I had already cleared the "AWS Certified Solutions Architect - Associate" exam six months ago with 89%. You can see my badges below.
While preparing, I realized that some questions are based on AWS service limits. These can be straightforward questions, or they can be slightly twisted. Either way, knowing the service limits helps a lot, so I am going to summarize the ones I feel are important from a certification perspective.
NOTE: AWS service limits can change at any time, so it is best to refer to the FAQ sections of the corresponding services to confirm. The following limits are as of June 2018.
AWS service limits & constraints
Following are AWS services and their corresponding limits. Each service has more limits and constraints than listed here; I am simply summarising the ones relevant to my exam preparation, practice quizzes, and actual exam experience. Please let me know in the comments if any of these limits have changed, and I will update accordingly. Thanks.
Consolidated billing
- There is a soft limit of 20 accounts per organization and a hard limit of one level of billing hierarchy.
- For more details refer - https://aws.amazon.com/answers/account-management/aws-multi-account-billing-strategy/
AWS S3
- By default, customers can provision up to 100 buckets per AWS account. However, you can increase your Amazon S3 bucket limit by visiting AWS Service Limits.
- A bucket name can be between 3 and 63 characters long and can contain only lowercase letters, numbers, periods, and dashes.
- Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
- For more details refer - https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
- AWS S3 offers unlimited total storage.
- Each individual object, however, can range from 0 bytes to 5 TB.
- The largest object that can be uploaded in a single PUT is 5 GB.
- For objects larger than 100 MB, customers should consider using the Multipart Upload capability (a sketch follows this list).
- For further details refer - https://aws.amazon.com/s3/faqs/
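To illustrate, here is a minimal sketch of uploading a large file with boto3's high-level transfer API, which switches to Multipart Upload automatically above a configurable threshold. The bucket and file names are placeholders:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above multipart_threshold are uploaded via Multipart Upload;
# 100 MB matches the size at which AWS recommends considering it.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # 100 MB
    multipart_chunksize=100 * 1024 * 1024,  # size of each uploaded part
)

# "my-example-bucket" and "backup.tar" are placeholder names.
s3.upload_file("backup.tar", "my-example-bucket", "backup.tar", Config=config)
```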
Glacier
- There is no maximum limit to the total amount of data that can be stored in Amazon Glacier.
- Individual archives are limited to a maximum size of 40 terabytes.
- For more details refer - https://aws.amazon.com/glacier/faqs/
Redshift
- The block size for columnar storage is 1 MB (1,024 KB).
- For more details refer - https://docs.aws.amazon.com/redshift/latest/dg/c_columnar_storage_disk_mem_mgmnt.html
AWS EC2
- There is a limit of 20 EC2 instances per region. However, this may vary from region to region. Use the EC2 Service Limits page in the Amazon EC2 console to view the current limits for resources provided by Amazon EC2 on a per-region basis. This limit can be increased on request.
- For more details refer - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html
- Size limit for a root device for Amazon EBS-Backed AMI is 16 TiB
- Size limit for a root device for Amazon Instance Store-Backed AMI is 10 GiB
- For more details refer - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html
- When you enable connection draining, you can specify a maximum time for the load balancer to keep connections alive before reporting the instance as de-registered. The timeout can be set to a value between 1 and 3,600 seconds; the default is 300 seconds. A sketch of enabling it follows this list.
- For more details refer - https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-conn-drain.html
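As an illustration, connection draining on a Classic Load Balancer can also be enabled programmatically; this boto3 sketch uses a placeholder load balancer name:

```python
import boto3

# Classic Load Balancer client ("elb"); the name below is a placeholder.
elb = boto3.client("elb")

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",
    LoadBalancerAttributes={
        "ConnectionDraining": {
            "Enabled": True,
            "Timeout": 300,  # seconds; allowed range is 1-3,600, default 300
        }
    },
)
```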
VPC
- You can have a maximum of 5 VPCs per region.
- You can have a maximum of 200 subnets per VPC
- Only one internet gateway can be attached to a VPC at a time.
- Only one virtual private gateway can be attached to a VPC at a time.
- A subnet always belongs to exactly one AZ; it cannot span AZs.
- For more details refer - https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html
Route 53
- You can manage up to 50 domain names by default; however, this is a soft limit and can be raised by contacting AWS support.
- For more details refer - https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html
CloudWatch
- Standard/basic monitoring - metrics every 5 minutes
- Detailed monitoring - metrics every 1 minute (a sketch of enabling it follows this list)
- For more details refer - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html
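For example, detailed (1-minute) monitoring can be toggled per instance with boto3; the instance ID below is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# After this call the instance reports metrics at 1-minute intervals
# instead of the basic 5-minute intervals.
ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])

# To revert to basic (5-minute) monitoring:
ec2.unmonitor_instances(InstanceIds=["i-0123456789abcdef0"])
```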
CloudFormation
- Maximum number of AWS CloudFormation stacks that you can create - 200 stacks
- Maximum number of parameters that you can declare in your AWS CloudFormation template - 60
- Maximum number of outputs that you can declare in your AWS CloudFormation template - 60
- For more details refer - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html
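Rather than hard-coding these numbers, you can query the current per-account limits at runtime. A small boto3 sketch:

```python
import boto3

cfn = boto3.client("cloudformation")

# describe_account_limits returns the account's current limits,
# e.g. StackLimit (default 200), as Name/Value pairs.
for limit in cfn.describe_account_limits()["AccountLimits"]:
    print(limit["Name"], limit["Value"])
```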
Lambda
- Maximum configuration: 3 GB memory and 5 minutes timeout (a sketch of setting these follows this list)
- Default configuration: 128 MB memory and 3 seconds timeout
- 512 MB of temporary disk space, i.e. /tmp
- For more details refer - https://docs.aws.amazon.com/lambda/latest/dg/limits.html
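Memory and timeout are configured per function; here is a minimal boto3 sketch with a placeholder function name (values must stay within the limits above):

```python
import boto3

lam = boto3.client("lambda")

# "my-function" is a placeholder. As of June 2018, MemorySize can go up
# to about 3 GB and Timeout up to 300 seconds (5 minutes).
lam.update_function_configuration(
    FunctionName="my-function",
    MemorySize=1024,  # MB, allocated in 64 MB increments
    Timeout=60,       # seconds
)
```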
DynamoDB
- There is an initial limit of 256 tables per region. You can raise a request to increase this limit.
- You can define a maximum of 5 local secondary indexes and 5 global secondary indexes per table (hard limit) - 10 secondary indexes in total.
- The maximum size of item collection is 10GB
- The minimum amount of reserved capacity that can be purchased is 100 capacity units.
- The maximum item size in DynamoDB is 400 KB, which includes both the attribute names (UTF-8 binary length) and the attribute values (also binary length); attribute names count towards the size limit. There is no limit on the number of items in a table.
- For more details refer - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html
- A single BatchGetItem operation can retrieve up to 16 MB of data, containing as many as 100 items.
- For more details refer - https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
- A single Scan operation reads up to the maximum number of items set (if using the Limit parameter) or a maximum of 1 MB of data, and only then applies any filtering to the results using FilterExpression (a pagination sketch follows this list).
- For more details refer - https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html
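Because of the 1 MB cap per Scan call, reading a whole table means paginating with LastEvaluatedKey. A minimal boto3 sketch with a placeholder table name:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# "my-table" is a placeholder. Each Scan call returns at most 1 MB of
# data; LastEvaluatedKey marks where the next page should start.
items, start_key = [], None
while True:
    kwargs = {"TableName": "my-table"}
    if start_key:
        kwargs["ExclusiveStartKey"] = start_key
    page = dynamodb.scan(**kwargs)
    items.extend(page["Items"])
    start_key = page.get("LastEvaluatedKey")
    if not start_key:  # no more pages
        break
```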
SQS
- You can create any number of message queues.
- Maximum configuration: 14 days message retention and 12 hours visibility timeout
- Default configuration: 4 days message retention and 30 seconds visibility timeout
- A single request can contain 1 to 10 messages, up to a maximum total payload of 256 KB.
- Each 64 KB chunk of the payload is billed as one request; for example, a single API call with a 256 KB payload is billed as four requests.
- To configure the maximum message size, use the console or the SetQueueAttributes method to set the MaximumMessageSize attribute. This attribute specifies the limit on how many bytes an Amazon SQS message can contain. Set this limit to a value between 1,024 bytes (1 KB) and 262,144 bytes (256 KB). A sketch follows this list.
- For more details refer - https://aws.amazon.com/sqs/faqs/
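A minimal boto3 sketch of setting these attributes via SetQueueAttributes; the queue URL is a placeholder, and note that all attribute values are passed as strings:

```python
import boto3

sqs = boto3.client("sqs")

# The queue URL below is a placeholder.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    Attributes={
        "MaximumMessageSize": "262144",                 # bytes; 1,024-262,144
        "MessageRetentionPeriod": str(14 * 24 * 3600),  # seconds; max 14 days
        "VisibilityTimeout": "43200",                   # seconds; max 12 hours
    },
)
```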
SNS
- By default, SNS offers 10 million subscriptions per topic and 100,000 topics per account. To request a higher limit, please contact Support.
- Topic names are limited to 256 characters.
- The SNS subscription confirmation period is 3 days; pending confirmations expire after that. A sketch of creating a topic and subscribing follows this list.
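A minimal boto3 sketch of creating a topic and an email subscription (the topic name and endpoint are placeholders); the endpoint owner must confirm within the 3-day window:

```python
import boto3

sns = boto3.client("sns")

# "my-ecard-alerts" is a placeholder (topic names are limited to 256 characters).
topic = sns.create_topic(Name="my-ecard-alerts")

# The email recipient must confirm the subscription within 3 days,
# or the pending confirmation expires.
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="admin@example.com",  # placeholder address
)
```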
SWF
- Maximum registered domains – 100
- Maximum workflow and activity types – 10,000 each per domain
- Maximum open activity tasks – 1,000 per workflow execution
- Maximum workflow execution time - 1 year
- For more details refer - https://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dg-limits.html
Again, as mentioned above, this is obviously not an exhaustive list, but merely a summary of what I thought would be best to revise before going into the associate exams. Let me know if you think something else should be added here for the benefit of everyone.
Since you have taken the time to go through the limits, here is a bonus question for you :)
Question: You receive a call from a potential client who explains that one of the many services they offer is a website running on a t2.micro EC2 instance where users can submit requests for customized e-cards to be sent to their friends and family. The e-card website administrator was on a cruise and was shocked when he returned to the office in mid-January to find hundreds of angry emails complaining that customers' loved ones had not received their Christmas cards. He also had several emails from CloudWatch alerting him that the SQS queue for the e-card application had grown to over 500 messages on December 25th. You investigate and find that the problem was caused by a crashed EC2 instance which serves as an application server. What do you advise your client to do first? Choose the correct answer from the options below
Options:
- Use an Auto Scaling group to create as many application servers as needed to process all of the Christmas card SQS messages.
- Reboot the application server immediately so that it begins processing the Christmas card SQS messages.
- Redeploy the application server as a larger instance type so that it processes the Christmas card SQS messages faster.
- Send an apology to the customers notifying them that their cards will not be delivered.
Answer:
4. Send an apology to the customers notifying them that their cards will not be delivered.
Explanation:
Since the 500-message count was from December 25th and the administrator only returned in mid-January, more than 14 days had passed, which is the maximum retention period for SQS messages; the messages had therefore already been deleted from the queue.
To be honest, I had selected option 1 on my first attempt :)