Top 6 AWS Certification Exam Passing Techniques

Cracking AWS certification exams requires knowledge and experience, but being smart about choosing the right approach to each question is equally important. This becomes even more relevant for the professional-level exams, which are tough and have lengthy questions.

In this blog we are going to look at six such techniques that give you an edge in the exam. Each technique is discussed with an example question, along with alerts and observations.

Question technique 1

An application uses an Application Load Balancer, an Auto Scaling Group and, currently, 10 EC2 instances. To ensure cost efficiency you have been asked to ensure that instances are terminated when average CPU utilisation is below 20% and added when average CPU is above 70%.

Alert1 - As you can see, this is a pretty average question in terms of length and difficulty. Every question carries some unnecessary information, so start by identifying the keywords that actually matter. The identified keywords appear in the question below.

Question highlights

An application uses an Application Load Balancer, an Auto Scaling Group and, currently, 10 EC2 instances. To ensure cost efficiency you have been asked to ensure that instances are terminated when average CPU utilisation is below 20% and added when average CPU is above 70%.

Which option should you suggest?

  1. Implement a Scheduled Scaling Policy to add instances during periods of heavy CPU usage and remove them when CPU usage is below 20%
  2. Run a script on each EC2 instance to report the CPU load back to the auto scaling service which can make decisions based on target rules
  3. Implement Target Tracking Scaling Policies at 20% and 70%, use an IAM Role to provide the policies with permissions to add and remove EC2 instances
  4. Use CloudWatch to monitor average CPU levels and create simple scaling policies within the Auto Scaling Group

Answer highlights

  1. Implement a Scheduled Scaling Policy to add instances during periods of heavy CPU usage and remove them when CPU usage is below 20%

    This is not a case for a scheduled scaling policy; the triggers are CPU thresholds, not a schedule. Eliminated.

  2. Run a script on each EC2 instance to report the CPU load back to the auto scaling service which can make decisions based on target rules

    There is no need for a custom script: CloudWatch already collects average CPU utilisation for the group, and script-based solutions are rarely the AWS way. Eliminated.

  3. Implement Target Tracking Scaling Policies at 20% and 70%, use an IAM Role to provide the policies with permissions to add and remove EC2 instances

    This talks about target tracking while the question calls for simple scaling at two fixed thresholds, and it mentions an IAM Role, which is not something you attach to scaling policies (Auto Scaling uses a service-linked role). Eliminated.

  4. Use CloudWatch to monitor average CPU levels and create simple scaling policies within the Auto Scaling Group

    The only option that fits well after elimination.

Trick 1 - Eliminate absurd answers.

Start by identifying the keywords in the question and the answers, and quickly eliminate one or two options.
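To make the winning pattern concrete, here is a minimal boto3 sketch of what option 4 describes: a simple scaling policy attached to the ASG, triggered by a CloudWatch alarm on average CPU. The ASG name and thresholds are placeholders taken from the question, not a real environment.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

ASG_NAME = "my-asg"  # hypothetical ASG name

# Simple scaling policy: add one instance when triggered
scale_out = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-out-high-cpu",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# CloudWatch alarm: average CPU above 70% fires the scale-out policy
cloudwatch.put_metric_alarm(
    AlarmName="asg-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_out["PolicyARN"]],
)

# A mirror-image policy (ScalingAdjustment=-1) with a LessThanThreshold
# alarm at 20% handles the scale-in side.
```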

Question technique 2

A professional baseball league has chosen to use AWS DynamoDB for its backend data storage. Many of the data requirements involve high-speed processing of images captured by a flying drone. All of this data, including positions and images, is stored in DynamoDB. Users of the analytics applications built on this database complain of slow load times for the positioning data, including the images. Currently the data and related information are stored within DynamoDB.

Which option represents the best fix for this type of problem?

Alert2 - Just think about why it mentions DynamoDB as the backend data store, and why it mentions storing images in the database. Why would someone use DynamoDB to store images?

Question highlights

A professional baseball league has chosen to use AWS DynamoDB for its backend data storage. Many of the data requirements involve high-speed processing of images captured by a flying drone. All of this data, including positions and images, is stored in DynamoDB. Users of the analytics applications built on this database complain of slow load times for the positioning data, including the images. Currently the data and related information are stored within DynamoDB.

Which option represents the best fix for this type of problem? (choose one)

Which options do you suggest?

  1. Change from DynamoDB to Aurora running in a VPC and use multiple replicas to scale read capability for the analytics application.
  2. Copy the drone images to S3, replace the database stored images with a link to the S3 location.
  3. Adjust the RCU and WCU on the DynamoDB tables to 10,000 each to cope with the load on the database.
  4. Modify the DynamoDB table to use on-demand pricing to cope with the incoming demand, use an SQS queue to buffer writes to cope with peak load.

Answer highlights

  1. Change from DynamoDB to Aurora running in a VPC and use multiple replicas to scale read capability for the analytics application.

    The analytics application is built on a NoSQL database; we can't replace the whole database and rewrite the application. Eliminated.

  2. Copy the drone images to S3, replace the database stored images with a link to the S3 location.

    As can be seen, the main issue is storing the captured images in DynamoDB itself. DynamoDB has a 400 KB per-item size limit, and loading large binary attributes is slow. Changing the items to point to the image's location in S3 is a good option: with minimal application changes to load the image from the stored link, the issue is solved.

  3. Adjust the RCU and WCU on the DynamoDB tables to 10,000 each to cope with the load on the database.

    Raising the RCU and WCU does not fix the root cause; there is no ceiling to chase if ever-larger images keep getting stored in the database. Eliminated.

  4. Modify the DynamoDB table to use on-demand pricing to cope with the incoming demand, use an SQS queue to buffer writes to cope with peak load.

    Similar to the previous option, this treats the symptom: demand can keep growing without limit, and the main issue remains the images stored in the table. Eliminated.

Trick 2 - Find anti-patterns.

Some questions contain application-level information that is not spelled out explicitly. This one, for example, mentions storing images in DynamoDB itself, and DynamoDB is a poor fit for large binary data. The trick is to spot the anti-patterns the application might be using, as the sketch below illustrates.
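To illustrate the fix for this particular anti-pattern, here is a hedged sketch of what option 2 describes: the binary image goes to S3 and the DynamoDB item keeps only the object key. The table and bucket names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("DronePositions")  # hypothetical table

BUCKET = "drone-image-bucket"  # hypothetical bucket name

def store_capture(capture_id, position, image_bytes):
    """Put the large binary in S3 and keep only a small pointer in DynamoDB."""
    key = f"drone-images/{capture_id}.jpg"
    s3.put_object(Bucket=BUCKET, Key=key, Body=image_bytes)
    # The item stays small, comfortably under DynamoDB's 400 KB item size limit
    table.put_item(
        Item={
            "capture_id": capture_id,
            "position": position,
            "image_s3_key": key,  # the application resolves this to an S3 URL
        }
    )
```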

Question technique 3

A non-profit organization's elastic website runs on EC2 instances provisioned and terminated by an Auto Scaling Group. Authors connect to the system to publish posts with attached images, which can receive millions of views a day. Since the introduction of the Auto Scaling Group to allow the site to scale, a recurring bug is that posts show broken links instead of images.

Which of the following options is a potential fix? (choose one)

Motivation - This question is interesting because it gives no heads-up about the root cause of the problem. It is very open-ended and offers little information about the implementation, which means we have to use the answers to extract additional information. It does mention that the Auto Scaling Group terminates older instances, so the problem is likely related to that change, and that could be what is causing the broken image links. The good thing about this question is that its answers are short, so they are quick to evaluate.

Question highlights

A non-profit organization's elastic website runs on EC2 instances provisioned and terminated by an Auto Scaling Group. Authors connect to the system to publish posts with attached images, which can receive millions of views a day. Since the introduction of the Auto Scaling Group to allow the site to scale, a recurring bug is that posts show broken links instead of images.

Which options do you suggest?

  1. Implement CloudFront to cache images to avoid the broken links
  2. Change the EC2 volumes on all instances in the ASG from ST1 to GP2, adjust the ASG to use GP2 for any newly provisioned instances
  3. Implement EFS and configure all Instances to mount it via a Mount Target
  4. Use EBS Snapshots to restore any missing images on a case by case basis

Answer highlights

  1. Implement CloudFront to cache images to avoid the broken links

    We can't decide to use CloudFront without knowing the backend source of the images; caching does not bring back objects that no longer exist. Eliminated.

  2. Change the EC2 volumes on all instances in the ASG from ST1 to GP2, adjust the ASG to use GP2 for any newly provisioned instances

    Moving from ST1 to GP2 only changes volume performance; it does not address the actual problem. Eliminated.

  3. Implement EFS and configure all Instances to mount it via a Mount Target

    The only good option after eliminating the other three. EFS gives every instance a shared file system where the images are stored persistently, surviving instance termination. This will work like a charm.

  4. Use EBS Snapshots to restore any missing images on a case by case basis

    Elastic solutions need to be automated, not handled case by case. Eliminated.

Trick 3 - Get clues from the answers for open-ended questions.

This question requires you to read between the lines and understand why the images could be broken: instance storage vanishes as the ASG terminates instances. Questions that require this kind of review and analysis are very common at the professional level, and even the associate-level certifications include some.
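As a rough sketch of the EFS fix, assuming the file system and networking already exist (all IDs below are placeholders): create a mount target in each subnet the ASG launches into, then have every instance mount the shared file system at boot.

```python
import boto3

efs = boto3.client("efs")

# One mount target per subnet the ASG uses (IDs are placeholders)
efs.create_mount_target(
    FileSystemId="fs-0123456789abcdef0",
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
)

# Each instance then mounts the file system at boot, e.g. from user data
# (with amazon-efs-utils installed):
#   mount -t efs fs-0123456789abcdef0:/ /var/www/uploads
# Images written there persist even when the ASG terminates instances.
```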

Question technique 4

You are auditing a serverless application for a live auction system. The application uses API Gateway, S3 and Lambda to provide the serverless frontend and compute, and DynamoDB for backend data storage. During yearly auction registration periods the system is expected to have 10000x the load vs other times of the year. The DynamoDB tables use provisioned capacity of 50 RCU/WCU.

Which architecture changes could you suggest to reduce the impact of the extra load? (choose two)

Motivation - This is a classic burst problem: for a certain period a large volume of requests arrives, then vanishes. Looking closely, the frontend and compute parts of the application are serverless (API Gateway, S3 and Lambda are serverless services), which means they scale automatically and are not a concern. DynamoDB on provisioned capacity, however, will not scale on its own, so we should focus there. The final sentence mentioning 50 RCU/WCU is another clue that this detail matters. The answer most likely lies in DynamoDB.

Question highlights

You are auditing a serverless application for a live auction system. The application uses API Gateway, S3 and Lambda to provide the serverless frontend and compute, and DynamoDB for backend data storage. During yearly auction registration periods the system is expected to have 10000x the load vs other times of the year. The DynamoDB tables use provisioned capacity of 50 RCU/WCU.

Which architecture changes could you suggest to reduce the impact of the extra load? (choose two)

Which options do you suggest?

  1. Launch 100 DynamoDB databases during the peak period to spread the 10000x load
  2. Backup the data from DynamoDB and restore the snapshot into an Aurora Serverless cluster, configure for public access and modify the application code
  3. Change from provisioned to on-demand capacity
  4. Add an SQS queue, modify the application so it writes to the queue and use a backend Lambda to parse auction registration records from the queue and add to the database over time
  5. Increase the RCU and WCU on the table from 50 to 500,000 for the brief peak periods and return afterwards

Answer highlights

  1. Launch 100 DynamoDB databases during the peak period to spread the 10000x load

    Straight no. Spreading one table's data across 100 separate databases is not how DynamoDB scales, and would require a full redesign.

  2. Backup the data from DynamoDB and restore the snapshot into an Aurora Serverless cluster, configure for public access and modify the application code

    Straight no. We can't replatform to a different database engine for a periodic peak, and public access is a security anti-pattern on top.

  3. Change from provisioned to on-demand capacity

    Yes. Let AWS handle the scaling. It may be costly, but it will work, so it is a potential answer, and since we must select two, this can be one of them. If we were asked for the cheapest option we would avoid it, but that is not the case here.

  4. Add an SQS queue, modify the application so it writes to the queue and use a backend Lambda to parse auction registration records from the queue and add to the database over time.

    Decoupled solutions are great. This one works at low cost as well: the SQS queue absorbs the peak volume and the Lambda drains it into the database over time. A great answer.

  5. Increase the RCU and WCU on the table from 50 to 500,000 for the brief peak periods and return afterwards

    Nope. We need an automated solution, and there is no ceiling to chase; what if the load grows even higher? Eliminated.

Trick 4 - Focus on non-serverless services for performance. Go for decoupled solutions.

When everything else in the stack is serverless, the bottleneck is usually the one component that is not; decoupling writes through a queue, as sketched below, is the standard fix.
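Here is a minimal sketch of the two winning answers together, with hypothetical names throughout: a one-off switch of the table to on-demand billing, and a Lambda handler that drains registrations from the SQS queue into DynamoDB.

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("AuctionRegistrations")  # hypothetical table name

def enable_on_demand():
    """One-off admin step: switch from provisioned 50 RCU/WCU to on-demand."""
    boto3.client("dynamodb").update_table(
        TableName="AuctionRegistrations",
        BillingMode="PAY_PER_REQUEST",
    )

def handler(event, context):
    """Lambda handler triggered by the SQS queue.

    Each record carries one auction registration as JSON; the queue absorbs
    the burst and this function drains it into the table at a sustainable rate.
    """
    for record in event["Records"]:
        registration = json.loads(record["body"])
        table.put_item(Item=registration)
```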

Question technique 5

You are auditing the AWS environment for an enterprise application. It runs from EC2 instances provisioned via an ASG connected to an Application Load Balancer. A SysAdmin team manages the AWS and EC2 environment, and development teams connect to EC2 when they perform application maintenance. You're adding SSL capability and need to ensure that the development team, who have root access to the EC2 instances, can't access the SSL certificate for the application.

Which solution should you suggest? (choose one)

Motivation - The key constraint is that users with root access on the instances must never be able to read the certificate, which suggests the certificate should never be placed on the instances at all.

Question highlights

You are auditing the AWS environment for an enterprise application. It runs from EC2 instances provisioned via an ASG connected to an Application Load Balancer. A SysAdmin team manages the AWS and EC2 environment, and development teams connect to EC2 when they perform application maintenance. You're adding SSL capability and need to ensure that the development team, who have root access to the EC2 instances, can't access the SSL certificate for the application.

Which solution should you suggest? (choose one)

Which options do you suggest?

  1. Store the SSL Certificate on S3, copy onto the EC2 instances at boot, load and remove afterwards
  2. Store the SSL certificate on the EC2 instances and set the permissions to allow access only from the SysAdmins IAM group
  3. Generate a certificate within ACM, configure it on the ALB and set the EC2 instances to use the HTTPS protocol for ALB -> Instance connections
  4. Import the certificate into ACM, configure it on the ALB and set the EC2 instances to use the HTTP protocol for ALB-> Instance Connections

Answer highlights

  1. Store the SSL Certificate on S3, copy onto the EC2 instances at boot, load and remove afterwards

    Once the certificate has been loaded on the instance, the root user will still have access to it. Eliminated.

  2. Store the SSL certificate on the EC2 instances and set the permissions to allow access only from the SysAdmins IAM group

    Any user with root permissions could still read it; filesystem permissions don't stop root. Eliminated.

  3. Generate a certificate within ACM, configure it on the ALB and set the EC2 instances to use the HTTPS protocol for ALB -> Instance connections

    Terminating SSL at the ALB is right, but using HTTPS for the ALB -> instance connection would require another certificate on the EC2 instances, which the developers could then access. Eliminated.

  4. Import the certificate into ACM, configure it on the ALB and set the EC2 instances to use the HTTP protocol for ALB -> Instance Connections

    Works well. Configure the certificate on the ALB, which makes the user-facing connection HTTPS, and leave the ALB -> instance connection as HTTP so the certificate never touches the instances.

Trick 5 - Focus on wording differences between similar answers.

In this case the last two options were very similar, which is good: similar options have a higher chance of containing the valid answer, and it is easy to spot the keyword difference between them. Focusing on just that difference answers the question.

Here the first part of each of the last two options works equally well, because ACM supports both generating and importing certificates. Focusing on the second part of each answer reveals the difference.
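A hedged sketch of the winning setup, with placeholder ARNs: the imported ACM certificate lives only on the ALB's HTTPS listener, while the default action forwards to a target group that speaks plain HTTP to the instances, so the certificate never touches a box the developers can root.

```python
import boto3

elbv2 = boto3.client("elbv2")

# All ARNs below are placeholders for illustration only
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-alb/abc",
    Protocol="HTTPS",  # clients terminate TLS at the ALB
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:region:account:certificate/imported-cert"}],
    DefaultActions=[
        {
            # The target group forwards to the instances over plain HTTP,
            # so no certificate ever needs to exist on the EC2 instances.
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/my-tg/def",
        }
    ],
)
```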

Question technique 6

A software gaming company has produced an online racing game which has become an overnight craze. Due to the overwhelming success of the gaming application you need to implement security controls within the environment. The application uses EC2 instances, provisioned by an ASG, connected to an Application Load Balancer. You need to implement a system using AWS tools and services which can analyse EC2 instances for vulnerabilities, and a tool which can check the AWS account, products and services for compliance against best-practice standards over time.

Which AWS products should you suggest? (choose two)

Motivation - Essentially this question revolves around security: it asks for vulnerability scanning and compliance checking. Most likely everything about the ALB and ASG is just noise. It also asks for AWS product recommendations, so we simply need to find the services that meet the two requirements.

Question highlights

A software gaming company has produced an online racing game which has become an overnight craze. Due to the overwhelming success of the gaming application you need to implement security controls within the environment. The application uses EC2 instances, provisioned by an ASG, connected to an Application Load Balancer. You need to implement a system using AWS tools and services which can analyse EC2 instances for vulnerabilities, and a tool which can check the AWS account, products and services for compliance against best-practice standards over time.

Which AWS products should you suggest? (choose two)

Which options do you suggest?

  1. CloudTrail
  2. AWS Config
  3. Inspector
  4. WAF & Shield

Answer highlights

  1. CloudTrail

    Used for logging API activity in the account; it has nothing to do with vulnerability scanning. Eliminated.

  2. AWS Config

    Great for checking the account's resources against compliance standards over time.

  3. Inspector

    Great for running vulnerability scans on EC2 instances.

  4. WAF & Shield

    Used for web application security (request filtering and DDoS protection); they do not assess the AWS account or EC2 instances. Eliminated.

Trick 6 - Ignore the garbage.

Some questions carry a large amount of unnecessary information, and spotting it comes with experience. In this example only the last two requirements actually matter; the rest of the paragraph is noise.
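For completeness, here is a hedged boto3 sketch of switching both winning services on for a single account; the role ARN and bucket name are placeholders, and organization-wide setups will differ.

```python
import boto3

# Amazon Inspector (v2): enable EC2 vulnerability scanning in this account
boto3.client("inspector2").enable(resourceTypes=["EC2"])

# AWS Config: record resource configurations so compliance can be checked over time
config = boto3.client("config")
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/config-role",  # placeholder role
        "recordingGroup": {"allSupported": True},
    }
)
config.put_delivery_channel(
    DeliveryChannel={"name": "default", "s3BucketName": "my-config-bucket"}  # placeholder bucket
)
config.start_configuration_recorder(ConfigurationRecorderName="default")
```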