Attempt 2
Question 1:
Skipped

You would like your Elastic Beanstalk environment to expose an HTTPS endpoint instead of an HTTP endpoint to get in-flight encryption between your clients and your web servers.

What must be done to set up HTTPS on Beanstalk?

Explanation

Correct option:

The simplest way to use HTTPS with an Elastic Beanstalk environment is to assign a server certificate to your environment's load balancer. When you configure your load balancer to terminate HTTPS, the connection between the client and the load balancer is secure. Backend connections between the load balancer and EC2 instances use HTTP, so no additional configuration of the instances is required.

via - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html

Create a config file in the .ebextensions folder to configure the Load Balancer

To update your AWS Elastic Beanstalk environment to use HTTPS, you need to configure an HTTPS listener for the load balancer in your environment. Two types of load balancers support an HTTPS listener: Classic Load Balancer and Application Load Balancer.

Example .ebextensions/securelistener-alb.config

Use this example when your environment has an Application Load Balancer. The example uses options in the aws:elbv2:listener namespace to configure an HTTPS listener on port 443 with the specified certificate. The listener routes traffic to the default process.

option_settings:
  aws:elbv2:listener:443:
    ListenerEnabled: 'true'
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-2:123456789012:certificate/####################################

Incorrect options:

Use a separate CloudFormation template to load the SSL certificate onto the Load Balancer - A separate CloudFormation template won't be able to mutate the state of a Load Balancer managed by Elastic Beanstalk, so this option is incorrect.

Open up the port 80 for the security group - Port 80 is for HTTP traffic, so this option is incorrect.

Configure Health Checks - Health Checks are not related to SSL certificates, so this option is ruled out.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-elb.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html

Question 2:
Skipped

You would like to run the X-Ray daemon for your Docker containers deployed using AWS Fargate.

What do you need to do to ensure the setup will work? (Select two)

Explanation

Correct options:

Deploy the X-Ray daemon agent as a sidecar container

In Amazon ECS, create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to your Amazon ECS cluster. You can use port mappings and network mode settings in your task definition file to allow your application to communicate with the daemon container.

As we are using AWS Fargate, we do not have control over the underlying EC2 instances, so we can't deploy the agent directly on an instance or run the X-Ray daemon container with the DAEMON scheduling strategy (which is only available for the EC2 launch type).

via - https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html

Provide the correct IAM task role to the X-Ray container

For Fargate, we can only attach IAM roles to tasks, which is also a security best practice even when using the EC2 launch type.
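
For illustration, here is a minimal sketch of what such a Fargate task definition could look like, with the X-Ray daemon running as a sidecar that listens on UDP port 2000 and an IAM task role attached. All names, ARNs, and the application image below are placeholder assumptions, not values from the question:

{
  "family": "my-fargate-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "taskRoleArn": "arn:aws:iam::123456789012:role/my-xray-write-task-role",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "essential": true
    },
    {
      "name": "xray-daemon",
      "image": "amazon/aws-xray-daemon",
      "portMappings": [
        { "containerPort": 2000, "protocol": "udp" }
      ]
    }
  ]
}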

Incorrect options:

Deploy the X-Ray daemon agent as a daemon set on ECS - As explained above, since we are using AWS Fargate, we do not have control over the underlying EC2 instance and thus we can't run an X-Ray agent container as a daemon set.

Deploy the X-Ray daemon agent as a process on your EC2 instance

Provide the correct IAM instance role to the EC2 instance

As we are using AWS Fargate, we do not have control over the underlying EC2 instance, so both these options are incorrect.

Reference:

https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html

Question 3:
Skipped

You've just deployed an AWS Lambda function that will be invoked via API Gateway, and API Gateway needs to control access to it.

Which of the following mechanisms is not supported for API Gateway?

Explanation

Correct option:

Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud.

How API Gateway Works: via - https://aws.amazon.com/api-gateway/

STS

The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). However, STS is not currently supported as an access control mechanism for API Gateway.

Incorrect options:

IAM permissions with SigV4 - These can be applied to an entire API or to individual methods.

Lambda Authorizer - Controls access to REST API methods using bearer token authentication as well as information described by headers, paths, query strings, stage variables, or context variable request parameters.

Cognito User Pools - Use Cognito User Pools to create customizable authentication and authorization solutions for your REST APIs.

Reference:

https://aws.amazon.com/api-gateway/

Question 4:
Skipped

Your Lambda function must use the Node.js drivers to connect to your RDS PostgreSQL database in your VPC.

How do you bundle your Lambda function to add the dependencies?

Explanation

Correct option:

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.

How Lambda function works: via - https://aws.amazon.com/lambda/

Put the function and the dependencies in one folder and zip them together

A deployment package is a ZIP archive that contains your function code and dependencies. You need to create a deployment package if you use the Lambda API to manage functions, or if you need to include libraries and dependencies other than the AWS SDK. You can upload the package directly to Lambda, or you can use an Amazon S3 bucket, and then upload it to Lambda. If the deployment package is larger than 50 MB, you must use Amazon S3. This is the standard way of packaging Lambda functions.
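
As a rough sketch, bundling and uploading a Node.js function together with its node_modules dependencies could look like this from the command line (the function name and file names are placeholders):

# Bundle the handler and its dependencies into a single archive
zip -r function.zip index.js node_modules/

# Upload the deployment package to an existing function
aws lambda update-function-code \
    --function-name my-function \
    --zip-file fileb://function.zip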

Incorrect options:

Zip the function as-is with a package.json file so that AWS Lambda can resolve the dependencies for you

Upload the code through the AWS console and upload the dependencies as a zip

Zip the function and the dependencies separately and upload them in AWS Lambda as two parts

These three options are incorrect as there's only one way of deploying a Lambda function, which is to provide the zip file with all dependencies that it'll need.

Reference:

https://docs.aws.amazon.com/lambda/latest/dg/nodejs-create-deployment-pkg.html

Question 5:
Skipped

A development team has a mix of applications hosted on-premises as well as on EC2 instances. The on-premises application controls all applications deployed on the EC2 instances. In case of any errors, the team wants to leverage Amazon CloudWatch to monitor and troubleshoot the on-premises application.

As a Developer Associate, which of the following solutions would you suggest to address this use-case?

Explanation

Correct option:

Configure the CloudWatch agent on the on-premises server by using IAM user credentials with permissions for CloudWatch

The CloudWatch agent enables you to do the following:

Collect system-level metrics from on-premises servers. These can include servers in a hybrid environment as well as servers not managed by AWS.

Collect logs from Amazon EC2 instances and on-premises servers, running either Linux or Windows Server.

To enable the CloudWatch agent to send data from an on-premises server, you must specify the access key and secret key of the IAM user that you created earlier.

via - https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-premise.html
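
As a minimal sketch of the credentials wiring on the on-premises server (the profile name follows the documentation's convention; the access keys and file paths are placeholders), the agent is pointed at a named profile holding the IAM user's keys:

# ~/.aws/credentials on the on-premises server
[AmazonCloudWatchAgent]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = EXAMPLESECRETACCESSKEY

# /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml
[credentials]
   shared_credential_profile = "AmazonCloudWatchAgent"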

Incorrect options:

Configure CloudWatch Logs to directly read the logs from the on-premises server - This is a made-up option as you cannot have CloudWatch Logs directly communicate with the on-premises server. You have to go via the CloudWatch Agent.

Upload log files from the on-premises server to an EC2 instance which further forwards the logs to CloudWatch

Upload log files from the on-premises server to S3 and let CloudWatch process the files from S3

Both these options require significant customization and still would not be as neatly integrated with CloudWatch as simply using the CloudWatch agent, which is available off-the-shelf.

Reference:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-premise.html

Question 6:
Skipped

Your company wants to move away from manually managing Lambda in the AWS console and wants to upload and update them using AWS CloudFormation.

How do you declare an AWS Lambda function in CloudFormation? (Select two)

Explanation

Correct options:

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.

How Lambda function works: via - https://aws.amazon.com/lambda/

Upload all the code as a zip to S3 and refer the object in AWS::Lambda::Function block

You can upload all the code as a zip to S3 and reference the object in the AWS::Lambda::Function block.

The AWS::Lambda::Function resource creates a Lambda function. To create a function, you need a deployment package and an execution role. The deployment package contains your function code.

Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block as long as there are no third-party dependencies

The other option is to write the code inline, which is supported for Node.js and Python, as long as your code has no dependencies beyond those already provided by the Lambda runtime (for example, the aws-sdk and cfn-response modules for Node.js, or boto3 for Python, are preloaded in the Lambda execution environment).

YAML template for creating a Lambda function:

Type: AWS::Lambda::Function
Properties:
  Code:
    Code
  DeadLetterConfig:
    DeadLetterConfig
  Description: String
  Environment:
    Environment
  FileSystemConfigs:
    - FileSystemConfig
  FunctionName: String
  Handler: String
  KmsKeyArn: String
  Layers:
    - String
  MemorySize: Integer
  ReservedConcurrentExecutions: Integer
  Role: String
  Runtime: String
  Tags:
    - Tag
  Timeout: Integer
  TracingConfig:
    TracingConfig
  VpcConfig:
    VpcConfig
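
As an illustrative sketch of the two approaches (the bucket, key, and handler code below are assumptions), the Code property can either point to a deployment package in S3 or carry short source inline via ZipFile:

# Option 1: deployment package stored in S3
Code:
  S3Bucket: my-lambda-artifacts-bucket
  S3Key: my-function.zip

# Option 2: inline code (Node.js or Python only, no extra dependencies)
Code:
  ZipFile: |
    exports.handler = async (event) => {
      return { statusCode: 200, body: 'Hello from Lambda' };
    };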

Incorrect options:

Upload all the code to CodeCommit and refer to the CodeCommit Repository in AWS::Lambda::Function block

Upload all the code as a folder to S3 and refer the folder in AWS::Lambda::Function block

Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block and reference the dependencies as a zip file stored in S3

These three options contradict the explanation provided earlier. So these are incorrect.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html

Question 7:
Skipped

You have created a DynamoDB table to support your application and provisioned RCU and WCU to it so that your application has been running for over a year now without any throttling issues. Your application now requires a second type of query over your table and as such, you have decided to create an LSI and a GSI on a new table to support that use case. One month after having implemented such indexes, it seems your table is experiencing throttling.

Upon looking at the table's metrics, it seems the RCU and WCU provisioned are still sufficient. What's happening?

Explanation

Correct option:

The GSI is throttling so you need to provision more RCU and WCU to the GSI

DynamoDB supports two types of secondary indexes:

Global secondary index — An index with a partition key and a sort key that can be different from those on the base table. A global secondary index is considered "global" because queries on the index can span all of the data in the base table, across all partitions. A global secondary index is stored in its own partition space away from the base table and scales separately from the base table.

Local secondary index — An index that has the same partition key as the base table, but a different sort key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a base table partition that has the same partition key value.

Differences between GSI and LSI: via - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html

If you perform heavy write activity on the table, but a global secondary index on that table has insufficient write capacity, then the write activity on the table will be throttled. To avoid potential throttling, the provisioned write capacity for a global secondary index should be equal to or greater than the write capacity of the base table since new updates will write to both the base table and the global secondary index.
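
For example, the GSI's own capacity can be raised with an update-table call along these lines (the table name, index name, and capacity values are placeholders):

aws dynamodb update-table \
    --table-name my-table \
    --global-secondary-index-updates \
    '[{"Update": {"IndexName": "my-gsi", "ProvisionedThroughput": {"ReadCapacityUnits": 100, "WriteCapacityUnits": 100}}}]'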

Incorrect options:

The LSI is throttling so you need to provision more RCU and WCU to the LSI - An LSI uses the RCU and WCU of the main table, so you can't provision separate RCU and WCU for the LSI.

Adding both an LSI and a GSI to a table is not recommended by AWS best practices as this is a known cause for creating throttles - This option has been added as a distractor. It is fine to have LSI and GSI together.

Metrics are lagging in your CloudWatch dashboard and you should see the RCU and WCU peaking for the main table in a few minutes - This could be a reason, but in this case, the GSI is at fault as the application has been running fine for months.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html#GSI.ThroughputConsiderations

Question 8:
Skipped

You are implementing a banking application in which you need to update the Exchanges DynamoDB table and the AccountBalance DynamoDB table at the same time or not at all.

Which DynamoDB feature should you use?

Explanation

Correct option:

DynamoDB Transactions

You can use DynamoDB transactions to make coordinated all-or-nothing changes to multiple items both within and across tables. Transactions provide atomicity, consistency, isolation, and durability (ACID) in DynamoDB, helping you to maintain data correctness in your applications.

DynamoDB Transactions Overview: via - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html
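
A hedged sketch of what such an all-or-nothing update could look like from the CLI, assuming hypothetical key and attribute names on the two tables:

aws dynamodb transact-write-items --transact-items '[
  {
    "Update": {
      "TableName": "Exchanges",
      "Key": {"ExchangeId": {"S": "ex-1001"}},
      "UpdateExpression": "SET ExchangeStatus = :s",
      "ExpressionAttributeValues": {":s": {"S": "COMPLETED"}}
    }
  },
  {
    "Update": {
      "TableName": "AccountBalance",
      "Key": {"AccountId": {"S": "acct-42"}},
      "UpdateExpression": "SET Balance = Balance - :amount",
      "ExpressionAttributeValues": {":amount": {"N": "250"}}
    }
  }
]'

Either both updates are applied, or neither is.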

Incorrect options:

DynamoDB TTL - DynamoDB TTL allows you to expire data based on a timestamp, so this option is not correct.

DynamoDB Streams - DynamoDB Streams gives a changelog of changes that happened to your tables and then may even relay these to a Lambda function for further processing.

DynamoDB Indexes - GSI and LSI are used to allow you to query your tables using different partition/sort keys.

Reference:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html

Question 9:
Skipped

Your company has developers worldwide with access to the company's Amazon Simple Storage Service (S3) buckets. The objects in the buckets are encrypted at the server-side but need more flexibility with access control, auditing, rotation, and deletion of keys. You would also like to limit who can use the key.

Which encryption mechanism best fits your needs?

Explanation

Correct option:

SSE-KMS

You have the following options for protecting data at rest in Amazon S3:

Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects.

Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

Server-Side Encryption with Customer Master Keys (CMKs) stored in AWS Key Management Service (SSE-KMS) is similar to SSE-S3. SSE-KMS provides you with an audit trail that shows when your CMK was used and by whom. Additionally, you can create and manage customer-managed CMKs or use AWS managed CMKs that are unique to you, your service, and your Region.

Please review these three options for Server Side Encryption on S3: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
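
For instance, an upload that requests SSE-KMS with a specific customer-managed CMK could look like this (the bucket name and key ARN are placeholders):

aws s3 cp confidential-report.csv s3://my-bucket/reports/confidential-report.csv \
    --sse aws:kms \
    --sse-kms-key-id arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab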

Incorrect options:

SSE-C - When retrieving objects encrypted server-side with SSE-C, you must provide the same encryption key as part of your request. Amazon S3 first verifies that the encryption key you provided matches, and then decrypts the object before returning the object data to you.

Client-Side Encryption - You can encrypt the data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

SSE-S3 - When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. So this option is incorrect.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html

Question 10:
Skipped

You would like to retrieve a subset of your dataset stored in S3 with the CSV format. You would like to retrieve a month of data and only 3 columns out of the 10.

You need to minimize compute and network costs for this, what should you use?

Explanation

Correct option:

S3 Select

S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data needed by your application, you can achieve drastic performance increases; in many cases, you can get as much as a 400% improvement.

via - https://aws.amazon.com/blogs/aws/s3-glacier-select/
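
A minimal sketch of such a query from the CLI, assuming a CSV with a header row and hypothetical column names, restricted to one month of data and three columns:

aws s3api select-object-content \
    --bucket my-bucket \
    --key data/sales.csv \
    --expression "SELECT s.sale_date, s.price, s.qty FROM S3Object s WHERE s.sale_date LIKE '2023-10-%'" \
    --expression-type SQL \
    --input-serialization '{"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "NONE"}' \
    --output-serialization '{"CSV": {}}' \
    monthly-subset.csv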

Incorrect options:

S3 Inventory - Amazon S3 inventory is one of the tools Amazon S3 provides to help manage your storage. You can use it to audit and report on the replication and encryption status of your objects for business, compliance, and regulatory needs.

S3 Analytics - By using Amazon S3 analytics storage class analysis you can analyze storage access patterns to help you decide when to transition the right data to the right storage class. This new Amazon S3 analytics feature observes data access patterns to help you determine when to transition less frequently accessed STANDARD storage to the STANDARD_IA (IA, for infrequent access) storage class.

S3 Access Logs - Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill.

Reference:

https://aws.amazon.com/blogs/aws/s3-glacier-select/

Question 11:
Skipped

You would like to paginate the results of an S3 List to show 100 results per page to your users and minimize the number of API calls that you will use.

Which CLI options should you use? (Select two)

Explanation

Correct options:

--max-items

--starting-token

For commands that can return a large list of items, the AWS Command Line Interface (AWS CLI) has three options to control the number of items included in the output when the AWS CLI calls a service's API to populate the list.

--page-size

--max-items

--starting-token

By default, the AWS CLI uses a page size of 1000 and retrieves all available items. For example, if you run aws s3api list-objects on an Amazon S3 bucket that contains 3,500 objects, the AWS CLI makes four calls to Amazon S3, handling the service-specific pagination logic for you in the background and returning all 3,500 objects in the final output.

Here's an example: aws s3api list-objects --bucket my-bucket --max-items 100 --starting-token eyJNYXJrZXIiOiBudWxsLCAiYm90b190cnVuY2F0ZV9hbW91bnQiOiAxfQ==

Incorrect options:

"--page-size" - You can use the --page-size option to specify that the AWS CLI requests a smaller number of items from each call to the AWS service. The CLI still retrieves the full list but performs a larger number of service API calls in the background and retrieves a smaller number of items with each call.

"--next-token" - This is a made-up option and has been added as a distractor.

"--limit" - This is a made-up option and has been added as a distractor.

Reference:

https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html

Question 12:
Skipped

Your company likes to operate multiple AWS accounts so that teams have their own environments. Services deployed across these accounts interact with one another, and now there's a requirement to implement X-Ray traces across all your applications deployed on EC2 instances and AWS accounts.

As such, you would like to have a unified account to view all the traces. What should you do in your X-Ray daemon setup to make this work? (Select two)

Explanation

Correct options:

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

How X-Ray Works: via - https://aws.amazon.com/xray/

Create a role in the target unified account and allow roles in each sub-account to assume the role

Configure the X-Ray daemon to use an IAM instance role

The X-Ray agent can assume a role to publish data into an account different from the one in which it is running. This enables you to publish data from various components of your application into a central account.

X-Ray can also track requests flowing through applications or services across multiple AWS Regions.

via - https://aws.amazon.com/xray/faqs/

You can create the necessary configurations for cross-account access via this reference documentation - https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-configuration.html
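
As a sketch, the daemon's configuration file can name the role to assume in the central account (the region and ARN below are placeholders):

# cfg.yaml (excerpt)
Version: 2
Region: "us-east-1"
# Assume this role to publish traces into the unified account
RoleARN: "arn:aws:iam::111111111111:role/xray-cross-account-write"

The daemon also accepts the role via its --role-arn command-line option.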

Incorrect options:

Create a user in the target unified account and generate access and secret keys

Configure the X-Ray daemon to use access and secret keys

These two options combined would work, but they are not a security best practice. Therefore these are not correct.

Enable Cross Account collection in the X-Ray console - This is a made-up option and has been added as a distractor.

References:

https://aws.amazon.com/xray/faqs/

https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-configuration.html

Question 13:
Skipped

Your team lead has finished creating a CodeBuild project in the management console and a build spec has been defined for the project. After the build is run, CodeBuild fails to pull a Docker image into the build environment.

What is the most likely cause?

Explanation

Correct option:

Missing IAM permissions for the CodeBuild Service

By default, IAM users don't have permission to create or modify Amazon Elastic Container Registry (Amazon ECR) resources or perform tasks using the Amazon ECR API. A user who uses the AWS CodeBuild console must have a minimum set of permissions that allows the user to describe other AWS resources for the AWS account.

via - https://docs.aws.amazon.com/codebuild/latest/userguide/sample-ecr.html
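
A minimal sketch of the kind of statement the CodeBuild service role typically needs in order to pull an image from a private Amazon ECR repository (in practice, scope the Resource down to the specific repository):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}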

Incorrect options:

The Docker image is missing some tags - Tags are optional for naming purposes

CodeBuild cannot work with custom Docker images - Custom docker images are supported, so this option is incorrect.

The Docker image is too big - It is good to properly design the image, but in this case, image size does not affect CodeBuild's ability to pull it. You can also look at multi-stage builds, a feature requiring Docker 17.05 or higher on the daemon and client. Multi-stage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain.

Reference:

https://docs.aws.amazon.com/codebuild/latest/userguide/sample-ecr.html

Question 14:
Skipped

Your development team has created a popular mobile app written for Android. The team is looking for a technology that can send messages to mobile devices using a mobile app.

Mobile app users will register and be given permissions to access AWS resources. Which technology would you recommend for subscribing users to messages?

Explanation

Correct option:

SNS

Amazon SNS enables message filtering and fanout to a large number of subscribers, including serverless functions, queues, and distributed systems. Additionally, Amazon SNS fans out notifications to end users via mobile push messages, SMS, and email.

Amazon SNS follows the 'publish-subscribe' (pub-sub) messaging paradigm, with notifications being delivered to clients using a 'push' mechanism that eliminates the need to periodically check or 'poll' for new information and updates.

How SNS Works: via - https://aws.amazon.com/sns/

Incorrect options:

SQS - SQS is a distributed queuing system. Messages are not pushed to receivers. Receivers have to poll SQS to receive messages

Kinesis - This is used for processing real-time streams meant for big data workloads.

SES - Amazon SES is an inexpensive way to send and receive emails.

Reference:

https://aws.amazon.com/sns/

Question 15:
Skipped

You are running a web application where users can author blogs and share them with their followers. Most of the workflow is read based, but when a blog is updated, you would like to ensure that the latest data is served to the users (no stale data). The Developer has already suggested using ElastiCache to cope with the read load but has asked you to implement a caching strategy that complies with the requirements of the site.

Which strategy would you recommend?

Explanation

Correct option:

Use a Write Through strategy

The write-through strategy adds data or updates data in the cache whenever data is written to the database.

In a Write Through strategy, any new blog or update to the blog will be written to both the database layer and the caching layer, thus ensuring that the latest data is always served from the cache.

via - https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

Incorrect options:

Use a Lazy Loading strategy without TTL

Lazy Loading is a caching strategy that loads data into the cache only when necessary. Whenever your application requests data, it first requests the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data doesn't exist in the cache or has expired, your application requests the data from your data store.

Use a Lazy Loading strategy with TTL

In the case of Lazy Loading, the data is loaded onto the cache whenever the data is missing from the cache. In case the blog gets updated, it won't be updated from the cache unless that cache expires (in case you used a TTL). Time to live (TTL) is an integer value that specifies the number of seconds until the key expires. When an application attempts to read an expired key, it is treated as though the key is not found. The database is queried for the key and the cache is updated. Therefore, for a while, old data will be served to users which is a problem from a requirements perspective as we don't want any stale data.

Use DAX - This is a cache for DynamoDB based implementations, but in this question, we are considering ElastiCache. Therefore this option is not relevant.

Reference:

https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

Question 16:
Skipped

You are responsible for an application that runs on multiple Amazon EC2 instances. In front of the instances is an Internet-facing load balancer that takes requests from clients over the internet and distributes them to the EC2 instances. A health check is configured to ping the index.html page found in the root directory for the health status. When accessing the website via the internet, visitors receive timeout errors.

What should be checked first to resolve the issue?

Explanation

Correct option:

Security Groups

A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance.

Check the security group rules for your load balancer and your EC2 instances. The load balancer's security group must allow inbound traffic from the internet on the listener port, and the instances' security groups must allow traffic from the load balancer on the traffic and health check ports.
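
As a sketch, the load balancer's security group can be opened to the internet and the instances' security group opened only to the load balancer (all group IDs below are placeholders):

# Allow HTTP from anywhere on the load balancer's security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaa1111bbb22222c \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# Allow traffic from the load balancer's security group to the instances
aws ec2 authorize-security-group-ingress \
    --group-id sg-0ddd3333eee44444f \
    --protocol tcp --port 80 \
    --source-group sg-0aaa1111bbb22222c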

Incorrect options:

IAM Roles - Issues with IAM roles usually surface as API authorization errors, not timeouts, so this option does not fit the given use-case.

The application is down - Although you can set a health check for application ping or HTTP, timeouts are usually caused by blocked firewall access.

The ALB is warming up - ALB has a slow start mode which allows a warm-up period before being able to respond to requests with optimal performance. So this is not the issue.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html#TroubleshootingInstancesConnectionTimeout

Question 17:
Skipped

You are looking to invoke an AWS Lambda function every hour (similar to a cron job) in a serverless way.

Which event source should you use for your AWS Lambda function?

Explanation

Correct option:

CloudWatch Events

You can create a Lambda function and direct CloudWatch Events to execute it on a regular schedule. You can specify a fixed rate (for example, execute a Lambda function every hour or 15 minutes), or you can specify a Cron expression.

CloudWatch Events Key Concepts: via - https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html

Schedule Expressions for CloudWatch Events Rules: via - https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
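
A minimal sketch of wiring an hourly schedule to a Lambda function (the rule name, function name, and ARN are placeholders; the function also needs a resource-based permission allowing events.amazonaws.com to invoke it):

# Create a rule that fires once an hour
aws events put-rule \
    --name hourly-lambda-trigger \
    --schedule-expression "rate(1 hour)"

# Attach the Lambda function as the rule's target
aws events put-targets \
    --rule hourly-lambda-trigger \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:my-function"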

Incorrect options:

Amazon S3

SQS

Kinesis

These three AWS services don't have cron capabilities, so these options are incorrect.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html

https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html

Question 18:
Skipped

Your company is shifting towards Elastic Container Service (ECS) to deploy applications. The process should be automated using the AWS CLI to create a service where at least ten instances of a task definition are kept running under the default cluster.

Which of the following commands should be executed?

Explanation

Correct option:

aws ecs create-service --service-name ecs-simple-service --task-definition ecs-demo --desired-count 10

To create a new service, you would use this command, which creates a service called ecs-simple-service in your default region. The service uses the ecs-demo task definition and maintains 10 instantiations of that task.

Incorrect options:

aws ecr create-service --service-name ecs-simple-service --task-definition ecs-demo --desired-count 10 - This command references a different service, Amazon Elastic Container Registry (ECR), which is a fully-managed Docker container registry.

docker-compose create ecs-simple-service - This is a docker command to create containers for a service.

aws ecs run-task --cluster default --task-definition ecs-demo - This is a valid command but used for starting a new task using a specified task definition.

Reference:

https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html

Question 19:
Skipped

The development team at an e-commerce company is preparing for the upcoming Thanksgiving sale. The product manager wants the development team to implement appropriate caching strategy on Amazon ElastiCache to withstand traffic spikes on the website during the sale. A key requirement is to facilitate consistent updates to the product prices and product description, so that the cache never goes out of sync with the backend.

As a Developer Associate, which of the following solutions would you recommend for the given use-case?

Explanation

Correct option:

Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing.

Broadly, you can set up two types of caching strategies:

  1. Lazy Loading

  2. Write-Through

via - https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

Use a caching strategy to write to the backend first and then invalidate the cache

This option is similar to the write-through strategy, wherein the application writes to the backend first and then invalidates the cache. As the cache gets invalidated, the caching engine would then fetch the latest value from the backend, thereby making sure that the product prices and product description stay consistent with the backend.

Incorrect options:

Use a caching strategy to update the cache and the backend at the same time - The cache and the backend cannot be updated at the same time via a single atomic operation as these are two separate systems. Therefore this option is incorrect.

Use a caching strategy to write to the backend first and wait for the cache to expire via TTL - This strategy could work if the TTL is really short. However, for the duration of this TTL, the cache would be out of sync with the backend, hence this option is not correct for the given use-case.

Use a caching strategy to write to the cache directly and sync the backend at a later time - This option is given as a distractor as this strategy is not viable to address the given use-case. The product prices and description on the cache must always stay consistent with the backend. You cannot sync the backend at a later time.

Reference:

https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

Question 20:
Skipped

You are running a video website and provide users with S3 pre-signed URLs allowing your users to securely upload their video content onto your buckets. The average file size uploaded to your buckets is 500MB and you would like your users to efficiently send the content.

What would you recommend doing in the client SDK?

Explanation

Correct option:

Do a multi-part upload

Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
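
For reference, the AWS CLI's high-level s3 commands apply the same idea automatically, and the SDKs expose similar transfer settings; the thresholds are tunable. A sketch with placeholder values:

# Start splitting uploads into parts above 100 MB, in 25 MB chunks
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 25MB

# A 500 MB video is now uploaded as parallel parts
aws s3 cp my-video.mp4 s3://my-bucket/uploads/my-video.mp4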

Incorrect options:

Upload as one part - As explained above, if you upload as one part, then you are not maximizing the available bandwidth and are not being efficient.

Use SSE-S3 encryption - Encryption won't help to increase the efficiency of the uploads to S3.

Zip the video file before sending - Video files are binary formats and should already be optimized in size. Applying zip compression on a video file won't help reduce its size. This option has been added as a distractor.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html

Question 21:
Skipped

An EC2 instance has an IAM instance role attached to it, providing it read and write access to the S3 bucket 'my_bucket'. You have tested the IAM instance role and both reads and writes are working. You then remove the IAM role from the EC2 instance and test both read and write again. Writes stopped working but reads are still working.

What is the likely cause of this behavior?

Explanation

Correct option:

The S3 bucket policy authorizes reads

When evaluating whether an EC2 instance can perform an action on S3, AWS considers both the IAM policy attached to the EC2 instance's role and the bucket policy of the S3 bucket: for same-account access, the action is allowed if either policy allows it and neither explicitly denies it.

For the given use-case, as the IAM role has been removed, only the S3 bucket policy remains in effect, and it authorizes reads but not writes.

Here is a great reference blog for understanding the various scenarios for using IAM policy vs S3 bucket policy - https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/

Incorrect options:

The EC2 instance is using cached temporary IAM credentials - As the IAM instance role has been removed, that wouldn't be the case.

Removing an instance role from an EC2 instance can take a few minutes before being active - It is immediately active and even if it wasn't, it wouldn't make sense as we can still do reads but not writes.

When a read is done on a bucket, there's a grace period of 5 minutes to do the same read again - This is not true. Every single request is evaluated against IAM in the AWS model.

Reference:

https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/

Question 22:
Skipped

You were assigned to a project that requires the use of the AWS CLI to build a project with AWS CodeBuild. Your project's root directory includes the buildspec.yml file to run build commands, and you would like your build artifacts to be automatically encrypted at the end.

How should you configure CodeBuild to accomplish this?

Explanation

Correct option:

Specify a KMS key to use

AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications.

For AWS CodeBuild to encrypt its build output artifacts, it needs access to an AWS KMS customer master key (CMK). By default, AWS CodeBuild uses the AWS-managed CMK for Amazon S3 in your AWS account. The following environment variable provides these details:

CODEBUILD_KMS_KEY_ID: The identifier of the AWS KMS key that CodeBuild is using to encrypt the build output artifact (for example, arn:aws:kms:region-ID:account-ID:key/key-ID or alias/key-alias).

Incorrect options:

Use an AWS Lambda Hook - Code hook is used for integration with Lambda and is not relevant for the given use-case.

Use the AWS Encryption SDK - The SDK just makes it easier for you to implement encryption best practices in your application and is not relevant for the given use-case.

Use In-Flight encryption (SSL) - SSL is usually for internet traffic which in this case will be using internal traffic through AWS and is not relevant for the given use-case.

References:

https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html

https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html

Question 23:
Skipped

You would like to deploy a Lambda function globally so that requests are filtered at the AWS edge locations.

Which Lambda deployment mode do you need?

Explanation

Correct option:

Use a Lambda@Edge

Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency. With Lambda@Edge, you don't have to provision or manage infrastructure in multiple locations around the world. You pay only for the compute time you consume - there is no charge when your code is not running.

How Lambda@Edge Works: via - https://aws.amazon.com/lambda/edge/

Incorrect options:

Use a Global DynamoDB table as a Lambda source - A Lambda function can run off of DynamoDB Streams using Event Sources, however, it does not deploy the Lambda function globally.

Deploy Lambda in a Global VPC - This option is a distractor as there is no concept of Global VPC in AWS.

Deploy Lambda in S3 - You can't deploy Lambda in S3 but can have your Lambda functions triggered by S3 events.

Reference:

https://aws.amazon.com/lambda/edge/

Question 24:
Skipped

Your company is new to cloud computing and would like to host a static HTML5 website on the cloud and be able to access it via domain www.mycompany.com. You have created a bucket in Amazon Simple Storage Service (S3), enabled website hosting, and set the index.html as the default page. Finally, you create an Alias record in Amazon Route 53 that points to the S3 website endpoint of your S3 bucket.

When you test the domain www.mycompany.com you get the following error: 'HTTP response code 403 (Access Denied)'. What can you do to resolve this error?

Explanation

Correct option:

Create a bucket policy

Bucket policy is an access policy option available for you to grant permission to your Amazon S3 resources. It uses JSON-based access policy language.

If you want to configure an existing bucket as a static website that has public access, you must edit block public access settings for that bucket. You may also have to edit your account-level block public access settings.

via - https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
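
A sketch of the kind of bucket policy that grants the public read access needed for website hosting (the bucket name is assumed to match the domain; adjust it to your own bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.mycompany.com/*"
    }
  ]
}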

Incorrect options:

Create an IAM role - This will not help because IAM roles are attached to services and in this case, we have public users.

Enable CORS - CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. Here we are not dealing with cross-domain requests.

Enable Encryption - For the most part, encryption does not have an effect on access denied/forbidden errors. On the website endpoint, if a user requests an object that doesn't exist, Amazon S3 returns HTTP response code 404 (Not Found). If the object exists but you haven't granted read permission on it, the website endpoint returns HTTP response code 403 (Access Denied).

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteAccessPermissionsReqd.html

Question 25:
Skipped

You would like your Elastic Beanstalk environment to expose an HTTPS endpoint and an HTTP endpoint. The HTTPS endpoint should be used to get in-flight encryption between your clients and your web servers, while the HTTP endpoint should only be used to redirect traffic to HTTPS and support URLs starting with http://.

What must be done to configure this setup? (Select three)

Explanation

Correct options:

Assign an SSL certificate to the Load Balancer

This ensures that the Load Balancer can expose an HTTPS endpoint.

Open up port 80 & port 443

This ensures that the Load Balancer will allow both the HTTP (80) and HTTPS (443) protocol for incoming connections

Configure your EC2 instances to redirect HTTP traffic to HTTPS

This ensures that traffic arriving at the Load Balancer over HTTP is redirected to HTTPS by the EC2 instances before being served, so all traffic that is ultimately served is fully encrypted.

Incorrect options:

Only open up port 80 - This is not correct as it would not allow HTTPS traffic (port 443).

Only open up port 443 - This is not correct as it would not allow HTTP traffic (port 80).

Configure your EC2 instances to redirect HTTPS traffic to HTTP - This is not correct as it would force HTTP traffic, instead of HTTPS.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-httpredirect.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-elb.html

Question 26:
Skipped

Your organization has a single Amazon Simple Storage Service (S3) bucket that contains folders labeled with customer names. Several administrators have IAM access to the S3 bucket and versioning is enabled to easily recover from unintended user actions.

Which of the following statements about versioning is NOT true based on this scenario?

Explanation

Correct option:

Versioning can be enabled only for a specific folder

The versioning state applies to all (never some) of the objects in that bucket. The first time you enable a bucket for versioning, objects in it are thereafter always versioned and given a unique version ID.

Versioning Overview: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

Incorrect options:

Overwriting a file increases its versions - If you overwrite an object (file), it results in a new object version in the bucket. You can always restore the previous version.

Deleting a file is a recoverable operation - Correct, when you delete an object (file), Amazon S3 inserts a delete marker, which becomes the current object version and you can restore the previous version.

Any file that was unversioned before enabling versioning will have the 'null' version - Objects stored in your bucket before you set the versioning state have a version ID of null. Those existing objects in your bucket do not change.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

Question 27:
Skipped

You are using AWS SQS FIFO queues to get the ordering of messages on a per user_id basis.

As a developer, which message parameter should you set the value of user_id to guarantee the ordering?

Explanation

Correct option:

SQS FIFO queues are designed to enhance messaging between applications when the order of operations and events has to be enforced.

via - https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html

MessageGroupId

The message group ID is the tag that specifies that a message belongs to a specific message group. Messages that belong to the same message group are always processed one by one, in a strict order relative to the message group (however, messages that belong to different message groups might be processed out of order).
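
For example, sending a message tagged with the user's ID as the message group (the queue URL and message body are placeholders; the deduplication ID is only needed if content-based deduplication is disabled):

aws sqs send-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/bids.fifo \
    --message-body '{"user_id": "user-1234", "bid": 20}' \
    --message-group-id "user-1234" \
    --message-deduplication-id "bid-0001"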

Incorrect options:

MessageDeduplicationId - The message deduplication ID is the token used for the deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren't delivered during the 5-minute deduplication interval.

MessageOrderId - This is a made-up option and has been added as a distractor.

MessageHash - This is a made-up option and has been added as a distractor.

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagegroupid-property.html

Question 28:
Skipped

Which of the following CLI options will allow you to retrieve a subset of the attributes coming from a DynamoDB scan?

Explanation

Correct option:

--projection-expression

A projection expression is a string that identifies the attributes you want. To retrieve a single attribute, specify its name. For multiple attributes, the names must be comma-separated.

via - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ProjectionExpressions.html

To read data from a table, you use operations such as GetItem, Query, or Scan. DynamoDB returns all of the item attributes by default. To get just some, rather than all of the attributes, use a projection expression.
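
For example, a scan that returns only three attributes (the table and attribute names are placeholders):

aws dynamodb scan \
    --table-name my-table \
    --projection-expression "user_id, bid_amount, created_at"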

Incorrect options:

--filter-expression - If you need to further refine the Query results, you can optionally provide a filter expression. A filter expression determines which items within the Query results should be returned to you. All of the other results are discarded. A filter expression is applied after Query finishes, but before the results are returned. Therefore, a Query will consume the same amount of read capacity, regardless of whether a filter expression is present.

--page-size - You can use the --page-size option to specify that the AWS CLI requests a smaller number of items from each call to the AWS service. The CLI still retrieves the full list but performs a larger number of service API calls in the background and retrieves a smaller number of items with each call.

--max-items - To include fewer items at a time in the AWS CLI output, use the --max-items option. The AWS CLI still handles pagination with the service as described above, but prints out only the number of items at a time that you specify.

Reference:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ProjectionExpressions.html

Question 29:
Skipped

A media company wants to migrate a video editing service to Amazon EC2 while following security best practices. The videos are sourced and read from a non-public S3 bucket.

As a Developer Associate, which of the following solutions would you recommend for the given use-case?

Explanation

Correct option:

Set up an EC2 service role with read-only permissions for the S3 bucket and attach the role to the EC2 instance profile

As an AWS security best practice, you should not create an IAM user and pass the user's credentials to the application or embed the credentials in the application. Instead, create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance. When an application uses these credentials in AWS, it can perform all of the operations that are allowed by the policies attached to the role.

So for the given use-case, you should create an IAM role with read-only permissions for the S3 bucket and apply it to the EC2 instance profile.

via - https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html

Incorrect options:

Set up an IAM user with read-only permissions for the S3 bucket. Configure AWS credentials for this user via AWS CLI on the EC2 instance

Set up an IAM user with read-only permissions for the S3 bucket. Configure the IAM user credentials in the user data of the EC2 instance

As mentioned in the explanation above, it is dangerous to pass an IAM user's credentials to the application or embed the credentials in the application or even configure these credentials in the user data of the EC2 instance. So both these options are incorrect.

Set up an S3 service role with read-only permissions for the S3 bucket and attach the role to the EC2 instance profile - As the application is running on EC2 instances, you need to set up an EC2 service role, not an S3 service role.

Reference:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html

Question 30:
Skipped

A company ingests real-time data into its on-premises data center and subsequently a daily data feed is compressed into a single file and uploaded on Amazon S3 for backup. The typical compressed file size is around 2 GB.

Which of the following is the fastest way to upload the daily compressed file into S3?

Explanation

Correct option:

Upload the compressed file using multipart upload with S3 transfer acceleration

Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.

Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. If you're uploading large objects over a stable high-bandwidth network, use multipart uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance. If you're uploading over a spotty network, use multipart uploading to increase resiliency to network errors by avoiding upload restarts.

Incorrect options:

Upload the compressed file in a single operation - In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation. Multipart upload provides improved throughput - you can upload parts in parallel to improve throughput. Therefore, this option is not correct.

Upload the compressed file using multipart upload - Although using multipart upload would certainly speed up the process, combining with S3 transfer acceleration would further improve the transfer speed. Therefore just using multipart upload is not the correct option.

FTP the compressed file into an EC2 instance that runs in the same region as the S3 bucket. Then transfer the file from the EC2 instance into the S3 bucket - This is a roundabout process of getting the file into S3 and added as a distractor. Although it is technically feasible to follow this process, it would involve a lot of scripting and certainly would not be the fastest way to get the file into S3.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html

Question 31:
Skipped

A developer has created a new Application Load Balancer but has not registered any targets with the target groups.

Which of the following errors would be generated by the Load Balancer?

Explanation

Correct option:

HTTP 503: Service unavailable

The Load Balancer generates the HTTP 503: Service unavailable error when the target groups for the load balancer have no registered targets.

Incorrect options:

HTTP 500: Internal server error

HTTP 502: Bad gateway

HTTP 504: Gateway timeout

Here is a summary of the possible causes for these error types:

via - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-troubleshooting.html

Reference:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-troubleshooting.html

Question 32:
Skipped

You are storing bids information on your betting application and you would like to automatically expire DynamoDB table data after one week.

What should you use?

Explanation

Correct option:

Use TTL

Time To Live (TTL) for DynamoDB allows you to define when items in a table expire so that they can be automatically deleted from the database. TTL is provided at no extra cost as a way to reduce storage usage and reduce the cost of storing irrelevant data without using provisioned throughput. With TTL enabled on a table, you can set a timestamp for deletion on a per-item basis, allowing you to limit storage usage to only those records that are relevant.
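
A sketch of enabling TTL on the table (the table and attribute names are placeholders); each item then carries an epoch timestamp set to one week after it is written:

aws dynamodb update-time-to-live \
    --table-name Bids \
    --time-to-live-specification "Enabled=true, AttributeName=expires_at"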

Incorrect options:

Use DynamoDB Streams - These help you get a changelog of your DynamoDB table but won't help you delete expired data. Note that data expired using a TTL will appear as an event in your DynamoDB streams.

Use DAX - Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement: from milliseconds to microseconds: even at millions of requests per second. This is a caching technology for your DynamoDB tables.

Use a Lambda function - This could work but would require setting up indexes, queries, or scans, as well as triggering the function often enough using CloudWatch Events. This band-aid solution would never be as good as using the TTL feature in DynamoDB.

Reference:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html

Question 33:
Skipped

You are creating a web application in which users can follow each other. Some users will be more popular than others and thus their data will be requested very often. Currently, the user data sits in RDS and it has been recommended by your Developer to use ElastiCache as a caching layer to improve the read performance. The whole dataset of users cannot sit in ElastiCache without incurring tremendous costs and therefore you would like to cache only the most often requested users profiles there. As your website is high traffic, it is accepted to have stale data for users for a while, as long as the stale data is less than a minute old.

What caching strategy do you recommend implementing?

Explanation

Correct option

Use a Lazy Loading strategy with TTL

Lazy loading is a caching strategy that loads data into the cache only when necessary. Whenever your application requests data, it first requests the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data doesn't exist in the cache or has expired, your application requests the data from your data store. Your datastore then returns the data to your application.

In this case, data that is actively requested by users will be cached in ElastiCache, and thanks to the TTL, we can expire that data after a minute to limit the data staleness.

via - https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html#Strategies.LazyLoading
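
A minimal lazy-loading sketch in Python, assuming the redis-py client, a hypothetical ElastiCache Redis endpoint, and a hypothetical db.load_user helper that reads from RDS; the 60-second TTL caps staleness at the accepted one minute:

import json
import redis  # assumes the redis-py client and an ElastiCache Redis endpoint

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_user_profile(user_id, db):
    """Lazy loading: check the cache first, fall back to the database on a miss."""
    cached = cache.get(f"user:{user_id}")
    if cached is not None:
        return json.loads(cached)                 # cache hit
    profile = db.load_user(user_id)               # cache miss: read from RDS (hypothetical helper)
    cache.setex(f"user:{user_id}", 60, json.dumps(profile))  # 60-second TTL limits staleness
    return profile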

Incorrect option:

Use a Lazy Loading strategy without TTL - This fits the read requirements, but won't help expire stale data, so we need a TTL.

Use a Write Through strategy with TTL

Use a Write Through strategy without TTL

The problem with both write-through options is that we would fill up the cache with unnecessary data, and as mentioned in the question we don't have enough space in the cache to fit the whole dataset. Therefore we can't use a write-through strategy.

Reference:

https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html#Strategies.LazyLoading

Question 34:
Skipped

Your AWS account is now growing to 200 users and you would like to provide each of these users a personal space in the S3 bucket 'my_company_space' with the prefix /home/<username>, where they have read/write access.

How can you do this efficiently?

Explanation

Correct option:

Create one customer-managed policy with policy variables and attach it to a group of all users

You can assign access to "dynamically calculated resources" by using policy variables, a feature that lets you specify placeholders in a policy. When the policy is evaluated, the policy variables are replaced with values that come from the context of the request itself.

This is ideal when you want to generalize the policy so it works for many users without having to make a unique copy of the policy for each user. For example, consider writing a policy to allow each user to have access to his or her own objects in an Amazon S3 bucket. Instead of creating a separate policy for each user that explicitly specifies the user's name as part of the resource, create a single group policy that works for any user in that group.

via - https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html
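
A hedged sketch of what such a policy could look like, created with boto3; the policy name, group name, and statements are illustrative, and the bucket name comes from the question. IAM resolves ${aws:username} per request:

import json
import boto3

iam = boto3.client("iam")

# One policy for the whole group; ${aws:username} is resolved per-request by IAM
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my_company_space/home/${aws:username}/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my_company_space",
            "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}},
        },
    ],
}

response = iam.create_policy(PolicyName="HomeFolderAccess", PolicyDocument=json.dumps(policy))

# Attach the single policy to a group containing all users (group name is hypothetical)
iam.attach_group_policy(GroupName="all-users", PolicyArn=response["Policy"]["Arn"])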

Incorrect options:

Create an S3 bucket policy and change it as users are added and removed

This doesn't scale, and the S3 bucket policy size may be maxed out. IAM policies bump up against a size limit (up to 2 KB for users, 5 KB for groups, and 10 KB for roles), while S3 supports bucket policies of up to 20 KB.

Create inline policies for each user as they are onboarded - This would work but doesn't scale and it's inefficient.

Create one customer-managed policy per user and attach them to the relevant users - This would work but doesn't scale and would be a nightmare to manage.

Reference:

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html

Question 35:
Skipped

Your Lambda function processes files for your customers and as part of that process, it creates a lot of intermediary files it needs to store on its disk and then discard.

What is the best way to store temporary files for your Lambda functions that will be discarded when the function stops running?

Explanation

Correct option:

Use the local directory /tmp

This is 512MB of temporary space you can use for your Lambda functions.
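
A minimal sketch of a handler using /tmp for scratch files (the file name and payload are arbitrary):

import os

def lambda_handler(event, context):
    # /tmp is the function's ephemeral scratch space (512 MB by default)
    scratch_path = os.path.join("/tmp", "intermediate.dat")
    with open(scratch_path, "wb") as f:
        f.write(b"intermediate results go here")
    size = os.path.getsize(scratch_path)
    os.remove(scratch_path)  # clean up so warm invocations start fresh
    return {"bytes_written": size}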

Incorrect options:

Create a tmp/ directory in the source zip file and use it - This option has been added as a distractor, as you can't access a directory within a zip file.

Use the local directory /opt - This option has been added as a distractor. /opt is where Lambda layer content is mounted and is read-only, so it cannot be used for writing temporary files.

Use an S3 bucket - Objects stored in S3 persist after the function finishes running (and even after the function is deleted), so this is not temporary storage that gets discarded; this option is incorrect.

Reference:

https://docs.aws.amazon.com/lambda/latest/dg/limits.html

Question 36:
Skipped

A company has recently launched a new gaming application that the users are adopting rapidly. The company uses RDS MySQL as the database. The development team wants an urgent solution to this issue where the rapidly increasing workload might exceed the available database storage.

As a developer associate, which of the following solutions would you recommend so that it requires minimum development effort to address this requirement?

Explanation

Correct option:

Enable storage auto-scaling for RDS MySQL

If your workload is unpredictable, you can enable storage autoscaling for an Amazon RDS DB instance. With storage autoscaling enabled, when Amazon RDS detects that you are running out of free database space it automatically scales up your storage. Amazon RDS starts a storage modification for an autoscaling-enabled DB instance when these factors apply:

Free available space is less than 10 percent of the allocated storage.

The low-storage condition lasts at least five minutes.

At least six hours have passed since the last storage modification.

The maximum storage threshold is the limit that you set for autoscaling the DB instance. You can't set the maximum storage threshold for autoscaling-enabled instances to a value greater than the maximum allocated storage.
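
For illustration, storage autoscaling can be enabled on an existing instance by setting a maximum storage threshold, for example with boto3 (the DB identifier and the 1000 GiB ceiling below are hypothetical):

import boto3

rds = boto3.client("rds")

# Enable storage autoscaling by setting a maximum storage threshold
rds.modify_db_instance(
    DBInstanceIdentifier="gaming-app-mysql",
    MaxAllocatedStorage=1000,   # autoscaling ceiling in GiB; must exceed the current allocated storage
    ApplyImmediately=True,
)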

Incorrect options:

Migrate RDS MySQL to Aurora which offers storage auto-scaling - Although Aurora offers automatic storage scaling, this option is ruled out since it involves significant systems administration effort to migrate from RDS MySQL to Aurora. It is much easier to just enable storage auto-scaling for RDS MySQL.

Migrate RDS MySQL database to DynamoDB which automatically allocates storage space when required - This option is ruled out since DynamoDB is a NoSQL database which implies significant development effort to change the application logic to connect and query data from the underlying database. It is much easier to just enable storage auto-scaling for RDS MySQL.

Create read replica for RDS MySQL - Read replicas make it easy to take advantage of supported engines' built-in replication functionality to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create multiple read replicas for a given source DB Instance and distribute your application’s read traffic amongst them. This option acts as a distractor as read replicas cannot help to automatically scale storage for the primary database.

Reference:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html

Question 37:
Skipped

You are running a public DNS service on an EC2 instance where the DNS name is pointing to the IP address of the instance. You wish to upgrade your DNS service but would like to do it without any downtime.

Which of the following options will help you accomplish this?

Explanation

Correct option:

Route 53 is a DNS service managed by AWS, but nothing prevents you from running your own DNS server (it's just software) on an EC2 instance. The trick in this question is that it's about EC2 running software that needs a fixed public IP, and not about Route 53 at all.

Elastic IP

A public DNS service is identified by a fixed public IP, so you need to use an Elastic IP; the same address can later be moved to the upgraded instance, avoiding downtime.
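
A minimal boto3 sketch of allocating an Elastic IP and associating it with the instance running the DNS software (the instance ID is hypothetical); during the upgrade, the same address can simply be re-associated with the new instance:

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and attach it to the DNS instance
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
)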

Incorrect options:

Create a Load Balancer and an auto-scaling group - Classic and Application Load Balancers do not provide a fixed IP address; instead, they provide a DNS name, so this option is ruled out.

Provide a static private IP - If you provide a private IP it will not be accessible from the internet, so this option is incorrect.

Use Route 53 - Route 53 is a DNS service from AWS but the use-case talks about offering a DNS service using an EC2 instance, so this option is incorrect.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-eips-associating-different

Question 38:
Skipped

You would like to have a one-stop dashboard for all the CI/CD needs of one of your projects. You don't need heavy control of the individual configuration of each component in your CI/CD, but need to be able to get a holistic view of your projects.

Which service do you recommend?

Explanation

Correct option:

CodeStar

AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, allowing you to easily manage access and add owners, contributors, and viewers to your projects. Each AWS CodeStar project comes with a project management dashboard, including an integrated issue tracking capability powered by Atlassian JIRA Software. With the AWS CodeStar project dashboard, you can easily track progress across your entire software development process, from your backlog of work items to teams’ recent code deployments.

Incorrect options:

CodeBuild

CodeDeploy

CodePipeline

All these options are individual services encompassed by CodeStar when you deploy a project. They have to be used individually and don't provide a unified "project" view.

Reference:

https://aws.amazon.com/codestar/

Question 39:
Skipped

You need to load SSL certificates onto your Load Balancers and also have EC2 instances dynamically retrieve them when needed for service to service two-way TLS communication.

What service should you use to centrally manage and automatically renew these SSL certificates?

Explanation

Correct option:

ACM

AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.

Incorrect options:

S3 - This is used for object storage. Although you could store SSL certificates on S3, it would not centrally manage or automatically renew them, so this wouldn't be an efficient solution.

KMS - AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications.

IAM - IAM service is used to manage users, groups, roles and policies. Use IAM as a certificate manager only when you must support HTTPS connections in a Region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all Regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM.

Reference:

https://aws.amazon.com/certificate-manager/

Question 40:
Skipped

You are a developer working on AWS Lambda functions that are triggered by Amazon API Gateway and would like to perform testing on a low volume of traffic for new API versions.

Which of the following features will accomplish this task?

Explanation

Correct option:

Canary Deployment

In a canary release deployment, total API traffic is separated at random into a production release and a canary release with a preconfigured ratio. Typically, the canary release receives a small percentage of API traffic and the production release takes up the rest. The updated API features are only visible to API traffic through the canary. You can adjust the canary traffic percentage to optimize test coverage or performance.

via - https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
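
As a sketch, a canary can be configured when creating a deployment via boto3; the REST API ID, stage name, and traffic percentage below are illustrative:

import boto3

apigw = boto3.client("apigateway")

# Deploy the new API version as a canary receiving 10% of the stage's traffic
apigw.create_deployment(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    canarySettings={
        "percentTraffic": 10.0,   # small slice of traffic for the new version
        "useStageCache": False,
    },
)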

Incorrect options:

Stage Variables - They act like environment variables and can be used in your API setup.

Mapping Templates - It's a script that maps the payload from a method request to the corresponding integration request, and also maps the integration response to the corresponding method response.

Custom Authorizers - Used for authentication purposes and must return AWS Identity and Access Management (IAM) policies.

Reference:

https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html

Question 41:
Skipped

A security company is requiring all developers to perform server-side encryption with customer-provided encryption keys when performing operations in AWS S3. Developers should write software with C# using the AWS SDK and implement the requirement in the PUT, GET, Head, and Copy operations.

Which of the following encryption methods meets this requirement?

Explanation

Correct option:

SSE-C

You have the following options for protecting data at rest in Amazon S3:

Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects.

Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

For the given use-case, the company wants to manage the encryption keys via its custom application and let S3 manage the encryption, therefore you must use Server-Side Encryption with Customer-Provided Keys (SSE-C).

Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your encryption keys. With the encryption key you provide as part of your request, Amazon S3 manages both the encryption, as it writes to disks, and decryption, when you access your objects.

Please review these three options for Server Side Encryption on S3: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
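
Although the question mentions the C# SDK, here is a hedged sketch of the same SSE-C request parameters using Python (boto3); the bucket, key, and randomly generated customer key are illustrative:

import os
import boto3

s3 = boto3.client("s3")
customer_key = os.urandom(32)   # 256-bit key managed by the customer, not by AWS

# PUT: S3 encrypts the object server-side with the key you supply, then discards the key
s3.put_object(
    Bucket="example-bucket",
    Key="report.csv",
    Body=b"col1,col2\n1,2\n",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# GET: the same key must be supplied again to decrypt the object
obj = s3.get_object(
    Bucket="example-bucket",
    Key="report.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)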

Incorrect options:

SSE-KMS - Server-Side Encryption with Customer Master Keys (CMKs) stored in AWS Key Management Service (SSE-KMS) is similar to SSE-S3. SSE-KMS provides you with an audit trail that shows when your CMK was used and by whom. Additionally, you can create and manage customer-managed CMKs or use AWS managed CMKs that are unique to you, your service, and your Region.

Client-Side Encryption - You can encrypt the data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

SSE-S3 - When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. So this option is incorrect.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html

Question 42:
Skipped

An IT company leverages CodePipeline to automate its release pipelines. The development team wants to write a Lambda function that will send notifications for state changes within the pipeline.

As a Developer Associate, which steps would you suggest to associate the Lambda function with the event source?

Explanation

Correct option:

Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams.

CloudWatch Events Key Concepts: via - https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html

Set up an Amazon CloudWatch Events rule that uses CodePipeline as an event source with the target as the Lambda function

You can use Amazon CloudWatch Events to detect and react to changes in the state of a pipeline, stage, or action. Then, based on rules you create, CloudWatch Events invokes one or more target actions when a pipeline, stage, or action enters the state you specify in a rule. For the given use-case, you can set up a rule that detects pipeline changes and invokes an AWS Lambda function.

Amazon CloudWatch Events With CodePipeline: https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html
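
A hedged boto3 sketch of the setup; the rule name, Lambda function name, and ARNs are hypothetical:

import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Rule matching CodePipeline state-change events
events.put_rule(
    Name="codepipeline-state-changes",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
    }),
)

# Point the rule at the notification Lambda function
events.put_targets(
    Rule="codepipeline-state-changes",
    Targets=[{"Id": "notify-lambda", "Arn": "arn:aws:lambda:us-east-1:123456789012:function:notify"}],
)

# Allow CloudWatch Events to invoke the function
lambda_client.add_permission(
    FunctionName="notify",
    StatementId="allow-cw-events",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
)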

Incorrect options:

Set up an Amazon CloudWatch alarm that monitors status changes in Code Pipeline and triggers the Lambda function - As mentioned in the explanation above, you need to use a CloudWatch event and not CloudWatch alarm for this use-case.

Use the Lambda console to configure a trigger that invokes the Lambda function with CodePipeline as the event source - You cannot create a trigger with CodePipeline as the event source via the Lambda Console.

Use the CodePipeline console to set up a trigger for the Lambda function - CodePipeline console cannot be used to configure a trigger for a Lambda function.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html

Question 43:
Skipped

Applications running on EC2 instances process messages from an SQS queue but sometimes they experience errors due to messages not being processed.

To isolate the messages, which option will help with further debugging?

Explanation

Correct option:

Implement a Dead Letter Queue

Dead-letter queues can be used by other queues (source queues) as a target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn't succeed.

Sometimes, messages can’t be processed because of a variety of possible issues, such as when a user comments on a story but it remains unprocessed because the original story itself is deleted by the author while the comments were being posted. In such a case, the dead-letter queue can be used to handle message processing failures.

How do dead-letter queues work? via - https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html

Use-cases for dead-letter queues: via - https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
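
A minimal boto3 sketch of wiring a dead-letter queue to a source queue via a redrive policy (the queue names and source queue URL are hypothetical):

import json
import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue and read back its ARN
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Attach a redrive policy to the source queue: after 3 failed receives,
# SQS moves the message to the DLQ where it can be inspected in isolation
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",  # hypothetical source queue
    Attributes={
        "RedrivePolicy": json.dumps({"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"})
    },
)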

Incorrect options:

Use DeleteMessage - This API call deletes the message in the queue but does not help you find the issue.

Reduce the VisibilityTimeout - Amazon SQS uses a visibility timeout to prevent other consumers from receiving and processing the same message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. If you reduce the VisibilityTimeout, other consumers will receive the failing message again even sooner, which does not help with isolating it for debugging.

Increase the VisibilityTimeout - It won't help because you don't need more time but rather an isolated place to debug.

Reference:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html

Question 44:
Skipped

A company's e-commerce application becomes slow when traffic spikes. The application has a three-tier architecture (web, application and database tier) that uses synchronous transactions. The development team at the company has identified certain bottlenecks in the application tier and it is looking for a long term solution to improve the application's performance.

As a developer associate, which of the following solutions would you suggest to meet the required application response times while accounting for any traffic spikes?

Explanation

Correct option:

Leverage horizontal scaling for the web and application tiers by using Auto Scaling groups and Application Load Balancer - A horizontally scalable system is one that can increase capacity by adding more computers to the system. This is in contrast to a vertically scalable system, which is constrained to running its processes on only one computer; in such systems, the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage.

Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.

Elastic Load Balancing is used to automatically distribute your incoming application traffic across all the EC2 instances that you are running. You can use Elastic Load Balancing to manage incoming requests by optimally routing traffic so that no one instance is overwhelmed.

To use Elastic Load Balancing with your Auto Scaling group, you attach the load balancer to your Auto Scaling group to register the group with the load balancer. Your load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group.

When you use Elastic Load Balancing with your Auto Scaling group, it's not necessary to register individual EC2 instances with the load balancer. Instances that are launched by your Auto Scaling group are automatically registered with the load balancer. Likewise, instances that are terminated by your Auto Scaling group are automatically deregistered from the load balancer.

This option will require fewer design changes, it's mostly configuration changes and the ability for the web/application tier to be able to communicate across instances. Hence, this is the right solution for the current use case.
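
As an illustrative sketch, an existing Auto Scaling group can be registered with an ALB target group using boto3 (the group name and target group ARN are hypothetical):

import boto3

autoscaling = boto3.client("autoscaling")

# Register the Auto Scaling group with an ALB target group so that instances
# launched by the group are automatically added to the load balancer
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="app-tier-asg",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app-tier/abc123"
    ],
)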

Incorrect options:

Leverage SQS with asynchronous AWS Lambda calls to decouple the application and data tiers - This is incorrect as it relies on asynchronous AWS Lambda calls, while the application uses synchronous transactions; moving to this model would require a significant redesign of the application rather than a configuration-level change.

Leverage horizontal scaling for the application's persistence layer by adding Oracle RAC on AWS - The issue is not with the persistence layer at all. This option has only been used as a distractor.

You can deploy scalable Oracle Real Application Clusters (RAC) on Amazon EC2 using Amazon Machine Images (AMI) on AWS Marketplace. Oracle RAC is a shared-everything database cluster technology from Oracle that allows a single database (a set of data files) to be concurrently accessed and served by one or many database server instances.

Leverage vertical scaling for the application instance by provisioning a larger Amazon EC2 instance size - Vertical scaling is just a band-aid solution and will not work long term.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html

https://aws.amazon.com/blogs/compute/operating-lambda-understanding-event-driven-architecture-part-1/

Question 45:
Skipped

A data analytics company ingests a large number of messages and stores them in an RDS database using Lambda. Because of the increased payload size, it is taking more than 15 minutes to process each message.

As a Developer Associate, which of the following options would you recommend to process each message in the MOST scalable way?

Explanation

Correct option:

Provision EC2 instances in an Auto Scaling group to poll the messages from an SQS queue

As each message takes more than 15 minutes to process, which exceeds the Lambda timeout limit of 15 minutes, the development team cannot use Lambda for message processing. To build a scalable solution, the EC2 instances must be provisioned via an Auto Scaling group to handle variations in the message processing workload.

Amazon EC2 Auto Scaling Overview: via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
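
A minimal sketch of the worker code each EC2 instance in the Auto Scaling group could run; the queue URL and the process helper are hypothetical, and the queue's visibility timeout should be set longer than the processing time:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"  # hypothetical

def process(message_body):
    ...  # long-running processing (>15 minutes), then write results to RDS

while True:
    # Long polling reduces empty responses and API calls
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete only after successful processing so failures are retried
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])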

Incorrect options:

Provision an EC2 instance to poll the messages from an SQS queue - Just using a single EC2 instance may not be sufficient to handle a sudden spike in the number of incoming messages.

Contact AWS Support to increase the Lambda timeout to 60 minutes - AWS Support cannot increase the Lambda timeout beyond its hard upper limit of 15 minutes.

Use DynamoDB instead of RDS as database - This option has been added as a distractor, as changing the database would have no impact on the Lambda timeout while processing the message.

Reference:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html

Question 46:
Skipped

You are working with a t2.small instance bastion host that has the AWS CLI installed to help manage all the AWS services installed on it. You would like to know the security group and the instance id of the current instance.

Which of the following will help you fetch the needed information?

Explanation

Correct option:

Query the metadata at http://169.254.169.254/latest/meta-data - Because your instance metadata is available from your running instance, you do not need to use the Amazon EC2 console or the AWS CLI. This can be helpful when you're writing scripts to run from your instance. For example, you can access the local IP address of your instance from instance metadata to manage a connection to an external application. To view all categories of instance metadata from within a running instance, use the following URI - http://169.254.169.254/latest/meta-data/. The IP address 169.254.169.254 is a link-local address and is valid only from the instance. All instance metadata is returned as text (HTTP content type text/plain).
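
For illustration, a small Python snippet that an instance-local script could use to read these values (instances enforcing IMDSv2 additionally require a session token header, not shown here):

import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/"

def metadata(path):
    # Works only from within the instance; the address is link-local
    with urllib.request.urlopen(BASE + path) as resp:
        return resp.read().decode()

print("instance id:", metadata("instance-id"))
print("security groups:", metadata("security-groups"))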

Incorrect options:

Create an IAM role and attach it to your EC2 instance that helps you perform a 'describe' API call - The AWS CLI describe-instances API call needs the instance ID as an input. So, this will not work for the current use case wherein we do not know the instance ID.

Query the user data at http://169.254.169.254/latest/user-data - This address retrieves the user data that you specified when launching your instance.

Query the user data at http://254.169.254.169/latest/meta-data - The IP address specified is wrong.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html

https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-instances.html

Question 47:
Skipped

You are a cloud security engineer working for a popular cyber-forensics company that offers vulnerability scanning solutions to government contractors. The scanning solutions are integrated with AWS resources to monitor EC2 and S3 API calls which then display results to users on an analytical dashboard.

Which of the following AWS services makes this possible?

Explanation

Correct option:

CloudTrail

With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. You can use AWS CloudTrail to answer questions such as - “Who made an API call to modify this resource?”. CloudTrail provides event history of your AWS account activity thereby enabling governance, compliance, operational auditing, and risk auditing of your AWS account.

How CloudTrail Works: via - https://aws.amazon.com/cloudtrail/

CloudTrail captures a subset of API calls for Amazon S3 as events, including calls from the Amazon S3 console and code calls to the Amazon S3 APIs. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Amazon S3. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in Event history.
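
As a sketch, recent events from the CloudTrail event history can also be queried programmatically with boto3; the event name filter below is illustrative:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Pull recent management events for a specific API call from the event history
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    MaxResults=10,
)
for event in resp["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))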

Incorrect options:

S3 Access Logs - This captures records of access attempts made against objects in your bucket. Logs contain info for bucket request, time, remote IP, request-URI, and more. This option does not address the use-case mentioned in the question.

VPC Flow Logs - VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. This option does not address the use-case mentioned in the question.

IAM - IAM allows you to manage users, groups, roles, and permissions, but it is not used for auditing API calls, so this is incorrect.

References:

https://aws.amazon.com/cloudtrail/

https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html

Question 48:
Skipped

You have been collecting AWS X-Ray traces across multiple applications and you would now like to index your X-Ray traces to search and filter through them efficiently.

What should you use in your instrumentation?

Explanation

Correct option:

Annotations

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

You can use X-Ray to collect data across AWS Accounts. The X-Ray agent can assume a role to publish data into an account different from the one in which it is running. This enables you to publish data from various components of your application into a central account.

How X-Ray Works: via - https://aws.amazon.com/xray/

Annotations are simple key-value pairs that are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API.

X-Ray indexes up to 50 annotations per trace.
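
A minimal Python sketch using the X-Ray SDK, assuming an active segment (for example, inside an instrumented request); the annotation and metadata keys are illustrative:

from aws_xray_sdk.core import xray_recorder

# Record an indexed annotation (searchable with filter expressions) alongside
# non-indexed metadata inside a custom subsegment
subsegment = xray_recorder.begin_subsegment("process_order")
subsegment.put_annotation("customer_tier", "gold")    # indexed: usable in filter expressions
subsegment.put_metadata("payload_size_bytes", 1843)   # stored with the trace, not indexed
xray_recorder.end_subsegment()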

Incorrect options:

Metadata - Metadata are key-value pairs with values of any type, including objects and lists, but that is not indexed. Use metadata to record data you want to store in the trace but don't need to use for searching traces.

Segments - The computing resources running your application logic send data about their work as segments. A segment provides the resource's name, details about the request, and details about the work done.

Sampling - To ensure efficient tracing and provide a representative sample of the requests that your application serves, the X-Ray SDK applies a sampling algorithm to determine which requests get traced. By default, the X-Ray SDK records the first request each second, and five percent of any additional requests.

Reference:

https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html

Question 49:
Skipped

You are using AWS SQS FIFO queues to guarantee the ordering of messages on a per user_id basis. On top of this, you would like to make sure that duplicate messages are not sent to SQS, as duplicates would cause application failures.

As a developer, which message parameter should you set for deduplicating messages?

Explanation

Correct option:

AWS FIFO queues are designed to enhance messaging between applications when the order of operations and events has to be enforced.

via - https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html

MessageDeduplicationId

The message deduplication ID is the token used for the deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren't delivered during the 5-minute deduplication interval.
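
A hedged boto3 sketch of sending to a FIFO queue with both parameters set; the queue URL, group ID, and deduplication ID values are illustrative:

import boto3

sqs = boto3.client("sqs")

# The group ID preserves ordering per user, and the deduplication ID suppresses
# duplicates within the 5-minute deduplication interval
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/bets.fifo",  # hypothetical queue
    MessageBody='{"user_id": "u-42", "bet": 100}',
    MessageGroupId="u-42",
    MessageDeduplicationId="bet-7d9c1e",  # e.g. a unique business key or content hash
)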

Incorrect options:

MessageGroupId - The message group ID is the tag that specifies that a message belongs to a specific message group. Messages that belong to the same message group are always processed one by one, in a strict order relative to the message group (however, messages that belong to different message groups might be processed out of order).

ReceiveRequestAttemptId - This parameter applies only to FIFO (first-in-first-out) queues. The token is used for deduplication of ReceiveMessage calls. If a networking issue occurs after a ReceiveMessage action, and instead of a response you receive a generic error, you can retry the same action with an identical ReceiveRequestAttemptId to retrieve the same set of messages, even if their visibility timeout has not yet expired.

ContentBasedDeduplication - This is not a message parameter, but a queue setting. Enable content-based deduplication to instruct Amazon SQS to use an SHA-256 hash to generate the message deduplication ID using the body of the message - but not the attributes of the message.

Reference:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagededuplicationid-property.html

Question 50:
Skipped

As part of your video processing application, you are looking to perform a set of repetitive and scheduled tasks asynchronously. Your application is deployed on Elastic Beanstalk.

Which Elastic Beanstalk environment should you set up for performing the repetitive tasks?

Explanation

Correct option:

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

Elastic BeanStalk Key Concepts: via - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html

Setup a Worker environment and a cron.yaml file

An environment is a collection of AWS resources running an application version. An environment that pulls tasks from an Amazon Simple Queue Service (Amazon SQS) queue runs in a worker environment tier.

If your AWS Elastic Beanstalk application performs operations or workflows that take a long time to complete, you can offload those tasks to a dedicated worker environment. Decoupling your web application front end from a process that performs blocking operations is a common way to ensure that your application stays responsive under load.

For a worker environment, you need a cron.yaml file to define the cron jobs and do repetitive tasks.

via - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html

Incorrect options:

Setup a Web Server environment and a cron.yaml file

Setup a Worker environment and a .ebextensions file

Setup a Web Server environment and a .ebextensions file

.ebextensions/ won't work to define cron jobs, and Web Server environments cannot be set up to perform repetitive and scheduled tasks. So these three options are incorrect.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html

Question 51:
Skipped

A financial services company has developed a REST API which is deployed in an Auto Scaling Group behind an Application Load Balancer. The API stores the data payload in DynamoDB and the static content is served through S3. Upon analyzing the usage pattern, it's found that 80% of the read requests are shared across all users.

As a Developer Associate, how can you improve the application performance while optimizing the cost with the least development effort?

Explanation

Correct option:

Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3

DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB that delivers up to a 10 times performance improvement—from milliseconds to microseconds—even at millions of requests per second.

DAX is tightly integrated with DynamoDB—you simply provision a DAX cluster, use the DAX client SDK to point your existing DynamoDB API calls at the DAX cluster, and let DAX handle the rest. Because DAX is API-compatible with DynamoDB, you don't have to make any functional application code changes. DAX is used to natively cache DynamoDB reads.

CloudFront is a content delivery network (CDN) service that delivers static and dynamic web content, video streams, and APIs around the world, securely and at scale. By design, delivering data out of CloudFront can be more cost-effective than delivering it from S3 directly to your users.

When a user requests content that you serve with CloudFront, their request is routed to a nearby Edge Location. If CloudFront has a cached copy of the requested file, CloudFront delivers it to the user, providing a fast (low-latency) response. If the file they’ve requested isn’t yet cached, CloudFront retrieves it from your origin – for example, the S3 bucket where you’ve stored your content.

So, you can use CloudFront to improve application performance to serve static content from S3.

Incorrect options:

Enable ElastiCache Redis for DynamoDB and CloudFront for S3

Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store.

ElastiCache for Redis Overview: via - https://aws.amazon.com/elasticache/redis/

Although, you can integrate Redis with DynamoDB, it's much more involved from a development perspective. For the given use-case, you should use DAX which is a much better fit.

Enable DAX for DynamoDB and ElastiCache Memcached for S3

Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for S3

Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. Amazon ElastiCache for Memcached is a great choice for implementing an in-memory cache to decrease access latency, increase throughput, and ease the load off your relational or NoSQL database.

ElastiCache cannot be used as a cache to serve static content from S3, so both these options are incorrect.

References:

https://aws.amazon.com/dynamodb/dax/

https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/

https://aws.amazon.com/elasticache/redis/

Question 52:
Skipped

You are working for a small organization that does not have a database administrator and the organization needs to install a database on the cloud quickly to support an accounting application used by thousands of users. The application will act as a backend and will perform (CRUD) operations such as create, read, update and delete as well as inner joins.

Which database is best suited for this scenario?

Explanation

Correct option:

RDS

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database with support for transactions in the cloud. A relational database is a collection of data items with pre-defined relationships between them. RDS supports the most demanding database applications. You can choose between two SSD-backed storage options: one optimized for high-performance Online Transaction Processing (OLTP) applications, and the other for cost-effective general-purpose use.

Incorrect options:

DynamoDB - The application needs inner joins, which RDS supports but DynamoDB does not. DynamoDB is a better choice when you need to scale by storing complex hierarchical data within a single item.

Redshift - Amazon Redshift is a data warehouse built for analytics (OLAP) workloads; it provides an excellent scale-out option as your data and query complexity grows, but RDS is the better choice for a transactional CRUD application.

ElastiCache - You can use ElastiCache in combination with RDS. This would be a good option for slow performing database queries in RDS that need to be cached for your application users.

References:

https://aws.amazon.com/rds/

https://aws.amazon.com/blogs/database/automating-sql-caching-for-amazon-elasticache-and-amazon-rds/

Question 53:
Skipped

An IT company uses AWS CloudFormation templates to provision their AWS infrastructure for Amazon EC2, Amazon VPC, and Amazon S3 resources. Using cross-stack referencing, a developer creates a stack called NetworkStack which will export the subnetId that can be used when creating EC2 instances in another stack.

To use the exported value in another stack, which of the following functions must be used?

Explanation

Correct option:

!ImportValue

The intrinsic function Fn::ImportValue returns the value of an output exported by another stack. You typically use this function to create cross-stack references.

Incorrect options:

!Ref - Returns the value of the specified parameter or resource.

!GetAtt - Returns the value of an attribute from a resource in the template.

!Sub - Substitutes variables in an input string with values that you specify.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html

Question 54:
Skipped

You have created a test environment in Elastic Beanstalk and as part of that environment, you have created an RDS database.

How can you make sure the database can be explored after the environment is destroyed?

Explanation

Correct option:

Make a snapshot of the database before it gets deleted

Alternatively, you can decouple the RDS DB instance from the environment using an Elastic Beanstalk blue (environment A)/green (environment B) deployment: create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the existing RDS DB instance, then swap environments.

Note: An RDS DB instance attached to an Elastic Beanstalk environment is ideal for development and testing environments. However, it's not ideal for production environments because the lifecycle of the database instance is tied to the lifecycle of your application environment. If you terminate the environment, then you lose your data because the RDS DB instance is deleted by the environment. For more information, see Using Elastic Beanstalk with Amazon RDS.

This is the only way to recover the database data before it gets deleted by Elastic Beanstalk.
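
For illustration, the snapshot can be taken with boto3 before terminating the environment (both identifiers below are hypothetical):

import boto3

rds = boto3.client("rds")

# Snapshot the environment's RDS instance before terminating the Beanstalk environment
rds.create_db_snapshot(
    DBInstanceIdentifier="awseb-e-abc123-stack-awsebrdsdatabase",
    DBSnapshotIdentifier="test-env-final-snapshot",
)
# The snapshot outlives the environment and can later be restored with
# rds.restore_db_instance_from_db_snapshot(...)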

Please review this excellent document that addresses this use-case :

https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/

Incorrect options:

Make a selective delete in Elastic Beanstalk - This is not a feature in Elastic Beanstalk.

Change the Elastic Beanstalk environment variables - Environment variables won't help with the provisioned RDS database.

Convert the Elastic Beanstalk environment to a worker environment - You can't convert Elastic Beanstalk environments; you can only change their settings.

Reference:

https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/

Question 55:
Skipped

Which environment variable can be used by AWS X-Ray SDK to ensure that the daemon is correctly discovered on ECS?

Explanation

Correct option:

AWS_XRAY_DAEMON_ADDRESS

Set the host and port of the X-Ray daemon listener. By default, the SDK uses 127.0.0.1:2000 for both trace data (UDP) and sampling (TCP). Use this variable if you have configured the daemon to listen on a different port or if it is running on a different host.
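
A hedged Python sketch of how an application container could pick up the daemon address; the service name and fallback host below are assumptions:

import os
from aws_xray_sdk.core import xray_recorder

# On ECS, point the SDK at the sidecar daemon container. The address can come from
# the AWS_XRAY_DAEMON_ADDRESS environment variable set in the task definition,
# or be passed explicitly (the host name below is hypothetical).
daemon = os.environ.get("AWS_XRAY_DAEMON_ADDRESS", "xray-daemon:2000")
xray_recorder.configure(service="media-api", daemon_address=daemon)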

Incorrect options:

AWS_XRAY_TRACING_NAME - This sets a service name that the SDK uses for segments.

AWS_XRAY_CONTEXT_MISSING - This should be set to LOG_ERROR to avoid throwing exceptions when your instrumented code attempts to record data when no segment is open.

AWS_XRAY_DEBUG_MODE - This should be set to TRUE to configure the SDK to output logs to the console, instead of configuring a logger.

Reference:

https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-nodejs-configuration.html

Question 56:
Skipped

You are deploying Lambda functions that operate on your S3 buckets to read files and extract key metadata. The Lambda functions are managed using SAM.

Which Policy should you insert in your serverless model template to give buckets read access?

Explanation

Correct option:

S3ReadPolicy

The AWS Serverless Application Model (AWS SAM) is an open-source framework that you can use to build serverless applications on AWS.

A serverless application is a combination of Lambda functions, event sources, and other resources that work together to perform tasks. Note that a serverless application is more than just a Lambda function—it can include additional resources such as APIs, databases, and event source mappings.

AWS SAM allows you to choose from a list of policy templates to scope the permissions of your Lambda functions to the resources that are used by your application.

AWS SAM applications in the AWS Serverless Application Repository that use policy templates don't require any special customer acknowledgments to deploy the application from the AWS Serverless Application Repository.

S3ReadPolicy => Gives read-only permission to objects in an Amazon S3 bucket.

S3CrudPolicy => Gives create, read, update, and delete permission to objects in an Amazon S3 bucket.

SQSPollerPolicy => Permits to poll an Amazon SQS Queue.

LambdaInvokePolicy => Permits to invoke a Lambda function, alias, or version.

Incorrect options:

SQSPollerPolicy

S3CrudPolicy

LambdaInvokePolicy

These three options contradict the explanation provided earlier. So these are incorrect.

Reference:

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-templates.html

Question 57:
Skipped

A media analytics company has built a streaming application on Lambda using Serverless Application Model (SAM).

As a Developer Associate, which of the following would you identify as the correct order of execution to successfully deploy the application?

Explanation

Correct option:

Develop the SAM template locally => upload the template to S3 => deploy your application to the cloud

The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML.

You can develop and test your serverless application locally, and then you can deploy your application by using the sam deploy command. The sam deploy command zips your application artifacts, uploads them to Amazon Simple Storage Service (Amazon S3), and deploys your application to the AWS Cloud. AWS SAM uses AWS CloudFormation as the underlying deployment mechanism.

via - https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html

Incorrect options:

Develop the SAM template locally => upload the template to Lambda => deploy your application to the cloud

Develop the SAM template locally => upload the template to CodeCommit => deploy your application to CodeDeploy

Develop the SAM template locally => deploy the template to S3 => use your application in the cloud

These three options contradict the details provided in the explanation above, so these are incorrect.

Reference:

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html

Question 58:
Skipped

One of your Kinesis Streams is experiencing increased traffic due to a sale day. Therefore your Kinesis Administrator has split shards, taking the stream from 6 shards to 10 shards. Your consuming application is running a KCL-based application on EC2 instances.

What is the maximum number of EC2 instances that can be deployed to process the shards?

Explanation

Correct option:

10

Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs.

A Kinesis data stream is a set of shards. A shard is a uniquely identified sequence of data records in a stream. A stream is composed of one or more shards, each of which provides a fixed unit of capacity.

Kinesis Data Streams Overview: via - https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html

Each KCL consumer application instance uses "workers" to process data in Kinesis shards. At any given time, each shard is bound to exactly one worker via a lease. For the given use-case, an EC2 instance acts as the worker for the KCL application, so you can have at most one EC2 instance per shard. As we have 10 shards, the maximum number of EC2 instances is 10.

via - https://docs.aws.amazon.com/streams/latest/dev/shared-throughput-kcl-consumers.html

Incorrect options:

1

6

20

These three options contradict the explanation provided earlier. So these are incorrect.

Reference:

https://docs.aws.amazon.com/streams/latest/dev/developing-consumers-with-kcl.html

Question 59:
Skipped

Your client wants to deploy a service on EC2 instances, and as EC2 instances are added into an ASG, each EC2 instance should be running 3 different Docker Containers simultaneously.

What Elastic Beanstalk platform should they choose?

Explanation

Correct option:

Docker multi-container platform

Docker is a container platform that allows you to define your software stack and store it in an image that can be downloaded from a remote repository. Use the Multicontainer Docker platform if you need to run multiple containers on each instance. The Multicontainer Docker platform does not include a proxy server. Elastic Beanstalk uses Amazon Elastic Container Service (Amazon ECS) to coordinate container deployments to multi-container Docker environments.

Incorrect options:

Docker single-container platform - Docker is a container platform that allows you to define your software stack and store it in an image that can be downloaded from a remote repository. Use the Single Container Docker platform if you only need to run a single Docker container on each instance in your environment. The single container platform includes an Nginx proxy server.

Custom Platform - Elastic Beanstalk supports custom platforms. A custom platform provides more advanced customization than a custom image in several ways. A custom platform lets you develop an entirely new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility enables you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn't provide a managed platform. Compare that to custom images, where you modify an Amazon Machine Image (AMI) for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform's software stack. Besides, with custom platforms, you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

Third Party Platform - This is a made-up option.

Reference:

https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.mcdocker

Question 60:
Skipped

Your client has tasked you with finding a service that would enable you to get cross-account tracing and visualization.

Which service do you recommend?

Explanation

Correct option:

AWS X-Ray

AWS X-Ray is a service that collects data about requests that your application serves and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to your application, you can see detailed information not only about the request and response but also about calls that your application makes to downstream AWS resources, microservices, databases and HTTP web APIs.

How X-Ray Works: via - https://aws.amazon.com/xray/

Incorrect options

VPC Flow Logs - VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination.

Flow logs can help you with several tasks; for example, to troubleshoot why specific traffic is not reaching an instance, which in turn helps you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.

CloudWatch Events - Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. CloudWatch Events becomes aware of operational changes as they occur. CloudWatch Events responds to these operational changes and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information.

CloudTrail - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

Reference:

https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html

Question 61:
Skipped

A developer created an online shopping application that runs on EC2 instances behind load balancers. The same web application version is hosted on several EC2 instances and the instances run in an Auto Scaling group. The application uses STS to request credentials but after an hour your application stops working.

What is the most likely cause of this issue?

Explanation

Correct option:

Your application needs to renew the credentials after 1 hour when they expire

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). By default, AWS Security Token Service (STS) is available as a global service, and all AWS STS requests go to a single endpoint at https://sts.amazonaws.com.

Credentials that are created by using account credentials can range from 900 seconds (15 minutes) up to a maximum of 3,600 seconds (1 hour), with a default of 1 hour. Hence you need to renew the credentials post expiry.

via - https://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html
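
A minimal sketch of renewing the temporary credentials before they expire, assuming the application assumes a role via STS (the role ARN is hypothetical); in practice, using the SDK's built-in instance-profile credential provider handles this refresh automatically:

from datetime import datetime, timedelta, timezone
import boto3

sts = boto3.client("sts")
_cached = {"creds": None}

def get_credentials():
    """Renew the temporary credentials shortly before their 1-hour expiry."""
    creds = _cached["creds"]
    if creds is None or creds["Expiration"] - timedelta(minutes=5) < datetime.now(timezone.utc):
        resp = sts.assume_role(
            RoleArn="arn:aws:iam::123456789012:role/app-role",  # hypothetical role
            RoleSessionName="shopping-app",
        )
        _cached["creds"] = creds = resp["Credentials"]
    return creds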

Incorrect options:

Your IAM policy is wrong - If the IAM policy were wrong, the application would not work at all rather than consistently failing after one hour.

A lambda function revokes your access every hour - Access is revoked by changing IAM policies; a Lambda function cannot revoke STS credentials.

The IAM service is experiencing downtime once an hour - The IAM service is reliable as it's managed by AWS.

Reference:

https://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html

Question 62:
Skipped

When your company first created an AWS account, you began with a single sign-in principal called a root user account that had complete access to all AWS services and resources.

What should you do to adhere to best practices for using the root user account?

Explanation

Correct option:

It should be accessible by one admin only after enabling Multi-factor authentication

AWS Root Account Security Best Practices: via - https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#lock-away-credentials

If you continue to use the root user credentials, we recommend that you follow the security best practice to enable multi-factor authentication (MFA) for your account. Because your root user can perform sensitive operations in your account, adding a layer of authentication helps you to better secure your account. Multiple types of MFA are available.

Incorrect options:

It should be accessible by 3 to 6 members of the IT team - Only the owner of the AWS account should have access to the root account credentials. You should create an IT group with admin permissions via IAM and then assign a few users to this group.

It should be accessible using the access key id and secret access key - AWS recommends that you should not use the access key id and secret access key for the AWS account root user.

It should be accessible by no one, throw away the passwords after creating the account - You will still need to store the password somewhere for your root account.

Reference:

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#lock-away-credentials

Question 63:
Skipped

A popular mobile app retrieves data from an AWS DynamoDB table that was provisioned with read-capacity units (RCUs) that are evenly shared across four partitions. One of those partitions is receiving more traffic than the other partitions, causing hot partition issues.

What technology will allow you to reduce the read traffic on your AWS DynamoDB table with minimal effort?

Explanation

Correct option:

DynamoDB DAX

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement, from milliseconds to microseconds, even at millions of requests per second.
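
Because DAX is API-compatible with DynamoDB, the "minimal effort" part usually amounts to swapping the DynamoDB client for the DAX client. A rough sketch, assuming the Python DAX SDK (amazon-dax-client package) and a provisioned DAX cluster; the endpoint URL, table name, and key are hypothetical.

# Rough sketch of switching reads to DAX with the Python DAX SDK.
# The cluster endpoint, table name, and key below are hypothetical.
from amazondax import AmazonDaxClient

# Before: dynamodb = boto3.resource("dynamodb")  # reads consume table RCUs directly
# After: the DAX client is a drop-in replacement for the DynamoDB resource,
# so repeated reads of hot items are served from the in-memory cache
dynamodb = AmazonDaxClient.resource(
    endpoint_url="daxs://my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"
)

table = dynamodb.Table("ProductCatalog")
response = table.get_item(Key={"ProductId": "42"})
item = response.get("Item")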

Incorrect options:

DynamoDB Streams - A stream record contains information about a data modification to a single item in a DynamoDB table. This is not the correct option for the given use-case.

ElastiCache - ElastiCache can cache results from almost any data store, but you would need to modify your application code to check the cache before querying DynamoDB. Since the given use case mandates minimal effort, this option is not correct.

More partitions - This option has been added as a distractor as DynamoDB handles that for you automatically.

Reference:

https://aws.amazon.com/dynamodb/dax/

Question 64:
Skipped

An e-commerce company has multiple EC2 instances operating in a private subnet which is part of a custom VPC. These instances are running an image processing application that needs to access images stored on S3. Once each image is processed, the status of the corresponding record needs to be marked as completed in a DynamoDB table.

How would you go about providing private access to these AWS resources which are not part of this custom VPC?

Explanation

Correct option:

Create a separate gateway endpoint for S3 and DynamoDB each. Add two new target entries for these two gateway endpoints in the route table of the custom VPC

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

There are two types of VPC endpoints: interface endpoints and gateway endpoints. An interface endpoint is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service.

A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported:

Amazon S3

DynamoDB

Note that S3 now supports both gateway endpoints and interface endpoints.
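
As a minimal sketch of the correct option (boto3), the two gateway endpoints can be created and attached to the custom VPC's route table in one pass. The VPC ID, route table ID, and region below are hypothetical.

# Create gateway endpoints for S3 and DynamoDB and register them as targets
# in the custom VPC's route table. IDs and region are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for service in ("s3", "dynamodb"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234def567890",            # hypothetical custom VPC
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        RouteTableIds=["rtb-0123456789abcdef0"],  # route table used by the private subnet
    )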

Incorrect options:

Create a gateway endpoint for S3 and add it as a target in the route table of the custom VPC. Create an interface endpoint for DynamoDB and then connect to the DynamoDB service using the private IP address

Create a separate interface endpoint for S3 and DynamoDB each. Then connect to these services using the private IP address

DynamoDB does not support interface endpoints, so these two options are incorrect.

Create a gateway endpoint for DynamoDB and add it as a target in the route table of the custom VPC. Create an API endpoint for S3 and then connect to the S3 service using the private IP address - There is no such thing as an API endpoint for S3. API endpoints are used with AWS API Gateway. This option has been added as a distractor.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html

Question 65:
Skipped

Your organization has set up a full CI/CD pipeline leveraging CodePipeline, and the deployment is done on Elastic Beanstalk. This pipeline has worked for over a year, but you are now approaching the Elastic Beanstalk limit on how many application versions can be stored in the service.

How can you remove older versions that are not used by Elastic Beanstalk so that new versions can be created for your applications?

Explanation

Correct option:

Use a Lifecycle Policy

Each time you upload a new version of your application with the Elastic Beanstalk console or the EB CLI, Elastic Beanstalk creates an application version. If you don't delete versions that you no longer use, you will eventually reach the application version limit and be unable to create new versions of that application.

You can avoid hitting the limit by applying an application version lifecycle policy to your applications. A lifecycle policy tells Elastic Beanstalk to delete old application versions or to delete application versions when the total number of versions for an application exceeds a specified number.

Elastic Beanstalk applies an application's lifecycle policy each time you create a new application version and deletes up to 100 versions each time the lifecycle policy is applied. Elastic Beanstalk deletes old versions after creating the new version and does not count the new version towards the maximum number of versions defined in the policy.

via - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-lifecycle.html
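
The lifecycle policy can be set in the console, with the EB CLI, or through the API. As a rough sketch using boto3's update_application_resource_lifecycle call, with a hypothetical application name, service role, and retention limit:

# Rough sketch: apply a max-count application version lifecycle policy with boto3.
# The application name, service role ARN, and limit are hypothetical.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_application_resource_lifecycle(
    ApplicationName="shop-frontend",
    ResourceLifecycleConfig={
        "ServiceRole": "arn:aws:iam::123456789012:role/aws-elasticbeanstalk-service-role",
        "VersionLifecycleConfig": {
            "MaxCountRule": {
                "Enabled": True,
                "MaxCount": 200,             # keep at most 200 application versions
                "DeleteSourceFromS3": True,  # also remove old source bundles from S3
            },
            "MaxAgeRule": {"Enabled": False},
        },
    },
)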

Incorrect options:

Set up an .ebextensions file - You can add AWS Elastic Beanstalk configuration files (.ebextensions) to your web application's source code to configure your environment and customize the AWS resources that it contains. This does not help with managing application versions.

Define a Lambda function - This could work, but it would require a lot of manual scripting to achieve the same effect as the built-in Elastic Beanstalk lifecycle policy feature.

Use Worker Environments - This won't help. If your application performs operations or workflows that take a long time to complete, you can offload those tasks to a dedicated worker environment. Decoupling your web application front end from a process that performs blocking operations is a common way to ensure that your application stays responsive under load.

Reference:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-lifecycle.html