Attempt 8
Question 1: Correct

A customer requires a serverless application with an API which mobile clients will use. The API will have both an AWS Lambda function and an Amazon DynamoDB table as data sources. Responses that are sent to the mobile clients must contain data that is aggregated from both of these data sources.

The developer must minimize the number of API endpoints and must minimize the number of API calls that are required to retrieve the necessary data.

Which solution should the developer use to meet these requirements?

Explanation

GraphQL APIs built with AWS AppSync give front-end developers the ability to query multiple databases, microservices, and APIs from a single GraphQL endpoint. This would not be possible with a REST API running on API Gateway which would have a single target for each API endpoint.

In this architecture, AWS AppSync sits in front of the mobile clients and connects to both the AWS Lambda function and the Amazon DynamoDB table as data sources, so a single GraphQL query can return data aggregated from both.

CORRECT: "GraphQL API on AWS AppSync" is the correct answer (as explained above.)

INCORRECT: "REST API on Amazon API Gateway" is incorrect.

A REST API is not suitable as the question asks to reduce the number of API endpoints. With a REST API, each API method has a single integration target (such as a Lambda function), so more endpoints would be required to aggregate data from both data sources.

INCORRECT: "GraphQL API on an Amazon EC2 instance" is incorrect.

This would not be a serverless solution and the question states that the solution must be serverless.

INCORRECT: "REST API on AWS Elastic Beanstalk" is incorrect.

As explained above.

References:

https://aws.amazon.com/blogs/mobile/appsync-microservices/

Question 2: Correct

To reduce the cost of API actions performed on an Amazon SQS queue, a Developer has decided to implement long polling. Which of the following modifications should the Developer make to the API actions? 

Explanation

The process of consuming messages from a queue depends on whether you use short or long polling. By default, Amazon SQS uses short polling, querying only a subset of its servers (based on a weighted random distribution) to determine whether any messages are available for a response. You can use long polling to reduce your costs while allowing your consumers to receive messages as soon as they arrive in the queue.

When you consume messages from a queue using short polling, Amazon SQS samples a subset of its servers (based on a weighted random distribution) and returns messages from only those servers. Thus, a particular ReceiveMessage request might not return all of your messages. However, if you have fewer than 1,000 messages in your queue, a subsequent request will return your messages. If you keep consuming from your queues, Amazon SQS samples all of its servers, and you receive all of your messages.

The following diagram shows the short-polling behavior of messages returned from a standard queue after one of your system components makes a receive request. Amazon SQS samples several of its servers (in gray) and returns messages A, C, D, and B from these servers. Message E isn't returned for this request but is returned for a subsequent request.

When the wait time for the ReceiveMessage API action is greater than 0, long polling is in effect. The maximum long polling wait time is 20 seconds. Long polling helps reduce the cost of using Amazon SQS by reducing the number of empty responses (when there are no messages available for a ReceiveMessage request) and eliminating false empty responses (when messages are available but aren't included in a response).

Long polling occurs when the WaitTimeSeconds parameter of a ReceiveMessage request is set to a value greater than 0 in one of two ways:

The ReceiveMessage call sets WaitTimeSeconds to a value greater than 0.

The ReceiveMessage call doesn’t set WaitTimeSeconds, but the queue attribute ReceiveMessageWaitTimeSeconds is set to a value greater than 0.

Therefore, the Developer should set the ReceiveMessage API with a WaitTimeSeconds of 20.
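As an illustration, the following sketch (using the AWS SDK for Python, boto3, with a hypothetical queue URL) shows both ways of enabling long polling: per request via WaitTimeSeconds, and at the queue level via ReceiveMessageWaitTimeSeconds.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical queue URL

# Option 1: enable long polling per request by setting WaitTimeSeconds (max 20).
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # the call waits up to 20 seconds for messages to arrive
)

# Option 2: enable long polling for every consumer by setting the queue attribute.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)
```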

CORRECT: "Set the ReceiveMessage API with a WaitTimeSeconds of 20" is the correct answer.

INCORRECT: "Set the SetQueueAttributes API with a DelaySeconds of 20" is incorrect as this would be used to configure a delay queue where the delivery of messages in the queue is delayed.

INCORRECT: "Set the ReceiveMessage API with a VisibilityTimeout of 30" is incorrect as this would configure the visibility timeout which is the length of time a message that has been received is invisible.

INCORRECT: "Set the SetQueueAttributes with a MessageRetentionPeriod of 60" is incorrect as this would configure how long messages are retained in the queue.

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SetQueueAttributes.html

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html


Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 3: Incorrect

A developer has deployed an application on AWS Lambda. The application uses Python and must generate and then upload a file to an Amazon S3 bucket. The developer must implement the upload functionality with the least possible change to the application code.

Which solution BEST meets these requirements?

Explanation

The best practice for Lambda development is to bundle all dependencies used by your Lambda function, including the AWS SDK. However, since this question specifically requests that the least possible changes are made to the application code, the developer can instead use the SDK for Python that is installed in the Lambda environment to upload the file to Amazon S3.
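A minimal sketch of this approach, assuming a hypothetical bucket name and file path, using the boto3 SDK that is already present in the Python Lambda runtime:

```python
import boto3

s3 = boto3.client("s3")  # boto3 is pre-installed in the Python Lambda runtime
BUCKET = "my-reports-bucket"  # hypothetical bucket name

def lambda_handler(event, context):
    # Generate the file in /tmp (the only writable path in the Lambda environment)
    local_path = "/tmp/report.csv"
    with open(local_path, "w") as f:
        f.write("id,value\n1,42\n")

    # Upload the generated file to Amazon S3
    s3.upload_file(local_path, BUCKET, "reports/report.csv")
    return {"status": "uploaded"}
```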

CORRECT: "Use the AWS SDK for Python that is installed in the Lambda execution environment" is the correct answer (as explained above.)

INCORRECT: "Include the AWS SDK for Python in the Lambda function code" is incorrect.

This is the best practice for deployment. However, in this case the developer must minimize changes to code and including the SDK as a dependency in the code would require potential updates to existing Python code.

INCORRECT: "Make an HTTP request directly to the S3 API to upload the file" is incorrect.

AWS supports uploads to S3 using the console, AWS SDKs, REST API, and the AWS CLI.

INCORRECT: "Use the AWS CLI that is installed in the Lambda execution environment" is incorrect.

The AWS CLI is not installed in the Lambda execution environment.

References:

https://aws.amazon.com/blogs/compute/upcoming-changes-to-the-python-sdk-in-aws-lambda/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 4: Correct

A Developer is creating a serverless application that uses an Amazon DynamoDB table. The application must make idempotent, all-or-nothing operations for multiple groups of write actions.

Which solution will meet these requirements?

Explanation

TransactWriteItems is a synchronous and idempotent write operation that groups up to 25 write actions in a single all-or-nothing operation. These actions can target up to 25 distinct items in one or more DynamoDB tables within the same AWS account and in the same Region. The aggregate size of the items in the transaction cannot exceed 4 MB. The actions are completed atomically so that either all of them succeed or none of them succeeds.

A TransactWriteItems operation differs from a BatchWriteItem operation in that all the actions it contains must be completed successfully, or no changes are made at all. With a BatchWriteItem operation, it is possible that only some of the actions in the batch succeed while the others do not.
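A minimal boto3 sketch of a TransactWriteItems call, assuming hypothetical table and attribute names; the ClientRequestToken is what makes retries of the request idempotent:

```python
import uuid
import boto3

dynamodb = boto3.client("dynamodb")

# Group a Put and an Update into one all-or-nothing operation.
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Put": {
                "TableName": "Orders",  # hypothetical table
                "Item": {"OrderId": {"S": "order-1001"}, "Status": {"S": "PLACED"}},
            }
        },
        {
            "Update": {
                "TableName": "Inventory",  # hypothetical table
                "Key": {"ProductId": {"S": "prod-42"}},
                "UpdateExpression": "SET Stock = Stock - :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
    ],
    ClientRequestToken=str(uuid.uuid4()),  # reusing the same token makes retries idempotent
)
```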

CORRECT: "Update the items in the table using the TransactWriteltems operation to group the changes" is the correct answer.

INCORRECT: "Update the items in the table using the BatchWriteltem operation and configure idempotency at the table level" is incorrect. As explained above, the TransactWriteItems operation must be used.

INCORRECT: "Enable DynamoDB streams and capture new images. Update the items in the table using the BatchWriteltem" is incorrect. DynamoDB streams will not assist with making idempotent write operations.

INCORRECT: "Create an Amazon SQS FIFO queue and use the SendMessageBatch operation to group the changes" is incorrect. Amazon SQS should not be used as it does not assist and this solution is supposed to use a DynamoDB table

References:

https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactWriteItems.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 5: Correct

A company is deploying an on-premises application server that will connect to several AWS services. What is the BEST way to provide the application server with permissions to authenticate to AWS services?

Explanation

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).

Access keys are stored in one of the following locations on a client that needs to make authenticated API calls to AWS services:

   · Linux: ~/.aws/credentials

   · Windows: %UserProfile%\.aws\credentials

In this scenario the application server is running on-premises. Therefore, you cannot assign an IAM role (which would be the preferable solution for Amazon EC2 instances). In this case it is therefore better to use access keys.
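For example (a sketch using boto3 and a hypothetical profile name), the SDK on the application server picks up the access keys from the credentials file automatically, so no keys need to appear in the application code:

```python
import boto3

# boto3 automatically reads access keys from ~/.aws/credentials (Linux)
# or %UserProfile%\.aws\credentials (Windows) via the default credential chain.
session = boto3.Session(profile_name="app-server")  # hypothetical named profile

s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```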

CORRECT: "Create an IAM user and generate access keys. Create a credentials file on the application server" is the correct answer.

INCORRECT: "Create an IAM role with the necessary permissions and assign it to the application server" is incorrect. This is an on-premises server so it is not possible to use an IAM role. If it was an EC2 instance, this would be the preferred (best practice) option.

INCORRECT: "Create an IAM group with the necessary permissions and add the on-premise application server to the group" is incorrect. You cannot add a server to an IAM group. You put IAM users into groups and assign permissions to them using a policy.

INCORRECT: "Create an IAM user and generate a key pair. Use the key pair in API calls to AWS services" is incorrect as key pairs are used for SSH access to Amazon EC2 instances. You cannot use them in API calls to AWS services.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-iam/

Question 6: Correct

A company has created a set of APIs using Amazon API Gateway and exposed them to partner companies. The APIs have caching enabled for all stages. The partners require a method of invalidating the cache that they can build into their applications.

What can the partners use to invalidate the API cache?

Explanation

You can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API.

When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint. The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.

A client of your API can invalidate an existing cache entry and reload it from the integration endpoint for individual requests. The client must send a request that contains the Cache-Control: max-age=0 header.

The client receives the response directly from the integration endpoint instead of the cache, provided that the client is authorized to do so. This replaces the existing cache entry with the new response, which is fetched from the integration endpoint.

To grant a client permission to invalidate the cache, attach a policy to the user's IAM role (or user) that allows the execute-api:InvalidateCache action on the relevant API resources.

This policy allows the caller to invalidate the cache for requests on the specified resource (or resources).

Therefore, as described above the solution is to get the partners to pass the HTTP header Cache-Control: max-age=0.
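A sketch of what a partner client call could look like (using Python's requests library and a hypothetical invoke URL); note the request must still be authorized to invalidate the cache as described above:

```python
import requests

# Hypothetical API Gateway invoke URL
url = "https://a1b2c3d4.execute-api.us-east-1.amazonaws.com/prod/orders"

# The Cache-Control: max-age=0 header tells API Gateway to bypass the cache entry,
# fetch a fresh response from the integration endpoint, and refresh the cache.
response = requests.get(url, headers={"Cache-Control": "max-age=0"})
print(response.status_code, response.json())
```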

CORRECT: "They can pass the HTTP header Cache-Control: max-age=0" is the correct answer.

INCORRECT: "They can use the query string parameter INVALIDATE_CACHE" is incorrect. This is not a valid method of invalidating the cache with API Gateway.

INCORRECT: "They must wait for the TTL to expire" is incorrect as this is not true, you do not need to wait as you can pass the HTTP header Cache-Control: max-age=0 whenever you need to in order to invalidate the cache.

INCORRECT: "They can invoke an AWS API endpoint which invalidates the cache" is incorrect. This is not a valid method of invalidating the cache with API Gateway.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 7: Correct

A Developer has recently created an application that uses an AWS Lambda function, an Amazon DynamoDB table, and also sends notifications using Amazon SNS. The application is not working as expected and the Developer needs to analyze what is happening across all components of the application.

What is the BEST way to analyze the issue?

Explanation

AWS X-Ray makes it easy for developers to analyze the behavior of their production, distributed applications with end-to-end tracing capabilities. You can use X-Ray to identify performance bottlenecks, edge case errors, and other hard to detect issues.

AWS X-Ray provides an end-to-end, cross-service view of requests made to your application. It gives you an application-centric view of requests flowing through your application by aggregating the data gathered from individual services in your application into a single unit called a trace. You can use this trace to follow the path of an individual request as it passes through each service or tier in your application so that you can pinpoint where issues are occurring.

AWS X-Ray will assist the developer with visually analyzing the end-to-end view of connectivity between the application components and how they are performing using a Service Map. X-Ray also provides aggregated data about the application.
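Enabling active tracing on the function can be done from the console or programmatically; a boto3 sketch with a hypothetical function name:

```python
import boto3

lambda_client = boto3.client("lambda")

# Turn on AWS X-Ray active tracing so each invocation of the function is traced.
lambda_client.update_function_configuration(
    FunctionName="order-processor",  # hypothetical function name
    TracingConfig={"Mode": "Active"},
)
```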

CORRECT: "Enable X-Ray tracing for the Lambda function" is the correct answer.

INCORRECT: "Create an Amazon CloudWatch Events rule" is incorrect as this feature of CloudWatch is used to trigger actions based on changes in the state of AWS services.

INCORRECT: "Assess the application with Amazon Inspector" is incorrect. Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.

INCORRECT: "Monitor the application with AWS Trusted Advisor" is incorrect. AWS Trusted Advisor is an online tool that provides you real time guidance to help you provision your resources following AWS best practices.

References:

https://aws.amazon.com/xray/features/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 8: Correct

A Developer will be launching several Docker containers on a new Amazon ECS cluster using the EC2 Launch Type. The containers will all run a web service on port 80.

What is the EASIEST way the Developer can configure the task definition to ensure the web services run correctly and there are no port conflicts on the host instances?

Explanation

Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition. The container port is the port number on the container that is bound to the user-specified or automatically assigned host port. The host port is the port number on the container instance to reserve for your container.

As we cannot have multiple services bound to the same host port, we need to ensure that each container port mapping uses a different host port. The easiest way to do this is to set the host port number to 0 and ECS will automatically assign an available port. We also need to assign port 80 to the container port so that the web service is able to run.
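A boto3 sketch of such a task definition (hypothetical family and image names); a host port of 0 tells ECS to pick an available ephemeral port on the container instance:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web-service",                    # hypothetical task family
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "web",
            "image": "my-web-service:latest",  # hypothetical image
            "memory": 256,
            "portMappings": [
                # Container listens on 80; host port 0 = dynamically assigned by ECS
                {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}
            ],
        }
    ],
)
```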

CORRECT: "Specify port 80 for the container port and port 0 for the host port" is the correct answer.

INCORRECT: "Specify port 80 for the container port and a unique port number for the host port" is incorrect as this is more difficult to manage as you have to manually assign the port number.

INCORRECT: "Specify a unique port number for the container port and port 80 for the host port" is incorrect as the web service on the container needs to run on pot 80 and you can only bind one container to port 80 on the host so this would not allow more than one container to work.

INCORRECT: "Leave both the container port and host port configuration blank" is incorrect as this would mean that ECS would dynamically assign both the container and host port. As the web service must run on port 80 this would not work correctly.

References:

https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/

Question 9: Incorrect

A company has implemented AWS CodePipeline to automate its release pipelines. The Development team is writing an AWS Lambda function that will send notifications for state changes of each of the actions in the stages.

Which steps must be taken to associate the Lambda function with the event source?

Explanation

Amazon CloudWatch Events help you to respond to state changes in your AWS resources. When your resources change state, they automatically send events into an event stream. You can create rules that match selected events in the stream and route them to your AWS Lambda function to take action.

AWS CodePipeline can be configured as an event source in CloudWatch Events and can then send notifications using a service such as Amazon SNS.

Therefore, the best answer is to create an Amazon CloudWatch Events rule that uses CodePipeline as an event source.
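A boto3 sketch of the rule and target configuration (hypothetical names and ARNs); the Lambda function also needs a resource-based permission so CloudWatch Events can invoke it:

```python
import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:pipeline-notifier"  # hypothetical

# Rule that matches action-level state changes in CodePipeline
events.put_rule(
    Name="codepipeline-action-state-change",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Action Execution State Change"],
    }),
)

# Send matching events to the Lambda function
events.put_targets(
    Rule="codepipeline-action-state-change",
    Targets=[{"Id": "notifier", "Arn": function_arn}],
)

# Allow CloudWatch Events to invoke the function
lambda_client.add_permission(
    FunctionName="pipeline-notifier",
    StatementId="allow-events-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
)
```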

CORRECT: "Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source" is the correct answer.

INCORRECT: "Create a trigger that invokes the Lambda function from the Lambda console by selecting CodePipeline as the event source" is incorrect as CodePipeline cannot be configured as a trigger for Lambda.

INCORRECT: "Create an event trigger and specify the Lambda function from the CodePipeline console" is incorrect as CodePipeline cannot be configured as a trigger for Lambda.

INCORRECT: "Create an Amazon CloudWatch alarm that monitors status changes in CodePipeline and triggers the Lambda function" is incorrect as CloudWatch Events is used for monitoring state changes.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 10: Correct

A Developer is designing a fault-tolerant application that will use Amazon EC2 instances and an Elastic Load Balancer. The Developer needs to ensure that if an EC2 instance fails, session data is not lost. How can this be achieved?

Explanation

For this scenario the key requirement is to ensure the data is not lost. Therefore, the data must be stored in a durable data store outside of the EC2 instances. Amazon DynamoDB is a suitable solution for storing session data. DynamoDB has a session handling capability for multiple languages as in the below example for PHP:

“The DynamoDB Session Handler is a custom session handler for PHP that allows developers to use Amazon DynamoDB as a session store. Using DynamoDB for session storage alleviates issues that occur with session handling in a distributed web application by moving sessions off of the local file system and into a shared location. DynamoDB is fast, scalable, easy to setup, and handles replication of your data automatically.”

Therefore, the best answer is to use DynamoDB to store the session data.

CORRECT: "Use Amazon DynamoDB to perform scalable session handling" is the correct answer.

INCORRECT: "Enable Sticky Sessions on the Elastic Load Balancer" is incorrect. Sticky sessions attempts to direct a user that has reconnected to the application to the same EC2 instance that they connected to previously. However, this does not ensure that the session data is going to be available.

INCORRECT: "Use an EC2 Auto Scaling group to automatically launch new instances" is incorrect as this does not provide a solution for storing the session data.

INCORRECT: "Use Amazon SQS to save session data" is incorrect as Amazon SQS is not suitable for storing session data.

References:

https://docs.aws.amazon.com/aws-sdk-php/v2/guide/feature-dynamodb-session-handler.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 11: Correct

An application uses an Amazon RDS database. The company requires that the performance of database reads is improved, and they want to add a caching layer in front of the database. The cached data must be encrypted, and the solution must be highly available.

Which solution will meet these requirements?

Explanation

Amazon ElastiCache is an in-memory database cache that can be used in front of Amazon RDS. The key to answering this question is to know the differences between ElastiCache Memcached and ElastiCache Redis. To support both encryption and high availability we must use ElastiCache Redis with cluster mode enabled.

The key difference is that ElastiCache for Redis in cluster mode supports both encryption and high availability (through replication and automatic failover), whereas the Memcached engine supports neither.

CORRECT: "Amazon ElastiCache for Redis in cluster mode" is the correct answer (as explained above.)

INCORRECT: "Amazon ElastiCache for Memcached" is incorrect.

The Memcached engine does not support encryption or high availability.

INCORRECT: "Amazon CloudFront with multiple origins" is incorrect.

You cannot configure an Amazon RDS database as an origin for Amazon CloudFront, and with only one database there would be nothing to use as a second origin anyway.

INCORRECT: "Amazon DynamoDB Accelerator (DAX)" is incorrect.

DynamoDB DAX can be used to increase the performance of DynamoDB tables and offload read requests. It cannot be used in front of an Amazon RDS database.

References:

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.Redis-RedisCluster.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

Question 12: Correct

A company has a large Amazon DynamoDB table which they scan periodically so they can analyze several attributes. The scans are consuming a lot of provisioned throughput. What technique can a Developer use to minimize the impact of the scan on the table's provisioned throughput?

Explanation

In general, Scan operations are less efficient than other operations in DynamoDB. A Scan operation always scans the entire table or secondary index. It then filters out values to provide the result you want, essentially adding the extra step of removing data from the result set.

If possible, you should avoid using a Scan operation on a large table or index with a filter that removes many results. Also, as a table or index grows, the Scan operation slows. The Scan operation examines every item for the requested values and can use up the provisioned throughput for a large table or index in a single operation.

A sudden spike of capacity unit usage by Query and Scan operations can consume the table's provisioned throughput and impact your other requests against the same table.

Instead of using a large Scan operation, you can use the following techniques to minimize the impact of a scan on a table's provisioned throughput.

Reduce page size

Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a "pause" between each request.
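A boto3 sketch of a paginated scan with a small page size (hypothetical table name); the Limit parameter caps the items read per request and the short sleep creates a pause between pages:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

kwargs = {"TableName": "LargeTable", "Limit": 100}  # hypothetical table; small page size
while True:
    page = dynamodb.scan(**kwargs)
    for item in page["Items"]:
        print(item)  # placeholder for the attribute analysis logic

    last_key = page.get("LastEvaluatedKey")
    if not last_key:
        break
    kwargs["ExclusiveStartKey"] = last_key
    time.sleep(0.5)  # brief pause between pages to smooth out consumed capacity
```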

Isolate scan operations

DynamoDB is designed for easy scalability. As a result, an application can create tables for distinct purposes, possibly even duplicating content across several tables. You want to perform scans on a table that is not taking "mission-critical" traffic. Some applications handle this load by rotating traffic hourly between two tables—one for critical traffic, and one for bookkeeping. Other applications can do this by performing every write on two tables: a "mission-critical" table, and a "shadow" table.

Therefore, the best option to reduce the impact of the scan on the table's provisioned throughput is to set a smaller page size for the scan.

CORRECT: "Set a smaller page size for the scan" is the correct answer.

INCORRECT: "Use parallel scans" is incorrect as this will return results faster but place more burden on the table’s provisioned throughput.

INCORRECT: "Define a range key on the table" is incorrect. A range key is a composite key that includes the hash key and another attribute. This is of limited use in this scenario as the table is being scanned to analyze multiple attributes.

INCORRECT: "Prewarm the table by updating all items" is incorrect as updating all items would incur significant costs in terms of provisioned throughput and would not be advantageous.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-query-scan.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 13: Correct

A serverless application uses an Amazon API Gateway and AWS Lambda. The application processes data submitted in a form by users of the application and certain data must be stored and available to subsequent function calls.

What is the BEST solution for storing this data?

Explanation

AWS Lambda is a stateless compute service and so you cannot store session data in AWS Lambda itself. You can store a limited amount of information (up to 512 MB) in the /tmp directory. This information is preserved if the function is reused (i.e. the execution context is reused). However, it is not guaranteed that the execution context will be reused so the data could be destroyed.

The /tmp directory should only be used for data that can be regenerated or for operations that require a local filesystem, not as a permanent storage solution. The execution context more generally is useful for reusing expensive resources, such as database connections, which can be established once and reused across invocations.

Amazon DynamoDB is a good solution for this scenario as it is a low-latency NoSQL database that is often used for storing session state data. Amazon S3 would also be a good fit for this scenario but is not offered as an option.

With both Amazon DynamoDB and Amazon S3 you can store data long-term and it is available for multiple invocations of your function as well as being available from multiple invocations simultaneously.
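A minimal sketch of a handler persisting submitted form data to DynamoDB so later invocations can read it back (hypothetical table, key, and event field names):

```python
import boto3

table = boto3.resource("dynamodb").Table("FormSubmissions")  # hypothetical table

def lambda_handler(event, context):
    # Persist the submitted data outside the stateless Lambda environment
    table.put_item(Item={"SubmissionId": event["submissionId"], "FormData": event["form"]})

    # A subsequent invocation can retrieve the same item
    saved = table.get_item(Key={"SubmissionId": event["submissionId"]})
    return saved.get("Item")
```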

CORRECT: "Store the data in an Amazon DynamoDB table" is the correct answer.

INCORRECT: "Store the data in an Amazon Kinesis Data Stream" is incorrect as this service is used for streaming data. It is not used for session-store use cases.

INCORRECT: "Store the data in the /tmp directory" is incorrect as any data stored in the /tmp may not be available for subsequent calls to your function. The /tmp directory content remains when the execution context is frozen, providing transient cache that can be used for multiple invocations. However, it is not guaranteed that the execution context will be reused so the data could be lost.

INCORRECT: "Store the data in an Amazon SQS queue" is incorrect as a message queue is not used for long-term storage of data.

References:

https://aws.amazon.com/dynamodb/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 14: Correct

An organization developed an application that uses a set of APIs that are being served through Amazon API Gateway. The API calls must be authenticated based on OpenID identity providers such as Amazon, Google, or Facebook. The APIs should allow access based on a custom authorization model.
Which is the simplest and MOST secure design to use to build an authentication and authorization model for the APIs?

Explanation

With Amazon Cognito User Pools your app users can sign in either directly through a user pool or federate through a third-party identity provider (IdP). The user pool manages the overhead of handling the tokens that are returned from social sign-in through Facebook, Google, Amazon, and Apple, and from OpenID Connect (OIDC) and SAML IdPs.

After successful authentication, Amazon Cognito returns user pool tokens to your app. You can use the tokens to grant your users access to your own server-side resources, or to the Amazon API Gateway. Or, you can exchange them for AWS credentials to access other AWS services.

The ID token is a JSON Web Token (JWT) that contains claims about the identity of the authenticated user such as name, email, and phone_number. You can use this identity information inside your application. The ID token can also be used to authenticate users against your resource servers or server applications.

CORRECT: "Use Amazon Cognito user pools and a custom authorizer to authenticate and authorize users based on JSON Web Tokens" is the correct answer.

INCORRECT: "Use Amazon ElastiCache to store user credentials and pass them to the APIs for authentication and authorization" is incorrect. This option does not provide a solution for authenticating based on Open ID providers and is not secure as there is no mechanism mentioned for ensuring the secrecy of the credentials.

INCORRECT: "Use Amazon DynamoDB to store user credentials and have the application retrieve temporary credentials from AWS STS. Make API calls by passing user credentials to the APIs for authentication and authorization" is incorrect. This option also does not solve the requirement of integrating with Open ID providers and also suffers from the same security concerns as the option above.

INCORRECT: "Build an OpenID token broker with Amazon and Facebook. Users will authenticate with these identify providers and pass the JSON Web Token to the API to authenticate each API call" is incorrect. This may be a workable and secure solution however it is definitely not the simplest as it would require significant custom development.

References:

https://docs.aws.amazon.com/cognito/latest/developerguide/authentication.html

https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-with-identity-providers.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cognito/

Question 15: Correct

An application component writes thousands of item-level changes to a DynamoDB table per day. The developer requires that a record is maintained of the items before they were modified. What MUST the developer do to retain this information? (Select TWO.)

Explanation

DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time.

You can also use the CreateTable or UpdateTable API operations to enable or modify a stream. The StreamSpecification parameter determines how the stream is configured:

StreamEnabled — Specifies whether a stream is enabled (true) or disabled (false) for the table.

StreamViewType — Specifies the information that will be written to the stream whenever data in the table is modified:

KEYS_ONLY — Only the key attributes of the modified item.

NEW_IMAGE — The entire item, as it appears after it was modified.

OLD_IMAGE — The entire item, as it appeared before it was modified.

NEW_AND_OLD_IMAGES — Both the new and the old images of the item.

In this scenario, we only need to keep a copy of the items before they were modified. Therefore, the solution is to enable DynamoDB Streams and set the StreamViewType to OLD_IMAGE.
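A boto3 sketch enabling the stream on an existing, hypothetical table with the OLD_IMAGE view type:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable a stream that records each item as it appeared BEFORE modification
dynamodb.update_table(
    TableName="Items",  # hypothetical table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "OLD_IMAGE",
    },
)
```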

CORRECT: "Enable DynamoDB Streams for the table" is the correct answer.

CORRECT: "Set the StreamViewType to OLD_IMAGE" is the correct answer.

INCORRECT: "Create a CloudWatch alarm that sends a notification when an item is modified" is incorrect as DynamoDB streams is the best way to capture a time-ordered sequence of item-level modifications in a DynamoDB table.

INCORRECT: "Set the StreamViewType to NEW_AND_OLD_IMAGES" is incorrect as we only need to keep a record of the items before they were modified. This setting would place a record in the stream that includes the item before and after modification.

INCORRECT: "Use an AWS Lambda function to extract the item records from the notification and write to an S3 bucket" is incorrect. There is no requirement to write the updates to S3 and if you did want to do this with Lambda you would need to extract the information from the stream, not a notification.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 16: Correct

A Developer is creating a web application that will be used by employees working from home. The company uses a SAML directory on-premises for storing user information. The Developer must integrate with the SAML directory and authorize each employee to access only their own data when using the application.

Which approach should the Developer take?

Explanation

Amazon Cognito leverages IAM roles to generate temporary credentials for your application's users. Access to permissions is controlled by a role's trust relationships.

In this example the Developer must limit access to specific identities in the SAML directory. The Developer can create a trust policy with an IAM condition key that limits access to a specific set of app users by checking the value of cognito-identity.amazonaws.com:sub.

CORRECT: "Use an Amazon Cognito identity pool, federate with the SAML provider, and use a trust policy with an IAM condition key to limit employee access" is the correct answer.

INCORRECT: "Use Amazon Cognito user pools, federate with the SAML provider, and use user pool groups with an IAM policy" is incorrect. A user pool can be used to authenticate but the identity pool is used to provide authorized access to AWS services.

INCORRECT: "Create the application within an Amazon VPC and use a VPC endpoint with a trust policy to grant access to the employees" is incorrect. You cannot provide access to an on-premises SAML directory using a VPC endpoint.

INCORRECT: "Create a unique IAM role for each employee and have each employee assume the role to access the application so they can access their personal data only" is incorrect. This is not an integration into the SAML directory and would be very difficult to manage.

References:

https://docs.aws.amazon.com/cognito/latest/Developerguide/role-trust-and-permissions.html

https://docs.aws.amazon.com/cognito/latest/Developerguide/iam-roles.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cognito/

Question 17: Correct

A Developer is creating a new web application that will be deployed using AWS Elastic Beanstalk from the AWS Management Console. The Developer is about to create a source bundle which will be uploaded using the console.

Which of the following are valid requirements for creating the source bundle? (Select TWO.)

Explanation

When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you'll need to upload a source bundle. Your source bundle must meet the following requirements:

  •  Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)

  •  Not exceed 512 MB

  •  Not include a parent folder or top-level directory (subdirectories are fine)

If you want to deploy a worker application that processes periodic background tasks, your application source bundle must also include a cron.yaml file, but in other cases it is not required.

CORRECT: "Must not include a parent folder or top-level directory" is a correct answer.

CORRECT: "Must not exceed 512 MB" is also a correct answer.

INCORRECT: "Must include the cron.yaml file" is incorrect. As mentioned above, this is not required in all cases.

INCORRECT: "Must include a parent folder or top-level directory" is incorrect. A parent folder or top-level directory must NOT be included.

INCORRECT: "Must consist of one or more ZIP files" is incorrect. You bundle into a single ZIP or WAR file.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-sourcebundle.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

Question 18: Correct

An on-premises application uses Linux servers and a PostgreSQL relational database. The company will be migrating the application to AWS and requires a managed service that will take care of capacity provisioning, load balancing, and auto-scaling.

Which combination of services should the Developer use? (Select TWO.)

Explanation

The company requires a managed service; therefore, the Developer should choose AWS Elastic Beanstalk for the compute layer and Amazon RDS with the PostgreSQL engine for the database layer.

AWS Elastic Beanstalk will handle all capacity provisioning, load balancing, and auto-scaling for the web front-end and Amazon RDS provides push-button scaling for the backend.

CORRECT: "AWS Elastic Beanstalk" is a correct answer.

CORRECT: "Amazon RDS with PostrgreSQL" is also a correct answer.

INCORRECT: "Amazon EC2 with Auto Scaling" is incorrect as though these services will be used to provide the automatic scalability required for the solution, they still need to be managed. The questions asks for a managed solution and Elastic Beanstalk will manage this for you. Also, there is no mention of a load balancer so connections cannot be distributed to instances.

INCORRECT: "Amazon EC2 with PostgreSQL" is incorrect as the question asks for a managed service and therefore the database should be run on Amazon RDS.

INCORRECT: "AWS Lambda with CloudWatch Events" is incorrect as there is no mention of refactoring application code to run on AWS Lambda.

References:

https://aws.amazon.com/elasticbeanstalk/

https://aws.amazon.com/rds/postgresql/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

https://digitalcloud.training/amazon-rds/

Question 19: Correct

A developer has deployed an application on an Amazon EC2 instance in a private subnet within a VPC. The subnet does not have Internet connectivity. The developer would like to write application logs to an Amazon S3 bucket. What MUST be configured to enable connectivity?

Explanation

Please note that the question specifically asks how to enable connectivity, so this is not about permissions. When using a private subnet with no Internet connectivity there are only two options available for connecting to Amazon S3 (which, remember, is a service with a public endpoint; it is not in your VPC).

The first option is to enable Internet connectivity through either a NAT Gateway or a NAT Instance. However, there is no answer offering either of these as a solution. The other option is to enable a VPC endpoint for S3.

The specific type of VPC endpoint to S3 is a Gateway Endpoint. EC2 instances running in private subnets of a VPC can use the endpoint to enable controlled access to S3 buckets, objects, and API functions that are in the same region as the VPC. You can then use an S3 bucket policy to indicate which VPCs and which VPC Endpoints have access to your S3 buckets.

For example, instances in a private subnet whose route table includes the gateway endpoint can access Amazon S3 without any Internet connectivity.

Therefore, the only answer that presents a solution to this challenge is to provision a VPC endpoint for S3.
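A boto3 sketch of creating the gateway endpoint (hypothetical VPC, route table, and Region values):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3; the route table entry is added automatically
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",              # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",   # assumes the us-east-1 Region
    RouteTableIds=["rtb-0123456789abcdef0"],    # route table of the private subnet
)
```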

CORRECT: "A VPC endpoint should be provisioned for S3" is the correct answer.

INCORRECT: "An IAM role must be added to the instance that has permissions to write to the S3 bucket" is incorrect. You do need to do this, but the question is asking about connectivity, not permissions.

INCORRECT: "A bucket policy needs to be added specifying the principles that are allowed to write data to the bucket" is incorrect. You may choose to use a bucket policy to enable permissions but the question is asking about connectivity, not permissions.

INCORRECT: "A VPN should be established to enable private connectivity to S3" is incorrect. You can create a VPN to establish an encrypted tunnel into a VPC from a location outside of AWS. However, you cannot create a VPN connection from a subnet within a VPC to Amazon S3.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 20: Correct

A Developer has been tasked by a client to create an application. The client has provided the following requirements for the application:

     · Performance efficiency of seconds with up to a minute of latency

     · Data storage requirements will be up to thousands of terabytes

     · Per-message sizes may vary between 100 KB and 100 MB

     · Data can be stored as key/value stores supporting eventual consistency

What is the MOST cost-effective AWS service to meet these requirements?

Explanation

The question is looking for a cost-effective solution. Multiple options can support the latency and scalability requirements. Amazon RDS is not a key/value store so that rules that option out. Of the remaining options ElastiCache would be expensive and DynamoDB only supports a maximum item size of 400 KB. Therefore, the best option is Amazon S3 which delivers all of the requirements.

CORRECT: "Amazon S3" is the correct answer.

INCORRECT: "Amazon DynamoDB" is incorrect as it supports a maximum item size of 400 KB and the messages will be up to 100 MB.

INCORRECT: "Amazon RDS (with a MySQL engine)" is incorrect as it is not a key/value store.

INCORRECT: "Amazon ElastiCache" is incorrect as it is an in-memory database and would be the most expensive solution.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 21: Correct

A Developer is building an application that will store data relating to financial transactions in multiple DynamoDB tables. The Developer needs to ensure the transactions provide atomicity, isolation, and durability (ACID) and that changes are committed following an all-or-nothing paradigm.

What write API should be used for the DynamoDB table?

Explanation

Amazon DynamoDB transactions simplify the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables. Transactions provide atomicity, consistency, isolation, and durability (ACID) in DynamoDB, helping you to maintain data correctness in your applications.

You can use the DynamoDB transactional read and write APIs to manage complex business workflows that require adding, updating, or deleting multiple items as a single, all-or-nothing operation. For example, a video game developer can ensure that players’ profiles are updated correctly when they exchange items in a game or make in-game purchases.

With the transaction write API, you can group multiple Put, Update, Delete, and ConditionCheck actions. You can then submit the actions as a single TransactWriteItems operation that either succeeds or fails as a unit. The same is true for multiple Get actions, which you can group and submit as a single TransactGetItems operation.

There is no additional cost to enable transactions for your DynamoDB tables. You pay only for the reads or writes that are part of your transaction. DynamoDB performs two underlying reads or writes of every item in the transaction: one to prepare the transaction and one to commit the transaction. These two underlying read/write operations are visible in your Amazon CloudWatch metrics.

CORRECT: "Transactional" is the correct answer.

INCORRECT: "Standard" is incorrect as this will not provide the ACID / all-or nothing transactional writes that are required for this solution.

INCORRECT: "Strongly consistent" is incorrect as this applies to reads only, not writes.

INCORRECT: "Eventually consistent" is incorrect as this applies to reads only, not writes.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 22: Correct

An AWS Lambda function has been packaged for deployment to multiple environments including development, test, and production. The Lambda function uses an Amazon RDS MySQL database for storing data. Each environment has a different RDS MySQL database.

How can a Developer configure the Lambda function package to ensure the correct database connection string is used for each environment?

Explanation

You can use environment variables to store secrets securely and adjust your function's behavior without updating code. An environment variable is a pair of strings that are stored in a function's version-specific configuration.

Use environment variables to pass environment-specific settings to your code. For example, you can have two functions with the same code but different configuration. One function connects to a test database, and the other connects to a production database.

In this situation, you use environment variables to tell the function the hostname and other connection details for the database. You might also set an environment variable to configure your test environment to use more verbose logging or more detailed tracing.

You set environment variables on the unpublished version of your function by specifying a key and value. When you publish a version, the environment variables are locked for that version along with other version-specific configuration.

It is possible to create separate versions of a function with different environment variables referencing the relevant database connection strings. Aliases can then be used to differentiate the environments and to route invocations to the appropriate version.

Therefore, using environment variables is the best way to ensure the environment-specific database connection strings are available in a single deployment package.
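A sketch of how the single package can read environment-specific values at runtime (hypothetical variable names); the values are set differently for each environment in the function configuration:

```python
import os

# Values are injected per environment (dev/test/prod) via the function configuration,
# so the same deployment package works everywhere.
DB_HOST = os.environ["DB_HOST"]           # hypothetical variable names
DB_NAME = os.environ.get("DB_NAME", "appdb")

def lambda_handler(event, context):
    connection_string = f"mysql://{DB_HOST}:3306/{DB_NAME}"
    # ... connect to the environment-specific RDS MySQL database here ...
    return {"database": connection_string}
```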

CORRECT: "Use environment variables for the database connection strings" is the correct answer.

INCORRECT: "Use a separate function for development and production" is incorrect as there’s a single deployment package that must contain the connection strings for multiple environments. Therefore, using environment variables is necessary.

INCORRECT: "Include the resources in the function code" is incorrect. It would not be secure to include the database connection strings in the function code. With environment variables the password string can be encrypted using KMS which is much more secure.

INCORRECT: "Use layers for storing the database connection strings" is incorrect. Layers are used for adding external libraries to your functions.

References:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html


Question 23: Incorrect

A Development team wants to run their container workloads on Amazon ECS. Each application container needs to share data with another container to collect logs and metrics.

What should the Development team do to meet these requirements?

Explanation

Amazon ECS tasks support Docker volumes. To use data volumes, you must specify the volume and mount point configurations in your task definition. Docker volumes are supported for the EC2 launch type only.

To configure a Docker volume, in the task definition volumes section, define a data volume with name and DockerVolumeConfiguration values. In the containerDefinitions section, define multiple containers with mountPoints values that reference the name of the defined volume and the containerPath value to mount the volume at on the container.

The containers should both be specified in the same task definition. Therefore, the Development team should create one task definition, specify both containers in the definition, and then mount a shared volume between those two containers.
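A boto3 sketch of a task definition with a shared Docker volume mounted into both containers (hypothetical names and images):

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="app-with-log-collector",           # hypothetical task family
    requiresCompatibilities=["EC2"],            # Docker volumes require the EC2 launch type
    volumes=[
        {
            "name": "shared-logs",
            "dockerVolumeConfiguration": {"scope": "task", "driver": "local"},
        }
    ],
    containerDefinitions=[
        {
            "name": "app",
            "image": "my-app:latest",            # hypothetical image
            "memory": 256,
            "mountPoints": [{"sourceVolume": "shared-logs", "containerPath": "/var/log/app"}],
        },
        {
            "name": "log-collector",
            "image": "my-log-collector:latest",  # hypothetical image
            "memory": 128,
            "mountPoints": [{"sourceVolume": "shared-logs", "containerPath": "/logs", "readOnly": True}],
        },
    ],
)
```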

CORRECT: "Create one task definition. Specify both containers in the definition. Mount a shared volume between those two containers" is the correct answer.

INCORRECT: "Create two pod specifications. Make one to include the application container and the other to include the other container. Link the two pods together" is incorrect as pods are a concept associated with the Elastic Kubernetes Service (EKS).

INCORRECT: "Create two task definitions. Make one to include the application container and the other to include the other container. Mount a shared volume between the two tasks" is incorrect as a single task definition should be created with both containers.

INCORRECT: "Create a single pod specification. Include both containers in the specification. Mount a persistent volume to both containers" is incorrect as pods are a concept associated with the Elastic Kubernetes Service (EKS).

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/

Question 24: Correct

A Developer is writing an imaging microservice on AWS Lambda. The service is dependent on several libraries that are not available in the Lambda runtime environment.

How should the Developer prepare the deployment package to include these libraries?

Explanation

A deployment package is a ZIP archive that contains your function code and dependencies. You need to create a deployment package if you use the Lambda API to manage functions, or if you need to include libraries and dependencies other than the AWS SDK.

You can upload the package directly to Lambda, or you can use an Amazon S3 bucket, and then upload it to Lambda. If the deployment package is larger than 50 MB, you must use Amazon S3.
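A boto3 sketch of updating the function code from a ZIP archive that has been uploaded to S3 (hypothetical function, bucket, and key names):

```python
import boto3

lambda_client = boto3.client("lambda")

# The ZIP contains the handler code plus all dependent libraries.
# Packages over 50 MB must be referenced from S3 rather than uploaded directly.
lambda_client.update_function_code(
    FunctionName="imaging-service",         # hypothetical function name
    S3Bucket="my-deployment-artifacts",     # hypothetical bucket
    S3Key="imaging-service-1.0.0.zip",      # hypothetical key
)
```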

CORRECT: "Create a ZIP file with the source code and all dependent libraries" is the correct answer.

INCORRECT: "Create a ZIP file with the source code and a script that installs the dependent libraries at runtime" is incorrect as the Developer should not run a script at runtime as this will cause latency. Instead, the Developer should include the dependent libraries in the ZIP package.

INCORRECT: "Create a ZIP file with the source code and an appspec.yml file. Add the libraries to the appspec.yml file and upload to Amazon S3. Deploy using CloudFormation" is incorrect. The appspec.yml file is used with CodeDeploy, you cannot add libraries into it, and it is not deployed using CloudFormation.

INCORRECT: "Create a ZIP file with the source code and a buildspec.yml file that installs the dependent libraries on AWS Lambda" is incorrect as the buildspec.yml file is used with CodeBuild for compiling source code and running tests. It cannot be used to install dependent libraries within Lambda.

References:

https://docs.aws.amazon.com/lambda/latest/dg/python-package.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 25: Correct

A company has an application that logs all information to Amazon S3. Whenever there is a new log file, an AWS Lambda function is invoked to process the log files. The code works, gathering all of the necessary information. However, when checking the Lambda function logs, duplicate entries with the same request ID are found.

What is the BEST explanation for the duplicate entries?

Explanation

From the AWS documentation:

“When an error occurs, your function may be invoked multiple times. Retry behavior varies by error type, client, event source, and invocation type. For example, if you invoke a function asynchronously and it returns an error, Lambda executes the function up to two more times. For more information, see Retry Behavior.

For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.”

Therefore, the most likely explanation is that the function failed, and Lambda retried the invocation.

CORRECT: "The Lambda function failed, and the Lambda service retried the invocation with a delay" is the correct answer.

INCORRECT: "The S3 bucket name was specified incorrectly" is incorrect. If this was the case all attempts would fail but this is not the case.

INCORRECT: "There was an S3 outage, which caused duplicate entries of the same log file" is incorrect. There cannot be duplicate log files in Amazon S3 as every object must be unique within a bucket. Therefore, if the same log file was uploaded twice it would just overwrite the previous version of the file. Also, if a separate request was made to Lambda it would have a different request ID.

INCORRECT: "The application stopped intermittently and then resumed" is incorrect. The issue is duplicate entries of the same request ID.

References:

https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 26: Incorrect

A Developer has used a third-party tool to build, bundle, and package a software package on-premises. The software package is stored in a local file system and must be deployed to Amazon EC2 instances.

How can the application be deployed onto the EC2 instances?

Explanation

AWS CodeDeploy can deploy software packages using an archive that has been uploaded to an Amazon S3 bucket. The archive file will typically be a .zip file containing the code and files required to deploy the software package.
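A boto3 sketch of starting such a deployment from an archive in S3 (hypothetical application, deployment group, bucket, and key names):

```python
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_deployment(
    applicationName="my-app",                     # hypothetical CodeDeploy application
    deploymentGroupName="production-ec2-group",   # hypothetical deployment group
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-deployment-artifacts",  # hypothetical bucket
            "key": "my-app-1.0.0.zip",            # hypothetical key
            "bundleType": "zip",
        },
    },
)
```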

CORRECT: "Upload the bundle to an Amazon S3 bucket and specify the S3 location when doing a deployment using AWS CodeDeploy" is the correct answer.

INCORRECT: "Use AWS CodeDeploy and point it to the local file system to deploy the software package" is incorrect. You cannot point CodeDeploy to a local file system running on-premises.

INCORRECT: "Create a repository using AWS CodeCommit to automatically trigger a deployment to the EC2 instances" is incorrect. CodeCommit is a source control system. In this case the source code has already been package using a third-party tool.

INCORRECT: "Use AWS CodeBuild to commit the package and automatically deploy the software package" is incorrect. CodeBuild does not commit packages (CodeCommit does) or deploy the software. It is a build service.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-windows-upload-application.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 27: Correct

A company runs a legacy application that uses an XML-based SOAP interface. The company needs to expose the functionality of the service to external customers and plans to use Amazon API Gateway.

How can a Developer configure the integration?

Explanation

In API Gateway, an API's method request can take a payload in a different format from the corresponding integration request payload, as required in the backend. Similarly, the backend may return an integration response payload different from the method response payload, as expected by the frontend.

API Gateway lets you use mapping templates to map the payload from a method request to the corresponding integration request and from an integration response to the corresponding method response.

CORRECT: "Create a RESTful API using Amazon API Gateway. Transform the incoming JSON into a valid XML message for the SOAP interface using mapping templates" is the correct answer.

INCORRECT: "Create a RESTful API using Amazon API Gateway. Pass the incoming JSON to the SOAP interface through an Application Load Balancer" is incorrect. The API Gateway cannot process the XML SOAP data and cannot pass it through an ALB.

INCORRECT: "Create a SOAP API using Amazon API Gateway. Transform the incoming JSON into a valid XML message for the SOAP interface using AWS Lambda" is incorrect. API Gateway does not support SOAP APIs.

INCORRECT: "Create a SOAP API using Amazon API Gateway. Pass the incoming JSON to the SOAP interface through a Network Load Balancer" is incorrect. API Gateway does not support SOAP APIs.

References:

https://docs.aws.amazon.com/apigateway/latest/Developerguide/request-response-data-mappings.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 28: Correct

A web application runs on a fleet of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). A developer needs a store for session data so it can be reliably served across multiple requests.

Where is the best place to store the session data?

Explanation

ElastiCache is a good solution for storing session state data as it has very low latency and high performance. DynamoDB is often used for the same purpose. In this case the session data can be written to the ElastiCache cluster and can then be easily retrieved from subsequent sessions on the same or a different EC2 instance. This decouples the data from the individual instance so if an instance fails, the data is not lost.
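
A minimal sketch of this pattern, assuming an ElastiCache for Redis cluster and the redis-py client library (the endpoint, key, and TTL values are placeholders):

    import json
    import redis  # redis-py client, an assumed library choice

    # Assumed ElastiCache Redis endpoint; replace with the cluster's endpoint.
    cache = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com", port=6379)

    session_id = "session-123"
    session_data = {"user_id": "42", "cart_items": 3}

    # Write the session with a 30-minute TTL, then read it back on a later request,
    # possibly from a different EC2 instance behind the ALB.
    cache.setex(session_id, 1800, json.dumps(session_data))
    restored = json.loads(cache.get(session_id))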

CORRECT: "Write the data to an Amazon ElastiCache cluster" is the correct answer (as explained above.)

INCORRECT: "Write the data to a shared Amazon EBS volume" is incorrect.

You cannot share an EBS volume across instances except under specific circumstances, and even then the volume remains tied to individual EC2 instances that could fail, so it is not a reliable shared session store.

INCORRECT: "Write the data to the root of the filesystem" is incorrect.

This will result in data loss if the instance fails.

INCORRECT: "Write the data to the local instance store volumes" is incorrect.

This will result in data loss if the instance fails.

References:

https://aws.amazon.com/caching/session-management/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

Question 29: Correct

A developer is planning to use a Lambda function to process incoming requests from an Application Load Balancer (ALB). How can this be achieved?

Explanation

You can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.

You need to create a target group, which is used in request routing, and register a Lambda function to the target group. If the request content matches a listener rule with an action to forward it to this target group, the load balancer invokes the registered Lambda function.
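
A minimal sketch of these steps with the AWS SDK for Python (the function ARN and target group name are placeholders): create the target group with a Lambda target type, allow the load balancer to invoke the function, and register the function as a target.

    import boto3

    elbv2 = boto3.client("elbv2")
    lambda_client = boto3.client("lambda")

    function_arn = "arn:aws:lambda:us-east-1:123456789012:function:process-requests"  # assumed

    tg = elbv2.create_target_group(Name="lambda-targets", TargetType="lambda")
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    # Allow the ALB to invoke the function, then register it as a target.
    lambda_client.add_permission(
        FunctionName=function_arn,
        StatementId="alb-invoke",
        Action="lambda:InvokeFunction",
        Principal="elasticloadbalancing.amazonaws.com",
        SourceArn=tg_arn,
    )
    elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": function_arn}])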

CORRECT: "Create a target group and register the Lambda function using the AWS CLI" is the correct answer.

INCORRECT: "Create an Auto Scaling Group (ASG) and register the Lambda function in the launch configuration" is incorrect as launch configurations and ASGs are used for launching Amazon EC2 instances, you cannot use an ASG with a Lambda function.

INCORRECT: "Setup an API in front of the ALB using API Gateway and use an integration request to map the request to the Lambda function" is incorrect as it is not a common design pattern to map an API Gateway API to a Lambda function when using an ALB. Though technically possible, typically you would choose to put API Gateway or an ALB in front of your application, not both.

INCORRECT: "Configure an event-source mapping between the ALB and the Lambda function" is incorrect as you cannot configure an event-source mapping between and ALB and a Lambda function.

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 30: Incorrect

A company is using an AWS Step Functions state machine. When testing the state machine, errors were experienced in a Step Functions task state. To troubleshoot the issue, a developer requires that the state input be included along with the error message in the state output.

Which coding practice can preserve both the original input and the error for the state?

Explanation

A Step Functions execution receives a JSON text as input and passes that input to the first state in the workflow. Individual states receive JSON as input and usually pass JSON as output to the next state.

In the Amazon States Language, these fields filter and control the flow of JSON from state to state:

• InputPath

• OutputPath

• ResultPath

• Parameters

• ResultSelector

Use ResultPath to combine a task result with task input, or to select one of these. The path you provide to ResultPath controls what information passes to the output. Use ResultPath in a Catch to include the error with the original input, instead of replacing it. The following code is an example of this tactic:
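
The original listing is not reproduced here; a minimal sketch of such a Catch, written as a Python dictionary mirroring the Amazon States Language JSON (the Lambda ARN and state names are assumed), is:

    # The Catch appends the error under $.error instead of replacing the input,
    # so the next state receives both the original input and the error details.
    task_state = {
        "Type": "Task",
        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process",  # assumed
        "Catch": [
            {
                "ErrorEquals": ["States.ALL"],
                "ResultPath": "$.error",   # error is added to the state input at $.error
                "Next": "HandleFailure",   # assumed error-handling state
            }
        ],
        "End": True,
    }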

CORRECT: "Use ResultPath in a Catch statement to include the original input with the error" is the correct answer (as explained above.)

INCORRECT: "Use InputPath in a Catch statement to include the original input with the error" is incorrect.

You can use InputPath to select a portion of the state input.

INCORRECT: "Use ErrorEquals in a Retry statement to include the original input with the error" is incorrect.

A Retry is used to retry the state that caused the error, based on the retry policy described by ErrorEquals. It does not preserve the original input in the state output.

INCORRECT: "Use OutputPath in a Retry statement to include the original input with the error" is incorrect.

OutputPath enables you to select a portion of the state output to pass to the next state. This enables you to filter out unwanted information and pass only the portion of JSON that you care about.

References:

https://docs.aws.amazon.com/step-functions/latest/dg/input-output-resultpath.html

https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-handling-error-conditions.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 31: Correct

A Developer attempted to run an AWS CodeBuild project, and received an error. The error stated that the length of all environment variables exceeds the limit for the combined maximum of characters. What is the recommended solution?

Explanation

In this case the build is using environment variables that are too large for AWS CodeBuild. CodeBuild can raise errors when the length of all environment variables (all names and values added together) reaches a combined maximum of around 5,500 characters.

The recommended solution is to use Amazon EC2 Systems Manager Parameter Store to store large environment variables and then retrieve them from your buildspec file. Amazon EC2 Systems Manager Parameter Store can store an individual environment variable (name and value added together) that is a combined 4,096 characters or less.
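
As an illustrative sketch only (the parameter name and value are assumptions), a large value can be stored in Parameter Store with the AWS SDK for Python and then referenced from the buildspec (for example via its env/parameter-store mapping) or fetched during the build:

    import boto3

    ssm = boto3.client("ssm")

    ssm.put_parameter(
        Name="/codebuild/myproject/LARGE_CONFIG",      # assumed parameter name
        Value="...a long configuration string...",
        Type="String",
        Overwrite=True,
    )

    # Retrieve the value explicitly if needed during the build.
    value = ssm.get_parameter(Name="/codebuild/myproject/LARGE_CONFIG")["Parameter"]["Value"]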

CORRECT: "Use AWS Systems Manager Parameter Store to store large numbers of environment variables" is the correct answer.

INCORRECT: "Add the export LC_ALL=”en_US.utf8” command to the pre_build section to ensure POSIX localization" is incorrect as this is used to set the locale and will not affect the limits that have been reached.

INCORRECT: "Use Amazon Cognito to store key-value pairs for large numbers of environment variables" is incorrect as Cognito is used for authentication and authorization and is not suitable for this purpose.

INCORRECT: "Update the settings for the build project to use an Amazon S3 bucket for large numbers of environment variables" is incorrect as Systems Manager Parameter Store is designed for this purpose and is a better fit.

References:

https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html#troubleshooting-large-env-vars

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-systems-manager/


Question 32: Correct

A Developer is setting up a code update to Amazon ECS using AWS CodeDeploy. The Developer needs to complete the code update quickly. Which of the following deployment types should the Developer use?

Explanation

CodeDeploy provides two deployment type options – in-place and blue/green. Note that AWS Lambda and Amazon ECS deployments cannot use an in-place deployment type.

The Blue/green deployment type on an Amazon ECS compute platform works like this:

Traffic is shifted from the task set with the original version of an application in an Amazon ECS service to a replacement task set in the same service.

You can set the traffic shifting to linear or canary through the deployment configuration.

The protocol and port of a specified load balancer listener is used to reroute production traffic.

During a deployment, a test listener can be used to serve traffic to the replacement task set while validation tests are run.

CORRECT: "Blue/green" is the correct answer.

INCORRECT: "Canary" is incorrect as this is a traffic shifting option, not a deployment type. Traffic is shifted in two increments.

INCORRECT: "Linear" is incorrect as this is a traffic shifting option, not a deployment type. Traffic is shifted in two increments.

INCORRECT: "In-place" is incorrect as AWS Lambda and Amazon ECS deployments cannot use an in-place deployment type.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 33: Correct

An application deployed on AWS Elastic Beanstalk experienced increased error rates during deployments of new application versions, resulting in service degradation for users. The Development team believes that this is because of the reduction in capacity during the deployment steps. The team would like to change the deployment policy configuration of the environment to an option that maintains full capacity during deployment while using the existing instances.

Which deployment policy will meet these requirements while using the existing instances?

Explanation

AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies and options that let you configure batch size and health check behavior during deployments.

All at once:

· Deploys the new version to all instances simultaneously.

Rolling:

· Update a few instances at a time (bucket), and then move on to the next bucket once the first bucket is healthy (downtime for one bucket at a time).

Rolling with additional batch:

· Like Rolling but launches new instances in a batch ensuring that there is full availability.

Immutable:

· Launches new instances in a new ASG and deploys the version update to these instances before swapping traffic to these instances once healthy.

· Zero downtime.

Blue / Green deployment:

· Zero downtime and release facility.

· Create a new “stage” environment and deploy updates there.

The rolling with additional batch policy launches a new batch to ensure capacity is not reduced and then updates the existing instances. Therefore, this is the best option to use for these requirements.
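
For illustration only (the environment name is a placeholder), the deployment policy is an environment option setting that could be applied with the AWS SDK for Python:

    import boto3

    eb = boto3.client("elasticbeanstalk")

    eb.update_environment(
        EnvironmentName="my-app-prod",   # assumed environment name
        OptionSettings=[
            {
                "Namespace": "aws:elasticbeanstalk:command",
                "OptionName": "DeploymentPolicy",
                "Value": "RollingWithAdditionalBatch",
            }
        ],
    )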

CORRECT: “Rolling with additional batch” is the correct answer.

INCORRECT: “Rolling” is incorrect as this will only use the existing instances without introducing an extra batch and therefore this will reduce the capacity of the application while the updates are taking place.

INCORRECT: “All at once” is incorrect as this will run the updates on all instances at the same time causing a total outage.

INCORRECT: “Immutable” is incorrect as this installs the updates on new instances, not existing instances.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

Question 34: Correct

A company is setting up a Lambda function that will process events from a DynamoDB stream. The Lambda function has been created and a stream has been enabled. What else needs to be done for this solution to work?

Explanation

An event source mapping is an AWS Lambda resource that reads from an event source and invokes a Lambda function. You can use event source mappings to process items from a stream or queue in services that don't invoke Lambda functions directly. Lambda provides event source mappings for the following services.

Services That Lambda Reads Events From

Amazon Kinesis

Amazon DynamoDB

Amazon Simple Queue Service

An event source mapping uses permissions in the function's execution role to read and manage items in the event source. Permissions, event structure, settings, and polling behavior vary by event source.

The configuration of the event source mapping for stream-based services (DynamoDB, Kinesis), and Amazon SQS, is made on the Lambda side.

Note: for other services, such as Amazon S3 and SNS, the function is invoked asynchronously and the configuration is made on the source (S3/SNS) rather than Lambda.
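
A minimal sketch of creating the mapping with the AWS SDK for Python (the stream ARN, function name, and batch size are placeholders):

    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders/stream/2023-01-01T00:00:00.000",
        FunctionName="process-order-events",   # assumed function name
        StartingPosition="LATEST",             # required for stream sources
        BatchSize=100,
    )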

CORRECT: "An event-source mapping must be created on the Lambda side to associate the DynamoDB stream with the Lambda function" is the correct answer.

INCORRECT: "An alarm should be created in CloudWatch that sends a notification to Lambda when a new entry is added to the DynamoDB stream" is incorrect as you should use an event-source mapping between Lambda and DynamoDB instead.

INCORRECT: "An event-source mapping must be created on the DynamoDB side to associate the DynamoDB stream with the Lambda function" is incorrect because for stream-based services that don’t invoke Lambda functions directly, the configuration should be made on the Lambda side.

INCORRECT: "Update the CloudFormation template to map the DynamoDB stream to the Lambda function" is incorrect as CloudFormation may not even be used in this scenario (it wasn’t mentioned) and wouldn’t continuously send events from DynamoDB streams to Lambda either.

References:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 35: Incorrect

A company is deploying a microservices application on AWS Fargate using Amazon ECS. The application has environment variables that must be passed to a container for the application to initialize.

How should the environment variables be passed to the container?

Explanation

When you register a task definition, you must specify a list of container definitions that are passed to the Docker daemon on a container instance.

The developer should use advanced container definition parameters and define environment variables to pass to the container.
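
A minimal sketch of a Fargate task definition registered with the AWS SDK for Python, showing the environment parameter inside the container definition (the family, image, and variable names are assumptions):

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="storefront-api",                 # assumed task definition family
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        containerDefinitions=[
            {
                "name": "app",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/storefront-api:latest",
                "environment": [
                    {"name": "DB_HOST", "value": "db.example.internal"},
                    {"name": "STAGE", "value": "production"},
                ],
            }
        ],
    )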

CORRECT: "Use advanced container definition parameters and define environment variables under the environment parameter within the task definition" is the correct answer (as explained above.)

INCORRECT: "Use advanced container definition parameters and define environment variables under the environment parameter within the service definition" is incorrect.

The task definition is the correct place to define the environment variables to pass to the container.

INCORRECT: "Use standard container definition parameters and define environment variables under the secrets parameter within the task definition" is incorrect.

Advanced container definition parameters must be used to pass the environment variables to the container. The environment parameter should also be used.

INCORRECT: "Use standard container definition parameters and define environment variables under the WorkingDirectory parameter within the service definition" is incorrect.

Advanced container definition parameters must be used to pass the environment variables to the container. The environment parameter should also be used.

References:

https://docs.aws.amazon.com/AmazonECS/latest/userguide/task_definition_parameters.html#container_definition_environment

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/

Question 36: Correct

A company needs to store sensitive documents on Amazon S3. The documents should be encrypted in transit using SSL/TLS and then be encrypted for storage at the destination. The company do not want to manage any of the encryption infrastructure or customer master keys and require the most cost-effective solution.

What is the MOST suitable option to encrypt the data?

Explanation

Server-side encryption is the encryption of data at its destination by the application or service that receives it. Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects.

There are three options for server-side encryption:

· Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) – the data is encrypted by Amazon S3 using keys that are managed through S3

· Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS) – this option uses CMKs managed in AWS KMS. There are additional benefits such as auditing and permissions associated with the CMKs but also additional charges

· Server-Side Encryption with Customer-Provided Keys (SSE-C) – you manage the encryption keys and Amazon S3 manages the encryption, as it writes to disks, and decryption, when you access your objects.

The most suitable option for the requirements in this scenario is to use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) as the company do not want to manage CMKs and require a simple solution.
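
A minimal sketch of requesting SSE-S3 on upload with the AWS SDK for Python (the bucket, key, and content are placeholders; the SDK uses HTTPS, satisfying the in-transit requirement):

    import boto3

    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="sensitive-documents-bucket",          # assumed bucket name
        Key="contracts/2023/contract-001.pdf",        # assumed object key
        Body=b"...document bytes...",
        ServerSideEncryption="AES256",                # SSE-S3: Amazon S3-managed keys
    )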

CORRECT: "Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)" is the correct answer.

INCORRECT: "Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS) using customer managed CMKs" is incorrect as the company do not want to manage CMKs and they need the most cost-effective option and this does add additional costs.

INCORRECT: "Server-Side Encryption with Customer-Provided Keys (SSE-C)" is incorrect as with this option the customer must manage the keys or use keys managed in AWS KMS (which adds cost and complexity).

INCORRECT: "Client-side encryption with Amazon S3 managed keys" is incorrect as you cannot use Amazon S3 managed keys for client-side encryption and the encryption does not need to take place client-side for this solution.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 37: Correct

A serverless application uses an AWS Lambda function to process Amazon S3 events. The Lambda function executes 20 times per second and takes 20 seconds to complete each execution.

How many concurrent executions will the Lambda function require?

Explanation

Concurrency is the number of requests that your function is serving at any given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency.

To calculate the concurrency requirements for the Lambda function simply multiply the number of executions per second (20) by the time it takes to complete the execution (20).

Therefore, for this scenario, the calculation is 20 x 20 = 400.
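
Expressed as a quick check in Python using the figures from the scenario:

    # Concurrency = request rate (per second) x average duration (seconds).
    invocations_per_second = 20
    average_duration_seconds = 20

    concurrency = invocations_per_second * average_duration_seconds
    print(concurrency)  # 400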

CORRECT: "400" is the correct answer.

INCORRECT: "5" is incorrect. Please use the formula above to calculate concurrency requirements.

INCORRECT: "40" is incorrect. Please use the formula above to calculate concurrency requirements.

INCORRECT: "20" is incorrect. Please use the formula above to calculate concurrency requirements.

References:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 38: Correct

A website is deployed in several AWS regions. A Developer needs to direct global users to the website that provides the best performance.

How can the Developer achieve this?

Explanation

If your application is hosted in multiple AWS Regions, you can improve performance for your users by serving their requests from the AWS Region that provides the lowest latency.

To use latency-based routing, you create latency records for your resources in multiple AWS Regions. When Route 53 receives a DNS query for your domain or subdomain (example.com or acme.example.com), it determines which AWS Regions you've created latency records for, determines which region gives the user the lowest latency, and then selects a latency record for that region. Route 53 responds with the value from the selected record, such as the IP address for a web server.
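
For illustration only (the hosted zone ID, domain name, and IP address are placeholders), one latency record per Region could be created with the AWS SDK for Python; a matching record with a different SetIdentifier and Region would be created for each additional Region:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",                      # assumed hosted zone ID
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": "us-east-1",  # one latency record per Region
                        "Region": "us-east-1",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "203.0.113.10"}],
                    },
                }
            ]
        },
    )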

CORRECT: "Create A records in AWS Route 53 and use a latency-based routing policy" is the correct answer.

INCORRECT: "Create Alias records in AWS Route 53 and direct the traffic to an Elastic Load Balancer" is incorrect as an ELB is within a single region. In this case the Developer needs to direct traffic to different regions.

INCORRECT: "Create A records in AWS Route 53 and use a weighted routing policy" is incorrect as weighting is used to send more traffic to one region other another, not to direct for best performance.

INCORRECT: "Create CNAME records in AWS Route 53 and direct traffic to Amazon CloudFront" is incorrect as this does not direct traffic to different regions for best performance which is what the questions asks for.

References:

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-route-53/


Question 39: Correct

An application uses an Amazon DynamoDB table that is 50 GB in size and provisioned with 10,000 read capacity units (RCUs) per second. The table must be scanned during non-peak hours when normal traffic consumes around 5,000 RCUs. The Developer must scan the whole table in the shortest possible time whilst ensuring the normal workload is not affected.

How would the Developer optimize this scan cost-effectively?

Explanation

To make the most of the table’s provisioned throughput, the Developer can use a parallel scan (the Scan API with the TotalSegments and Segment parameters) so that the scan is distributed across the table’s partitions. This will help to optimize the scan to complete in the fastest possible time. However, the Developer will also need to apply rate limiting to ensure that the scan does not affect normal workloads.
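
A rough sketch of one scan worker using the AWS SDK for Python; the table name, segment count, page size, and sleep interval are assumptions, and each worker process or thread would be given a different segment number:

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")

    def scan_segment(segment, total_segments=4):
        # 'Limit' keeps each page small and the sleep throttles consumed RCUs
        # so normal traffic is not starved while the segment is scanned.
        kwargs = {"TableName": "MyLargeTable", "Segment": segment,
                  "TotalSegments": total_segments, "Limit": 500}
        while True:
            page = dynamodb.scan(**kwargs)
            for item in page["Items"]:
                pass  # application-specific processing would go here
            if "LastEvaluatedKey" not in page:
                break
            kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
            time.sleep(0.1)  # simple rate limiting between pages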

CORRECT: "Use the Parallel Scan API operation and limit the rate" is the correct answer.

INCORRECT: "Use sequential scans and apply a FilterExpression" is incorrect. A FilterExpression is a string that contains conditions that DynamoDB applies after the Scan operation, but before the data is returned to you. This will not assist with speeding up the scan or preventing it from affecting normal workloads.

INCORRECT: "Increase read capacity units during the scan operation" is incorrect. There are already more RCUs provisioned than are needed during the non-peak hours. The key here is to use what is available for cost-effectiveness whilst ensuing normal workloads are not affected.

INCORRECT: "Use sequential scans and set the ConsistentRead parameter to false" is incorrect. This setting would turn off consistent reads making the scan eventually consistent. This will not satisfy the requirements of the question.

References:

https://aws.amazon.com/blogs/Developer/rate-limited-scans-in-amazon-dynamodb/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 40: Incorrect

A Developer needs to scan a full DynamoDB 50GB table within non-peak hours. About half of the strongly consistent RCUs are typically used during non-peak hours and the scan duration must be minimized.

How can the Developer optimize the scan execution time without impacting production workloads?

Explanation

Performing a scan on a table consumes a lot of RCUs. A Scan operation always scans the entire table or secondary index. It then filters out values to provide the result you want, essentially adding the extra step of removing data from the result set. To reduce the amount of RCUs used by the scan so it doesn’t affect production workloads whilst minimizing the execution time, there are a couple of recommendations the Developer can follow.

Firstly, the Limit parameter can be used to reduce the page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a "pause" between each request.

Secondly, the Developer can configure parallel scans. With parallel scans the Developer can maximize usage of the available throughput and have the scans distributed across the table’s partitions.

A parallel scan can be the right choice if the following conditions are met:

The table size is 20 GB or larger.

The table's provisioned read throughput is not being fully used.

Sequential Scan operations are too slow.

Therefore, to optimize the scan operation the Developer should use parallel scans while limiting the rate as this will ensure that the scan operation does not affect the performance of production workloads and still have it complete in the minimum time.

CORRECT: "Use parallel scans while limiting the rate" is the correct answer.

INCORRECT: "Use sequential scans" is incorrect as this is slower than parallel scans and the Developer needs to minimize scan execution time.

INCORRECT: "Increase the RCUs during the scan operation" is incorrect as the table is only using half of the RCUs during non-peak hours so there are RCUs available. You could increase RCUs and perform the scan faster, but this would be more expensive. The better solution is to use parallel scans with the limit parameter.

INCORRECT: "Change to eventually consistent RCUs during the scan operation" is incorrect as this does not provide a solution for preventing impact to the production workloads. The limit parameter should be used to ensure the tables RCUs are not fully used.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-query-scan.html#QueryAndScanGuidelines.ParallelScan

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 41: Correct

A Developer is writing a serverless application that will process data uploaded to a file share. The Developer has created an AWS Lambda function and requires the function to be invoked every 15 minutes to process the data.

What is an automated and serverless way to trigger the function?

Explanation

Amazon CloudWatch Events help you to respond to state changes in your AWS resources. When your resources change state, they automatically send events into an event stream. You can create rules that match selected events in the stream and route them to your AWS Lambda function to take action.

You can create a Lambda function and direct AWS Lambda to execute it on a regular schedule. You can specify a fixed rate (for example, execute a Lambda function every hour or 15 minutes), or you can specify a Cron expression.

Therefore, the Developer should create an Amazon CloudWatch Events rule that triggers on a regular schedule to invoke the Lambda function. This is a serverless and automated solution.
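
A minimal sketch of the rule, permission, and target using the AWS SDK for Python (the rule name and function ARN are placeholders):

    import boto3

    events = boto3.client("events")
    lambda_client = boto3.client("lambda")

    function_arn = "arn:aws:lambda:us-east-1:123456789012:function:process-files"  # assumed

    rule = events.put_rule(Name="every-15-minutes", ScheduleExpression="rate(15 minutes)")

    # Allow CloudWatch Events to invoke the function, then add it as a target.
    lambda_client.add_permission(
        FunctionName=function_arn,
        StatementId="events-invoke",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule["RuleArn"],
    )
    events.put_targets(Rule="every-15-minutes", Targets=[{"Id": "1", "Arn": function_arn}])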

CORRECT: "Create an Amazon CloudWatch Events rule that triggers on a regular schedule to invoke the Lambda function" is the correct answer.

INCORRECT: "Deploy an Amazon EC2 instance based on Linux, and edit it’s /etc/crontab file by adding a command to periodically invoke the Lambda function" is incorrect as EC2 is not a serverless solution.

INCORRECT: "Configure an environment variable named PERIOD for the Lambda function. Set the value at 600" is incorrect as you cannot cause a Lambda function to execute based on a value in an environment variable.

INCORRECT: "Create an Amazon SNS topic that has a subscription to the Lambda function with a 600-second timer" is incorrect as SNS does not run on a timer, CloudWatch Events should be used instead.

References:

https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 42: Incorrect

An ecommerce company manages a storefront that uses an Amazon API Gateway API which exposes an AWS Lambda function. The Lambda function processes orders and stores the orders in an Amazon RDS for MySQL database. The number of transactions increases sporadically during marketing campaigns, and then drops close to zero during quiet times.

How can a developer increase the elasticity of the system MOST cost-effectively?

Explanation

The most efficient solution would be to use Aurora Auto Scaling and configure scaling based on a target metric. The metric to use is the average connections of Aurora Replicas, which creates a target tracking policy based on the average number of connections to the Aurora Replicas.

This will ensure that the Aurora replicas scale based on actual numbers of connections to the replicas which will vary based on how busy the storefront is and how many transactions are being processed.
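
A rough sketch of this configuration with the AWS SDK for Python (the cluster name, capacity range, and target value are assumptions):

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    autoscaling.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:storefront-aurora",            # assumed cluster name
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=8,
    )
    autoscaling.put_scaling_policy(
        PolicyName="scale-on-connections",
        ServiceNamespace="rds",
        ResourceId="cluster:storefront-aurora",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageDatabaseConnections"
            },
            "TargetValue": 500.0,   # assumed target average connections per replica
        },
    )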

CORRECT: "Migrate from Amazon RDS to Amazon Aurora MySQL. Use an Aurora Auto Scaling policy to scale read replicas based on average connections of Aurora Replicas" is the correct answer (as explained above.)

INCORRECT: "Migrate from Amazon RDS to Amazon Aurora MySQL. Use an Aurora Auto Scaling policy to scale read replicas based on average CPU utilization" is incorrect.

The better metric to use for this situation would be the number of connections to Aurora Replicas as that is the metric that has the closest correlation to the number of transactions being executed.

INCORRECT: "Create an Amazon SNS topic. Publish transactions to the topic configure an SQS queue as a destination. Configure Lambda to process transactions from the queue" is incorrect.

This is highly inefficient. There is no need for an SNS topic in this situation.

INCORRECT: "Create an Amazon SQS queue. Publish transactions to the queue and set the queue to invoke the Lambda function. Set the reserved concurrency of the Lambda function to be equal to the max number of database connections" is incorrect.

This adds an SQS queue to manage and caps the function's scaling at the reserved concurrency value, which limits elasticity rather than improving it.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-aurora/

Question 43: Correct

A developer is troubleshooting problems with a Lambda function that is invoked by Amazon SNS and repeatedly fails. How can the developer save discarded events for further processing?

Explanation

You can configure a dead letter queue (DLQ) on AWS Lambda to give you more control over message handling for all asynchronous invocations, including those delivered via AWS events (S3, SNS, IoT, etc.).

A dead-letter queue saves discarded events for further processing. A dead-letter queue acts the same as an on-failure destination in that it is used when an event fails all processing attempts or expires without being processed.

However, a dead-letter queue is part of a function's version-specific configuration, so it is locked in when you publish a version. On-failure destinations also support additional targets and include details about the function's response in the invocation record.

You can setup a DLQ by configuring the 'DeadLetterConfig' property when creating or updating your Lambda function. You can provide an SQS queue or an SNS topic as the 'TargetArn' for your DLQ, and AWS Lambda will write the event object invoking the Lambda function to this endpoint after the standard retry policy (2 additional retries on failure) is exhausted.
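
A minimal sketch of setting the DLQ on the function with the AWS SDK for Python (the function name and SQS queue ARN are placeholders):

    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.update_function_configuration(
        FunctionName="process-sns-events",   # assumed function name
        DeadLetterConfig={
            "TargetArn": "arn:aws:sqs:us-east-1:123456789012:failed-invocations-dlq"
        },
    )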

CORRECT: "Configure a Dead Letter Queue (DLQ)" is the correct answer.

INCORRECT: "Enable CloudWatch Logs for the Lambda function" is incorrect as CloudWatch logs will record metrics about the function but will not record records of the discarded events.

INCORRECT: "Enable Lambda streams" is incorrect as this is not something that exists (DynamoDB streams does exist).

INCORRECT: "Enable SNS notifications for failed events" is incorrect. Sending notifications from SNS will not include the data required for troubleshooting. A DLQ is the correct solution.

References:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/certification-training/aws-developer-associate/aws-compute/aws-lambda/

Question 44: Incorrect

An application needs to generate SMS text messages and emails for a large number of subscribers. Which AWS service can be used to send these messages to customers?

Explanation

Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. In Amazon SNS, there are two types of clients—publishers and subscribers—also referred to as producers and consumers.

Publishers communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and communication channel.

Subscribers (that is, web servers, email addresses, Amazon SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (that is, Amazon SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
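
A minimal sketch with the AWS SDK for Python (the topic name, phone number, and email address are placeholders); a single publish fans out to every subscribed endpoint:

    import boto3

    sns = boto3.client("sns")

    topic_arn = sns.create_topic(Name="customer-notifications")["TopicArn"]

    sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550123")
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="customer@example.com")

    sns.publish(TopicArn=topic_arn, Message="Your order has shipped.")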

CORRECT: "Amazon SNS" is the correct answer.

INCORRECT: "Amazon SES" is incorrect as this service only sends email, not SMS text messages.

INCORRECT: "Amazon SQS" is incorrect as this is a hosted message queue for decoupling application components.

INCORRECT: "Amazon SWF" is incorrect as the Simple Workflow Service is used for orchestrating multi-step workflows.

References:

https://docs.aws.amazon.com/sns/latest/dg/welcome.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 45: Incorrect

A developer is using AWS CodeBuild to build an application into a Docker image. The buildspec file is used to run the application build. The developer needs to push the Docker image to an Amazon ECR repository only upon the successful completion of each build.

Explanation

The post_build phase is an optional sequence. It represents the commands, if any, that CodeBuild runs after the build. For example, you might use Maven to package the build artifacts into a JAR or WAR file, or you might push a Docker image into Amazon ECR. Then you might send a build notification through Amazon SNS.

Here is an example of a buildspec file with a post_build phase that pushes a Docker image to Amazon ECR:
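
The listing is not reproduced above, so the following is a minimal sketch only; the account ID, Region, and repository name are placeholder values:

    version: 0.2
    phases:
      pre_build:
        commands:
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
      build:
        commands:
          - docker build -t my-app:latest .
          - docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
      post_build:
        commands:
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest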

CORRECT: "Add a post_build phase to the buildspec file that uses the commands block to push the Docker image" is the correct answer (as explained above.)

INCORRECT: "Add a post_build phase to the buildspec file that uses the finally block to push the Docker image" is incorrect.

Commands specified in a finally block are run after commands in the commands block. The commands in a finally block are run even if a command in the commands block fails. This would not be ideal as this would push the image to ECR even if commands in previous sequences failed.

INCORRECT: "Add an install phase to the buildspec file that uses the commands block to push the Docker image" is incorrect.

These are commands that are run during installation. The developer would want to push the image only after all installations have succeeded. Therefore, the post_build phase should be used.

INCORRECT: "Add a post_build phase to the buildspec file that uses the artifacts sequence to find the build artifacts and push to Amazon ECR" is incorrect.

The artifacts sequence is not required if you are building and pushing a Docker image to Amazon ECR, or you are running unit tests on your source code, but not building it.

References:

https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html

https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 46: Correct

A Development team is involved with migrating an on-premises MySQL database to Amazon RDS. The database usage is very read-heavy. The Development team wants to re-factor the application code to achieve optimum read performance for queries.

How can this objective be met?

Explanation

Amazon RDS uses the MariaDB, MySQL, Oracle, and PostgreSQL DB engines' built-in replication functionality to create a special type of DB instance called a Read Replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the Read Replica.

You can reduce the load on your source DB instance by routing read queries from your applications to the Read Replica. Using Read Replicas, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

A primary Amazon RDS database instance allows reads and writes, while a Read Replica can be used for running read-only workloads such as BI/reporting. This reduces the load on the primary database.

It is necessary to add logic to your code to direct read traffic to the Read Replica and write traffic to the primary database. Therefore, in this scenario the Development team will need to “Add a connection string to use an Amazon RDS read replica for read queries”.
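
A minimal sketch of that routing logic, assuming the PyMySQL client library and placeholder endpoints and credentials:

    import pymysql  # assumed MySQL client library

    # Hypothetical endpoints; the read replica has its own endpoint separate from the primary.
    WRITER_HOST = "myapp.abc123.us-east-1.rds.amazonaws.com"
    READER_HOST = "myapp-replica.abc123.us-east-1.rds.amazonaws.com"

    def get_connection(read_only=False):
        # Route read-only queries to the read replica, writes to the primary.
        host = READER_HOST if read_only else WRITER_HOST
        return pymysql.connect(host=host, user="app", password="secret", database="shop")

    conn = get_connection(read_only=True)
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")
        print(cur.fetchone())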

CORRECT: "Add a connection string to use an Amazon RDS read replica for read queries" is the correct answer.

INCORRECT: "Add database retries to the code and vertically scale the Amazon RDS database" is incorrect as this is not a good way to scale reads as you will likely hit a ceiling at some point in terms of cost or instance type. Scaling reads can be better implemented with horizontal scaling using a Read Replica.

INCORRECT: "Use Amazon RDS with a multi-AZ deployment" is incorrect as this creates a standby copy of the database in another AZ that can be failed over to in a failure scenario. This is used for DR not (at least not primarily) used for scaling performance. It is possible for certain RDS engines to use a multi-AZ standby as a read replica however the requirements in this solution do not warrant this configuration.

INCORRECT: "Add a connection string to use a read replica on an Amazon EC2 instance" is incorrect as Read Replicas are something you create on Amazon RDS, not on an EC2 instance.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-rds/

Question 47: Correct

An e-commerce web application that shares session state on-premises is being migrated to AWS. The application must be fault tolerant, natively highly scalable, and any service interruption should not affect the user experience.

What is the best option to store the session state?

Explanation

There are various ways to manage user sessions, including storing sessions locally on the node responding to the HTTP request or designating a layer in your architecture that can store sessions in a scalable and robust manner. Common approaches include sticky sessions and a distributed cache for session management.

In this scenario, a distributed cache is suitable for storing session state data. ElastiCache can perform this role and with the Redis engine replication is also supported. Therefore, the solution is fault-tolerant and natively highly scalable.

CORRECT: "Store the session state in Amazon ElastiCache" is the correct answer.

INCORRECT: "Store the session state in Amazon CloudFront" is incorrect as CloudFront is not suitable for storing session state data, it is used for caching content for better global performance.

INCORRECT: "Store the session state in Amazon S3" is incorrect as though you can store session data in Amazon S3 and replicate the data to another bucket, this would result in a service interruption if the S3 bucket was not accessible.

INCORRECT: "Enable session stickiness using elastic load balancers" is incorrect as this feature directs sessions from a specific client to a specific EC2 instances. Therefore, if the instance fails the user must be redirected to another EC2 instance and the session state data would be lost.

References:

https://aws.amazon.com/caching/session-management/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

Question 48: Correct

A Developer is deploying an application in a microservices architecture on Amazon ECS. The Developer needs to choose the best task placement strategy to MINIMIZE the number of instances that are used. Which task placement strategy should be used?

Explanation

A task placement strategy is an algorithm for selecting instances for task placement or tasks for termination. Task placement strategies can be specified when either running a task or creating a new service.

Amazon ECS supports the following task placement strategies:

binpack - Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use.

random - Place tasks randomly.

spread - Place tasks evenly based on the specified value. Accepted values are instanceId (or host, which has the same effect), or any platform or custom attribute that is applied to a container instance, such as attribute:ecs.availability-zone. Service tasks are spread based on the tasks from that service. Standalone tasks are spread based on the tasks from the same task group.

The binpack task placement strategy is the most suitable for this scenario as it minimizes the number of instances used which is a requirement for this solution.

CORRECT: "binpack" is the correct answer.

INCORRECT: "random" is incorrect as this would assign tasks randomly to EC2 instances which would not result in minimizing the number of instances used.

INCORRECT: "spread" is incorrect as this would spread the tasks based on a specified value. This is not used for minimizing the number of instances used.

INCORRECT: "weighted" is incorrect as this is not an ECS task placement strategy. Weighted is associated with Amazon Route 53 routing policies.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-strategies.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/

Question 49: Correct

A critical application runs on an Amazon EC2 instance. A Developer has configured a custom Amazon CloudWatch metric that monitors application availability with a data granularity of 1 second. The Developer must be notified within 30 seconds if the application experiences any issues.

What should the Developer do to meet this requirement?

Explanation

If you set an alarm on a high-resolution metric, you can specify a high-resolution alarm with a period of 10 seconds or 30 seconds, or you can set a regular alarm with a period of any multiple of 60 seconds. There is a higher charge for high-resolution alarms.

Amazon SNS can then be used to send notifications based on the CloudWatch alarm.
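
For illustration only (the namespace, metric name, threshold, and SNS topic ARN are assumptions), a high-resolution alarm could be created with the AWS SDK for Python:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="app-availability",
        Namespace="Custom/MyApp",            # assumed custom metric namespace
        MetricName="Availability",
        Statistic="Minimum",
        Period=10,                           # high-resolution alarm period (10 or 30 seconds)
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:app-alerts"],  # assumed topic
    )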

CORRECT: "Configure a high-resolution CloudWatch alarm and use Amazon SNS to send the alert" is the correct answer.

INCORRECT: "Specify an Amazon SNS topic for alarms when issuing the put-metric-data AWS CLI command" is incorrect. You cannot specify an SNS topic with this CLI command.

INCORRECT: "Use Amazon CloudWatch Logs Insights and trigger an Amazon Eventbridge rule to send a notification" is incorrect. Logs Insights cannot be used for alarms or alerting based on custom CloudWatch metrics.

INCORRECT: "Use a default CloudWatch metric, configure an alarm, and use Amazon SNS to send the alert" is incorrect. There is no default metric that would monitor the application uptime and the resolution would be lower.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html#high-resolution-alarms

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 50: Correct

An application asynchronously invokes an AWS Lambda function. The application has recently been experiencing occasional errors that result in failed invocations. A developer wants to store the messages that resulted in failed invocations such that the application can automatically retry processing them.

What should the developer do to accomplish this goal with the LEAST operational overhead?

Explanation

Amazon SQS supports dead-letter queues (DLQ), which other queues (source queues) can target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate unconsumed messages to determine why their processing doesn't succeed.

The redrive policy specifies the source queue, the dead-letter queue, and the conditions under which Amazon SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.

You can set your DLQ as an event source to the Lambda function to drain your DLQ. This will ensure that all failed invocations are automatically retried.
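
A rough sketch of both steps with the AWS SDK for Python (the queue URLs, ARNs, function name, and maxReceiveCount are assumptions):

    import json
    import boto3

    sqs = boto3.client("sqs")
    lambda_client = boto3.client("lambda")

    source_queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"   # assumed
    dlq_arn = "arn:aws:sqs:us-east-1:123456789012:orders-dlq"                      # assumed

    # Redrive policy: after 3 failed receives, SQS moves the message to the DLQ.
    sqs.set_queue_attributes(
        QueueUrl=source_queue_url,
        Attributes={"RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"})},
    )

    # Drain the DLQ by making it an event source for the Lambda function.
    lambda_client.create_event_source_mapping(
        EventSourceArn=dlq_arn, FunctionName="retry-failed-invocations", BatchSize=10
    )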

CORRECT: "Configure a redrive policy on an Amazon SQS queue. Set the dead-letter queue as an event source to the Lambda function" is the correct answer (as explained above.)

INCORRECT: "Configure logging to an Amazon CloudWatch Logs group. Configure Lambda to read failed invocation events from the log group" is incorrect.

The information in the logs may not be sufficient for processing the event. This is not an automated or ideal solution.

INCORRECT: "Configure Amazon EventBridge to send the messages to Amazon SNS to initiate the Lambda function again" is incorrect.

Amazon EventBridge can be configured as a failure destination and can send to SNS. SNS can also be configured with Lambda as a target. However, this solution requires more operational overhead compared to using a DLQ.

INCORRECT: "Configure an Amazon S3 bucket as a destination for failed invocations. Configure event notifications to trigger the Lambda function to process the events" is incorrect.

S3 is not a supported failure destination. Supported destinations are Amazon SNS, Amazon SQS, and Amazon EventBridge.

References:

https://aws.amazon.com/blogs/compute/introducing-aws-lambda-destinations/

https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rule-dlq.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 51: Correct

A developer must identify the public IP addresses of clients connecting to Amazon EC2 instances behind a public Application Load Balancer (ALB). The EC2 instances run an HTTP server that logs all requests to a log file.

How can the developer ensure the client public IP addresses are captured in the log files on the EC2 instances?

Explanation

The X-Forwarded-For request header is automatically added and helps you identify the IP address of a client when you use an HTTP or HTTPS load balancer.

Because load balancers intercept traffic between clients and servers, your server access logs contain only the IP address of the load balancer. To see the IP address of the client, use the X-Forwarded-For request header.

The HTTP server may need to be configured to include the x-forwarded-for request header in the log files. Once this is done, the logs will contain the public IP addresses of the clients.

CORRECT: "Configure the HTTP server to add the x-forwarded-for request header to the logs" is the correct answer (as explained above.)

INCORRECT: "Configure the HTTP server to add the x-forwarded-proto request header to the logs" is incorrect.

This request header identifies the protocol (HTTP or HTTPS).

INCORRECT: "Install the AWS X-Ray daemon on the EC2 instances and configure request logging" is incorrect.

X-Ray is used for tracing applications; it will not help identify the public IP addresses of clients.

INCORRECT: "Install the Amazon CloudWatch Logs agent on the EC2 instances and configure logging" is incorrect.

The Amazon CloudWatch Logs agent will send application and system logs to CloudWatch Logs. This does not help to capture the client IP addresses of connections.

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/x-forwarded-headers.html#x-forwarded-for

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/

Question 52: Correct

A decoupled application is using an Amazon SQS queue. The processing layer that is retrieving messages from the queue is not able to keep up with the number of messages being placed in the queue.

What is the FIRST step the developer should take to increase the number of messages the application receives?

Explanation

The ReceiveMessage API call retrieves one or more messages (up to 10), from the specified queue. This should be the first step to resolve the issue. With more messages received with each call the application should be able to process messages faster.
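
A minimal sketch with the AWS SDK for Python (the queue URL is a placeholder and the processing step is application-specific):

    import boto3

    sqs = boto3.client("sqs")

    response = sqs.receive_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/work-queue",  # assumed
        MaxNumberOfMessages=10,   # retrieve up to 10 messages per API call
        WaitTimeSeconds=5,
    )
    for message in response.get("Messages", []):
        print(message["Body"])    # application-specific processing would go here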

If the application still fails to keep up with the messages, and speed is important (remember this is one of the reasons for using an SQS queue, to shield your processing layer from the front-end), then additional queues can be added to scale horizontally.

CORRECT: "Use the ReceiveMessage API to retrieve up to 10 messages at a time" is the correct answer.

INCORRECT: "Use the API to update the WaitTimeSeconds parameter to a value other than 0" is incorrect as this is used to configure long polling.

INCORRECT: "Add additional Amazon SQS queues and have the application poll those queues" is incorrect as this may not be the first step. It would be simpler to update the application code to pull more messages at a time before adding queues.

INCORRECT: "Configure the queue to use short polling" is incorrect as this will not help the application to receive more messages.

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 53: Correct

A Developer is building a three-tier web application that must be able to handle a minimum of 10,000 requests per minute. The requirements state that the web tier should be completely stateless while the application maintains session state data for users.

How can the session state data be maintained externally, whilst keeping latency at the LOWEST possible value?

Explanation

It is common to use key/value stores for storing session state data. The two options presented in the answers are Amazon DynamoDB and Amazon ElastiCache Redis. Of these two, ElastiCache will provide the lowest latency as it is an in-memory database.

Therefore, the best answer is to create an Amazon ElastiCache Redis cluster, then implement session handling at the application level to leverage the cluster for session data storage.

CORRECT: "Create an Amazon ElastiCache Redis cluster, then implement session handling at the application level to leverage the cluster for session data storage" is the correct answer.

INCORRECT: "Create an Amazon DynamoDB table, then implement session handling at the application level to leverage the table for session data storage" is incorrect as though this is a good solution for storing session state data, the latency will not be as low as with ElastiCache.

INCORRECT: "Create an Amazon RedShift instance, then implement session handling at the application level to leverage a database inside the RedShift database instance for session data storage" is incorrect. RedShift is a data warehouse that is used for OLAP use cases, not for storing session state data.

INCORRECT: "Implement a shared Amazon EFS file system solution across the underlying Amazon EC2 instances, then implement session handling at the application level to leverage the EFS file system for session data storage" is incorrect. For session state data a key/value store such as DynamoDB or ElastiCache will provide better performance.

References:

https://aws.amazon.com/caching/session-management/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

Question 54: Correct

A team of Developers require read-only access to an Amazon DynamoDB table. The Developers have been added to a group. What should an administrator do to provide the team with access whilst following the principle of least privilege?

Explanation

The key requirement is to provide read-only access to the team for a specific DynamoDB table. Therefore, the AWS managed policy cannot be used as it will provide access to all DynamoDB tables in the account, which does not follow the principle of least privilege.

Therefore, a customer managed policy should be created that provides read-only access and specifies the ARN of the table. For instance, the resource element might include the following ARN:

arn:aws:dynamodb:us-west-1:515148227241:table/exampletable

This will lock down access to the specific DynamoDB table, following the principle of least privilege.
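
A minimal sketch of such a policy created and attached with the AWS SDK for Python (the policy name and group name are assumptions; the table ARN is the example above):

    import json
    import boto3

    iam = boto3.client("iam")

    # Read-only DynamoDB actions scoped to a single table ARN.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:BatchGetItem",
                       "dynamodb:Query", "dynamodb:Scan", "dynamodb:DescribeTable"],
            "Resource": "arn:aws:dynamodb:us-west-1:515148227241:table/exampletable",
        }],
    }

    policy = iam.create_policy(
        PolicyName="ExampleTableReadOnly",              # assumed policy name
        PolicyDocument=json.dumps(policy_document),
    )
    iam.attach_group_policy(GroupName="Developers",     # assumed group name
                            PolicyArn=policy["Policy"]["Arn"])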

CORRECT: "Create a customer managed policy with read only access to DynamoDB and specify the ARN of the table for the “Resource” element. Attach the policy to the group" is the correct answer.

INCORRECT: "Assign the AmazonDynamoDBReadOnlyAccess AWS managed policy to the group" is incorrect as this will provide read-only access to all DynamoDB tables in the account.

INCORRECT: "Assign the AWSLambdaDynamoDBExecutionRole AWS managed policy to the group" is incorrect as this is a role used with AWS Lambda.

INCORRECT: "Create a customer managed policy with read/write access to DynamoDB for all resources. Attach the policy to the group" is incorrect as read-only access should be provided, not read/write.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/using-identity-based-policies.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 55: Correct

A Developer is writing code to run in a cron job on an Amazon EC2 instance that sends status information about the application to Amazon CloudWatch.

Which method should the Developer use?

Explanation

The put-metric-data command publishes metric data points to Amazon CloudWatch. CloudWatch associates the data points with the specified metric. If the specified metric does not exist, CloudWatch creates the metric.
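
As an illustrative sketch only (the schedule, namespace, and metric name are assumptions), a crontab entry on the instance could call the command directly every five minutes:

    */5 * * * * aws cloudwatch put-metric-data --namespace "Custom/MyApp" --metric-name AppStatus --value 1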

CORRECT: "Use the AWS CLI put-metric-data command" is the correct answer.

INCORRECT: "Use the AWS CLI put-metric-alarm command" is incorrect. This command creates or updates an alarm and associates it with the specified metric, metric math expression, or anomaly detection model.

INCORRECT: "Use the unified CloudWatch agent to publish custom metrics" is incorrect. It is not necessary to use the unified CloudWatch agent. In this case the Developer can use the AWS CLI with the cron job.

INCORRECT: "Use the CloudWatch console with detailed monitoring" is incorrect. You cannot collect custom metric data using the CloudWatch console with detailed monitoring. Detailed monitoring sends data at 1-minute rather than 5-minute frequencies but will not collect custom data.

References:

https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/put-metric-data.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 56: Correct

A Developer has created a task definition that includes the following JSON code:

What is the effect of this task placement strategy?

Explanation

A task placement strategy is an algorithm for selecting instances for task placement or tasks for termination. Task placement strategies can be specified when either running a task or creating a new service.

Amazon ECS supports the following task placement strategies:

binpack

Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use.

random

Place tasks randomly.

spread

Place tasks evenly based on the specified value. Accepted values are instanceId (or host, which has the same effect), or any platform or custom attribute that is applied to a container instance, such as attribute:ecs.availability-zone.

You can specify task placement strategies with the following actions: CreateService, UpdateService, and RunTask. You can also use multiple strategies together as in the example JSON code provided with the question.
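
As a sketch, a placement strategy that spreads tasks first across Availability Zones and then across instances within each zone looks like this:

"placementStrategy": [
  {
    "field": "attribute:ecs.availability-zone",
    "type": "spread"
  },
  {
    "field": "instanceId",
    "type": "spread"
  }
]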

CORRECT: "It distributes tasks evenly across Availability Zones and then distributes tasks evenly across the instances within each Availability Zone" is the correct answer.

INCORRECT: "It distributes tasks evenly across Availability Zones and then bin packs tasks based on memory within each Availability Zone" is incorrect as it does not use the binpack strategy.

INCORRECT: "It distributes tasks evenly across Availability Zones and then distributes tasks evenly across distinct instances within each Availability Zone" is incorrect as it does not spread tasks across distinct instances (use a task placement constraint).

INCORRECT: "It distributes tasks evenly across Availability Zones and then distributes tasks randomly across instances within each Availability Zone" is incorrect as it does not use the random strategy.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-strategies.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/

Question 57: Incorrect

A company has three different environments: Development, QA, and Production. The company wants to deploy its code first in the Development environment, then QA, and then Production.

Which AWS service can be used to meet this requirement?

Explanation

You can specify one or more deployment groups for a CodeDeploy application. Each application deployment uses one of its deployment groups. The deployment group contains settings and configurations used during the deployment.

You can associate more than one deployment group with an application in CodeDeploy. This makes it possible to deploy an application revision to different sets of instances at different times. For example, you might use one deployment group to deploy an application revision to a set of instances tagged Test where you ensure the quality of the code.

Next, you deploy the same application revision to a deployment group with instances tagged Staging for additional verification. Finally, when you are ready to release the latest application to customers, you deploy to a deployment group that includes instances tagged Production.

Therefore, using AWS CodeDeploy to create multiple deployment groups meets the requirement.
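
As a sketch (the application name, tag values, and role ARN are placeholders), a deployment group for the Development environment could be created with the AWS CLI as follows:

aws deploy create-deployment-group \
  --application-name my-app \
  --deployment-group-name Development \
  --ec2-tag-filters Key=Environment,Value=Development,Type=KEY_AND_VALUE \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole

The same command, with QA and Production tag values, creates the remaining deployment groups.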

CORRECT: "Use AWS CodeDeploy to create multiple deployment groups" is the correct answer.

INCORRECT: "Use AWS CodeCommit to create multiple repositories to deploy the application" is incorrect as the requirement is to deploy the same code to separate environments in a staged manner. Therefore, having multiple code repositories is not useful.

INCORRECT: "Use AWS CodeBuild to create, configure, and deploy multiple build application projects" is incorrect as the requirement is not to build the application, it is to deploy the application.

INCORRECT: "Use AWS Data Pipeline to create multiple data pipeline provisions to deploy the application" is incorrect as Data Pipeline is a service used for data migration, not deploying updates to applications.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-groups.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 58: Correct

An application serves customers in several different geographical regions. Information about the location users connect from is written to logs stored in Amazon CloudWatch Logs. The company needs to publish an Amazon CloudWatch custom metric that tracks connections for each location.

Which approach will meet these requirements?

Explanation

You can search and filter the log data coming into CloudWatch Logs by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on.

When you create a metric from a log filter, you can also choose to assign dimensions and a unit to the metric. In this case, the company can assign a dimension that uses the location information.
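
A sketch of how this could be created with the AWS CLI is shown below; the log group name, filter pattern, and field names are illustrative and assume JSON-formatted log events that contain a location field:

aws logs put-metric-filter \
  --log-group-name "/app/access-logs" \
  --filter-name "ConnectionsByLocation" \
  --filter-pattern '{ $.location = * }' \
  --metric-transformations '[{"metricName":"Connections","metricNamespace":"MyApp","metricValue":"1","dimensions":{"Location":"$.location"}}]'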

CORRECT: "Create a CloudWatch metric filter to extract metrics from the log files with location as a dimension" is the correct answer.

INCORRECT: "Create a CloudWatch Logs Insights query to extract the location information from the logs and to create a custom metric with location as a dimension" is incorrect. You cannot create a custom metric through CloudWatch Logs Insights.

INCORRECT: "Configure a CloudWatch Events rule that creates a custom metric from the CloudWatch Logs group" is incorrect. You cannot create a custom metric using a CloudWatch Events rule.

INCORRECT: "Stream data to an Amazon Elasticsearch cluster in near-real time and export a custom metric" is incorrect. This is not a valid way of creating a custom metric in CloudWatch.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 59: Correct

A company runs many microservices applications that use Docker containers. The company is planning to migrate the containers to Amazon ECS. The workloads are highly variable and therefore the company prefers to be charged per running task.

Which solution is the BEST fit for the company’s requirements?

Explanation

The key requirement is that the company should be charged per running task. Therefore, the best answer is to use Amazon ECS with the Fargate launch type as with this model AWS charge you for running tasks rather than running container instances.

The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. You just register your task definition and Fargate launches the container for you. The Fargate launch type is serverless infrastructure that is managed by AWS.
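
As a sketch (the cluster, task definition, subnet, and security group identifiers are placeholders), a task can be launched on Fargate as follows:

aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-task:1 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}"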

CORRECT: "Amazon ECS with the Fargate launch type" is the correct answer.

INCORRECT: "Amazon ECS with the EC2 launch type" is incorrect as with this launch type you pay for running container instances (EC2 instances).

INCORRECT: "An Amazon ECS Service with Auto Scaling" is incorrect as this does not specify the launch type. You can run an ECS Service on the Fargate or EC2 launch types.

INCORRECT: "An Amazon ECS Cluster with Auto Scaling" is incorrect as this does not specify the launch type. You can run an ECS Cluster on the Fargate or EC2 launch types.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/

Question 60: Correct

A Development team currently uses a GitHub repository and would like to migrate their application code to AWS CodeCommit.
What needs to be created before they can migrate a cloned repository to CodeCommit over HTTPS?

Explanation

AWS CodeCommit is a managed version control service that hosts private Git repositories in the AWS cloud. To use CodeCommit, you configure your Git client to communicate with CodeCommit repositories. As part of this configuration, you provide IAM credentials that CodeCommit can use to authenticate you. IAM supports CodeCommit with three types of credentials:

Git credentials, an IAM-generated user name and password pair you can use to communicate with CodeCommit repositories over HTTPS.

SSH keys, a locally generated public-private key pair that you can associate with your IAM user to communicate with CodeCommit repositories over SSH.

AWS access keys, which you can use with the credential helper included with the AWS CLI to communicate with CodeCommit repositories over HTTPS.

In this scenario the Development team need to connect to CodeCommit using HTTPS, so they need either AWS access keys used with the AWS CLI credential helper or Git credentials generated by IAM. Access keys are not offered as an answer choice, so the best answer is that they need to create a set of Git credentials generated with IAM.
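
Once the Git credentials exist, the migration itself is just standard Git commands; Git prompts for the IAM-generated user name and password when pushing over HTTPS. A sketch (repository names and Region are placeholders):

git clone --mirror https://github.com/example-org/example-repo.git
cd example-repo.git
git push https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyMigratedRepo --all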

CORRECT: "A set of Git credentials generated with IAM" is the correct answer.

INCORRECT: "A GitHub secure authentication token" is incorrect as they need to authenticate to AWS CodeCommit, not GitHub (they have already accessed and cloned the repository).

INCORRECT: "A public and private SSH key file" is incorrect as these are used to communicate with CodeCommit repositories using SSH, not HTTPS.

INCORRECT: "An Amazon EC2 IAM role with CodeCommit permissions" is incorrect as you need the Git credentials generated through IAM to connect to CodeCommit.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_ssh-keys.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 61: Incorrect

An AWS Lambda function authenticates to an external web site using a regularly rotated user name and password. The credentials need to be stored securely and must not be stored in the function code.

What combination of AWS services can be used to achieve this requirement? (Select TWO.)

Explanation

With AWS Systems Manager Parameter Store, you can create secure string parameters, which are parameters that have a plaintext parameter name and an encrypted parameter value. Parameter Store uses AWS KMS to encrypt and decrypt the parameter values of secure string parameters.

With Parameter Store you can create, store, and manage data as parameters with values. You can create a parameter in Parameter Store and use it in multiple applications and services subject to policies and permissions that you design. When you need to change a parameter value, you change one instance, rather than managing error-prone changes to numerous sources. Parameter Store supports a hierarchical structure for parameter names, so you can qualify a parameter for specific uses.

To manage sensitive data, you can create secure string parameters. Parameter Store uses AWS KMS customer master keys (CMKs) to encrypt the parameter values of secure string parameters when you create or change them. It also uses CMKs to decrypt the parameter values when you access them. You can use the AWS managed CMK that Parameter Store creates for your account or specify your own customer managed CMK.

Therefore, you can use a combination of AWS Systems Manager Parameter Store and AWS Key Management Service (KMS) to store the credentials securely. These parameters can then be referenced in the Lambda function code or through environment variables.
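
As a sketch (the parameter name and value are placeholders), the credentials can be stored as a secure string parameter and retrieved with decryption at runtime:

aws ssm put-parameter \
  --name "/app/external-site/password" \
  --type SecureString \
  --value "example-password"

aws ssm get-parameter \
  --name "/app/external-site/password" \
  --with-decryption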

NOTE: Systems Manager Parameter Store does not natively perform rotation of credentials, so this must be handled by the application. AWS Secrets Manager does perform credential rotation; however, it is not an answer option for this question.

CORRECT: "AWS Systems Manager Parameter Store" is a correct answer.

CORRECT: "AWS Key Management Store (KMS)" is also a correct answer.

INCORRECT: "AWS Certificate Manager (ACM)" is incorrect as this service is used to issue SSL/TLS certificates not encryption keys.

INCORRECT: "AWS Artifact" is incorrect as this is a service to view compliance information about the AWS platform

INCORRECT: "Amazon GuardDuty" is incorrect. Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads.

References:

https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-store.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-systems-manager/

Question 62: Correct

Users of an application using Amazon API Gateway, AWS Lambda and Amazon DynamoDB have reported errors when using the application. Which metrics should a Developer monitor in Amazon CloudWatch to determine the number of client-side and server-side errors?

Explanation

To determine the number of client-side errors captured in a given period the Developer should look at the 4XXError metric. To determine the number of server-side errors captured in a given period the Developer should look at the 5XXError metric.
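
As a sketch (the API name, stage, and time window are placeholders), these metrics can be retrieved with the AWS CLI:

aws cloudwatch get-metric-statistics \
  --namespace AWS/ApiGateway \
  --metric-name 4XXError \
  --dimensions Name=ApiName,Value=my-api Name=Stage,Value=prod \
  --statistics Sum \
  --period 300 \
  --start-time 2023-01-01T00:00:00Z \
  --end-time 2023-01-01T01:00:00Z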

CORRECT: "4XXError and 5XXError" is the correct answer.

INCORRECT: "CacheHitCount and CacheMissCount" is incorrect as these count the number of requests served from the cache and the number of requests served from the backend.

INCORRECT: "IntegrationLatency and Latency" is incorrect as these measure the amount of time between when API Gateway relays a request to the backend and when it receives a response from the backend and the time between when API Gateway receives a request from a client and when it returns a response to the client.

INCORRECT: "Errors" is incorrect as this is not a metric related to Amazon API Gateway.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-metrics-and-dimensions.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

https://digitalcloud.training/amazon-api-gateway/

Question 63: Correct

A Development team need to push an update to an application that is running on AWS Elastic Beanstalk. The business SLA states that the application must maintain full performance capabilities during updates whilst minimizing cost.

Which Elastic Beanstalk deployment policy should the development team select?

Explanation

AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (All at once, Rolling, Rolling with additional batch, and Immutable) and options that let you configure batch size and health check behavior during deployments.

For this scenario we need to ensure we do not reduce the capacity of the application, but we also need to minimize cost. Of the available policies, only Rolling with additional batch and Immutable maintain full capacity during a deployment; All at once and Rolling both reduce capacity while the update is installed.

The Rolling with additional batch deployment policy does incur extra cost, but the extra cost is only the size of one batch of instances, so it can be minimized by reducing the batch size. The Immutable deployment policy requires a full set of new instances, i.e. if you have 4 instances this will temporarily double to 8 instances.

Therefore, the best deployment policy to use for this scenario is the Rolling with additional batch.
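
As a sketch (the environment name is a placeholder), the deployment policy and batch size can be configured with the AWS CLI:

aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings \
    Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=RollingWithAdditionalBatch \
    Namespace=aws:elasticbeanstalk:command,OptionName=BatchSizeType,Value=Fixed \
    Namespace=aws:elasticbeanstalk:command,OptionName=BatchSize,Value=1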

CORRECT: "Rolling with additional batch" is the correct answer.

INCORRECT: "Immutable" is incorrect as this would require a higher cost as you need a total deployment of new instances.

INCORRECT: "Rolling" is incorrect as this will result in a reduction in capacity which will affect performance.

INCORRECT: "All at once" is incorrect as this results in a total reduction in capacity, i.e. your entire application is taken down at once while the application update is installed.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

Question 64: Correct

A Developer is deploying an AWS Lambda update using AWS CodeDeploy. In the appspec.yaml file, which of the following is a valid structure for the order of hooks that should be specified?

Explanation

The content in the 'hooks' section of the AppSpec file varies, depending on the compute platform for your deployment. The 'hooks' section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

The 'hooks' section for a Lambda or an Amazon ECS deployment specifies Lambda validation functions to run during a deployment lifecycle event. If an event hook is not present, no operation is executed for that event. This section is required only if you are running scripts or Lambda validation functions as part of the deployment.

The 'hooks' section for an AWS Lambda deployment contains only the BeforeAllowTraffic and AfterAllowTraffic lifecycle events, each mapped to a Lambda validation function. A minimal sketch of this structure (the validation function names below are placeholders) looks like this:
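
Hooks:
  - BeforeAllowTraffic: "ValidateBeforeTrafficShift"
  - AfterAllowTraffic: "ValidateAfterTrafficShift"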

Therefore, in this scenario a valid structure for the order of hooks that should be specified in the appspec.yaml file is: BeforeAllowTraffic > AfterAllowTraffic.

CORRECT: "BeforeAllowTraffic > AfterAllowTraffic" is the correct answer.

INCORRECT: "BeforeInstall > AfterInstall > ApplicationStart > ValidateService" is incorrect as this would be valid for Amazon EC2.

INCORRECT: "BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic" is incorrect as this would be valid for Amazon ECS.

INCORRECT: "BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic" is incorrect as this is a partial listing of hooks for Amazon EC2 but is incomplete.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 65: Correct

A company is deploying a static website hosted from an Amazon S3 bucket. The website must support encryption in-transit for website visitors.

Which combination of actions must the Developer take to meet this requirement? (Select TWO.)

Explanation

Amazon S3 static websites use the HTTP protocol only and you cannot enable HTTPS. To enable HTTPS connections to your S3 static website, use an Amazon CloudFront distribution that is configured with an SSL/TLS certificate. This will ensure that connections between clients and the CloudFront distribution are encrypted in-transit as per the requirements.

CORRECT: "Create an Amazon CloudFront distribution. Set the S3 bucket as an origin" is a correct answer.

CORRECT: "Configure an Amazon CloudFront distribution with an SSL/TLS certificate" is also a correct answer.

INCORRECT: "Create an AWS WAF WebACL with a secure listener" is incorrect. You cannot configure a secure listener on a WebACL.

INCORRECT: "Configure an Amazon CloudFront distribution with an AWS WAF WebACL" is incorrect. This will not enable encrypted connections.

INCORRECT: "Configure the S3 bucket with an SSL/TLS certificate" is incorrect. You cannot manually add SSL/TLS certificates to Amazon S3, and it is not possible to directly configure an S3 bucket that is configured as a static website to accept encrypted connections.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudfront/