Attempt 2
Question 1: Incorrect

A Developer is designing a fault-tolerant environment where client sessions will be saved. How can the Developer ensure that no sessions are lost if an Amazon EC2 instance fails?

Explanation

The DynamoDB Session Handler is a custom session handler for PHP that allows developers to use Amazon DynamoDB as a session store. Using DynamoDB for session storage alleviates issues that occur with session handling in a distributed web application by moving sessions off the local file system and into a shared location. DynamoDB is fast, scalable, easy to set up, and handles replication of your data automatically.

CORRECT: "Use Amazon DynamoDB to perform scalable session handling" is the correct answer.

INCORRECT: "Use sticky sessions with an Elastic Load Balancer target group" is incorrect as this involves maintaining session state data on the EC2 instances which means that data is lost if an instance fails.

INCORRECT: "Use Amazon SQS to save session data" is incorrect as SQS is not used for session data, it is used for application component decoupling.

INCORRECT: "Use Elastic Load Balancer connection draining to stop sending requests to failing instances" is incorrect as this does not solve the problem of ensuring the session data is available, the data will be on the failing instance and will be lost.

References:

https://docs.aws.amazon.com/aws-sdk-php/v2/guide/feature-dynamodb-session-handler.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 2: Incorrect

A company runs a decoupled application that uses an Amazon SQS queue. The messages are processed by an AWS Lambda function. The function is not keeping up with the number of messages in the queue. A developer noticed that though the application can process multiple messages per invocation, it is only processing one at a time.

How can the developer configure the application to process messages more efficiently?

Explanation

The ReceiveMessage API retrieves one or more messages (up to 10), from the specified queue. The MaxNumberOfMessages specifies the maximum number of messages to return. Amazon SQS never returns more messages than this value (however, fewer messages might be returned). Valid values: 1 to 10. Default: 1.

Changing the MaxNumberOfMessages using the ReceiveMessage API to a value greater than 1 will therefore enable the application to process more messages in a single invocation, leading to greater efficiency.
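For illustration, a minimal boto3 sketch of a consumer requesting up to 10 messages per call (the queue URL is an assumption):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # assumed

# Request up to 10 messages per ReceiveMessage call instead of the default of 1.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,   # valid values: 1 to 10
    WaitTimeSeconds=10,       # long polling reduces empty responses
)

for message in response.get("Messages", []):
    # ... process the message body here ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```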

CORRECT: "Call the ReceiveMessage API to set MaxNumberOfMessages to a value greater than the default of 1" is the correct answer (as explained above.)

INCORRECT: "Call the ReceiveMessage API to set MaximumMessageSize to a value greater than the default of 1" is incorrect.

MaximumMessageSize specifies the maximum bytes a message can contain before SQS rejects it.

INCORRECT: "Call the ChangeMessageVisibility API for the queue and set MessageRetentionPeriod to a value greater than the default of 1" is incorrect.

ChangeMessageVisibility changes the visibility timeout of a specified message in a queue to a new value.

INCORRECT: "Call the SetQueueAttributes API for the queue and set MaxNumberOfMessages to a value greater than the default of 1" is incorrect.

MaxNumberOfMessages is configured using the ReceiveMessage API, not the SetQueueAttributes API.

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 3: Incorrect

An AWS Lambda function downloads a 50 MB file from an object storage system each time it is invoked. The download delays function completion and causes intermittent timeouts, which slows down the application.

How can the application be refactored to resolve the timeout?

Explanation

You can use the /tmp directory if the function needs to download a large file or needs disk space for operations. The maximum size is 512 MB. The content is preserved within the execution context, so multiple invocations can reuse the data.

Therefore, the download will occur once, and then subsequent invocations will use the file from the /tmp directory. This requires minimal refactoring and is the best way of resolving these issues.
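A minimal Python sketch of this pattern (the bucket, key, and file path are assumptions); the file is downloaded only when the warm execution context does not already hold it:

```python
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"      # assumed
KEY = "reference/data.bin"     # assumed
LOCAL_PATH = "/tmp/data.bin"   # /tmp offers up to 512 MB of scratch space

def handler(event, context):
    # Download once per execution context; warm invocations reuse the cached file.
    if not os.path.exists(LOCAL_PATH):
        s3.download_file(BUCKET, KEY, LOCAL_PATH)

    with open(LOCAL_PATH, "rb") as f:
        data = f.read()
    return {"bytes_processed": len(data)}
```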

CORRECT: "Store the file in the /tmp directory of the execution context and reuse it on subsequent invocations" is the correct answer.

INCORRECT: "Increase the memory allocation of the function" is incorrect as this will not resolve the issue of needing to download the file for each invocation. Adding memory results in more CPU being allocated which can reduce processing time but the problem still remains.

INCORRECT: "Increase the timeout of the function" is incorrect as this does not resolve the main issue. The download will still need to occur for each invocation and therefore the application will continue to be affected by poor performance.

INCORRECT: "Increase the concurrency allocation of the function" is incorrect as concurrency is not the issue here. The issue that needs to be resolved is to remove the requirement to download the large file for each invocation.

References:

https://aws.amazon.com/lambda/features/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 4: Incorrect

An application that processes financial transactions receives thousands of transactions each second. The transactions require end-to-end encryption, and the application implements this by using the AWS KMS GenerateDataKey operation. During operation the application receives the following error message:

“You have exceeded the rate at which you may call KMS. Reduce the frequency of your calls. (Service: AWSKMS; Status Code: 400; Error Code: ThrottlingException; Request ID: <ID>)”

Which actions are best practices to resolve this error? (Select TWO.)

Explanation

To ensure that AWS KMS can provide fast and reliable responses to API requests from all customers, it throttles API requests that exceed certain boundaries. Throttling occurs when AWS KMS rejects an otherwise valid request and returns a ThrottlingException error.

Data key caching stores data keys and related cryptographic material in a cache. When you encrypt or decrypt data, the AWS Encryption SDK looks for a matching data key in the cache. If it finds a match, it uses the cached data key rather than generating a new one. Data key caching can improve performance, reduce cost, and help you stay within service limits as your application scales.

Your application can benefit from data key caching if:

• It can reuse data keys.

• It generates numerous data keys.

• Your cryptographic operations are unacceptably slow, expensive, limited, or resource-intensive.

To create an instance of the local cache, use the LocalCryptoMaterialsCache constructor in Java and Python, the getLocalCryptographicMaterialsCache function in JavaScript, or the aws_cryptosdk_materials_cache_local_new constructor in C.
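A minimal sketch in Python, assuming the aws-encryption-sdk package (EncryptionSDKClient interface) and an example KMS key ARN; the caching cryptographic materials manager reuses data keys within the configured limits:

```python
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-east-1:123456789012:key/example-key-id"]  # assumed key ARN
)

# Local cache plus a caching cryptographic materials manager (CMM).
cache = aws_encryption_sdk.LocalCryptoMaterialsCache(capacity=100)
caching_cmm = aws_encryption_sdk.CachingCryptoMaterialsManager(
    master_key_provider=key_provider,
    cache=cache,
    max_age=300.0,               # seconds a cached data key may be reused
    max_messages_encrypted=100,  # messages per cached data key
)

ciphertext, _ = client.encrypt(source=b"transaction payload", materials_manager=caching_cmm)
plaintext, _ = client.decrypt(source=ciphertext, materials_manager=caching_cmm)
```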

Additionally, the developer can request an increase in the quota for AWS KMS, which will provide the ability to submit more API calls to AWS KMS.

CORRECT: "Create a local cache using the AWS Encryption SDK and the LocalCryptoMaterialsCache feature" is a correct answer (as explained above.)

CORRECT: "Create a case in the AWS Support Center to increase the quota for the account" is also a correct answer (as explained above.)

INCORRECT: "Call the AWS KMS Encrypt operation directly to allow AWS KMS to encrypt the data" is incorrect.

This will not reduce API calls to AWS KMS. Additionally, there are limits to the maximum size of the data that can be encrypted using this method. The max is 4096 bytes.

INCORRECT: "Use Amazon SQS to queue the requests and configure AWS KMS to poll the queue" is incorrect.

KMS cannot be configured to poll an SQS queue.

INCORRECT: "Create an AWS KMS custom key store and generate data keys through AWS CloudHSM" is incorrect.

This is an unnecessary step and would incur additional cost. CloudHSM is not beneficial for this specific situation.

References:

https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/data-key-caching.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-kms/

Question 5: Correct

An application is using Amazon DynamoDB as its data store and needs to be able to read 100 items per second as strongly consistent reads. Each item is 5 KB in size.
What value should be set for the table's provisioned throughput for reads?

Explanation

With provisioned capacity mode, you specify the number of data reads and writes per second that you require for your application.

Read capacity unit (RCU):

• Each API call to read data from your table is a read request.

• Read requests can be strongly consistent, eventually consistent, or transactional.

• For items up to 4 KB in size, one RCU can perform one strongly consistent read request per second.

• Items larger than 4 KB require additional RCUs.

• For items up to 4 KB in size, one RCU can perform two eventually consistent read requests per second.

• Transactional read requests require two RCUs to perform one read per second for items up to 4 KB.

• For example, a strongly consistent read of an 8 KB item would require two RCUs, an eventually consistent read of an 8 KB item would require one RCU, and a transactional read of an 8 KB item would require four RCUs.

Write capacity unit (WCU):

• Each API call to write data to your table is a write request.

• For items up to 1 KB in size, one WCU can perform one standard write request per second.

• Items larger than 1 KB require additional WCUs.

• Transactional write requests require two WCUs to perform one write per second for items up to 1 KB.

• For example, a standard write request of a 1 KB item would require one WCU, a standard write request of a 3 KB item would require three WCUs, and a transactional write request of a 3 KB item would require six WCUs.

To determine the number of RCUs required to handle 100 strongly consistent reads per second with an average item size of 5 KB, perform the following steps:

1. Determine the item size for the calculation by rounding up to the next multiple of 4 KB (5 KB rounds up to 8 KB).

2. Determine the RCUs per item by dividing the item size by 4 KB (8 KB / 4 KB = 2).

3. Multiply the value from step 2 by the number of reads required per second (2 x 100 = 200).
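The same arithmetic as a small Python helper (illustrative only):

```python
import math

def required_rcus(reads_per_second: int, item_size_kb: float, strongly_consistent: bool = True) -> int:
    """Provisioned RCUs needed for a given read rate and item size."""
    units_per_read = math.ceil(item_size_kb / 4)   # round item size up to a 4 KB multiple
    if not strongly_consistent:
        units_per_read = units_per_read / 2        # eventually consistent reads cost half
    return math.ceil(reads_per_second * units_per_read)

print(required_rcus(100, 5))   # 200 RCUs for 100 strongly consistent 5 KB reads per second
```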

CORRECT: "200 Read Capacity Units" is the correct answer.

INCORRECT: "50 Read Capacity Units" is incorrect.

INCORRECT: "250 Read Capacity Units" is incorrect.

INCORRECT: "500 Read Capacity Units" is incorrect.

References:

https://aws.amazon.com/dynamodb/pricing/provisioned/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 6: Incorrect

An application reads data from Amazon S3 and makes 55,000 read requests per second. A Developer must design the storage solution to ensure the performance requirements are met cost-effectively.

How can the storage be optimized to meet these requirements?

Explanation

To avoid throttling in Amazon S3 you must ensure you do not exceed certain limits on a per-prefix basis. You can send 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in an Amazon S3 bucket. There are no limits to the number of prefixes that you can have in your bucket.

In this case the Developer would need to split the files across at least 10 prefixes in a single Amazon S3 bucket. The application should then read the files across the prefixes in parallel.
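A minimal boto3 sketch of spreading objects across 10 prefixes (the bucket and key naming scheme are assumptions); reads can then be issued in parallel across the prefixes:

```python
import hashlib
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"   # assumed
NUM_PREFIXES = 10           # 10 x 5,500 GET/s per prefix comfortably covers 55,000 reads/s

def key_for(file_name: str) -> str:
    # Hash the file name to pick one of the 10 prefixes deterministically.
    prefix = int(hashlib.md5(file_name.encode()).hexdigest(), 16) % NUM_PREFIXES
    return f"prefix-{prefix}/{file_name}"

s3.put_object(Bucket=BUCKET, Key=key_for("report-0001.json"), Body=b"{}")
obj = s3.get_object(Bucket=BUCKET, Key=key_for("report-0001.json"))
```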

CORRECT: "Create at least 10 prefixes and split the files across the prefixes" is the correct answer.

INCORRECT: "Create at least 10 S3 buckets and split the files across the buckets" is incorrect. Performance is improved based on splitting reads across prefixes, not buckets.

INCORRECT: "Move the files to Amazon EFS. Index the files with S3 metadata" is incorrect. This is not cost-effective.

INCORRECT: "Move the files to Amazon DynamoDB. Index the files with S3 metadata" is incorrect. This is not cost-effective.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 7: Correct

A company is deploying a new serverless application with an AWS Lambda function. A developer ran some test invocations using the AWS CLI. The function is invoking correctly and returning a success message, but no log data is being generated in Amazon CloudWatch Logs. The developer waited for 15 minutes but the log data is still not showing up.

What is the most likely explanation for this issue?

Explanation

AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, after you set up permissions, Lambda logs all requests handled by your function and automatically stores logs generated by your code through Amazon CloudWatch Logs.

You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.

It can take 5-10 minutes for logs to show up after a function invocation. If your Lambda function code is executing, but you don't see any log data being generated after several minutes, this could mean that your execution role for the Lambda function didn't grant permissions to write log data to CloudWatch Logs.


CORRECT: "The function execution role does not have permission to write log data to CloudWatch Logs" is the correct answer (as explained above.)

INCORRECT: "The Lambda function does not have any explicit log statements for the log data to send it to CloudWatch Logs" is incorrect.

You do need to have logging statements in your code to send meaningful data to CloudWatch Logs. However, the most likely cause of having nothing show up is that the permissions were not assigned.

INCORRECT: "The function configuration does not have CloudWatch Logs configured as a success destination" is incorrect.

CloudWatch Logs is not configured as a destination in a Lambda function.

INCORRECT: "A log group and log stream has not been configured for the function in CloudWatch Logs" is incorrect.

The log group and log stream are automatically created as long as permissions are assigned.

References:

https://docs.aws.amazon.com/lambda/latest/dg/lambda-monitoring.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 8: Incorrect

A developer is creating a microservices application that includes an AWS Lambda function. The function generates a unique file for each execution and must commit the file to an AWS CodeCommit repository.

How should the developer accomplish this?

Explanation

The developer can instantiate a CodeCommit client using the AWS SDK. This provides the ability to programmatically work with the AWS CodeCommit repository. The PutFile method is used to add or modify a single file in a specified repository and branch. The CreateCommit method creates a commit for changes to a repository.
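A minimal boto3 sketch of the PutFile call from inside a Lambda handler (the repository, branch, and author details are assumptions); PutFile commits the single file, while CreateCommit can be used for multi-file changes:

```python
import uuid
import boto3

codecommit = boto3.client("codecommit")
REPO = "example-repo"   # assumed
BRANCH = "main"         # assumed

def handler(event, context):
    file_name = f"results/{uuid.uuid4()}.json"
    parent_commit = codecommit.get_branch(repositoryName=REPO, branchName=BRANCH)["branch"]["commitId"]

    response = codecommit.put_file(
        repositoryName=REPO,
        branchName=BRANCH,
        fileContent=b'{"status": "generated"}',
        filePath=file_name,
        parentCommitId=parent_commit,
        commitMessage=f"Add {file_name}",
        name="lambda-bot",                # assumed author
        email="lambda-bot@example.com",   # assumed author email
    )
    return {"commitId": response["commitId"]}
```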

CORRECT: "Use an AWS SDK to instantiate a CodeCommit client. Invoke the PutFile method to add the file to the repository and execute a commit with CreateCommit" is the correct answer (as explained above.)

INCORRECT: "Send a message to an Amazon SQS queue with the file attached. Configure an AWS Step Function as a destination for messages in the queue. Configure the Step Function to add the new file to the repository and commit the change" is incorrect.

A Step Function cannot be a destination for messages in an SQS queue. There would need to be another Lambda function or other method to trigger the state machine and pass the information across. This would be a highly inefficient solution.

INCORRECT: "After the new file is created in Lambda, use CURL to invoke the CodeCommit API. Send the file to the repository and automatically commit the change" is incorrect.

CURL cannot be used to work with the CodeCommit API. The developer must use the AWS SDK.

INCORRECT: "Upload the new file to an Amazon S3 bucket. Create an AWS Step Function to accept S3 events. Use AWS Lambda functions in the Step Function, to add the file to the repository and commit the change" is incorrect.

Step Functions is not a supported destination for Amazon S3 event notifications. Supported destinations are SNS topics, SQS queues, Lambda functions, and EventBridge event buses.

References:

https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/codecommit/AWSCodeCommitClient.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 9: Incorrect

A Developer is creating a database solution using an Amazon ElastiCache caching layer. The solution must provide strong consistency to ensure that updates to product data are consistent between the backend database and the ElastiCache cache. Low latency performance is required for all items in the database.

Which cache writing policy will satisfy these requirements?

Explanation

The write-through strategy adds data to or updates data in the cache whenever data is written to the database. Its advantages and trade-offs are as follows:

- Data in the cache is never stale. Because the data in the cache is updated every time it's written to the database, the data in the cache is always current.

- Write penalty vs. read penalty: every write involves two trips (a write to the cache and a write to the database), which adds latency to the process. That said, end users are generally more tolerant of latency when updating data than when retrieving data. There is an inherent sense that updates are more work and thus take longer.
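A minimal Python sketch of a write-through update, assuming a Redis-based ElastiCache endpoint (via the redis client library) and a hypothetical save_to_database helper:

```python
import json
import redis

cache = redis.Redis(host="example.cache.amazonaws.com", port=6379)  # assumed endpoint

def save_to_database(product_id: str, product: dict) -> None:
    # Placeholder for the real backend write (e.g. RDS or DynamoDB).
    pass

def update_product(product_id: str, product: dict) -> None:
    # Write-through: update the database and the cache as part of the same
    # operation, so the cached copy is never stale.
    save_to_database(product_id, product)
    cache.set(f"product:{product_id}", json.dumps(product))
```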

CORRECT: "Use a write-through caching strategy" is the correct answer.

INCORRECT: "Use a lazy-loading caching strategy" is incorrect. Lazy loading is a caching strategy that loads data into the cache only when necessary. This will not ensure strong consistency between the database and the cache.

INCORRECT: "Add a short duration TTL value to each write" is incorrect. A TTL specifies the number of seconds until the key expires. This will not ensure strong consistency between the database and the cache.

INCORRECT: "Invalidate the cache for each database write" is incorrect. This will allow the cache to be updated when an item is next read but will not ensure the best performance for all items in the database.

References:

https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

Question 10: Incorrect

A developer is building a web application that will be hosted on Amazon EC2 instances. The EC2 instances will store configuration data in an Amazon S3 bucket. What is the SAFEST way to allow the EC2 instances to access the S3 bucket?

Explanation

Applications that run on an EC2 instance must include AWS credentials in their AWS API requests. You could have your developers store AWS credentials directly within the EC2 instance and allow applications in that instance to use those credentials. But developers would then have to manage the credentials and ensure that they securely pass the credentials to each instance and update each EC2 instance when it's time to rotate the credentials. That's a lot of additional work.

Instead, you can and should use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you don't have to distribute long-term credentials (such as a user name and password or access keys) to an EC2 instance. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources. When you launch an EC2 instance, you specify an IAM role to associate with the instance. Applications that run on the instance can then use the role-supplied temporary credentials to sign API requests.

There are two answers that would work in this scenario. In one a customer-managed policy is used and in the other an AWS managed policy is used. The customer-managed policy is more secure in this situation as it can be locked down with more granularity to ensure the EC2 instances can only read and write to the specific bucket.

With an AWS managed policy, you must choose between read-only and full access, and full access would provide more access than is required.

CORRECT: "Create an IAM Role with a customer-managed policy attached that has the necessary permissions and attach the role to the EC2 instances" is the correct answer.

INCORRECT: "Store an access key and secret ID that has the necessary permissions on the EC2 instances" is incorrect as storing access keys on the EC2 instances is insecure and cumbersome to manage.

INCORRECT: "Create an IAM Role with an AWS managed policy attached that has the necessary permissions and attach the role to the EC2 instances" is incorrect as the AWS managed policy would provide more privileges than required.

INCORRECT: "Use the AWS SDK and authenticate with a user account that has the necessary permissions on the EC2 instances " is incorrect as you cannot authenticate through the AWS SDK using a user account on an EC2 instance.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-iam/

Question 11: Incorrect

A company runs a popular online game on premises. The application stores players’ results in an in-memory database. The application is being migrated to AWS and the company needs to ensure there is no reduction in performance.

Which database would be MOST suitable?

Explanation

ElastiCache is a fully managed, low latency, in-memory data store that supports either Memcached or Redis. With ElastiCache, management tasks such as provisioning, setup, patching, configuration, monitoring, backup, and failure recovery are taken care of, so you can focus on application development.

Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing. For this scenario, the company is currently running an in-memory database and needs to ensure similar performance, so this is an ideal use case for ElastiCache.

CORRECT: "Amazon ElastiCache" is the correct answer.

INCORRECT: "Amazon RDS" is incorrect as RDS is not an in-memory database so the performance may not be as good as ElastiCache.

INCORRECT: "Amazon DynamoDB" is incorrect as this is not an in-memory database. DynamoDB does offer great performance but if you need an in-memory cache you must use DynamoDB Accelerator (DAX).

INCORRECT: "Amazon Elastic Beanstalk" is incorrect as this is not a database service at all. You can launch databases such as RDS through Elastic Beanstalk, however EB itself is a platform service responsible for launching and managing the resources.

References:

https://aws.amazon.com/blogs/database/building-a-real-time-gaming-leaderboard-with-amazon-elasticache-for-redis/

https://aws.amazon.com/elasticache/features/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

Question 12: Correct

A developer must deploy an update to Amazon ECS using AWS CodeDeploy. The deployment should expose 10% of live traffic to the new version. Then after a period of time, route all remaining traffic to the new version.

Which ECS deployment should the company use to meet these requirements?

Explanation

The blue/green deployment type uses the blue/green deployment model controlled by CodeDeploy. This deployment type enables you to verify a new deployment of a service before sending production traffic to it.

There are three ways traffic can shift during a blue/green deployment:

Canary — Traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated task set in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.

Linear — Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.

All-at-once — All traffic is shifted from the original task set to the updated task set all at once.

The best choice for this use case is the canary traffic shifting strategy, using a predefined canary option that shifts 10% of traffic in the first increment and the remainder after the configured interval.


CORRECT: "Blue/green with canary" is the correct answer (as explained above.)

INCORRECT: "Blue/green with linear" is incorrect.

With this option traffic is shifted in equal increments with an equal amount of time between increments.

INCORRECT: "Blue/green with all at once" is incorrect.

With this option all traffic is shifted at once.

INCORRECT: "Rolling update" is incorrect.

This is a native ECS deployment model. It does not deploy in two increments with 10% first.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-bluegreen.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 13: Correct

A Developer is storing sensitive documents in Amazon S3. The documents must be encrypted at rest and company policy mandates that the encryption keys must be rotated annually. What is the EASIEST way to achieve this?

Explanation

Cryptographic best practices discourage extensive reuse of encryption keys. To create new cryptographic material for your AWS Key Management Service (AWS KMS) customer master keys (CMKs), you can create new CMKs, and then change your applications or aliases to use the new CMKs. Or, you can enable automatic key rotation for an existing customer managed CMK.

When you enable automatic key rotation for a customer managed CMK, AWS KMS generates new cryptographic material for the CMK every year. AWS KMS also saves the CMK's older cryptographic material in perpetuity so it can be used to decrypt data that it encrypted. AWS KMS does not delete any rotated key material until you delete the CMK.

Key rotation changes only the CMK's backing key, which is the cryptographic material that is used in encryption operations. The CMK is the same logical resource, regardless of whether or how many times its backing key changes, and the properties of the CMK do not change.

Therefore, the easiest way to meet this requirement is to use AWS KMS with automatic key rotation.
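Enabling automatic rotation is a single API call; a minimal boto3 sketch (the key ID is an assumption):

```python
import boto3

kms = boto3.client("kms")
KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # assumed customer managed key ID

# Rotate the key's backing material automatically every year.
kms.enable_key_rotation(KeyId=KEY_ID)

status = kms.get_key_rotation_status(KeyId=KEY_ID)
print(status["KeyRotationEnabled"])  # True
```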

CORRECT: "Use AWS KMS with automatic key rotation" is the correct answer.

INCORRECT: "Encrypt the data before sending it to Amazon S3" is incorrect as that requires managing your own encryption infrastructure which is not the easiest way to achieve the requirements.

INCORRECT: "Import a custom key into AWS KMS with annual rotation enabled" is incorrect as when you import key material into AWS KMS you are still responsible for the key material while allowing KMS to use a copy of it. Therefore, this is not the easiest solution as you must manage the key materials.

INCORRECT: "Export a key from AWS KMS to encrypt the data" is incorrect as when you export a data encryption key you are then responsible for using it and managing it.

References:

https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-kms/

Question 14: Correct

A team of Developers are building a continuous integration and delivery pipeline using AWS Developer Tools. Which services should they use for running tests against source code and installing compiled code on their AWS resources? (Select TWO.)

Explanation

AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides pre-packaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more.

CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.

CORRECT: "AWS CodeBuild for running tests against source code" is a correct answer.

CORRECT: "AWS CodeDeploy for installing compiled code on their AWS resources" is also a correct answer.

INCORRECT: "AWS CodePipeline for running tests against source code" is incorrect. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. This service works with the other Developer Tools to create a pipeline.

INCORRECT: "AWS CodeCommit for installing compiled code on their AWS resources" is incorrect as AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories.

INCORRECT: "AWS Cloud9 for running tests against source code" is incorrect as AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser.

References:

https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html

https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 15: Incorrect

An application uses Amazon API Gateway, an AWS Lambda function and a DynamoDB table. The developer requires that another Lambda function is triggered when an item lifecycle activity occurs in the DynamoDB table.

How can this be achieved?

Explanation

Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables.

If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with an AWS Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.
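A minimal boto3 sketch of associating a DynamoDB stream with a Lambda function (the stream ARN and function name are assumptions):

```python
import boto3

lambda_client = boto3.client("lambda")

# Create the event source mapping; Lambda polls the stream and invokes the
# function synchronously with batches of new stream records.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/example-table/stream/2024-01-01T00:00:00.000",  # assumed
    FunctionName="process-item-changes",  # assumed
    StartingPosition="LATEST",
    BatchSize=100,
)
```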

CORRECT: "Enable a DynamoDB stream and trigger the Lambda function synchronously from the stream" is the correct answer.

INCORRECT: "Enable a DynamoDB stream and trigger the Lambda function asynchronously from the stream" is incorrect as the invocation should be synchronous.

INCORRECT: "Configure an Amazon CloudWatch alarm that sends an Amazon SNS notification. Trigger the Lambda function asynchronously from the SNS notification" is incorrect as you cannot configure a CloudWatch alarm that notifies based on item lifecycle events. It is better to use DynamoDB streams and integrate Lambda.

INCORRECT: "Configure an Amazon CloudTrail API alarm that sends a message to an Amazon SQS queue. Configure the Lambda function to poll the queue and invoke the function synchronously" is incorrect. There is no such alarm that notifies from Amazon CloudTrail relating to item lifecycle events.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 16: Incorrect

A Developer wants to debug an application by searching and filtering log data. The application logs are stored in Amazon CloudWatch Logs. The Developer creates a new metric filter to count exceptions in the application logs. However, no results are returned from the logs.

What is the reason that no filtered results are being returned?

Explanation

After the CloudWatch Logs agent begins publishing log data to Amazon CloudWatch, you can begin searching and filtering the log data by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs.

CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on. You can use any type of CloudWatch statistic, including percentile statistics, when viewing these metrics or setting alarms.

Filters do not retroactively filter data. Filters only publish the metric data points for events that happen after the filter was created. Filtered results return the first 50 lines, which will not be displayed if the timestamp on the filtered results is earlier than the metric creation time.

Therefore, the filtered results are not being returned as CloudWatch Logs only publishes metric data for events that happen after the filter is created.
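A minimal boto3 sketch of creating such a metric filter (the log group, filter pattern, and metric names are assumptions); only events logged after this call will produce metric data points:

```python
import boto3

logs = boto3.client("logs")

logs.put_metric_filter(
    logGroupName="/aws/lambda/example-function",   # assumed
    filterName="ExceptionCount",                   # assumed
    filterPattern="Exception",                     # matches log events containing "Exception"
    metricTransformations=[
        {
            "metricName": "ApplicationExceptions",  # assumed
            "metricNamespace": "ExampleApp",        # assumed
            "metricValue": "1",
        }
    ],
)
```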

CORRECT: "CloudWatch Logs only publishes metric data for events that happen after the filter is created" is the correct answer.

INCORRECT: "A setup of the Amazon CloudWatch interface VPC endpoint is required for filtering the CloudWatch Logs in the VPC" is incorrect as a VPC endpoint is not required.

INCORRECT: "The log group for CloudWatch Logs should be first streamed to Amazon Elasticsearch Service before filtering returns the results" is incorrect as you do not need to stream the results to Elasticsearch.

INCORRECT: "Metric data points to logs groups can be filtered only after they are exported to an Amazon S3 bucket" is incorrect as it is not necessary to export the logs to an S3 bucket.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 17: Incorrect

A company is planning to use AWS CodeDeploy to deploy a new AWS Lambda function.

What are the MINIMUM properties required in the 'resources' section of the AppSpec file for CodeDeploy to deploy the function successfully?

Explanation

The content in the 'resources' section of the AppSpec file varies, depending on the compute platform of your deployment. The 'resources' section for an AWS Lambda deployment contains the name, alias, current version, and target version of a Lambda function.


CORRECT: "name, alias, currentversion, and targetversion" is the correct answer (as explained above.)

INCORRECT: "name, alias, PlatformVersion, and type" is incorrect (as explained above.)

INCORRECT: "TaskDefinition, LoadBalancerInfo, and ContainerPort" is incorrect.

These properties are related to ECS deployments.

INCORRECT: "TaskDefinition, PlatformVersion, and ContainerName" is incorrect.

These properties are related to ECS deployments.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-resources.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 18: Correct

A company needs a version control system for collaborative software development. The solution must include support for batches of changes across multiple files and parallel branching.

Which AWS service will meet these requirements?

Explanation

AWS CodeCommit is a version control service hosted by Amazon Web Services that you can use to privately store and manage assets (such as documents, source code, and binary files) in the cloud.

CodeCommit is optimized for team software development. It manages batches of changes across multiple files, which can occur in parallel with changes made by other developers.

CORRECT: "AWS CodeCommit" is the correct answer.

INCORRECT: "AWS CodeBuild" is incorrect as it is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy.

INCORRECT: "AWS CodePipeline" is incorrect as it is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

INCORRECT: "Amazon S3" is incorrect. Amazon S3 versioning supports the recovery of past versions of files, but it's not focused on collaborative file tracking features that software development teams need.

References:

https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 19: Incorrect

A company runs an application on a fleet of web servers running on Amazon EC2 instances. The web servers are behind an Elastic Load Balancer (ELB) and use an Amazon DynamoDB table for storing session state. A Developer has been asked to implement a mechanism for automatically deleting session state data that is older than 24 hours.

What is the SIMPLEST solution to this requirement?

Explanation

Time to Live (TTL) for Amazon DynamoDB lets you define when items in a table expire so that they can be automatically deleted from the database. With TTL enabled on a table, you can set a timestamp for deletion on a per-item basis, allowing you to limit storage usage to only those records that are relevant.

TTL is useful if you have continuously accumulating data that loses relevance after a specific time period (for example, session data, event logs, usage patterns, and other temporary data). If you have sensitive data that must be retained only for a certain amount of time according to contractual or regulatory obligations, TTL helps you ensure that it is removed promptly and as scheduled.

When Time to Live (TTL) is enabled on a table in Amazon DynamoDB, a background job checks the TTL attribute of items to determine whether they are expired.

DynamoDB compares the current time, in epoch time format, to the value stored in the user-defined Number attribute of an item. If the attribute’s value is in the epoch time format, is less than the current time, and is not older than 5 years, the item is deleted.

Processing takes place automatically, in the background, and doesn't affect read or write traffic to the table. In addition, deletes performed via TTL are not counted towards capacity units or request units. TTL deletes are available at no additional cost.

For this requirement, the Developer must add an attribute to each item with the expiration time in epoch format and then enable the Time To Live (TTL) feature based on that attribute.
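A minimal boto3 sketch (the table and attribute names are assumptions) that enables TTL and writes an item set to expire 24 hours after creation:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "SessionState"   # assumed

# Enable TTL once on the table, keyed on the chosen expiration attribute.
dynamodb.update_time_to_live(
    TableName=TABLE,
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "ExpirationTime"},
)

# Each session item stores its expiry as an epoch timestamp 24 hours in the future.
dynamodb.put_item(
    TableName=TABLE,
    Item={
        "SessionId": {"S": "abc123"},
        "ExpirationTime": {"N": str(int(time.time()) + 24 * 60 * 60)},
    },
)
```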

CORRECT: "Add an attribute with the expiration time; enable the Time To Live feature based on that attribute" is the correct answer.

INCORRECT: "Each day, create a new table to hold session data; delete the previous day's table" is incorrect. This solution would delete some data that is not 24 hours old as it would have to run at a specific time.

INCORRECT: "Write a script that deletes old records; schedule the scripts as a cron job on an Amazon EC2 instance" is incorrect. This is not an elegant solution and would also cost more as it requires RCUs/WCUs to delete the items.

INCORRECT: "Add an attribute with the expiration time; name the attribute ItemExpiration" is incorrect as this is not a complete solution. You also need to enable the TTL feature on the table.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 20: Correct

An AWS Lambda function has been connected to a VPC to access an application running a private subnet. The Lambda function also pulls data from an Internet-based service and is no longer able to connect to the Internet. How can this be rectified?

Explanation

To enable connectivity to an application in a private subnet and the Internet you must first allow the function to connect to the private subnet (which has already been done).

Lambda needs the following VPC configuration information so that it can connect to the VPC:

· Private subnet ID.

· Security Group ID (with required access).

Lambda uses this information to set up an Elastic Network Interface (ENI) using an available IP address from your private subnet. Because the ENI has no public IP address, you next need to add a NAT Gateway for Internet access.

The NAT Gateway should be connected to a public subnet and a route needs to be added to the private subnet.

CORRECT: "Add a NAT Gateway to a public subnet and specify a route in the private subnet" is the correct answer.

INCORRECT: "Connect the Lambda function to an Internet Gateway" is incorrect. Though by using a NAT Gateway you are effectively establishing routing to an Internet Gateway, you cannot actually connect Lambda to an Internet Gateway.

INCORRECT: "Connect an AWS VPN to Lambda to connect to the Internet" is incorrect as you cannot connect an AWS VPN to a Lambda function.

INCORRECT: "Add an Elastic IP to the Lambda function" is incorrect as you cannot add an Elastic IP to a Lambda function.

References:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 21: Incorrect

The development team is working on an API that will be served from Amazon API Gateway. The API will serve three environments: PROD, DEV, and TEST, and requires a cache size of 250 GB. What is the MOST cost-efficient deployment strategy?

Explanation

You can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API.

Caching is enabled for a stage. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint.

The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.

In this scenario we are asked to choose the most cost-efficient solution. Therefore, the best answer is to use a single API Gateway with three stages and, as caching is enabled per stage, we can choose to save cost by only enabling the cache on DEV and TEST when we need to perform tests relating to that functionality.

CORRECT: "Create a single API Gateway with three stages and enable the cache for the DEV and TEST environments only when required" is the correct answer.

INCORRECT: "Create three API Gateways, one for each environment and enable the cache for the DEV and TEST environments only when required" is incorrect. It is unnecessary to create separate API Gateways. This will increase complexity. Instead we can choose to use stages for the different environments.

INCORRECT: "Create a single API Gateway with three stages and enable the cache for all environments" is incorrect as this would not be the most cost-efficient option.

INCORRECT: "Create a single API Gateway with three deployments and configure a global cache of 250GB" is incorrect. When you deploy you API, you do so to a stage. Caching is enabled at the stage level, not globally.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 22: Incorrect

A company uses an Amazon S3 bucket to store a large number of sensitive files relating to eCommerce transactions. The company has a policy that states that all data written to the S3 bucket must be encrypted.

How can a Developer ensure compliance with this policy?

Explanation

To encrypt an object at the time of upload, you need to add a header called x-amz-server-side-encryption to the request to tell S3 to encrypt the object using SSE-C, SSE-S3, or SSE-KMS.
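For example, a minimal boto3 sketch of a PUT request using SSE-S3 (the bucket and key are assumptions); the ServerSideEncryption parameter sets the x-amz-server-side-encryption header:

```python
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-transactions-bucket",   # assumed
    Key="orders/order-1001.json",           # assumed
    Body=b'{"order_id": 1001}',
    ServerSideEncryption="AES256",          # sends x-amz-server-side-encryption: AES256 (use "aws:kms" for SSE-KMS)
)
```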

Enabling encryption on an S3 bucket does not enforce encryption, however, so it is still necessary to take extra steps to force compliance with the policy. Bucket policies are applied before encryption settings, so PUT requests without encryption information can be rejected by a bucket policy.

Therefore, we need to create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption header. There are two possible values for the x-amz-server-side-encryption header: AES256, which tells S3 to use S3-managed keys, and aws:kms, which tells S3 to use AWS KMS–managed keys.

CORRECT: "Create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption" is the correct answer.

INCORRECT: "Create a bucket policy that denies the S3 PutObject request with the attribute x-amz-acl having values public-read, public-read-write, or authenticated-read" is incorrect. This policy means that authenticated users cannot upload objects to the bucket if the objects have public permissions.

INCORRECT: "Enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) on the Amazon S3 bucket" is incorrect as this will enable default encryption but will not enforce encryption on the S3 bucket. You do still need to enable default encryption on the bucket, but this alone will not enforce encryption.

INCORRECT: "Create an Amazon CloudWatch alarm that notifies an administrator if unencrypted objects are uploaded to the S3 bucket" is incorrect. This is operationally difficult to manage and only notifies, it does not prevent.

References:

https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 23: Skipped

A Developer is designing a cloud native application. The application will use several AWS Lambda functions that will process items that the functions read from an event source. Which AWS services are supported for Lambda event source mappings? (Select THREE.)

Explanation

An event source mapping is an AWS Lambda resource that reads from an event source and invokes a Lambda function. You can use event source mappings to process items from a stream or queue in services that don't invoke Lambda functions directly. Lambda provides event source mappings for the following services.

Services That Lambda Reads Events From

Amazon Kinesis

Amazon DynamoDB

Amazon Simple Queue Service

An event source mapping uses permissions in the function's execution role to read and manage items in the event source. Permissions, event structure, settings, and polling behavior vary by event source.

CORRECT: "Amazon Kinesis, Amazon DynamoDB, and Amazon Simple Queue Service (SQS)" are the correct answers.

INCORRECT: "Amazon Simple Notification Service (SNS)" is incorrect as SNS should be used as destination for asynchronous invocation.

INCORRECT: "Amazon Simple Storage Service (S3)" is incorrect as Lambda does not read from Amazon S3, you must configure the event notification on the S3 side.

INCORRECT: "Another Lambda function" is incorrect as another function should be invoked asynchronously.

References:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 24: Skipped

A company needs to ingest several terabytes of data every hour from a large number of distributed sources. The messages are delivered continually 24 hrs a day. Messages must be delivered in real time for security analysis and live operational dashboards.

Which approach will meet these requirements?

Explanation

You can use Amazon Kinesis Data Streams to collect and process large streams of data records in real time. You can create data-processing applications, known as Kinesis Data Streams applications. A typical Kinesis Data Streams application reads data from a data stream as data records.

These applications can use the Kinesis Client Library, and they can run on Amazon EC2 instances. You can send the processed records to dashboards, use them to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS services.

This scenario is an ideal use case for Kinesis Data Streams as large volumes of real-time streaming data are being ingested. Therefore, the best approach is to use Amazon Kinesis Data Streams with the Kinesis Client Library to ingest and deliver messages.

CORRECT: "Use Amazon Kinesis Data Streams with Kinesis Client Library to ingest and deliver messages" is the correct answer.

INCORRECT: "Send the messages to an Amazon SQS queue, then process the messages by using a fleet of Amazon EC2 instances" is incorrect as this is not an ideal use case for SQS because SQS is used for decoupling application components, not for ingesting streaming data. It would require more cost (lots of instances to process data) and introduce latency. Also, the message size limitations could be an issue.

INCORRECT: "Use the Amazon S3 API to write messages to an S3 bucket, then process the messages by using Amazon RedShift" is incorrect as RedShift does not process messages from S3. RedShift is a data warehouse which is used for analytics.

INCORRECT: "Use AWS Data Pipeline to automate the movement and transformation of data" is incorrect as the question is not asking for transformation of data. The scenario calls for a solution for ingesting and processing the real time streaming data for analytics and feeding some data into a system that generates an operational dashboard.

References:

https://docs.aws.amazon.com/streams/latest/dev/introduction.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-kinesis/

Question 25: Skipped

An application is being migrated into the cloud. The application is stateless and will run on a fleet of Amazon EC2 instances. The application should scale elastically. How can a Developer ensure that the number of instances available is sufficient for current demand?

Explanation

Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet.

You can also use the dynamic and predictive scaling features of EC2 Auto Scaling to add or remove EC2 instances. Dynamic scaling responds to changing demand and predictive scaling automatically schedules the right number of EC2 instances based on predicted demand. Dynamic scaling and predictive scaling can be used together to scale faster.

A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances, including the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you've launched an EC2 instance before, you specified the same information in order to launch the instance.

You can specify your launch configuration with multiple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scaling group at a time, and you can't modify a launch configuration after you've created it. To change the launch configuration for an Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with it.

Therefore, the Developer should create a launch configuration and use Amazon EC2 Auto Scaling.

CORRECT: "Create a launch configuration and use Amazon EC2 Auto Scaling" is the correct answer.

INCORRECT: "Create a launch configuration and use Amazon CodeDeploy" is incorrect as CodeDeploy is not used for auto scaling of Amazon EC2 instances.

INCORRECT: "Create a task definition and use an Amazon ECS cluster" is incorrect as the migrated application will be running on Amazon EC2 instances, not containers.

INCORRECT: "Create a task definition and use an AWS Fargate cluster" is incorrect as the migrated application will be running on Amazon EC2 instances, not containers.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2-auto-scaling/

Question 26: Skipped

A Developer is creating a DynamoDB table for storing application logs. The table has 5 write capacity units (WCUs). The Developer needs to configure the read capacity units (RCUs) for the table. Which of the following configurations represents the most efficient use of throughput?

Explanation

In this scenario the Developer needs to maximize efficiency of RCUs. Therefore, the Developer will need to consider the item size and consistency model to determine the most efficient usage of RCUs.

Item size/consistency model: both 1 KB items and 4 KB items consume the same number of RCUs, because a read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size.

The following bullets provide the read throughput for each configuration:

• Eventually consistent, 15 RCUs, 1 KB item = 30 items/s = 2 items per RCU

• Strongly consistent, 15 RCUs, 1 KB item = 15 items/s = 1 item per RCU

• Eventually consistent, 5 RCUs, 4 KB item = 10 items/s = 2 items per RCU

• Strongly consistent, 5 RCUs, 4 KB item = 5 items/s = 1 item per RCU

From the above we can see that 4 KB items with eventually consistent reads is the most efficient option. Therefore, the Developer should choose the option “Eventually consistent reads of 5 RCUs reading items that are 4 KB in size”. This will achieve 2x 4 KB items per RCU.

CORRECT: "Eventually consistent reads of 5 RCUs reading items that are 4 KB in size" is the correct answer.

INCORRECT: "Eventually consistent reads of 15 RCUs reading items that are 1 KB in size" is incorrect as described above.

INCORRECT: "Strongly consistent reads of 5 RCUs reading items that are 4 KB in size" is incorrect as described above.

INCORRECT: "Strongly consistent reads of 15 RCUs reading items that are 1KB in size" is incorrect as described above.

References:

https://docs.amazonaws.cn/en_us/amazondynamodb/latest/developerguide/ProvisionedThroughput.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 27: Skipped

A company wants to implement authentication for its new REST service using Amazon API Gateway. To authenticate the calls, each request must include HTTP headers with a client ID and user ID. These credentials must be compared to authentication data in an Amazon DynamoDB table.

What MUST the company do to implement this authentication in API Gateway?

Explanation

A Lambda authorizer (formerly known as a custom authorizer) is an API Gateway feature that uses a Lambda function to control access to your API.

A Lambda authorizer is useful if you want to implement a custom authorization scheme that uses a bearer token authentication strategy such as OAuth or SAML, or that uses request parameters to determine the caller's identity.

When a client makes a request to one of your API's methods, API Gateway calls your Lambda authorizer, which takes the caller's identity as input and returns an IAM policy as output.

There are two types of Lambda authorizers:

• A token-based Lambda authorizer (also called a TOKEN authorizer) receives the caller's identity in a bearer token, such as a JSON Web Token (JWT) or an OAuth token.

• A request parameter-based Lambda authorizer (also called a REQUEST authorizer) receives the caller's identity in a combination of headers, query string parameters, stageVariables, and $context variables.

• For WebSocket APIs, only request parameter-based authorizers are supported.

In this scenario, the authentication is using headers in the request and therefore the request parameter-based Lambda authorizer should be used.
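A minimal Python sketch of a REQUEST authorizer (the table name, attribute names, and header names are assumptions); it looks up the supplied credentials in DynamoDB and returns an IAM policy:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
auth_table = dynamodb.Table("ApiCredentials")   # assumed table

def handler(event, context):
    headers = event.get("headers", {})
    client_id = headers.get("x-client-id")      # assumed header names
    user_id = headers.get("x-user-id")

    item = auth_table.get_item(Key={"client_id": client_id}).get("Item") if client_id else None
    allowed = bool(item) and item.get("user_id") == user_id

    # API Gateway expects an IAM policy allowing or denying the invocation.
    return {
        "principalId": user_id or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow" if allowed else "Deny",
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```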

CORRECT: "Implement an AWS Lambda authorizer that references the DynamoDB authentication table" is the correct answer.

INCORRECT: "Create a model that requires the credentials, then grant API Gateway access to the authentication table" is incorrect as a model defines the structure of the incoming payload using the JSON Schema.

INCORRECT: "Modify the integration requests to require the credentials, then grant API Gateway access to the authentication table" is incorrect as API Gateway will not authorize directly using the table information, an authorizer should be used.

INCORRECT: "Implement an Amazon Cognito authorizer that references the DynamoDB authentication table" is incorrect as a Lambda authorizer should be used in this example as the authentication data is being passed in request headers.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 28:
Skipped

A Developer needs to create an instance profile for an Amazon EC2 instance using the AWS CLI. How can this be achieved? (Select THREE.)

Explanation

To add a role to an Amazon EC2 instance using the AWS CLI you must first create an instance profile. Then you need to add the role to the instance profile and finally assign the instance profile to the Amazon EC2 instance.

The following example commands would achieve this outcome:
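
For example (the instance profile, role, and instance identifiers below are placeholders):

aws iam create-instance-profile --instance-profile-name MyInstanceProfile

aws iam add-role-to-instance-profile --instance-profile-name MyInstanceProfile --role-name MyRole

aws ec2 associate-iam-instance-profile --instance-id i-1234567890abcdef0 --iam-instance-profile Name=MyInstanceProfile

(Note that in the current AWS CLI the command that attaches the profile to a running instance is associate-iam-instance-profile.)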

CORRECT: "Run the aws iam create-instance-profile command" is a correct answer.

CORRECT: "Run the aws iam add-role-to-instance-profile command" is a correct answer.

CORRECT: "Run the aws ec2 associate-instance-profile command" is a correct answer.

INCORRECT: "Run the CreateInstanceProfile API" is incorrect as this is an API action, not an AWS CLI command.

INCORRECT: "Run the AddRoleToInstanceProfile API" is incorrect as this is an API action, not an AWS CLI command.

INCORRECT: "Run the AssignInstanceProfile API" is incorrect as this is an API action, not an AWS CLI command.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/attach-replace-ec2-instance-profile/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-iam/

Question 29:
Skipped

A gaming company is building an application to track the scores for their games using an Amazon DynamoDB table. Each item in the table is identified by a partition key (user_id) and a sort key (game_name). The table also includes the attribute “TopScore”.

A Developer has been asked to write a leaderboard application to display the highest achieved scores for each game (game_name), based on the score identified in the “TopScore” attribute.

What process will allow the Developer to extract results MOST efficiently from the DynamoDB table?

Explanation

In an Amazon DynamoDB table, the primary key that uniquely identifies each item in the table can be composed not only of a partition key, but also of a sort key.

Well-designed sort keys have two key benefits:

- They gather related information together in one place where it can be queried efficiently. Careful design of the sort key lets you retrieve commonly needed groups of related items using range queries with operators such as begins_with, between, >, <, and so on.

- Composite sort keys let you define hierarchical (one-to-many) relationships in your data that you can query at any level of the hierarchy.

To speed up queries on non-key attributes, you can create a global secondary index. A global secondary index contains a selection of attributes from the base table, but they are organized by a primary key that is different from that of the table. The index key does not need to have any of the key attributes from the table. It doesn't even need to have the same key schema as a table.

For this scenario we need to identify the top achieved score for each game. The most efficient way to do this is to create a global secondary index using “game_name” as the partition key and “TopScore” as the sort key. We can then efficiently query the global secondary index to find the top achieved score for each game.
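
For example, if the index were named "GameTitleIndex", the highest score for a game could be retrieved with a query such as the following (the table name, index name, and game value are illustrative):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("GameScores")  # illustrative table name

response = table.query(
    IndexName="GameTitleIndex",                           # GSI: partition key game_name, sort key TopScore
    KeyConditionExpression=Key("game_name").eq("Meteor Blasters"),
    ScanIndexForward=False,                               # sort descending by TopScore
    Limit=1,                                              # return only the highest score
)
top_item = response["Items"][0] if response["Items"] else None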

CORRECT: "Create a global secondary index with a partition key of “game_name” and a sort key of “TopScore” and get the results based on the score attribute" is the correct answer.

INCORRECT: "Create a local secondary index with a partition key of “game_name” and a sort key of “TopScore” and get the results based on the score attribute" is incorrect. With a local secondary index you can have a different sort key but the partition key is the same.

INCORRECT: "Use a DynamoDB scan operation to retrieve the scores for “game_name” using the “TopScore” attribute, and order the results based on the score attribute" is incorrect. This would be inefficient as it scans the whole table. First, we should create a global secondary index, and then use a query to efficiently retrieve the data.

INCORRECT: "Create a global secondary index with a partition key of “user_id” and a sort key of “game_name” and get the results based on the score attribute" is incorrect as with a global secondary index you have a different partition key and sort key. Also, we don’t need “user_id”, we need “game_name” and “TopScore”.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-sort-keys.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 30:
Skipped

An eCommerce application uses an Amazon RDS database with Amazon ElastiCache in front. Stock volume data is updated dynamically in listings as sales are made. Customers have complained that occasionally the stock volume data is incorrect, and they end up purchasing items that are out of stock. A Developer has checked the front end and indeed some items display the incorrect stock count.

What could be causing this issue?

Explanation

Amazon ElastiCache is being used to cache data from the Amazon RDS database to improve performance when performing queries. In this case the cache has stale stock volume data stored and is returning this information when customers are purchasing items.

The resolution is to ensure that the cache is invalidated (or updated) whenever the stock volume data is changed. This can be done in the application layer as part of the code path that writes to the database.
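
A minimal sketch of this pattern, assuming a Redis-based ElastiCache cluster accessed with the redis-py client; the endpoint, key format, and update_stock_in_rds() helper are placeholders:

import redis

cache = redis.Redis(host="my-cache.example.cache.amazonaws.com", port=6379)

def update_stock_in_rds(item_id, new_quantity):
    # Placeholder for the application's actual write to the Amazon RDS database.
    pass

def update_stock(item_id, new_quantity):
    # 1. Write the authoritative value to the database first.
    update_stock_in_rds(item_id, new_quantity)
    # 2. Invalidate the cached value so the next read repopulates it from RDS.
    cache.delete(f"stock:{item_id}")
    # Alternatively, write through: cache.set(f"stock:{item_id}", new_quantity)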

CORRECT: "The cache is not being invalidated when the stock volume data is changed" is the correct answer.

INCORRECT: "The stock volume data is being retrieved using a write-through ElastiCache cluster" is incorrect. If this was the case the data would not be stale.

INCORRECT: "The Amazon RDS database is deployed as Multi-AZ and the standby is inconsistent" is incorrect. Multi-AZ standbys are not used for reading data and the replication is synchronous so it would not be inconsistent.

INCORRECT: "The Amazon RDS database has insufficient IOPS provisioned for its EBS volumes" is incorrect. This is not the issue here; the stale data is being retrieved from the ElastiCache database.

References:

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Strategies.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

Question 31:
Skipped

A development team is migrating data from various file shares to AWS from on-premises. The data will be migrated into a single Amazon S3 bucket. What is the SIMPLEST method to ensure the data is encrypted at rest in the S3 bucket?

Explanation

Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket. You can set default encryption on a bucket so that all new objects are encrypted when they are stored in the bucket. The objects are encrypted using server-side encryption with either Amazon S3-managed keys (SSE-S3) or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS).
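
Default encryption can be enabled when the bucket is created in the console, or applied to a bucket with a call such as the following (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="my-migration-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}  # SSE-S3
        ]
    },
)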

CORRECT: "Enable default encryption when creating the bucket" is the correct answer.

INCORRECT: "Use SSL to transmit the data over the Internet" is incorrect as this only deals with encrypting the data whilst it is being transmitted, it does not provide encryption at rest.

INCORRECT: "Ensure all requests use the x-amz-server-side​-encryption​-customer-key header" is incorrect as it is unnecessary to use customer-provided keys. This is used with client-side encryption which is more complex to manage and is not required in this scenario.

INCORRECT: "Ensure all requests use the x-amz-server-side-encryption header" is incorrect as though this has the required effect of ensuring all data is encrypted, it is not the simplest method. In this scenario there is a team migrating data from different file shares which increases the risk of human error where a team member may neglect to add the header to the API call. Using default encryption on the bucket is a simpler solution.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 32:
Skipped

A developer is planning the deployment of a new version of an application to AWS Elastic Beanstalk. The new version of the application should be deployed only to new EC2 instances.

Which deployment methods will meet these requirements? (Select TWO.)

Explanation

AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies and options that let you configure batch size and health check behavior during deployments.

All at once:

• Deploys the new version to all instances simultaneously.

Rolling:

• Updates a few instances at a time (a bucket), and then moves on to the next bucket once the first bucket is healthy (the instances in each bucket are out of service while they are being updated).

Rolling with additional batch:

• Like Rolling but launches new instances in a batch ensuring that there is full availability.

Immutable:

• Launches a full set of new instances in a temporary Auto Scaling group; once they pass health checks they are moved into the original group and the old instances are terminated.

• Zero downtime.

Blue / Green deployment:

• Zero downtime and release facility.

• Create a new “stage” environment, deploy updates there, and then swap environment URLs (CNAMEs) to redirect traffic to the new version.

The immutable and blue/green options both provide zero downtime as they will deploy the new version to a new version of the application. These are also the only two options that will ONLY deploy the updates to new EC2 instances.
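
For reference, the deployment policy for an Elastic Beanstalk environment is an option setting in the aws:elasticbeanstalk:command namespace. A rough sketch of switching an environment to Immutable (the environment name is a placeholder):

import boto3

eb = boto3.client("elasticbeanstalk")
eb.update_environment(
    EnvironmentName="my-app-prod",
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "DeploymentPolicy",
            "Value": "Immutable",
        }
    ],
)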

CORRECT: "Immutable" is the correct answer.

CORRECT: "Blue/green" is the correct answer.

INCORRECT: "All-at-once" is incorrect as this will deploy the updates to existing instances.

INCORRECT: "Rolling" is incorrect as this will deploy the updates to existing instances.

INCORRECT: "Rolling with additional batch" is incorrect as this will launch new instances but will also update the existing instances as well (which is not allowed according to the requirements).

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

Question 33:
Skipped

An application uses an Auto Scaling group of Amazon EC2 instances, an Application Load Balancer (ALB), and an Amazon Simple Queue Service (SQS) queue. An Amazon CloudFront distribution caches content for global users. A Developer needs to add in-transit encryption to the data by configuring end-to-end SSL between the CloudFront Origin and the end users.

How can the Developer meet this requirement? (Select TWO.)

Explanation

For web distributions, you can configure CloudFront to require that viewers use HTTPS to request your objects, so that connections are encrypted when CloudFront communicates with viewers. You also can configure CloudFront to use HTTPS to get objects from your origin, so that connections are encrypted when CloudFront communicates with your origin.

If you configure CloudFront to require HTTPS both to communicate with viewers and to communicate with your origin, here's what happens when CloudFront receives a request for an object:

1. A viewer submits an HTTPS request to CloudFront. There's some SSL/TLS negotiation here between the viewer and CloudFront. In the end, the viewer submits the request in an encrypted format.

2. If the object is in the CloudFront edge cache, CloudFront encrypts the response and returns it to the viewer, and the viewer decrypts it.

3. If the object is not in the CloudFront cache, CloudFront performs SSL/TLS negotiation with your origin and, when the negotiation is complete, forwards the request to your origin in an encrypted format.

4. Your origin decrypts the request, encrypts the requested object, and returns the object to CloudFront.

5. CloudFront decrypts the response, re-encrypts it, and forwards the object to the viewer. CloudFront also saves the object in the edge cache so that the object is available the next time it's requested.

6. The viewer decrypts the response.

To enable SSL between the origin and the distribution the Developer can configure the Origin Protocol Policy. Depending on the domain name used (CloudFront default or custom), the steps are different. To enable SSL between the end-user and CloudFront the Viewer Protocol Policy should be configured.
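
The two settings live in different parts of the distribution configuration. The fragment below (illustrative values only, not a complete distribution) shows where each one applies:

# Fragment of a CloudFront DistributionConfig (not a complete distribution)
distribution_config_fragment = {
    "DefaultCacheBehavior": {
        "ViewerProtocolPolicy": "redirect-to-https"    # viewer <-> CloudFront
    },
    "Origins": {
        "Items": [
            {
                "CustomOriginConfig": {
                    "OriginProtocolPolicy": "https-only"   # CloudFront <-> origin (the ALB)
                }
            }
        ]
    },
}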

CORRECT: "Configure the Origin Protocol Policy" is a correct answer.

CORRECT: "Configure the Viewer Protocol Policy" is also a correct answer.

INCORRECT: "Create an Origin Access Identity (OAI)" is incorrect as this is a special user used for securing objects in Amazon S3 origins.

INCORRECT: "Add a certificate to the Auto Scaling Group" is incorrect as you do not add certificates to an ASG. The certificate should be located on the ALB listener in this scenario.

INCORRECT: "Create an encrypted distribution" is incorrect as there’s no such thing as an encrypted distribution

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-cloudfront-to-custom-origin.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudfront/

Question 34:
Skipped

A Developer is using AWS SAM to create a template for deploying a serverless application. The Developer plans to deploy an AWS Lambda function and an Amazon DynamoDB table using the template.

Which resource types should the Developer specify? (Select TWO.)

Explanation

A serverless application is a combination of Lambda functions, event sources, and other resources that work together to perform tasks. Note that a serverless application is more than just a Lambda function—it can include additional resources such as APIs, databases, and event source mappings.

AWS SAM templates are an extension of AWS CloudFormation templates, with some additional components that make them easier to work with.

To create a Lambda function using an AWS SAM template the Developer can use the AWS::Serverless::Function resource type. This resource type creates a Lambda function, an IAM execution role, and the event source mappings that trigger the function.

To create a DynamoDB table using an AWS SAM template the Developer can use the AWS::Serverless::SimpleTable resource type which creates a DynamoDB table with a single attribute primary key. It is useful when data only needs to be accessed via a primary key.

CORRECT: "AWS::Serverless:Function" is a correct answer.

CORRECT: "AWS::Serverless:SimpleTable" is also a correct answer.

INCORRECT: "AWS::Serverless::Application" is incorrect as this embeds a serverless application from the AWS Serverless Application Repository or from an Amazon S3 bucket as a nested application.

INCORRECT: "AWS::Serverless:LayerVersion" is incorrect as this creates a Lambda LayerVersion that contains library or runtime code needed by a Lambda Function.

INCORRECT: "AWS::Serverless:API" is incorrect as this creates a collection of Amazon API Gateway resources and methods that can be invoked through HTTPS endpoints.

References:

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-sam/

Question 35:
Skipped

A Developer is creating an AWS Lambda function that will process data from an Amazon Kinesis data stream. The function is expected to be invoked 50 times per second and take 100 seconds to complete each request.

What MUST the Developer do to ensure the function runs without errors?

Explanation

Concurrency is the number of requests that your function is serving at any given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency.

Concurrency is subject to a Regional limit that is shared by all functions in a Region. For an initial burst of traffic, your functions' cumulative concurrency in a Region can reach an initial level of between 500 and 3000, which varies per Region:

3000 – US West (Oregon), US East (N. Virginia), Europe (Ireland)

1000 – Asia Pacific (Tokyo), Europe (Frankfurt)

500 – Other Regions

After the initial burst, your functions' concurrency can scale by an additional 500 instances each minute. This continues until there are enough instances to serve all requests, or until a concurrency limit is reached. When requests come in faster than your function can scale, or when your function is at maximum concurrency, additional requests fail with a throttling error (429 status code).

The function continues to scale until the account's concurrency limit for the function's Region is reached. The function catches up to demand, requests subside, and unused instances of the function are stopped after being idle for some time. Unused instances are frozen while they're waiting for requests and don't incur any charges.

The regional concurrency limit starts at 1,000. You can increase the limit by submitting a request in the Support Center console.

Calculating concurrency requirements for this scenario

To calculate the concurrency requirements for this scenario, simply multiply the invocation requests per second (50) with the average execution time in seconds (100). This calculation is 50 x 100 = 5,000.

Therefore, 5,000 concurrent executions is over the default limit and the Developer will need to request a limit increase in the AWS Support Center console.

CORRECT: "Contact AWS and request to increase the limit for concurrent executions" is the correct answer.

INCORRECT: "No action is required as AWS Lambda can easily accommodate this requirement" is incorrect as by default the AWS account will be limited. Lambda can easily scale to this level of demand however the account limits must first be increased.

INCORRECT: "Increase the concurrency limit for the function" is incorrect as the default account limit of 1,000 concurrent executions will mean you can only assign up to 900 executions to the function (100 must be left unreserved). This is insufficient for this requirement to the account limit must be increased.

INCORRECT: "Implement exponential backoff in the function code" is incorrect. Exponential backoff means configuring the application to wait longer between API calls, slowing the demand. However, this is not a good resolution to this issue as it will have negative effects on the application. The correct choice is to raise the account limits so the function can concurrently execute according to its requirements.

References:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 36:
Skipped

A developer is creating an Auto Scaling group of Amazon EC2 instances. The developer needs to publish a custom metric to Amazon CloudWatch. Which method would be the MOST secure way to authenticate a CloudWatch PUT request?

Explanation

The most secure configuration to authenticate the request is to create an IAM role with a permissions policy that only provides the minimum permissions required (least privilege). This IAM role should have a customer-managed permissions policy applied with the PutMetricData action allowed.

The PutMetricData API publishes metric data points to Amazon CloudWatch. CloudWatch associates the data points with the specified metric. If the specified metric does not exist, CloudWatch creates the metric. When CloudWatch creates a metric, it can take up to fifteen minutes for the metric to appear in calls to ListMetrics.
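
For reference, the call the instances would make looks like the following (the namespace, metric name, and dimension values are placeholders):

import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="MyApplication",
    MetricData=[
        {
            "MetricName": "ActiveSessions",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
            "Value": 42.0,
            "Unit": "Count",
        }
    ],
)

# When the instances launch with the IAM role attached, the SDK picks up the
# role's temporary credentials automatically - no access keys are needed.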

CORRECT: "Create an IAM role with the PutMetricData permission and create a new Auto Scaling launch configuration to launch instances using that role" is the correct answer

INCORRECT: "Modify the CloudWatch metric policies to allow the PutMetricData permission to instances from the Auto Scaling group" is incorrect as this is not possible. You should instead grant the permissions through a permissions policy and attach that to a role that the EC2 instances can assume.

INCORRECT: "Create an IAM user with the PutMetricData permission and modify the Auto Scaling launch configuration to inject the user credentials into the instance user data" is incorrect. You cannot “inject user credentials” using a launch configuration. Instead, you can attach an IAM role which allows the instance to assume the role and take on the privileges allowed through any permissions policies that are associated with that role.

INCORRECT: "Create an IAM role with the PutMetricData permission and modify the Amazon EC2 instances to use that role" is incorrect as you should create a new launch configuration for the Auto Scaling group rather than updating the instances manually.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-iam/

https://digitalcloud.training/amazon-ec2/

Question 37:
Skipped

An application has been instrumented to use the AWS X-Ray SDK to collect data about the requests the application serves. The Developer has set the user field on segments to a string that identifies the user who sent the request.

How can the Developer search for segments associated with specific users?

Explanation

A segment document conveys information about a segment to X-Ray. A segment document can be up to 64 kB and contain a whole segment with subsegments, a fragment of a segment that indicates that a request is in progress, or a single subsegment that is sent separately. You can send segment documents directly to X-Ray by using the PutTraceSegments API.

A subset of segment fields are indexed by X-Ray for use with filter expressions. For example, if you set the user field on a segment to a unique identifier, you can search for segments associated with specific users in the X-Ray console or by using the GetTraceSummaries API.
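
A rough sketch of such a search (the user value and time window are illustrative):

import datetime
import boto3

xray = boto3.client("xray")
now = datetime.datetime.utcnow()

response = xray.get_trace_summaries(
    StartTime=now - datetime.timedelta(minutes=15),
    EndTime=now,
    FilterExpression='user = "alice"',   # matches segments whose indexed user field is "alice"
)
for summary in response["TraceSummaries"]:
    print(summary["Id"])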

CORRECT: "By using the GetTraceSummaries API with a filter expression" is the correct answer.

INCORRECT: "By using the GetTraceGraph API with a filter expression" is incorrect as this API action retrieves a service graph for one or more specific trace IDs.

INCORRECT: "Use a filter expression to search for the user field in the segment metadata" is incorrect as the user field is not part of the segment metadata and metadata is not is not indexed for search.

INCORRECT: "Use a filter expression to search for the user field in the segment annotations" is incorrect as the user field is not part of the segment annotations.

References:

https://docs.aws.amazon.com/xray/latest/devguide/xray-api-segmentdocuments.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/


Question 38:
Skipped

A website is running on a single Amazon EC2 instance. A Developer wants to publish the website on the Internet and is creating an A record on Amazon Route 53 for the website’s public DNS name.

What type of IP address MUST be assigned to the EC2 instance and used in the A record to ensure ongoing connectivity?

Explanation

In Amazon Route 53 when you create an A record you must supply an IP address for the resource to connect to. For a public hosted zone this must be a public IP address.

There are three types of IP address that can be assigned to an Amazon EC2 instance:

• Public – a public address that is assigned automatically to instances in public subnets and changes each time the instance is stopped and started.

• Private – private address assigned automatically to all instances.

• Elastic IP – public address that is static.

To ensure ongoing connectivity the Developer needs to use an Elastic IP address for the EC2 instance and DNS A record as this is the only type of static, public IP address you can assign to an Amazon EC2 instance.

CORRECT: "Elastic IP address" is the correct answer.

INCORRECT: "Public IP address" is incorrect as though this is a public IP address, it is not static and will change every time the EC2 instance restarts. Therefore, connectivity would be lost until you update the Route 53 A record.

INCORRECT: "Dynamic IP address" is incorrect as a dynamic IP address is an IP address that will change over time. For this scenario a static, public address is required.

INCORRECT: "Private IP address" is incorrect as a public IP address is required for the public DNS A record.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2/

https://digitalcloud.training/amazon-route-53/

Question 39:
Skipped

An application runs on a fleet of Amazon EC2 instances and stores data in a Microsoft SQL Server database hosted on Amazon RDS. The developer wants to avoid storing database connection credentials in the application code. The developer would also like a solution that automatically rotates the credentials.

What is the MOST secure way to store and access the database credentials?

Explanation

AWS Secrets Manager can be used for secure storage of secrets such as database connection credentials. Automatic rotation is supported for several RDS database types including Microsoft SQL Server. This is the most secure solution for storing and retrieving the credentials.
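
At runtime the application retrieves the credentials with a call such as the following (the secret name is a placeholder):

import json
import boto3

secrets = boto3.client("secretsmanager")
secret = secrets.get_secret_value(SecretId="prod/sqlserver/app")
credentials = json.loads(secret["SecretString"])   # e.g. {"username": "...", "password": "..."}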

CORRECT: "Use AWS Secrets Manager to store the credentials. Retrieve the credentials from Secrets Manager as needed" is the correct answer (as explained above.)

INCORRECT: "Use AWS Systems Manager Parameter store to store the credentials. Enable automatic rotation of the credentials" is incorrect.

With SSM Parameter Store you cannot enable automatic rotation. You can rotate the credentials but you would need to configure your own Lambda function.

INCORRECT: "Create an IAM role that has permissions to access the database. Attach the role to the EC2 instance" is incorrect.

RDS for SQL Server does support Windows Authentication using a managed Microsoft AD, with IAM roles granting permissions to the directory service, but this is not what this option describes.

INCORRECT: "Store the credentials in an encrypted source code repository. Retrieve the credentials from AWS CodeCommit as needed" is incorrect.

This is not a solution that is suitable for retrieving database connection credentials and it does not support automatic rotation.

References:

https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-secrets-manager/

Question 40:
Skipped

A Development team has deployed several applications running on an Auto Scaling fleet of Amazon EC2 instances. The Operations team have asked for a display that shows a key performance metric for each application on a single screen for monitoring purposes.

What steps should a Developer take to deliver this capability using Amazon CloudWatch?

Explanation

A namespace is a container for CloudWatch metrics. Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics.

Therefore, the Developer should create a custom namespace with a unique metric name for each application. This namespace will then allow the metrics for each individual application to be shown in a single view through CloudWatch.

CORRECT: "Create a custom namespace with a unique metric name for each application" is the correct answer.

INCORRECT: "Create a custom dimension with a unique metric name for each application" is incorrect as a dimension further clarifies what a metric is and what data it stores.

INCORRECT: "Create a custom event with a unique metric name for each application" is incorrect as an event is not used to organize metrics for display.

INCORRECT: "Create a custom alarm with a unique metric name for each application" is incorrect as alarms are used to trigger actions when a threshold is reached, this is not relevant to organizing metrics for display.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 41:
Skipped

A company needs to provide additional security for their APIs deployed on Amazon API Gateway. They would like to be able to authenticate their customers with a token. What is the SAFEST way to do this?

Explanation

A Lambda authorizer (formerly known as a custom authorizer) is an API Gateway feature that uses a Lambda function to control access to your API.

A Lambda authorizer is useful if you want to implement a custom authorization scheme that uses a bearer token authentication strategy such as OAuth or SAML, or that uses request parameters to determine the caller's identity.

When a client makes a request to one of your API's methods, API Gateway calls your Lambda authorizer, which takes the caller's identity as input and returns an IAM policy as output.

There are two types of Lambda authorizers:

• A token-based Lambda authorizer (also called a TOKEN authorizer) receives the caller's identity in a bearer token, such as a JSON Web Token (JWT) or an OAuth token.

• A request parameter-based Lambda authorizer (also called a REQUEST authorizer) receives the caller's identity in a combination of headers, query string parameters, stageVariables, and $context variables.

For this scenario, a Lambda authorizer is the most secure method available. It can also be used with usage plans and AWS recommend that you don’t rely only on API keys, so a Lambda authorizer is a better solution.

CORRECT: "Create an API Gateway Lambda authorizer" is the correct answer.

INCORRECT: "Setup usage plans and distribute API keys to the customers" is incorrect as this is not the most secure (safest) option. AWS recommend that you don't rely on API keys as your only means of authentication and authorization for your APIs.

INCORRECT: "Create an Amazon Cognito identity pool" is incorrect. You can create an authorizer in API Gateway that uses Cognito user pools, but not identity pools.

INCORRECT: "Use AWS Single Sign-on to authenticate the customers" is incorrect. This is used to centrally access multiple AWS accounts and business applications from one place.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html

https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 42:
Skipped

To include objects defined by the AWS Serverless Application Model (SAM) in an AWS CloudFormation template, in addition to Resources, what section MUST be included in the document root?

Explanation

The primary differences between AWS SAM templates and AWS CloudFormation templates are the following:

Transform declaration. The declaration Transform: AWS::Serverless-2016-10-31 is required for AWS SAM templates. This declaration identifies an AWS CloudFormation template as an AWS SAM template.

Globals section. The Globals section is unique to AWS SAM. It defines properties that are common to all your serverless functions and APIs. All the AWS::Serverless::Function, AWS::Serverless::Api, and AWS::Serverless::SimpleTable resources inherit the properties that are defined in the Globals section.

Resources section. In AWS SAM templates the Resources section can contain a combination of AWS CloudFormation resources and AWS SAM resources.

Of these three sections, only the Transform section and Resources sections are required; the Globals section is optional.

CORRECT: "Transform" is the correct answer.

INCORRECT: "Globals" is incorrect as this is not a required section.

INCORRECT: "Conditions" is incorrect as this is an optional section.

INCORRECT: "Properties" is incorrect as this is not a section in a template, it is used within a resource.

References:

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-sam/

Question 43:
Skipped

A Developer has created an Amazon Cognito user pool and configured a domain for it. The Developer wants to add sign-up and sign-in pages to an app with a company logo.

What should the Developer do to meet these requirements?

Explanation

When you create a user pool in Amazon Cognito and then configure a domain for it, Amazon Cognito automatically provisions a hosted web UI to let you add sign-up and sign-in pages to your app. You can add a custom logo or customize the CSS for the hosted web UI.

CORRECT: "Customize the Amazon Cognito hosted web UI and add the company logo" is the correct answer.

INCORRECT: "Create a REST API using Amazon API Gateway and add a Cognito authorizer. Upload the company logo to a stage in the API" is incorrect. There is no need to add a REST API to this solution.

INCORRECT: "Upload the company logo to an Amazon S3 bucket. Specify the S3 object path in the app client settings in Amazon Cognito" is incorrect. This is not required as the hosted web UI can be used.

INCORRECT: "Create a custom login page that includes the company logo and upload it to Amazon Cognito. Specify the login page in the app client settings" is incorrect. This is not required as the hosted web UI can be used.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/cognito-hosted-web-ui/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cognito/

Question 44:
Skipped

An application will use AWS Lambda and an Amazon RDS database. The Developer needs to secure the database connection string and enable automatic rotation every 30 days. What is the SIMPLEST way to achieve this requirement?

Explanation

AWS Secrets Manager encrypts secrets at rest using encryption keys that you own and store in AWS Key Management Service (KMS). When you retrieve a secret, Secrets Manager decrypts the secret and transmits it securely over TLS to your local environment.

With AWS Secrets Manager, you can rotate secrets on a schedule or on demand by using the Secrets Manager console, AWS SDK, or AWS CLI.

For example, to rotate a database password, you provide the database type, rotation frequency, and master database credentials when storing the password in Secrets Manager. Secrets Manager natively supports rotating credentials for databases hosted on Amazon RDS and Amazon DocumentDB and clusters hosted on Amazon Redshift.
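
Rotation every 30 days can be configured when the secret is stored, or afterwards with a call along these lines (the secret ID and rotation Lambda ARN are placeholders; for supported RDS engines Secrets Manager can provision the rotation function for you):

import boto3

secrets = boto3.client("secretsmanager")
secrets.rotate_secret(
    SecretId="prod/rds/connection-string",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)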

CORRECT: "Store a secret in AWS Secrets Manager and enable automatic rotation every 30 days" is the correct answer.

INCORRECT: "Store a SecureString in Systems Manager Parameter Store and enable automatic rotation every 30 days" is incorrect as SSM Parameter Store does not support automatic key rotation.

INCORRECT: "Store the connection string as an encrypted environment variable in Lambda and create a separate function that rotates the connection string every 30 days" is incorrect as this is not the simplest solution. In this scenario using AWS Secrets Manager would be easier to implement as it provides native features for rotating the secret.

INCORRECT: "Store the connection string in an encrypted Amazon S3 bucket and use a scheduled CloudWatch Event to update the connection string every 30 days" is incorrect. There is no native capability of CloudWatch to update connection strings so you would need some other service such as a Lambda function to execute and rotate the connection string which is missing from this answer.

References:

https://aws.amazon.com/secrets-manager/features/

Question 45:
Skipped

A developer plans to deploy an application on Amazon ECS that uses the AWS SDK to make API calls to Amazon DynamoDB. In the development environment the application was configured with access keys. The application is now ready for deployment to a production cluster.

How should the developer configure the application to securely authenticate to AWS services?

Explanation

Your Amazon ECS tasks can have an IAM role associated with them. The permissions granted in the IAM role are assumed by the containers running in the task. The following are the benefits of using IAM roles with your tasks:

Credential Isolation: A container can only retrieve credentials for the IAM role that is defined in the task definition to which it belongs; a container never has access to credentials that are intended for another container that belongs to another task.

Authorization: Unauthorized containers cannot access IAM role credentials defined for other tasks.

Auditability: Access and event logging is available through CloudTrail to ensure retrospective auditing. Task credentials have a context of taskArn that is attached to the session, so CloudTrail logs show which task is using which role.
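
In a task definition the role is attached via the taskRoleArn field. A rough boto3 sketch (the role ARN, image, and names are placeholders):

import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="my-app",
    taskRoleArn="arn:aws:iam::123456789012:role/MyAppDynamoDBRole",  # assumed by the containers
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
            "memory": 512,
        }
    ],
)

# The AWS SDK inside the container picks up the task role's temporary
# credentials automatically - no access keys are required.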

CORRECT: "Configure an ECS task IAM role for the application to use" is the correct answer (as explained above.)

INCORRECT: "Add the necessary AWS service permissions to an ECS instance profile" is incorrect.

The privileges assigned to instance profiles on the Amazon ECS instances are available to all tasks running on the instance. This is not secure and AWS recommend that you limit the permissions you assign to the instance profile.

INCORRECT: "Configure the credentials file with a new access key/secret access key" is incorrect.

Access keys are not a secure way of providing authentication. It is better to use roles that obtain temporary security permissions using the AWS STS service.

INCORRECT: "Add environment variables pointing to new access key credentials" is incorrect.

As above, access keys should not be used, IAM roles should be used instead.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-iam/

Question 46:
Skipped

An application writes items to an Amazon DynamoDB table. As the application scales to thousands of instances, calls to the DynamoDB API generate occasional ThrottlingException errors. The application is coded in a language incompatible with the AWS SDK.

How should the error be handled?

Explanation

Exponential backoff can improve an application's reliability by using progressively longer waits between retries. When using the AWS SDK, this logic is built-in. However, in this case the application is incompatible with the AWS SDK so it is necessary to manually implement exponential backoff.
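
A rough sketch of the pattern, shown here in Python for illustration; the put_item_over_http() helper and ThrottlingError class are placeholders for however the application calls the DynamoDB HTTP API and detects ThrottlingException responses:

import random
import time

class ThrottlingError(Exception):
    """Placeholder for however the application detects a ThrottlingException response."""

def put_item_over_http(item):
    # Placeholder for the application's own low-level call to the DynamoDB API.
    raise ThrottlingError()

def put_item_with_backoff(item, max_retries=5, base_delay=0.1):
    for attempt in range(max_retries):
        try:
            return put_item_over_http(item)
        except ThrottlingError:
            # Wait progressively longer between retries, with jitter so that
            # thousands of instances do not retry in lockstep.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
    raise RuntimeError("Request throttled after %d retries" % max_retries)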

CORRECT: "Add exponential backoff to the application logic" is the correct answer.

INCORRECT: "Use Amazon SQS as an API message bus" is incorrect as SQS requires instances or functions to pick up and process the messages and put them in the DynamoDB table. This is unnecessary cost and complexity and will not improve performance.

INCORRECT: "Pass API calls through Amazon API Gateway" is incorrect as this is not a suitable method of throttling the application. Exponential backoff logic in the application is a better solution.

INCORRECT: "Send the items to DynamoDB through Amazon Kinesis Data Firehose" is incorrect as DynamoDB is not a destination for Kinesis Data Firehose.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/dynamodb-table-throttled/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 47:
Skipped

A company is creating a serverless application that uses AWS Lambda functions. The developer has written the code to initialize the AWS SDK outside of the Lambda handler function.

What is the PRIMARY benefit of this action?

Explanation

You should initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources. This saves cost by reducing function run time.

The primary benefit of this technique is to take advantage of execution environment reuse to improve the performance of your function.
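
A minimal illustration of the pattern (the table name is a placeholder):

import boto3

# Initialized once per execution environment, outside the handler, so warm
# invocations reuse the same client and connection.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")

def lambda_handler(event, context):
    # Reuses the client created above instead of re-initializing it per request.
    table.put_item(Item={"orderId": event["orderId"]})
    return {"statusCode": 200}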

CORRECT: "Takes advantage of execution environment reuse" is the correct answer (as explained above.)

INCORRECT: "Creates a new SDK instance for each invocation" is incorrect.

This is the opposite of what we are trying to achieve here.

INCORRECT: "It minimizes the deployment package size" is incorrect.

This technique does not affect the deployment package size.

INCORRECT: "Improves readability and reduces complexity" is incorrect.

It may improve readability but that is debatable. This is not the primary reason you would use this technique.

References:

https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 48:
Skipped

The source code for an application is stored in a file named index.js that is in a folder along with a template file that includes the Transform: 'AWS::Serverless-2016-10-31' declaration.

What does a Developer need to do to prepare the template so it can be deployed using an AWS CLI command?

Explanation

The template shown is an AWS SAM template for deploying a serverless application. This can be identified by the template header: Transform: 'AWS::Serverless-2016-10-31'

The Developer will need to package and then deploy the template. To do this the source code must be available in the same directory or referenced using the “CodeUri” property. Then, the Developer can use the “aws cloudformation package” or “sam package” commands to prepare the local artifacts (local paths) that your AWS CloudFormation template references.

The command uploads local artifacts, such as source code for an AWS Lambda function or a Swagger file for an AWS API Gateway REST API, to an S3 bucket. The command returns a copy of your template, replacing references to local artifacts with the S3 location where the command uploaded the artifacts.

Once that is complete the template can be deployed using the “aws cloudformation deploy” or “sam deploy” commands. Therefore, the next step in this scenario is for the Developer to run the “aws cloudformation” package command to upload the source code to an Amazon S3 bucket and produce a modified CloudFormation template. An example of this command is provided below:

aws cloudformation package --template-file /path_to_template/template.json --s3-bucket bucket-name --output-template-file packaged-template.json

CORRECT: "Run the aws cloudformation package command to upload the source code to an Amazon S3 bucket and produce a modified CloudFormation template" is the correct answer.

INCORRECT: "Run the aws cloudformation compile command to base64 encode and embed the source file into a modified CloudFormation template" is incorrect as the Developer should run the “aws cloudformation package” command.

INCORRECT: "Run the aws lambda zip command to package the source file together with the CloudFormation template and deploy the resulting zip archive" is incorrect as the Developer should run the “aws cloudformation package” command which will automatically copy the relevant files to Amazon S3.

INCORRECT: "Run the aws serverless create-package command to embed the source file directly into the existing CloudFormation template" is incorrect as the Developer has the choice to run either “aws cloudformation package” or “sam package”, but not “aws serverless create-package”.

References:

https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-sam/

Question 49:
Skipped

A developer is making updates to the code for a Lambda function. The developer is keen to test the code updates by directing a small amount of traffic to a new version. How can this BEST be achieved?

Explanation

You can create one or more aliases for your AWS Lambda function. An AWS Lambda alias is like a pointer to a specific Lambda function version. Users can access the function version using the alias ARN.

You can point an alias at multiple versions of your function code and then assign a weighting to direct a portion of traffic to each version. This enables a blue/green style of deployment and means it’s easy to roll back to the older version by simply updating the weighting if issues occur.
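
For example, an alias could send roughly 5% of traffic to a new version with a call like this (the function name, alias name, and version numbers are illustrative):

import boto3

lambda_client = boto3.client("lambda")
lambda_client.update_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion="1",                                      # version that receives most traffic
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.05}},  # ~5% of invocations go to version 2
)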

CORRECT: "Create an alias that points to both the new and previous versions of the function code and assign a weighting for sending a portion of traffic to the new version" is the correct answer.

INCORRECT: "Create two versions of the function code. Configure the application to direct a subset of requests to the new version" is incorrect as this would entail using application logic to direct traffic to the different versions. This is not the best way to solve this problem as Lambda aliases are a better solution.

INCORRECT: "Create an API using API Gateway and use stage variables to point to different versions of the Lambda function" is incorrect. Stage variables are name-value pairs that you can define as configuration attributes associated with a deployment stage of a REST API. They act like environment variables and can be used in your API setup and mapping templates. You can use stage variables to point to different Lambda ARNs and associate these with different stages of your API, however this is not a good solution for this scenario.

INCORRECT: "Create a new function using the new code and update the application to split requests between the new functions" is incorrect as this would entail using application logic to direct traffic to the different versions. This is not the best way to solve this problem as Lambda aliases are a better solution.

References:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html

https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 50:
Skipped

A Developer has added a Global Secondary Index (GSI) to an existing Amazon DynamoDB table. The GSI is used mainly for read operations whereas the primary table is extremely write-intensive. Recently, the Developer has noticed throttling occurring under heavy write activity on the primary table. However, the write capacity units on the primary table are not fully utilized.

What is the best explanation for why the writes are being throttled on the primary table?

Explanation

Some applications might need to perform many kinds of queries, using a variety of different attributes as query criteria. To support these requirements, you can create one or more global secondary indexes and issue Query requests against these indexes in Amazon DynamoDB.

When items from the primary table are written to the GSI they consume write capacity units on the GSI. It is essential to ensure the GSI has sufficient WCUs (typically, at least as many as the primary table). If writes are throttled on the GSI, the primary table will be throttled as well, even if there are enough WCUs on the primary table. LSIs do not cause any special throttling considerations.

In this scenario, it is likely that the Developer assumed that the GSI would need fewer WCUs because it is mainly used for reads, and neglected to factor in the WCUs required for writing data into the GSI. Therefore, the most likely explanation is that the write capacity units on the GSI are under provisioned.

CORRECT: "The write capacity units on the GSI are under provisioned" is the correct answer.

INCORRECT: "There are insufficient read capacity units on the primary table" is incorrect as the table is being throttled due to writes, not reads.

INCORRECT: "The Developer should have added an LSI instead of a GSI" is incorrect as a GSI has specific advantages and there was likely good reason for adding a GSI. Also, you cannot add an LSI to an existing table.

INCORRECT: "There are insufficient write capacity units on the primary table" is incorrect as the question states that the WCUs are underutilized.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html

Question 51:
Skipped

A Development team are creating a new REST API that uses Amazon API Gateway and AWS Lambda. To support testing there need to be different versions of the service. What is the BEST way to provide multiple versions of the REST API?

Explanation

A stage is a named reference to a deployment, which is a snapshot of the API. You use a stage to manage and optimize a particular deployment. For example, you can set up stage settings to enable caching, customize request throttling, configure logging, define stage variables, or attach a canary release for testing. APIs are deployed to stages.

Stage variables are name-value pairs that you can define as configuration attributes associated with a deployment stage of a REST API. They act like environment variables and can be used in your API setup and mapping templates.

With deployment stages in API Gateway, you can manage multiple release stages for each API, such as alpha, beta, and production. Using stage variables you can configure an API deployment stage to interact with different backend endpoints. For example, your API can pass a GET request as an HTTP proxy to the backend web host (for example, http://example.com).

In this case, the backend web host is configured in a stage variable so that when developers call your production endpoint, API Gateway calls example.com. When you call your beta endpoint, API Gateway uses the value configured in the stage variable for the beta stage, and calls a different web host (for example, beta.example.com). Similarly, stage variables can be used to specify a different AWS Lambda function name for each stage in your API.

Therefore, for this scenario the Developers can deploy the API versions as unique stages with unique endpoints and use stage variables to provide further context such as connections to different backend services.

CORRECT: "Deploy the API versions as unique stages with unique endpoints and use stage variables to provide further context" is the correct answer.

INCORRECT: "Create an API Gateway resource policy to isolate versions and provide context to the Lambda functions" is incorrect. API Gateway resource policies are JSON policy documents that you attach to an API to control whether a specified principal (typically, an IAM user or role) can invoke the API.

INCORRECT: "Create an AWS Lambda authorizer to route API clients to the correct API version" is incorrect. A Lambda authorizer (formerly known as a custom authorizer) is an API Gateway feature that uses a Lambda function to control access to your API. This is not used for routing API clients to different versions.

INCORRECT: "Deploy an HTTP Proxy integration and configure the proxy with API versions" is incorrect. The HTTP proxy integration allows a client to access the backend HTTP endpoints with a streamlined integration setup on a single API method. This is not used for providing multiple versions of the API, use stages and stage variables instead.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-stages.html

https://docs.aws.amazon.com/apigateway/latest/developerguide/stage-variables.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 52:
Skipped

An application uses both Amazon EC2 instances and on-premises servers. The on-premises servers are a critical component of the application, and a developer wants to collect metrics and logs from these servers. The developer would like to use Amazon CloudWatch.

How can the developer accomplish this?

Explanation

You can download the CloudWatch agent package using either Systems Manager Run Command or an Amazon S3 download link. You then install the agent and specify the IAM credentials to use. The IAM credentials are an access key and secret access key of an IAM user that has permissions to Amazon CloudWatch.

Once this has been completed the on-premises servers will automatically send metrics and log files to Amazon CloudWatch and can be centrally monitored along with AWS services.

CORRECT: "Install the CloudWatch agent on the on-premises servers and specify IAM credentials with permissions to CloudWatch" is the correct answer (as explained above.)

INCORRECT: "Install the CloudWatch agent on the on-premises servers and specify an IAM role with permissions to CloudWatch" is incorrect.

You cannot specify a role with an on-premises server so you must use access keys instead.

INCORRECT: "Write a batch script that uses system utilities to collect performance metrics and application logs. Upload the metrics and logs to CloudWatch" is incorrect.

The CloudWatch agent would be a better solution and you must have permissions to send this information to CloudWatch.

INCORRECT: "Install an AWS SDK on the on-premises servers that automatically sends logs to CloudWatch" is incorrect.

The CloudWatch agent would be a better solution and you must have permissions to send this information to CloudWatch.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-premise.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 53:
Skipped

A company is creating an application that will require users to access AWS services and allow them to reset their own passwords. Which of the following would allow the company to manage users and authorization while allowing users to reset their own passwords?

Explanation

There are two key requirements in this scenario. Firstly the company wants to manage user accounts using a system that allows users to reset their own passwords. The company also wants to authorize users to access AWS services.

The first requirement is provided by an Amazon Cognito User Pool. With a Cognito user pool you can add sign-up and sign-in to mobile and web apps and it also offers a user directory so user accounts can be created directly within the user pool. Users also have the ability to reset their passwords.

To access AWS services you need a Cognito Identity Pool. An identity pool can be used with a user pool and enables a user to obtain temporary limited-privilege credentials to access AWS services.

Therefore, the best answer is to use Amazon Cognito user pools and identity pools.

CORRECT: "Amazon Cognito user pools and identity pools" is the correct answer.

INCORRECT: "Amazon Cognito identity pools and AWS STS" is incorrect as there is no user directory in this solution. A Cognito user pool is required.

INCORRECT: "Amazon Cognito identity pools and AWS IAM" is incorrect as a Cognito user pool should be used as the directory source for creating and managing users. IAM is used for accounts that are used to administer AWS services, not for application user access.

INCORRECT: "Amazon Cognito user pools and AWS KMS" is incorrect as KMS is used for encryption, not for authentication to AWS services.

References:

https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html

https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cognito/

Question 54:
Skipped

A financial application is hosted on an Auto Scaling group of Amazon EC2 instances with an Elastic Load Balancer. A Developer needs to capture information about the IP traffic going to and from network interfaces in the VPC.

How can the Developer capture this information?

Explanation

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination.

Flow logs can help you with a number of tasks, such as:

• Diagnosing overly restrictive security group rules

• Monitoring the traffic that is reaching your instance

• Determining the direction of the traffic to and from the network interfaces

You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in that subnet or VPC is monitored.

Therefore, the Developer should create a flow log in the VPC and publish data to Amazon S3. The Developer could also choose CloudWatch Logs as a destination for publishing the data, but this is not presented as an option.
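
As an illustration, a flow log for a VPC publishing to Amazon S3 could be created with the AWS CLI along these lines (the VPC ID and bucket name are placeholders):

aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0abcd1234example \
    --traffic-type ALL \
    --log-destination-type s3 \
    --log-destination arn:aws:s3:::my-flow-log-bucket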

CORRECT: "Create a flow log in the VPC and publish data to Amazon S3" is the correct answer.

INCORRECT: "Capture the information directly into Amazon CloudWatch Logs" is incorrect as you cannot capture this information directly into CloudWatch Logs. You would need to capture with a flow log and then publish to CloudWatch Logs.

INCORRECT: "Capture the information using a Network ACL" is incorrect as you cannot capture data using a Network ACL as it is a subnet-level firewall.

INCORRECT: "Create a flow log in the VPC and publish data to Amazon CloudTrail" is incorrect as you cannot publish data from a flow log to CloudTrail. Amazon CloudTrail captures information about API calls.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-vpc/

Question 55:
Skipped

A company is running a web application on Amazon EC2 behind an Elastic Load Balancer (ELB). The company is concerned about the security of the web application and would like to secure the application with SSL certificates. The solution should not have any performance impact on the EC2 instances.

What steps should be taken to secure the web application? (Select TWO.)

Explanation

The requirements clearly state that we cannot impact the performance of the EC2 instances at all. Therefore, we will not be able to add certificates to the EC2 instances as that would place a burden on the CPU when encrypting and decrypting data.

We are therefore left with configuring SSL on the Elastic Load Balancer itself. For this we need to add an SSL certificate to the ELB and then configure the ELB for SSL termination.

You can create an HTTPS listener, which uses encrypted connections (also known as SSL offload). This feature enables traffic encryption between your load balancer and the clients that initiate SSL or TLS sessions.

To use an HTTPS listener, you must deploy at least one SSL/TLS server certificate on your load balancer. The load balancer uses a server certificate to terminate the front-end connection and then decrypt requests from clients before sending them to the targets.

This is the most secure solution we can create without adding any performance impact to the EC2 instances.
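
For example, an HTTPS listener that terminates SSL on an Application Load Balancer could be created along these lines (the ARNs are placeholders, and the SSL policy is one of the predefined security policies):

aws elbv2 create-listener \
    --load-balancer-arn <LOAD_BALANCER_ARN> \
    --protocol HTTPS \
    --port 443 \
    --certificates CertificateArn=<ACM_CERTIFICATE_ARN> \
    --ssl-policy ELBSecurityPolicy-2016-08 \
    --default-actions Type=forward,TargetGroupArn=<TARGET_GROUP_ARN>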

CORRECT: "Add an SSL certificate to the Elastic Load Balancer" is a correct answer.

CORRECT: "Configure the Elastic Load Balancer for SSL termination" is also a correct answer.

INCORRECT: "Configure the Elastic Load Balancer with SSL passthrough" is incorrect as this would be used to forward encrypted packets directly to the EC2 instance for termination but we do not want to add SSL certificates to the EC2 instances due to the extra processing required.

INCORRECT: "Install SSL certificates on the EC2 instances" is incorrect as we do not want to add SSL certificates to the EC2 instances due to the extra processing required.

INCORRECT: "Configure Server-Side Encryption with KMS managed keys" is incorrect as this applies to Amazon S3, not ELB.

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/

Question 56:
Skipped

A development team manages a high-traffic e-Commerce site with dynamic pricing that is updated in real time. There have been incidents where multiple updates occur simultaneously and cause an original editor’s updates to be overwritten. How can the developers ensure that overwriting does not occur?

Explanation

By default, the DynamoDB write operations (PutItem, UpdateItem, DeleteItem) are unconditional: Each operation overwrites an existing item that has the specified primary key.

DynamoDB optionally supports conditional writes for these operations. A conditional write succeeds only if the item attributes meet one or more expected conditions. Otherwise, it returns an error. Conditional writes are helpful in many situations. For example, you might want a PutItem operation to succeed only if there is not already an item with the same primary key. Or you could prevent an UpdateItem operation from modifying an item if one of its attributes has a certain value.

Conditional writes can be idempotent if the conditional check is on the same attribute that is being updated. This means that DynamoDB performs a given write request only if certain attribute values in the item match what you expect them to be at the time of the request.

For example, suppose that you issue an UpdateItem request to increase the Price of an item by 3, but only if the Price is currently 20. After you send the request, but before you get the results back, a network error occurs, and you don't know whether the request was successful. Because this conditional write is idempotent, you can retry the same UpdateItem request, and DynamoDB updates the item only if the Price is currently 20.

The condition-expression parameter can be used to achieve a conditional write with idempotence.
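
As a sketch using the AWS CLI (the table name, key, and values are illustrative, following the pricing example above):

aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "1"}}' \
    --update-expression "SET Price = Price + :incr" \
    --condition-expression "Price = :expected" \
    --expression-attribute-values '{":incr": {"N": "3"}, ":expected": {"N": "20"}}'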

For this scenario, conditional writes mean that each writer updates the price only if it still matches the value they expect. If the price is updated by another writer first, the conditional check fails because the item price no longer matches the expected value, and the write is rejected rather than overwriting the newer update.

CORRECT: "Use conditional writes" is the correct answer.

INCORRECT: "Use concurrent writes" is incorrect as writing concurrently to the same items is exactly what we want to avoid.

INCORRECT: "Use atomic counters" is incorrect. An atomic counter is a numeric attribute that is incremented, unconditionally, without interfering with other write requests. This is used for cases such as tracking visitors to a website. This does not prevent recent updated from being overwritten.

INCORRECT: "Use batch operations" is incorrect. Batch operations can reduce the number of network round trips from your application to DynamoDB. However, this does not solve the problem of preventing recent updates from being overwritten.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.ConditionalUpdate

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 57:
Skipped

An application is instrumented to generate traces using AWS X-Ray and generates a large amount of trace data. A Developer would like to use filter expressions to filter the results to specific key-value pairs added to custom subsegments.

How should the Developer add the key-value pairs to the custom subsegments?

Explanation

You can record additional information about requests, the environment, or your application with annotations and metadata. You can add annotations and metadata to the segments that the X-Ray SDK creates, or to custom subsegments that you create.

Annotations are key-value pairs with string, number, or Boolean values. Annotations are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API.

Metadata are key-value pairs that can have values of any type, including objects and lists, but are not indexed for use with filter expressions. Use metadata to record additional data that you want stored in the trace but don't need to use with search.

Annotations can be used with filter expressions, so this is the best solution for this requirement. The Developer can add annotations to the custom subsegments and will then be able to use filter expressions to filter the results in AWS X-Ray.

CORRECT: "Add annotations to the custom subsegments" is the correct answer.

INCORRECT: "Add metadata to the custom subsegments" is incorrect as though you can add metadata to custom subsegments it is not indexed and cannot be used with filters.

INCORRECT: "Add the key-value pairs to the Trace ID" is incorrect as this is not something you can do.

INCORRECT: "Setup sampling for the custom subsegments " is incorrect as this is a mechanism used by X-Ray to send only statistically significant data samples to the API.

References:

https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-segment.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 58:
Skipped

A customer requires a schema-less, key/value database that can be used for storing customer orders. Which type of AWS database is BEST suited to this requirement?

Explanation

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It is a non-relational (schema-less), key-value type of database. This is the most suitable solution for this requirement.

CORRECT: "Amazon DynamoDB" is the correct answer.

INCORRECT: "Amazon RDS" is incorrect as this a relational database that has a schema.

INCORRECT: "Amazon ElastiCache" is incorrect as this is a key/value database but it is used to cache the contents of other databases (including DynamoDB and RDS) for better performance for reads.

INCORRECT: "Amazon S3" is incorrect as this is an object-based storage system not a database. It is a key/value store but DynamoDB is a better fit for a customer order database.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 59:
Skipped

Messages produced by an application must be pushed to multiple Amazon SQS queues. What is the BEST solution for this requirement?

Explanation

Amazon SNS works closely with Amazon Simple Queue Service (Amazon SQS). Both services provide different benefits for developers. Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.

When you subscribe an Amazon SQS queue to an Amazon SNS topic, you can publish a message to the topic and Amazon SNS sends an Amazon SQS message to the subscribed queue. The Amazon SQS message contains the subject and message that were published to the topic along with metadata about the message in a JSON document.
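
As a sketch with the AWS CLI (the ARNs are placeholders, and each queue also needs an access policy that allows the SNS topic to send messages to it):

# Subscribe each SQS queue to the SNS topic
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:orders-topic \
    --protocol sqs \
    --notification-endpoint arn:aws:sqs:us-east-1:123456789012:orders-queue-1

# Publishing once to the topic delivers the message to every subscribed queue
aws sns publish \
    --topic-arn arn:aws:sns:us-east-1:123456789012:orders-topic \
    --message "New order received"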

CORRECT: "Publish the messages to an Amazon SNS topic and subscribe each SQS queue to the topic" is the correct answer.

INCORRECT: "Publish the messages to an Amazon SQS queue and configure an AWS Lambda function to duplicate the message into multiple queues" is incorrect as this seems like an inefficient solution. By using SNS we can eliminate the initial queue and Lambda function.

INCORRECT: "Create an Amazon SWF workflow that receives the messages and pushes them to multiple SQS queues" is incorrect as this is not a workable solution. Amazon SWF is not suitable for pushing messages to SQS queues.

INCORRECT: "Create an AWS Step Functions state machine that uses multiple Lambda functions to process and push the messages into multiple SQS queues" is incorrect as this is an inefficient solution and there is no mention of how the functions will be invoked with the message data.

References:

https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 60:
Skipped

A batch job runs every 24 hours and writes around 1 million items into a DynamoDB table each day. The batch job completes quickly, and the items are processed within 2 hours and are no longer needed.

What’s the MOST efficient way to provide an empty table each day?

Explanation

With this scenario we have a table that has a large number of items quickly written to it on a recurring schedule. These items are no longer of use after they have been processed (within 2 hours) so from that point on until the next job the table is not being used. The items need to be deleted and we need to choose the most efficient (think cost as well as operations) way of doing this.

Any delete operation will consume RCUs to scan/query the table and WCUs to delete the items. It will be much cheaper and simpler to just delete the table and recreate it again ahead of the next batch job. This can easily be automated through the API.
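
For example, this could be scripted with the AWS CLI (the table name and key schema are illustrative):

aws dynamodb delete-table --table-name BatchItems
aws dynamodb wait table-not-exists --table-name BatchItems

aws dynamodb create-table \
    --table-name BatchItems \
    --attribute-definitions AttributeName=Id,AttributeType=S \
    --key-schema AttributeName=Id,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST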

CORRECT: "Delete the entire table and recreate it each day" is the correct answer.

INCORRECT: "Use the BatchUpdateItem API with expressions" is incorrect as this API does not exist.

INCORRECT: "Issue an AWS CLI aws dynamodb delete-item command with a wildcard" is incorrect as this operation deletes data from a table one item at a time, which is highly inefficient. You also must specify the item's primary key values; you cannot use a wildcard.

INCORRECT: "Use the BatchWriteItem API with a DeleteRequest" is incorrect as this is an inefficient way to solve this challenge.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html

https://docs.amazonaws.cn/en_us/amazondynamodb/latest/developerguide/SQLtoNoSQL.DeleteData.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 61:
Skipped

A legacy service has an XML-based SOAP interface. The Developer wants to expose the functionality of the service to external clients with the Amazon API Gateway. Which technique will accomplish this?

Explanation

Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services as well as data stored in the AWS Cloud.

In API Gateway, an API's method request can take a payload in a different format from the corresponding integration request payload, as required in the backend. Similarly, the backend may return an integration response payload different from the method response payload, as expected by the frontend.

API Gateway lets you use mapping templates to map the payload from a method request to the corresponding integration request and from an integration response to the corresponding method response.

If an existing legacy service returns XML-style data, you can use API Gateway to transform the output to JSON as part of your modernization effort. API Gateway can be configured to transform payloads between XML and JSON, allowing the move to be seamless and non-disruptive. The transformation itself is written in a mapping template using the Velocity Template Language (VTL), and payload models can be defined using JSON Schema.

Therefore, the technique the Developer should use is to create a RESTful API with the API Gateway and transform the incoming JSON into a valid XML message for the SOAP interface using mapping templates.

CORRECT: "Create a RESTful API with the API Gateway; transform the incoming JSON into a valid XML message for the SOAP interface using mapping templates" is the correct answer.

INCORRECT: "Create a RESTful API with the API Gateway; pass the incoming JSON to the SOAP interface through an Application Load Balancer" is incorrect as we don’t need an ALB to do this, we can use a mapping template within the API Gateway which will be more cost-efficient.

INCORRECT: "Create a RESTful API with the API Gateway; pass the incoming XML to the SOAP interface through an Application Load Balancer" is incorrect as the incoming data will be JSON, not XML as the Developer needs to publish a modern application interface. A mapping template should also be used in place of the ALB.

INCORRECT: "Create a RESTful API with the API Gateway; transform the incoming XML into a valid message for the SOAP interface using mapping templates" is incorrect as the incoming data will be JSON, not XML as the Developer needs to publish a modern application interface.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 62:
Skipped

An application is running on an Amazon EC2 Linux instance. The instance needs to make AWS API calls to several AWS services. What is the MOST secure way to provide access to the AWS services with MINIMAL management overhead?

Explanation

An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. Using an instance profile you can attach an IAM Role to an EC2 instance that the instance can then assume in order to gain access to AWS services.
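
As a sketch of the steps with the AWS CLI (the names and instance ID are placeholders; the console creates the instance profile for you automatically when you attach a role to an instance):

# Create an instance profile and add an existing IAM role to it
aws iam create-instance-profile --instance-profile-name MyAppProfile
aws iam add-role-to-instance-profile \
    --instance-profile-name MyAppProfile \
    --role-name MyAppRole

# Associate the instance profile with the running EC2 instance
aws ec2 associate-iam-instance-profile \
    --instance-id i-0abcd1234example \
    --iam-instance-profile Name=MyAppProfile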

CORRECT: "Use EC2 instance profiles" is the correct answer.

INCORRECT: "Use AWS KMS to store and retrieve credentials" is incorrect as KMS is used to manage encryption keys.

INCORRECT: "Store the credentials in AWS CloudHSM" is incorrect as CloudHSM is also used to manage encryption keys. It is similar to KMS but uses a dedicated hardware device that is not multi-tenant.

INCORRECT: "Store the credentials in the ~/.aws/credentials file" is incorrect as this is not the most secure option. The credentials file is associated with the AWS CLI and used for passing credentials in the form of an access key ID and secret access key when making programmatic requests from the command line.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2/

Question 63:
Skipped

A Developer has completed some code updates and needs to deploy the updates to an AWS Elastic Beanstalk environment. The environment includes twelve Amazon EC2 instances, and there can be no reduction in application performance or availability during the update.

Which deployment policy is the most cost-effective choice to suit these requirements?

Explanation

AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies and options that let you configure batch size and health check behavior during deployments.

Each deployment policy has advantages and disadvantages, and it’s important to select the best policy to use for each situation.

The “rolling with additional batch” policy adds an additional batch of instances, updates those instances, and then moves on to the next batch.

Rolling with additional batch:

• Like rolling, but launches new instances in an additional batch, ensuring that there is full availability.

• Application is running at capacity.

• Can set the batch size.

• Application is running both versions simultaneously.

• Small additional cost.

• Additional batch is removed at the end of the deployment.

• Longer deployment.

• Good for production environments.

For this scenario there can be no reduction in application performance and availability during the update. The question also asks for the most cost-effective choice.

Therefore, “rolling with additional batch” is the best choice as it ensures full availability of the application while minimizing cost, as the additional batch size can be kept small.
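
For reference, the deployment policy and batch size are controlled through option settings in the aws:elasticbeanstalk:command namespace; a sketch using the AWS CLI (the environment name and batch size are illustrative):

aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --option-settings \
        Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=RollingWithAdditionalBatch \
        Namespace=aws:elasticbeanstalk:command,OptionName=BatchSizeType,Value=Fixed \
        Namespace=aws:elasticbeanstalk:command,OptionName=BatchSize,Value=2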

CORRECT: "Rolling with additional batch" is the correct answer.

INCORRECT: "Rolling" is incorrect as this will result in a reduction in capacity as there is no additional batch of instances introduced to the environment. This is a better choice if speed is required and a reduction in capacity of a batch size is acceptable.

INCORRECT: "All at once" is incorrect as this will take the application down and cause a complete outage of the application during the update.

INCORRECT: "Immutable" is incorrect as this is the most expensive option as it doubles capacity with a whole new set of instances attached to a new ASG.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

Question 64:
Skipped

The following permissions policy is applied to an IAM user account:

Due to this policy, what Amazon SQS actions will the user be able to perform?

Explanation

The policy allows the user to use all Amazon SQS actions, but only with queues whose names are prefixed with the literal string “staging-queue”. This policy is useful to provide a queue creator the ability to use Amazon SQS actions. Any user who has permissions to create a queue must also have permissions to use other Amazon SQS actions in order to do anything with the created queues.
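
Based on this explanation, a plausible reconstruction of the policy looks like the following (the user name and policy name are hypothetical; the account ID is taken from the answer options):

aws iam put-user-policy \
    --user-name example-user \
    --policy-name staging-queue-access \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sqs:*",
            "Resource": "arn:aws:sqs:*:513246782345:staging-queue*"
        }]
    }'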

CORRECT: "The user will be able to use all Amazon SQS actions, but only for queues with names begin with the string “staging-queue“" is the correct answer.

INCORRECT: "The user will be able to create a queue named “staging-queue“" is incorrect as this policy provides the permissions to perform SQS actions on an existing queue.

INCORRECT: "The user will be able to apply a resource-based policy to the Amazon SQS queue named “staging-queue”" is incorrect as this is a single operation and the permissions policy allows all SQS actions.

INCORRECT: "The user will be granted cross-account access from account number “513246782345” to queue “staging-queue”" is incorrect as this is not a policy for granting cross-account access. The account number and queue relate to the same account.

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-overview-of-managing-access.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 65:
Skipped

AWS CodeBuild builds code for an application, creates a Docker image, pushes the image to Amazon Elastic Container Registry (ECR), and tags the image with a unique identifier.

If the Developers already have AWS CLI configured on their workstations, how can the Docker images be pulled to the workstations? 

Explanation

If you would like to run a Docker image that is available in Amazon ECR, you can pull it to your local environment with the docker pull command. You can do this from either your default registry or from a registry associated with another AWS account.

The Docker CLI does not support standard AWS authentication methods, so client authentication must be handled separately so that ECR knows who is requesting to push or pull an image. To do this, you can issue the aws ecr get-login-password AWS CLI command, use its output to authenticate with docker login, and then issue a docker pull command specifying the image name using registry/repository[:tag].
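
Concretely, the sequence looks something like this (the region, account ID, repository, and tag are placeholders):

# Retrieve an authentication token and use it to log in to the ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Pull the image using registry/repository:tag
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest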

CORRECT: "Run the output of the following: aws ecr get-login-password, and then run docker pull REPOSITORY URI : TAG" is the correct answer.

INCORRECT: "Run the following: docker pull REPOSITORY URI : TAG" is incorrect as the Developers first need to authenticate before they can pull the image.

INCORRECT: "Run the following: aws ecr get-login-password, and then run: docker pull REPOSITORY URI : TAG" is incorrect. The Developers need to not just run the login command but run the output of the login command which contains the authentication token required to log in.

INCORRECT: "Run the output of the following: aws ecr get-download-url-for-layer, and then run docker pull REPOSITORY URI : TAG" is incorrect as this command retrieves a pre-signed Amazon S3 download URL corresponding to an image layer.

References:

https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth

https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-pull-ecr-image.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/