Attempt 1
Question 1:
Skipped

A Developer is creating a serverless application that will process sensitive data. The AWS Lambda function must encrypt all data that is written to /tmp storage at rest.

How should the Developer encrypt this data?

Explanation

On a per-function basis, you can configure Lambda to use an encryption key that you create and manage in AWS Key Management Service. These are referred to as customer managed customer master keys (CMKs) or customer managed keys. If you don't configure a customer managed key, Lambda uses an AWS managed CMK named aws/lambda, which Lambda creates in your account.

The CMK can be used to generate a data encryption key that can be used for encrypting all data uploaded to Lambda or generated by Lambda.
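
As a rough illustration only (not part of the exam answer), a minimal Python sketch of this pattern might look like the following; the key alias, file name, and the use of the third-party cryptography library are assumptions.

import base64
import boto3
from cryptography.fernet import Fernet  # third-party library, assumed to be packaged with the function

kms = boto3.client('kms')

def lambda_handler(event, context):
    # Request a data key under the customer managed key (alias is hypothetical)
    resp = kms.generate_data_key(KeyId='alias/my-cmk', KeySpec='AES_256')
    fernet = Fernet(base64.urlsafe_b64encode(resp['Plaintext']))
    ciphertext = fernet.encrypt(b'sensitive data')
    # Only encrypted bytes are written to /tmp
    with open('/tmp/data.enc', 'wb') as f:
        f.write(ciphertext)
    return {'status': 'encrypted'}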

CORRECT: "Configure Lambda to use an AWS KMS customer managed customer master key (CMK). Use the CMK to generate a data key and encrypt all data prior to writing to /tmp storage" is the correct answer.

INCORRECT: "Attach the Lambda function to a VPC and encrypt Amazon EBS volumes at rest using the AWS managed CMK. Mount the EBS volume to /tmp" is incorrect. You cannot attach an EBS volume to a Lambda function.

INCORRECT: "Enable default encryption on an Amazon S3 bucket using an AWS KMS customer managed customer master key (CMK). Mount the S3 bucket to /tmp" is incorrect. You cannot mount an S3 bucket to a Lambda function.

INCORRECT: "Enable secure connections over HTTPS for the AWS Lambda API endpoints using Transport Layer Security (TLS)" is incorrect. The Lambda API endpoints are always encrypted using TLS and this is encryption in-transit not encryption at-rest.

References:

https://docs.aws.amazon.com/lambda/latest/dg/security-dataprotection.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 2:
Skipped

A Developer is creating an application that will process some data and generate an image file from it. The application will use an AWS Lambda function which will require 150 MB of temporary storage while executing. The temporary files will not be needed after the function execution is completed.

What is the best location for the Developer to store the files?

Explanation

The /tmp directory can be used for storing temporary files within the execution context. It can also be used to store static assets that are reused by subsequent invocations of the function. If the assets must be deleted before the function is invoked again, the function code should take care of deleting them.

There is a limit of 512 MB storage space in the /tmp directory, but this is more than adequate for this scenario.
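
For illustration, a minimal Lambda handler using /tmp in this way might look like the sketch below; the file name and placeholder image bytes are assumptions.

import os

def lambda_handler(event, context):
    tmp_path = '/tmp/output.img'                 # ephemeral storage, 512 MB limit
    with open(tmp_path, 'wb') as f:
        f.write(b'...generated image bytes...')  # placeholder for the generated image data
    # ... process or upload the file here ...
    os.remove(tmp_path)                          # delete the file once execution is complete
    return {'status': 'done'}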

CORRECT: "Store the files in the /tmp directory and delete the files when the execution completes" is the correct answer.

INCORRECT: "Store the files in Amazon S3 and use a lifecycle policy to delete the files automatically" is incorrect. The /tmp directory within the execution context has enough space for these files and this will reduce latency, cost, and execution time.

INCORRECT: "Store the files in an Amazon Instance Store and delete the files when the execution completes" is incorrect. Instance stores are ephemeral storage attached to Ec2 instances, they cannot be used except by EC2 instances for temporary storage.

INCORRECT: "Store the files in an Amazon EFS filesystem and delete the files when the execution completes" is incorrect. This is another option that would increase cost, complexity and latency. It is better to use the /tmp directory.

References:

https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 3:
Skipped

A company has a production application deployed using AWS Elastic Beanstalk. A new version of the application must be installed, and the company cannot tolerate any website downtime. If the application update fails, rollback should be fast and easy.

What deployment method should be used?

Explanation

The immutable deployment type launches new instances in a new ASG and deploys the version update to these instances before swapping traffic to these instances once healthy. There is zero downtime and a quick rollback in case of failures.

CORRECT: "Immutable" is the correct answer (as explained above.)

INCORRECT: "Rolling" is incorrect.

With a rolling update a few instances are updated at a time (batch), and then the deployment moves onto the next batch once the first batch is healthy. Each batch is taken out of service during deployment leading to downtime. If the update fails, you need to perform an additional rolling update to roll back the changes. This would not be ideal for this production environment.

INCORRECT: "All at once" is incorrect.

This would take all instances down at the same time. This is not suitable for a production environment.

INCORRECT: "Incremental" is incorrect.

This is not actually a type of deployment model in Elastic Beanstalk.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentmgmt-updates-immutable.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

Question 4:
Skipped

A company uses Amazon DynamoDB to store sensitive data that must be encrypted. The company security policy mandates that data must be encrypted before it is submitted to DynamoDB.

How can a Developer meet these requirements?

Explanation

In addition to encryption at rest, which is a server-side encryption feature, AWS provides the Amazon DynamoDB Encryption Client. This client-side encryption library enables you to protect your table data before submitting it to DynamoDB. With server-side encryption, your data is encrypted in transit over an HTTPS connection, decrypted at the DynamoDB endpoint, and then re-encrypted before being stored in DynamoDB. Client-side encryption provides end-to-end protection for your data from its source to storage in DynamoDB.

CORRECT: "Use the DynamoDB Encryption Client to enable end-to-end protection using client-side encryption" is the correct answer.

INCORRECT: "Use the UpdateTable operation to switch to a customer managed customer master key (CMK)" is incorrect. This will not ensure data is encrypted before it is submitted to DynamoDB; to meet this requirement, client-side encryption must be used.

INCORRECT: "Use the UpdateTable operation to switch to an AWS managed customer master key (CMK)" is incorrect. is will not ensure data is encrypted before it is submitted to DynamoDB; to meet this requirement, client-side encryption must be used.

INCORRECT: "Use AWS Certificate Manager (ACM) to create one certificate for each DynamoDB table" is incorrect. ACM is used to create SSL/TLS certificates and you cannot attach these to a DynamoDB table.

References:

https://docs.aws.amazon.com/kms/latest/Developerguide/services-dynamodb.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 5:
Skipped

A static website that serves a collection of images runs from an Amazon S3 bucket in the us-east-1 region. The website is gaining in popularity and is now being viewed around the world. How can a Developer improve the performance of the website for global users?

Explanation

CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. CloudFront is a good choice for distribution of frequently accessed static content that benefits from edge delivery—like popular website images, videos, media files or software downloads.

CORRECT: "Use Amazon CloudFront to cache the website content" is the correct answer.

INCORRECT: "Use Amazon ElastiCache to cache the website content" is incorrect as ElastiCache is used for caching the contents of databases, not S3 buckets.

INCORRECT: "Use cross region replication to replicate the bucket to several global regions" is incorrect as though this would get the content closer to users it would not provide a mechanism for connecting to those copies. This could be achieved using Route 53 latency based routing however it would be easier to use CloudFront.

INCORRECT: "Use Amazon S3 Transfer Acceleration to improve the performance of the website" is incorrect as this service is used for improving the performance of uploads to Amazon S3.

References:

https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/

https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudfront/

Question 6:
Skipped

A team of Developers need to deploy a website for a development environment. The team do not want to manage the infrastructure and just need to upload Node.js code to the instances.

Which AWS service should the Developers use?

Explanation

The Developers do not want to manage the infrastructure so the best AWS service for them to use to create a website for a development environment is AWS Elastic Beanstalk. This will allow the Developers to simply upload their Node.js code to Elastic Beanstalk and it will handle the provisioning and management of the underlying infrastructure.

AWS Elastic Beanstalk can be used to quickly deploy and manage applications in the AWS Cloud. Developers upload applications and Elastic Beanstalk handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. AWS Elastic Beanstalk leverages Elastic Load Balancing and Auto Scaling to automatically scale your application in and out based on your application’s specific needs.

CORRECT: "Create an AWS Elastic Beanstalk environment" is the correct answer.

INCORRECT: "Create an AWS CloudFormation template" is incorrect as though you can use CloudFormation to deploy the infrastructure, it will not be managed for you.

INCORRECT: "Create an AWS Lambda package" is incorrect as the Developers are deploying a website and Lambda is not a website. It is possible to use a Lambda function for a website however this would require a front-end component such as REST API.

INCORRECT: "Launch an Auto Scaling group of Amazon EC2 instances" is incorrect as this would not provide a managed solution.

References:

https://aws.amazon.com/elasticbeanstalk/details/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

Question 7:
Skipped

A developer needs to use the attribute of an Amazon S3 object that uniquely identifies the object in a bucket. Which of the following represents an Object Key?

Explanation

When you create an object, you specify the key name, which uniquely identifies the object in the bucket. For example, in the Amazon S3 console, when you highlight a bucket, a list of objects in your bucket appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.

The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders. However, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders. Suppose that your bucket (admin-created) has four objects with the following object keys:

· Development/Projects.xls

· Finance/statement1.pdf

· Private/taxdocument.pdf

· s3-dg.pdf

The console uses the key name prefixes (Development/, Finance/, and Private/) and delimiter ('/') to present a folder structure.

CORRECT: "Development/Projects.xls" is the correct answer.

INCORRECT: "s3://dctlabs/Development/Projects.xls" is incorrect as this is the full path to a file including the bucket name and object key.

INCORRECT: "Project=Blue" is incorrect as this is an example of an object tag. You can use object tagging to categorize storage. Each tag is a key-value pair.

INCORRECT: "arn:aws:s3:::dctlabs" is incorrect as this is the Amazon Resource Name (ARN) of a bucket.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 8:
Skipped

A serverless application uses an IAM role to authenticate and authorize access to an Amazon DynamoDB table. A Developer is troubleshooting access issues affecting the application. The Developer has access to the IAM role that the application is using.

Which of the following commands will help the Developer to test the role permissions using the AWS CLI?

Explanation

The AWS CLI “aws sts assume-role” command will enable the Developer to assume the role and gain temporary security credentials. The Developer can then use those security credentials to troubleshoot access issues that are affecting the application.
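
For reference, the equivalent call through the Python SDK (boto3) is sketched below; the role ARN, session name, and table details are assumptions.

import boto3

sts = boto3.client('sts')
resp = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/app-dynamodb-role',  # hypothetical role ARN
    RoleSessionName='troubleshooting'
)
creds = resp['Credentials']

# Use the temporary credentials to test access the same way the application would
dynamodb = boto3.client(
    'dynamodb',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)
print(dynamodb.get_item(TableName='app-table', Key={'id': {'S': '123'}}))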

CORRECT: "aws sts assume-role" is the correct answer.

INCORRECT: "aws sts get-session-token" is incorrect. This is used to get temporary credentials for an AWS account or IAM user. It can subsequently be used to call the assume-role API.

INCORRECT: "aws iam get-role-policy" is incorrect. This command retrieves the specified inline policy document that is embedded with the specified IAM role.

INCORRECT: "aws dynamodb describe-endpoints" is incorrect. This command returns the regional endpoint information.

References:

https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 9:
Skipped

A Developer is using AWS SAM to create a template for deploying a serverless application. The Developer plans to leverage an application from the AWS Serverless Application Repository in the template as a nested application.

Which resource type should the Developer specify?

Explanation

A serverless application can include one or more nested applications. You can deploy a nested application as a stand-alone artifact or as a component of a larger application.

As serverless architectures grow, common patterns emerge in which the same components are defined in multiple application templates. You can now separate out common patterns as dedicated applications, and then nest them as part of new or existing application templates. With nested applications, you can stay more focused on the business logic that's unique to your application.

To define a nested application in your serverless application, use the AWS::Serverless::Application resource type.

CORRECT: "AWS::Serverless::Application" is the correct answer.

INCORRECT: "AWS::Serverless:Function" is incorrect as this is used to define a serverless Lamdba function.

INCORRECT: "AWS::Serverless:HttpApi" is incorrect as this is used to define an API Gateway HTTP API.

INCORRECT: "AWS::Serverless:SimpleTable" is incorrect as this is used to define a DynamoDB table.

References:

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-nested-applications.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-sam/

Question 10:
Skipped

A developer has created an Amazon API Gateway with caching enabled in front of AWS Lambda. For some requests, it is necessary to ensure the latest data is received from the endpoint. How can the developer ensure the data is not stale?

Explanation

You can invalidate an existing cache entry and reload it from the integration endpoint for individual requests. The request must contain the Cache-Control: max-age=0 header. The client receives the response directly from the integration endpoint instead of the cache, provided that the client is authorized to do so. This replaces the existing cache entry with the new response, which is fetched from the integration endpoint.
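
As a simple illustration, a client request that bypasses the cache might look like the Python sketch below; the invoke URL is an assumption.

import urllib.request

url = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/items'  # hypothetical invoke URL

req = urllib.request.Request(url, headers={'Cache-Control': 'max-age=0'})
with urllib.request.urlopen(req) as resp:
    print(resp.read())  # response is served by the integration endpoint, not the cache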

CORRECT: "Send requests with the Cache-Control: max-age=0 header" is the correct answer.

INCORRECT: "Modify the TTL on the cache to a lower number" is incorrect as that would expire all entries after the TTL expires. The question states that for some requests (not all requests) that latest data must be received, in this case the best way to ensure this is to use invalidate the cache entries using the header in the correct answer.

INCORRECT: "The cache must be disables" is incorrect as you can achieve this requirement using invalidation as detailed in the explanation above.

INCORRECT: "Send requests with the Cache-Delete: max-age=0 header " is incorrect as that is the wrong header to use. The Developer should use the Cache-Control: max-age=0 header instead.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 11:
Skipped

An application runs on a fleet of Amazon EC2 instances in an Auto Scaling group. The application stores data in an Amazon DynamoDB table and all instances make updates to the table. When querying data, EC2 instances sometimes retrieve stale data. The Developer needs to update the application to ensure the most up-to-date data is retrieved for all queries.

How can the Developer accomplish this?

Explanation

DynamoDB supports eventually consistent and strongly consistent reads. When using eventually consistent reads the response might not reflect the results of a recently completed write operation. The response might include some stale data.

When using strongly consistent reads DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful.

DynamoDB uses eventually consistent reads unless you specify otherwise. Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter. If you set this parameter to true, DynamoDB uses strongly consistent reads during the operation.
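
For illustration, a strongly consistent GetItem call with boto3 might look like this minimal sketch; the table name and key are assumptions.

import boto3

dynamodb = boto3.client('dynamodb')

resp = dynamodb.get_item(
    TableName='app-data',                 # hypothetical table
    Key={'id': {'S': 'item-001'}},
    ConsistentRead=True                   # forces a strongly consistent read
)
print(resp.get('Item'))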

CORRECT: "Set the ConsistentRead parameter to true when calling GetItem" is the correct answer.

INCORRECT: "Cache the database writes using Amazon DynamoDB Accelerator" is incorrect. DynamoDB DAX caches items from DynamoDB to improve read performance but will not ensure the latest data is retrieved.

INCORRECT: "Use the TransactWriteItems API when issuing PutItem actions" is incorrect. This operation is used to group transactions in an all-or-nothing update.

INCORRECT: "Use the UpdateGlobalTable API to create a global secondary index" is incorrect. A GSI does not assist in any way in this solution.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/Developerguide/HowItWorks.ReadConsistency.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 12:
Skipped

A developer created an operational dashboard for a serverless application using Amazon API Gateway, AWS Lambda, Amazon S3, and Amazon DynamoDB. Users will connect to the dashboard from a variety of mobile applications, desktops and tablets.

The developer needs an authentication mechanism that can allow users to sign-in and will remember the devices users sign in from and suppress the second factor of authentication for remembered devices. Which AWS service should the developer use to support this scenario?

Explanation

You can enable device remembering for Amazon Cognito user pools. A remembered device can serve in place of the security code delivered via SMS as a second factor of authentication. This suppresses the second authentication challenge from remembered devices and thus reduces the friction users experience with multi-factor authentication (MFA).

Therefore, Amazon Cognito is the best answer and will support all of the requirements in the scenario.

CORRECT: "Amazon Cognito" is the correct answer.

INCORRECT: "AWS Directory Service" is incorrect as this service enables directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud.

INCORRECT: "AWS KMS" is incorrect as KMS is used to manage encryption keys; it does not enable authentication from mobile devices.

INCORRECT: "Amazon IAM" is incorrect as IAM is not the best authentication solution for mobile users. It also does not support device remembering or any ability to suppress MFA when it is enabled.

References:

https://aws.amazon.com/blogs/mobile/tracking-and-remembering-devices-using-amazon-cognito-your-user-pools/

https://aws.amazon.com/cognito/details/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cognito/

Question 13:
Skipped

A company maintains a REST API service using Amazon API Gateway with native API key validation. The company recently launched a new registration page, which allows users to sign up for the service. The registration page creates a new API key using CreateApiKey and sends the new key to the user. When the user attempts to call the API using this key, the user receives a 403 Forbidden error. Existing users are unaffected and can still call the API.

What code updates will grant these new users access to the API?

Explanation

A usage plan specifies who can access one or more deployed API stages and methods—and also how much and how fast they can access them. The plan uses API keys to identify API clients and meters access to the associated API stages for each key. It also lets you configure throttling limits and quota limits that are enforced on individual client API keys.

API keys are alphanumeric string values that you distribute to application developer customers to grant access to your API. You can use API keys together with usage plans or Lambda authorizers to control access to your APIs. API Gateway can generate API keys on your behalf, or you can import them from a CSV file. You can generate an API key in API Gateway, or import it into API Gateway from an external source.

To associate the newly created key with a usage plan, the CreateUsagePlanKey API can be called. This creates a usage plan key for adding an existing API key to a usage plan.
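
A minimal boto3 sketch of associating a newly created key with a usage plan is shown below; the key name and usage plan ID are assumptions.

import boto3

apigw = boto3.client('apigateway')

new_key = apigw.create_api_key(name='new-user-key', enabled=True)   # key name is hypothetical
apigw.create_usage_plan_key(
    usagePlanId='usageplan123',            # hypothetical usage plan ID
    keyId=new_key['id'],
    keyType='API_KEY'
)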

CORRECT: "The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan" is the correct answer.

INCORRECT: "The createDeployment method must be called so the API can be redeployed to include the newly created API key" is incorrect as you do not need to redeploy an API to a stage in order to associate an API key.

INCORRECT: "The updateAuthorizer method must be called to update the API’s authorizer to include the newly created API key" is incorrect as this updates and authorizer resource, not an API key.

INCORRECT: "The importApiKeys method must be called to import all newly created API keys into the current stage of the API" is incorrect as this imports API keys to API Gateway from an external source such as a CSV file which is not relevant to this scenario.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html

http://docs.amazonaws.cn/en_us/sdkfornet/v3/apidocs/items/APIGateway/MAPIGatewayCreateUsagePlanKeyCreateUsagePlanKeyRequest.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 14:
Skipped

A company runs multiple microservices that each use their own Amazon DynamoDB table. The “customers” microservice needs data that originates in the “orders” microservice.

What approach represents the SIMPLEST method for the “customers” table to get near real-time updates from the “orders” table?

Explanation

DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time.

Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified. A stream record contains information about a data modification to a single item in a DynamoDB table. You can configure the stream so that the stream records capture additional information, such as the "before" and "after" images of modified items.

For this scenario, we can enable a DynamoDB stream on the “orders” table and then configure the “customers” microservice to read records from the stream and write those records, or relevant attributes of those records, to the “customers” table.
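
As a rough sketch of how the “customers” microservice could consume the stream with boto3 (the stream ARN and table layout are assumptions, and the stream is assumed to capture new images):

import boto3

streams = boto3.client('dynamodbstreams')
dynamodb = boto3.client('dynamodb')

# Hypothetical stream ARN for the "orders" table
stream_arn = 'arn:aws:dynamodb:us-east-1:123456789012:table/orders/stream/2024-01-01T00:00:00.000'

shard_id = streams.describe_stream(StreamArn=stream_arn)['StreamDescription']['Shards'][0]['ShardId']
iterator = streams.get_shard_iterator(
    StreamArn=stream_arn, ShardId=shard_id, ShardIteratorType='LATEST'
)['ShardIterator']

for record in streams.get_records(ShardIterator=iterator)['Records']:
    if record['eventName'] in ('INSERT', 'MODIFY'):
        new_image = record['dynamodb']['NewImage']                # item attributes in DynamoDB JSON
        dynamodb.put_item(TableName='customers', Item=new_image)  # table name is an assumption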


CORRECT: "Enable Amazon DynamoDB streams on the “orders” table, configure the “customers” microservice to read records from the stream" is the correct answer.

INCORRECT: "Enable DynamoDB streams for the “customers” table, trigger an AWS Lambda function to read records from the stream and write them to the “orders” table" is incorrect. This could be a good solution if it wasn’t backward. We can trigger a Lambda function to then process the records from the stream. However, we should be enabling the stream on the “orders” table, not the “customers” table, and then writing the records to the “customers” table, not the “orders” table.

INCORRECT: "Use Amazon CloudWatch Events to send notifications every time an item is added or modified in the “orders” table" is incorrect. CloudWatch Events is used to respond to changes in the state of specific AWS services. It does not support DynamoDB.

INCORRECT: "Use Amazon Kinesis Firehose to deliver all changes in the “orders” table to the “customers” table" is incorrect. Kinesis Firehose cannot be configured to ingest data from a DynamoDB table, nor is DynamoDB a supported destination.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 15:
Skipped

An application requires an in-memory caching engine. The cache should provide high availability as repopulating data is expensive. How can this requirement be met?

Explanation

Single-node Amazon ElastiCache Redis clusters are in-memory entities with limited data protection services (AOF). If your cluster fails for any reason, you lose all the cluster's data.

However, if you're running the Redis engine, you can group 2 to 6 nodes into a cluster with replicas where 1 to 5 read-only nodes contain replicated data from the group's single read/write primary node.

In this scenario, if one node fails for any reason, you do not lose all your data since it is replicated in one or more other nodes. Due to replication latency, some data may be lost if it is the primary read/write node that fails.

Therefore, the best solution is to use ElastiCache Redis with replicas.

CORRECT: "Use Amazon ElastiCache Redis with replicas" is the correct answer.

INCORRECT: "Use Amazon ElastiCache Memcached with partitions" is incorrect as partitions are not copies of data so if you lose a partition you lose the data contained within it (no high availability).

INCORRECT: "Amazon RDS with a Read Replica" is incorrect as this is not an in-memory database and is read-only.

INCORRECT: "Amazon Aurora with a Global Database" is incorrect as this is not an in-memory database and this configuration is for scaling a database globally.

References:

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

Question 16:
Skipped

A company is creating an application that must support Security Assertion Markup Language (SAML) and authentication with social identity providers. The application must also be authorized to access data in Amazon S3 buckets and Amazon DynamoDB tables.

Which AWS service or feature will meet these requirements with the LEAST amount of additional coding?

Explanation

Amazon Cognito identity pools (federated identities) enable you to create unique identities for your users and federate them with identity providers. With an identity pool, you can obtain temporary, limited-privilege AWS credentials to access other AWS services.

Amazon Cognito identity pools support the following identity providers:

• Public providers: Amazon, Facebook, Google, Apple

• Amazon Cognito user pools

• Open ID Connect providers (identity pools)

• SAML identity providers (identity pools)

• Developer authenticated identities (identity pools)

Identity pools are well suited to use cases where you need to authenticate users through one of the above IdPs and then authorize access to AWS services such as Amazon S3 and DynamoDB.

CORRECT: "Amazon Cognito identity pools" is the correct answer (as explained above.)

INCORRECT: "Amazon Cognito user pools" is incorrect.

You can use a user pool for authentication but you would then need to use the identity pool for authorization to AWS services. Therefore, this option would require more additional coding.

INCORRECT: "AWS AppSync GraphQL API" is incorrect.

There is no need to implement an API for this use case. The developer simply needs a solution for authorization and access control.

INCORRECT: "Amazon API Gateway REST API" is incorrect.

There is no need to implement an API for this use case. The developer simply needs a solution for authorization and access control.

References:

https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html

https://aws.amazon.com/premiumsupport/knowledge-center/cognito-user-pools-identity-pools/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cognito/

Question 17:
Skipped

A new application will be hosted on the domain name dctlabs.com using an Amazon API Gateway REST API front end. The Developer needs to configure the API with a path to dctlabs.com/products that will be accessed using the HTTP GET verb. How MUST the Developer configure the API? (Select TWO.)

Explanation

An API Gateway REST API is a collection of HTTP resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. Typically, API resources are organized in a resource tree according to the application logic. Each API resource can expose one or more API methods that have unique HTTP verbs supported by API Gateway.

The Developer would need to create a resource, which in this case would be /products, and then create a GET method within the resource.
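
For illustration, the same configuration could be scripted with boto3 as in the sketch below; the REST API ID is an assumption.

import boto3

apigw = boto3.client('apigateway')
api_id = 'a1b2c3'   # hypothetical REST API ID

# Find the root resource ("/") and create the /products resource beneath it
root_id = next(r['id'] for r in apigw.get_resources(restApiId=api_id)['items'] if r['path'] == '/')
products = apigw.create_resource(restApiId=api_id, parentId=root_id, pathPart='products')

# Create the GET method on the new resource
apigw.put_method(
    restApiId=api_id,
    resourceId=products['id'],
    httpMethod='GET',
    authorizationType='NONE'
)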

CORRECT: "Create a /products resource" is a correct answer.

CORRECT: "Create a GET method" is a correct answer.

INCORRECT: "Create a /products method" is incorrect as a resource should be created.

INCORRECT: "Create a GET resource" is incorrect as a method should be created.

INCORRECT: "Create a /GET method" is incorrect as a method is not preceded by a slash.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-basic-concept.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 18:
Skipped

A DynamoDB table is being used to store session information for users of an online game. A developer has noticed that the table size has increased considerably and much of the data is not required after a gaming session is completed.

What is the MOST cost-effective approach to reducing the size of the table?

Explanation

Time to Live (TTL) for Amazon DynamoDB lets you define when items in a table expire so that they can be automatically deleted from the database. With TTL enabled on a table, you can set a timestamp for deletion on a per-item basis, allowing you to limit storage usage to only those records that are relevant.

TTL is useful if you have continuously accumulating data that loses relevance after a specific time period (for example, session data, event logs, usage patterns, and other temporary data). If you have sensitive data that must be retained only for a certain amount of time according to contractual or regulatory obligations, TTL helps you ensure that it is removed promptly and as scheduled.

Therefore, using a TTL is the best solution as it will automatically purge items after their useful lifetime.
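
A minimal boto3 sketch of enabling TTL and writing an item with an expiry timestamp is shown below; the table and attribute names are assumptions.

import time
import boto3

dynamodb = boto3.client('dynamodb')

# Enable TTL on the table using an attribute named "expiresAt"
dynamodb.update_time_to_live(
    TableName='game-sessions',
    TimeToLiveSpecification={'Enabled': True, 'AttributeName': 'expiresAt'}
)

# New items carry an epoch timestamp; DynamoDB deletes them after that time without consuming WCUs
dynamodb.put_item(
    TableName='game-sessions',
    Item={
        'sessionId': {'S': 'abc-123'},
        'expiresAt': {'N': str(int(time.time()) + 3600)}   # expire one hour after the session
    }
)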

CORRECT: "Enable a Time To Live (TTL) on the table and add a timestamp attribute on new items" is the correct answer.

INCORRECT: "Use the batch-write-item API to delete the data" is incorrect as this would use RCUs and WCUs to remove the data.

INCORRECT: "Create an AWS Lambda function that purges stale items from the table daily" is incorrect as this would also require reading/writing to the table so it would require RCUs/WCUs.

INCORRECT: "Use the delete-item API to delete the data" is incorrect is incorrect as this would use RCUs and WCUs to remove the data.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 19:
Skipped

A developer needs to add sign-up and sign-in capabilities for a mobile app. The solution should integrate with social identity providers (IdPs) and SAML IdPs. Which service should the developer use?

Explanation

User pools are for authentication (identify verification). With a user pool, your app users can sign in through the user pool or federate through a third-party identity provider (IdP).

Identity pools are for authorization (access control). You can use identity pools to create unique identities for users and give them access to other AWS services.

User pool use cases:

Use a user pool when you need to:

Design sign-up and sign-in webpages for your app.

Access and manage user data.

Track user device, location, and IP address, and adapt to sign-in requests of different risk levels.

Use a custom authentication flow for your app.

Identity pool use cases:

Use an identity pool when you need to:

Give your users access to AWS resources, such as an Amazon Simple Storage Service (Amazon S3) bucket or an Amazon DynamoDB table.

Generate temporary AWS credentials for unauthenticated users.

Therefore, a user pool is the correct service to use as in this case we are not granting access to AWS services, just providing sign-up and sign-in capabilities for a mobile app.

CORRECT: "AWS Cognito user pool" is the correct answer.

INCORRECT: "AWS Cognito identity pool" is incorrect as an identity pool is used when you need to provide access to AWS resources (see explanation above).

INCORRECT: "API Gateway with a Lambda authorizer" is incorrect as AWS Cognito is the best solution for providing sign-up and sign-in for mobile apps and also integrates with the 3rd party IdPs.

INCORRECT: "AWS IAM and STS" is incorrect as AWS Cognito is the best solution for providing sign-up and sign-in for mobile apps and also integrates with the 3rd party IdPs.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/cognito-user-pools-identity-pools/

https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-identity-federation.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cognito/

Question 20:
Skipped

A Developer implemented a static website hosted in Amazon S3 that makes web service requests hosted in Amazon API Gateway and AWS Lambda. The site is showing an error that reads:

“No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘null’ is therefore not allowed access.”

What should the Developer do to resolve this issue?

Explanation

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. In this scenario the static website served from the S3 bucket makes requests to resources served by Amazon API Gateway and AWS Lambda. Therefore, the CORS configuration must be enabled on the requested endpoint, which is the method in API Gateway.

CORRECT: "Enable cross-origin resource sharing (CORS) for the method in API Gateway" is the correct answer.

INCORRECT: "Enable cross-origin resource sharing (CORS) on the S3 bucket" is incorrect as CORS must be enabled on the requested endpoint which is API Gateway, not S3.

INCORRECT: "Add the Access-Control-Request-Method header to the request" is incorrect as this is a request header value that asks permission to use a specific HTTP method.

INCORRECT: "Add the Access-Control-Request-Headers header to the request" is incorrect as this notifies a server what headers will be sent in a request.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 21:
Skipped

A Developer needs to manage AWS services from a local development server using the AWS CLI. How can the Developer ensure that the CLI uses their IAM permissions?

Explanation

For general use, the aws configure command is the fastest way to set up your AWS CLI installation. It prompts for the access key ID, secret access key, default AWS Region, and output format.

You can configure the AWS CLI on Linux, MacOS, and Windows. Computers can be located anywhere as long as they can connect to the AWS API.

For this scenario, the best solution is to run aws configure and use the IAM user’s access key ID and secret access key. This will mean that commands run using the AWS CLI will use the user’s IAM permissions as required.

CORRECT: "Run the aws configure command and provide the Developer’s IAM access key ID and secret access key" is the correct answer.

INCORRECT: "Create an IAM Role with the required permissions and attach it to the local server’s instance profile" is incorrect as this is not an Amazon EC2 instance so you cannot attach an IAM role.

INCORRECT: "Put the Developer’s IAM user account in an IAM group that has the necessary permissions" is incorrect as this does not assist with configuring the AWS CLI.

INCORRECT: "Save the Developer’s IAM login credentials as environment variables and reference them when executing AWS CLI commands" is incorrect as the IAM login credentials cannot be used with the AWS CLI. You need to use an access key ID and secret access key with the AWS CLI and these are configured for use by running aws configure.

References:

https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html

Question 22:
Skipped

An application exports documents to an Amazon S3 bucket. The data must be encrypted at rest and company policy mandates that encryption keys must be rotated annually. How can this be achieved automatically and with the LEAST effort?

Explanation

With AWS KMS you can choose to have AWS KMS automatically rotate CMKs every year, provided that those keys were generated within AWS KMS HSMs. Automatic key rotation is not supported for imported keys, asymmetric keys, or keys generated in an AWS CloudHSM cluster using the AWS KMS custom key store feature.

If you choose to import keys to AWS KMS or asymmetric keys or use a custom key store, you can manually rotate them by creating a new CMK and mapping an existing key alias from the old CMK to the new CMK.

If you choose to have AWS KMS automatically rotate keys, you don’t have to re-encrypt your data. AWS KMS automatically keeps previous versions of keys to use for decryption of data encrypted under an old version of a key. All new encryption requests against a key in AWS KMS are encrypted under the newest version of the key.
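
For illustration, automatic rotation can be enabled on a customer managed key with a single API call, as in this boto3 sketch (the key ID is an assumption):

import boto3

kms = boto3.client('kms')
key_id = '1234abcd-12ab-34cd-56ef-1234567890ab'   # hypothetical customer managed key ID

kms.enable_key_rotation(KeyId=key_id)             # KMS now rotates the key material annually
print(kms.get_key_rotation_status(KeyId=key_id))  # confirms KeyRotationStatus is True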

CORRECT: "Use AWS KMS keys with automatic rotation enabled" is the correct answer.

INCORRECT: "Import a custom key into AWS KMS and configure automatic rotation" is incorrect as per the explanation above KMS will not automatically rotate imported encryption keys (it can automatically rotate imported CMKs though).

INCORRECT: "Encrypt the data within the application before writing to S3" is incorrect as this is both an incomplete solution (where would the encryption keys come from) and would also likely require more maintenance and management overhead.

INCORRECT: "Configure automatic rotation with AWS Secrets Manager" is incorrect as Secrets Manager is used for rotating credentials, not encryption keys.

References:

https://aws.amazon.com/kms/faqs/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-kms/

Question 23:
Skipped

A utilities company needs to ensure that documents uploaded by customers through a web portal are securely stored in Amazon S3 with encryption at rest. The company does not want to manage the security infrastructure in-house. However, the company still needs to maintain control over its encryption keys due to industry regulations.

Which encryption strategy should a Developer use to meet these requirements?

Explanation

Server-side encryption is about protecting data at rest. Server-side encryption encrypts only the object data, not object metadata. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own encryption keys.

With the encryption key you provide as part of your request, Amazon S3 manages the encryption as it writes to disks and decryption when you access your objects. Therefore, you don't need to maintain any code to perform data encryption and decryption. The only thing you do is manage the encryption keys you provide.

Therefore, SSE-C is the best choice as AWS will manage all encryption and decryption operations while the company gets to supply keys that it can manage.
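
As a rough sketch of SSE-C with boto3 (the bucket, object key, and randomly generated key are assumptions):

import os
import boto3

s3 = boto3.client('s3')
customer_key = os.urandom(32)   # 256-bit key the company manages itself (example only)

s3.put_object(
    Bucket='utility-docs',                  # hypothetical bucket
    Key='statements/2024-01.pdf',
    Body=b'document contents',
    SSECustomerAlgorithm='AES256',
    SSECustomerKey=customer_key             # boto3 encodes the key and adds the required MD5 header
)

# The same key must be supplied again to read the object back
obj = s3.get_object(
    Bucket='utility-docs',
    Key='statements/2024-01.pdf',
    SSECustomerAlgorithm='AES256',
    SSECustomerKey=customer_key
)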

CORRECT: "Server-side encryption with customer-provided encryption keys (SSE-C)" is the correct answer.

INCORRECT: "Server-side encryption with Amazon S3 managed keys (SSE-S3)" is incorrect as with this option AWS manage the keys in S3.

INCORRECT: "Server-side encryption with AWS KMS managed keys (SSE-KMS)" is incorrect as with this option the keys are managed by AWS KMS.

INCORRECT: "Client-side encryption" is incorrect as with this option all encryption and decryption is handled by the company (client) which is not desired in this scenario.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 24:
Skipped

A Developer is configuring an Amazon ECS Service with Auto Scaling. The tasks should scale based on user load in the previous 20 seconds. How can the Developer enable the scaling?

Explanation

Metrics produced by AWS services are standard resolution by default. When you publish a custom metric, you can define it as either standard resolution or high resolution. When you publish a high-resolution metric, CloudWatch stores it with a resolution of 1 second, and you can read and retrieve it with a period of 1 second, 5 seconds, 10 seconds, 30 seconds, or any multiple of 60 seconds.

User activity is not a standard CloudWatch metric, and at the resolution needed in this scenario a custom CloudWatch metric is required anyway. Therefore, for this scenario the Developer should create a high-resolution custom Amazon CloudWatch metric for user activity data and publish the data every 10 seconds.
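
Publishing such a metric could look like the boto3 sketch below; the namespace, metric name, and value are assumptions.

import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_data(
    Namespace='MyApp',                     # hypothetical namespace
    MetricData=[{
        'MetricName': 'ActiveUsers',
        'Value': 142,
        'Unit': 'Count',
        'StorageResolution': 1             # 1 = high-resolution metric (sub-minute granularity)
    }]
)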

CORRECT: "Create a high-resolution custom Amazon CloudWatch metric for user activity data, then publish data every 10 seconds" is the correct answer.

INCORRECT: "Create a high-resolution custom Amazon CloudWatch metric for user activity data, then publish data every 5 seconds" is incorrect as the resolution is higher than required which will cost more. We need the resolution to be 20 seconds so that means publishing in 10 second intervals with 2 data points. At 5 second intervals there would be 4 data points which will incur additional costs.

INCORRECT: "Create a standard-resolution custom Amazon CloudWatch metric for user activity data, then publish data every 30 seconds" is incorrect as standard resolution metrics have a granularity of one minute.

INCORRECT: "Create a standard-resolution custom Amazon CloudWatch metric for user activity data, then publish data every 5 seconds" is incorrect as standard resolution metrics have a granularity of one minute.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 25:
Skipped

A Developer is building a WebSocket API using Amazon API Gateway. The payload sent to this API is JSON that includes an action key which can have multiple values. The Developer must integrate with different routes based on the value of the action key of the incoming JSON payload.

How can the Developer accomplish this task with the LEAST amount of configuration?

Explanation

In your WebSocket API, incoming JSON messages are directed to backend integrations based on routes that you configure. (Non-JSON messages are directed to a $default route that you configure.)

A route includes a route key, which is the value that is expected once a route selection expression is evaluated. The routeSelectionExpression is an attribute defined at the API level. It specifies a JSON property that is expected to be present in the message payload.

For example, if your JSON messages contain an action property and you want to perform different actions based on this property, your route selection expression might be ${request.body.action}. Your routing table would specify which action to perform by matching the value of the action property against the custom route key values that you have defined in the table.
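
For illustration, creating the WebSocket API and a couple of routes with boto3 might look like this sketch; the API name and route keys (joinGame, sendMove) are assumptions.

import boto3

apigwv2 = boto3.client('apigatewayv2')

# The route selection expression points at the "action" key of the JSON payload
api = apigwv2.create_api(
    Name='game-api',                                   # hypothetical API name
    ProtocolType='WEBSOCKET',
    RouteSelectionExpression='$request.body.action'
)

# Each possible value of "action" then gets its own route
apigwv2.create_route(ApiId=api['ApiId'], RouteKey='joinGame')
apigwv2.create_route(ApiId=api['ApiId'], RouteKey='sendMove')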

CORRECT: "Set the value of the route selection expression to $request.body.action" is the correct answer.

INCORRECT: "Create a separate stage for each possible value of the action key" is incorrect. There is no need to create separate stages, the action key can be used for routing as described above.

INCORRECT: "Create a mapping template to map the action key to an integration request" is incorrect. Mapping templates are not used for routing to different integrations, they are used for transforming data.

INCORRECT: "Set the value of the route selection expression to $default" is incorrect. The $default route is used for routing non-JSON messages.

References:

https://docs.aws.amazon.com/apigateway/latest/Developerguide/websocket-api-develop-routes.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 26:
Skipped

A Developer has created an Amazon S3 bucket and uploaded some objects that will be used for a publicly available static website. What steps MUST be performed to configure the bucket as a static website? (Select TWO.)

Explanation

You can use Amazon S3 to host a static website. On a static website, individual webpages include static content. They might also contain client-side scripts.

To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. When you configure a bucket as a static website, you enable static website hosting, set permissions, and add an index document.

When you enable static website hosting for your bucket, you enter the name of the index document (for example, index.html). After you enable static website hosting for your bucket, you upload an HTML file with the index document name to your bucket. Note that an error document is optional.

To provide permissions, it is necessary to disable “block public access” settings and then create a bucket policy that grants everyone the s3:GetObject permission.
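
For example, a minimal boto3 sketch covering both required steps (index document plus public read access) could look like the following; the bucket name is an assumption.

import json
import boto3

s3 = boto3.client('s3')
bucket = 'my-static-site'   # hypothetical bucket name

# Enable static website hosting with an index document (error document is optional)
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={'IndexDocument': {'Suffix': 'index.html'}}
)

# Allow public reads: remove the block public access settings, then grant s3:GetObject to everyone
s3.delete_public_access_block(Bucket=bucket)
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'PublicReadGetObject',
        'Effect': 'Allow',
        'Principal': '*',
        'Action': 's3:GetObject',
        'Resource': f'arn:aws:s3:::{bucket}/*'
    }]
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))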

CORRECT: "Upload an index document and enter the name of the index document when enabling static website hosting" is a correct answer.

CORRECT: "Enable public access and grant everyone the s3:GetObject permissions" is also a correct answer.

INCORRECT: "Upload an index and error document and enter the name of the index and error documents when enabling static website hosting" is incorrect as the error document is optional and the question specifically asks for the steps that MUST be completed.

INCORRECT: "Create an object access control list (ACL) granting READ permissions to the AllUsers group" is incorrect. This may be necessary if the bucket objects are not owned by the bucket owner but the question states that the Developer created the bucket and uploaded the objects and so must be the object owner.

INCORRECT: "Upload a certificate from AWS Certificate Manager" is incorrect as this is not supported or necessary for static websites on Amazon S3.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

Question 27:
Skipped

A developer needs to create a serverless application that uses an event-driven architecture.

How can the developer configure the application to automatically receive and process events?

Explanation

You can use a Lambda function to process Amazon Simple Notification Service (Amazon SNS) notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. You can subscribe your function to topics in the same account or in other AWS accounts.

When an event is submitted to the SNS topic it will be sent to AWS Lambda which can then process the event. This is an example of a simple event-driven architecture.
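
A minimal boto3 sketch of wiring this up is shown below; the topic name, function name, and ARNs are assumptions.

import boto3

sns = boto3.client('sns')
lambda_client = boto3.client('lambda')

topic_arn = sns.create_topic(Name='app-events')['TopicArn']                     # hypothetical topic
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:process-events'  # hypothetical function

# Allow SNS to invoke the function, then subscribe the function to the topic
lambda_client.add_permission(
    FunctionName='process-events',
    StatementId='sns-invoke',
    Action='lambda:InvokeFunction',
    Principal='sns.amazonaws.com',
    SourceArn=topic_arn
)
sns.subscribe(TopicArn=topic_arn, Protocol='lambda', Endpoint=function_arn)

# Publishing an event to the topic now triggers the function
sns.publish(TopicArn=topic_arn, Message='{"orderId": "123"}')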

CORRECT: "Create an Amazon SNS topic and an AWS Lambda function. Subscribe the Lambda function to the SNS topic and submit events to the SNS topic" is the correct answer (as explained above.)

INCORRECT: "Create an Amazon SQS queue and publish events to the queue. Configure an Amazon EC2 instance to poll the queue and consume the messages" is incorrect.

Amazon EC2 is not a serverless service, so it cannot be used if the application must be serverless.

INCORRECT: "Create an Amazon SQS topic and an Amazon EC2 instance. Subscribe the instance to the SNS topic and submit events to the topic" is incorrect.

EC2 is not serverless and cannot be subscribed to an SNS topic.

INCORRECT: "Create an Amazon SNS topic and an AWS Lambda function. Configure an HTTP endpoint on the Lambda function and subscribe the HTTP endpoint to the SNS topic. Submit events to the SNS topic" is incorrect.

An HTTP endpoint is not required for the serverless application. Lambda can be subscribed directly to an SNS topic.

References:

https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 28:
Skipped

A Developer has completed some code updates and needs to deploy the updates to an Amazon Elastic Beanstalk environment. The update must be deployed in the fastest possible time and application downtime is acceptable.

Which deployment policy should the Developer choose?

Explanation

AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies and options that let you configure batch size and health check behavior during deployments.

Each deployment policy has advantages and disadvantages, and it’s important to select the best policy to use for each situation.

The “all at once” policy will deploy the update in the fastest time but will incur downtime.

All at once:

Deploys the new version to all instances simultaneously.

All of your instances are out of service while the deployment takes place.

Fastest deployment.

Good for quick iterations in development environment.

You will experience an outage while the deployment is taking place – not ideal for mission-critical systems.

If the update fails, you need to roll back the changes by re-deploying the original version to all of your instances.

No additional cost.

For this scenario downtime is acceptable and deploying in the fastest possible time is required so the “all at once” policy is the best choice.

CORRECT: "All at once" is the correct answer.

INCORRECT: "Rolling" is incorrect as this takes longer than “all at once”. This is a better choice if speed is required but downtime is not acceptable.

INCORRECT: "Rolling with additional batch" is incorrect if you require no reduction in capacity as it adds an additional batch of instances to the deployment.

INCORRECT: "Immutable" is incorrect as this takes a long time to complete. This is good if you cannot sustain application downtime and need to be able to quickly and easily roll back if issues occur.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

Question 29:
Skipped

A Developer is developing a web application and will maintain separate sets of resources for the alpha, beta, and release stages. Each version runs on Amazon EC2 and uses an Elastic Load Balancer.

How can the Developer create a single page to view and manage all of the resources?

Explanation

In AWS, a resource is an entity that you can work with. Examples include an Amazon EC2 instance, an AWS CloudFormation stack, or an Amazon S3 bucket. If you work with multiple resources, you might find it useful to manage them as a group rather than move from one AWS service to another for each task.

By default, the AWS Management Console is organized by AWS service. But with Resource Groups, you can create a custom console that organizes and consolidates information based on criteria specified in tags, or the resources in an AWS CloudFormation stack. The following list describes some of the cases in which resource grouping can help organize your resources.

An application that has different phases, such as development, staging, and production.

Projects managed by multiple departments or individuals.

A set of AWS resources that you use together for a common project or that you want to manage or monitor as a group.

A set of resources related to applications that run on a specific platform, such as Android or iOS.

CORRECT: "Create a resource group" is the correct answer.

INCORRECT: "Deploy all resources using a single Amazon CloudFormation stack" is incorrect as this would not be a best practice as it is better to create separate stacks to manage deployment separately.

INCORRECT: "Create an AWS Elastic Beanstalk environment for each stage" is incorrect. It’s fine to create separate environments for each stage, however this won’t create a single view to view and manage all resources.

INCORRECT: "Create a single AWS CodeDeploy deployment" is incorrect as each stage should be created in a separate deployment.

References:

https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html

Question 30:
Skipped

A developer is designing a web application that will run on Amazon EC2 Linux instances using an Auto Scaling Group. The application should scale based on a threshold for the number of users concurrently using the application.

How should the Auto Scaling Group be configured to scale out?

Explanation

You can create a custom CloudWatch metric for your EC2 Linux instance statistics by creating a script through the AWS Command Line Interface (AWS CLI). Then, you can monitor that metric by pushing it to CloudWatch. In this scenario you could then monitor the number of users currently logged in.

CORRECT: "Create a custom Amazon CloudWatch metric for concurrent users" is the correct answer.

INCORRECT: "Use the Amazon CloudWatch metric “NetworkIn”" is incorrect as this will only shows statistics for the number of inbound connections, not the number of concurrent users.

INCORRECT: "Use a target tracking scaling policy" is incorrect as this is used to maintain a certain number of instances based on a target utilization.

INCORRECT: "Create a custom Amazon CloudWatch metric for memory usage" is incorrect as memory usage does not tell us how many users are logged in.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-custom-metrics/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 31:
Skipped

An application is deployed using AWS Elastic Beanstalk and uses a Classic Load Balancer (CLB). A developer is performing a blue/green migration to change to an Application Load Balancer (ALB).

After deployment, the developer has noticed that customers connecting to the ALB need to re-authenticate every time they connect. Normally they would only authenticate once and then be able to reconnect without re-authenticating for several hours.

How can the developer resolve this issue?

Explanation

Sticky sessions are a mechanism to route requests to the same target in a target group. This is useful for servers that maintain state information in order to provide a continuous experience to clients. To use sticky sessions, the clients must support cookies.

In this case, it is likely that the clients authenticate to the back-end instance and when they are reconnecting without sticky sessions enabled they may be load balanced to a different instance and need to authenticate again.

The most obvious first step in troubleshooting this issue is to enable sticky sessions on the target group.
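
A minimal boto3 sketch of enabling this setting (the target group ARN and cookie duration are hypothetical):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable load-balancer-generated cookie stickiness for roughly 4 hours.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                   "targetgroup/web-tg/0123456789abcdef",   # hypothetical ARN
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "14400"},
    ],
)
```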

CORRECT: "Enable Sticky Sessions on the target group" is the correct answer.

INCORRECT: "Enable IAM authentication on the ALBs listener" is incorrect as you cannot enable “IAM authentication” on a listener.

INCORRECT: "Add a new SSL certificate to the ALBs listener" is incorrect as this is not related to authentication.

INCORRECT: "Change the load balancing algorithm on the target group to “least outstanding requests)" is incorrect as this does not prevent the customer from being load balanced to a different instance, which is what is most likely to resolve this issue.

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#sticky-sessions

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/

Question 32:
Skipped

A Developer is creating a REST service using Amazon API Gateway with AWS Lambda integration. The service adds data to a spreadsheet and the data is sent as query string parameters in the method request.

How should the Developer convert the query string parameters to arguments for the Lambda function?

Explanation

Standard API Gateway parameter and response code mapping templates allow you to map parameters one-to-one and map a family of integration response status codes (matched by a regular expression) to a single response status code.

Mapping template overrides provide you with the flexibility to perform many-to-one parameter mappings; override parameters after standard API Gateway mappings have been applied; conditionally map parameters based on body content or other parameter values; programmatically create new parameters on the fly; and override status codes returned by your integration endpoint.

Any type of request parameter, response header, or response status code may be overridden.

Following are example uses for a mapping template override:

To create a new header (or overwrite an existing header) as a concatenation of two parameters

To override the response code to a success or failure code based on the contents of the body

To conditionally remap a parameter based on its contents or the contents of some other parameter

To iterate over the contents of a JSON body and remap key-value pairs to headers or query strings

Therefore, the Developer can convert the query string parameters by creating a mapping template.
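
For illustration, a boto3 sketch (the API ID, resource ID, account ID, and function name are hypothetical) of attaching a mapping template that turns two query string parameters into a JSON payload for a non-proxy Lambda integration:

```python
import boto3

apigw = boto3.client("apigateway")

# VTL mapping template: copy the ?category=...&price=... query string
# parameters into the JSON body that Lambda receives as its event.
mapping_template = """
{
    "category": "$input.params('category')",
    "price": "$input.params('price')"
}
"""

apigw.put_integration(
    restApiId="abc123",                 # hypothetical REST API ID
    resourceId="res001",                # hypothetical resource ID
    httpMethod="GET",
    type="AWS",                         # non-proxy (custom) Lambda integration
    integrationHttpMethod="POST",
    uri="arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
        "arn:aws:lambda:us-east-1:111122223333:function:AddToSpreadsheet/invocations",
    requestTemplates={"application/json": mapping_template},
)
```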

CORRECT: "Create a mapping template" is the correct answer.

INCORRECT: "Enable request validation" is incorrect as this is used to configure API Gateway to perform basic validation of an API request before proceeding with the integration request.

INCORRECT: "Include the Amazon Resource Name (ARN) of the Lambda function" is incorrect as that doesn’t assist with converting the query string parameters.

INCORRECT: "Change the integration type" is incorrect as to perform a conversion the Lambda integration does not need to have a different integration type such as Lambda proxy.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-override-request-response-parameters.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 33:
Skipped

A company has deployed a new web application that uses Amazon Cognito for authentication. The company wants to allow sign-in from any source but wants to automatically block all sign-in attempts if the risk level is elevated.

Which Amazon Cognito feature will meet these requirements?

Explanation

With adaptive authentication, you can configure your user pool to block suspicious sign-ins or add second factor authentication in response to an increased risk level.

For each sign-in attempt, Amazon Cognito generates a risk score for how likely the sign-in request is to be from a compromised source. This risk score is based on many factors, including whether it detects a new device, user location, or IP address.

For each risk level, you can choose from the following options:

•  Allow - Users can sign in without an additional factor.

•  Optional MFA - Users who have a second factor configured must complete a second factor challenge to sign in.

•  Require MFA - Users who have a second factor configured must complete a second factor challenge to sign in. Amazon Cognito blocks sign-in for users who don't have a second factor configured.

•  Block - Amazon Cognito blocks all sign-in attempts at the designated risk level.

In this case the company should use adaptive authentication and configure Cognito to block sign-in attempts at the specific risk level they feel is appropriate.

CORRECT: "Adaptive authentication" is the correct answer (as explained above.)

INCORRECT: "Advanced security metrics" is incorrect.

Amazon Cognito publishes sign-in attempts, their risk levels, and failed challenges to Amazon CloudWatch. These are known as advanced security metrics. This information is useful for analysis, but adaptive authentication is required to automatically block sign-in attempts.

INCORRECT: "Multi-factor authentication (MFA)" is incorrect.

This is not a method of blocking. In this case adaptive authentication with a block response should be configured.

INCORRECT: "Case sensitive user pools" is incorrect.

This has nothing to do with responding to security threats. This is a configuration that determines whether Cognito considers the case of email addresses and usernames.

References:

https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-adaptive-authentication.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cognito/

Question 34:
Skipped

A developer is creating a serverless web application that includes AWS Lambda functions and a REST API deployed using Amazon API Gateway. The developer maintains multiple branches of code. The developer wants to avoid updating the API gateway target endpoint when a new code push is performed.

What solution would allow the developer to update the Lambda code without needing to update the REST API configuration?

Explanation

You can create one or more aliases for your Lambda function. A Lambda alias is like a pointer to a specific function version. Users can access the function version using the alias Amazon Resource Name (ARN). You can then release new versions of your code without needing to change the alias that applications use to invoke the function.

In this case the REST API could be configured with aliases for the functions. The developer could also use different stages with different endpoints using the aliases. This would enable calling different versions of the application by changing the stage name in the REST API URL.
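
For illustration, a boto3 sketch (the function and alias names are hypothetical) of publishing a new version and repointing an existing alias, which leaves the alias ARN used by API Gateway unchanged:

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish the newly pushed code as an immutable version...
version = lambda_client.publish_version(
    FunctionName="customer-api-handler"        # hypothetical function name
)["Version"]

# ...then point the existing alias at it. API Gateway keeps invoking the
# alias ARN, so the REST API configuration never needs to change.
lambda_client.update_alias(
    FunctionName="customer-api-handler",
    Name="live",                               # hypothetical alias name
    FunctionVersion=version,
)
```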

CORRECT: "Create aliases and versions in AWS Lambda" is the correct answer (as explained above.)

INCORRECT: "Create multiple stages and deployments" is incorrect.

Stages and stage variables could be used to reference different functions or aliases. But if only stages and deployments are used (and not a Lambda alias) then the REST API would need to have the endpoint updated every time a new function version is released.

INCORRECT: "Create different tags for each Lambda function" is incorrect.

Tags cannot be used to define the Lambda endpoints in the REST API.

INCORRECT: "Create multiple private API endpoints and use CNAMEs" is incorrect.

There is no value here in creating multiple private API endpoints as that would be completely different APIs.

References:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 35:
Skipped

A company runs a popular website behind an Amazon CloudFront distribution that uses an Application Load Balancer as the origin. The Developer wants to set up a custom HTTP response to 404 errors that redirects users to another page when content has been removed from the origin.

The Developer wants to use an AWS Lambda@Edge function that is associated with the current CloudFront distribution to accomplish this goal. The solution must use a minimum amount of resources.

Which CloudFront event type should the Developer use to invoke the Lambda@Edge function that contains the redirect logic?

Explanation

When CloudFront receives an HTTP response from the origin server, if there is an origin-response trigger associated with the cache behavior, you can modify the HTTP response to override what was returned from the origin.

Some common scenarios for updating HTTP responses include the following:

   • Changing the status to set an HTTP 200 status code and creating static body content to return to the viewer when an origin returns an error status code (4xx or 5xx)

   • Changing the status to set an HTTP 301 or HTTP 302 status code, to redirect the user to another website when an origin returns an error status code (4xx or 5xx)

You can also replace the HTTP responses in viewer and origin request events. However, in this case it is the error response being returned from the origin that must be modified when a 404 error is encountered for a page that has been removed.
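
For illustration, a minimal Python sketch of a Lambda@Edge origin-response handler that rewrites a 404 from the origin into a redirect (the target URL is hypothetical):

```python
def lambda_handler(event, context):
    """Origin-response trigger: redirect the viewer when the origin returns 404."""
    response = event["Records"][0]["cf"]["response"]

    if response["status"] == "404":
        response["status"] = "302"
        response["statusDescription"] = "Found"
        # Header names are lowercase keys in the Lambda@Edge event structure.
        response["headers"]["location"] = [
            {"key": "Location", "value": "https://example.com/page-moved"}  # hypothetical
        ]

    return response
```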

CORRECT: "Origin response" is the correct answer.

INCORRECT: "Origin request" is incorrect as explained above.

INCORRECT: "Viewer response" is incorrect as explained above.

INCORRECT: "Viewer request" is incorrect as explained above.

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-updating-http-responses.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 36:
Skipped

An application must be refactored for the cloud. The application data is stored in an Amazon DynamoDB table and is processed by a Lambda function which prepares the data for analytics. The data processing currently takes place once a day, but the data analysts require it to be performed in near-real time.

Which architecture pattern could be used to enable the data to be processed as it is received?

Explanation

An event driven architecture will ensure that the records are processed as they are received. This can be achieved by creating a DynamoDB Stream for the existing table and then configuring the Lambda function to retrieve messages from the stream. This would be an event-driven architecture as the event (a record being written to the stream) causes the processing layer to do work.
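
For illustration, a boto3 sketch (the table and function names are hypothetical) of enabling a stream on the existing table and mapping it to the Lambda function:

```python
import boto3

dynamodb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

# Enable a stream of new item images on the existing table
# (this call fails if a stream is already enabled).
stream_arn = dynamodb.update_table(
    TableName="AnalyticsSource",                     # hypothetical table
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_IMAGE"},
)["TableDescription"]["LatestStreamArn"]

# Trigger the existing Lambda function from the stream.
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName="prepare-analytics-data",           # hypothetical function
    StartingPosition="LATEST",
    BatchSize=100,
)
```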

CORRECT: "Use an event-driven architecture" is the correct answer (as explained above.)

INCORRECT: "Use a microservices architecture" is incorrect.

A microservices architecture is where you have many small individual components of an application that are loosely coupled. Though the architecture described may have components of a microservices architecture, this is not the defining characteristic that meets the requirement of near-real time processing.

INCORRECT: "Use a fan-out architecture" is incorrect.

An example of a fan-out architecture is using SNS to send a single notification to many SQS queues.

INCORRECT: "Use a scheduled architecture" is incorrect.

A scheduled architecture would not process events as they occur. It would only process them on a fixed schedule.

References:

https://aws.amazon.com/event-driven-architecture/

Question 37:
Skipped

An application running on Amazon EC2 generates a large number of small files (1KB each) containing personally identifiable information that must be converted to ciphertext. The data will be stored on a proprietary network-attached file system. What is the SAFEST way to encrypt the data using AWS KMS?

Explanation

With AWS KMS you can encrypt files directly with a customer master key (CMK). A CMK can encrypt up to 4KB (4096 bytes) of data in a single encrypt, decrypt, or reencrypt operation. As CMKs cannot be exported from KMS this is a very safe way to encrypt small amounts of data.

Customer managed CMKs are CMKs in your AWS account that you create, own, and manage. You have full control over these CMKs, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the CMK, and scheduling the CMKs for deletion.

AWS managed CMKs are CMKs in your account that are created, managed, and used on your behalf by an AWS service that is integrated with AWS KMS. Some AWS services support only an AWS managed CMK. In this example the Amazon EC2 instance is saving files on a proprietary network-attached file system and this will not have support for AWS managed CMKs.

Data keys are encryption keys that you can use to encrypt data, including large amounts of data and other data encryption keys. You can use AWS KMS CMKs to generate, encrypt, and decrypt data keys. However, AWS KMS does not store, manage, or track your data keys, or perform cryptographic operations with data keys. You must use and manage data keys outside of AWS KMS – this is potentially less secure as you need to manage the security of these keys.
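
For illustration, a boto3 sketch of encrypting one of the small files directly with the customer managed key (the key alias and file names are hypothetical):

```python
import boto3

kms = boto3.client("kms")

with open("record-0001.txt", "rb") as f:        # hypothetical ~1 KB file
    plaintext = f.read()

# Encrypt directly with the customer managed CMK (plaintext must be <= 4 KB).
response = kms.encrypt(
    KeyId="alias/pii-files-key",                # hypothetical key alias
    Plaintext=plaintext,
)

# Store the ciphertext on the network-attached file system.
with open("record-0001.txt.enc", "wb") as f:
    f.write(response["CiphertextBlob"])
```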

CORRECT: "Encrypt the data directly with a customer managed customer master key" is the correct answer.

INCORRECT: "Create a data encryption key from a customer master key and encrypt the data with the data encryption key" is incorrect as this is not the most secure option here as you need to secure the data encryption key outside of KMS. It is also unwarranted as you can use a CMK directly to encrypt files up to 4KB in size.

INCORRECT: "Create a data encryption key from a customer master key and encrypt the data with the customer master key" is incorrect as the creation of the data encryption key is of no use here. It does not necessarily pose a security risk as the data key hasn’t been used (and you can use the CMK to encrypt the data), however this is not the correct process to follow.

INCORRECT: "Encrypt the data directly with an AWS managed customer master key" is incorrect as the network-attached file system is proprietary and therefore will not be supported by AWS managed CMKs.

References:

https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-kms/

Question 38:
Skipped

A developer received the following error message during an AWS CloudFormation deployment:

DELETE_FAILED (The following resource(s) failed to delete: (sg-11223344).)

Which action should the developer take to resolve this error?

Explanation

The stack may be stuck in the DELETE_FAILED state because the dependent object (security group), can't be deleted. This can be for many reasons, for example, the security group could have an ENI attached that’s not part of the CloudFormation stack.

To delete the stack you must choose to delete the stack in the console and then select to retain the resource(s) that failed to delete. This can also be achieved from the AWS CLI or an SDK.
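
For illustration, a boto3 sketch (the stack name and logical ID are hypothetical); RetainResources is only honoured for stacks that are already in the DELETE_FAILED state:

```python
import boto3

cfn = boto3.client("cloudformation")

# Retain the security group that cannot be deleted and delete the rest of the stack.
cfn.delete_stack(
    StackName="my-app-stack",              # hypothetical stack name
    RetainResources=["AppSecurityGroup"],  # logical ID of the resource to retain
)
```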

CORRECT: "Modify the CloudFormation template to retain the security group resource. Then manually delete the resource after deployment" is the correct answer (as explained above.)

INCORRECT: "Add a DependsOn attribute to the sg-11223344 resource in the CloudFormation template. Then delete the stack" is incorrect.

This creates a dependency for stack creation. It does not assist with resolving the issue that is preventing the stack from deleting successfully.

INCORRECT: "Manually delete the security group. Then execute a change set to force deletion of the CloudFormation stack" is incorrect.

You can manually delete the security group. However, you would not then use a change set to continue with the deletion. You would instead simply choose to delete the stack from the console or the CLI.

INCORRECT: "Update the logical ID of the security group resource with the security groups ARN. Then delete the stack" is incorrect.

The issue has nothing to do with logical IDs or ARNs. The resource cannot be deleted by CloudFormation so the developer simply needs to choose to retain the resource before continuing with the stack deletion process.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-cloudformation/

Question 39:
Skipped

A Lambda function is taking a long time to complete. The Developer has discovered that inadequate compute capacity is being allocated to the function. How can the Developer ensure that more compute capacity is allocated to the function?

Explanation

You can allocate memory between 128 MB and 3,008 MB in 64-MB increments. AWS Lambda allocates CPU power linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent of one full vCPU (one vCPU-second of credits per second).

Therefore, the way to provide more compute capacity to this function is to allocate more memory.
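
For illustration, a boto3 sketch (the function name and memory size are hypothetical) of increasing the memory allocation, which proportionally increases CPU:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="data-processing-function",   # hypothetical function name
    MemorySize=1024,                           # MB; CPU scales with memory
)
```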

CORRECT: "Allocate more memory to the function" is the correct answer.

INCORRECT: "Use an instance type with more CPU" is incorrect as Lambda is a serverless service and you cannot choose an instance type for your function.

INCORRECT: "Increase the maximum execution time" is incorrect as the function is not timing out, it’s just taking longer than expected due to having insufficient compute allocated.

INCORRECT: "Increase the reserved concurrency" is incorrect as this would enable more invocations to run in parallel but would not add more CPU to each function execution.

References:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-console.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

Question 40:
Skipped

A company runs an e-commerce website that uses Amazon DynamoDB where pricing for items is dynamically updated in real time. At any given time, multiple updates may occur simultaneously for pricing information on a particular product. This is causing the original editor’s changes to be overwritten without a proper review process.

Which DynamoDB write option should be selected to prevent this overwriting?

Explanation

By default, the DynamoDB write operations (PutItem, UpdateItem, DeleteItem) are unconditional: Each operation overwrites an existing item that has the specified primary key.

DynamoDB optionally supports conditional writes for these operations. A conditional write succeeds only if the item attributes meet one or more expected conditions. Otherwise, it returns an error. Conditional writes are helpful in many situations. For example, you might want a PutItem operation to succeed only if there is not already an item with the same primary key. Or you could prevent an UpdateItem operation from modifying an item if one of its attributes has a certain value.

Conditional writes are helpful in cases where multiple users attempt to modify the same item, for example when two users (Alice and Bob) are working with the same item in a DynamoDB table at the same time.

Therefore, conditional writes should be used to prevent the overwriting that has been occurring.
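
For illustration, a boto3 sketch of a conditional update (the table, key, and attribute names are hypothetical) that only succeeds if the price has not changed since it was read:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("ProductPricing")   # hypothetical table

def update_price(product_id, new_price, expected_price):
    """Apply the update only if no one else has changed the price in the meantime."""
    try:
        table.update_item(
            Key={"ProductId": product_id},
            UpdateExpression="SET Price = :new",
            ConditionExpression="Price = :expected",
            ExpressionAttributeValues={":new": new_price, ":expected": expected_price},
        )
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            print("Item was modified by another editor; re-read before retrying.")
        else:
            raise
```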

CORRECT: "Conditional writes" is the correct answer.

INCORRECT: "Concurrent writes" is incorrect is not a feature of DynamoDB. If concurrent writes occur this could lead to the very issues that conditional writes can be used to resolve.

INCORRECT: "Atomic writes" is incorrect. Atomic reads and writes are something that can be performed using DynamoDB transactions using conditional writes.

INCORRECT: "Batch writes" is incorrect as this is just a way of making multiple put or delete API operations in a single batch operation.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.ConditionalUpdate

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 41:
Skipped

Customers who use a REST API have reported performance issues. A Developer needs to measure the time between when API Gateway receives a request from a client and when it returns a response to the client.

Which metric should the Developer monitor?

Explanation

The Latency metric measures the time between when API Gateway receives a request from a client and when it returns a response to the client. The latency includes the integration latency and other API Gateway overhead.

CORRECT: "Latency" is the correct answer.

INCORRECT: "IntegrationLatency" is incorrect. This measures the time between when API Gateway relays a request to the backend and when it receives a response from the backend.

INCORRECT: "CacheHitCount" is incorrect. This measures the number of requests served from the API cache in a given period.

INCORRECT: "5XXError" is incorrect. This measures the number of server-side errors captured in a given period.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-metrics-and-dimensions.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

https://digitalcloud.training/amazon-cloudwatch/

Question 42:
Skipped

An application will be hosted on the AWS Cloud. Developers will be using an Agile software development methodology with regular updates deployed through a continuous integration and delivery (CI/CD) model. Which AWS service can assist the Developers with automating the build, test, and deploy phases of the release process every time there is a code change?

Explanation

AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software. You can quickly model and configure the different stages of a software release process. CodePipeline automates the steps required to release your software changes continuously.

Specifically, you can:

Automate your release processes: CodePipeline fully automates your release process from end to end, starting from your source repository through build, test, and deployment. You can prevent changes from moving through a pipeline by including a manual approval action in any stage except a Source stage. You can release when you want, in the way you want, on the systems of your choice, across one instance or multiple instances.

Establish a consistent release process: Define a consistent set of steps for every code change. CodePipeline runs each stage of your release according to your criteria.

Speed up delivery while improving quality: You can automate your release process to allow your developers to test and release code incrementally and speed up the release of new features to your customers.

Use your favorite tools: You can incorporate your existing source, build, and deployment tools into your pipeline.

View progress at a glance: You can review real-time status of your pipelines, check the details of any alerts, retry failed actions, view details about the source revisions used in the latest pipeline execution in each stage, and manually rerun any pipeline.

View pipeline history details: You can view details about executions of a pipeline, including start and end times, run duration, and execution IDs.

Therefore, AWS CodePipeline is the perfect tool for the Developer’s requirements.

CORRECT: "AWS CodePipeline" is the correct answer.

INCORRECT: "AWS CloudFormation" is incorrect as CloudFormation is not triggered by changes in a source code repository. You must create change sets for deploying updates.

INCORRECT: "AWS Elastic Beanstalk" is incorrect as this is a platform service that can be used to deploy code to managed runtimes such as Nodejs. It does not update automatically based on changes to source code. You must update that environment when you need to release new code.

INCORRECT: "AWS CodeBuild" is incorrect as CodeBuild is used for compiling code, running unit tests and creating the deployment package. It does not manage the deployment of the code.

References:

https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome-what-can-I-do.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 43:
Skipped

A Developer needs to be notified by email for all new object creation events in a specific Amazon S3 bucket. Amazon SNS will be used for sending the messages. How can the Developer enable these notifications?

Explanation

The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You store this configuration in the notification subresource that is associated with a bucket.

Currently, Amazon S3 can publish notifications for the following events:

New object created events — Amazon S3 supports multiple APIs to create objects. You can request notification when only a specific API is used (for example, s3:ObjectCreated:Put), or you can use a wildcard (for example, s3:ObjectCreated:*) to request notification when an object is created regardless of the API used.

Object removal events — Amazon S3 supports deletes of versioned and unversioned objects. For information about object versioning, see Object Versioning and Using Versioning.

Restore object events — Amazon S3 supports the restoration of objects archived to the S3 Glacier storage class. You request to be notified of object restoration completion by using s3:ObjectRestore:Completed. You use s3:ObjectRestore:Post to request notification of the initiation of a restore.

Reduced Redundancy Storage (RRS) object lost events — Amazon S3 sends a notification message when it detects that an object of the RRS storage class has been lost.

Replication events — Amazon S3 sends event notifications for replication configurations that have S3 Replication Time Control (S3 RTC) enabled. It sends these notifications when an object fails replication, when an object exceeds the 15-minute threshold, when an object is replicated after the 15-minute threshold, and when an object is no longer tracked by replication metrics. It publishes a second event when that object replicates to the destination Region.

Therefore, the Developer should create an event notification for all s3:ObjectCreated:* API calls as this will capture all new object creation events.
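
For illustration, a boto3 sketch (the bucket name and topic ARN are hypothetical) of adding such a notification configuration; the SNS topic policy must also allow Amazon S3 to publish to it:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-upload-bucket",                                       # hypothetical bucket
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:111122223333:new-object-topic",
                "Events": ["s3:ObjectCreated:*"],   # wildcard covers Put, Post, Copy, etc.
            }
        ]
    },
)
```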

CORRECT: "Create an event notification for all s3:ObjectCreated:* API calls" is the correct answer.

INCORRECT: "Create an event notification for all s3:ObjectCreated:Put API calls" is incorrect as this will not capture all new object creation events (e.g. POST or COPY). The wildcard should be used instead.

INCORRECT: "Create an event notification for all s3:ObjectRemoved:Delete API calls" is incorrect as this is used for object deletions.

INCORRECT: "Create an event notification for all s3:ObjectRestore:Post API calls" is incorrect as this is used for restore events from Amazon S3 Glacier archives.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

https://aws.amazon.com/sns/faqs/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

https://digitalcloud.training/aws-application-integration-services/

Question 44:
Skipped

A company is using Amazon API Gateway to manage access to a set of microservices implemented as AWS Lambda functions. The company has made some minor changes to one of the APIs. The company wishes to give existing customers using the API up to 6 months to migrate from version 1 to version 2.

What approach should a Developer use to implement the change?

Explanation

Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services as well as data stored in the AWS Cloud.

A stage is a named reference to a deployment, which is a snapshot of the API. You use a Stage to manage and optimize a particular deployment. For example, you can set up stage settings to enable caching, customize request throttling, configure logging, define stage variables or attach a canary release for testing.

You deploy your API to a stage and it is given a unique URL that contains the stage name. This URL can be used to direct customers to your URL based on the stage (or version) you’d like them to use.

Each stage has its own invocation URL that includes the stage name (https://{api-id}.execute-api.{region}.amazonaws.com/{stage-name}), so the v1 and v2 URLs can be used to direct customers to version 1 or version 2 of the API.

Therefore, the best approach is to use API Gateway to deploy a new stage named v2 to the API and provide users with its URL.
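
For illustration, a boto3 sketch (the API ID is hypothetical) of deploying the updated API to a new v2 stage while the v1 stage continues serving the old deployment:

```python
import boto3

apigw = boto3.client("apigateway")

apigw.create_deployment(
    restApiId="abc123",          # hypothetical REST API ID
    stageName="v2",              # creates the v2 stage if it does not exist
    description="Version 2 of the API",
)

# Customers migrate at their own pace by switching from the /v1 to the /v2 URL.
```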

CORRECT: "Use API Gateway to deploy a new stage named v2 to the API and provide users with its URL" is the correct answer.

INCORRECT: "Update the underlying Lambda function and provide clients with the new Lambda invocation URL" is incorrect as the API has been updated, not the Lambda function. We deploy API updates to stages, so we need to deploy a new stage.

INCORRECT: "Use API Gateway to automatically propagate the change to clients, specifying 180 days in the phased deployment parameter" is incorrect as this is not a valid method of migrating users from one stage in API Gateway to another.

INCORRECT: "Update the underlying Lambda function, create an Amazon CloudFront distribution with the updated Lambda function as its origin" is incorrect as the API has been updated, not the Lambda function.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-stages.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-api-gateway/

Question 45:
Skipped

An Amazon DynamoDB table has been created using provisioned capacity. A manager needs to understand whether the DynamoDB table is cost-effective. How can the manager query how much provisioned capacity is actually being used?

Explanation

You can monitor Amazon DynamoDB using CloudWatch, which collects and processes raw data from DynamoDB into readable, near real-time metrics. These statistics are retained for a period of time, so that you can access historical information for a better perspective on how your web application or service is performing. By default, DynamoDB metric data is sent to CloudWatch automatically.

To determine how much of the provisioned capacity is being used you can monitor ConsumedReadCapacityUnits or ConsumedWriteCapacityUnits over the specified time period.
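
For illustration, a boto3 sketch (the table name is hypothetical) that retrieves the consumed read capacity over the past week:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedReadCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "Orders"}],   # hypothetical table
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    # Sum over the hour divided by 3600 gives the average consumed RCUs per second.
    print(point["Timestamp"], point["Sum"] / 3600)
```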

CORRECT: "Monitor the ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits over a specified time period" is the correct answer.

INCORRECT: "Monitor the ReadThrottleEvents and WriteThrottleEvents metrics for the table" is incorrect as these metrics are used to determine which requests exceed the provisioned throughput limits of a table.

INCORRECT: "Use Amazon CloudTrail and monitor the DescribeLimits API action" is incorrect as CloudTrail records API actions, not performance metrics.

INCORRECT: "Use AWS X-Ray to instrument the DynamoDB table and monitor subsegments" is incorrect. DynamoDB does not directly integrate with X-Ray but you can record information in subsegments for downstream requests. This is not, however, a method for monitoring provisioned capacity utilization.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/monitoring-cloudwatch.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

https://digitalcloud.training/amazon-cloudwatch/

Question 46:
Skipped

An application that runs on an Amazon EC2 instance needs to access and make API calls to multiple AWS services.

What is the MOST secure way to provide access to the AWS services with MINIMAL management overhead?

Explanation

An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. This is a secure way to authorize an EC2 instance to access AWS services.

Instance profiles are created automatically if you use the console to add a role to an instance. You can also create instance profiles using the AWS CLI or API and assign roles to them.
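
For illustration, a boto3 sketch (the profile name, role name, and instance ID are hypothetical) of creating an instance profile and attaching it to a running instance:

```python
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

iam.create_instance_profile(InstanceProfileName="app-instance-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-instance-profile",
    RoleName="app-service-role",                  # hypothetical existing IAM role
)

ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-instance-profile"},
    InstanceId="i-0123456789abcdef0",             # hypothetical instance ID
)
```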

CORRECT: "Use EC2 instance profiles" is the correct answer.

INCORRECT: "Use AWS KMS to store and retrieve credentials" is incorrect as KMS is used for encrypting data, not storing credentials.

INCORRECT: "Use AWS root user to make requests to the application " is incorrect as this is not a secure way to access services as the root user has full privileges to the AWS account.

INCORRECT: "Store and retrieve credentials from AWS CodeCommit" is incorrect as this is not a suitable solution for storing this data as CodeCommit is used for storing source code.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2/

Question 47:
Skipped

An application searches a DynamoDB table to return items based on primary key attributes. A developer noticed some ProvisionedThroughputExceeded exceptions being generated by DynamoDB.

How can the application be optimized to reduce the load on DynamoDB and use the LEAST amount of RCU?

Explanation

In general, Scan operations are less efficient than other operations in DynamoDB. A Scan operation always scans the entire table or secondary index. It then filters out values to provide the result you want, essentially adding the extra step of removing data from the result set.

If possible, you should avoid using a Scan operation on a large table or index with a filter that removes many results. Also, as a table or index grows, the Scan operation slows. The Scan operation examines every item for the requested values and can use up the provisioned throughput for a large table or index in a single operation. For faster response times, design your tables and indexes so that your applications can use Query instead of Scan. (For tables, you can also consider using the GetItem and BatchGetItem APIs.)

Additionally, eventual consistency consumes fewer RCUs than strong consistency. Therefore, the application should be refactored to use query APIs with eventual consistency.
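
For illustration, a boto3 sketch of a Query using the default eventually consistent reads (the table and key names are hypothetical):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Items")   # hypothetical table

# Query on the primary key; ConsistentRead=False (the default) uses
# eventually consistent reads, which consume half the RCUs of strong reads.
response = table.query(
    KeyConditionExpression=Key("ItemId").eq("item-123"),   # hypothetical key
    ConsistentRead=False,
)
items = response["Items"]
```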

CORRECT: "Modify the application to issue query API calls with eventual consistency reads" is the correct answer.

INCORRECT: "Modify the application to issue scan API calls with strong eventual reads" is incorrect as the Scan API is less efficient as it will return all items in the table.

INCORRECT: "Modify the application to issue query API calls with strong consistency reads" is incorrect as strong consistency reads will consume more RCUs.

INCORRECT: "Modify the application to issue scan API calls with strong consistency reads" is incorrect as the Scan API is less efficient as it will return all items in the table and strong consistency reads will use more RCUs.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-query-scan.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 48:
Skipped

A Developer is deploying an application using Docker containers running on the Amazon Elastic Container Service (ECS). The Developer is testing application latency and wants to capture trace information between the microservices.

Which solution will meet these requirements?

Explanation

In Amazon ECS, create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to your Amazon ECS cluster. You can use port mappings and network mode settings in your task definition file to allow your application to communicate with the daemon container.

CORRECT: "Create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to the Amazon ECS cluster." is the correct answer.

INCORRECT: "Install the Amazon CloudWatch agent on the container image. Use the CloudWatch SDK to publish custom metrics from each of the microservices" is incorrect. The CloudWatch agent does not capture trace information between Docker containers.

INCORRECT: "Install the AWS X-Ray daemon on each of the Amazon ECS instances" is incorrect. The X-Ray daemon must be installed on the Docker containers, not the ECS hosts.

INCORRECT: "Install the AWS X-Ray daemon locally on an Amazon EC2 instance and instrument the Amazon ECS microservices using the X-Ray SDK" is incorrect. You cannot trace Docker microservices from an Amazon EC2 instance.

References:

https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 49:
Skipped

A new AWS Lambda function processes data and sends it to another service. The data is around 1 MB in size. A developer has been asked to update the function so it encrypts the data before sending it on to the other service.

Which API call is required to perform the encryption?

Explanation

To create a data key, call the GenerateDataKey operation. AWS KMS generates the data key. Then it encrypts a copy of the data key under a symmetric encryption KMS key that you specify. The operation returns a plaintext copy of the data key and the copy of the data key encrypted under the KMS key.

AWS KMS cannot use a data key to encrypt data. But you can use the data key outside of AWS KMS, such as by using OpenSSL or a cryptographic library like the AWS Encryption SDK.

After using the plaintext data key to encrypt data, remove it from memory as soon as possible. You can safely store the encrypted data key with the encrypted data, so it is available to decrypt the data.

In this case, the Lambda function can use the encryption keys generated to encrypt the data before sending it to the other service. The GenerateDataKey API is the correct API action to use.
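
A minimal Python sketch of this envelope-encryption pattern, using boto3 together with the third-party cryptography package purely as an example client-side library (the key alias and payload are hypothetical):

```python
import base64

import boto3
from cryptography.fernet import Fernet   # example client-side encryption library

kms = boto3.client("kms")
data = b"example payload " * 65536       # stand-in for the ~1 MB of data

# 1. Ask KMS for a data key (plaintext copy plus encrypted copy).
key = kms.generate_data_key(
    KeyId="alias/app-data-key",          # hypothetical KMS key alias
    KeySpec="AES_256",
)

# 2. Encrypt the payload locally with the plaintext data key.
fernet = Fernet(base64.urlsafe_b64encode(key["Plaintext"]))
ciphertext = fernet.encrypt(data)

# 3. Send the ciphertext plus the *encrypted* data key to the other service;
#    discard the plaintext key from memory as soon as possible.
payload = {
    "ciphertext": ciphertext,
    "encrypted_data_key": base64.b64encode(key["CiphertextBlob"]).decode(),
}
```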

CORRECT: "Issue the AWS KMS GenerateDataKey API to return an encryption key" is the correct answer (as explained above.)

INCORRECT: "Issue the AWS KMS GenerateDataKeyWithoutPlainText API to return an encryption key" is incorrect.

This API action returns only an encrypted data key. When you need to use the data key, you must ask AWS KMS to decrypt it. The correct API for obtaining a key with which to encrypt larger amounts of data is the GenerateDataKey API.

INCORRECT: "Pass the data directly to AWS KMS and issue the Encrypt API for encryption" is incorrect.

AWS KMS can only encrypt data up to 4096 bytes. Therefore, the data must be encrypted outside of KMS and a data key must be generated for this purpose.

INCORRECT: "Pass the data directly to AWS KMS and issue the ReEncrypt API for encryption" is incorrect.

The ReEncrypt API decrypts data and then re-encrypts it within AWS KMS. In this case the data is not currently encrypted, and the operation cannot take place within KMS because the data is larger than the KMS maximum of 4,096 bytes.

References:

https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html

https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#data-keys

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-kms/

Question 50:
Skipped

A company is developing a new online game that will run on top of Amazon ECS. Four distinct Amazon ECS services will be part of the architecture, each requiring specific permissions to various AWS services. The company wants to optimize the use of the underlying Amazon EC2 instances by bin packing the containers based on memory reservation.

Which configuration would allow the Development team to meet these requirements MOST securely?

Explanation

With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.

Instead of creating and distributing your AWS credentials to the containers or using the EC2 instance’s role, you can associate an IAM role with an ECS task definition or RunTask API operation. The applications in the task’s containers can then use the AWS SDK or CLI to make API requests to authorized AWS services.

In this case each service requires access to different AWS services, so following the principle of least privilege it is best to assign a separate role to each task definition.
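
A sketch of what one of the four task definitions and its service might look like with boto3 (all names, ARNs, and values are hypothetical); memoryReservation feeds the binpack placement strategy:

```python
import boto3

ecs = boto3.client("ecs")

# Each of the four services gets its own task definition referencing its own IAM role.
ecs.register_task_definition(
    family="matchmaking-service",                                    # hypothetical
    taskRoleArn="arn:aws:iam::111122223333:role/matchmaking-task-role",
    networkMode="bridge",
    containerDefinitions=[
        {
            "name": "matchmaking",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/matchmaking:latest",
            "memoryReservation": 256,    # MB; used by the binpack strategy below
            "essential": True,
        }
    ],
)

ecs.create_service(
    cluster="game-cluster",                    # hypothetical cluster
    serviceName="matchmaking",
    taskDefinition="matchmaking-service",
    desiredCount=2,
    placementStrategy=[{"type": "binpack", "field": "memory"}],
)
```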

CORRECT: "Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS task definition to reference the associated IAM role" is the correct answer.

INCORRECT: "Create a new Identity and Access Management (IAM) instance profile containing the required permissions for the various ECS services, then associate that instance role with the underlying EC2 instances" is incorrect. It is a best practice to use IAM roles for tasks instead of assigning the roles to the container instances.

INCORRECT: "Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS service to reference the associated IAM role" is incorrect as the reference should be made within the task definition.

INCORRECT: "Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then, create an IAM group and configure the ECS cluster to reference that group" is incorrect as the reference should be made within the task definition.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/

Question 51:
Skipped

A Developer is attempting to call the Amazon CloudWatch API and is receiving HTTP 400: ThrottlingException errors intermittently. When a call fails, no data is retrieved.

What best practice should the Developer first attempt to resolve this issue?

Explanation

Occasionally, you may receive the 400 ThrottlingException error for PutMetricData API calls in Amazon CloudWatch.

CloudWatch requests are throttled for each Amazon Web Services (AWS) account on a per-Region basis to help service performance. For current PutMetricData API request limits, see CloudWatch Limits.

All calls to the PutMetricData API in an AWS Region count towards the maximum allowed request rate. This number includes calls from any custom or third-party application, such as calls from the CloudWatch Agent, the AWS Command Line Interface (AWS CLI), or the AWS Management Console.

Resolutions: It's a best practice to use the following methods to reduce your call rate and avoid API throttling:

- Distribute your API calls evenly over time rather than making several API calls in a short time span. If you require data to be available with a one-minute resolution, you have an entire minute to emit that metric. Use jitter (randomized delay) to send data points at various times.

- Combine as many metrics as possible into a single API call. For example, a single PutMetricData call can include 20 metrics and 150 data points. You can also use pre-aggregated data sets, such as StatisticSet, to publish aggregated data points, thus reducing the number of PutMetricData calls per second.

- Retry your call with exponential backoff and jitter.

After attempting the above resolutions, AWS suggests the following: “If you still require a higher limit, you can request a limit increase. Increasing the rate limit can have a high financial impact on your AWS bill.”

Therefore, the first thing the Developer should do, from the list of options presented, is to retry the call with exponential backoff.
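
For illustration, a short Python sketch (the namespace and retry limit are hypothetical) of retrying PutMetricData with exponential backoff and jitter:

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

cloudwatch = boto3.client("cloudwatch")

def put_metrics_with_backoff(metric_data, max_retries=5):
    """Retry throttled PutMetricData calls with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return cloudwatch.put_metric_data(
                Namespace="MyApp",            # hypothetical namespace
                MetricData=metric_data,
            )
        except ClientError as e:
            if e.response["Error"]["Code"] not in ("Throttling", "ThrottlingException"):
                raise
            # Exponential backoff (1s, 2s, 4s, ...) plus random jitter.
            time.sleep((2 ** attempt) + random.uniform(0, 1))
    raise RuntimeError("PutMetricData still throttled after retries")
```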

CORRECT: "Retry the call with exponential backoff" is the correct answer.

INCORRECT: "Contact AWS Support for a limit increase" is incorrect. As mentioned above, there are other resolutions the Developer should attempt before contacting support to raise the limit.

INCORRECT: "Use the AWS CLI to get the metrics" is incorrect as this will still make the same API calls.

INCORRECT: "Analyze the applications and remove the API call" is incorrect as this is not a good resolution to the issue as this may mean that important monitoring and logging data is not recorded for the application.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-400-error-throttling/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 52:
Skipped

A company has an application that provides access to objects in Amazon S3 based on the type of user. The user types are registered user and guest user. The company has 30,000 users. Information is read from an S3 bucket depending on the user type.

Which approaches are recommended to provide access to both user types MOST efficiently? (Select TWO.)

Explanation

Amazon Cognito can be used with identity pools. A Cognito identity pool supports both authenticated and unauthenticated identities. Authenticated identities belong to users who are authenticated by any supported identity provider. Unauthenticated identities typically belong to guest users.

The most secure way of using the IAM service for this solution would be to use separate roles. IAM roles can be securely assumed based on the type of user. Each role can be configured with different permission sets as applicable to registered and guest users.

CORRECT: "Use Amazon Cognito to provide access using authenticated and unauthenticated roles" is a correct answer (as explained above.)

CORRECT: "Use the AWS IAM service and let the application assume different roles depending on the type of user" is also a correct answer (as explained above.)

INCORRECT: "Store separate access keys in the application code for registered users and guest users to provide access to the objects" is incorrect.

This is highly insecure. You should avoid embedding access keys in application code and use IAM roles instead.

INCORRECT: "Create a new IAM user for each user and grant access to the S3 objects" is incorrect.

This would be a lot of users and is an inefficient solution.

INCORRECT: "Use S3 bucket policies to restrict read access to specific IAM users" is incorrect.

This would also be highly complex with so many users and would need constant updating when users need to be added or removed.

References:

https://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-iam/

https://digitalcloud.training/amazon-cognito/

Question 53:
Skipped

A Developer is creating a serverless application. The application looks up information about a customer using a separate Lambda function for each item such as address and phone number. The Developer has created branches in AWS Step Functions for each lookup function.

How can the Developer optimize the performance, so the lookups complete faster?

Explanation

The Parallel state ("Type": "Parallel") can be used to create parallel branches of execution in your AWS Step Functions state machine. This will improve the performance of the application by ensuring that all information lookups occur in parallel.

CORRECT: "Use a Parallel state to iterate over all the branches parallel" is the correct answer.

INCORRECT: "Use a Choice state to lookup the specific information required" is incorrect. This is used to add additional logic but is not required and is unlikely to improve performance.

INCORRECT: "Use a Wait state to reduce the wait time for function execution" is incorrect. The Wait state delays the state machine from continuing for a specified time.

INCORRECT: "Use a Map state to iterate over all the items" is incorrect. The Map state executes the same steps for multiple entries of an array in the state input.

References:

https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-parallel-state.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

Question 54:
Skipped

A Developer is creating an application and would like to add AWS X-Ray to trace user requests end-to-end through the software stack. The Developer has implemented the changes and tested the application and the traces are successfully sent to X-Ray. The Developer then deployed the application on an Amazon EC2 instance, and noticed that the traces are not being sent to X-Ray.

What is the most likely cause of this issue? (Select TWO.)

Explanation

AWS X-Ray is a service that collects data about requests that your application serves, and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to your application, you can see detailed information not only about the request and response, but also about calls that your application makes to downstream AWS resources, microservices, databases and HTTP web APIs.

You can run the X-Ray daemon on the following operating systems on Amazon EC2:

- Amazon Linux

- Ubuntu

- Windows Server (2012 R2 and newer)

The X-Ray daemon must be running on the EC2 instance in order to collect data. You can use a user data script to run the daemon automatically when you launch the instance. The X-Ray daemon uses the AWS SDK to upload trace data to X-Ray, and it needs AWS credentials with permission to do that.

On Amazon EC2, the daemon uses the instance's instance profile role automatically. The IAM role or user that the daemon's credentials belong to must have permission to write data to the service on your behalf.

- To use the daemon on Amazon EC2, create a new instance profile role or add the managed policy to an existing one.

- To use the daemon on Elastic Beanstalk, add the managed policy to the Elastic Beanstalk default instance profile role.

- To run the daemon locally, create an IAM user and save its access keys on your computer.

Therefore, the most likely cause of the issues being experienced in this scenario is that the instance’s instance profile role does not have permission to upload trace data to X-Ray or the X-Ray daemon is not running on the EC2 instance.

CORRECT: "The instance’s instance profile role does not have permission to upload trace data to X-Ray" is the correct answer.

CORRECT: "The X-Ray daemon is not installed on the EC2 instance." is also a correct answer.

INCORRECT: "The X-Ray API is not installed on the EC2 instance " is incorrect as you do not install the X-Ray API, you run the X-Ray daemon. The API will always be accessible using the X-Ray endpoint.

INCORRECT: "The traces are reaching X-Ray, but the Developer does not have permission to view the records" is incorrect as the developer previously viewed data in X-Ray so clearly has permissions.

INCORRECT: "The X-Ray segments are being queued" is incorrect. The X-Ray daemon is responsible for relaying trace data to X-Ray. However, it will not queue data for an extended period of time so this is unlikely to be a cause of this issue.

References:

https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ec2.html

https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html

https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html#xray-daemon-permissions

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 55:
Skipped

There are multiple AWS accounts across multiple regions managed by a company. The operations team require a single operational dashboard that displays some key performance metrics from these accounts and regions. What is the SIMPLEST solution?

Explanation

You can create cross-account cross-Region dashboards, which summarize your CloudWatch data from multiple AWS accounts and multiple Regions into one dashboard. From this high-level dashboard you can get a view of your entire application, and also drill down into more specific dashboards without having to log in and out of accounts or switch Regions.

You can create cross-account cross-Region dashboards in the AWS Management Console and programmatically.

CORRECT: "Create an Amazon CloudWatch cross-account cross-region dashboard" is the correct answer.

INCORRECT: "Create an Amazon CloudWatch dashboard in one account and region and import the data from the other accounts and regions" is incorrect as this is more complex and unnecessary.

INCORRECT: "Create an AWS Lambda function that collects metrics from each account and region and pushes the metrics to the account where the dashboard has been created" is incorrect as this is not a simple solution.

INCORRECT: "Create an Amazon CloudTrail trail that applies to all regions and deliver the logs to a single Amazon S3 bucket. Create a dashboard using the data in the bucket" is incorrect as CloudTrail logs API activity, not performance metrics.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_xaxr_dashboard.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 56:
Skipped

An application is hosted in AWS Elastic Beanstalk and is connected to a database running on Amazon RDS MySQL. A Developer needs to instrument the application to trace database queries and calls to downstream services using AWS X-Ray.

How can the Developer enable tracing for the application?

Explanation

To relay trace data from your application to AWS X-Ray, you can run the X-Ray daemon on your Elastic Beanstalk environment's Amazon EC2 instances.

Elastic Beanstalk platforms provide a configuration option that you can set to run the daemon automatically. You can enable the daemon in a configuration file in your source code or by choosing an option in the Elastic Beanstalk console. When you enable the configuration option, the daemon is installed on the instance and runs as a service.

For example, a configuration file named .ebextensions/xray-daemon.config in your source code can set the XRayEnabled option in the aws:elasticbeanstalk:xray namespace to true.

This configuration ensures the X-Ray daemon starts, and the Developer can then enable tracing for the application as required.

CORRECT: "Add a .ebextensions/xray-daemon.config file to the source code to enable the X-Ray daemon" is the correct answer.

INCORRECT: "Add a xray-daemon.config file to the root of the source code to enable the X-Ray deamon" is incorrect as all .config files must be stored in the .ebextensions folder in the source code.

INCORRECT: "Enable active tracing in the Elastic Beanstalk console" is incorrect as you cannot enable active tracing through the console for Elastic Beanstalk. This is available for AWS Lambda and API Gateway.

INCORRECT: "Enable X-Ray tracing using an AWS Lambda function" is incorrect as there is no need to add a Lambda function to the application to add tracing support. The developer can enable tracing by enabling the X-Ray daemon.

References:

https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-beanstalk.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

https://digitalcloud.training/aws-elastic-beanstalk/

Question 57:
Skipped

A Developer has deployed an application that runs on an Auto Scaling group of Amazon EC2 instances. The application data is stored in an Amazon DynamoDB table and records are constantly updated by all instances. An instance sometimes retrieves old data. The Developer wants to correct this by making sure the reads are strongly consistent.

How can the Developer accomplish this?

Explanation

When you request a strongly consistent read, DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful.

The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data and there will be no Item element in the response.

GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to true. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.

Therefore, the Developer should set ConsistentRead to true when calling GetItem.
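For illustration only, a strongly consistent read using the AWS SDK for Python (boto3) might look like the following sketch; the table name and key are hypothetical:

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('ApplicationData')  # hypothetical table name

# ConsistentRead=True forces a strongly consistent read, so the response
# reflects all prior successful writes rather than possibly stale data.
response = table.get_item(
    Key={'RecordId': 'record-123'},  # hypothetical primary key
    ConsistentRead=True
)
item = response.get('Item')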

CORRECT: "Set ConsistentRead to true when calling GetItem" is the correct answer.

INCORRECT: "Create a new DynamoDB Accelerator (DAX) table" is incorrect as DAX is not used to enable strongly consistent reads. DAX is used for improving read performance as it caches data in an in-memory cache.

INCORRECT: "Set consistency to strong when calling UpdateTable" is incorrect as you cannot use this API action to configure consistency at a table level.

INCORRECT: "Use the GetShardIterator command" is incorrect as this is not related to DynamoDB, it is related to Amazon Kinesis.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_GetItem.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 58:
Skipped

A Developer is creating a DynamoDB table for storing transaction logs. The table has 10 write capacity units (WCUs). The Developer needs to configure the read capacity units (RCUs) for the table in order to MAXIMIZE the number of requests allowed per second. Which of the following configurations should the Developer use?

Explanation

A read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. For example, suppose that you create a table with 10 provisioned read capacity units. This allows you to perform 10 strongly consistent reads per second, or 20 eventually consistent reads per second, for items up to 4 KB.

Reading an item larger than 4 KB consumes more read capacity units. For example, a strongly consistent read of an item that is 8 KB (4 KB × 2) consumes 2 read capacity units. An eventually consistent read on that same item consumes only 1 read capacity unit.

Item sizes for reads are rounded up to the next 4 KB multiple. For example, reading a 3,500-byte item consumes the same throughput as reading a 4 KB item. Therefore, the smaller (1 KB) items in this scenario would consume the same number of RCUs as the 4 KB items. Also, we know that eventually consistent reads consume half the RCUs of strongly consistent reads.

The following bullets provide the read throughput for each configuration:

· Eventually consistent, 15 RCUs, 1 KB item = 30 items read per second.

· Strongly consistent, 15 RCUs, 1 KB item = 15 items read per second.

· Eventually consistent, 5 RCUs, 4 KB item = 10 items read per second.

· Strongly consistent, 5 RCUs, 4 KB item = 5 items read per second.

Therefore, the Developer should choose the option to enable eventually consistent reads of 15 RCUs reading items that are 1 KB in size as this will result in the highest number of items read per second.
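To make the arithmetic explicit, the following short Python sketch (for illustration only) reproduces the calculations above:

import math

def reads_per_second(rcus, item_size_kb, strongly_consistent):
    # Item sizes are rounded up to the next 4 KB multiple.
    capacity_per_read = math.ceil(item_size_kb / 4)
    # Eventually consistent reads consume half the capacity of strongly consistent reads.
    if not strongly_consistent:
        capacity_per_read /= 2
    return rcus / capacity_per_read

print(reads_per_second(15, 1, strongly_consistent=False))  # 30.0 items per second
print(reads_per_second(15, 1, strongly_consistent=True))   # 15.0 items per second
print(reads_per_second(5, 4, strongly_consistent=False))   # 10.0 items per second
print(reads_per_second(5, 4, strongly_consistent=True))    # 5.0 items per second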

CORRECT: "Eventually consistent reads of 15 RCUs reading items that are 1 KB in size" is the correct answer.

INCORRECT: "Eventually consistent reads of 5 RCUs reading items that are 4 KB in size" is incorrect as described above.

INCORRECT: "Strongly consistent reads of 5 RCUs reading items that are 4 KB in size" is incorrect as described above.

INCORRECT: "Strongly consistent reads of 15 RCUs reading items that are 1KB in size" is incorrect as described above.

References:

https://docs.amazonaws.cn/en_us/amazondynamodb/latest/developerguide/ProvisionedThroughput.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

Question 59:
Skipped

A company has deployed a REST API using Amazon API Gateway with a Lambda authorizer. The company needs to log who has accessed the API and how the caller accessed the API. They also require logs that include errors and execution traces for the Lambda authorizer.

Which combination of actions should the Developer take to meet these requirements? (Select TWO.)

Explanation

There are two types of API logging in CloudWatch: execution logging and access logging. In execution logging, API Gateway manages the CloudWatch Logs. The process includes creating log groups and log streams, and reporting to the log streams any caller's requests and responses.

The logged data includes errors or execution traces (such as request or response parameter values or payloads), data used by Lambda authorizers, whether API keys are required, whether usage plans are enabled, and so on.

In access logging, you, as an API Developer, want to log who has accessed your API and how the caller accessed the API. You can create your own log group or choose an existing log group that could be managed by API Gateway.
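As an illustrative sketch only, both types of logging can be enabled on an API stage with the UpdateStage API, for example using boto3. The API ID, stage name, log group ARN and log format below are hypothetical, and this assumes a CloudWatch Logs role ARN has already been configured in the API Gateway account settings for execution logging:

import boto3

apigateway = boto3.client('apigateway')

apigateway.update_stage(
    restApiId='a1b2c3d4e5',   # hypothetical API ID
    stageName='prod',         # hypothetical stage name
    patchOperations=[
        # Execution logging: errors and execution traces for all methods
        {'op': 'replace', 'path': '/*/*/logging/loglevel', 'value': 'INFO'},
        {'op': 'replace', 'path': '/*/*/logging/dataTrace', 'value': 'true'},
        # Access logging: who accessed the API and how the caller accessed it
        {'op': 'replace',
         'path': '/accessLogSettings/destinationArn',
         'value': 'arn:aws:logs:us-east-1:123456789012:log-group:api-access-logs'},
        {'op': 'replace',
         'path': '/accessLogSettings/format',
         'value': '$context.identity.sourceIp $context.httpMethod $context.resourcePath $context.status'},
    ]
)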

CORRECT: "Enable API Gateway execution logging" is a correct answer.

CORRECT: "Enable API Gateway access logs" is also a correct answer.

INCORRECT: "Enable detailed logging in Amazon CloudWatch" is incorrect. Detailed logging does not provide the requested information.

INCORRECT: "Create an API Gateway usage plan" is incorrect. This will not enable logging.

INCORRECT: "Enable server access logging" is incorrect. This is a type of logging that applies to Amazon S3 buckets.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

Question 60:
Skipped

A team of Developers are working on a shared project and need to be able to collaborate on code. The shared application code must be encrypted at rest, stored on a highly available and durable architecture, and support multiple versions and batch change tracking.

Which AWS service should the Developer use?

Explanation

AWS CodeCommit is a fully managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. AWS CodeCommit automatically encrypts your files in transit and at rest.

AWS CodeCommit helps you collaborate on code with teammates via pull requests, branching, and merging. You can implement workflows that include code reviews and feedback by default, and control who can make changes to specific branches.

CORRECT: "AWS CodeCommit" is the correct answer.

INCORRECT: "AWS CodeBuild" is incorrect. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages

INCORRECT: "Amazon S3" is incorrect. Amazon S3 is an object-based storage system and does not support the features required here.

INCORRECT: "AWS Cloud9" is incorrect. AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser.

References:

https://aws.amazon.com/codecommit/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 61:
Skipped

An application stores data in Amazon RDS and uses Amazon ElastiCache to improve read performance. The developer has configured ElastiCache to update the cache immediately after any writes to the primary database.

What will be the result of this approach to caching?

Explanation

The ElastiCache deployment is using a write-through caching strategy. The write-through strategy adds data or updates data in the cache whenever data is written to the database. This means data in the cache is never stale. There is a write penalty, but not a read penalty (in terms of latency added).

However, with a write-through strategy, much of the data that is written is never read, so the cache can become large and expensive. Adding a TTL to records can help control the size of the cache.
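As a minimal sketch of the write-through pattern (using the Python redis client; the endpoint, key names, TTL and database helper below are hypothetical):

import json
import redis

# Hypothetical ElastiCache endpoint
cache = redis.Redis(host='my-cluster.xxxxxx.cache.amazonaws.com', port=6379)

def save_customer(db_connection, customer_id, record):
    # 1. Write the record to the primary database (Amazon RDS).
    write_to_database(db_connection, customer_id, record)  # hypothetical helper
    # 2. Write-through: update the cache immediately, so cached data is never stale.
    #    The TTL limits how long infrequently read items occupy the cache.
    cache.setex(f'customer:{customer_id}', 3600, json.dumps(record))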

CORRECT: "The cache will become large and expensive because the infrequently requested data is also written to the cache" is the correct answer (as explained above.)

INCORRECT: "Caching will slow performance of the read queries because the cache is updated when the cache cannot find the requested data" is incorrect.

This is true of a lazy loading strategy. With a write-through strategy there is a write penalty, but not a read penalty.

INCORRECT: "Load on the RDS database instance will increase because the cache is updated for every database update" is incorrect.

There is still only one write to the RDS database instance.

INCORRECT: "There is a cache miss penalty because the cache is updated only after a cache miss, resulting in response latency" is incorrect.

This is a description of a lazy loading strategy, not a write-through strategy.

References:

https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

Question 62:
Skipped

A Development team would like to migrate their existing application code from a GitHub repository to AWS CodeCommit.

What needs to be created before they can migrate a cloned repository to CodeCommit over HTTPS?

Explanation

The simplest way to set up connections to AWS CodeCommit repositories is to configure Git credentials for CodeCommit in the IAM console, and then use those credentials for HTTPS connections.

You can also use these same credentials with any third-party tool or integrated development environment (IDE) that supports HTTPS authentication using a static user name and password. For examples, see the AWS documentation on connections from development tools.
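Git credentials are typically generated in the IAM console, but as a sketch they can also be created through the IAM API; the IAM user name below is hypothetical:

import boto3

iam = boto3.client('iam')

# Generates an HTTPS Git user name and password for AWS CodeCommit
response = iam.create_service_specific_credential(
    UserName='dev-team-member',               # hypothetical IAM user
    ServiceName='codecommit.amazonaws.com'
)

credential = response['ServiceSpecificCredential']
print(credential['ServiceUserName'])   # use as the Git user name
print(credential['ServicePassword'])   # use as the Git password (returned only once)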

CORRECT: "A set of credentials generated from IAM" is the correct answer.

INCORRECT: "A GitHub secure authentication token" is incorrect as this is not how you authenticated to CodeCommit.

INCORRECT: "A public and private SSH key file" is incorrect as that is required for accessing CodeCommit using SSH.

INCORRECT: "An Amazon EC2 IAM role with CodeCommit permissions" is incorrect as that would be used to provide access to administer CodeCommit. However, the question is asking how to authenticate a Git client to CodeCommit using HTTPS.

References:

https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-developer-tools/

Question 63:
Skipped

A Development team are currently creating a new application that uses a microservices design pattern and runs on Docker containers. The team would like to run the application on AWS using a managed platform and want to minimize management overhead. Which service should the Development team use?

Explanation

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

There are two launch types for Amazon ECS: the EC2 launch type and the Fargate launch type.

With the EC2 launch type you must manage the infrastructure layer (the Amazon EC2 container instances), whereas with Fargate you do not. Therefore, for this scenario the Fargate launch type should be used.
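For illustration, a task could be launched on Fargate with a sketch like the following; the cluster, task definition and subnet values are hypothetical:

import boto3

ecs = boto3.client('ecs')

ecs.run_task(
    cluster='microservices-cluster',      # hypothetical cluster name
    launchType='FARGATE',                 # no EC2 instances to manage
    taskDefinition='orders-service:1',    # hypothetical task definition
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],  # hypothetical subnet
            'assignPublicIp': 'ENABLED'
        }
    }
)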

CORRECT: "Amazon ECS with Fargate launch type" is the correct answer.

INCORRECT: "Amazon ECS with EC2 launch type" is incorrect as the EC2 launch type requires more platform overhead as you must manage Amazon EC2 instances.

INCORRECT: "Amazon Elastic Kubernetes Service (EKS)" is incorrect as this would require more management overhead (unless used with Fargate).

INCORRECT: "AWS Lambda" is incorrect as this is not a service that can be used to run Docker containers.

References:

https://aws.amazon.com/fargate/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/

Question 64:
Skipped

A company manages a web application that is deployed on AWS Elastic Beanstalk. A Developer has been instructed to update to a new version of the application code. There is no tolerance for downtime if the update fails and rollback should be fast.

What is the SAFEST deployment method to use?

Explanation

AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (All at once, Rolling, Rolling with additional batch, and Immutable) and options that let you configure batch size and health check behavior during deployments.

For this scenario we need to ensure that no downtime occurs if the update fails and that there is a quick way to roll back.

All policies except for Immutable and Blue/Green require manual redeployment of the previous version of the code, which takes time and results in downtime. With an Immutable deployment the new version is launched on a fresh set of instances, so a failed update can be rolled back quickly by terminating those instances. The blue/green option is not actually an Elastic Beanstalk deployment policy but a separate method you can use; however, it is not offered as an answer choice.

Therefore, the best deployment policy to use for this scenario is the Immutable deployment policy.
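As a sketch, the deployment policy can be configured on an environment through the Elastic Beanstalk API; the environment name below is hypothetical:

import boto3

eb = boto3.client('elasticbeanstalk')

# Sets the deployment policy to Immutable for future application deployments
eb.update_environment(
    EnvironmentName='my-web-app-prod',   # hypothetical environment name
    OptionSettings=[
        {
            'Namespace': 'aws:elasticbeanstalk:command',
            'OptionName': 'DeploymentPolicy',
            'Value': 'Immutable'
        }
    ]
)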

CORRECT: "Immutable" is the correct answer.

INCORRECT: "All at once" is incorrect as it causes complete downtime and manual redeployment in the case of failure.

INCORRECT: "Rolling" is incorrect because it requires manual redeployment in the case of failure.

INCORRECT: "Rolling with Additional Batch" is incorrect because it requires manual redeployment in the case of failure.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-beanstalk/

Question 65:
Skipped

An organization has an account for each environment: Production, Testing, and Development. A Developer with an IAM user in the Development account needs to launch resources in the Production and Testing accounts. What is the MOST efficient way to provide access?

Explanation

You can grant your IAM users permission to switch to roles within your AWS account or to roles defined in other AWS accounts that you own. This is known as cross-account access.

For example, a user in the Development account may need to access an S3 bucket in the Production account. The user assumes a role in the Production account and accesses the S3 bucket with the permissions of that role. This is more efficient than giving the user a separate IAM user in each account. In this scenario the user requests to switch to the role through either the console or the API/CLI.
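Programmatically, switching roles is a single STS AssumeRole call; the role ARN and session name below are hypothetical:

import boto3

sts = boto3.client('sts')

# Assume the role that was created in the Production account
response = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/ProductionAccess',  # hypothetical role ARN
    RoleSessionName='dev-user-session'                          # hypothetical session name
)

creds = response['Credentials']

# Use the temporary credentials to launch resources in the Production account
prod_session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)
ec2 = prod_session.client('ec2')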

CORRECT: "Create a role with the required permissions in the Production and Testing accounts and have the Developer assume that role" is the correct answer.

INCORRECT: "Create a separate IAM user in each account and have the Developer login separately to each account" is incorrect as this is not the most efficient method of providing access. Cross-account access is preferred .

INCORRECT: "Create an IAM group in the Production and Testing accounts and add the Developer’s user from the Development account to the groups" is incorrect as you cannot add an IAM user from another AWS account to a group.

INCORRECT: "Create an IAM permissions policy in the Production and Testing accounts and reference the IAM user in the Development account" is incorrect as you cannot reference an IAM user from another AWS account in a permissions policy.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_aws-accounts.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-iam/