Attempt 2
Question 1: Incorrect

A developer is configuring an Amazon API Gateway as a front door to expose backend business logic. To keep the solution cost-effective, the developer has opted for HTTP APIs.

Which of the following services is not available for use with HTTP APIs via Amazon API Gateway?

Explanation

Correct option:

AWS Web Application Firewall (AWS WAF)

Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale.

API Gateway REST APIs: A collection of HTTP resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. Typically, API resources are organized in a resource tree according to the application logic. Each API resource can expose one or more API methods that have unique HTTP verbs supported by API Gateway.

API Gateway HTTP API: A collection of routes and methods that are integrated with backend HTTP endpoints or Lambda functions. You can deploy this collection in one or more stages. Each route can expose one or more API methods that have unique HTTP verbs supported by API Gateway.
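To make the HTTP API model concrete, here is a minimal sketch (using Python and boto3, with a placeholder Lambda function ARN) of quick-creating an HTTP API that proxies requests to a Lambda function; note the function would also need a resource-based permission allowing API Gateway to invoke it:

import boto3

apigw = boto3.client("apigatewayv2")

# Quick-create an HTTP API that proxies all requests to a Lambda function.
# The function ARN below is a placeholder, not a real resource.
response = apigw.create_api(
    Name="orders-http-api",
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:123456789012:function:orders-handler",
)

print(response["ApiEndpoint"])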

AWS Lambda, AWS Identity and Access Management (IAM), and Amazon Cognito are all available for use with HTTP APIs. AWS Web Application Firewall (AWS WAF), however, is only available with REST APIs.

Choosing between HTTP APIs and REST APIs: via - https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html

Incorrect options:

AWS Lambda

AWS Identity and Access Management (IAM)

Amazon Cognito

These three services are all supported with HTTP APIs, as discussed in the explanation above, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html

Question 2: Correct

A company has more than 100 million members worldwide enjoying 125 million hours of TV shows and movies each day. The company uses AWS for nearly all its computing and storage needs, which use more than 10,000 server instances on AWS. This results in an extremely complex and dynamic networking environment where applications are constantly communicating inside AWS and across the Internet. Monitoring and optimizing its network is critical for the company.

The company needs a solution for ingesting and analyzing the multiple terabytes of real-time data its network generates daily in the form of flow logs. Which technology/service should the company use to ingest this data economically while retaining the flexibility to direct it to other downstream systems?

Explanation

Correct option:

Amazon Kinesis Data Streams

Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.

Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering).

Incorrect options:

Amazon Simple Queue Service (SQS) - Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows. AWS recommends using Amazon SQS for cases where the failure or success of individual messages is important, message delays are needed, and there is only one consumer for the messages received (if more than one consumer needs to consume the message, then AWS suggests configuring more queues).

Amazon Kinesis Firehose - Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.

Kinesis Data Streams is highly customizable and best suited for developers building custom applications or streaming data for specialized needs. Data Streams also provides greater flexibility than Firehose in integrating with downstream applications, and it is the more cost-effective option of the two. Therefore, KDS is the right solution.

AWS Glue - AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all of the capabilities needed for data integration so that you can start analyzing your data and putting it to use in minutes instead of months. Glue is not best suited to handle real-time data.

References:

https://aws.amazon.com/kinesis/data-streams/

https://aws.amazon.com/sqs/

https://aws.amazon.com/kinesis/data-firehose/

Question 3: Correct

A banking application needs to send real-time alerts and notifications based on any updates from the backend services. The company wants to avoid implementing complex polling mechanisms for these notifications.

Which of the following types of APIs supported by the Amazon API Gateway is the right fit?

Explanation

Correct option:

WebSocket APIs

In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms.

For example, you could build a serverless application using an API Gateway WebSocket API and AWS Lambda to send and receive messages to and from individual users or groups of users in a chat room. Or you could invoke backend services such as AWS Lambda, Amazon Kinesis, or an HTTP endpoint based on message content.

You can use API Gateway WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following:

  1. Chat applications
  2. Real-time dashboards such as stock tickers
  3. Real-time alerts and notifications

API Gateway provides WebSocket API management functionality such as the following:

  1. Monitoring and throttling of connections and messages
  2. Using AWS X-Ray to trace messages as they travel through the APIs to backend services
  3. Easy integration with HTTP/HTTPS endpoints
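To illustrate the server-push model, a backend component (for example, a Lambda function) can send a message to a connected client through the API Gateway Management API. This is only a hedged sketch with boto3; the endpoint URL and connection ID are placeholders:

import json
import boto3

# The endpoint is https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
# of your deployed WebSocket API; the value below is a placeholder.
client = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/production",
)

# Push an alert to a specific connected client, with no polling on the client side.
client.post_to_connection(
    ConnectionId="L0SM9cOFvHcCIhw=",   # placeholder connection ID
    Data=json.dumps({"type": "alert", "message": "Account update available"}).encode("utf-8"),
)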

Incorrect options:

REST or HTTP APIs

REST APIs - An API Gateway REST API is made up of resources and methods. A resource is a logical entity that an app can access through a resource path. A method corresponds to a REST API request that is submitted by the user of your API and the response returned to the user.

For example, /incomes could be the path of a resource representing the income of the app user. A resource can have one or more operations that are defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the reported expenses incurred by the caller.

HTTP APIs - HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. You can use HTTP APIs to send requests to AWS Lambda functions or to any publicly routable HTTP endpoint.

For example, you can create an HTTP API that integrates with a Lambda function on the backend. When a client calls your API, API Gateway sends the request to the Lambda function and returns the function's response to the client.

A server push mechanism is not possible with REST or HTTP APIs.

Reference:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-overview-developer-experience.html

Question 4: Correct

Two policies are attached to an IAM user. The first policy states that the user has explicitly been denied all access to EC2 instances. The second policy states that the user has been allowed permission for EC2:Describe action.

When the user tries to use 'Describe' action on an EC2 instance using the CLI, what will be the output?

Explanation

Correct option:

The user will be denied access because the policy has an explicit deny on it - The user will be denied access because an explicit deny always overrides an allow.

Policy Evaluation explained: via - https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html

Incorrect options:

The IAM user stands in an invalid state, because of conflicting policies - This is an incorrect statement. Access policies can have allow and deny permissions on them and based on policy rules they are evaluated. A user account does not get invalid because of policies.

The user will get access because it has an explicit allow - As discussed above, explicit deny overrides all other permissions and hence the user will be denied access.

The order of the policy matters. If policy 1 is before 2, then the user is denied access. If policy 2 is before 1, then the user is allowed access - If policies that apply to a request include an Allow statement and a Deny statement, the Deny statement trumps the Allow statement. The request is explicitly denied.

Reference:

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html

Question 5: Incorrect

Your company has a three-year contract with a healthcare provider. The contract states that monthly database backups must be retained for the duration of the contract for compliance purposes. Currently, the backup retention limit for automated backups on Amazon Relational Database Service (RDS) does not meet your requirements.

Which of the following solutions can help you meet your requirements?

Explanation

Correct option:

Create a cron event in CloudWatch, which triggers an AWS Lambda function that triggers the database snapshot - There are multiple ways to run periodic jobs in AWS. CloudWatch Events with Lambda is the simplest of all solutions. To do this, create a CloudWatch Rule and select “Schedule” as the Event Source. You can either use a cron expression or provide a fixed rate (such as every 5 minutes). Next, select “Lambda Function” as the Target. Your Lambda function will contain the code that takes the database snapshot.
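A minimal sketch of such a Lambda handler using Python and boto3 is shown below (the DB instance identifier is a placeholder); the scheduled CloudWatch rule would simply invoke this function once a month:

import datetime
import boto3

rds = boto3.client("rds")

def lambda_handler(event, context):
    # Identifier of the RDS instance to back up -- placeholder value.
    db_instance_id = "prod-postgres-db"
    snapshot_id = "monthly-{}-{}".format(
        db_instance_id, datetime.datetime.utcnow().strftime("%Y-%m-%d")
    )
    # Manual snapshots are retained until you delete them, unlike automated backups.
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=db_instance_id,
    )
    return {"snapshot": snapshot_id}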

Incorrect options:

Enable RDS automatic backups - You can enable automatic backups but as of 2020, the retention period is 0 to 35 days.

Enable RDS Read replicas - Amazon RDS uses its built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. Read replicas are useful for heavy read-only data workloads. They are not suitable for the given use-case.

Enable RDS Multi-AZ - Multi-AZ allows you to create a highly available application with RDS. It does not directly help in database backups or retention periods.

Reference:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html

https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html

Question 6: Correct

You are a developer working with the AWS CLI to create Lambda functions that contain environment variables. Your functions will require over 50 environment variables containing sensitive information such as database table names.

What are the limits on the total size and the number of environment variables you can create for AWS Lambda?

Explanation

Correct option:

The total size of all environment variables shouldn't exceed 4 KB. There is no limit on the number of variables

An environment variable is a pair of strings that are stored in a function's version-specific configuration. The Lambda runtime makes environment variables available to your code and sets additional environment variables that contain information about the function and invocation request. The total size of all environment variables cannot exceed 4 KB. There is no limit defined on the number of variables that can be used.
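For example, environment variables can be set from the SDK as well as the AWS CLI; a hedged boto3 sketch follows (the function name and variable values are placeholders), keeping in mind that the combined size of all key-value pairs must stay under 4 KB:

import boto3

lambda_client = boto3.client("lambda")

# Set (or replace) the function's environment variables.
lambda_client.update_function_configuration(
    FunctionName="billing-processor",   # placeholder function name
    Environment={
        "Variables": {
            "ORDERS_TABLE": "orders",
            "CUSTOMERS_TABLE": "customers",
        }
    },
)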

Incorrect options:

The total size of all environment variables shouldn't exceed 8 KB. The maximum number of variables that can be created is 50 - Incorrect option. The total size of environment variables cannot exceed 4 KB with no restriction on the number of variables.

The total size of all environment variables shouldn't exceed 8 KB. There is no limit on the number of variables - Incorrect option. The total size of environment variables cannot exceed 4 KB with no restriction on the number of variables.

The total size of all environment variables shouldn't exceed 4 KB. The maximum number of variables that can be created is 35 - Incorrect option. The total size of environment variables cannot exceed 4 KB with no restriction on the number of variables.

Reference:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html

Question 7: Correct

A healthcare mobile app uses proprietary machine learning algorithms to provide early diagnosis using patient health metrics. To protect this sensitive data, the development team wants to transition to a scalable user management system with log-in/sign-up functionality that also supports Multi-Factor Authentication (MFA).

Which of the following options can be used to implement a solution with the LEAST amount of development effort? (Select two)

Explanation

Correct options:

Use Amazon Cognito for user-management and facilitating the log-in/sign-up process

Use Amazon Cognito to enable Multi-Factor Authentication (MFA) when users log-in

Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.

A Cognito user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito, or federate through a third-party identity provider (IdP). Whether your users sign-in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK.

Cognito user pools provide support for sign-up and sign-in services as well as security features such as multi-factor authentication (MFA).

via - https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html

Exam Alert:

Please review the following note to understand the differences between Cognito User Pools and Cognito Identity Pools: via - https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html

Incorrect options:

Use Lambda functions and DynamoDB to create a custom solution for user management

Use Lambda functions and RDS to create a custom solution for user management

As the problem statement mentions that the solution should require the least amount of development effort, you cannot use Lambda functions with DynamoDB or RDS to create a custom solution. So both these options are incorrect.

Use Amazon SNS to send Multi-Factor Authentication (MFA) code via SMS to mobile app users - Amazon SNS cannot be used to send MFA codes via SMS to the user's mobile devices as this functionality is only meant to be used for IAM users. An SMS (short message service) MFA device can be any mobile device with a phone number that can receive standard SMS text messages. AWS will soon end support for SMS multi-factor authentication (MFA).

Please see this for more details:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_sms.html

Reference:

https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html

Question 8: Correct

As a developer, you are looking at creating a custom configuration for Amazon EC2 instances running in an Auto Scaling group. The solution should allow the group to auto-scale based on the metric of 'average RAM usage' for your Amazon EC2 instances.

Which option provides the best solution?

Explanation

Correct option:

Create a custom metric in CloudWatch and make your instances send data to it using PutMetricData. Then, create an alarm based on this metric - You can create a custom CloudWatch metric for your EC2 Linux instance statistics by creating a script through the AWS Command Line Interface (AWS CLI). Then, you can monitor that metric by pushing it to CloudWatch.

You can publish your own metrics to CloudWatch using the AWS CLI or an API. Metrics produced by AWS services are standard resolution by default. When you publish a custom metric, you can define it as either standard resolution or high resolution. When you publish a high-resolution metric, CloudWatch stores it with a resolution of 1 second, and you can read and retrieve it with a period of 1 second, 5 seconds, 10 seconds, 30 seconds, or any multiple of 60 seconds.

High-resolution metrics can give you more immediate insight into your application's sub-minute activity. But, every PutMetricData call for a custom metric is charged, so calling PutMetricData more often on a high-resolution metric can lead to higher charges.
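A minimal sketch of publishing such a custom metric with boto3 is shown below; the namespace, metric name, and the way the RAM usage value is obtained are assumptions for illustration. An alarm on the average of this metric across the Auto Scaling group can then drive the scaling policy:

import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_memory_metric(instance_id, mem_used_percent):
    # Publish one data point of the custom memory metric for this instance.
    cloudwatch.put_metric_data(
        Namespace="Custom/EC2",   # assumed namespace
        MetricData=[
            {
                "MetricName": "MemoryUtilization",
                "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
                "Value": mem_used_percent,
                "Unit": "Percent",
            }
        ],
    )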

Incorrect options:

Create a custom alarm for your ASG and make your instances trigger the alarm using PutAlarmData API - This solution will not work; your instances would need to be aware of each other's RAM utilization to know when the average RAM usage is high enough to trigger the alarm.

Enable detailed monitoring for EC2 and ASG to get the RAM usage data and create a CloudWatch Alarm on top of it - Enabling detailed monitoring only changes the frequency at which metric data is sent to CloudWatch, from 5-minute to 1-minute intervals. You still need to create and publish the custom RAM usage metric you wish to track.

Migrate your application to AWS Lambda - This option has been added as a distractor. You cannot use Lambda for the given use-case.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-monitoring.html#CloudWatchAlarm

Question 9: Correct

The development team at a social media company is considering using Amazon ElastiCache to boost the performance of their existing databases.

As a Developer Associate, which of the following use-cases would you recommend as the BEST fit for ElastiCache? (Select two)

Explanation

Correct options:

Use ElastiCache to improve latency and throughput for read-heavy application workloads

Use ElastiCache to improve performance of compute-intensive workloads

Amazon ElastiCache allows you to run in-memory data stores in the AWS cloud. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing.

via - https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/elasticache-use-cases.html

Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals) or compute-intensive workloads (such as a recommendation engine) by allowing you to store the objects that are often read in the cache.

Overview of Amazon ElastiCache features: via - https://aws.amazon.com/elasticache/features/

Incorrect options:

Use ElastiCache to improve latency and throughput for write-heavy application workloads - As mentioned earlier in the explanation, Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads. Caching is not a good fit for write-heavy applications as the cache goes stale at a very fast rate.

Use ElastiCache to improve performance of Extract-Transform-Load (ETL) workloads - ETL workloads involve reading and transforming high volume data which is not a good fit for caching. You should use AWS Glue or Amazon EMR to facilitate ETL workloads.

Use ElastiCache to run highly complex JOIN queries - Complex JOIN queries can be run on relational databases such as RDS or Aurora. ElastiCache is not a good fit for this use-case.

References:

https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/elasticache-use-cases.html

https://aws.amazon.com/elasticache/features/

Question 10: Correct

An IT company has migrated to a serverless application stack on the AWS Cloud with the compute layer being implemented via Lambda functions. The engineering managers would like to actively troubleshoot any failures in the Lambda functions.

As a Developer Associate, which of the following solutions would you suggest for this use-case?

Explanation

Correct option:

"The developers should insert logging statements in the Lambda function code which are then available via CloudWatch logs"

When you invoke a Lambda function, two types of error can occur. Invocation errors occur when the invocation request is rejected before your function receives it. Function errors occur when your function's code or runtime returns an error. Depending on the type of error, the type of invocation, and the client or service that invokes the function, the retry behavior, and the strategy for managing errors varies.

Lambda function failures are commonly caused by:

  1. Permissions issues
  2. Code issues
  3. Network issues
  4. Throttling
  5. Invoke API 500 and 502 errors

You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.

Please see this note for more details: via - https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html
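As a simple illustration, a Python Lambda function can use the standard logging module; everything written through the logger (or to stdout/stderr) is pushed to the function's CloudWatch Logs group. The process() helper below is a placeholder for your business logic:

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def process(event):
    # Placeholder for the actual business logic.
    return {"status": "ok"}

def lambda_handler(event, context):
    # These statements appear in /aws/lambda/<function name> in CloudWatch Logs.
    logger.info("Received event: %s", event)
    try:
        result = process(event)
        logger.info("Processing succeeded: %s", result)
        return result
    except Exception:
        logger.exception("Processing failed")
        raise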

Incorrect options:

"Use CloudWatch Events to identify and notify any failures in the Lambda code" - Typically Lambda functions are triggered as a response to a CloudWatch Event. CloudWatch Events cannot identify and notify failures in the Lambda code.

"Use CodeCommit to identify and notify any failures in the Lambda code"

"Use CodeDeploy to identify and notify any failures in the Lambda code"

AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers.

Neither CodeCommit nor CodeDeploy can identify and notify failures in the Lambda code.

Reference:

https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html

Question 11: Correct

You have been asked by your Team Lead to enable detailed monitoring of the Amazon EC2 instances your team uses. As a Developer working on the AWS CLI, which of the below commands would you run?

Explanation

Correct option:

aws ec2 monitor-instances --instance-ids i-1234567890abcdef0 - This enables detailed monitoring for a running instance.

EC2 detailed monitoring: via - https://docs.aws.amazon.com/cli/latest/reference/ec2/monitor-instances.html
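For reference, the equivalent call from the Python SDK looks roughly like this (same example instance ID):

import boto3

ec2 = boto3.client("ec2")

# Enable detailed (1-minute) monitoring on an already-running instance.
ec2.monitor_instances(InstanceIds=["i-1234567890abcdef0"])

# To revert to basic (5-minute) monitoring:
# ec2.unmonitor_instances(InstanceIds=["i-1234567890abcdef0"])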

Incorrect options:

aws ec2 run-instances --image-id ami-09092360 --monitoring Enabled=true - This syntax enables detailed monitoring while launching a new instance from the AWS CLI; it does not enable detailed monitoring on instances that are already running.

aws ec2 run-instances --image-id ami-09092360 --monitoring State=enabled - This is an invalid syntax

aws ec2 monitor-instances --instance-id i-1234567890abcdef0 - This is an invalid syntax

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html

https://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html

Question 12: Correct

As a Senior Developer, you manage 10 Amazon EC2 instances that make read-heavy database requests to the Amazon RDS for PostgreSQL. You need to make this architecture resilient for disaster recovery.

Which of the following features will help you prepare for database disaster recovery? (Select two)

Explanation

Correct options:

Use cross-Region Read Replicas

In addition to using Read Replicas to reduce the load on your source DB instance, you can also use Read Replicas to implement a DR solution for your production DB environment. If the source DB instance fails, you can promote your Read Replica to a standalone source server. Read Replicas can also be created in a different Region than the source database. Using a cross-Region Read Replica can help ensure that you get back up and running if you experience a regional availability issue.

Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates backups in a single AWS Region

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for MariaDB, MySQL, Oracle, and PostgreSQL DB instances use Amazon's failover technology.

The automated backup feature of Amazon RDS enables point-in-time recovery for your database instance. Amazon RDS will backup your database and transaction logs and store both for a user-specified retention period. If it’s a Multi-AZ configuration, backups occur on the standby to reduce I/O impact on the primary. Automated backups are limited to a single AWS Region while manual snapshots and Read Replicas are supported across multiple Regions.
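A hedged boto3 sketch of creating a cross-Region read replica is shown below; the Regions, instance identifier, and source ARN are placeholders. The client is created in the destination Region and the source instance is referenced by its ARN:

import boto3

# Create the replica in the DR Region (us-west-2 here, as an example).
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-postgres-replica",
    # Cross-Region replicas reference the source DB instance by its ARN -- placeholder ARN.
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:prod-postgres-db",
)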

Incorrect options:

Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates backups across multiple Regions - This is an incorrect statement. Automated backups are limited to a single AWS Region while manual snapshots and Read Replicas are supported across multiple Regions.

Use RDS Provisioned IOPS (SSD) Storage in place of General Purpose (SSD) Storage - Amazon RDS Provisioned IOPS Storage is an SSD-backed storage option designed to deliver fast, predictable, and consistent I/O performance. This storage type enhances the performance of the RDS database, but this isn't a disaster recovery option.

Use database cloning feature of the RDS DB cluster - This option has been added as a distractor. Database cloning is only available for Aurora and not for RDS.

References:

https://aws.amazon.com/rds/features/

https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/

Question 13: Correct

Your company runs business logic on smaller software components that perform various functions. Some functions process information in a few seconds while others seem to take a long time to complete. Your manager asked you to decouple the components that take a long time to ensure software applications stay responsive under load. You decide to configure Amazon Simple Queue Service (SQS) to work with your Elastic Beanstalk configuration.

Which of the following Elastic Beanstalk environments should you choose to meet this requirement?

Explanation

Correct option:

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

Elastic Beanstalk Key Concepts: via - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html

Dedicated worker environment - If your AWS Elastic Beanstalk application performs operations or workflows that take a long time to complete, you can offload those tasks to a dedicated worker environment. Decoupling your web application front end from a process that performs blocking operations is a common way to ensure that your application stays responsive under load.

A long-running task is anything that substantially increases the time it takes to complete a request, such as processing images or videos, sending emails, or generating a ZIP archive. These operations can take only a second or two to complete, but a delay of a few seconds is a lot for a web request that would otherwise complete in less than 500 ms.

via - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html

Incorrect options:

Single Instance Worker node - Worker machines in Kubernetes are called nodes. Amazon EKS worker nodes are standard Amazon EC2 instances. EKS worker nodes are not to be confused with the Elastic Beanstalk worker environment. Since we are talking about the Elastic Beanstalk environment, this is not the correct choice.

Load-balancing, Autoscaling environment - A load-balancing and autoscaling environment uses the Elastic Load Balancing and Amazon EC2 Auto Scaling services to provision the Amazon EC2 instances that are required for your deployed application. Amazon EC2 Auto Scaling automatically starts additional instances to accommodate increasing load on your application. If your application requires scalability with the option of running in multiple Availability Zones, use a load-balancing, autoscaling environment. This is not the right environment for the given use-case since it will add costs to the overall solution.

Single Instance with Elastic IP - A single-instance environment contains one Amazon EC2 instance with an Elastic IP address. A single-instance environment doesn't have a load balancer, which can help you reduce costs compared to a load-balancing, autoscaling environment. This is not a highly available architecture, because if that one instance goes down then your application is down. This is not recommended for production environments.

Reference:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-types.html

Question 14: Correct

You are creating a mobile application that needs access to the AWS API Gateway. Users will need to register first before they can access your API and you would like the user management to be fully managed.

Which authentication option should you use for your API Gateway layer?

Explanation

Correct option:

Use Cognito User Pools - As an alternative to using IAM roles and policies or Lambda authorizers, you can use an Amazon Cognito user pool to control who can access your API in Amazon API Gateway. To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user into the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which are typically set to the request's Authorization header. The API call succeeds only if the required token is supplied and the supplied token is valid, otherwise, the client isn't authorized to make the call because the client did not have credentials that could be authorized.
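A rough boto3 sketch of attaching a Cognito user pool authorizer to a REST API is shown below; the API ID and user pool ARN are placeholders:

import boto3

apigw = boto3.client("apigateway")

# Create a COGNITO_USER_POOLS authorizer on an existing REST API.
authorizer = apigw.create_authorizer(
    restApiId="a1b2c3d4e5",   # placeholder API ID
    name="cognito-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

# The returned authorizer["id"] is then referenced when configuring each API
# method to use authorizationType "COGNITO_USER_POOLS".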

Incorrect options:

Use Lambda Authorizer - A Lambda authorizer (formerly known as a custom authorizer) is an API Gateway feature that uses a Lambda function to control access to your API. A Lambda authorizer is useful if you want to implement a custom authorization scheme that uses a bearer token authentication strategy such as OAuth or SAML, or that uses request parameters to determine the caller's identity. This won't be a fully managed user management solution but it would allow you to check for access at the AWS API Gateway level.

Use IAM permissions with sigv4 - Signature Version 4 is the process to add authentication information to AWS requests sent by HTTP. For security, most requests to AWS must be signed with an access key, which consists of an access key ID and secret access key. These two keys are commonly referred to as your security credentials. But, we cannot possibly create an IAM user for every visitor of the site, so this is where social identity providers come in to help.

Use API Gateway User Pools - This is a made-up option.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html

https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html

https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html

Question 15: Incorrect

A cybersecurity company is running a serverless backend with several compute-heavy workflows running on Lambda functions. The development team has noticed a performance lag after analyzing the performance metrics for the Lambda functions.

As a Developer Associate, which of the following options would you suggest as the BEST solution to address the compute-heavy workloads?

Explanation

Correct option:

Increase the amount of memory available to the Lambda functions

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.

In the AWS Lambda resource model, you choose the amount of memory you want for your function which allocates proportional CPU power and other resources. This means you will have access to more compute power when you choose one of the new larger settings. You can set your memory in 64MB increments from 128MB to 3008MB. You access these settings when you create a function or update its configuration. The settings are available using the AWS Management Console, AWS CLI, or SDKs.

via - https://docs.aws.amazon.com/lambda/latest/dg/configuration-console.html

Therefore, by increasing the amount of memory available to the Lambda functions, you can run the compute-heavy workflows.

Incorrect options:

Invoke the Lambda functions asynchronously to process the compute-heavy workflows - When you invoke a function asynchronously, you don't wait for a response from the function code. You hand off the event to Lambda and Lambda handles the rest. You can configure how Lambda handles errors and can send invocation records to a downstream resource to chain together components of your application. The method of invocation has no bearing on the Lambda function's ability to process the compute-heavy workflows.

Use reserved concurrency to account for the compute-heavy workflows

Use provisioned concurrency to account for the compute-heavy workflows

Concurrency is the number of requests that your function is serving at any given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency. The type of concurrency has no bearing on the Lambda function's ability to process the compute-heavy workflows. So both these options are incorrect.

Reference:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-console.html

Question 16: Correct

You are running a cloud file storage website with an Internet-facing Application Load Balancer, which routes requests from users over the internet to 10 registered Amazon EC2 instances. Users are complaining that your website always asks them to re-authenticate when they switch pages. You are puzzled because this behavior is not seen in your local machine or dev environment.

What could be the reason?

Explanation

Correct option:

The Load Balancer does not have stickiness enabled - Sticky sessions are a mechanism to route requests to the same target in a target group. This is useful for servers that maintain state information to provide a continuous experience to clients. To use sticky sessions, the clients must support cookies.

When a load balancer first receives a request from a client, it routes the request to a target, generates a cookie named AWSALB that encodes information about the selected target, encrypts the cookie, and includes the cookie in the response to the client. The client should include the cookie that it receives in subsequent requests to the load balancer. When the load balancer receives a request from a client that contains the cookie, if sticky sessions are enabled for the target group and the request goes to the same target group, the load balancer detects the cookie and routes the request to the same target.

More info here: via - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#sticky-sessions
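Stickiness is configured as a target group attribute. A hedged boto3 sketch of enabling duration-based stickiness follows (the target group ARN is a placeholder):

import boto3

elbv2 = boto3.client("elbv2")

# Enable load-balancer-generated cookie stickiness on the target group for 1 day.
elbv2.modify_target_group_attributes(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/web-targets/73e2d6bc24d8a067"   # placeholder ARN
    ),
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)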

Incorrect options:

Application Load Balancer is in slow-start mode, which gives ALB a little more time to read and write session data - This is an invalid statement. The load balancer serves as a single point of contact for clients and distributes incoming traffic across its healthy registered targets. By default, a target starts to receive its full share of requests as soon as it is registered with a target group and passes an initial health check. Using slow start mode gives targets time to warm up before the load balancer sends them a full share of requests. This does not help in session management.

The EC2 instances are logging out the users because the instances never have access to the client IPs because of the Load Balancer - This is an incorrect statement. Elastic Load Balancing stores the IP address of the client in the X-Forwarded-For request header and passes the header to the server. If needed, the server can read IP addresses from this data.

The Load Balancer does not have TLS enabled - To use an HTTPS listener, you must deploy at least one SSL/TLS server certificate on your load balancer. The load balancer uses a server certificate to terminate the front-end connection and then decrypt requests from clients before sending them to the targets. This does not help in session management.

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#sticky-sessions

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#slow-start-mode

Question 17: Correct

Recently, you started an online learning platform using AWS Lambda and Amazon API Gateway. Your first version was successful, and you began developing new features for the second version. You would like to gradually introduce the second version by routing only 10% of the incoming traffic to the new Lambda version.

Which solution should you opt for?

Explanation

Correct option:

Use AWS Lambda aliases - A Lambda alias is like a pointer to a specific Lambda function version. You can create one or more aliases for your AWS Lambda function. Users can access the function version using the alias ARN. An alias can only point to a function version, not to another alias. You can update an alias to point to a new version of the function. Event sources such as Amazon S3 invoke your Lambda function. These event sources maintain a mapping that identifies the function to invoke when events occur. If you specify a Lambda function alias in the mapping configuration, you don't need to update the mapping when the function version changes. This is the right choice for the current requirement.
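For the 10% split described in the use-case, a weighted alias can be configured roughly as follows with boto3 (the function name, alias name, and version numbers are placeholders):

import boto3

lambda_client = boto3.client("lambda")

# Keep the alias pointed at version 1, but route 10% of invocations to version 2.
lambda_client.update_alias(
    FunctionName="learning-platform-api",   # placeholder function name
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.1}},
)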

Incorrect options:

Use Tags to distinguish the different versions - You can tag Lambda functions to organize them by owner, project or department. Tags are freeform key-value pairs that are supported across AWS services for use in filtering resources and adding detail to billing reports. This does not address the given use-case.

Use environment variables - You can use environment variables to store secrets securely and adjust your function's behavior without updating code. An environment variable is a pair of strings that are stored in a function's version-specific configuration. The Lambda runtime makes environment variables available to your code and sets additional environment variables that contain information about the function and invocation request. For example, you can use environment variables to point to test, development or production databases by passing it as an environment variable during runtime. This option does not address the given use-case.

Deploy your Lambda in a VPC - Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you've defined. This adds another layer of security for your entire architecture. Not the right choice for the given scenario.

References:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html

https://docs.aws.amazon.com/lambda/latest/dg/configuration-tags.html

https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html

Question 18: Correct

As part of their on-boarding, the employees at an IT company need to upload their profile photos in a private S3 bucket. The company wants to build an in-house web application hosted on an EC2 instance that should display the profile photos in a secure way when the employees mark their attendance.

As a Developer Associate, which of the following solutions would you suggest to address this use-case?

Explanation

Correct option:

"Save the S3 key for each user's profile photo in a DynamoDB table and use a lambda function to dynamically generate a pre-signed URL. Reference this URL for display via the web application"

On Amazon S3, all objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects.

You can also use an IAM instance profile to create a pre-signed URL. When you create a pre-signed URL for your object, you must provide your security credentials, specify a bucket name, an object key, specify the HTTP method (GET to download the object), and expiration date and time. The pre-signed URLs are valid only for the specified duration. So for the given use-case, the object key can be retrieved from the DynamoDB table, and then the application can generate the pre-signed URL using the IAM instance profile.

Please see this note for more details: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
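A minimal sketch of generating such a pre-signed URL with boto3 is shown below; the bucket name is a placeholder, and the object key would be the value read from the DynamoDB table:

import boto3

s3 = boto3.client("s3")

def get_profile_photo_url(s3_key):
    # Time-limited GET URL for a private object; expires after 5 minutes.
    return s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "employee-profile-photos", "Key": s3_key},   # placeholder bucket
        ExpiresIn=300,
    )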

Incorrect options:

"Make the S3 bucket public so that the application can reference the image URL for display" - Making the S3 bucket public would violate the security and privacy requirements for the use-case, so this option is incorrect.

"Keep each user's profile image encoded in base64 format in a DynamoDB table and reference it from the application for display"

"Keep each user's profile image encoded in base64 format in an RDS table and reference it from the application for display"

It's a bad practice to keep the raw image data in the database itself. Also, it would not be possible to create a secure access URL for the image without a significant development effort. Hence both these options are incorrect.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html

Question 19: Correct

You are working for a shipping company that is automating the creation of ECS clusters with an Auto Scaling Group using an AWS CloudFormation template that accepts the cluster name as its parameter. Initially, you launched the template with the input value 'MainCluster', which deployed five instances across two Availability Zones. The second time, you launched the template with the input value 'SecondCluster'. However, the instances created in the second run were also launched into 'MainCluster', even though you specified a different cluster name.

What is the root cause of this issue?

Explanation

Correct option:

The cluster name Parameter has not been updated in the file /etc/ecs/ecs.config during bootstrap - In the ecs.config file you have to configure the parameter ECS_CLUSTER='your_cluster_name' to register the container instance with a cluster named 'your_cluster_name'.

Sample config for ECS Container Agent: via - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html

Incorrect options:

The EC2 instance is missing IAM permissions to join the other clusters - EC2 instances are getting registered to the first cluster, so permissions are not an issue here and hence this statement is an incorrect choice for the current use case.

The ECS agent Docker image must be re-built to connect to the other clusters - Since the first set of instances got created from the template without any issues, there is no issue with the ECS agent here.

The security groups on the EC2 instance are pointing to the wrong ECS cluster - Security groups govern the rules about the incoming network traffic to your ECS containers. The issue here is not about network traffic rules, and hence this is a wrong choice for the current use case.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html

Question 20: Correct

Your team lead has requested code review of your code for Lambda functions. Your code is written in Python and makes use of the Amazon Simple Storage Service (S3) to upload logs to an S3 bucket. After the review, your team lead has recommended reuse of execution context to improve the Lambda performance.

Which of the following actions will help you implement the recommendation?

Explanation

Correct option:

Move the Amazon S3 client initialization, out of your function handler - AWS best practices for Lambda suggest taking advantage of execution context reuse to improve the performance of your functions. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources. This saves execution time and cost. To avoid potential data leaks across invocations, don’t use the execution context to store user data, events, or other information with security implications.
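In Python, the recommendation looks roughly like the sketch below (the bucket name is a placeholder): the S3 client is created once per execution environment, outside the handler, so warm invocations reuse it.

import boto3

# Initialized once per execution environment, outside the handler,
# so subsequent invocations reuse the same client and its connections.
s3 = boto3.client("s3")
BUCKET = "application-logs-bucket"   # placeholder bucket name

def lambda_handler(event, context):
    # Only per-invocation work happens inside the handler.
    s3.put_object(
        Bucket=BUCKET,
        Key="logs/{}.log".format(context.aws_request_id),
        Body=event.get("log_payload", "").encode("utf-8"),
    )
    return {"status": "uploaded"}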

Incorrect options:

Use environment variables to pass operational parameters - This is one of the suggested best practices for Lambda. By using environment variables to pass operational parameters you can avoid hard-coding useful information. But, this is not the right answer for the current use-case, since it talks about reusing context.

Assign more RAM to the function - Increasing RAM will help speed up the process. But, in the current question, the reviewer has specifically recommended reusing the execution context. Hence, this is not the right answer.

Enable X-Ray integration - You can use AWS X-Ray to visualize the components of your application, identify performance bottlenecks, and troubleshoot requests that resulted in an error. Your Lambda functions send trace data to X-Ray, and X-Ray processes the data to generate a service map and searchable trace summaries. This is a useful tool for troubleshooting. But, for the current use-case, we already know the bottleneck that needs to be fixed and that is the context reuse.

References:

https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html

https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html

Question 21: Correct

As part of an employee skills upgrade, the developers on your team have been delegated a few responsibilities of DevOps engineers. Developers now have full control over modeling the entire software delivery process, from coding to deployment. As the team lead, you are now responsible for any manual approvals needed in the process.

Which of the following approaches supports the given workflow?

Explanation

Correct option:

Create one CodePipeline for your entire flow and add a manual approval step - You can add an approval action to a stage in a CodePipeline pipeline at the point where you want the pipeline to stop so someone can manually approve or reject the action. Approval actions can't be added to Source stages. Source stages can contain only source actions.

Incorrect options:

Create multiple CodePipelines for each environment and link them using AWS Lambda - You can create Lambda functions and add them as actions in your pipelines but the approval step is confined to a workflow process and cannot be outsourced to any other AWS service.

Create deeply integrated AWS CodePipelines for each environment - You can use an AWS CloudFormation template in conjunction with AWS CodePipeline and AWS CodeCommit to create a test environment that deploys to your production environment when the changes to your application are approved, helping you automate a continuous delivery workflow. This is a possible answer but not an optimized way of achieving what the client needs.

Use CodePipeline with Amazon Virtual Private Cloud - AWS CodePipeline supports Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink. This means you can connect directly to CodePipeline through a private endpoint in your VPC, keeping all traffic inside your VPC and the AWS network. This is a robust security feature but is of no value for our current use-case.

References:

https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-action-add.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/vpc-support.html

Question 22: Correct

A telecommunications company that provides internet service for mobile device users maintains over 100 c4.large instances in the us-east-1 region. The EC2 instances run complex algorithms. The manager would like to track CPU utilization of the EC2 instances as frequently as every 10 seconds.

Which of the following represents the BEST solution for the given use-case?

Explanation

Correct option:

Create a high-resolution custom metric and push the data using a script triggered every 10 seconds

Using high-resolution custom metrics, your applications can publish metrics to CloudWatch with 1-second resolution. You can watch the metrics scroll across your screen seconds after they are published and you can set up high-resolution CloudWatch Alarms that evaluate as frequently as every 10 seconds. You can alert with high-resolution alarms using periods as short as 10 seconds. High-resolution alarms allow you to react and take actions faster and support the same actions available today with standard 1-minute alarms.

via - https://aws.amazon.com/blogs/aws/new-high-resolution-custom-metrics-and-alarms-for-amazon-cloudwatch/
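Publishing at high resolution is a matter of setting StorageResolution to 1 on each metric datum; a hedged boto3 sketch follows (the namespace and the source of the CPU value are assumptions). A high-resolution alarm on this metric can then evaluate with a 10-second period:

import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_cpu_sample(instance_id, cpu_percent):
    cloudwatch.put_metric_data(
        Namespace="Custom/HighResEC2",   # assumed namespace
        MetricData=[
            {
                "MetricName": "CPUUtilization",
                "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
                "Value": cpu_percent,
                "Unit": "Percent",
                "StorageResolution": 1,   # store at 1-second resolution
            }
        ],
    )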

Incorrect options:

Enable EC2 detailed monitoring - As part of basic monitoring, Amazon EC2 sends metric data to CloudWatch in 5-minute periods. To send metric data for your instance to CloudWatch in 1-minute periods, you can enable detailed monitoring on the instance, however, this comes at an additional cost.

Simply get it from the CloudWatch Metrics - You can get data from metrics. The basic monitoring data is available automatically in a 5-minute interval and detailed monitoring data is available in a 1-minute interval.

Open a support ticket with AWS - This option has been added as a distractor.

Reference:

https://aws.amazon.com/blogs/aws/new-high-resolution-custom-metrics-and-alarms-for-amazon-cloudwatch/

Question 23: Correct

You have deployed a traditional 3-tier web application architecture with a Classic Load Balancer, an Auto Scaling group, and an Amazon Relational Database Service (RDS) database. Users are reporting that they have to re-authenticate into the website often.

Which of the following represents a scalable solution to make the application tier stateless and outsource the session information?

Explanation

Correct option:

Add an ElastiCache Cluster - To provide a shared data storage for sessions that can be accessed from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an ElastiCache service offering which is an In-Memory Key/Value store such as Redis and Memcached.

Incorrect options:

Use Elastic IP - An Elastic IP is a way to give your server a static IP address but won't solve the issue with session data.

Enable RDS read replicas - Amazon RDS uses its built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. A replica continually synchronizes with the master database and hence might add a bit of lag based on how occupied the master database is. It will also add inconsistencies in the current scenario.

Enable Load Balancer stickiness - Sticky sessions are a mechanism to route requests from the same client to the same target. Application Load Balancer supports sticky sessions using load balancer generated cookies. If you enable sticky sessions, the same target receives the request and can use the cookie to recover the session context. But, ElastiCache is much more scalable and better suited in the current scenario.

Reference:

https://aws.amazon.com/caching/session-management/

Question 24: Correct

Your web application reads and writes data to your DynamoDB table. The table is provisioned with 400 Write Capacity Units (WCU’s) shared across 4 partitions. One of the partitions receives 250 WCU/second while others receive much less. You receive the error 'ProvisionedThroughputExceededException'.

What is the likely cause of this error?

Explanation

Correct option:

You have a hot partition

It's not always possible to distribute read and write activity evenly. When data access is imbalanced, a "hot" partition can receive a higher volume of read and write traffic compared to other partitions. To better accommodate uneven access patterns, DynamoDB adaptive capacity enables your application to continue reading and writing to hot partitions without being throttled, provided that traffic does not exceed your table’s total provisioned capacity or the partition maximum capacity.

ProvisionedThroughputExceededException explained: via - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughput.html

Hot partition explained: via - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html
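The AWS SDKs already retry throttled requests automatically, but as an illustration of handling this exception explicitly, here is a hedged boto3 sketch with exponential backoff (the table name is a placeholder and the item is expected in DynamoDB's attribute-value format):

import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def put_with_backoff(item, retries=5):
    for attempt in range(retries):
        try:
            return dynamodb.put_item(TableName="orders", Item=item)   # placeholder table
        except ClientError as error:
            if error.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            # Exponential backoff before retrying the throttled write.
            time.sleep((2 ** attempt) * 0.1)
    raise RuntimeError("Write throttled after {} attempts".format(retries))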

Incorrect options:

CloudWatch monitoring is lagging - The error is specific to DynamoDB itself and not to any connected service. CloudWatch is a fully managed service from AWS and does not result in throttling.

Configured IAM policy is wrong - The error is not associated with authorization but with exceeding a pre-configured throughput value. So, it's clearly not a permissions issue.

Write-capacity units (WCUs) are applied across all your DynamoDB tables and this needs reconfiguration - This statement is incorrect. Read Capacity Units and Write Capacity Units are specific to one table.

Reference:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughput.html

Question 25: Correct

A team is checking the viability of using AWS Step Functions for creating a banking workflow for loan approvals. The web application will also have human approval as one of the steps in the workflow.

As a developer associate, which of the following would you identify as the key characteristics for AWS Step Function? (Select two)

Explanation

Correct options:

Standard Workflows on AWS Step Functions are suitable for long-running, durable, and auditable workflows that can also support any human approval steps - Standard Workflows on AWS Step Functions are more suitable for long-running, durable, and auditable workflows where repeating workflow steps is expensive (e.g., restarting a long-running media transcode) or harmful (e.g., charging a credit card twice). Example workloads include training and deploying machine learning models, report generation, billing, credit card processing, and ordering and fulfillment processes. Step Functions also supports human approval steps.

You should use Express Workflows for workloads with high event rates and short durations - Express Workflows are designed for workloads with high event rates and short durations, and support event rates of more than 100,000 per second.

Incorrect options:

Standard Workflows on AWS Step Functions are suitable for long-running, durable, and auditable workflows that do not support any human approval steps - Since Step Functions does support human approval steps, this option is incorrect.

Express Workflows have a maximum duration of five minutes and Standard workflows have a maximum duration of 180 days or 6 months - Express Workflows have a maximum duration of five minutes and Standard workflows have a maximum duration of one year.

Both Standard and Express Workflows support all service integrations, activities, and design patterns - Standard Workflows support all service integrations, activities, and design patterns. Express Workflows do not support activities, job-run (.sync), and Callback patterns.

References:

https://aws.amazon.com/step-functions/features/

https://aws.amazon.com/blogs/compute/implementing-serverless-manual-approval-steps-in-aws-step-functions-and-amazon-api-gateway/

Question 26: Correct

A leading financial services company offers data aggregation services for Wall Street trading firms. The company bills its clients based on per unit of clickstream data provided to the clients. As the company operates in a regulated industry, it needs to have the same ordered clickstream data available for auditing within a window of 7 days.

As a Developer Associate, which of the following AWS services do you think provides the ability to run the billing process and auditing process on the given clickstream data in the same order?

Explanation

Correct option:

AWS Kinesis Data Streams

Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.

Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering). Amazon Kinesis Data Streams is recommended when you need the ability to consume records in the same order a few hours later.

For example, you have a billing application and an audit application that runs a few hours behind the billing application. By default, records of a stream are accessible for up to 24 hours from the time they are added to the stream. You can raise this limit to a maximum of 365 days. For the given use-case, Amazon Kinesis Data Streams can be configured to store data for up to 7 days and you can run the audit application up to 7 days behind the billing application.

KDS provides the ability to consume records in the same order a few hours later via - https://aws.amazon.com/kinesis/data-streams/faqs/
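
For illustration, the stream retention period can be raised to 7 days (168 hours) with a single API call; the sketch below uses boto3 and a hypothetical stream name:

import boto3

kinesis = boto3.client("kinesis")

# Extend retention so the audit application can replay up to 7 days of records
kinesis.increase_stream_retention_period(
    StreamName="clickstream",   # hypothetical stream name
    RetentionPeriodHours=168,   # 7 days
)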

Incorrect options:

AWS Kinesis Data Firehose - Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. As Kinesis Data Firehose is used to load streaming data into data stores, therefore this option is incorrect.

AWS Kinesis Data Analytics - Amazon Kinesis Data Analytics is the easiest way to analyze streaming data in real-time. You can quickly build SQL queries and sophisticated Java applications using built-in templates and operators for common processing functions to organize, transform, aggregate, and analyze data at any scale. Kinesis Data Analytics enables you to easily and quickly build queries and sophisticated streaming applications in three simple steps: setup your streaming data sources, write your queries or streaming applications and set up your destination for processed data. As Kinesis Data Analytics is used to build SQL queries and sophisticated Java applications, therefore this option is incorrect.

Amazon SQS - Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent. For SQS, you cannot have the same message being consumed by multiple consumers in the same order a few hours later, therefore this option is incorrect.

References:

https://aws.amazon.com/kinesis/data-streams/faqs/

https://aws.amazon.com/kinesis/data-firehose/faqs/

https://aws.amazon.com/kinesis/data-analytics/faqs/

Question 27: Correct

A company follows collaborative development practices. The engineering manager wants to isolate the development effort by setting up simulations of API components owned by various development teams.

Which API integration type is best suited for this requirement?

Explanation

Correct option:

MOCK

This type of integration lets API Gateway return a response without sending the request further to the backend. This is useful for API testing because it can be used to test the integration setup without incurring charges for using the backend and to enable collaborative development of an API.

In collaborative development, a team can isolate their development effort by setting up simulations of API components owned by other teams by using the MOCK integrations. It is also used to return CORS-related headers to ensure that the API method permits CORS access. In fact, the API Gateway console integrates the OPTIONS method to support CORS with a mock integration.
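
As a rough sketch of how a MOCK integration is wired up with boto3 (the API ID, resource ID, and method are hypothetical, and the method plus its method response are assumed to already exist):

import boto3

apigw = boto3.client("apigateway")

# Return a canned response without calling any backend
apigw.put_integration(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",
    httpMethod="GET",
    type="MOCK",
    requestTemplates={"application/json": '{"statusCode": 200}'},
)

apigw.put_integration_response(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",
    httpMethod="GET",
    statusCode="200",
    responseTemplates={"application/json": '{"message": "mocked response"}'},
)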

Incorrect options:

AWS_PROXY - This type of integration lets an API method be integrated with the Lambda function invocation action with a flexible, versatile, and streamlined integration setup. This integration relies on direct interactions between the client and the integrated Lambda function.

HTTP_PROXY - The HTTP proxy integration allows a client to access the backend HTTP endpoints with a streamlined integration setup on a single API method. You do not set the integration request or the integration response. API Gateway passes the incoming request from the client to the HTTP endpoint and passes the outgoing response from the HTTP endpoint to the client.

HTTP - This type of integration lets an API expose HTTP endpoints in the backend. With the HTTP integration, you must configure both the integration request and integration response. You must set up necessary data mappings from the method request to the integration request, and from the integration response to the method response.

Reference:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-integration-types.html

Question 28: Correct

A junior developer working on ECS instances terminated a container instance in Amazon Elastic Container Service (Amazon ECS) as per instructions from the team lead. But the container instance continues to appear as a resource in the ECS cluster.

As a Developer Associate, which of the following solutions would you recommend to fix this behavior?

Explanation

Correct option:

You terminated the container instance while it was in STOPPED state, which led to this synchronization issue - If you terminate a container instance while it is in the STOPPED state, that container instance isn't automatically removed from the cluster. You will need to deregister your container instance in the STOPPED state by using the Amazon ECS console or AWS Command Line Interface. Once deregistered, the container instance will no longer appear as a resource in your Amazon ECS cluster.
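
For illustration, deregistering the stale instance could look like the following boto3 sketch (cluster name and container instance ARN are hypothetical):

import boto3

ecs = boto3.client("ecs")

# Remove the stale container instance from the cluster
ecs.deregister_container_instance(
    cluster="production-cluster",
    containerInstance="arn:aws:ecs:us-east-1:111122223333:container-instance/production-cluster/1234abcd",
    force=True,  # deregister even if tasks are still associated with it
)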

Incorrect options:

You terminated the container instance while it was in RUNNING state, which led to this synchronization issue - This is an incorrect statement. If you terminate a container instance in the RUNNING state, that container instance is automatically removed, or deregistered, from the cluster.

The container instance has been terminated with AWS CLI, whereas, for ECS instances, Amazon ECS CLI should be used to avoid any synchronization issues - This is incorrect and has been added as a distractor.

A custom software on the container instance could have failed and resulted in the container hanging in an unhealthy state till restarted again - This is an incorrect statement. It is already mentioned in the question that the developer has terminated the instance.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/deregister-ecs-instance/

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_instances.html

Question 29: Correct

An IT company has its serverless stack integrated with AWS X-Ray. The developer at the company has noticed a high volume of data going into X-Ray and the AWS monthly usage charges have skyrocketed as a result. The developer has requested changes to mitigate the issue.

As a Developer Associate, which of the following solutions would you recommend to obtain tracing trends while reducing costs with minimal disruption?

Explanation

Correct option:

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

How X-Ray Works: via - https://aws.amazon.com/xray/

Enable X-Ray sampling

To ensure efficient tracing and provide a representative sample of the requests that your application serves, the X-Ray SDK applies a sampling algorithm to determine which requests get traced. By default, the X-Ray SDK records the first request each second, and five percent of any additional requests. X-Ray sampling is enabled directly from the AWS console, hence your application code does not need to change.

You can also customize the X-Ray sampling rules: via - https://docs.aws.amazon.com/xray/latest/devguide/xray-console-sampling.html
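
As an illustrative sketch, a custom sampling rule that traces at most one request per second plus 1% of additional traffic could be created like this (the rule name and rates are hypothetical):

import boto3

xray = boto3.client("xray")

xray.create_sampling_rule(
    SamplingRule={
        "RuleName": "low-volume-tracing",  # hypothetical rule name
        "Priority": 100,
        "ReservoirSize": 1,    # always trace up to 1 request per second
        "FixedRate": 0.01,     # then trace 1% of the remaining requests
        "ServiceName": "*",
        "ServiceType": "*",
        "Host": "*",
        "HTTPMethod": "*",
        "URLPath": "*",
        "ResourceARN": "*",
        "Version": 1,
    }
)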

Incorrect options:

Use Filter Expressions in the X-Ray console - When you choose a time period of traces to view in the X-Ray console, you might get more results than the console can display. You can narrow the results to just the traces that you want to find by using a filter expression. This option is not correct because it does not reduce the volume of data sent into the X-Ray console.

via - https://docs.aws.amazon.com/xray/latest/devguide/xray-console-filters.html

Custom configuration for the X-Ray agents - There is no agent-level custom configuration that reduces the volume of trace data; instead, you can define custom sampling rules. So this option is incorrect.

Implement a network sampling rule - This option has been added as a distractor.

References:

https://aws.amazon.com/xray/

https://docs.aws.amazon.com/xray/latest/devguide/xray-console-sampling.html

Question 30: Correct

A new member of your team is working on creating Dead Letter Queue (DLQ) for AWS Lambda functions.

As a Developer Associate, can you help him identify the use cases in which AWS Lambda will add a message to a DLQ? (Select two)

Explanation

Correct option:

The Lambda function invocation is asynchronous - When an asynchronous invocation event exceeds the maximum age or fails all retry attempts, Lambda discards it, or sends it to a dead-letter queue if you have configured one.

The event fails all processing attempts - A dead-letter queue acts the same as an on-failure destination in that it is used when an event fails all processing attempts or expires without being processed.
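
For illustration, attaching an SQS queue as the dead-letter queue for an asynchronously invoked function might look like this boto3 sketch (function name and queue ARN are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="order-processor",  # hypothetical function name
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:111122223333:order-dlq"
    },
)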

Incorrect options:

The Lambda function invocation is synchronous - When you invoke a function synchronously, Lambda runs the function and waits for a response. Queues are generally used with asynchronous invocations since queues implement the decoupling feature of various connected systems. It does not make sense to use queues when the calling code will wait on it for a response.

The event has been processed successfully - A successfully processed event is not sent to the dead-letter queue.

The event processing failed only once but succeeded thereafter - A successfully processed event is not sent to the dead-letter queue.

Reference:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html

Question 31: Correct

You have an Auto Scaling group configured to a minimum capacity of 1 and a maximum capacity of 5, designed to launch EC2 instances across 3 Availability Zones. During a low utilization period, an entire Availability Zone went down and your application experienced downtime.

What can you do to ensure that your application remains highly available?

Explanation

Correct option:

Increase the minimum instance capacity of the Auto Scaling Group to 2 -

You configure the size of your Auto Scaling group by setting the minimum, maximum, and desired capacity. The minimum and maximum capacity are required to create an Auto Scaling group, while the desired capacity is optional. If you do not define your desired capacity upfront, it defaults to your minimum capacity.

Since a minimum capacity of 1 was defined, an instance was launched in only one AZ. When that AZ went down, it took the application with it. If the minimum capacity had been set to 2, the Auto Scaling group would have launched two instances, one in each of two Availability Zones, keeping the application available even when a single AZ fails.

via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html
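
For illustration, raising the minimum capacity is a one-line change; the boto3 sketch below assumes a hypothetical Auto Scaling group name:

import boto3

autoscaling = boto3.client("autoscaling")

# With MinSize=2 and multiple AZs configured, the group keeps
# instances in at least two Availability Zones.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    MinSize=2,
)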

Incorrect options:

Change the scaling metric of auto-scaling policy to network bytes - With target tracking scaling policies, you select a scaling metric and set a target value. You can use predefined or customized metrics. Setting the metric to network bytes will not help in this context since the instances have to be spread across different AZs for high availability. The better approach is to define appropriate minimum and maximum instance capacities, as discussed above.

Configure ASG fast failover - This is a made-up option, given as a distractor.

Enable RDS Multi-AZ - This configuration will make your database highly available. But for the current scenario, you will need to have more than 1 instance in separate availability zones to keep the application highly available.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-capacity-limits.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-maintain-instance-levels.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html

Question 32: Correct

To meet compliance guidelines, a company needs to ensure replication of any data stored in its S3 buckets.

Which of the following characteristics are correct while configuring an S3 bucket for replication? (Select two)

Explanation

Correct options:

Same-Region Replication (SRR) and Cross-Region Replication (CRR) can be configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags - Amazon S3 Replication (CRR and SRR) is configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags. You add a replication configuration on your source bucket by specifying a destination bucket in the same or different AWS region for replication.

S3 lifecycle actions are not replicated with S3 replication - With S3 Replication (CRR and SRR), you can establish replication rules to make copies of your objects into another storage class, in the same or a different region. Lifecycle actions are not replicated, and if you want the same lifecycle configuration applied to both source and destination buckets, enable the same lifecycle configuration on both.
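
As a rough sketch, a replication rule scoped to a prefix could be added like this with boto3 (bucket names, role ARN, and prefix are hypothetical; versioning must already be enabled on both buckets):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-logs",
                "Priority": 1,
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::destination-bucket",
                    "StorageClass": "STANDARD_IA",
                },
            }
        ],
    },
)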

Incorrect options:

Object tags cannot be replicated across AWS Regions using Cross-Region Replication - Object tags can be replicated across AWS Regions using Cross-Region Replication. For customers with Cross-Region Replication already enabled, new permissions are required for tags to replicate.

Once replication is enabled on a bucket, all old and new objects will be replicated - Replication only replicates the objects added to the bucket after replication is enabled on the bucket. Any objects present in the bucket before enabling replication are not replicated.

Replicated objects do not retain metadata - You can use replication to make copies of your objects that retain all metadata, such as the original object creation time and version IDs. This capability is important if you need to ensure that your replica is identical to the source object.

Reference:

https://docs.amazonaws.cn/en_us/AmazonS3/latest/userguide/replication.html

Question 33: Correct

Your company uses an Application Load Balancer to route incoming end-user traffic to applications hosted on Amazon EC2 instances. The applications capture incoming request information and store it in the Amazon Relational Database Service (RDS) running on Microsoft SQL Server DB engines.

As part of new compliance rules, you need to capture the client's IP address. How will you achieve this?

Explanation

Correct option:

Use the header X-Forwarded-For - The X-Forwarded-For request header helps you identify the IP address of a client when you use an HTTP or HTTPS load balancer. Because load balancers intercept traffic between clients and servers, your server access logs contain only the IP address of the load balancer. To see the IP address of the client, use the X-Forwarded-For request header. Elastic Load Balancing stores the IP address of the client in the X-Forwarded-For request header and passes the header to your server.
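
As an illustrative sketch (using Flask purely as an example web framework), the application behind the ALB could read the header like this:

from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Behind a load balancer, X-Forwarded-For carries the original client IP,
    # possibly as a comma-separated chain; the left-most entry is the client.
    forwarded_for = request.headers.get("X-Forwarded-For")
    client_ip = forwarded_for.split(",")[0].strip() if forwarded_for else request.remote_addr
    return f"Client IP: {client_ip}"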

Incorrect options:

You can get the Client IP addresses from server access logs - As discussed above, Load Balancers intercept traffic between clients and servers, so server access logs will contain only the IP address of the load balancer.

Use the header X-Forwarded-From - This is a made-up option and given as a distractor.

You can get the Client IP addresses from Elastic Load Balancing logs - Elastic Load Balancing logs requests sent to the load balancer, including requests that never made it to the targets. For example, if a client sends a malformed request, or there are no healthy targets to respond to the request, the request is still logged. So, this is not the right option if we wish to collect the IP addresses of the clients that have access to the instances.

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html

Question 34: Incorrect

Your team maintains a public API Gateway that is accessed by clients from another domain. Usage has been consistent for the last few months but recently it has more than doubled. As a result, your costs have gone up and you would like to prevent other unauthorized domains from accessing your API.

Which of the following actions should you take?

Explanation

Correct option:

Restrict access by using CORS - Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. When your API's resources receive requests from a domain other than the API's own domain and you want to restrict servicing these requests, you must disable cross-origin resource sharing (CORS) for selected methods on the resource.

Incorrect options:

Use Account-level throttling - To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API. By default, API Gateway limits the steady-state request rate to 10,000 requests per second (rps). It limits the burst (that is, the maximum bucket size) to 5,000 requests across all APIs within an AWS account. This is account-level throttling. As you can see, this is about limiting the number of requests and is not a suitable answer for the current scenario.

Use Mapping Templates - A mapping template is a script expressed in Velocity Template Language (VTL) and applied to the payload using JSONPath expressions. Mapping templates help format/structure the data in a way that is easily readable, unlike a raw server response that might not always be easy to read. Mapping Templates have nothing to do with access control and are not useful for the current scenario.

Assign a Security Group to your API Gateway - API Gateway does not use security groups; it uses resource policies, which are JSON policy documents that you attach to an API to control whether a specified principal (typically an IAM user or role) can invoke the API. You can restrict access by IP address this way, but the downside is that a client's IP address can easily change, so this is not an optimal solution for the current use case.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html

https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-protect.html

https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-data-transformations.html

Question 35: Correct

A media company uses Amazon Simple Queue Service (SQS) queue to manage their transactions. With changing business needs, the payload size of the messages is increasing. The Team Lead of the project is worried about the 256 KB message size limit that SQS has.

What can be done to make the queue accept messages of a larger size?

Explanation

Correct option:

Use the SQS Extended Client - To manage large Amazon Simple Queue Service (Amazon SQS) messages, you can use Amazon Simple Storage Service (Amazon S3) and the Amazon SQS Extended Client Library for Java. This is especially useful for storing and consuming messages up to 2 GB. Unless your application requires repeatedly creating queues and leaving them inactive or storing large amounts of data in your queues, consider using Amazon S3 for storing your data.

Incorrect options:

Use the MultiPart API - This is an incorrect statement. There is no multi-part API for Amazon Simple Queue Service.

Get a service limit increase from AWS - While it is possible to get service limits extended for certain AWS services, AWS already offers Extended Client to deal with queues that have larger messages.

Use gzip compression - You can compress messages before sending them to the queue, but they then need to be encoded (for example, base64) to meet SQS message character requirements. This adds processing overhead, and there is still no guarantee that the payload stays within the limit, so it is not an optimal solution for the current scenario.

Reference:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-s3-messages.html

Question 36: Correct

You have migrated an on-premises SQL Server database to an Amazon Relational Database Service (RDS) database attached to a VPC inside a private subnet. Also, the related Java application, hosted on-premises, has been moved to an AWS Lambda function.

Which of the following should you implement to connect AWS Lambda function to its RDS instance?

Explanation

Correct option:

Configure Lambda to connect to VPC with private subnet and Security Group needed to access RDS - You can configure a Lambda function to connect to private subnets in a virtual private cloud (VPC) in your account. Use Amazon Virtual Private Cloud (Amazon VPC) to create a private network for resources such as databases, cache instances, or internal services. Connect your Lambda function to the VPC to access private resources during execution. When you connect a function to a VPC, Lambda creates an elastic network interface for each combination of security group and subnet in your function's VPC configuration. This is the right way to give the Lambda function access to RDS.

Lambda VPC Config: via - https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
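
For illustration, connecting an existing function to the private subnets and security group could look like this boto3 sketch (function name, subnet IDs, and security group ID are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="rds-reader",  # hypothetical function name
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],   # private subnets
        "SecurityGroupIds": ["sg-0123456789abcdef0"],          # SG allowed by the RDS security group
    },
)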

Incorrect options:

Use Lambda layers to connect to the internet and RDS separately - You can configure your Lambda function to pull in additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. Layers will not help in configuring access to RDS instance and hence is an incorrect choice.

Configure lambda to connect to the public subnet that will give internet access and use the Security Group to access RDS inside the private subnet - This is an incorrect statement. Connecting a Lambda function to a public subnet does not give it internet access or a public IP address. To grant internet access to your function, its associated VPC must have a NAT gateway (or NAT instance) in a public subnet.

Use Environment variables to pass in the RDS connection string - You can use environment variables to store secrets securely and adjust your function's behavior without updating code. You can use environment variables to exchange data with RDS, but you will still need access to RDS, which is not possible with just environment variables.

References:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html

https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html

https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html

https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/

Question 37: Incorrect

A multi-national enterprise uses AWS Organizations to manage its users across different divisions. Even though CloudTrail is enabled on the member accounts, managers have noticed that access issues to CloudTrail logs across different divisions and AWS Regions is becoming a bottleneck in troubleshooting issues. They have decided to use the organization trail to keep things simple.

What are the important points to remember when configuring an organization trail? (Select two)

Explanation

Correct option:

If you have created an organization in AWS Organizations, you can also create a trail that will log all events for all AWS accounts in that organization. This is referred to as an organization trail.

By default, CloudTrail tracks only bucket-level actions. To track object-level actions, you need to enable Amazon S3 data events - This is a correct statement. AWS CloudTrail supports Amazon S3 Data Events, apart from bucket Events. You can record all API actions on S3 Objects and receive detailed information such as the AWS account of the caller, IAM user role of the caller, time of the API call, IP address of the API, and other details. All events are delivered to an S3 bucket and CloudWatch Events, allowing you to take programmatic actions on the events.

Member accounts will be able to see the organization trail, but cannot modify or delete it - Organization trails must be created in the master account, and when specified as applying to an organization, are automatically applied to all member accounts in the organization. Member accounts will be able to see the organization trail, but cannot modify or delete it. By default, member accounts will not have access to the log files for the organization trail in the Amazon S3 bucket.

Organization trail: via - https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html
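
As a rough sketch, creating the organization trail from the management account and enabling S3 object-level (data event) logging could look like this (trail, log bucket, and data-event bucket names are hypothetical):

import boto3

cloudtrail = boto3.client("cloudtrail")

# Organization trails must be created in the management (master) account
cloudtrail.create_trail(
    Name="org-trail",
    S3BucketName="org-trail-logs",
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
)

# Track object-level actions for a specific bucket via S3 data events
cloudtrail.put_event_selectors(
    TrailName="org-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": ["arn:aws:s3:::critical-data-bucket/"],
                }
            ],
        }
    ],
)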

Incorrect options:

There is nothing called Organization Trail. The master account can, however, enable CloudTrail logging, to keep track of all activities across AWS accounts - This statement is incorrect. AWS offers Organization Trail for easy management and monitoring.

Member accounts do not have access to the organization trail, neither do they have access to the Amazon S3 bucket that logs the files - This statement is only partially correct. Member accounts will be able to see the organization trail, but cannot modify or delete it. By default, member accounts will not have access to the log files for the organization trail in the Amazon S3 bucket.

By default, CloudTrail event log files are not encrypted - This is an incorrect statement. By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE).

References:

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html

https://aws.amazon.com/about-aws/whats-new/2016/11/aws-cloudtrail-supports-s3-data-events/

https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/

Question 38: Correct

A high-frequency stock trading firm is migrating their messaging queues from self-managed message-oriented middleware systems to Amazon SQS. The development team at the company wants to minimize the costs of using SQS.

As a Developer Associate, which of the following options would you recommend to address the given use-case?

Explanation

Correct option:

Use SQS long polling to retrieve messages from your Amazon SQS queues

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.

Amazon SQS provides short polling and long polling to receive messages from a queue. By default, queues use short polling. With short polling, Amazon SQS sends the response right away, even if the query found no messages. With long polling, Amazon SQS sends a response after it collects at least one available message, up to the maximum number of messages specified in the request. Amazon SQS sends an empty response only if the polling wait time expires.

Long polling makes it inexpensive to retrieve messages from your Amazon SQS queue as soon as the messages are available. Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren't included in a response). When the wait time for the ReceiveMessage API action is greater than 0, long polling is in effect. The maximum long polling wait time is 20 seconds.
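
For illustration, long polling can be requested per call by setting WaitTimeSeconds (or on the queue itself via the ReceiveMessageWaitTimeSeconds attribute); the sketch below uses a hypothetical queue URL:

import boto3

sqs = boto3.client("sqs")

response = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders-queue",  # hypothetical
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # long polling: wait up to 20 seconds before returning empty
)

for message in response.get("Messages", []):
    print(message["Body"])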

Exam Alert:

Please review the differences between Short Polling vs Long Polling: via - https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html


Incorrect options:

Use SQS short polling to retrieve messages from your Amazon SQS queues - With short polling, Amazon SQS sends the response right away, even if the query found no messages. You end up paying more because of the increased number of empty receives.

Use SQS visibility timeout to retrieve messages from your Amazon SQS queues - Visibility timeout is a period during which Amazon SQS prevents other consumers from receiving and processing a given message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. You cannot use visibility timeout to retrieve messages from your Amazon SQS queues. This option has been added as a distractor.

Use SQS message timer to retrieve messages from your Amazon SQS queues - You can use message timers to set an initial invisibility period for a message added to a queue. So, if you send a message with a 60-second timer, the message isn't visible to consumers for its first 60 seconds in the queue. The default (minimum) delay for a message is 0 seconds. The maximum is 15 minutes. You cannot use message timer to retrieve messages from your Amazon SQS queues. This option has been added as a distractor.

Reference:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html

Question 39: Correct

A mobile gaming company is experiencing heavy read traffic to its Amazon Relational Database Service (RDS) database that retrieves player’s scores and stats. The company is using RDS database instance type db.m5.12xlarge, which is not cost-effective for their budget. They would like to implement a strategy to deal with the high volume of read traffic, reduce latency, and also downsize the instance size to cut costs.

As a Developer, which of the following solutions do you recommend?

Explanation

Correct option:

Setup ElastiCache in front of RDS

Amazon ElastiCache is an ideal front-end for data stores such as Amazon RDS, providing a high-performance middle tier for applications with extremely high request rates and/or low latency requirements. The best part of caching is that it’s minimally invasive to implement and by doing so, your application performance regarding both scale and speed is dramatically improved.
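
As a rough sketch of the lazy-loading (cache-aside) pattern, using the third-party redis-py client and a hypothetical query_rds helper that reads from the RDS database:

import json

import redis  # third-party redis-py client, used here purely for illustration

cache = redis.Redis(host="my-cache.example.cache.amazonaws.com", port=6379)

def get_player_stats(player_id, db_connection):
    key = f"player:{player_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)  # cache hit: no database round trip

    stats = query_rds(db_connection, player_id)  # hypothetical helper that queries RDS
    cache.setex(key, 300, json.dumps(stats))     # cache the result for 5 minutes
    return stats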

Incorrect options:

Setup RDS Read Replicas - Adding read replicas would further add to the database costs and will not help in reducing latency when compared to a caching solution. So this option is ruled out.

Move to Amazon Redshift - Redshift is optimized for datasets ranging from a few hundred gigabytes to a petabyte or more. If the company is looking at cost-cutting, moving to Redshift from RDS is not an option.

Switch application code to AWS Lambda for better performance - AWS Lambda can help in running data processing workflows. But, data still needs to be read from RDS and hence we need a solution to speed up the data reads and not before/after processing.

Reference:

https://aws.amazon.com/caching/database-caching/

Question 40: Correct

A development team has created a new IAM user that has s3:putObject permission to write to an S3 bucket. This S3 bucket uses server-side encryption with AWS KMS managed keys (SSE-KMS) as the default encryption. Using the access key ID and the secret access key of the IAM user, the application received an access denied error when calling the PutObject API.

As a Developer Associate, how would you resolve this issue?

Explanation

Correct option:

Correct the policy of the IAM user to allow the kms:GenerateDataKey action - You can protect data at rest in Amazon S3 by using three different modes of server-side encryption: SSE-S3, SSE-C, or SSE-KMS. SSE-KMS requires that AWS manage the data key but you manage the customer master key (CMK) in AWS KMS. You can choose a customer managed CMK or the AWS managed CMK for Amazon S3 in your account. If you choose to encrypt your data using the standard features, AWS KMS and Amazon S3 perform the following actions:

  1. Amazon S3 requests a plaintext data key and a copy of the key encrypted under the specified CMK.

  2. AWS KMS generates a data key, encrypts it under the CMK, and sends both the plaintext data key and the encrypted data key to Amazon S3.

  3. Amazon S3 encrypts the data using the data key and removes the plaintext key from memory as soon as possible after use.

  4. Amazon S3 stores the encrypted data key as metadata with the encrypted data.

The error message indicates that your IAM user or role needs permission for the kms:GenerateDataKey action. This permission is required for buckets that use default encryption with a custom AWS KMS key.

In the JSON policy documents, look for policies related to AWS KMS access. Review statements with "Effect": "Allow" to check if the user or role has permissions for the kms:GenerateDataKey action on the bucket's AWS KMS key. If this permission is missing, then add the permission to the appropriate policy.

In the JSON policy documents, look for statements with "Effect": "Deny". Then, confirm that those statements don't deny the s3:PutObject action on the bucket. The statements must also not deny the IAM user or role access to the kms:GenerateDataKey action on the key used to encrypt the bucket. Additionally, make sure the necessary KMS and S3 permissions are not restricted using a VPC endpoint policy, service control policy, permissions boundary, or session policy.
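
As an illustrative sketch, an inline policy granting the missing permission could be attached to the user like this (user name, policy name, and key ARN are hypothetical; kms:Decrypt is included because it is also needed for multipart uploads and reads):

import json

import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        }
    ],
}

iam.put_user_policy(
    UserName="uploads-user",           # hypothetical IAM user
    PolicyName="AllowKmsForS3Uploads",
    PolicyDocument=json.dumps(policy),
)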

Incorrect options:

Correct the policy of the IAM user to allow the s3:Encrypt action - This is an invalid action given only as a distractor.

Correct the bucket policy of the S3 bucket to allow the IAM user to upload encrypted objects - The user already has access to the bucket. What the user lacks is permission for the kms:GenerateDataKey action, which is mandatory when a bucket uses default encryption with an AWS KMS key.

Correct the ACL of the S3 bucket to allow the IAM user to upload encrypted objects - Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. ACL is another way of giving access to S3 bucket objects. Permissions to use KMS keys will still be needed.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/s3-access-denied-error-kms/

https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html

Question 41: Correct

An intern at an IT company is getting started with AWS Cloud and wants to understand the following Amazon S3 bucket access policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListAllS3Buckets",
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "AllowBucketLevelActions",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "AllowBucketObjectActions",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Sid": "RequireMFAForProductionBucket",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::Production/*",
                "arn:aws:s3:::Production"
            ],
            "Condition": {
                "NumericGreaterThanIfExists": {"aws:MultiFactorAuthAge": "1800"}
            }
        }
    ]
}

As a Developer Associate, can you help him identify what the policy is for?

Explanation

Correct option:

Allows full S3 access, but explicitly denies access to the Production bucket if the user has not signed in using MFA within the last thirty minutes - This example shows how you might create a policy that allows an Amazon S3 user to access any bucket, including updating, adding, and deleting objects. However, it explicitly denies access to the Production bucket if the user has not signed in using multi-factor authentication (MFA) within the last thirty minutes. This policy grants the permissions necessary to perform this action in the console or programmatically using the AWS CLI or AWS API.

This policy never allows programmatic access to the Production bucket using long-term user access keys. This is accomplished using the aws:MultiFactorAuthAge condition key with the NumericGreaterThanIfExists condition operator. This policy condition returns true if MFA is not present or if the age of the MFA is greater than 30 minutes. In those situations, access is denied. To access the Production bucket programmatically, the S3 user must use temporary credentials that were generated in the last 30 minutes using the GetSessionToken API operation.

Incorrect options:

Allows a user to manage a single Amazon S3 bucket and denies every other AWS action and resource if the user is not signed in using MFA within last thirty minutes

Allows full S3 access to an Amazon Cognito user, but explicitly denies access to the Production bucket if the Cognito user is not authenticated

Allows IAM users to access their own home directory in Amazon S3, programmatically and in the console

These three options contradict the explanation provided above, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_full-access-except-production.html

Question 42: Correct

A company uses Amazon RDS as its database. For improved user experience, it has been decided that a highly reliable fully-managed caching layer has to be configured in front of RDS.

Which of the following is the right choice, keeping in mind that cache content regeneration is a costly activity?

Explanation

Correct option:

Implement Amazon ElastiCache Redis in Cluster-Mode - One can leverage ElastiCache for Redis with cluster mode enabled to enhance reliability and availability with little change to your existing workload. Cluster mode comes with the primary benefit of horizontal scaling of your Redis cluster, with almost zero impact on the performance of the cluster.

When building production workloads, you should consider using a configuration with replication, unless you can easily recreate your data. Enabling Cluster-Mode provides a number of additional benefits in scaling your cluster. In short, it allows you to scale in or out the number of shards (horizontal scaling) versus scaling up or down the node type (vertical scaling). This means that Cluster-Mode can scale to very large amounts of storage (potentially 100s of terabytes) across up to 90 shards, whereas a single node can only store as much data in memory as the instance type has capacity for.

Redis Cluster config: via - https://aws.amazon.com/blogs/database/work-with-cluster-mode-on-amazon-elasticache-for-redis/
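
For illustration, a cluster-mode enabled replication group with three shards and one replica per shard could be created like this (identifiers, node type, and parameter group are hypothetical sample values):

import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="app-cache",  # hypothetical identifier
    ReplicationGroupDescription="Cluster-mode enabled Redis cache in front of RDS",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=3,            # shards (horizontal scaling)
    ReplicasPerNodeGroup=1,     # one replica per shard for reliability
    AutomaticFailoverEnabled=True,
    CacheParameterGroupName="default.redis6.x.cluster.on",  # cluster mode enabled
)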

Incorrect options:

Install Redis on an Amazon EC2 instance - It is possible to install Redis directly onto Amazon EC2 instance. But, unlike ElastiCache for Redis, which is a managed service, you will need to maintain and manage your Redis installation.

Implement Amazon ElastiCache Memcached - Redis and Memcached are popular, open-source, in-memory data stores. Although they are both easy to use and offer high performance, there are important differences to consider when choosing an engine. Memcached is designed for simplicity while Redis offers a rich set of features that make it effective for a wide range of use cases. Redis offers snapshots facility, replication, and supports transactions, which Memcached cannot and hence ElastiCache Redis is the right choice for our use case.

Migrate the database to Amazon Redshift - Amazon Redshift is a data warehousing service optimized for analytics on large datasets, while Redis is primarily an in-memory data store. Data warehousing is the main reason developers choose Amazon Redshift, whereas performance and low latency are the key factors in picking Redis. Redshift is not a caching layer for RDS.

References:

https://aws.amazon.com/blogs/database/work-with-cluster-mode-on-amazon-elasticache-for-redis/

https://aws.amazon.com/elasticache/redis-vs-memcached/

Question 43: Incorrect

You have a workflow process that pulls code from AWS CodeCommit and deploys to EC2 instances associated with tag group ProdBuilders. You would like to configure the instances to archive no more than two application revisions to conserve disk space.

Which of the following will allow you to implement this?

Explanation

Correct option:

"CodeDeploy Agent"

The CodeDeploy agent is a software package that, when installed and configured on an instance, makes it possible for that instance to be used in CodeDeploy deployments. The CodeDeploy agent archives revisions and log files on instances. The CodeDeploy agent cleans up these artifacts to conserve disk space. You can use the :max_revisions: option in the agent configuration file to specify the number of application revisions to archive by entering any positive integer. CodeDeploy also archives the log files for those revisions. All others are deleted, except for the log file of the last successful deployment.

More info here: via - https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent.html

Incorrect options:

AWS CloudWatch Log Agent - The CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances. This is an incorrect choice for the current use case.

Integrate with AWS CodePipeline - AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodeCommit and CodePipeline are already integrated services. CodePipeline cannot help in version control and management of archives on an EC2 instance.

Have a load balancer in front of your instances - Load Balancer helps balance incoming traffic across different EC2 instances. It is an incorrect choice for the current use case.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html

Question 44: Correct

A telecom service provider stores its critical customer data on Amazon Simple Storage Service (Amazon S3).

Which of the following options can be used to control access to data stored on Amazon S3? (Select two)

Explanation

Correct options:

Bucket policies, Identity and Access Management (IAM) policies

Query String Authentication, Access Control Lists (ACLs)

Customers may use four mechanisms for controlling access to Amazon S3 resources: Identity and Access Management (IAM) policies, bucket policies, Access Control Lists (ACLs), and Query String Authentication.

IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account. With IAM policies, customers can grant IAM users fine-grained control to their Amazon S3 bucket or objects while also retaining full control over everything the users do.

With bucket policies, customers can define rules which apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address.

With ACLs, customers can grant specific permissions (i.e. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object.

With Query String Authentication, customers can create a URL to an Amazon S3 object which is only valid for a limited time. Using query parameters to authenticate requests is useful when you want to express a request entirely in a URL. This method is also referred to as presigning a URL.
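
For illustration, generating such a time-limited (presigned) URL with boto3 could look like this (bucket and key names are hypothetical):

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "customer-data-bucket", "Key": "reports/summary.pdf"},
    ExpiresIn=3600,  # URL is valid for one hour
)
print(url)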

Incorrect options:

Permissions boundaries, Identity and Access Management (IAM) policies

Query String Authentication, Permissions boundaries

IAM database authentication, Bucket policies

Permissions boundary - A Permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity's permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundaries. When you use a policy to set the permissions boundary for a user, it limits the user's permissions but does not provide permissions on its own.

IAM database authentication - IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token. It is a database authentication technique and cannot be used to authenticate for S3.

Therefore, all three options are incorrect.

References:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-overview.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html

Question 45: Correct

An e-commerce company has an order processing workflow with several tasks to be done in parallel as well as decision steps to be evaluated for successful processing of the order. All the tasks are implemented via Lambda functions.

Which of the following is the BEST solution to meet these business requirements?

Explanation

Correct option:

Use AWS Step Functions state machines to orchestrate the workflow

AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly.

How Step Functions Work: via - https://aws.amazon.com/step-functions/

The following are key features of AWS Step Functions:

Step Functions are based on the concepts of tasks and state machines. You define state machines using the JSON-based Amazon States Language. A state machine is defined by the states it contains and the relationships between them. States are elements in your state machine. Individual states can make decisions based on their input, perform actions, and pass output to other states. In this way, a state machine can orchestrate workflows.

Please see this note for a simple example of a State Machine: via - https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-state-machine-structure.html
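
As a rough sketch, a tiny state machine with a task and a choice state could be defined in the Amazon States Language and created with boto3 like this (names, ARNs, and the definition itself are hypothetical):

import json

import boto3

definition = {
    "Comment": "Simplified order workflow with a task and a choice state",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ProcessOrder",
            "Next": "OrderOk",
        },
        "OrderOk": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.status", "StringEquals": "OK", "Next": "Done"}
            ],
            "Default": "Failed",
        },
        "Done": {"Type": "Succeed"},
        "Failed": {"Type": "Fail"},
    },
}

sfn = boto3.client("stepfunctions")

sfn.create_state_machine(
    name="OrderWorkflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExecutionRole",
    type="STANDARD",  # use "EXPRESS" for high-volume, short-lived workflows
)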

Incorrect options:

Use AWS Step Functions activities to orchestrate the workflow - In AWS Step Functions, activities are a way to associate code running somewhere (known as an activity worker) with a specific task in a state machine. When a Step Function reaches an activity task state, the workflow waits for an activity worker to poll for a task. For example, an activity worker can be an application running on an Amazon EC2 instance or an AWS Lambda function. AWS Step Functions activities cannot orchestrate a workflow.

Use AWS Glue to orchestrate the workflow - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue cannot orchestrate a workflow.

Use AWS Batch to orchestrate the workflow - AWS Batch runs batch computing jobs on the AWS Cloud. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. AWS Batch cannot orchestrate a workflow.

References:

https://aws.amazon.com/step-functions/

https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-state-machine-structure.html

Question 46: Correct

You have a three-tier web application consisting of a web layer using AngularJS, an application layer using an AWS API Gateway and a data layer in an Amazon Relational Database Service (RDS) database. Your web application allows visitors to look up popular movies from the past. The company is looking at reducing the number of calls made to endpoint and improve latency to the API.

What can you do to improve performance?

Explanation

Correct option:

Enable API Gateway Caching - You can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint. The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.
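
As an illustrative sketch, stage-level caching can be switched on with patch operations against the stage (the API ID, stage name, and cache size are hypothetical sample values):

import boto3

apigw = boto3.client("apigateway")

apigw.update_stage(
    restApiId="a1b2c3d4e5",  # hypothetical API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # cache size in GB
    ],
)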

Incorrect options:

Use Mapping Templates - A mapping template is a script expressed in Velocity Template Language (VTL) and applied to the payload using JSONPath expressions. Mapping templates help format/structure the data in a way that is easily readable, unlike a raw server response that might not always be easy to read. Mapping Templates do not help with API latency.

Use Stage Variables - Stage variables act like environment variables and can be used to change the behavior of your API Gateway methods for each deployment stage; for example, making it possible to reach a different back end depending on which stage the API is running on. Stage variables do not help in latency issues.

Use Amazon Kinesis Data Streams to stream incoming data and reduce the burden on Gateway APIs - Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.

Reference:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html

Question 47: Correct

Your application is deployed automatically using AWS Elastic Beanstalk. Your YAML configuration files are stored in the folder .ebextensions and new files are added or updated often. The DevOps team does not want to re-deploy the application every time there are configuration changes, instead, they would rather manage configuration externally, securely, and have it load dynamically into the application at runtime.

What option allows you to do this?

Explanation

Correct option:

Use SSM Parameter Store

AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. For the given use-case, as the DevOps team does not want to re-deploy the application every time there are configuration changes, so they can use the SSM Parameter Store to store the configuration externally.
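
For illustration, the application could load a SecureString parameter at runtime like this (the parameter name is hypothetical):

import boto3

ssm = boto3.client("ssm")

response = ssm.get_parameter(
    Name="/myapp/prod/db-connection-string",  # hypothetical parameter name
    WithDecryption=True,                      # needed for SecureString parameters
)
db_connection_string = response["Parameter"]["Value"]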

Incorrect options:

Use Environment variables - Environment variables provide another way to specify configuration options and can be useful for scripting or temporarily overriding settings. Since the use-case requires the configuration to be stored securely and loaded dynamically at runtime, environment variables are ruled out: they are not encrypted at rest, they are visible in clear text in the AWS Console as well as in the response of some Elastic Beanstalk API actions, and changing them requires updating the environment.

Use Stage Variables - You can use stage variables for managing multiple release stages for API Gateway, this is not what you are looking for here.

Use S3 - S3 offers the same benefit as the SSM Parameter Store where there are no servers to manage. With S3 you have to set encryption and choose other security options and there are more chances of misconfiguring security if you share your S3 bucket with other objects. You would have to create a custom setup to come close to the parameter store. Use Parameter Store and let AWS handle the rest.

Reference:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html

Question 48: Incorrect

An organization is moving its on-premises resources to the cloud. Source code will be moved to AWS CodeCommit and AWS CodeBuild will be used for compiling the source code using Apache Maven as a build tool. The organization wants a build environment that allows for scaling and running builds in parallel.

Which of the following options should the organization choose for their requirement?

Explanation

Correct option:

CodeBuild scales automatically, the organization does not have to do anything for scaling or for parallel builds - AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your own build tools. CodeBuild scales automatically to meet peak build requests.

Incorrect options:

Choose a high-performance instance type for your CodeBuild instances - For the current requirement, this will not make any difference. A larger compute type can speed up an individual build, but it does not affect scaling or the ability to run builds in parallel.

Run CodeBuild in an Auto Scaling Group - AWS CodeBuild is a managed service that scales automatically; it does not need an Auto Scaling group to scale it up.

Enable CodeBuild Auto Scaling - This has been added as a distractor. CodeBuild scales automatically to meet peak build requests.

References:

https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html

https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html

Question 49: Correct

A development team uses shared Amazon S3 buckets to upload files. Due to this shared access, objects in S3 buckets have different owners making it difficult to manage the objects.

As a developer associate, which of the following would you suggest to automatically make the S3 bucket owner, also the owner of all objects in the bucket, irrespective of the AWS account used for uploading the objects?

Explanation

Correct option:

Use S3 Object Ownership to default bucket owner to be the owner of all objects in the bucket

S3 Object Ownership is an Amazon S3 bucket setting that you can use to control ownership of new objects that are uploaded to your buckets. By default, when other AWS accounts upload objects to your bucket, the objects remain owned by the uploading account. With S3 Object Ownership, any new objects that are written by other accounts with the bucket-owner-full-control canned access control list (ACL) automatically become owned by the bucket owner, who then has full control of the objects.

S3 Object Ownership has two settings: 1. Object writer – The uploading account will own the object. 2. Bucket owner preferred – The bucket owner will own the object if the object is uploaded with the bucket-owner-full-control canned ACL. Without this setting and canned ACL, the object is uploaded and remains owned by the uploading account.
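
A minimal boto3 sketch of enabling the "Bucket owner preferred" setting on a hypothetical bucket named shared-uploads:

  import boto3

  s3 = boto3.client("s3")

  # Make the bucket owner the owner of new objects uploaded with the
  # bucket-owner-full-control canned ACL (hypothetical bucket name).
  s3.put_bucket_ownership_controls(
      Bucket="shared-uploads",
      OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]},
  )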

Incorrect options:

Use S3 CORS to make the S3 bucket owner, the owner of all objects in the bucket - Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.

Use S3 Access Analyzer to identify the owners of all objects and change the ownership to the bucket owner - Access Analyzer for S3 helps review all buckets that have bucket access control lists (ACLs), bucket policies, or access point policies that grant public or shared access. Access Analyzer for S3 alerts you to buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization.

Use Bucket Access Control Lists (ACLs) to control access on S3 bucket and then define its owner - Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. Bucket ACLs allow you to control access at the bucket level, but they do not change the ownership of objects uploaded by other accounts.

None of the above features are useful for the current scenario and hence are incorrect options.

References:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html

Question 50: Correct

Your company hosts a static website on Amazon Simple Storage Service (S3) written in HTML5. The website targets aviation enthusiasts and it has grown a worldwide audience with hundreds of thousands of visitors accessing the website now on a monthly basis. While users in the United States have a great user experience, users from other parts of the world are experiencing slow responses and lag.

Which service can mitigate this issue?

Explanation

Correct option:

Use Amazon CloudFront

Storing your static content with S3 provides a lot of advantages. But to help optimize your application’s performance and security while effectively managing cost, AWS recommends that you also set up Amazon CloudFront to work with your S3 bucket to serve and protect the content. CloudFront is a content delivery network (CDN) service that delivers static and dynamic web content, video streams, and APIs around the world, securely and at scale. By design, delivering data out of CloudFront can be more cost-effective than delivering it from S3 directly to your users.

By caching your content in Edge Locations, CloudFront reduces the load on your S3 bucket and helps ensure a faster response for your users when they request content. In addition, data transfer out for content by using CloudFront is often more cost-effective than serving files directly from S3, and there is no data transfer fee from S3 to CloudFront.

A security feature of CloudFront is Origin Access Identity (OAI), which restricts access to an S3 bucket and its content to only CloudFront and operations it performs.

CloudFront Overview: via - https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/

Incorrect options:

Use Amazon ElastiCache for Redis - Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals). ElastiCache is typically placed in front of databases that need millisecond latency. For the current scenario, the content is static and already served from S3, so an application-level caching layer does not address the geographic latency experienced by users outside the United States.

Use Amazon S3 Caching - This is a made-up option, given as a distractor.

Use Amazon S3 Transfer Acceleration - Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. However, S3 Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations. Each time S3 Transfer Acceleration is used to upload an object, AWS checks whether S3 Transfer Acceleration is likely to be faster than a regular Amazon S3 transfer. If it finds that S3 Transfer Acceleration might not be significantly faster, AWS shifts back to normal Amazon S3 transfer mode. So, this is not the right option for our use case.

References:

https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/

https://aws.amazon.com/elasticache/

Question 51: Correct

Your web application architecture consists of multiple Amazon EC2 instances running behind an Elastic Load Balancer with an Auto Scaling group having the desired capacity of 5 EC2 instances. You would like to integrate AWS CodeDeploy for automating application deployment. The deployment should re-route traffic from your application's original environment to the new environment.

Which of the following options will meet your deployment criteria?

Explanation

Correct option:

Opt for Blue/Green deployment - A Blue/Green deployment is used to update your applications while minimizing interruptions caused by the changes of a new application version. CodeDeploy provisions your new application version alongside the old version before rerouting your production traffic. The behavior of your deployment depends on which compute platform you use:

  1. AWS Lambda: Traffic is shifted from one version of a Lambda function to a new version of the same Lambda function.
  2. Amazon ECS: Traffic is shifted from a task set in your Amazon ECS service to an updated, replacement task set in the same Amazon ECS service.
  3. EC2/On-Premises: Traffic is shifted from one set of instances in the original environment to a replacement set of instances.
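
For the EC2/On-Premises case above, the blue/green behavior is configured on the CodeDeploy deployment group. The following is a hedged boto3 sketch of such a configuration; the application name, deployment group name, Auto Scaling group, target group, and service role ARN are placeholders:

  import boto3

  codedeploy = boto3.client("codedeploy")

  # Hypothetical deployment group set up for blue/green with traffic control
  # behind a load balancer target group.
  codedeploy.create_deployment_group(
      applicationName="web-app",
      deploymentGroupName="web-app-blue-green",
      serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
      autoScalingGroups=["web-app-asg"],
      deploymentStyle={
          "deploymentType": "BLUE_GREEN",
          "deploymentOption": "WITH_TRAFFIC_CONTROL",
      },
      blueGreenDeploymentConfiguration={
          "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
          "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
          "terminateBlueInstancesOnDeploymentSuccess": {
              "action": "TERMINATE",
              "terminationWaitTimeInMinutes": 5,
          },
      },
      loadBalancerInfo={"targetGroupInfoList": [{"name": "web-app-tg"}]},
  )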

Incorrect options:

Opt for Rolling deployment - Rolling deployment is an AWS Elastic Beanstalk deployment policy; it is not a CodeDeploy deployment type for EC2 instances.

Opt for Immutable deployment - Immutable deployment is also an AWS Elastic Beanstalk deployment policy, not a CodeDeploy deployment type for EC2 instances.

Opt for In-place deployment - Under this deployment type, the application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. You can use a load balancer so that each instance is deregistered during its deployment and then restored to service after the deployment is complete.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html#welcome-deployment-overview-blue-green

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments.html

Question 52: Correct

A digital marketing company has its website hosted on an Amazon S3 bucket A. The development team notices that the web fonts that are hosted on another S3 bucket B are not loading on the website.

Which of the following solutions can be used to address this issue?

Explanation

Correct option:

Configure CORS on the bucket B that is hosting the web fonts to allow Bucket A origin to make the requests

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.

To configure your bucket to allow cross-origin requests, you create a CORS configuration, which is an XML document with rules that identify the origins that you will allow to access your bucket, the operations (HTTP methods) that you will support for each origin, and other operation-specific information.

For the given use-case, you would create a <CORSRule> in <CORSConfiguration> for bucket B to allow access from the S3 website origin hosted on bucket A.

Please see this note for more details: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
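
A hedged boto3 sketch of such a CORS rule on bucket B; the bucket name and website origin below are placeholders for this example:

  import boto3

  s3 = boto3.client("s3")

  # Allow the website hosted on bucket A to fetch the web fonts stored in bucket B.
  s3.put_bucket_cors(
      Bucket="bucket-b-webfonts",
      CORSConfiguration={
          "CORSRules": [
              {
                  "AllowedOrigins": ["http://bucket-a-website.s3-website-us-east-1.amazonaws.com"],
                  "AllowedMethods": ["GET"],
                  "AllowedHeaders": ["*"],
                  "MaxAgeSeconds": 3000,
              }
          ]
      },
  )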

Incorrect options:

Enable versioning on both the buckets to facilitate the correct functioning of the website - This option is a distractor and versioning will not help to address the web fonts loading issue on the website.

Update bucket policies on both bucket A and bucket B to allow successful loading of the web fonts on the website - It's not the bucket policies but the CORS configuration on bucket B that needs to be updated to allow web fonts to be loaded on the website. Updating bucket policies will not help to address the web fonts loading issue on the website.

Configure CORS on the bucket A that is hosting the website to allow any origin to respond to requests - The CORS configuration needs to be updated on bucket B to allow web fonts to be loaded on the website hosted on bucket A. So this option is incorrect.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html

Question 53: Correct

An application runs on an EC2 instance and processes orders on a nightly basis. This EC2 instance needs to access the orders that are stored in S3.

How would you recommend the EC2 instance access the orders securely?

Explanation

Correct option:

Use an IAM role

IAM roles have been incorporated so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.

Amazon EC2 uses an instance profile as a container for an IAM role. When you create an IAM role using the IAM console, the console creates an instance profile automatically and gives it the same name as the role to which it corresponds.

This is the most secure option as the role assigned to EC2 can be used to access S3 without storing any credentials onto the EC2 instance.
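
As a minimal sketch, once an instance profile with S3 read permissions is attached to the instance, the application simply uses the SDK with no stored credentials; boto3 resolves the role's temporary credentials from the instance metadata automatically (the bucket and key names below are placeholders):

  import boto3

  # No access keys in the code or on disk: the SDK picks up the temporary
  # credentials provided by the instance profile.
  s3 = boto3.client("s3")

  response = s3.get_object(Bucket="orders-bucket", Key="orders/2024-01-15.csv")
  orders = response["Body"].read()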

Incorrect options:

Create an IAM programmatic user and store the access key and secret access key on the EC2 ~/.aws/credentials file. - While this would work, this is highly insecure as anyone gaining access to the EC2 instance would be able to steal the credentials stored in that file.

Use EC2 User Data - EC2 User Data is used to run bootstrap scripts at an instance's first launch. This option is not the right fit for the given use-case.

Create an S3 bucket policy that authorizes public access - While this would work, it would allow anyone to access your S3 bucket files. So this option is ruled out.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

Question 54: Correct

AWS CloudFormation helps model and provision all the cloud infrastructure resources needed for your business.

Which of the following services rely on CloudFormation to provision resources (Select two)?

Explanation

Correct option:

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Elastic Beanstalk uses AWS CloudFormation to launch the resources in your environment and propagate configuration changes.

AWS Serverless Application Model (AWS SAM) - You use the AWS SAM specification to define your serverless application. AWS SAM templates are an extension of AWS CloudFormation templates, with some additional components that make them easier to work with. AWS SAM needs CloudFormation templates as a basis for its configuration.

Incorrect options:

AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Lambda does not rely on CloudFormation to provision its resources.

AWS Auto Scaling - AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes. Auto Scaling can be provisioned through CloudFormation, but CloudFormation is not a mandatory requirement for it.

AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don't need to provision, manage, and scale your own build servers. AWS CodePipeline can use AWS CloudFormation as a deployment action, but CodeBuild itself does not rely on CloudFormation to provision resources.

References:

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification.html

https://aws.amazon.com/elasticbeanstalk/

Question 55: Correct

Your company is planning to move away from reserving EC2 instances and would like to adopt a more agile form of serverless architecture.

Which of the following is the simplest and the least effort way of deploying the Docker containers on this serverless architecture?

Explanation

Correct option:

Amazon Elastic Container Service (Amazon ECS) on Fargate - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. You can host your cluster on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks using the Fargate launch type.

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
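
A hedged boto3 sketch of launching a container on Fargate; the cluster, task definition, and subnet values are placeholders, and the task definition is assumed to already exist with FARGATE compatibility:

  import boto3

  ecs = boto3.client("ecs")

  # Run a task on the serverless Fargate launch type; no EC2 instances to manage.
  ecs.run_task(
      cluster="my-cluster",
      taskDefinition="my-app:1",
      launchType="FARGATE",
      count=1,
      networkConfiguration={
          "awsvpcConfiguration": {
              "subnets": ["subnet-0abc1234"],
              "assignPublicIp": "ENABLED",
          }
      },
  )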

ECS Fargate Overview: via - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Incorrect options:

Amazon Elastic Container Service (Amazon ECS) on EC2 - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. ECS uses EC2 instances and hence cannot be called a serverless solution.

Amazon Elastic Kubernetes Service (Amazon EKS) on Fargate - Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. You can choose to run your EKS clusters using AWS Fargate, which is a serverless compute for containers. Since the use-case talks about the simplest and the least effort way to deploy Docker containers, EKS is not the best fit as you can use ECS Fargate to build a much easier solution. EKS is better suited to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane or worker nodes.

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time. Beanstalk uses EC2 instances for its deployment, hence cannot be called a serverless architecture.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

https://aws.amazon.com/eks/

https://aws.amazon.com/elasticbeanstalk/

Question 56: Incorrect

A developer from your team has configured the load balancer to route traffic equally between instances or across Availability Zones. However, Elastic Load Balancing (ELB) routes more traffic to one instance or Availability Zone than the others.

Why is this happening and how can it be fixed? (Select two)

Explanation

Correct option:

Sticky sessions are enabled for the load balancer - This can be the reason for potential unequal traffic routing by the load balancer. Sticky sessions are a mechanism to route requests to the same target in a target group. This is useful for servers that maintain state information in order to provide a continuous experience to clients. To use sticky sessions, the clients must support cookies.

When a load balancer first receives a request from a client, it routes the request to a target, generates a cookie named AWSALB that encodes information about the selected target, encrypts the cookie, and includes the cookie in the response to the client. The client should include the cookie that it receives in subsequent requests to the load balancer. When the load balancer receives a request from a client that contains the cookie, if sticky sessions are enabled for the target group and the request goes to the same target group, the load balancer detects the cookie and routes the request to the same target.

If you use duration-based session stickiness, configure an appropriate cookie expiration time for your specific use case. If you set session stickiness from individual applications, use session cookies instead of persistent cookies where possible.
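
If sticky sessions turn out to be the cause of the imbalance and are not required, they can be toggled on the target group; a hedged boto3 sketch (the target group ARN is a placeholder):

  import boto3

  elbv2 = boto3.client("elbv2")

  # Disable duration-based stickiness on the target group so the load balancer
  # distributes requests across targets again.
  elbv2.modify_target_group_attributes(
      TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef",
      Attributes=[{"Key": "stickiness.enabled", "Value": "false"}],
  )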

Instances of a specific capacity type aren’t equally distributed across Availability Zones - A Classic Load Balancer with HTTP or HTTPS listeners might route more traffic to higher-capacity instance types. This distribution aims to prevent lower-capacity instance types from having too many outstanding requests. It’s a best practice to use similar instance types and configurations to reduce the likelihood of capacity gaps and traffic imbalances.

A traffic imbalance might also occur if you have instances of similar capacities running on different Amazon Machine Images (AMIs). Keeping the AMIs and configurations consistent across the fleet helps avoid this kind of imbalance.

Incorrect options:

There could be short-lived TCP connections between clients and instances - This is an incorrect statement. It is long-lived TCP connections between clients and instances that can lead to unequal distribution of traffic by the load balancer, because new instances take longer to reach connection equilibrium. Be sure to check your metrics for long-lived TCP connections that might be causing routing issues in the load balancer.

For Application Load Balancers, cross-zone load balancing is disabled by default - This is an incorrect statement. With Application Load Balancers, cross-zone load balancing is always enabled.

After you disable an Availability Zone, the targets in that Availability Zone remain registered with the load balancer, thereby receiving random bursts of traffic - This is an incorrect statement. After you disable an Availability Zone, the targets in that Availability Zone remain registered with the load balancer. However, even though they remain registered, the load balancer does not route traffic to them.

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#sticky-sessions

https://aws.amazon.com/premiumsupport/knowledge-center/elb-fix-unequal-traffic-routing/

https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html#availability-zones

Question 57: Correct

A development team has deployed a REST API in Amazon API Gateway to two different stages - a test stage and a prod stage. The test stage is used as a test build and the prod stage as a stable build. After the updates have passed the test, the team wishes to promote the test stage to the prod stage.

Which of the following represents the optimal solution for this use-case?

Explanation

Correct option:

Update stage variable value from the stage name of test to that of prod

After creating your API, you must deploy it to make it callable by your users. To deploy an API, you create an API deployment and associate it with a stage. A stage is a logical reference to a lifecycle state of your API (for example, dev, prod, beta, v2). API stages are identified by the API ID and stage name. They're included in the URL that you use to invoke the API. Each stage is a named reference to a deployment of the API and is made available for client applications to call.

Stages enable robust version control of your API. In our current use-case, after the updates pass the test, you can promote the test stage to the prod stage. The promotion can be done by redeploying the API to the prod stage or updating a stage variable value from the stage name of test to that of prod.
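
A hedged boto3 sketch of both promotion mechanisms; the API ID, stage names, and stage variable name are placeholders:

  import boto3

  apigw = boto3.client("apigateway")

  # Option 1: redeploy the current API configuration to the prod stage.
  apigw.create_deployment(restApiId="a1b2c3d4e5", stageName="prod")

  # Option 2: flip a stage variable (for example, one referenced by the
  # integration) from the test value to the prod value.
  apigw.update_stage(
      restApiId="a1b2c3d4e5",
      stageName="prod",
      patchOperations=[{"op": "replace", "path": "/variables/env", "value": "prod"}],
  )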

Incorrect options:

Deploy the API without choosing a stage. This way, the working deployment will be updated in all stages - An API can only be deployed to a stage. Hence, it is not possible to deploy an API without choosing a stage.

Delete the existing prod stage. Create a new stage with the same name (prod) and deploy the tested version on this stage - This is possible, but not an optimal way of deploying a change. Also, as prod refers to the real production system, this option will result in downtime.

API performance is optimized in a different way for prod environments. Hence, promoting test to prod is not correct. The promotion should be done by redeploying the API to the prod stage - For each stage, you can optimize API performance by adjusting the default account-level request throttling limits and enabling API caching. And these settings can be changed/updated at any time.

Reference:

https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-deploy-api.html

Question 58: Correct

The development team at a company wants to encrypt a 111 GB object using AWS KMS.

Which of the following represents the best solution?

Explanation

Correct option:

Make a GenerateDataKey API call that returns a plaintext key and an encrypted copy of a data key. Use a plaintext key to encrypt the data - The GenerateDataKey API generates a unique symmetric data key for client-side encryption. This operation returns a plaintext copy of the data key and a copy that is encrypted under a customer master key (CMK) that you specify. You can use the plaintext key to encrypt your data outside of AWS KMS and store the encrypted data key with the encrypted data. A sketch of this flow follows the steps below.

GenerateDataKey returns a unique data key for each request. The bytes in the plaintext key are not related to the caller or the CMK.

To encrypt data outside of AWS KMS:

  1. Use the GenerateDataKey operation to get a data key.

  2. Use the plaintext data key (in the Plaintext field of the response) to encrypt your data outside of AWS KMS. Then erase the plaintext data key from memory.

  3. Store the encrypted data key (in the CiphertextBlob field of the response) with the encrypted data.

To decrypt data outside of AWS KMS:

  1. Use the Decrypt operation to decrypt the encrypted data key. The operation returns a plaintext copy of the data key.

  2. Use the plaintext data key to decrypt data outside of AWS KMS, then erase the plaintext data key from memory.
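
A hedged end-to-end sketch of this envelope-encryption flow, using boto3 for the KMS calls and the third-party cryptography package for the local AES-GCM step; the CMK alias and the in-memory payload are assumptions for illustration (a real 111 GB object would be read and encrypted in chunks or as a stream):

  import os
  import boto3
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  kms = boto3.client("kms")

  # 1. Get a data key: a plaintext copy plus a copy encrypted under the CMK.
  key = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
  plaintext_key, encrypted_key = key["Plaintext"], key["CiphertextBlob"]

  # 2. Encrypt the data locally with the plaintext key, then discard that key.
  nonce = os.urandom(12)
  ciphertext = AESGCM(plaintext_key).encrypt(nonce, b"placeholder for the large object", None)
  del plaintext_key

  # 3. Store encrypted_key, nonce and ciphertext together. To decrypt later:
  restored_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
  recovered = AESGCM(restored_key).decrypt(nonce, ciphertext, None)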

Incorrect options:

Make a GenerateDataKeyWithPlaintext API call that returns an encrypted copy of a data key. Use a plaintext key to encrypt the data - This is a made-up option, given only as a distractor.

Make an Encrypt API call to encrypt the plaintext data as ciphertext using a customer master key (CMK) with imported key material - Encrypt API is used to encrypt plaintext into ciphertext by using a customer master key (CMK). The Encrypt operation has two primary use cases:

  1. To encrypt small amounts of arbitrary data, such as a personal identifier or database password, or other sensitive information.

  2. To move encrypted data from one AWS Region to another.

Neither of these use cases applies here. Moreover, the Encrypt API can only encrypt up to 4 KB of data in a single call, so it cannot be used directly on a 111 GB object.

Make a GenerateDataKeyWithoutPlaintext API call that returns an encrypted copy of a data key. Use an encrypted key to encrypt the data - The GenerateDataKeyWithoutPlaintext API generates a unique symmetric data key. This operation returns a data key that is encrypted under a customer master key (CMK) that you specify.

GenerateDataKeyWithoutPlaintext is identical to the GenerateDataKey operation except that it returns only the encrypted copy of the data key. This operation is useful for systems that need to encrypt data at some point, but not immediately. When you need to encrypt the data, you call the Decrypt operation on the encrypted copy of the key.

References:

https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html

https://docs.aws.amazon.com/kms/latest/APIReference/API_Encrypt.html

https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKeyWithoutPlaintext.html

Question 59: Correct

A recruit has created an Amazon Simple Storage Service (S3) bucket. He needs assistance in getting the security principles right for this bucket.

Which of the following is NOT a security practice for access control to S3 buckets?

Explanation

Correct option:

Use of Security Groups - A security group acts as a virtual firewall for your instances to control incoming and outgoing traffic. S3 is a managed object storage service. Security Groups are not meant for S3.

Incorrect options:

Use of IAM Roles - For applications on Amazon EC2 or other AWS services to access Amazon S3 resources, they must include valid AWS credentials in their AWS API requests. IAM role is the perfect option in such scenarios. When you use a role, you don't have to distribute long-term credentials (such as a user name and password or access keys) to an Amazon EC2 instance or AWS service such as AWS Lambda. The role supplies temporary permissions that applications can use when they make calls to other AWS resources.

More info here: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html#roles

Use of Access Control Lists (ACLs) - Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.

Use of Bucket Policies - A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Object permissions apply only to the objects that the bucket owner creates.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html

https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-bucket-policy.html

Question 60: Correct

A company has hosted its website on an Amazon S3 bucket and have used another Amazon S3 bucket for storing the rest of the assets like images, fonts, etc.

Which technique/mechanism will help the hosted website access its assets without access/permission issues?

Explanation

Correct option:

S3 Cross-Origin Resource Sharing (CORS) - Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

To configure your bucket to allow cross-origin requests, you create a CORS configuration. The CORS configuration is a document with rules that identify the origins that you will allow to access your bucket, the operations (HTTP methods) that you will support for each origin, and other operation-specific information. You can add up to 100 rules to the configuration. You can add the CORS configuration as the cors subresource to the bucket.

If you are configuring CORS in the S3 console, you must use JSON to create a CORS configuration. The new S3 console only supports JSON CORS configurations.

Incorrect options:

S3 Transfer Acceleration - Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. S3 Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations. As data arrives at an AWS Edge Location, data is routed to your Amazon S3 bucket over an optimized network path.

S3 Access Analyzer - Access Analyzer for S3 is a feature that monitors your access policies, ensuring that the policies provide only the intended access to your S3 resources. Access Analyzer for S3 evaluates your bucket access policies and enables you to discover and swiftly remediate buckets with potentially unintended access.

S3 Access Points - Amazon S3 Access Points simplifies managing data access at scale for applications using shared data sets on S3. With S3 Access Points, you can now easily create hundreds of access points per bucket, representing a new way of provisioning access to shared data sets. Access Points provide a customized path into a bucket, with a unique hostname and access policy that enforces the specific permissions and network controls for any request made through the access point.

None of these three options will help achieve the access required for the current use-case and hence are all incorrect options.

References:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html

https://aws.amazon.com/s3/faqs/

Question 61: Correct

A company has a workload that requires 14,000 consistent IOPS for data that must be durable and secure. The compliance standards of the company state that the data should be secure at every stage of its lifecycle on all of the EBS volumes they use.

Which of the following statements are true regarding data security on EBS?

Explanation

Correct option:

Amazon EBS works with AWS KMS to encrypt and decrypt your EBS volume. You can encrypt both the boot and data volumes of an EC2 instance. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:

  1. Data at rest inside the volume

  2. All data moving between the volume and the instance

  3. All snapshots created from the volume

  4. All volumes created from those snapshots

EBS volumes support both in-flight encryption and encryption at rest using KMS - This is a correct statement. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage.
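
A hedged boto3 sketch of creating an encrypted Provisioned IOPS volume that satisfies the 14,000 IOPS requirement; the Availability Zone, size, and KMS key alias are placeholders:

  import boto3

  ec2 = boto3.client("ec2")

  # Encrypted io2 volume with 14,000 provisioned IOPS; data at rest, data moving
  # between the volume and the instance, and snapshots/volumes derived from it are encrypted.
  ec2.create_volume(
      AvailabilityZone="us-east-1a",
      Size=500,
      VolumeType="io2",
      Iops=14000,
      Encrypted=True,
      KmsKeyId="alias/ebs-data-key",
  )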

Incorrect options:

EBS volumes support in-flight encryption but do not support encryption at rest - This is an incorrect statement. As discussed above, all data moving between the volume and the instance is encrypted.

EBS volumes do not support in-flight encryption but do support encryption at rest using KMS - This is an incorrect statement. As discussed above, data at rest is also encrypted.

EBS volumes don't support any encryption - This is an incorrect statement. Amazon EBS encryption offers a straightforward encryption solution for your EBS resources associated with your EC2 instances. With Amazon EBS encryption, you aren't required to build, maintain, and secure your own key management infrastructure.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

Question 62: Correct

A financial services company wants to ensure that the customer data is always kept encrypted on Amazon S3 but wants an AWS managed solution that allows full control to create, rotate and remove the encryption keys.

As a Developer Associate, which of the following would you recommend to address the given use-case?

Explanation

Correct option:

Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)

You have the following options for protecting data at rest in Amazon S3:

Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects.

Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

When you use server-side encryption with AWS KMS (SSE-KMS), you can use the default AWS managed CMK, or you can specify a customer-managed CMK that you have already created.

Creating your own customer-managed CMK gives you more flexibility and control over the CMK. For example, you can create, rotate, and disable customer-managed CMKs. You can also define access controls and audit the customer-managed CMKs that you use to protect your data.

Please see this note for more details: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
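
A hedged boto3 sketch of uploading an object with SSE-KMS using a customer-managed CMK; the bucket, key, body, and CMK alias are placeholders:

  import boto3

  s3 = boto3.client("s3")

  # Server-side encryption with a customer-managed CMK: AWS manages the encryption,
  # while the company controls creation, rotation and disabling of the key.
  s3.put_object(
      Bucket="customer-data-bucket",
      Key="records/customer-123.json",
      Body=b'{"name": "example"}',
      ServerSideEncryption="aws:kms",
      SSEKMSKeyId="alias/customer-data-cmk",
  )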

Incorrect options:

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) - When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, AWS encrypts the key itself with a master key that it regularly rotates. So this option is incorrect for the given use-case.

Server-Side Encryption with Customer-Provided Keys (SSE-C) - With Server-Side Encryption with Customer-Provided Keys (SSE-C), you will need to create the encryption keys as well as manage the corresponding process to rotate and remove the encryption keys. Amazon S3 manages the data encryption, as it writes to disks, as well as the data decryption when you access your objects. So this option is incorrect for the given use-case.

Server-Side Encryption with Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You cannot combine Server-Side Encryption with Secrets Manager to create, rotate, or disable the encryption keys.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html

Question 63: Correct

A company has their entire stack integrated with AWS X-Ray. A manager at the company noticed the skyrocketing AWS monthly usage charges for the X-Ray service. He tracked the abnormal bills to the high volume of input data going into X-Ray. As a Developer, you have been given the responsibility to fix this issue as quickly as possible, with minimal disruptions.

Which of the below is the optimal choice for this issue?

Explanation

Correct option:

Enable X-Ray Sampling - By customizing sampling rules, you can control the amount of data that you record, and modify sampling behavior on the fly without modifying or redeploying your code. Sampling rules tell the X-Ray SDK how many requests to record for a set of criteria. By default, the X-Ray SDK records the first request each second, and five percent of any additional requests.
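
A hedged boto3 sketch of adding a centralized sampling rule that records a reservoir of 1 request per second plus 1% of additional requests for a hypothetical service; all field values are illustrative:

  import boto3

  xray = boto3.client("xray")

  # Low-volume sampling rule: reservoir of 1 trace/second plus 1% of the rest.
  xray.create_sampling_rule(
      SamplingRule={
          "RuleName": "low-volume-orders",
          "Priority": 100,
          "FixedRate": 0.01,
          "ReservoirSize": 1,
          "ServiceName": "orders-service",
          "ServiceType": "*",
          "Host": "*",
          "HTTPMethod": "*",
          "URLPath": "*",
          "ResourceARN": "*",
          "Version": 1,
      }
  )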

Incorrect options:

Send only the required data from client-side - Theoretically, it is possible to filter out unwanted data and send only what is deemed necessary. But, this requires custom code changes and management overhead of tracking and maintaining the code through its life cycle. So, this option is not an optimal solution for the current scenario.

Custom configuration of the X-Ray agents - Such a custom configuration is not possible and this option is an invalid one, given only as a distractor.

Enable X-Ray Deep linking to send only the most useful data to X-Ray - The AWS X-Ray console enables you to view service maps and traces for requests that your applications serve. The console's service map is a visual representation of the JSON service graph that X-Ray generates from the trace data generated by your applications. Deep linking refers to the use of routes and queries to deep-link into specific traces or filtered views of traces and the service map. This is not a useful feature for the current scenario.

References:

https://docs.aws.amazon.com/xray/latest/devguide/xray-console-sampling.html

https://docs.aws.amazon.com/xray/latest/devguide/xray-console-deeplinks.html

Question 64: Correct

A company’s e-commerce website is expecting hundreds of thousands of visitors on Black Friday. The marketing department is concerned that high volumes of orders might stress SQS leading to message failures. The company has approached you for the steps to be taken as a precautionary measure against the high volumes.

What step will you suggest as a Developer Associate?

Explanation

Correct option:

Amazon SQS is highly scalable and does not need any intervention to handle the expected high volumes

Amazon SQS leverages the AWS cloud to dynamically scale, based on demand. SQS scales elastically with your application so you don't have to worry about capacity planning and pre-provisioning. For most standard queues (depending on queue traffic and message backlog), there can be a maximum of approximately 120,000 inflight messages (received from a queue by a consumer, but not yet deleted from the queue).

Info on Queue Quotas: via - https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-quotas.html
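
For teams that still want visibility into how close they are to the inflight-message quota, a hedged boto3 sketch that reads the relevant queue attribute (the queue URL is a placeholder):

  import boto3

  sqs = boto3.client("sqs")

  # ApproximateNumberOfMessagesNotVisible ~= messages received but not yet deleted (inflight).
  attrs = sqs.get_queue_attributes(
      QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue",
      AttributeNames=["ApproximateNumberOfMessagesNotVisible"],
  )
  inflight = int(attrs["Attributes"]["ApproximateNumberOfMessagesNotVisible"])
  print(f"Inflight messages: {inflight}")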

Incorrect options:

Pre-configure the SQS queue to increase the capacity when messages hit a certain threshold - This is an incorrect statement. Amazon SQS scales dynamically, automatically provisioning the needed capacity.

Enable auto-scaling in the SQS queue - SQS queues are, by definition, auto-scalable and do not need any configuration changes for auto-scaling.

Convert the queue into FIFO ordered queue, since messages to the downstream system will be processed faster once they are ordered - This is a wrong statement. Ordering does not make processing faster, and you cannot convert an existing standard queue to a FIFO queue. To make the move, you must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue.

Standard to FIFO queue conversion: via - https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-quotas.html

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html

Question 65: Correct

A large firm stores its static data assets on Amazon S3 buckets. Each service line of the firm has its own AWS account. For a business use case, the Finance department needs to give access to their S3 bucket's data to the Human Resources department.

Which of the below options is NOT feasible for cross-account access of S3 bucket objects?

Explanation

Correct option:

Use IAM roles and resource-based policies to delegate access across accounts within different partitions via programmatic access only - This statement is incorrect and hence the right choice for this question. IAM roles and resource-based policies delegate access across accounts only within a single partition. For example, assume that you have an account in US West (N. California) in the standard aws partition. You also have an account in China (Beijing) in the aws-cn partition. You can't use an Amazon S3 resource-based policy in your account in China (Beijing) to allow access for users in your standard AWS account.

Incorrect options:

Use Resource-based policies and AWS Identity and Access Management (IAM) policies for programmatic-only access to S3 bucket objects - Use bucket policies to manage cross-account control and audit the S3 object's permissions. If you apply a bucket policy at the bucket level, you can define who can access (Principal element), which objects they can access (Resource element), and how they can access (Action element). Applying a bucket policy at the bucket level allows you to define granular access to different objects inside the bucket by using multiple policies to control access. You can also review the bucket policy to see who can access objects in an S3 bucket.

Use Access Control List (ACL) and IAM policies for programmatic-only access to S3 bucket objects - Use object ACLs to manage permissions only for specific scenarios and only if ACLs meet your needs better than IAM and S3 bucket policies. Amazon S3 ACLs allow users to define only the following permissions sets: READ, WRITE, READ_ACP, WRITE_ACP, and FULL_CONTROL. You can use only an AWS account or one of the predefined Amazon S3 groups as a grantee for the Amazon S3 ACL.

Use Cross-account IAM roles for programmatic and console access to S3 bucket objects - Not all AWS services support resource-based policies. This means that you can use cross-account IAM roles to centralize permission management when providing cross-account access to multiple services. Using cross-account IAM roles simplifies provisioning cross-account access to S3 objects that are stored in multiple S3 buckets, removing the need to manage multiple policies for S3 buckets. This method allows cross-account access to objects that are owned or uploaded by another AWS account or AWS services. If you don't use cross-account IAM roles, the object ACL must be modified.
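
A hedged boto3 sketch of the cross-account IAM role approach: code running in the Human Resources account assumes a role exposed by the Finance account and then reads the Finance bucket (the role ARN, bucket, and key are placeholders):

  import boto3

  sts = boto3.client("sts")

  # Assume the (hypothetical) role that the Finance account exposes for HR.
  creds = sts.assume_role(
      RoleArn="arn:aws:iam::111122223333:role/FinanceS3ReadAccess",
      RoleSessionName="hr-reporting",
  )["Credentials"]

  s3 = boto3.client(
      "s3",
      aws_access_key_id=creds["AccessKeyId"],
      aws_secret_access_key=creds["SecretAccessKey"],
      aws_session_token=creds["SessionToken"],
  )
  s3.get_object(Bucket="finance-static-assets", Key="reports/q1.csv")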

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example3.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html

https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/