AWS Certified Solutions Architect Associate Practice Test 03
Question 1: Skipped
You are working for a start-up firm which has developed a new multilingual website for
sharing image & video files. You are using an EC2 instance to host this web application. To
deliver this web content with the lowest latency to end users, you have configured Amazon
CloudFront to forward query strings to the origin servers based on selected parameter
values & also to cache web content based upon these parameter values.
During a trial, it was observed that in some cases caching is not happening based upon query
strings, resulting in these requests hitting the origin servers. Which of the following need to be
checked to ensure CloudFront is caching properly based upon query strings? (Select Three.)
Check only that the query parameter names are in the same case
Make sure that the delimiter character between query string parameters is a “/” character.
(Correct)
(Correct)
Make sure that the distribution is an RTMP distribution.
Make sure that the delimiter character between query string parameters is a “&” character.
(Correct)
Explanation
CloudFront query string forwarding supports only Web distributions. For query string
forwarding, the delimiter character must always be the “&” character. Parameter names & values
used in the query string are case sensitive.
Option A is incorrect as CloudFront query string forwarding does not support RTMP
distributions.
Option D is incorrect as the delimiter character must always be “&”, not the “/” character.
Option F is incorrect because, for parameters in a query string, both names & values are case
sensitive.
Create a service that pulls SQS messages and writes these to DynamoDB to handle sudden spikes in
DynamoDB.
(Correct)
General Purpose SSD for the web server
(Correct)
(Correct)
Explanation
If the database is going to have a lot of read/write requests, then the ideal solution is to have the
underlying EBS volume as Provisioned IOPS, whereas for a standard workload, a General
Purpose SSD volume should be sufficient.
The below excerpt from AWS documentation shows the different types of EBS Volumes for
different workloads:
For more information on EBS Volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
Question 4: Skipped
You need to ensure that new objects being uploaded to an S3 bucket are available in
another region. This is because of the criticality of the data that is hosted in the S3 bucket.
How can you achieve this in the easiest way possible?
Write a script to copy the objects to another bucket in the destination region.
(Correct)
Enable versioning which will copy the objects to the destination region.
Explanation
AWS Documentation mentions the following:
Cross-Region Replication is a bucket-level configuration that enables automatic, asynchronous
copying of objects across buckets in different AWS Regions.
For more information on Cross-Region Replication in the Simple Storage Service, please visit
the below URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
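As an illustration only (not part of the original question material), a minimal boto3 sketch of enabling Cross-Region Replication is shown below. The bucket names, IAM role ARN and regions are hypothetical, and versioning must already be enabled on both buckets.
import boto3

s3 = boto3.client('s3', region_name='us-east-1')

# Versioning must already be enabled on both the source and destination buckets.
s3.put_bucket_replication(
    Bucket='source-bucket',  # hypothetical source bucket
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/s3-replication-role',  # hypothetical role
        'Rules': [{
            'ID': 'replicate-all-new-objects',
            'Status': 'Enabled',
            'Prefix': '',  # replicate every newly uploaded object
            'Destination': {'Bucket': 'arn:aws:s3:::destination-bucket-eu'},
        }],
    },
)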
Question 5: Skipped
You are working as an AWS Architect for a global media firm. They have web servers
deployed on EC2 instances across multiple regions. For audit purposes, you have created a
CloudTrail trail to store all CloudTrail event log files in an S3 bucket.
This trail applies to all regions & the log files are stored in S3 buckets in the EU-Central region.
During last year's audit, the auditors raised a query on the integrity of the log files that are stored
in the S3 buckets & tendered a non-compliance. Which feature can help you gain compliance
from the auditors for this query?
Use Amazon SSE-S3 encryption for CloudTrail log file while storing to S3 buckets.
Use S3 bucket policy to grant access to only Security head for S3 buckets having CloudTrail log
files.
Use Amazon SSE-KMS encryption for CloudTrail log file while storing to S3 buckets.
(Correct)
Explanation
After you enable CloudTrail log file integrity validation, CloudTrail creates a hash file called a
digest file which references the log files that are generated. This digest file is saved in a different
folder of the S3 bucket where the log files are saved. Each digest file is signed using the private
key of a public & private key pair, and the digest file can be validated using the public key. This
feature ensures that any modification made to CloudTrail log files can be detected.
Option A is incorrect as, by default, all CloudTrail log files are delivered to S3 buckets using
SSE-S3 encryption; this does not ensure the integrity of the log files.
Option B is incorrect as, with Amazon SSE-KMS encryption for CloudTrail log files, there
would be an additional layer of security for the log files, but it won't ensure the integrity of the
log files.
Option C is incorrect as, although this will restrict access to the bucket, it won't ensure that no
modification has been done to the log files after they are delivered to the S3 buckets.
For more information on CloudTrail log file integrity, refer to the following URLs:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
https://aws.amazon.com/blogs/aws/aws-cloudtrail-update-sse-kms-encryption-log-file-integrity-verification/
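Purely as a hedged illustration, the sketch below shows how log file integrity validation could be turned on for an existing trail with boto3; the trail name and region are hypothetical.
import boto3

cloudtrail = boto3.client('cloudtrail', region_name='eu-central-1')

# Turn on log file integrity validation for an existing trail (trail name is hypothetical).
cloudtrail.update_trail(
    Name='audit-trail',
    EnableLogFileValidation=True,
)
# CloudTrail will then deliver digest files to the trail's S3 bucket, which can later
# be checked, for example with the "aws cloudtrail validate-logs" CLI command.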
Question 6: Skipped
You work in the media industry and have created a web application where users will be
able to upload photos they create to your website. This web application must be able to call
the S3 API in order to be able to function. Where should you store your API credentials
whilst maintaining the maximum level of security?
Don’t save your API credentials. Instead, create a role in IAM and assign this role to an EC2 instance
when you first create it.
(Correct)
For more information on IAM Roles, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Question 7: Skipped
You have an application to be setup in AWS, and the following points are to be considered:
a) A Web tier hosted on EC2 Instances
b) Session data to be written to DynamoDB
c) Log files to be written to Microsoft SQL Server
How will you ensure that the application writes data to a DynamoDB table?
Create an IAM role that allows read access to the DynamoDB table.
Create an IAM role that allows write access to the DynamoDB table.
(Correct)
Add an IAM user that allows write access to the DynamoDB table.
Explanation
IAM roles are designed so that your applications can securely make API requests from your
instances, without requiring you to manage the security credentials that the applications use.
Instead of creating and distributing your AWS credentials, you can delegate permission to make
API requests using IAM roles.
For more information on IAM roles, please refer to the link below:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
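To illustrate the point (as an assumption-laden sketch, with a hypothetical table name and item), application code running on an instance with an IAM role attached needs no embedded keys:
import boto3

# No access keys are configured here: when this runs on an EC2 instance with an
# IAM role attached, boto3 obtains temporary credentials from the instance profile.
dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
sessions = dynamodb.Table('WebSessions')  # hypothetical table name

sessions.put_item(Item={
    'SessionId': 'abc-123',
    'UserId': 'user-42',
    'LastSeen': '2020-01-01T12:00:00Z',
})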
Question 8: Skipped
IoT sensors monitor the number of bags that are handled at an airport. The data gets sent
back to a Kinesis stream with default settings. Every alternate day, the data from the
stream is sent to S3 for processing. But it is noticed that S3 is not receiving all of the data
that is being sent to the Kinesis stream. What could be the reason for this?
The sensors probably stopped working on some days, hence data was not sent to the stream.
The default retention period of the data stream is set to 24 hours only, and hence the failure.
(Correct)
Explanation
Kinesis Streams support changes to the data record retention period of your stream. A Kinesis
stream is an ordered sequence of data records meant to be written to and read from in real time.
Data records are therefore stored in shards in your stream temporarily. The time period from
when a record is added to when it is no longer accessible is called the retention period. A Kinesis
stream stores records for 24 hours by default, and for up to 168 hours.
Option A, even though a possibility, cannot be taken for granted as the right option.
Option B is invalid since S3 can store data indefinitely unless you have a lifecycle policy
defined.
Option D is invalid because the Kinesis service is perfect for this sort of data ingestion.
For more information on Kinesis data retention, please refer to the below URL:
http://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html
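As a small illustrative sketch (stream name is hypothetical), the retention period could be raised from the 24-hour default to the 168-hour figure mentioned above using boto3:
import boto3

kinesis = boto3.client('kinesis', region_name='us-east-1')

# Extend retention so data read every alternate day is still available.
kinesis.increase_stream_retention_period(
    StreamName='baggage-counts',   # hypothetical stream name
    RetentionPeriodHours=168,
)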
Question 9: Skipped
You need to ensure that data stored in S3 is encrypted but do not want to manage the
encryption keys. Which of the following encryption mechanisms can be used in this case?
SSE-KMS
SSE-SSL
SSE-C
SSE-S3
(Correct)
Explanation
AWS Documentation mentions the following on Encryption keys:
SSE-S3 requires that Amazon S3 manages the data and master encryption keys.
SSE-C requires that you manage the encryption keys.
SSE-KMS requires that AWS manages the data key but you manage the master key in AWS
KMS.
For more information on using the Key Management service for S3, please visit the below URL:
https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html
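For illustration, a minimal sketch of requesting SSE-S3 on a single upload is shown below; the bucket, key and payload are hypothetical.
import boto3

s3 = boto3.client('s3')

# Request SSE-S3: Amazon S3 creates and manages the encryption keys.
s3.put_object(
    Bucket='my-data-bucket',        # hypothetical bucket
    Key='reports/2020/q1.csv',
    Body=b'col1,col2\n1,2\n',
    ServerSideEncryption='AES256',  # 'AES256' corresponds to SSE-S3
)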
Question 10: Skipped
A company has a Redshift Cluster defined in AWS. The IT Operations team have ensured
that both automated and manual snapshots are in place. Since the cluster is going to be run
for a long duration of a couple of years, Reserved Instances have been purchased. There
has been a recent concern on the cost being incurred by the cluster. Which of the following
steps can be carried out to minimize the costs being incurred by the cluster?
(Correct)
For more information on working with Snapshots, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html
Question 11: Skipped
Your IT Supervisor is worried about users accidentally deleting objects in an S3 bucket.
Which of the following can help prevent accidental deletion of objects in an S3 bucket?
Choose 3 answers from the options given below.
(Correct)
(Correct)
(Correct)
For more information on the features of S3, please visit the following URL:
https://aws.amazon.com/s3/faqs/
Amazon S3 for storing the log files and Amazon EMR for processing the log files.
(Correct)
Amazon DynamoDB to store the logs and EC2 for running custom log analysis scripts.
Amazon S3 for storing the log files and EC2 Instances for processing the log files.
Explanation
Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such
as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By
using these frameworks and related open-source projects, such as Apache Hive and Apache Pig,
you can process data for analytics purposes and business intelligence workloads. Additionally,
you can use Amazon EMR to transform and move large amounts of data into and out of other
AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and
Amazon DynamoDB.
Options B and C, though partially correct, would be an overhead for EC2 Instances to process
log files when you already have a ready-made service to help in this regard.
Option D is invalid because DynamoDB is not an ideal option to store log files.
Using a single Lexicon, create different aliases for the word “S3” & apply them in different orders. Upload
the Lexicon in the us-west-1 & us-east-1 regions.
Using multiple Lexicons, create different aliases for the word “S3” & apply them in different orders. Upload
the Lexicons in the us-west-1 & us-east-1 regions.
(Correct)
Using multiple Lexicons, create different aliases for the word “S3” & apply them in different orders. Upload
the Lexicons in the us-west-1 region & use them for both regions.
Using a single Lexicon, create different aliases for the word “S3” & apply them in different orders. Upload
the Lexicon in the us-east-1 region & use it for both regions.
Explanation
Lexicons are specific to a region. You will need to upload the Lexicon in each region where you
need to use it. For a single piece of text which appears multiple times in the content, you can
create aliases using multiple Lexicons to produce different speech.
Option A is incorrect as Lexicons need to be uploaded in all regions where the content will be
using Amazon Polly.
Option C is incorrect as, if a single word repeats multiple times in the content & needs to have
different speech, multiple Lexicons need to be created.
Option D is incorrect as Lexicons need to be uploaded in all regions where the content will be
using Amazon Polly, & to have different speech for a single word repeating multiple times,
multiple Lexicons need to be created.
Use an Amazon SQS queue to throttle data going to the Amazon RDS DB Instance.
(Correct)
(Correct)
Add Amazon RDS DB Read Replicas, and have your application direct read queries to them.
(Correct)
Add your Amazon RDS DB Instance to an Auto Scaling group and configure your CloudWatch
metric based on CPU utilization.
(Correct)
Explanation
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB)
instances. This replication feature makes it easy to elastically scale out beyond the capacity
constraints of a single DB Instance for read-heavy database workloads. You can create one or
more replicas of a given source DB Instance and serve high-volume application read traffic from
multiple copies of your data, thereby increasing aggregate read throughput.
For more information on Read Replicas, please refer to the link below.
https://aws.amazon.com/rds/details/read-replicas/
Question 15: Skipped
A company has a Redshift cluster for petabyte-scale data warehousing. The data within the
cluster is easily reproducible from additional data stored on Amazon S3. The company
wants to reduce the overall total cost of running this Redshift cluster. Which scenario
would best meet the needs of the running cluster, while still reducing total overall
ownership of the cluster? Choose the correct answer from the options below.
Instead of implementing automatic daily backups, write a CLI script that creates manual snapshots
every few days. Copy the manual snapshot to a secondary AWS region for disaster recovery
situations.
Implement daily backups, but do not enable multi-region copy to save data transfer costs.
Enable automated snapshots but set the retention period to a lower number to reduce storage costs.
(Correct)
Explanation
Snapshots are point-in-time backups of a cluster. There are two types of snapshots: automated
and manual. Amazon Redshift stores these snapshots internally in Amazon S3 by using an
encrypted Secure Sockets Layer (SSL) connection. If you need to restore from a snapshot,
Amazon Redshift creates a new cluster and imports data from the snapshot that you specify.
Since the question already mentions that the cluster is easily reproducible from additional data
stored on Amazon S3, you do not need to maintain snapshots.
For more information on Redshift Snapshots, please visit the below URL:
http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html
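As an illustrative sketch only (cluster identifier is hypothetical), the automated snapshot retention period could be reduced with boto3 to lower snapshot storage costs for an easily reproducible cluster:
import boto3

redshift = boto3.client('redshift', region_name='us-east-1')

# Keep automated snapshots for only one day, since the data can be rebuilt from S3.
redshift.modify_cluster(
    ClusterIdentifier='analytics-cluster',
    AutomatedSnapshotRetentionPeriod=1,
)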
Question 16: Skipped
A company is hosting a MySQL database in AWS using the AWS RDS service. To offload
the reads, a Read Replica has been created and reports are run off the Read Replica
database. But at certain times, the reports show stale data. Why may this be the case?
(Correct)
The backup of the original database has not been set properly.
Explanation
An AWS Whitepaper on the caveat for Read Replicas is given below which must be taken into
consideration by designers:
Read Replicas are separate database instances that are replicated asynchronously. As a result,
they are subject to replication lag and might be missing some of the latest transactions.
Application designers need to consider which queries have tolerance to slightly stale data. Those
queries can be executed on a Read Replica, while the rest should run on the primary node. Read
Replicas also cannot accept any write queries.
For more information on AWS Cloud best practices, please visit the following URL:
https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf
Question 17: Skipped
An application consists of a web server and database server hosted on separate EC2
Instances. There are lot of read requests on the database which is degrading the
performance of the application. Which of the following can help improve the performance
of the database under this heavy load?
Enable Multi-AZ for the database.
(Correct)
For more information on AWS ElastiCache, please visit the following URL:
https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.html
Question 18: Skipped
Users within a company need a place to store their documents. Each user must have his/her
own location for placing the set of documents and should not be able to view another
person’s documents. Also, users should be able to retrieve their documents easily. Which
AWS service would be ideal for this requirement?
AWS Redshift
(Correct)
AWS Glacier
Explanation
The Simple Storage Service is the perfect place to store the documents. You can define a folder
for each user and have policies which restrict access so that each user can only access his/her
own files.
https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
https://aws.amazon.com/premiumsupport/knowledge-center/iam-s3-user-specific-folder/
Question 19: Skipped
Your company's management team has asked you to replicate the current set of AWS
resources that you are using now, into another region. Which of the following is the cost-
optimized option?
Create a DB snapshot of the existing DB and AMI of the existing instances to be used in another
region
(Correct)
Use Elastic Beanstalk to create another copy of the infrastructure in another region
(Correct)
EBS General Purpose SSD
Explanation
Since this is a high performance requirement with high IOPS needed, one should opt for EBS
Provisioned IOPS SSD.
The below snapshot from the AWS Documentation mentions the need for using Provisioned
IOPS for better IOPS performance in database based applications.
For more information on AWS EBS Volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
Question 21: Skipped
A company is planning on moving their PostgreSQL database to AWS. They want to have
the ability to have Replicas for the database and automated backup. Which of the following
databases would be ideal for this scenario?
AWS DynamoDB
Amazon Aurora
(Correct)
AWS Redshift
Explanation
AWS Documentation mentions the following on Amazon Aurora:
Amazon Aurora is a drop-in replacement for MySQL and PostgreSQL. The code, tools and
applications you use today with your existing MySQL and PostgreSQL databases can be used
with Amazon Aurora.
For more information on Amazon Aurora, please visit the following URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html
Note:
Both (Aurora and PostgreSQL) are capable of meeting all the requirements mentioned in the question. But
Amazon Aurora will be the best option (here we have to select only one answer) because of its
performance over PostgreSQL: Amazon Aurora PostgreSQL delivers up to three times the
performance of PostgreSQL.
Question 22: Skipped
You are building a stateless architecture for an application which will consist of web
servers and an Auto Scaling Group. Which of the following would be an ideal storage
mechanism for Session data?
AWS S3
AWS Redshift
AWS DynamoDB
(Correct)
Explanation
The below diagram from AWS Documentation shows how stateless architecture would look like:
For more information on architecting for the cloud, please visit the below URL:
https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf
Question 23: Skipped
You create an Auto Scaling Group which is used to spin up instances On Demand. As an
architect, you need to ensure that the instances are pre-installed with a software when they
are launched. What are the ways in which you can achieve this? Choose 2 answers from the
options given below.
Add the software installation to the configuration for the Auto Scaling Group.
Add the scripts for the installation in the User data section.
(Correct)
Ask the IT operations team to install the software as soon as the instance is launched.
Explanation
The User data section of an instance launch can be used to pre-configure software after the
instance is initially booted.
For more information on User data, please visit the below URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Also, you can create an AMI or a golden image with the already installed software, then create a
launch configuration which can be used by that Auto Scaling Group.
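Purely as an illustration of the user data approach described above, a minimal boto3 sketch is shown below; the AMI ID, instance type and bootstrap script are hypothetical, and it assumes the SDK handles the base64 encoding of the user data for RunInstances.
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Simple bootstrap script that pre-installs software on first boot.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # hypothetical AMI ID
    InstanceType='t3.micro',
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)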
Configure the ELB to perform health checks on the EC2 instances and implement auto-scaling
(Correct)
Change the instance size to the maximum available to compensate for failure
Use CloudWatch to monitor the VPC Flow Logs for the VPC the instances are deployed in
Explanation
Correct:
C - Using the Elastic Load Balancer to perform health checks will determine whether or not to
remove a non-performing or underperforming instance and have the Auto Scaling group launch a
new instance.
Incorrect:
A. Increasing the instance size doesn’t prevent failure of one or both the instances, therefore the
website can still become slow or unavailable
B. Monitoring the VPC flow logs for the VPC will capture VPC traffic, not traffic for the EC2
instance. You would need to create a flow log for a network interface.
D. Replicating the same two instance deployment may not prevent failure of instances and could
still result in the website becoming slow or unavailable.
Reference:
https://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html#working-with-flow-logs
Question 25: Skipped
You are working as an AWS Administrator for a software firm which has a popular web
application hosted on EC2 instances in various regions. You are using AWS CloudHSM for
offloading SSL/TLS processing from the web servers. Since this is a critical application for the
firm, you need to ensure that proper backups are performed for the data in AWS CloudHSM
on a daily basis. Which of the following is used by AWS CloudHSM for performing a
secure & durable backup?
Ephemeral backup key (EBK) is used to encrypt data & Persistent backup key (PBK) is used to
encrypt EBK before saving data to Amazon S3 bucket in the same region as that of AWS CloudHSM
cluster.
(Correct)
Data Key is used to encrypt data & Customer Managed key (CMK) is used to encrypt Data Key
before saving data to Amazon S3 bucket in the same region as that of AWS CloudHSM cluster.
Ephemeral backup key (EBK) is used to encrypt data & Persistent backup key (PBK) is used to
encrypt EBK before saving data to Amazon S3 bucket in different region than AWS CloudHSM
cluster.
Data Key is used to encrypt data & Customer Managed key (CMK) is used to encrypt Data Key
before saving data to Amazon S3 bucket in different region than AWS CloudHSM cluster.
Explanation
For backing up AWS CloudHSM data to Amazon S3 buckets in the same region, AWS
CloudHSM generates a unique Ephemeral Backup Key (EBK) to encrypt all data using an AES
256-bit encryption key. This Ephemeral Backup Key (EBK) is further encrypted using a
Persistent Backup Key (PBK), which is also an AES 256-bit encryption key.
Option B is incorrect as a Data Key & Customer Managed Key are not used by AWS CloudHSM
for encrypting data; instead, the EBK & PBK are used for encryption of the data.
Option C is incorrect as, while backing up data from an AWS CloudHSM cluster to an Amazon
S3 bucket, the bucket should be in the same region as the AWS CloudHSM cluster.
Option D is incorrect as a Data Key & Customer Managed Key are not used by AWS CloudHSM
for encrypting data; instead, the EBK & PBK are used to encrypt the data & save it to an
Amazon S3 bucket in the same region.
For more information on backing up data from AWS CloudHSM, refer to the following URL:
https://docs.aws.amazon.com/cloudhsm/latest/userguide/backups.html
Question 26: Skipped
An application in AWS is currently running in the Singapore region. You have been asked
to implement disaster recovery for the same. So, if the application goes down in the
Singapore region, it has to be started in the Asia region. Your application relies on pre-
built AMIs. As a part of your disaster recovery strategy, which of the below points would
you consider?
Copy the AMI from the Singapore region to the Asia region. Modify the Auto Scaling groups in the
backup region to use the new AMI ID in the backup region.
(Correct)
Nothing, because all AMIs by default are available in any region as long as they are created within
the same account.
Modify the image permissions and share the AMI to the Asia region.
Modify the image permissions to share the AMI with another account, then set the default region to
the backup region.
Explanation
If you need an AMI across multiple regions, you have to copy the AMI across regions. Note that
by default, AMIs that you have created will not be available across all regions. Hence, option A
is automatically invalid.
You can share AMIs with other users, but they will not be available across regions. Hence,
options C and D are also invalid. You have to copy the AMI across regions.
For more information on copying AMIs, please refer to the URL below.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
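As a hedged sketch of the copy step (the AMI ID is hypothetical, and the destination region is only an example of an "Asia" region), the AMI could be copied into the DR region with boto3:
import boto3

# Run the copy from the destination (DR) region, pulling the AMI from Singapore.
ec2_dr = boto3.client('ec2', region_name='ap-northeast-1')  # example DR region

response = ec2_dr.copy_image(
    Name='webapp-dr-copy',
    SourceImageId='ami-0123456789abcdef0',  # hypothetical AMI in ap-southeast-1
    SourceRegion='ap-southeast-1',
)
print('New AMI ID in the DR region:', response['ImageId'])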
Question 27: Skipped
A company currently hosts a lot of data at their on-premises location. They want to start
storing backups of this data on AWS. How can this be achieved in the most efficient way
possible?
(Correct)
Setup the database in a local data center and use a private gateway to connect the application to the
database.
Setup the public website on a public subnet and set up the database in a private subnet which
connects to the Internet via a NAT Gateway.
(Correct)
Setup the database in a private subnet with a security group which only allows outbound traffic.
Setup the database in a public subnet with a security group which only allows inbound traffic.
Explanation
The below diagram from AWS Documentation showcases this architecture:
For more information on the VPC Scenario for public and private subnets, please see the below
link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
Question 29: Skipped
A company has been using AWS cloud services for six months and just finished a security
review. Which finding below is considered a best practice in the Security pillar of the
Well-Architected Framework?
Using the root user to create all new user accounts, at any time
(Correct)
Explanation
B - Monitoring and alerting for key metrics and events is a best practice of the Security pillar
Incorrect:
A. For the root user, you should follow the best practice of only using this login to create
another, initial set of IAM users and groups for longer-term identity management operations
C. Non-overlapping Private IP addresses is in the Reliability pillar
D. Design using elasticity to meet demand is in the Performance Efficiency pillar (Design for
Cloud Operations)
Reference:
https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
https://d1.awsstatic.com/whitepapers/architecture/AWS-Security-Pillar.pdf
Question 30: Skipped
You have a requirement for deploying an existing Java based application to AWS. There is
a need for automatic scaling for the underlying environment. Which of the following can be
used to deploy this environment in the quickest way possible?
(Correct)
For more information on the Elastic Beanstalk service, please visit the following URL:
https://aws.amazon.com/elasticbeanstalk/
Question 31: Skipped
A company planning to move to the AWS Cloud, wants to leverage its existing Chef recipes
for configuration management of its infrastructure. Which AWS service would be ideal for
this requirement?
AWS OpsWorks
(Correct)
AWS Inspector
For more information on AWS OpsWorks, please visit the following URL:
https://docs.aws.amazon.com/opsworks/latest/userguide/welcome.html
Question 32: Skipped
A company has set up their data layer in the Simple Storage Service. There are a number
of requests which include read/write and updates to objects in an S3 bucket. Users
sometimes complain that updates to an object are not being reflected. Which of the
following could be a reason for this?
Encryption is enabled for the bucket, hence it is taking time for the update to occur.
Versioning is not enabled for the bucket, so the newer version does not reflect the right data.
(Correct)
Explanation
Updates made to objects in S3 follow an eventual consistency model. Hence, for object updates,
there can be a slight delay when the updated object is provided back to the user on the next read
request.
For more information on various aspects of the Simple Storage Service, please visit the
following URL:
https://aws.amazon.com/s3/faqs/
Question 33: Skipped
You have created your own VPC and subnet in AWS and launched an instance in that
subnet. On attaching an Internet Gateway to the VPC, you see that the instance has a
public IP. The route table is shown below:
The instance still cannot be reached from the Internet. Which of the below changes need to
be made to the route table to ensure that the issue is resolved?
Add the following entry to the route table – Destination as 10.0.0.0/16 and Target as Internet
Gateway
Add the following entry to the route table - Destination as 0.0.0.0/16 and Target as Internet Gateway
Add the following entry to the route table – Destination as 0.0.0.0/0 and Target as Internet Gateway
(Correct)
Modify the above route table – Destination as 10.0.0.0/16 and Target as Internet Gateway
Explanation
The route table needs to be modified as shown below to ensure that routes from the Internet
reach the instance:
For more information on Route Tables, please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html
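As an illustrative sketch (route table and Internet Gateway IDs are hypothetical), the missing default route could be added with boto3 as follows:
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Send all Internet-bound traffic (0.0.0.0/0) to the Internet Gateway.
ec2.create_route(
    RouteTableId='rtb-0123456789abcdef0',
    DestinationCidrBlock='0.0.0.0/0',
    GatewayId='igw-0123456789abcdef0',
)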
Question 34: Skipped
You currently have an EC2 instance hosting a web application. It's expected that in the
coming months the number of users accessing the web application will increase. How can
we provide high availability to the web application? Choose 2 answers from the options
given below.
(Correct)
Set up your web app on more EC2 instances and set them behind an Elastic Load Balancer.
(Correct)
For more information on architecting for the cloud, please visit the following URL:
https://aws.amazon.com/whitepapers/architecting-for-the-aws-cloud-best-practices/
Note:
The reason is that Amazon ElastiCache improves application performance by storing critical
pieces of data in memory for fast access. You can use this caching to significantly improve
latency and throughput for many read-heavy application workloads, but it will not, by itself,
provide high availability or elasticity.
Use EBS Volumes to store the videos. Create a script to delete the videos after a month.
Store the videos in Amazon Glacier and then use Lifecycle Policies.
Configure object expiration on S3 bucket and the policy takes care of deleting the videos on
completion of 30 days.
(Correct)
Store the videos using Stored Volumes. Create a script to delete the videos after a month.
Explanation
AWS Documentation mentions the following on Lifecycle Policies:
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket.
The configuration is a set of one or more rules, where each rule defines an action for Amazon S3
to apply to a group of objects. These actions can be classified as follows:
Transition actions – In which you define when objects transition to another storage class. For
example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent
access) storage class 30 days after creation, or archive objects to the GLACIER storage class one
year after creation.
Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the
expired objects on your behalf.
For more information on AWS S3 Lifecycle policies, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
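To make the expiration action concrete, here is a minimal boto3 sketch (bucket name is hypothetical) that deletes every object 30 days after creation:
import boto3

s3 = boto3.client('s3')

# Expire (delete) all objects 30 days after they are created.
s3.put_bucket_lifecycle_configuration(
    Bucket='training-videos',  # hypothetical bucket
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'delete-after-30-days',
            'Status': 'Enabled',
            'Filter': {'Prefix': ''},    # apply to all objects
            'Expiration': {'Days': 30},
        }],
    },
)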
Question 36: Skipped
You have a set of Docker images that you use for building containers. You want to start
using the Elastic Container Service and utilize the Docker images. You need a place to store
these Docker images. Which of the following can be used for this purpose?
(Correct)
Use EC2 Instances with EBS Volumes to store the Docker images.
For more information on the Elastic Container Service, please visit the following URL:
https://aws.amazon.com/ecr/?nc2=h_m1
Question 37: Skipped
You need to have a Data storage layer in AWS. Following are the key requirements:
a) Storage of JSON documents
b) Availability of Indexes
c) Automatic scaling
Which of the following would be an ideal storage layer for the above requirements?
AWS DynamoDB
(Correct)
AWS Glacier
AWS S3
Explanation
AWS Documentation mentions the following:
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and
predictable performance with seamless scalability. DynamoDB enables customers to offload the
administrative burdens of operating and scaling distributed databases to AWS so that they don’t
have to worry about hardware provisioning, setup and configuration, throughput capacity
planning, replication, software patching, or cluster scaling.
(Correct)
Explanation
AWS Documentation mentions the following:
By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption
(SSE). You can also choose to encrypt your log files with an AWS Key Management Service
(AWS KMS) key. You can store your log files in your bucket for as long as you want. You can
also define Amazon S3 lifecycle rules to archive or delete log files automatically. If you want
notifications about log file delivery and validation, you can set up Amazon SNS notifications.
For more information on how CloudTrail works, please visit the following URL:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html
Question 39: Skipped
You are an AWS Solutions Architect and are architecting an application environment on
AWS. Which service or service feature would you enable to take advantage of monitoring
to ensure that auditing the environment for compliance is easy and follows strict security
compliance requirements?
(Correct)
SSL Logging
Explanation
AWS CloudTrail is a de facto service provided by AWS for monitoring all the API calls to AWS
and is used for logging and monitoring for compliance purposes. Amazon CloudTrail detects
every call made to AWS and creates a log which can then be used for analysis.
For more information on Amazon CloudTrail, please visit the link below.
https://aws.amazon.com/cloudtrail/
Question 40: Skipped
You are hosting a web server on an EC2 Instance. With the number of requests consuming
a large part of the CPU, the response performance for the application is getting degraded.
Which of the following would help alleviate the problem and provide a better response
time?
(Correct)
Place the EC2 Instance in an Auto Scaling Group with the max size as 1.
Explanation
Since there is a mention of only one EC2 instance, placing it behind the ELB would not make
much sense, hence Option A and B are invalid.
Having it in an Auto Scaling Group with just one instance would not make much sense.
CloudFront distribution would help alleviate the load on the EC2 Instance because of its edge
location and cache feature.
Create one large EC2 instance to host the website and replicate it in every region
Register a domain with Route53 and verify ahead of time that a unique S3 bucket name can be
created that matches the domain name.
(Correct)
Create a Content Delivery Network (CDN) to deliver your images and files
Create an auto-scaling group of EC2 instances and manage the web hosting on these instances
Explanation
Correct:
A - S3 static web hosting is the quickest way to setup this website. Because bucket names are
unique across all regions, it’s important to know that your S3 bucket is available before
purchasing a domain name.
Incorrect:
B. Hosting on EC2 is not necessary here as server-side scripting is not needed and S3 will scale
automatically.
C. Hosting on EC2 is not necessary and this particular implementation can lead to different
configurations on each server.
D. A CDN will improve the delivery time of your files and pages to the customer but is not a
hosting solution itself.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-cloudfrontwalkthrough.html
Question 42: Skipped
You have been tasked with architecting an application in AWS. The architecture would
consist of EC2, the Classic Load Balancer, Auto Scaling and Route 53. There is a directive
to ensure that Blue-Green deployments are possible in this architecture. Which routing
policy could you ideally use in Route 53 for achieving Blue-Green deployments?
Weighted
(Correct)
Multivalue Answer
Simple
Latency
Explanation
AWS Documentation mentions that Weighted routing policy is good for testing new versions of
the software. And that this is the ideal approach for Blue-Green deployments.
Weighted routing lets you associate multiple resources with a single domain name
(example.com) or subdomain name (acme.example.com) and choose how much traffic is routed
to each resource. This can be useful for a variety of purposes, including load balancing and
testing new versions of software.
For more information on Route 53 routing policies, please visit the following URL:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
Note: Multivalue Answer routing is recommended only when you want to route traffic randomly
to multiple resources, such as web servers. You can create one multivalue answer record for each
resource and, optionally, associate an Amazon Route 53 health check with each record.
However, in our case, we need to choose how much traffic is routed to each resource (blue and
green). For example, Blue is currently live and we need to send less portion of traffic to Green,
to check everything works fine. If yes, then we can decide to go with Green resources. If no, we
can change the weight for that record to 0. Blue will be completely live again.
Note:
When you implement a Blue-Green Deployment, it is not always the case that the Blue
environment is live and the Green environment is idle, or vice versa. During the testing phase,
you can route your traffic to both the Blue and Green environments with a specified traffic split.
Reference Link:
https://d1.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
(See page 11 of 35 of the whitepaper, where AWS explains this with a diagram.)
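As an illustrative, hedged sketch of weighted records for a Blue-Green split (the hosted zone ID, record name and ELB DNS names are hypothetical), boto3 could be used as follows:
import boto3

route53 = boto3.client('route53')

# Send roughly 90% of traffic to the blue stack and 10% to the green stack.
changes = []
for label, target, weight in [('blue', 'blue-elb.example.com', 90),
                              ('green', 'green-elb.example.com', 10)]:
    changes.append({
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'app.example.com',
            'Type': 'CNAME',
            'SetIdentifier': label,   # distinguishes the weighted records
            'Weight': weight,
            'TTL': 60,
            'ResourceRecords': [{'Value': target}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId='Z0123456789ABCDEFGHIJ',  # hypothetical hosted zone
    ChangeBatch={'Changes': changes},
)
Shifting the weights gradually (for example 90/10, then 50/50, then 0/100) moves traffic from Blue to Green without a DNS cut-over.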
Question 43: Skipped
You need to ensure that instances in a private subnet can access the Internet. The solution
should be highly available and ensure less maintenance overhead. Which of the following
would ideally fit this requirement?
(Correct)
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-network-internet-manual.html
Shown below is a comparison of the NAT Gateway and NAT Instances as per the AWS
Documentation.
The documentation states that the NAT Gateway is highly available and requires less
management.
For more information on the above comparison, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-comparison.html
Question 44: Skipped
You want to build a decoupled, highly available and fault tolerant architecture for your
application in AWS. You decide to use EC2, the Classic Load Balancer, Auto Scaling and
Route 53. Which one of the following additional services should you involve in this
architecture?
AWS SNS
AWS Config
AWS SQS
(Correct)
Explanation
The Simple Queue Service can be used to build a decoupled architecture.
AWS Documentation further mentions the following:
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that makes it
easy to decouple and scale microservices, distributed systems, and serverless applications.
Building applications from individual components that each perform a discrete function
improves scalability and reliability, and is best practice design for modern applications.
For more information on the Simple Queue Service, please visit the following URL:
https://aws.amazon.com/sqs/
Question 45: Skipped
You are building a large-scale confidential documentation web server on AWS and all of its
documentation will be stored on S3. One of the requirements is that it should not be
publicly accessible from S3 directly, and CloudFront would be needed to accomplish this.
Which of the methods listed below would satisfy the outlined requirements? Choose an
answer from the options below.
Create an S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target
bucket as the Amazon Resource Name (ARN).
Create individual policies for each bucket the documents are stored in, and grant access only to
CloudFront in these policies.
Create an Identity and Access Management (IAM) user for CloudFront and grant access to the
objects in your S3 bucket to that IAM User.
Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3
bucket to that OAI.
(Correct)
Explanation
If you want to use CloudFront signed URLs or signed cookies to provide access to objects in
your Amazon S3 bucket, you probably also want to prevent users from accessing your Amazon
S3 objects using Amazon S3 URLs. If users access your objects directly in Amazon S3, they
bypass the controls provided by CloudFront signed URLs or signed cookies, for example, control
over the date and time that a user can no longer access your content and control over which IP
addresses can be used to access content. In addition, if users access objects both through
CloudFront and directly by using Amazon S3 URLs, CloudFront access logs are less useful
because they're incomplete.
For more information on Origin Access Identity, please see the below link:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
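For illustration only, the sketch below creates an OAI and grants it read access via a bucket policy; the caller reference, bucket name and principal ARN format are assumptions based on the documented OAI pattern, not part of the original question material.
import json
import boto3

cloudfront = boto3.client('cloudfront')
s3 = boto3.client('s3')

# 1. Create the Origin Access Identity.
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        'CallerReference': 'docs-oai-2020-01-01',            # hypothetical, must be unique
        'Comment': 'OAI for the confidential documentation bucket',
    }
)
oai_id = oai['CloudFrontOriginAccessIdentity']['Id']

# 2. Allow only that OAI to read objects (bucket name is hypothetical).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::confidential-docs-bucket/*",
    }],
}
s3.put_bucket_policy(Bucket='confidential-docs-bucket', Policy=json.dumps(policy))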
Question 46: Skipped
A company offers its customers short-lived contests that require users to upload files in
hopes of winning prizes. These contests can last up to two weeks, with an unknown number of
uploads, and the resulting file analysis can last up to three months. The company currently stores
four weeks of data in an S3 bucket and now needs an economic and scalable object storage
solution to hold its customers' files. The files will be accessed once and then deleted. The best
solution for this company is:
Amazon S3 Standard
(Correct)
Amazon Glacier
Incorrect:
A. Amazon Glacier is for data archiving and can be accessed within minutes
B. Elastic File System is file storage, not object storage as required
C. S3 standard is for frequently accessed data, and less economical than S3 - IA
Reference:
https://aws.amazon.com/s3/storage-classes/
https://aws.amazon.com/efs/when-to-choose-efs/
Question 47: Skipped
You currently have a set of Lambda functions which have business logic embedded in
them. You want customers to have the ability to call these functions via HTTPS. How can
this be achieved?
Add EC2 Instances with an API server installed. Integrate the server with AWS Lambda functions.
Use the API Gateway and provide integration with the AWS Lambda functions.
(Correct)
Explanation
An API Gateway provides the ideal access to your back end services via APIs.
For more information on the API Gateway service, please visit the following URL:
https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html
Question 48: Skipped
You are working with an educational website which provides online content for professional
exams using a WordPress website. You have recently added the Amazon Polly plugin for the
WordPress website to provide students with audio recordings of the exam content. You are
getting customer feedback that the speech rate is too fast & continuous. What changes will
you make in your content to resolve this? (Select Three.)
(Correct)
(Correct)
Add a pause using <break> SSML tag between appropriate words & paragraphs.
(Correct)
Explanation
Using SSML tags, we can control the speech generated by Amazon Polly. In the above example,
using <break> SSML tags, converting commas to periods, & setting the <emphasis> tag to
“Strong” will help to control the speech speed, add appropriate pauses, & place emphasis on
appropriate words, slowing the speaking rate.
Option C is incorrect as commas will not insert a pause in the speech while the text is read.
Option D is incorrect as adding the <emphasis> tag as “Reduced” will speed up the speech rate,
along with a decrease in volume.
For more information on SSML tags supported by Amazon Polly, refer to the following URL:
https://docs.aws.amazon.com/polly/latest/dg/supported-ssml.html
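As an illustrative sketch of the SSML tags discussed above (the voice, output format and text are hypothetical choices, not part of the question), Amazon Polly could be invoked as follows:
import boto3

polly = boto3.client('polly', region_name='us-east-1')

# SSML with a pause and strong emphasis, along the lines described above.
ssml = (
    '<speak>'
    'This chapter covers <emphasis level="strong">Amazon S3</emphasis>.'
    '<break time="700ms"/>'
    'Let us begin with bucket policies.'
    '</speak>'
)

response = polly.synthesize_speech(
    Text=ssml,
    TextType='ssml',
    VoiceId='Joanna',      # example voice
    OutputFormat='mp3',
)
with open('lesson.mp3', 'wb') as f:
    f.write(response['AudioStream'].read())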
Question 49: Skipped
Your company has enabled CORS on your S3 bucket to allow cross-origin resource
sharing. You’ve entered three domains as origins to be allowed and established which
HTTP methods are available. During testing of your web application, the developer is
having trouble getting the OPTIONS and CONNECT methods to work, where other
methods do work. What do you provide the developer to fix the issue?
Only these methods are supported: GET, PUT, POST, DELETE, HEAD
(Correct)
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
https://aws.amazon.com/blogs/aws/amazon-S3-cross-origin-resource-sharing/
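To illustrate the allowed-methods limitation, here is a minimal boto3 sketch of an S3 CORS configuration (bucket name and origin are hypothetical); OPTIONS and CONNECT are not valid values for AllowedMethods and would be rejected:
import boto3

s3 = boto3.client('s3')

# Only GET, PUT, POST, DELETE and HEAD are accepted as AllowedMethods.
s3.put_bucket_cors(
    Bucket='webapp-assets',  # hypothetical bucket
    CORSConfiguration={
        'CORSRules': [{
            'AllowedOrigins': ['https://www.example.com'],
            'AllowedMethods': ['GET', 'PUT', 'POST', 'DELETE', 'HEAD'],
            'AllowedHeaders': ['*'],
            'MaxAgeSeconds': 3000,
        }],
    },
)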
Question 50: Skipped
You are working as an AWS Architect for a global insurance firm. For the web application, you
are using S3 buckets & have configured CloudFront to cache image files. For audit
purposes, you have created a CloudTrail trail in each region & the event log files are logged to an
S3 bucket in the us-west-1 region. There have been changes in CloudFront which have caused all
traffic to be routed to the origin, resulting in increased latency for users in other continents.
After scrutinising the CloudTrail logs, you found that there are duplicate CloudFront events
being logged. What configuration changes will you perform to eliminate the duplicate
CloudFront logs?
Using AWS console, change the configuration of a trail to logging a single region instead of logging
all regions.
Using AWS console, update CloudTrail trail to disable global service events to be delivered in all
regions except US-West.
Using AWS CLI, update CloudTrail trail to disable global service events which are delivered in all
regions except US-West.
(Correct)
Using AWS CLI, change the configuration of a trail to logging a single region instead of logging all
regions.
Explanation
Amazon CloudFront is a global service whose events are delivered to CloudTrail trails that
include global services. To avoid duplicate Amazon CloudFront events, you can disable these
events from being delivered to CloudTrail trails in all regions & enable them in only one region.
Options B & D are incorrect as, if the CloudTrail trail is changed to logging a single region,
global service event logging is turned off automatically; this will disable CloudFront events from
being logged altogether instead of avoiding duplicate logs.
Option C is incorrect as changes to global service event logging can be done only via the AWS
CLI & not via the AWS console.
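As a hedged sketch of the same change made programmatically (equivalent to the CLI approach; trail name and regions are hypothetical), global service events could be kept in only one region's trail:
import boto3

# Keep global service events (such as CloudFront) only in the us-west-1 trail,
# and disable them in every other region's trail.
for region, include_global in [('us-west-1', True),
                               ('eu-central-1', False),
                               ('ap-southeast-1', False)]:
    cloudtrail = boto3.client('cloudtrail', region_name=region)
    cloudtrail.update_trail(
        Name='audit-trail',                       # hypothetical trail name
        IncludeGlobalServiceEvents=include_global,
    )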
Question 51: Skipped
A company is planning to deploy an application in AWS. This application requires an EC2
Instance to continuously perform log processing activities requiring Max 500MiB/s of data
throughput. Which of the following is the best storage option for this requirement?
EBS SSD
(Correct)
EBS IOPS
Explanation
While considering storage volume types for batch processing activities with large throughput,
consider using the EBS Throughput Optimized volume type.
AWS Documentation mentions this, as shown below:
For more information on EBS Volume Types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
Question 52: Skipped
You have an EC2 Instance in a particular region. This EC2 Instance has a preconfigured
software running on it. You have been requested to create a disaster recovery solution in
case the instance in the region fails. Which of the following is the best solution?
Backup the EBS data volume. If the instance fails, bring up a new EC2 instance and attach the
volume.
(Correct)
Store the EC2 data on S3. If the instance fails, bring up a new EC2 instance and restore the data from
S3.
Create a duplicate EC2 Instance in another AZ. Keep it in the shutdown state. When required, bring
it back up.
Explanation
You can copy an Amazon Machine Image (AMI) within or across an AWS region using the
AWS Management Console, the AWS command line tools or SDKs, or the Amazon EC2 API,
all of which support the CopyImage action. You can copy both Amazon EBS-backed AMIs and
instance store-backed AMIs. You can copy AMIs with encrypted snapshots and encrypted AMIs.
Copying a source AMI results in an identical but distinct target AMI with its own unique
identifier. In the case of an Amazon EBS-backed AMI, each of its backing snapshots is, by
default, copied to an identical but distinct target snapshot.
For more information on Copying AMIs, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
Question 53: Skipped
A company uses an open-source system for automating the deployment, scaling, and
management of containerized applications. Which of the following would be ideal for such
a requirement?
(Correct)
Use AWS Lambda functions to embed the logic for container orchestration.
For more information on the Elastic Container Service, please visit the below URL:
https://aws.amazon.com/eks/
Question 54: Skipped
A company plan on using SQS queues and AWS Lambda to leverage the serverless aspects
of the AWS Cloud. Each invocation to AWS Lambda will send a message to an SQS queue.
Which of the following must be in place to achieve this?
(Correct)
Explanation
While working with AWS Lambda functions, if there is a need to access other resources, ensure
that an IAM role is in place. The IAM role will have the required permissions to access the SQS
queue.
For more information on AWS IAM Roles, please visit the following URL:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
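To illustrate, a minimal Lambda handler sketch that sends a message to SQS is shown below; the queue URL is hypothetical, and it assumes the function's execution role grants sqs:SendMessage on that queue.
import json
import boto3

sqs = boto3.client('sqs')  # credentials come from the Lambda execution role
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/invocation-events'  # hypothetical

def handler(event, context):
    # The execution role attached to this function must allow sqs:SendMessage
    # on the target queue for this call to succeed.
    response = sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(event),
    )
    return {'messageId': response['MessageId']}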
Question 55: Skipped
You work for a large company having multiple applications which are very different from
each other. These are built using different programming languages. How can you deploy
these applications as quickly as possible?
Develop each app in separate Docker containers and deploy using CloudFormation.
Create a Lambda function deployment package consisting of code and any dependencies.
Develop all the apps in a single Docker container and deploy using Elastic Beanstalk
Develop each app in a separate Docker container and deploy using Elastic Beanstalk.
(Correct)
Explanation
Elastic Beanstalk supports the deployment of web applications from Docker containers. With
Docker containers, you can define your own runtime environment. You can choose your own
platform, programming language, and any application dependencies (such as package managers
or tools), that aren't supported by other platforms. Docker containers are self-contained and
include all the configuration information and software your web application requires to run.
Option A is not suitable here because the requirement is to deploy multiple apps which use
different languages & are very different from each other.
Option B is ideally used for running code, not for packaging applications and their dependencies.
Option D - Deploying Docker containers using CloudFormation is also not an ideal choice.
For more information on Docker and Elastic Beanstalk, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
Question 56: Skipped
A company needs to have a columnar structured database storage suitable to perform
complex analytic queries against petabytes of structured data, Which of the following
options can meet this requirement?
Amazon Redshift
(Correct)
ElastiCache
Amazon RDS
DynamoDB
Explanation
AWS Documentation mentions the following:
Amazon Redshift is a column-oriented, fully managed, petabyte-scale data warehouse that makes
it simple and cost-effective to analyze all your data using your existing business intelligence
tools. Amazon Redshift achieves efficient storage and optimum query performance through a
combination of massively parallel processing, columnar data storage, and very efficient, targeted
data compression encoding schemes.
For more information on columnar database in AWS, please refer to the below URL:
https://aws.amazon.com/nosql/columnar/
Question 57: Skipped
A company is hosting EC2 instances which focus on work-loads for non-production and
non-priority batch loads. Also, these processes can be interrupted at any time. What is the
best pricing model that can be used for EC2 instances in this case?
Regular instances
Reserved instances
Spot instances
(Correct)
On-Demand instances
Explanation
Spot instances enable you to bid on unused EC2 instances, which can lower your Amazon EC2
costs significantly. The hourly price for a Spot instance (of each instance type in each
Availability Zone) is set by Amazon EC2, and fluctuates depending on the supply of and demand
for Spot instances. Your Spot instance runs whenever your bid exceeds the current market price.
Spot instances are a cost-effective choice if you can be flexible about when your applications run
and if your applications can be interrupted. For example, Spot instances are well-suited for data
analysis, batch jobs, background processing, and optional tasks.
Option A is invalid because, even though Reserved Instances can reduce costs, they are best for
workloads that would be active for longer periods of time rather than for batch load processes
which could last for a shorter period.
Option B is not right because On-Demand instances tend to be more expensive than Spot
Instances.
Option D is invalid because there is no concept of Regular instances in AWS.
For more information on Spot instances, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
Question 58: Skipped
You have been given a business requirement to retain log files for your application for 10
years. You need to regularly retrieve the most recent logs for troubleshooting. Your logging
system must be cost effective, given the large volume of logs. What technique should you
use to meet these requirements?
Store your logs in Amazon S3, and use Lifecycle Policies to archive to AmazonGlacier.
(Correct)
Store your logs on Amazon EBS, and use Amazon EBS Snapshots to archive them.
Store your logs in Amazon Glacier.
Explanation
Option A is invalid, because it is not a cost-effective option.
Option B is invalid, because it will not serve the purpose of regularly retrieving the most recent
logs for troubleshooting. You will need to pay more to retrieve the logs faster from this storage
option.
Option D is invalid because it is neither an ideal nor cost-effective option.
For more information on Lifecycle management please refer to the below link:
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
Question 59: Skipped
As a part of your application architecture requirements, the company you are working for
has requested the ability to run analytics against all the combined log files from the Elastic
Load Balancer. Which services are used together to collect logs and process log file analysis
in an AWS environment? Choose the correct option.
Amazon S3 for storing ELB log files and Amazon EMR for processing the log files in analysis
(Correct)
Amazon DynamoDB to store the logs and EC2 for running custom log analysis scripts
Amazon S3 for storing the ELB log files and EC2 for processing the log files in analysis
Explanation
This question is not that complicated, even if you do not understand the options. If you see
“collection of logs and processing of logs”, directly think of AWS EMR.
Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-
effective to process vast amounts of data across dynamically scalable Amazon EC2 instances.
You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto,
and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3
and Amazon DynamoDB.
Amazon EMR securely and reliably handles a broad set of big data use cases, including log
analysis, web indexing, data transformations (ETL), machine learning, financial analysis,
scientific simulation, and bioinformatics.
Expedited retrieval
Standard retrieval
(Correct)
Vault Lock
Bulk retrieval
Explanation
Answer – D
Option A - Vault Lock: this feature of Amazon Glacier allows you to lock your vault with a
variety of compliance controls that are designed to support such long-term records retention. For
this reason, it is not the correct answer.
For more information on Amazon Glacier retrievals, please visit the following URL:
https://aws.amazon.com/glacier/faqs/#dataretrievals
Question 61: Skipped
You are designing a system which needs, at minimum, 8 m4.large instances operating to
service traffic. While designing the system for high availability in the us-east-1 region, which
has 6 Availability Zones, your company needs to be able to handle the death of a full
Availability Zone. How should you distribute the servers to save as much cost as possible,
assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can
utilize us-east-1's AZs a through f, inclusive.
4 servers in each of AZs a through c, inclusive.
(Correct)
Explanation
The best way is to distribute the instances across multiple AZs to get the best performance and to
avoid a disaster scenario.
With this solution, you will always have a minimum of more than 8 servers even if one AZ were
to go down. Even though options A and D are also valid, the best solution for distribution is
Option C.
For more information on High Availability and Fault tolerance, please refer to the below link:
https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_04.pdf
Note:
In option A, we distribute 3 servers in each of AZs a through d, so A=3, B=3, C=3, D=3, for a
total of 12 servers.
The question clearly states that the company needs to be able to handle the death of a full AZ
and save as much cost as possible. In option C we are using fewer servers, i.e. 10 servers
distributed across more AZs.
The question says, "You are designing a system which needs, at minimum, 8 m4.large instances
operating to service traffic," so we are clear that a minimum of 8 instances must remain
available. The next part of the question asks, "How should you distribute the servers to save as
much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB?"
We have to select the solution that is cost-effective and highly available. Based on this,
option B is not as highly available, because it uses only 2 Availability Zones with 8 instances in
each, i.e. 16 instances.
(Correct)
Note:
An NACL is used when you want to deny access for a particular IP address or a CIDR block (a
set of IP addresses).
So, the simple rule of thumb here is: if the requirement is to allow the traffic, then you can go
with a Security Group;
if the requirement is to explicitly deny (not allow) the traffic, then you can go with an NACL.
Question 63: Skipped
A small company started using EBS-backed EC2 instances due to the cost improvements
over running their own servers. The company's policy is to stop the development servers
over the weekend and restart them each week. The first time the servers were brought back,
none of the developers were able to SSH into them. What was most likely overlooked?
EBS backed EC2 instances cannot be stopped and were automatically terminated
The security group for a stopped instance needs to be reassigned after start
The public IPv4 address has changed on server start and the SSH configurations were not updated
(Correct)
The associated Elastic IP address has changed and the SSH configurations were not updated
Explanation
Correct:
C - The instance retains its private IPv4 addresses and any IPv6 addresses when stopped and
started. AWS releases the public IPv4 address and assigns a new one when it is stopped &
started.
Incorrect:
A. An EC2 instance retains its associated Elastic IP addresses.
B. Security groups do not need to be reassigned to instances that are restarted.
D. EBS backed instances are the only instance type that can be started and stopped.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html
Question 64: Skipped
An organization is managing a Redshift Cluster in AWS. They need to monitor the
performance of this Redshift cluster to ensure that it is performing as efficiently as
possible. Which of the following services can be used for achieving this requirement?
Choose 2 options.
CloudWatch
(Correct)
(Correct)
CloudTrail
Explanation
AWS Documentation mentions the following on monitoring Redshift Clusters:
Amazon CloudWatch metrics help you monitor physical aspects of your cluster, such as CPU
utilization, latency, and throughput. Metric data is displayed directly in the Amazon Redshift
console. You can also view it in the Amazon CloudWatch console, or you can consume it in any
other way you work with metrics such as with the Amazon CloudWatch Command Line
Interface (CLI) or one of the AWS Software Development Kits (SDKs).
For more information on monitoring Redshift, please visit the below URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/metrics.html
Option D: https://aws.amazon.com/about-aws/whats-new/2016/03/aws-trusted-advisor-adds-checks-for-amazon-s3-amazon-redshift-reserved-instances-security-and-service-limits/
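As an illustrative sketch of pulling the CloudWatch metrics described above (cluster identifier is hypothetical), boto3 could retrieve the cluster's recent CPU utilisation as follows:
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# Average CPU utilisation of the cluster over the last hour, in 5-minute periods.
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Redshift',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'ClusterIdentifier', 'Value': 'analytics-cluster'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Average'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])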
Question 65: Skipped
Your company is planning on hosting their development, test and production applications
on EC2 Instances in AWS. They are worried about how access control would be given to
relevant IT Admins for each of the above environments. As an architect, what would you
suggest for managing the relevant accesses?
Add Userdata to the underlying instances to mark each environment.
Add tags to the instances marking each environment and then segregate access using IAM Policies.
(Correct)
For more information on using tags, please see the below link:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
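For illustration only, the sketch below tags an instance and attaches a tag-conditioned IAM policy; the instance ID, account ID, user name and policy wording are hypothetical examples of the tagging-plus-IAM pattern described above.
import json
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
iam = boto3.client('iam')

# Tag a development instance (instance ID is hypothetical).
ec2.create_tags(
    Resources=['i-0123456789abcdef0'],
    Tags=[{'Key': 'Environment', 'Value': 'Dev'}],
)

# Policy that only allows start/stop of instances tagged Environment=Dev.
dev_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
        "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "Dev"}},
    }],
}
iam.put_user_policy(
    UserName='dev-admin',                      # hypothetical IAM user
    PolicyName='DevEnvironmentInstanceAccess',
    PolicyDocument=json.dumps(dev_policy),
)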