AWS Interview Questions
Comparison of object storage and block storage:
PERFORMANCE: Object storage performs best for big content and high stream throughput; block storage gives strong performance with databases and transactional data.
GEOGRAPHY: Object storage data can be stored across multiple regions; with block storage, the greater the distance between storage and application, the higher the latency.
SCALABILITY: Object storage can scale infinitely to petabytes and beyond; block storage scalability is limited by addressing requirements.
ANALYTICS: Customizable metadata allows object storage data to be easily organized and retrieved; block storage carries no metadata.
Q3 What is the difference between a managed policy and an inline policy? (A sketch follows below.)
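The notes leave this unanswered; as an illustrative sketch (not from the original notes), here is how the two kinds of policy attach to a role with boto3. The role name, policy name, and policy document are hypothetical placeholders:

import boto3
import json

iam = boto3.client("iam")

# Managed policy: a standalone policy object, attached to the role by ARN.
iam.attach_role_policy(
    RoleName="example-role",  # hypothetical role name
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Inline policy: embedded directly in the role and deleted together with it.
iam.put_role_policy(
    RoleName="example-role",
    PolicyName="example-inline-policy",  # hypothetical policy name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*"}
        ],
    }),
)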
Q4 How to create a billing estimate for required services?
Q5 What is ElastiCache and where is it placed in the architecture?
Ans. ElastiCache is placed between the application and the database.
Q6 What is Lambda?
Q7 Can you update security group rules in CloudFormation through code?
Q8 If there is any change to an EC2 instance, how does your customer stay bound (and rebind) to your server?
Ans: We can use sticky sessions on the load balancer for a particular time, according to session activity.
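As an illustration of the sticky-session answer, a minimal boto3 sketch that enables load-balancer cookie stickiness on an ALB target group for one hour; the target group ARN is a placeholder:

import boto3

elbv2 = boto3.client("elbv2")

# Enable duration-based (lb_cookie) stickiness for 3600 seconds.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/example/0123456789abcdef",  # placeholder
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)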
Q9 How does a customer log in to EC2?
Q10 How does a customer log in to our application?
Q11 How many services are you using in AWS?
Q12 Which load balancer are you using to handle your incoming traffic?
Q13 What is the difference between ALB and NLB load balancers?
Q14 Are you using Auto Scaling?
Q15 What is the difference between a launch configuration and a launch template?
Q16 How to encrypt client data in an S3 bucket? (A sketch follows below.)
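Q16 is not answered in the notes; one common approach is default server-side encryption with KMS. A boto3 sketch, with the bucket name and key alias as hypothetical placeholders:

import boto3

s3 = boto3.client("s3")

# Set default SSE-KMS encryption so every new object is encrypted at rest.
s3.put_bucket_encryption(
    Bucket="my-example-bucket",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/my-key",  # hypothetical key alias
                }
            }
        ]
    },
)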
Q17 Which EC2 instance type is suitable for development and production environments?
Q18 What is the difference between an SG (security group) and a NACL?
ANS. (See the next two questions, which cover SG and NACL rules and the difference between them.)
Q. How can you add and remove inbound and outbound rules in a security group, and what are they used for?
ANS: Inbound rules are for incoming traffic and outbound rules are for outgoing traffic, at the server (instance) level. By default, all outbound traffic is allowed. If you remove the allow-all rule from the SG's outbound rules, outgoing traffic will not go out. If you add specific IPs to the SG's outbound rules, the server can send outgoing traffic only to those particular IPs.
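A boto3 sketch of adding and removing such rules; the security group ID and CIDR ranges are placeholders:

import boto3

ec2 = boto3.client("ec2")
sg_id = "sg-0123456789abcdef0"  # hypothetical security group ID

# Add an inbound rule allowing HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Remove the default allow-all outbound rule.
ec2.revoke_security_group_egress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "-1",
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)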
Q. How can you add and remove inbound and outbound rules in a NACL, what are they used for, and how do they differ from an SG?
ANS: A NACL controls traffic at the subnet level and can have both inbound and outbound rules. For an inbound rule, put the source IP range and the application port; for an outbound rule, put the destination IP range and port; traffic to and from that source is then allowed. If you apply a NACL, its restrictions apply to all EC2 instances placed in the same subnet, because a NACL restricts on a subnet-level basis.
Network access control lists are applicable at the subnet level, so any instance in a subnet with an associated NACL will follow the rules of that NACL; every instance within the subnet gets the rules applied.
Q. How to copy data from one S3 bucket to another using the AWS CLI.
ANS (steps):
1. Create the source and destination S3 buckets.
2. Create a role, attach an EC2 profile from IAM, grant S3 full access, and generate keys (Access Key ID and Secret Access Key) for access.
3. Create an EC2 instance and install the AWS CLI.
4. Run aws configure. When the CLI requests the AWS Access Key ID and AWS Secret Access Key, take them from the IAM console (where the access key was created) and paste them into the CLI.
5. For the remaining prompts, accept the defaults:
Default region name [us-east-2]: press Enter
Default output format [none]: press Enter
6. Verify access with the ls command (aws s3 ls):
C:\Program Files\Amazon\AWSCLI>aws s3 ls
7. Now use the command below to copy from the source bucket to the destination bucket.
Command: aws s3 cp --recursive s3://uitasourcebucket s3://uitadestinationbucket
Syntax: aws s3 cp --recursive s3://<source bucket name> s3://<destination bucket name>
Q31 If I lost the key pair for logging in to EC2, how can I access the EC2 instance? Is it possible?
ANS. To replace a lost key pair, you can use the AWS Systems Manager AWSSupport-ResetAccess Automation document. Alternatively, you can create an Amazon Machine Image (AMI) of the existing instance, launch a new instance from it, and select a new key pair.
OR
Through Session Manager, in either the Linux or the Windows Server case.
To connect to an EC2 instance with AWS SSM: make sure the AWS Systems Manager Agent is running, make sure the EC2 instance can communicate with AWS Systems Manager, and use Session Manager to connect to the instance. Select the lost key's EC2 instance > go to the Actions button > Security > Modify IAM role > create an IAM role granting the EC2 instance access to AWS SSM; after creating the role, go to the Actions button again > Security > Modify IAM role, attach the SSM role, and save. Now go to the SSM console > Session Manager > click Start session; you will see your EC2 instance connected.
OR
We can attach the volume of the EC2 instance to another EC2 instance; then we can access all the data of the lost key pair's EC2.
Q32 What is the difference between a NAT gateway and an internet gateway?
Ans. A NAT gateway is used only for outgoing traffic to the internet, but an internet gateway works in both directions: out to the internet and from the internet back to the user.
Q33 What kind of AWS work have you done recently?
Ans. Creating Auto Scaling, and connectivity from a private EC2 instance in VPC-A to another private EC2 instance in VPC-B.
Q34 What is Boto3?
Boto3 is the name of the Python software development kit (SDK) for AWS. It allows you to directly create, update, and delete AWS resources from your Python scripts.
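A minimal sketch of what that looks like; it assumes credentials are already configured (e.g. via aws configure):

import boto3

# Low-level S3 client using credentials from the environment/CLI config.
s3 = boto3.client("s3")

# List all buckets in the account and print their names.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])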
Q35 Jenkins Pipeline.
Q36 What is CloudFormation?
Q37 How to copy an AMI from one region to another region.
Create an image of the EC2 instance > it is automatically saved as an AMI (and images of its volumes as snapshots, if selected) > select Copy AMI > then select the region where you want it saved.
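The same cross-region copy can be scripted; a boto3 sketch, where the AMI ID and regions are placeholders, and copy_image is called in the destination region:

import boto3

# The client must be created in the destination region.
ec2 = boto3.client("ec2", region_name="us-west-2")  # assumed destination

response = ec2.copy_image(
    Name="my-ami-copy",                     # hypothetical name
    SourceImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    SourceRegion="us-east-2",               # assumed source region
)
print(response["ImageId"])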
Q38 How to define a scheduled job / cron in AWS.
Ans. A scheduled job can be defined using a Lambda function.
On a tagging basis, we can define a cron schedule for the Lambda function.
Steps for configuration:
IAM > Roles > create a role for Lambda (e.g. named ec2-lambda) > now adjust the policy as required: Policies > Create policy > choose service EC2 > Write > Start/Stop/Describe > all resources > name the policy > add it to the role > check the Lambda function > write code in the Lambda function to start/stop EC2 (sketch below) > set a cron schedule in CloudWatch to stop/start EC2 (CloudWatch > Events > Rules) > go to EC2 and give the instance the tag Type > Stop EC2 > test in Lambda > check whether your EC2 instance stops/starts or not.
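A minimal sketch of such a Lambda handler; the tag key/value (Type = Stop EC2) and the region are assumptions taken from the steps above:

import boto3

REGION = "us-east-2"  # assumed region

def lambda_handler(event, context):
    ec2 = boto3.client("ec2", region_name=REGION)
    # Find running instances carrying the tag Type=Stop EC2.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Type", "Values": ["Stop EC2"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        i["InstanceId"] for r in reservations for i in r["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}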
Throttle: by default, code can run up to 1,000 concurrent executions (the account-level Lambda concurrency limit).
A NAT gateway is not free: it is charged per hour, and data processing is charged as well.
Always place the NAT gateway in a public subnet. An Elastic IP must be attached to the NAT gateway.
If a NAT gateway is routed to a private EC2 instance, you can send traffic from the private EC2 instance to the internet, but no one can access the private EC2 instance from the internet.
Flow of internet traffic from a private EC2 instance:
Private EC2 > route table of the private subnet has a route to the NAT gateway > the NAT gateway is placed in a public subnet > traffic can go to the internet.
After you have created a NAT gateway, you must update the route table associated with one or more private subnets to point internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet.
Note: If you no longer need a NAT gateway, you can delete it. Deleting a NAT gateway disassociates its Elastic IP but does not release the address from your account, so delete the Elastic IP as well.
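A boto3 sketch of that route-table update; both IDs are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2")

# Send all internet-bound traffic from the private subnet's route table
# to the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0123456789abcdef0",
)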
(Diagram: architecture of a VPC with a NAT gateway)
Q. With a single server, how can you provide high availability?
Ans. Take a snapshot every hour; the server can then be restored quickly from the latest snapshot.
Q. How to connect multiple web URLs and applications to one EC2 instance.
Q. What is the latest version of Terraform?
ANS. Terraform v0.13 (at the time of these notes).
Q. What is the difference between Terraform v0.13 and Terraform v0.12?
ANS: This guide focuses on changes from v0.12 to v0.13. Terraform supports upgrade
tools and features only for one major release upgrade at a time, so if you are currently
using a version of Terraform prior to v0.12 please upgrade through the latest minor
releases of all of the intermediate versions first, reviewing the previous upgrade guides
for any considerations that may be relevant to you.
In particular, Terraform v0.13 no longer includes the terraform 0.12upgrade command
for automatically migrating module source code from v0.11 to v0.12 syntax. If your
modules are written for v0.11 and earlier you may need to upgrade their syntax using
the latest minor release of Terraform v0.12 before using Terraform v0.13.
Before You Upgrade
When upgrading between major releases, we always recommend ensuring that you can
run terraform plan and see no proposed changes on the previous version first,
because otherwise pending changes can add additional unknowns into the upgrade
process.
For this upgrade in particular, completing the upgrade will require running terraform
apply with Terraform 0.13 after upgrading in order to apply some upgrades to the
Terraform state, and we recommend doing that with no other changes pending.
Q. How to create an EC2 instance with Terraform automation.
Q. How to automate EC2 creation in AWS.
ANS: Use the AWS CLI inside a shell script, then define a crontab entry for the script. (A boto3 sketch follows below as an alternative.)
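As an alternative to the CLI-in-cron approach, a boto3 sketch that launches an instance; the AMI ID and region are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # assumed region

# Launch one t2.micro instance from a hypothetical AMI.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])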
Q. How to identify alarms for multiple servers in a CloudWatch dashboard.
ANS: We create alarms using the instance ID, so we can identify each one by instance ID in the dashboard.
Q. How to take a backup of the EBS volumes of an EC2 instance.
ANS. Using the Amazon Data Lifecycle Manager tool, EBS backups (snapshots) can be taken on a given schedule.
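For a one-off backup, as opposed to the scheduled Lifecycle Manager policy, a boto3 sketch that snapshots a volume; the volume ID is a placeholder:

import boto3

ec2 = boto3.client("ec2")

# Create an on-demand snapshot of an EBS volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # hypothetical volume ID
    Description="Manual EBS backup",
)
print(snapshot["SnapshotId"])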
Q. Upgrade/downgrade AWS EC2 resources (CPU and RAM resize).
ANS. Log in to the console account > select the EC2 instance > select Stop instance under Instance state in the Actions menu > select Instance settings in the Actions menu > Change instance type, e.g. t2.micro to t2.small.
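The same console flow, scripted as a boto3 sketch; the instance ID is a placeholder:

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# The instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the type, then start the instance again.
ec2.modify_instance_attribute(
    InstanceId=instance_id, InstanceType={"Value": "t2.small"}
)
ec2.start_instances(InstanceIds=[instance_id])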
Q. If you are getting high CPU utilization on an EC2 instance, what action needs to be taken?
ANS. Upgrade the EC2 resources (CPU and RAM).
Process: the same as above; stop the instance, change the instance type (e.g. t2.micro to t2.small), and start it again.
OR
Configure Auto Scaling for EC2 and create a scaling rule that adds instances on high CPU and traffic.
Q. I want to access my Amazon S3 bucket privately without using authentication, such as AWS Identity and Access Management (IAM) credentials. How can I do that?
You can access an S3 bucket privately without authentication when you access the S3 bucket
from an Amazon Virtual Private Cloud (Amazon VPC) that has an endpoint to Amazon S3.
Resolution
Before you begin, you must create a VPC that you'll access the bucket from.
Add a bucket policy that allows access from the VPC endpoint
Update your bucket policy with a condition that allows users to access the S3 bucket when the
request is from the VPC endpoint that you created. To allow those users to download objects
(s3:GetObject), you can use a bucket policy that's similar to the following:
Note: For the value of aws:sourceVpce, enter the VPC endpoint ID of the endpoint that you
created.
{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"],
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}
Important: This policy allows access from the VPC endpoint, but it doesn't deny all access from
outside the endpoint. If a user from the same account is authenticated, this policy still allows
the user to access the bucket from outside the VPC endpoint. If you need a more restrictive
bucket policy, then use a policy that explicitly denies access to any requests from outside the
endpoint.
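To attach the policy shown above programmatically, a boto3 sketch using the same placeholder bucket and endpoint ID:

import boto3
import json

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Access-to-specific-VPCE-only",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Effect": "Allow",
        "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"],
        "Condition": {"StringEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}},
    }],
}

# Apply the bucket policy (it must be passed as a JSON string).
s3.put_bucket_policy(Bucket="DOC-EXAMPLE-BUCKET", Policy=json.dumps(policy))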
Q. Which hypervisor does AWS use, and which virtualization types do AMIs support?
Ans. Historically, AWS AMIs ran on the bare-metal Xen hypervisor. Xen offers two types of virtualization: HVM (Hardware Virtual Machine) and PV (paravirtualization). But before looking at these virtualization capabilities, it's important to understand how the Xen architecture works.
AWS later revealed that it had created a new KVM-based hypervisor (Nitro), not the Xen hypervisor it had trusted for years. The new hypervisor was revealed almost as a footnote in the news about the new EC2 instance type called "C5", powered by Intel Skylake Xeons.
AWS is a cloud service provider, while VMware is a pioneer in virtualization. The main difference is that VMware is in the virtualization business, while AWS is essentially in cloud services.
Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The main differences between PV and HVM AMIs are the way they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance.
Q. EBS encryption.
Ans. EBS volumes can be encrypted with AWS KMS keys. Encryption covers data at rest on the volume, data moving between the instance and the volume, and all snapshots created from the volume; you can also enable encryption by default per region.
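A boto3 sketch that turns on account-level default EBS encryption for the current region, so new volumes are encrypted automatically:

import boto3

ec2 = boto3.client("ec2")

# Opt this region in to encrypting all newly created EBS volumes.
ec2.enable_ebs_encryption_by_default()
print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])  # True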
Q. What types of record sets can you create in a Route 53 hosted zone, and what do they contain?
Ans. A record set has a name, a type, a value, an alias option, and a routing policy.
Name: the domain name, e.g. www.google.com.
Type: the type of record, e.g. an A record for an IP address.
Alias: can point to a load balancer / CloudFront / S3.
Q. VPC peering.
Ans. VPC peering can be done within a region or across regions, and with another account, and EC2 instances can then communicate over their private IPs. Transitive communication is not possible.
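A boto3 sketch of requesting a cross-region, cross-account peering connection; all IDs, the account number, and the regions are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # requester region (assumed)

peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",        # our VPC
    PeerVpcId="vpc-22222222",    # peer VPC
    PeerOwnerId="123456789012",  # peer account ID
    PeerRegion="us-west-2",
)
print(peering["VpcPeeringConnection"]["VpcPeeringConnectionId"])
# The peer side must accept with accept_vpc_peering_connection, and both
# sides must add routes to each other's CIDR ranges.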
Q. What is DMS (Database Migration Service)?
Q. How to connect to AWS resources from on-premises.
Q. I'm using Amazon Simple Storage Service (Amazon S3) to host a static website, and I'm using Amazon CloudFront to serve the website. However, the website is down. How can I fix this?
Ans.
Resolution:
Before you begin, confirm the following:
A 403 Access Denied error indicates that there's a permissions problem that's causing your website
to appear down. For troubleshooting instructions, see I’m using an S3 website endpoint as the origin
of my CloudFront distribution. Why am I getting HTTP response code 403 (Access Denied)?
Important: Be sure to check the Block Public Access settings for your website's S3 bucket. These
settings can block anonymous requests to your website. Block Public Access settings can apply to
AWS accounts or individual buckets.
A 404 Not Found error indicates that the request is pointing to a website object that doesn't exist.
Check the following:
Verify that the URL to the website object doesn't have any typos.
Verify that the website object exists on the S3 bucket that's hosting your website. You can check the
bucket using the Amazon S3 console. Or, you can run the list-objects command using the AWS
Command Line Interface (AWS CLI).
Q. How to copy data from an S3 bucket in one account to an S3 bucket in another account.
Ans.
This pattern uses a source account and a destination account in different Regions. You attach a
bucket policy to your source S3 bucket that grants the destination account access through AWS
IAM. You then create an IAM policy in your destination account that allows a user to
perform PutObject and GetObject actions on the source S3 bucket. Finally, you
run copy and sync commands to transfer data from the source S3 bucket to the destination S3
bucket.
Accounts own the objects that they upload to S3 buckets. If you copy objects across different
accounts and Regions, you grant the destination account ownership of the copied objects. You
can change the ownership of an object by changing its ACL to bucket-owner-full-control.
However, we recommend that you grant programmatic cross-account permissions to the
destination account because ACLs can be difficult to manage for multiple objects.
Required services for configuration: S3, AWS CLI & IAM.
The copy command runs in the AWS CLI:
aws s3 cp s3://DOC-EXAMPLE-BUCKET-SOURCE/ s3://DOC-EXAMPLE-BUCKET-TARGET/ --recursive --source-region SOURCE-REGION-NAME --region DESTINATION-REGION-NAME
Q. I want to transfer a large amount of data (1 TB or more) from one Amazon Simple Storage
Service (Amazon S3) bucket to another bucket. How can I do that?
Ans.
Depending on your use case, you can perform the data transfer between buckets using one of the
following options:
Run parallel uploads using the AWS Command Line Interface (AWS CLI)
Use an AWS SDK
Use cross-Region replication or same-Region replication
Use Amazon S3 batch operations
Use S3DistCp with Amazon EMR
Use AWS DataSync
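For the AWS SDK option above, a boto3 sketch that raises copy parallelism with TransferConfig; the bucket and key names are placeholders and the tuning values are illustrative:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Up to 20 parallel threads and 64 MiB multipart chunks.
config = TransferConfig(
    max_concurrency=20,
    multipart_chunksize=64 * 1024 * 1024,
)

s3.copy(
    CopySource={"Bucket": "DOC-EXAMPLE-BUCKET-SOURCE", "Key": "big-file.bin"},
    Bucket="DOC-EXAMPLE-BUCKET-TARGET",
    Key="big-file.bin",
    Config=config,
)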
Q. How to monitor memory utilization of an EC2 instance in CloudWatch.
Ans.
Unfortunately, there is no built-in metric for memory consumption in CloudWatch.
The recommended way to monitor memory usage is to create a custom CloudWatch metric by using your own monitoring scripts on your instances. AWS has published documentation on how to achieve this on Linux instances with a set of (unsupported) scripts.
Once your instances are publishing the custom metrics, you will be able to attach alarms to them in CloudWatch.
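A sketch of such a monitoring script in boto3, reading memory usage from /proc/meminfo on a Linux instance; the namespace and metric name are made up for the example:

import boto3

def memory_used_percent() -> float:
    # Parse /proc/meminfo for total and available memory (values in kB).
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])
    return 100.0 * (1 - info["MemAvailable"] / info["MemTotal"])

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",  # hypothetical namespace
    MetricData=[{
        "MetricName": "MemoryUsedPercent",
        "Value": memory_used_percent(),
        "Unit": "Percent",
    }],
)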
Q. If a region is going down, how do you take backups and how do you redirect traffic?
Ans. AWS Backup now supports cross-Region backup, enabling AWS customers to copy backups across multiple services to different Regions.
Using AWS Backup, you can copy backups to multiple AWS Regions on demand, or automatically as part of a scheduled backup plan. Cross-Region replication is particularly valuable if you have business continuity or compliance requirements to store backups a minimum distance away from your production data.
There are two ways to take cross-Region backups: 1. on demand; 2. a scheduled backup plan.
Scheduling cross-Region backup:
You can use a scheduled backup plan to copy backups across AWS Regions.
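A boto3 sketch of such a scheduled plan with a cross-Region copy action; the vault names, account ID, and schedule are hypothetical placeholders:

import boto3

backup = boto3.client("backup")

# Daily rule that also copies each recovery point to a vault in another Region.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-with-cross-region-copy",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC daily
            "CopyActions": [{
                "DestinationBackupVaultArn": (
                    "arn:aws:backup:us-west-2:123456789012:"
                    "backup-vault:Default"
                ),
            }],
        }],
    }
)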
Q. If neither a NAT gateway nor an internet gateway is configured, is there any other option to connect your instance to the internet?
Ans. I think a VPN; I am not sure. At a minimum, an internet gateway or NAT gateway should normally be available.
Q. If we configure AWS CloudTrail logs in several different accounts, how can we combine them into a central log?
Ans. Receiving CloudTrail log files from multiple regions:
Ans. Receiving CloudTrail log files from multiple regions
You can configure CloudTrail to deliver log files from multiple regions to a single S3 bucket for a single account. For
example, you have a trail in the US West (Oregon) Region that is configured to deliver log files to an S3 bucket, and a
CloudWatch Logs log group. When you change an existing single-region trail to log all regions, CloudTrail logs
events from all regions that are in a single AWS partition in your account. CloudTrail delivers log files to the same
S3 bucket and CloudWatch Logs log group. As long as CloudTrail has permissions to write to an S3 bucket, the
bucket for a multi-region trail does not have to be in the trail's home region.
To log events across all regions in all AWS partitions in your account, create a multi-region trail in each partition.
In the console, by default, you create a trail that logs events in all AWS Regions. This is a recommended best
practice. To log events in a single region (not recommended), use the AWS CLI. To configure an existing single-region trail to log in all regions, you must use the AWS CLI.
To change an existing trail so that it applies to all Regions, add the --is-multi-region-trail option to the update-trail command.
aws cloudtrail update-trail --name my-trail --is-multi-region-trail
To confirm that the trail now applies to all Regions, the IsMultiRegionTrail element in the output shows true.
Example output:
{
    "IncludeGlobalServiceEvents": true,
    "Name": "my-trail",
    "TrailARN": "arn:aws:cloudtrail:us-east-2:123456789012:trail/my-trail",
    "LogFileValidationEnabled": false,
    "IsMultiRegionTrail": true,
    "IsOrganizationTrail": false,
    "S3BucketName": "my-bucket"
}
Note: When a new region launches in the aws partition, CloudTrail automatically creates a trail for you in the new region with the same settings as your original trail.
Q. What is the difference between gp2 and gp3 EBS volumes?
Ans.
AWS announced the new SSD storage type, gp3, at AWS re:Invent 2020. It is the next generation of general-purpose block storage in AWS.
This note compares both types in terms of pricing and performance.
Performance:
AWS measures volume performance in two dimensions:
IOPS: the maximum number of input/output requests the volume can serve per second.
Throughput: the maximum amount of data that can be read/written per second.
With gp2 volumes, you don't have direct control over the IOPS and throughput; they depend on the volume size. To get higher performance, you have to use a bigger volume.
gp3 removes this limitation: you can increase the IOPS and/or throughput independently, up to 16,000 IOPS and 1,000 MiB/s respectively.
Please note that gp3 volume size, IOPS, and throughput still impact each other. The maximum throughput that can be reached = (maximum IOPS) x (average I/O size). Also, the maximum IOPS = 500 x (provisioned GiB). So volumes smaller than 32 GiB can't reach the 16,000 IOPS limit.
WHAT ARE THE NEW FEATURES OF EBS GP3?
Now, gp3 IOPS and throughput can be customized independently, and the maximum throughput has been increased by 4x: it is now 1,000 MB/s compared to gp2's 250 MB/s.
The entry point includes 3,000 IOPS for free and a throughput of 125 MB/s.
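Migrating an existing volume from gp2 to gp3 can be done live with Elastic Volumes; a boto3 sketch, with a placeholder volume ID and illustrative performance values:

import boto3

ec2 = boto3.client("ec2")

# Change the volume type and provision extra performance in one call.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # hypothetical volume ID
    VolumeType="gp3",
    Iops=6000,       # above the 3,000 IOPS baseline
    Throughput=500,  # MiB/s, above the 125 MiB/s baseline
)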