
AWS Interview Question

Q1 Explain your organization's structure.


Q2 Difference between Object level and Block Level storage.
Ans.
Block storage: Files are split into evenly sized blocks of data, each with its own address but with no additional information (metadata) to provide more context for what that block of data is.
Block storage is ideal for high-performing, mission-critical applications that require consistent input/output (I/O) performance and low latency, and is often used in storage-area network (SAN) environments in place of file storage.
Because block storage volumes are treated as individual hard disks, the approach works well for storing a variety of applications, and its performance makes it an ideal fit for applications that require high performance, such as transactional or database applications.
Ex. transaction data for databases, bootable volumes.

Object storage, by contrast, doesn't split files up into raw blocks of data; each object is stored whole, along with its metadata.

Object storage is used by the S3 service (HTTP applications).
Object storage is based on HDD and is not bootable.
Data that benefits the most from object storage includes:
Ex. unstructured data such as music, images, and videos; backup and log files; large sets of historical data; and archived files.

              OBJECT STORAGE                           BLOCK STORAGE
PERFORMANCE   Performs best for big content and        Strong performance with database
              high stream throughput                   and transactional data
GEOGRAPHY     Data can be stored across multiple       The greater the distance between
              regions                                  storage and application, the
                                                       higher the latency
SCALABILITY   Can scale infinitely to petabytes        Addressing requirements limit
              and beyond                               scalability
ANALYTICS     Customizable metadata allows data to     No metadata
              be easily organized and retrieved
Q3 Difference between managed and inline policy.
Q4 How to create a billing estimate for required services.
Q5 What is ElastiCache and where is it placed in the architecture?
Ans. ElastiCache is placed between the application and the database.
Q6 What is Lambda?
Q7 Can you update a security policy in a security group through CloudFormation code?
Q8 If some EC2 instances change, how do your customers bind and rebind with your server?
Ans: We can use sticky sessions in the load balancer for a particular time, according to session activity.
Q9 How do customers log in to EC2?
Q10 How do customers log in to our application?
Q11 How many services are you using in AWS?
Q12 Which load balancer are you using to handle your incoming traffic?
Q13 What is the difference between the ALB and NLB load balancers?
Q14 Are you using Auto Scaling?
Q15 What is the difference between a launch configuration and a launch template?
Q16 How to encrypt client data in an S3 bucket.
Q17 Which type of EC2 instance is suitable for a development/production organization?
Q18 What is the difference between SG (Security Group) & NACL?
ANS.

SG                                                  NACL (Network Access Control List)
Server-level security                               Subnet-level (network-level) security
Stateful (if a port is allowed inbound, it is       Stateless (return traffic must be
allowed outbound for that port by default)          explicitly allowed)
By default, everything is denied                    By default, everything is allowed

Q. How can we add and remove inbound and outbound rules in a Security Group, and what are they used for?
ANS: Inbound rules are for incoming traffic and outbound rules are for outgoing traffic, at the server level. By default, all outbound traffic is allowed. If you remove the allow-all outbound rule in an SG, outgoing traffic will not flow. If you add specific IPs to the outbound rules, the server can send outgoing traffic only to those particular IPs. A sketch of doing this programmatically follows below.
Q. How can we add and remove inbound and outbound rules in a NACL, what are they used for, and how do they differ from an SG?
ANS: A NACL secures traffic at the subnet level and can have inbound and outbound rules. For an inbound rule, specify the IP and port of the application; for an outbound rule, specify the source IP range and port so that return traffic is allowed back to the source. If you apply a NACL, its restrictions apply to every EC2 instance placed in the same subnet, because NACLs operate at the subnet level.
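
A minimal sketch of adding and then removing an inbound SG rule programmatically with boto3 (the security-group ID and CIDR are placeholders):

import boto3

ec2 = boto3.client("ec2")

rule = [{
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # hypothetical CIDR
}]

# Add an inbound rule allowing HTTPS from the CIDR (hypothetical SG ID).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0", IpPermissions=rule
)

# Remove the same rule again.
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0", IpPermissions=rule
)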

Q19 How to launch an EC2 instance.


Q20 What is a sticky session?
Q21 What is RDS?
Q22 Difference between the various databases.
Q23 Are you using scripting?
Q24 Use of CloudFormation.
Q25 How do customers access AWS services?
Q26 What are the disaster recovery steps?
Q27 How many IPs are in 10.0.0.0/24?
Ans. 254 usable IPs (256 addresses minus the network and broadcast addresses; note that an AWS subnet reserves 5 addresses, leaving 251).
Q28 Explain subnetting.
Q29 How does a private EC2 in VPC-A communicate with another private EC2 in VPC-B?
Ans. Through VPC peering: in each route table, add a route for the other VPC's CIDR with the VPC peering ID as the target.
Q30 Can we connect S3 to EC2 without a VPC endpoint?
Ans 1. Use Storage Gateway (File Gateway).
Ans 2. Yes, we can connect EC2 to S3 using the AWS CLI and a role granting the EC2 instance permission on S3. (Reference: Lecture no. 29 in Urdu It.com)
Follow the steps below to connect S3 to EC2:
1. Create an S3 bucket.

2. Create a role in IAM, attach it to the EC2 instance profile, grant it S3 full access, and generate an access key (Access Key ID and Secret Access Key) for CLI access.
3. Create an EC2 instance and install the AWS CLI.

4. Run aws configure. The CLI requests the AWS Access Key ID and AWS Secret Access Key; we get these from the IAM console in AWS, where we created the Access Key ID, and paste them in.
5. After entering the keys, set the AWS region (us-east-2 in this example):
Default region name [us-east-2]: press Enter
Default output format [none]: press Enter
6. List buckets with the ls command (aws s3 ls):
C:\Program Files\Amazon\AWSCLI>aws s3 ls

7. Now use the command below to copy from the source bucket to the destination bucket.
Command: aws s3 cp --recursive s3://uitasourcebucket s3://uitadestinationbucket
Syntax: aws s3 cp --recursive s3://<source bucket name> s3://<destination bucket name>
 

Q31 If I lost the key pair for logging in to EC2, how can I access the EC2 instance? Is it possible?
ANS. To replace a lost key pair, you can use the AWS Systems Manager AWSSupport-ResetAccess automation document. Alternatively, you can create an Amazon Machine Image (AMI) of the existing instance, launch a new instance from it, and select a new key pair.
OR
Through Session Manager, in either the Linux or Windows server case.
To connect to an EC2 instance with AWS SSM: make sure the AWS Systems Manager agent is running, make sure the EC2 instance can communicate with AWS Systems Manager, and use AWS Systems Manager Session Manager to connect to the instance. Select the lost key's EC2 instance > go to the Actions button > Security > Modify IAM Role > create an IAM role granting the EC2 instance access to AWS SSM; after creating the role, go to the Actions button again > Security > Modify IAM Role, attach the SSM role, and Save. Now go to the SSM tab > Session Manager > click Start session > you will see your EC2 instance connected.

OR
We can attach the volume of the EC2 instance to another EC2 instance; then we can access all the data of the lost key pair's instance.
Q32 What is the difference between a NAT gateway and an internet gateway?
Ans. A NAT gateway is used only for outgoing traffic to the internet, whereas an internet gateway is used in both directions: to the internet and from the internet to the user.
Q33 What exposure have you had recently?
Ans. Creating Auto Scaling, and setting up connectivity from a private EC2 in VPC-A to another private EC2 in VPC-B.
Q34 What is Boto3?
Boto3 is the name of the Python software development kit (SDK) for AWS. It allows you to directly create, update, and delete AWS resources from your Python scripts.
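
For example, a minimal sketch (bucket name is a placeholder; regions other than us-east-1 also need a CreateBucketConfiguration):

import boto3

s3 = boto3.client("s3")

# Create a bucket, list all buckets in the account, then delete it again.
s3.create_bucket(Bucket="my-example-bucket-12345")

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

s3.delete_bucket(Bucket="my-example-bucket-12345")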
Q35 Jenkins Pipeline
Q36 What is CloudFormation?
Q37 How to copy an AMI from one region to another region.
Create an image of the EC2 instance > it is automatically saved as an AMI (with images of the volumes saved as snapshots, if selected) > select Copy AMI > then select the region where you want to save it.
Q38 How to define a scheduled job/cron tab in AWS.
Ans. Scheduled jobs are defined using a Lambda function.
On a tagging basis, we can define a cron schedule for the Lambda function.
Steps for configuration:
IAM > Roles > name the role (e.g. ec2-lambda) > choose Lambda > add the role > adjust the policy as required > Policies > create policy > choose the EC2 service > Write > start/stop/describe > all resources > name the policy > add it to the role > check the Lambda function > write code in the Lambda function to start/stop EC2 (see the sketch below) > set the cron schedule in CloudWatch to stop/start EC2 > Events in CloudWatch > Rules in CloudWatch > go to EC2 and give the instance the tag (e.g. Type=Stop) > test in Lambda > check whether your EC2 stops or starts.
Throttle: by default, the code can run 1,000 times concurrently.
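
A minimal sketch of such a Lambda handler, assuming instances carry the (hypothetical) tag Type=Stop:

import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Find running instances carrying the tag Type=Stop.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Type", "Values": ["Stop"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]

    # Stop whatever matched; the CloudWatch rule supplies the schedule.
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}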

Q39 How many parameters can you have in CloudFormation?


ANS. You can have a maximum of 200 parameters in an AWS CloudFormation template.
Each parameter must be given a logical name (also called a logical ID), which must be alphanumeric and unique among all logical names within the template.
A parameter in CloudFormation has properties such as Type (e.g. String or Number), Default, and AllowedValues.

Q40 What is a parameter in CloudFormation?


ANS. A parameter in CloudFormation lets you pass a custom value into the template when you create a stack. Its properties include Type (e.g. String or Number), Default, and AllowedValues.
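
A minimal sketch of declaring such a parameter and supplying a value at stack-creation time with boto3 (the template and stack name are hypothetical):

import json
import boto3

# Hypothetical template: one String parameter with Default and AllowedValues.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "EnvType": {
            "Type": "String",
            "Default": "dev",
            "AllowedValues": ["dev", "prod"],
        }
    },
    "Resources": {"MyBucket": {"Type": "AWS::S3::Bucket"}},
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="example-stack",
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "EnvType", "ParameterValue": "prod"}],
)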

Q41 How can I connect one stack to another stack?


Q42 What is the function of Terraform?
Q43 How can we access the internet from a private EC2 instance?
Ans:
1. We make two VPCs: VPC-A is private and VPC-B is public. The private EC2 is placed in VPC-A, which has no internet gateway in its routing table. We do VPC peering from VPC-A to VPC-B, which generates a VPC peering ID, and add that peering ID to the routing table of VPC-A so the private EC2 can reach VPC-B over the peering connection. (Note: VPC peering is non-transitive, so traffic from VPC-A cannot actually egress through VPC-B's internet gateway; the standard solution is the NAT gateway below.)
OR

2. Use a NAT gateway. A NAT gateway is not free: it is charged per hour plus data processing. The NAT gateway must always be placed in a public subnet, and an Elastic IP must be attached to it. With a NAT gateway serving a private EC2, you can send traffic from the private EC2 to the internet, but no one can reach the private EC2 from the internet.
Flow of internet traffic from a private EC2:
Private EC2 > route table of the private subnet has a route to the NAT gateway > NAT gateway (placed in a public subnet) > internet.

After you have created the NAT gateway, you must update the route table(s) associated with one or more private subnets to point internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet.
Note: If you no longer need a NAT gateway, you can delete it. Deleting a NAT gateway disassociates its Elastic IP but does not release the address from your account, so delete the Elastic IP as well.
[Figure: Architecture of a VPC with a NAT gateway]
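
A minimal sketch of that NAT gateway setup with boto3 (the subnet and route-table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",  # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Point internet-bound traffic from the private subnet's route table at it.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)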
Q. How can we make a single server highly available?
Ans. Take a snapshot every hour; we can then restore from the latest snapshot to keep the service available.
Q. How to connect multiple web URLs & applications to one EC2 instance.
Q. What is the latest version of Terraform?
ANS. Terraform v0.13.
Q. What changed between Terraform v0.12 and Terraform v0.13?
ANS: This guide focuses on changes from v0.12 to v0.13. Terraform supports upgrade
tools and features only for one major release upgrade at a time, so if you are currently
using a version of Terraform prior to v0.12 please upgrade through the latest minor
releases of all of the intermediate versions first, reviewing the previous upgrade guides
for any considerations that may be relevant to you.
In particular, Terraform v0.13 no longer includes the terraform 0.12upgrade command
for automatically migrating module source code from v0.11 to v0.12 syntax. If your
modules are written for v0.11 and earlier you may need to upgrade their syntax using
the latest minor release of Terraform v0.12 before using Terraform v0.13.
Before You Upgrade
When upgrading between major releases, we always recommend ensuring that you can
run terraform plan and see no proposed changes on the previous version first,
because otherwise pending changes can add additional unknowns into the upgrade
process.
For this upgrade in particular, completing the upgrade will require running terraform
apply with Terraform 0.13 after upgrading in order to apply some upgrades to the
Terraform state, and we recommend doing that with no other changes pending.
Q. How to create EC2 with Terraform automation.
Q. How to automate EC2 creation in AWS.
ANS: Use the AWS CLI inside a shell script, then define a cron tab entry that runs the shell script.
Q. How to identify alarms for multiple servers in the CloudWatch dashboard.
ANS: We create each alarm using the instance ID, so we can identify servers by instance ID in the dashboard.
Q. How to take a backup of the EBS volumes of an EC2 instance.
ANS. Using the "Lifecycle Manager" tool, we can take backups of EBS on a given schedule.
Q. Upgrade/downgrade AWS EC2 resources (CPU and RAM resize).
ANS. Log in to the console account > select the EC2 instance > select Stop under Instance State in the Actions menu > select Instance Settings in the Actions menu > Change Instance Type, e.g. t2.micro to t2.small.
Q. If an EC2 instance shows high CPU utilization, what action needs to be taken?
ANS. Upgrade the AWS EC2 resources (CPU and RAM resize).
Process: Log in to the console account > select the EC2 instance > select Stop under Instance State in the Actions menu > select Instance Settings in the Actions menu > Change Instance Type, e.g. t2.micro to t2.small.
OR
Configure Auto Scaling for EC2 and create a rule that adds EC2 instances under high CPU or traffic.

Q. How do I change my EBS volume?


ANS. To modify an EBS volume using the console: choose Volumes, select the volume to modify, and then choose Actions, Modify Volume. The Modify Volume window displays the volume ID and the volume's current configuration, including type, size, and IOPS. You can change any or all of these settings in a single action.
Steps: Log in to the console > Volumes > Actions > Modify Volume > change the size > reboot the EC2 instance.
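
A minimal sketch of the same change with boto3 (volume ID and target size are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Grow a hypothetical volume to 100 GiB; Iops and VolumeType can be changed too.
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=100)

# Check the modification state until the resize takes effect.
mods = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
print(mods["VolumesModifications"][0]["ModificationState"])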
Q. Generate daily, weekly, and monthly reports related to operations.
ANS. With the AWS Cost & Usage Report, we can export reports as required.
https://www.youtube.com/watch?v=8T0s2cTNHLU
Q. Can my account's AMI be copied to another region?
ANS. Yes, you can copy it across regions. After you have made your selections and started the copy, you will be provided with the ID of the new AMI in the destination region.
Q. What is CIDR?
Q. How can we access an EC2 instance from another account?
Q. How can I create IAM policies for different users?
Q. Auto Scaling Group (ASG).
Q. How can we add a new AMI to an existing Auto Scaling group?
ANS:
Step 1: Create your new AMI. The easiest way I have found to do this is actually via the EC2 console. ...
Step 2: Test your AMI. ...
Step 3: Update the launch configuration to use the AMI. ...
Step 4: Update the Auto Scaling group.
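
A minimal sketch of steps 3 and 4, assuming a launch template rather than a launch configuration (all IDs and names are placeholders):

import boto3

ec2 = boto3.client("ec2")
asg = boto3.client("autoscaling")

# Create a new launch template version pointing at the new AMI.
version = ec2.create_launch_template_version(
    LaunchTemplateId="lt-0123456789abcdef0",
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": "ami-0123456789abcdef0"},
)["LaunchTemplateVersion"]["VersionNumber"]

# Point the Auto Scaling group at the new version.
asg.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchTemplate={
        "LaunchTemplateId": "lt-0123456789abcdef0",
        "Version": str(version),
    },
)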

Q. How can we access the internet from a private subnet when no public subnet is available?
Q. How to pull or push EC2 logs to an S3 bucket at a particular time.
Q. How to know whether your application (website) is running in EC2.
Q. Terraform file
Q. What is horizontal and vertical autoscaling?
Ans. Horizontal scaling means adding more servers to keep up with traffic; vertical scaling means increasing the capacity/instance type of an existing running server.
or
Horizontal scaling is the act of changing the number of EC2 instances without changing the size of any EC2 instance. Vertical scaling is increasing the size and computing power of a single instance without increasing the number of nodes or instances.

Q Any challenges in AWS JOB.


Q. How can we access multiple applications running on a private EC2 instance?
Ans. Configure Route 53 to create records for routing, with the load balancer as the target; all the applications need to be added in the listener tab of the LB, with rules directing traffic to each application.

Q. How to set up EC2 backups to an S3 bucket with automation


Ans. Connect via the AWS CLI > create a role for EC2 with access to the S3 bucket > log in to the EC2 instance as root and define a cron tab entry that pushes the backup/logs path to the S3 bucket.
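
A minimal sketch of the script such a cron entry could invoke, using boto3 instead of the CLI (the log path and bucket name are placeholders):

import os
import boto3

s3 = boto3.client("s3")
root = "/var/log/myapp"  # hypothetical backup/log path

# Upload every file under the path, keeping the relative path as the key.
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        local = os.path.join(dirpath, name)
        key = os.path.relpath(local, root)
        s3.upload_file(local, "my-backup-bucket-12345", key)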

Q. I want to access my Amazon S3 bucket privately without using authentication, such as AWS Identity and Access Management (IAM) credentials. How can I do that?

Ans. Short description

You can access an S3 bucket privately without authentication when you access the S3 bucket
from an Amazon Virtual Private Cloud (Amazon VPC) that has an endpoint to Amazon S3.

Follow these steps to set up VPC endpoint access to the S3 bucket:

1. Create a VPC endpoint for Amazon S3.


2. Add a bucket policy that allows access from the VPC endpoint.

Resolution
Before you begin, you must create a VPC that you'll access the bucket from.

Create a VPC endpoint for Amazon S3

1. Open the Amazon VPC console.


2. Using the Region selector in the navigation bar, set the AWS Region to the same Region as the
VPC that you want to use.
3. From the navigation pane, choose Endpoints.
4. Choose Create Endpoint.
5. For Service category, verify that AWS services is selected.
6. For Service Name, select the gateway endpoint service name that includes "s3". For example, the service name in the US East (N. Virginia) Region is com.amazonaws.us-east-1.s3.
7. For VPC, select the VPC that you want to use.
8. For Configure route tables, select the route tables based on the associated subnets that you
want to be able to access the endpoint from.
9. For Policy, verify that Full Access is selected.
10. Choose Create endpoint.
11. Note the VPC Endpoint ID. You need this ID for a later step.

Add a bucket policy that allows access from the VPC endpoint

Update your bucket policy with a condition that allows users to access the S3 bucket when the
request is from the VPC endpoint that you created. To allow those users to download objects
(s3:GetObject), you can use a bucket policy that's similar to the following:

Note: For the value of aws:sourceVpce, enter the VPC endpoint ID of the endpoint that you
created.
{
   "Version": "2012-10-17",
   "Id": "Policy1415115909152",
   "Statement": [
     {
       "Sid": "Access-to-specific-VPCE-only",
       "Principal": "*",
       "Action": "s3:GetObject",
       "Effect": "Allow",
       "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"],
       "Condition": {
         "StringEquals": {
           "aws:sourceVpce": "vpce-1a2b3c4d"
         }
       }
     }
   ]
}
Important: This policy allows access from the VPC endpoint, but it doesn't deny all access from
outside the endpoint. If a user from the same account is authenticated, this policy still allows
the user to access the bucket from outside the VPC endpoint. If you need a more restrictive
bucket policy, then use a policy that explicitly denies access to any requests from outside the
endpoint.
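
A minimal sketch of attaching that policy with boto3 (same placeholder bucket and endpoint ID as above):

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Access-to-specific-VPCE-only",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Effect": "Allow",
        "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"],
        "Condition": {"StringEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="DOC-EXAMPLE-BUCKET", Policy=json.dumps(policy))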

Q. How can we monitor a VPC?


Ans. VPC Flow Logs.

Q. What is centralized logging?


Ans. The Centralized Logging solution enables organizations to collect, analyze, and display
Amazon CloudWatch Logs in a single dashboard. AWS services generate log data, such as
audit logs for access, configuration changes, and billing events. ... You can collect Amazon
CloudWatch Logs from multiple accounts and AWS Regions.

Q. How many types of virtualization are used in the AWS cloud?

Ans. Historically, every AWS AMI ran on the bare-metal Xen hypervisor. Xen offers two types of virtualization: HVM (Hardware Virtual Machine) and PV (Paravirtualization). But before looking at these virtualization capabilities, it's important to understand how the Xen architecture works.
AWS later revealed that it had created a new KVM-based hypervisor (Nitro), rather than the Xen hypervisor it had relied on for years. The new hypervisor was first mentioned in the announcement of the new EC2 instance type called "C5", powered by Intel Skylake Xeons.
AWS is a cloud service provider, while VMware is a pioneer in virtualization. The principal difference is that VMware's business is virtualization, while AWS's is essentially cloud services.
Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The primary differences between PV and HVM AMIs are the way they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance.

Q. AWS Certificate Manager.


ANS. Certificate Manager is required to install certificates for HTTPS applications. You can also install a certificate while creating a load balancer, in the listener settings, and you can import certificates from external sources.

Q. EBS Encryption.

Q. How can we give a specific user permission on an S3 bucket?


Ans.

Q. What is the use of AWS Config?


Ans. AWS Config is a service where we can check the compliance of AWS resources. We can define multiple rules for compliance and non-compliance of AWS services, and also check the status of activity logs on the AWS services the rules are applied to.

Q. What is the difference between a public key and a private key?

Ans. 

Q. What is the minimum number of instances that can run in Auto Scaling?


Ans. If you specify scaling policies, then Amazon EC2 Auto Scaling can launch or terminate instances as demand on your application increases or decreases. For example, an Auto Scaling group might have a minimum size of one instance, a desired capacity of two instances, and a maximum size of four instances.

Q. What does a record set created in a Route 53 hosted zone consist of?
Ans. Name, Type, Value, Alias & Routing Policy.
Name: the domain name, e.g. www.google.com.
Type: the type of record, e.g. the type of IP address (A for IPv4).
Alias: can point to a load balancer, CloudFront distribution, or S3 bucket.

Q. How many ways are there to access an AWS account?


Ans.
1. Through a VPN, to access a private-subnet EC2 instance via its private IP
2. Through the AWS account URL (the console)
3. The AWS CLI
4. Through your organization's Active Directory (AD) server credentials, using your organization's AWS URL (e.g. https://techmahindra/aws/login)

Q. VPC Peering
Ans. VPC peering can be done across regions and with another account, and the peered EC2 instances can communicate via their private IPs. Transitive communication is not possible.

Q. Does the default VPC have an IGW attached or not?


Ans. Yes; but for a manually created VPC, you need to attach an IGW.

Q. How can a bucket be accessed by other users?


Ans. You need to grant allow permissions in the bucket policy for the specific user, as required.
Note: Never make your bucket public; if you must, grant only read permission on the bucket.

Q. How many VPCs can be created in a region?


Ans. 5 VPCs (the default limit).

Q. How many Elastic IPs can be created in a region?


Ans. 5 Elastic IPs (the default limit).

Q. How many subnets can be created in a region?


Ans. 200 subnets per VPC. We can define a maximum of 5 VPCs in a region, which means 1000 (200 x 5) subnets can be defined in a region.

Q. What is DMS?
Q. How to connect to AWS resources from on-premises.

Q. How to connect to an on-premises DB from AWS EC2.


Ans. Using Direct Connect, an application on an AWS EC2 instance can connect to the on-premises DB; VPN tunnels can also connect the on-premises DB to AWS EC2.

Q. What is a Bastion Host?


Ans. A bastion host is used to access EC2 instances in a private subnet.

Q. How many ways are there to access an EC2 instance in a private subnet?


Ans. By using a bastion host, or by VPN.

Q. Difference between ALB and NLB


Ans. Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances. The feature comparison:

                                      ALB           NLB
Protocols                             HTTP, HTTPS   TCP
Platform                              VPC           VPC
Path-Based Routing                    Y             N
Host-Based Routing                    Y             N
Configurable Idle Connection Timeout  Y             N
SSL Offloading                        Y             N
Server Name Indication (SNI)          Y             N
Sticky Sessions                       Y             N
Backend Server Encryption             Y             N
User Authentication                   Y             N
Static IP                             N             Y
Elastic IP                            N             Y
Preserve Source IP Address            N             Y
Logging                               Y             Y
Zonal Failover                        Y             Y
Load Balancer Deletion Protection     Y             Y
IP Address as a Target                Y             Y
Health Checks (HC)                    Y             Y
CloudWatch Metrics                    Y             Y
Cross-Zone Load Balancing             Y             Y

Q. I'm using Amazon Simple Storage Service (Amazon S3) to host a static website, and I'm using Amazon CloudFront to serve the website. However, the website is down. How can I fix this?

Ans.
Resolution.
Before you begin, confirm the following:

1. You have internet access.


2. The origin domain name that's specified on your CloudFront distribution points to the correct S3
bucket with no typos or other errors.
If you have internet access and the origin domain name is correct, then check the error response
that you get from trying to access your website:

403 Access Denied error

A 403 Access Denied error indicates that there's a permissions problem that's causing your website
to appear down. For troubleshooting instructions, see I’m using an S3 website endpoint as the origin
of my CloudFront distribution. Why am I getting HTTP response code 403 (Access Denied)?

Important: Be sure to check the Block Public Access settings for your website's S3 bucket. These
settings can block anonymous requests to your website. Block Public Access settings can apply to
AWS accounts or individual buckets.

404 Not Found error

A 404 Not Found error indicates that the request is pointing to a website object that doesn't exist.
Check the following:

 Verify that the URL to the website object doesn't have any typos.
 Verify that the website object exists on the S3 bucket that's hosting your website. You can check the
bucket using the Amazon S3 console. Or, you can run the list-objects command using the AWS
Command Line Interface (AWS CLI).

Q. What is AWS Transit Gateway?


Ans.
AWS Transit Gateway: a network transit hub that connects your VPCs and on-premises networks through a single central gateway, instead of many point-to-point peering connections.
Q. How can we copy data from an S3 bucket in one account and Region to another account and Region?
Ans. We can use the AWS "DataSync" service to transfer data from one bucket to another bucket.
While configuring DataSync there are options for selecting a different account and region;
we can configure those accordingly and transfer the data.
Use AWS DataSync configuration setup below


Open the AWS DataSync console >> Create a task >> Create a new location for Amazon S3 >> Select your S3 bucket as the source location >> Update the source location configuration settings; make sure to specify the AWS Identity and Access Management (IAM) role that will be used to access your source S3 bucket >> Select your S3 bucket as the destination location >> Update the destination location configuration settings; make sure to specify the IAM role that will be used to access your destination S3 bucket >> Configure settings for your task >> Review the configuration details >> Choose Create task >> Start your task.
Note: When you use AWS DataSync, you will incur additional costs. To preview any DataSync
costs, review the DataSync pricing structure and DataSync limits.
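
A minimal sketch of the same flow with boto3 (bucket ARNs and access roles are placeholders):

import boto3

datasync = boto3.client("datasync")

# Source and destination S3 locations, each with its own access role.
src = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::source-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111111111111:role/datasync-src"},
)["LocationArn"]

dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::destination-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::222222222222:role/datasync-dst"},
)["LocationArn"]

# Create the task and start a transfer execution.
task = datasync.create_task(SourceLocationArn=src, DestinationLocationArn=dst)
datasync.start_task_execution(TaskArn=task["TaskArn"])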

OR  

Ans.
This pattern uses a source account and a destination account in different Regions. You attach a
bucket policy to your source S3 bucket that grants the destination account access through AWS
IAM. You then create an IAM policy in your destination account that allows a user to
perform PutObject and GetObject actions on the source S3 bucket. Finally, you
run copy and sync commands to transfer data from the source S3 bucket to the destination S3
bucket.
Accounts own the objects that they upload to S3 buckets. If you copy objects across different
accounts and Regions, you grant the destination account ownership of the copied objects. You
can change the ownership of an object by changing its ACL to bucket-owner-full-control.
However, we recommend that you grant programmatic cross-account permissions to the
destination account because ACLs can be difficult to manage for multiple objects.
Required services for configuration: S3, AWS CLI & IAM
The copy command runs in the AWS CLI:
aws s3 cp s3://DOC-EXAMPLE-BUCKET-SOURCE/ s3://DOC-EXAMPLE-BUCKET-TARGET/ --recursive --source-region SOURCE-REGION-NAME --region DESTINATION-REGION-NAME

The synchronize command runs in the AWS CLI:


aws s3 sync s3://DOC-EXAMPLE-BUCKET-SOURCE/ s3://DOC-EXAMPLE-BUCKET-TARGET/ --source-region SOURCE-REGION-NAME --region DESTINATION-REGION-NAME

Q. I want to transfer a large amount of data (1 TB or more) from one Amazon Simple Storage
Service (Amazon S3) bucket to another bucket. How can I do that?

Ans.

Depending on your use case, you can perform the data transfer between buckets using one of the
following options:

 Run parallel uploads using the AWS Command Line Interface (AWS CLI)
 Use an AWS SDK
 Use cross-Region replication or same-Region replication
 Use Amazon S3 batch operations
 Use S3DistCp with Amazon EMR
 Use AWS DataSync

Q. How to monitor EC2 instances by memory?

Ans.
Unfortunately, there is no built-in metric for memory consumption in CloudWatch.

The recommended way to monitor memory usage is to create a custom CloudWatch metric by using your own monitoring scripts on your instances. AWS has published documentation on how to achieve this on Linux instances with a set of (unsupported) scripts. Once your instances are publishing the custom metrics, you will be able to attach alarms to them in CloudWatch.
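
A minimal sketch of publishing such a custom metric with boto3 (the namespace is a placeholder, and reading /proc/meminfo is a Linux-only stand-in for a real collector):

import boto3

def memory_used_percent():
    # Parse /proc/meminfo; values are reported in kB.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key.strip()] = int(value.split()[0])
    return 100.0 * (1 - info["MemAvailable"] / info["MemTotal"])

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="Custom/System",  # hypothetical namespace
    MetricData=[{
        "MetricName": "MemoryUsedPercent",
        "Unit": "Percent",
        "Value": memory_used_percent(),
    }],
)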

Q. How to block specific IP at server level.


Ans. Security groups only have allow rules, so blocking a specific IP is done through a NACL. But a NACL applies to the whole subnet, so we can place that EC2 instance in its own subnet and then apply the block in the NACL for that subnet.

Q. If a region goes down, how do we take backups and how do we take care of the traffic?

Ans. AWS Backup now supports cross-region backup, enabling AWS customers to copy
backups across multiple services to different regions.

Using AWS Backup, you can copy backups to multiple AWS Regions on demand or
automatically as part of a scheduled backup plan. Cross-Region replication is particularly
valuable if you have business continuity or compliance requirements to store backups a
minimum distance away from your production data.
There are two ways to take cross-region backups: 1. On-demand, 2. Scheduled backup plan.

Performing on-demand cross-Region backup

To copy an existing backup on-demand

1. Open the AWS Backup console at https://console.aws.amazon.com/backup.


2. Choose Backup vaults.
3. Choose the vault that contains the recovery point you want to copy.
4. In the Backups section, select a recovery point to copy.
5. Using the Actions dropdown button, choose Copy.
6. Enter the following values:
Copy to destination
Choose the destination AWS Region for the copy. You can add a new copy rule per copy to a new
destination.
Destination Backup vault
Choose the destination backup vault for the copy.
Transition to cold storage
Choose when to transition the backup copy to cold storage. Backups transitioned to cold storage must
be stored there for a minimum of 90 days. This value cannot be changed after a copy has transitioned
to cold storage.
To see the list of resources that you can transition to cold storage, see the "Lifecycle to cold storage"
section of the Feature availability by resource table. The cold storage expression is ignored for other
resources.
Retention period
Specifies the number of days after creation that the copy is deleted. This value must be greater than 90 days beyond the Transition to cold storage value. The Always retention option retains your copy indefinitely.
IAM role
Choose the IAM role that AWS Backup will use when creating the copy. The role must also have AWS
Backup listed as a trusted entity, which enables AWS Backup to assume the role. If you
choose Default and the AWS Backup default role is not present in your account, one will be created for
you with the correct permissions.
7. Choose Copy.
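
A minimal sketch of the same on-demand copy via the AWS Backup API in boto3 (the ARNs, vault names, and role are placeholders):

import boto3

backup = boto3.client("backup")

# Copy one recovery point to a vault in another Region.
backup.start_copy_job(
    RecoveryPointArn="arn:aws:ec2:us-east-1::snapshot/snap-0123456789abcdef0",
    SourceBackupVaultName="Default",
    DestinationBackupVaultArn="arn:aws:backup:us-west-2:123456789012:backup-vault:Default",
    IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
)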

 
Scheduling cross-Region backup

You can use a scheduled backup plan to copy backups across AWS Regions.

To copy a backup using a scheduled backup plan

1. Open the AWS Backup console at https://console.aws.amazon.com/backup.


2. In My account, choose Backup plans, and then choose Create Backup plan.
3. On the Create Backup plan page, choose Build a new plan.
4. For Backup plan name, enter a name for your backup plan.
5. In the Backup rule configuration section, add a backup rule that defines a backup schedule, backup
window, and lifecycle rules. You can add more backup rules later.
a. For Backup rule name, enter a name for your rule.
b. For Backup vault, choose a vault from the list. Recovery points for this backup will be saved in
this vault. You can create a new backup vault.
c. For Backup frequency, choose how often you want to take backups.
d. For services that support PITR, if you want this feature, choose Enable continuous backups for point-in-time recovery (PITR). For a list of services that support PITR, see that section of the Feature availability by resource table.
e. For Backup window, choose Use backup window defaults - recommended. You can customize
the backup window.
f. For Copy to destination, Choose the destination AWS Region for your backup copy. Your backup
will be copied to this Region. You can add a new copy rule per copy to a new destination. Then
enter the following values:
Copy to another account's vault
Do not toggle this option. To learn more about cross-account copy, see Creating backup copies across
AWS accounts
Destination Backup vault
Choose the backup vault in the destination Region where AWS Backup will copy your backup.
If you would like to create a new backup vault for cross-Region copy, choose Create new Backup vault.
Enter the information in the wizard. Then choose Create Backup vault.
6. Choose Create plan.

Q. If neither a NAT gateway nor an internet gateway is configured, is there any other option to connect your instance to the internet?

Ans. I think a VPN; I am not sure. At a minimum, an internet gateway or NAT gateway should be available.

Q. If we configure AWS CloudTrail logs in several different accounts, how can we combine them into a central log?
Ans. Receiving CloudTrail log files from multiple regions
You can configure CloudTrail to deliver log files from multiple regions to a single S3 bucket for a single account. For
example, you have a trail in the US West (Oregon) Region that is configured to deliver log files to a S3 bucket, and a
CloudWatch Logs log group. When you change an existing single-region trail to log all regions, CloudTrail logs
events from all regions that are in a single AWS partition in your account. CloudTrail delivers log files to the same
S3 bucket and CloudWatch Logs log group. As long as CloudTrail has permissions to write to an S3 bucket, the
bucket for a multi-region trail does not have to be in the trail's home region.
To log events across all regions in all AWS partitions in your account, create a multi-region trail in each partition.
In the console, by default, you create a trail that logs events in all AWS Regions. This is a recommended best
practice. To log events in a single region (not recommended), use the AWS CLI. To configure an existing single-
region trail to log in all regions, you must use the AWS CLI.
To change an existing trail so that it applies to all Regions, add the --is-multi-region-trail option to the update-
trail command.
aws cloudtrail update-trail --name my-trail --is-multi-region-trail
To confirm that the trail now applies to all Regions, check that the IsMultiRegionTrail element in the output shows true:

{
    "IncludeGlobalServiceEvents": true,
    "Name": "my-trail",
    "TrailARN": "arn:aws:cloudtrail:us-east-2:123456789012:trail/my-trail",
    "LogFileValidationEnabled": false,
    "IsMultiRegionTrail": true,
    "IsOrganizationTrail": false,
    "S3BucketName": "my-bucket"
}
Note: When a new region launches in the aws partition, CloudTrail automatically applies your multi-region trail to the new region with the same settings as your original trail.

Q. Difference between GP2 & GP3

Ans. 

AWS announced the new SSD storage type, gp3, at AWS re:Invent 2020. It is the next generation of General Purpose block storage in AWS.
This post compares both types in terms of pricing and performance.
 
Performance:
AWS measures volume performance in two dimensions:
IOPS: the maximum number of input and output requests the volume can serve per second.

Throughput: the maximum amount of data that can be read/written per second.

With gp2 volumes, you don't have direct control over the IOPS and throughput; they depend on the volume size. To get higher performance you have to use a bigger volume size.

gp3 removed this limitation: you can increase the IOPS and/or throughput independently, up to 16,000 IOPS and 1,000 MiB/s respectively.
Please note that gp3 volume size, IOPS, and throughput still impact each other. The maximum throughput that can be reached = (maximum IOPS) x (average I/O size). Also, the maximum IOPS = 500 x (provisioned GiB), so volumes of 32 GiB or smaller can't reach the 16,000 IOPS limit (e.g. 32 GiB x 500 = 16,000 IOPS).
WHAT ARE THE NEW FEATURES OF EBS GP3?
Now, gp3 speed and IOPS can be customized, and the maximum throughput has been increased by 4x: it is now 1,000 MB/s compared to gp2's 250 MB/s.
The entry point includes 3,000 IOPS and a throughput of 125 MB/s for free.

The storage cost has a discount of 20% ($0.08/GB-month vs $0.10/GB-month for gp2).


Adding more provisioned IOPS costs $0.005 per IOPS-month. Adding throughput costs $0.04 per MB/s-month.
 
