Storage Technologies Unit 4,5-1
Business continuity (BC) entails preparing for, responding to, and recovering from a system outage that
adversely affects business operations. It involves proactive measures, such as
business impact analysis and risk assessments, data protection, and security, and
reactive countermeasures, such as disaster recovery and restart, to be invoked in
the event of a failure. The goal of a business continuity solution is to ensure the
“information availability” required to conduct vital business operations.
INFORMATION AVAILABILITY
➢ Information availability (IA) refers to the ability of the infrastructure to
function according to business expectations during its specified time of operation.
Information availability ensures that people (employees, customers, suppliers, and
partners) can access information whenever they need it.
Information availability can be defined with the help of reliability, accessibility and
timeliness.
Reliability: This reflects a component’s ability to function without failure, under
stated conditions, for a specified amount of time.
Accessibility: This is the state within which the required information is accessible
at the right place, to the right user. The period of time during which the system is in
an accessible state is termed system uptime; when it is not accessible it is termed
system downtime.
Timeliness: Defines the exact moment or the time window (a particular time of the
day, week, month, and/or year as specified) during which information must be
accessible.
For example, if online access to an application is required between 8:00 AM and
10:00 PM each day, any disruption to data availability outside of this time slot is
not considered to affect timeliness.
Table 11-1 lists the approximate amount of downtime allowed for a service to
achieve certain levels of 9s availability.
For example, a service that is said to be “five 9s available” is available for
99.999 percent of the scheduled time in a year (24 × 365 hours).
Table 11-1: Availability Percentage and Allowable Downtime
■ 99% (two 9s): approximately 3.65 days of downtime per year
■ 99.9% (three 9s): approximately 8.76 hours per year
■ 99.99% (four 9s): approximately 52.6 minutes per year
■ 99.999% (five 9s): approximately 5.26 minutes per year
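These downtime figures follow directly from the availability percentage; a quick
sketch of the arithmetic in Python, assuming round-the-clock scheduled operation:

```python
# Allowable downtime per year for a given availability level, assuming
# continuous (24 x 365) scheduled operation.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def allowable_downtime_hours(availability_pct: float) -> float:
    """Maximum yearly downtime (hours) that still meets the availability target."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    h = allowable_downtime_hours(pct)
    print(f"{pct}% -> {h:.2f} h/yr ({h * 60:.1f} min/yr)")
```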
Common terms of BC
■ Disaster recovery: This is the coordinated process of restoring systems,
data, and the infrastructure required to support key ongoing business operations in
the event of a disaster.
■ Disaster restart: This is the process of restarting business operations with
mirrored consistent copies of data and applications.
■ Recovery-Point Objective (RPO): This is the point in time to which
systems and data must be recovered after an outage. It defines the amount of data
loss that a business can endure.
For example, if the RPO is six hours, backups or replicas must be made at least
once every six hours.
Figure 11-2 shows various RPOs and their corresponding ideal recovery
strategies. For example:
RPO of 24 hours: This ensures that backups are created on an offsite tape drive
every midnight.
RPO of 1 hour: This ships database logs to the remote site every hour.
RPO of zero: This mirrors mission-critical data synchronously to a remote site.
■ Recovery-Time Objective (RTO): The time within which systems,
applications, or functions must be recovered after an outage. It defines the amount
of downtime that a business can endure and survive.
For example, if the RTO is two hours, then use a disk backup because it enables a
faster restore than a tape backup.
Some examples of RTOs and the recovery strategies to ensure data availability are
listed below
■ RTO of 72 hours: Restore from backup tapes at a cold site.
■ RTO of 12 hours: Restore from tapes at a hot site.
■ RTO of 4 hours: Use a data vault to a hot site
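Taken together, the RPO and RTO examples above amount to a small decision table.
A toy Python sketch; the thresholds and strategy strings are lifted from the
examples above and are purely illustrative:

```python
# Map business RPO/RTO targets (in hours) to the example strategies above.
def recovery_strategies(rpo_hours: float, rto_hours: float) -> tuple[str, str]:
    if rpo_hours == 0:
        rpo_plan = "synchronous mirroring to a remote site"
    elif rpo_hours <= 1:
        rpo_plan = "ship database logs to the remote site hourly"
    else:
        rpo_plan = "nightly backup to offsite tape"

    if rto_hours <= 4:
        rto_plan = "data vault to a hot site"
    elif rto_hours <= 12:
        rto_plan = "restore from tapes at a hot site"
    else:
        rto_plan = "restore from backup tapes at a cold site"
    return rpo_plan, rto_plan

print(recovery_strategies(rpo_hours=1, rto_hours=4))
```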
BC PLANNING LIFECYCLE
• BC planning must follow a disciplined approach like any other planning
process. Organizations today dedicate specialized resources to develop and
maintain BC plans.
• The BC planning life cycle includes five stages (see Figure 11-3):
1. Establishing objectives
2. Analyzing
3. Designing and developing
4. Implementing
5. Training, testing, assessing, and maintaining
Several activities are performed at each stage of the BC planning lifecycle,
including the following key activities:
1. Establishing objectives
■ Determine BC requirements.
■ Estimate the scope and budget to achieve requirements.
■ Select a BC team by considering subject matter experts from all areas
of the business, whether internal or external.
■ Create BC policies.
2. Analyzing
4. Implementing
■ Implement risk management and mitigation procedures that include backup,
replication, and management of resources.
■ Prepare the disaster recovery sites that can be utilized if a disaster affects
the primary data center.
■ Implement redundancy for every resource in a data center to avoid single
points of failure.
BACKUP METHODS
• Hot backup and cold backup are the two methods deployed for backup. They
are based on the state of the application when the backup is performed.
• In a hot backup, the application is up and running, with users accessing their data
during the backup process. In a cold backup, the application is not active during the
backup process.
• The backup of online production data becomes more challenging because data is
actively being used and changed. An open file is locked by the operating system
and is not copied during the backup process until the user closes it.
• The backup application can back up open files by retrying the operation on files
that were opened earlier in the backup process. During the backup process, it may
be possible that files opened earlier will be closed and a retry will be successful.
• The maximum number of retries can be configured depending on the backup
application. However, this method is not considered robust because in some
environments certain files are always open.
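A minimal sketch of this retry behaviour, using only the Python standard
library; whether copying an open file actually fails is OS-dependent (e.g.,
Windows mandatory file locking), and the paths are placeholders:

```python
import shutil
import time

MAX_RETRIES = 3      # configurable, as described above
RETRY_WAIT_S = 60    # wait between passes; a file may be closed by then

def try_backup(src: str, dst: str) -> bool:
    """Attempt one copy; a file locked by another process raises an error."""
    try:
        shutil.copy2(src, dst)
        return True
    except (PermissionError, OSError):
        return False

def backup_with_retries(files: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Back up files, retrying those that were open; return files still open."""
    pending = list(files)
    for _ in range(MAX_RETRIES):
        pending = [(s, d) for s, d in pending if not try_backup(s, d)]
        if not pending:
            break
        time.sleep(RETRY_WAIT_S)
    return pending  # always-open files exhaust every retry and end up here
```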
• In such situations, the backup application provides open file agents. These agents
interact directly with the operating system and enable the creation of consistent
copies of open files. In some environments, the use of open file agents is not
enough.
• For example, a database is composed of many files of varying sizes, occupying
several file systems. To ensure a consistent database backup, all files need to be
backed up in the same state. That does not necessarily mean that all files need to be
backed up at the same time, but they all must be synchronized so that the database
can be restored with consistency.
• Consistent backups of databases can also be done by using a cold backup. This
requires the database to remain inactive during the backup. Of course, the
disadvantage of a cold backup is that the database is inaccessible to users during
the backup process.
• Hot backup is used in situations where it is not possible to shut down the
database. This is facilitated by database backup agents that can perform a backup
while the database is active. The disadvantage associated with a hot backup is that
the agents usually affect overall application performance.
• A point-in-time (PIT) copy method is deployed in environments where the
impact of downtime from a cold backup or the performance impact of a hot
backup is unacceptable.
A pointer-based PIT copy consumes only a fraction of the storage space and can be
created very quickly. A pointer-based PIT copy is implemented in a disk-based
solution whereby a virtual LUN is created and holds pointers to the data stored on
the production LUN or save location.
In this method of backup, the database is stopped or frozen momentarily while the
PIT copy is created. The PIT copy is then mounted on a secondary server, and the
backup occurs on the secondary server, leaving the production server free to
continue normal operations.
To ensure consistency, it is not enough to back up only production data for
recovery. Certain attributes and properties attached to a file, such as permissions,
owner, and other metadata, also need to be backed up.
These attributes are as important as the data itself and must be backed up for
consistency. Backup of boot sector and partition layout information is also critical
for successful recovery.
In a disaster recovery environment, bare-metal recovery (BMR) refers to a backup
in which all metadata, system information, and application configurations are
appropriately backed up for a full system recovery.
BMR builds the base system, which includes partitioning, the file system layout,
the operating system, the applications, and all the relevant configurations.
BMR recovers the base system first, before starting the recovery of data files.
Some BMR technologies can recover a server onto dissimilar hardware.
DATA DEDUPLICATION :
Data deduplication emerged as a key technology to dramatically reduce the amount
of space and the cost associated with storing large amounts of data. Data
deduplication is the art of intelligently reducing storage needs by orders of
magnitude.
For redundant data, it achieves far greater space reductions than common data
compression techniques.
Data deduplication works through the elimination of redundant data so that only
one instance of a data set is stored. IBM has the broadest portfolio of data
deduplication solutions in the industry, which gives IBM the freedom to solve
client issues with the most effective technology.
Whether it is source or target, inline or post-process, hardware or software,
disk or tape, IBM has a solution with the technology that best solves the problem:
■ IBM ProtecTIER® Gateway and Appliance
■ IBM System Storage N series Deduplication
■ IBM Tivoli Storage Manager
Data deduplication is a technology that reduces the amount of space that is
required to store data on disk. It achieves this space reduction by storing a single
copy of data that is backed up repetitively.
Data deduplication products read data while they look for duplicate data. Data
deduplication products break up data into elements and create a signature or
identifier for each data element.
Then, they compare the signatures to identify duplicate data. After they
identify duplicate data, they retain one copy of each element, create
pointers for the duplicate items, and discard the duplicates.
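A minimal fixed-size-chunking sketch of this signature-compare-pointer idea
(real products typically use variable-size chunking and persistent stores; this
in-memory version only illustrates the concept):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size elements; real products often vary the size

def deduplicate(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Split data into elements, fingerprint each one, and store only new ones.
    Returns the list of signatures (pointers) that reconstructs the data."""
    pointers = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        signature = hashlib.sha256(chunk).hexdigest()  # the element's identifier
        if signature not in store:                     # keep one copy per element
            store[signature] = chunk
        pointers.append(signature)                     # duplicates become pointers
    return pointers

store: dict[str, bytes] = {}
pointers = deduplicate(b"abcd" * 10_000, store)
print(len(pointers), "pointers,", len(store), "unique chunks stored")
```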
The effectiveness of data deduplication depends on many variables, including the
rate of data change, the number of backups, and the data retention period.
For example, if you back up the same incompressible data one time a week for six
months, you save the first copy and you do not save the next 24. This method
provides a 25:1 data deduplication ratio. If you back up an incompressible file on
week one, back up the exact same file again on week two, and never back it up
again, this method provides a 2:1 data deduplication ratio.
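The arithmetic behind those ratios is simply the logical data protected divided
by the physical data actually stored:

```python
# Deduplication ratio = copies backed up / unique copies physically stored.
def dedup_ratio(total_copies: int, unique_copies: int = 1) -> float:
    return total_copies / unique_copies

print(dedup_ratio(25))  # identical weekly fulls for ~6 months -> 25.0 (25:1)
print(dedup_ratio(2))   # same file backed up twice, never again -> 2.0 (2:1)
```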
A more likely scenario is that a portion of your data changes from backup to
backup so that your data deduplication ratio changes over time.
With data deduplication, you can minimize your storage requirements. Data
deduplication can provide greater data reduction and storage space savings than
other existing technologies.
Figure 6-13 shows the concept of data deduplication.
Data deduplication can reduce your storage requirements but the benefit you
derive is determined by your data and your backup policies. Workloads with a
high database content have the highest data deduplication ratios.
However, product functions, such as IBM Tivoli Storage Manager Progressive
Incremental or Oracle Recovery Manager (RMAN), can reduce the data
deduplication ratio.
Compressed, encrypted, or otherwise scrambled workloads typically do not
benefit from data deduplication.
Good candidates for data deduplication are text files, log files, uncompressed and
non-encrypted database files, email files (PST, DBX, and IBM Domino®), and
Snapshots (Filer Snaps, BCVs, and VMware images).
CLOUD BACKUP
There are a variety of approaches to cloud backup, with available services that can
easily fit into an organization's existing data protection process. A typical cloud
backup involves the following elements:
Encryption
Data is encrypted before it is delivered over the internet to keep it safe from
unauthorized access. The encryption method employs a unique key produced by the
cloud backup program, and only the user has access to that key.
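A minimal sketch of encrypt-before-upload, assuming the third-party
`cryptography` package; `upload()` is a hypothetical call, and the point is
that the key is generated and held locally, never by the provider:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # unique key; only the user holds it
cipher = Fernet(key)

plaintext = b"backup payload"            # stand-in for a local file's bytes
ciphertext = cipher.encrypt(plaintext)   # what actually travels the internet

# upload(ciphertext) would go here; without the key the data is unreadable
assert cipher.decrypt(ciphertext) == plaintext
```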
Storage
After the data has been backed up, it is stored on remote servers operated by the
cloud storage provider. The data is kept in a safe, off-site location, which adds an
extra degree of protection against data loss due to hardware failure, theft, or
other disasters.
Recovery
To restore your data, log in to the cloud backup service and choose the files you
want to recover. The data is then sent from the remote servers to your device.
This process is usually quick and simple, and it requires neither physical
storage media nor specialist technical knowledge.
Back up content
1. Back up photos and videos.
2. Back up files and folders.
DATA ARCHIVE :
An electronic data archive is a repository for data that is accessed infrequently.
Types of Archives :
It can be implemented as online, nearline, or offline based on the means of access:
■ Online archive: The storage device is directly connected to the host to make
the data immediately available. This is best suited for active archives.
■ Nearline archive: The storage device is connected to the host and information
is local, but the device must be mounted or loaded to access the information.
■ Offline archive: The storage device is not directly connected, mounted, or
loaded. Manual intervention is required to provide this service before information
can be accessed.
• An archive is often stored on a write once read many (WORM) device, such as a
CD-ROM. These devices protect the original file from being overwritten. Some
tape devices also provide this functionality by implementing file locking
capabilities in the hardware or software.
• Although these devices are inexpensive, they involve operational, management,
and maintenance overhead.
• Requirements to retain archives have caused corporate archives to grow at a rate
of 50 percent or more per year. At the same time, organizations must reduce costs
while maintaining required service-level agreements (SLAs). Therefore, it is
essential to find a solution that minimizes the fixed costs of the archive’s
operations and management.
• Archives implemented using tape devices and optical disks involve many hidden
costs.
• The traditional archival process using optical disks and tapes is not optimized to
recognize the content, so the same content could be archived several times.
• Additional costs are involved in offsite storage of media and media management.
Tapes and optical media are also susceptible to wear and tear. Frequent changes
in these device technologies lead to the overhead of converting the media into
new formats to enable access and retrieval.
• Government agencies and industry regulators are establishing new laws and
regulations to enforce the protection of archives from unauthorized destruction
and modification.
• These regulations and standards affect all businesses and have established new
requirements for preserving the integrity of information in the archives.
• These requirements have exposed the hidden costs and shortcomings of the
traditional tape and optical media archive solutions.
REPLICATION :
• Replication is the process of creating an exact copy of data. Creating one
or more replicas of the production data is one of the ways to provide Business
Continuity (BC).
• Data replication stores the same data on multiple storage devices.
• Host-based data replication uses the servers to copy data from one site to
another site. Host-based replication software usually includes options such as
compression, encryption, throttling, and failover, as shown in the sketch below.
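A rough sketch of the host-based copy path (Python standard library; the paths
are placeholders), showing where the compression and throttling options fit. A
real product would add chunk framing, encryption, and failover logic:

```python
import time
import zlib

def replicate(src: str, dst: str, chunk_size: int = 1 << 20,
              max_mbps: float = 10.0) -> None:
    """Copy src to dst chunk by chunk, compressing each chunk and pacing writes."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while block := fin.read(chunk_size):
            fout.write(zlib.compress(block))               # compression option
            # throttling option: max_mbps megabits/s = max_mbps * 125,000 bytes/s
            time.sleep(len(block) / (max_mbps * 125_000))
```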
DATA MIGRATION
• In general, data migration means moving digital information.
• Transferring that information to a different location, file format, environment,
storage system, database, datacenter, or application all fit within the definition of
data migration.
• Data migration is the process of selecting, preparing, extracting, and
transforming data and permanently transferring it from one computer storage
system to another.
• Data migration is a common IT activity. However, data assets may exist in
many different states and locations, which makes some migration projects more
complex and technically challenging than others.
During data migrations, teams must pay careful attention to the following
challenges:
Source data. Not preparing the source data being moved might lead to data
duplicates, gaps or errors when it's brought into the new system or application.
Wrong data formats. Data must be opened in a format that works with the system.
Files might not have access controls on a new system if they aren't properly
formatted before migration.
Mapping data. When stored in a new database, data should be mapped in a
sensible way to minimize confusion.
Sustainable governance. Having a data governance plan in place can help
organizations track and report on data quality, which helps them understand the
integrity of their data.
Security. Maintaining who can access, edit or remove data is a must for security.
There are three broad categories of data movers: host-based, array-based and
network appliances. Host-based software is best for application-specific
migrations, such as platform upgrades, database replication and file copying.
Array-based software is primarily used to migrate data between
similar systems. Network appliances migrate volumes, files or blocks of data
depending on their configuration.
Data migration vs. data integration vs. data conversion
✓ Data migration is the process of transferring data between data storage
systems or formats,
✓ Data integration is the process of combining data from multiple source
systems -- creating a unified set of data for operational and analytical uses. The
primary goal of data integration is to produce consolidated data sets that are clean
and consistent. Integration is a core element of the data management process.
✓ Data conversion is the process of changing data from one format to
another. If a legacy system and a new system have identical fields, an
organization could just do a data migration; however, the data from the legacy
system is generally different and needs to be modified before migrating. Data
conversion is often a step in the data migration process, as sketched below.
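A small sketch of conversion as a step inside migration, assuming hypothetical
legacy field names (CUSTID, JOINED) and a made-up target schema:

```python
from datetime import datetime

# Legacy records store ids as zero-padded strings and dates as DD/MM/YYYY;
# the new system expects integers and ISO dates. Field names are hypothetical.
def convert_record(legacy: dict) -> dict:
    return {
        "customer_id": int(legacy["CUSTID"]),  # str -> int
        "joined": datetime.strptime(legacy["JOINED"], "%d/%m/%Y").strftime("%Y-%m-%d"),
    }

legacy_rows = [{"CUSTID": "0042", "JOINED": "31/12/1999"}]
migrated = [convert_record(r) for r in legacy_rows]  # convert, then load
print(migrated)  # [{'customer_id': 42, 'joined': '1999-12-31'}]
```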
Self-service DRaaS: The least expensive option is self-service DRaaS, where the
customer is responsible for the planning, testing and management of disaster
recovery, and the customer hosts its own infrastructure backup on virtual machines
in a remote location. Careful planning and testing are required to make sure that
processing can fail over to the virtual servers instantly in the event of a disaster.
This option is best for those who have experienced disaster recovery experts on
staff.
GOALS OF INFORMATION SECURITY
• The major goals of information security are as follows −
Confidentiality − The goal of confidentiality is that only the sender and the intended
recipient should be able to access the contents of a message. Confidentiality is
compromised if an unauthorized person is able to access the message.
For example, consider a confidential email message sent by user A to user B that is
accessed by user C without the authorization or knowledge of A and B. This kind of attack
is known as interception.
Integrity − When the contents of a message are changed after the sender sends it, but
before it reaches the intended recipient, the integrity of the message is lost.
For example, consider that user A sends a message to user B, and user C tampers with the
message originally sent by user A, which is intended for user B.
User C somehow manages to access it, modify its contents, and send the changed message to
user B. User B has no way of knowing that the contents of the message were changed after
user A sent it. User A also does not know about this change. This kind of attack is known
as modification.
Availability − Another major goal of information security is availability: resources must
be available to authorized parties at all times.
For instance, because of the intentional actions of an unauthorized user C, an authorized
user A may not be able to contact server B. This defeats the principle of availability.
Such an attack is known as interruption.
STORAGE SECURITY DOMAINS
• Storage devices that are not connected to a storage network are less vulnerable because
they are not exposed to security threats via networks. However, with increasing use of
networking in storage environments, storage devices are becoming highly exposed to security
threats from a variety of sources.
• If each component within the storage network is considered a potential access point,
one must analyze the attack surface that each of these access points provides and identify the
associated vulnerability.
• In order to identify the threats that apply to a storage network, access paths to data
storage can be categorized into three security domains: application access, management
access, and BURA (backup, recovery, and archive).
• Figure 15-1 depicts the three security domains of a storage system environment.
Assets, threats, and vulnerability are considered from the perspective of risk identification and
control analysis.
Assets
• Information is one of the most important assets for any organization. Other assets
include hardware, software, and the network infrastructure required to access this
information.
• To protect these assets, organizations must develop a set of parameters to ensure the
availability of the resources to authorized users and trusted networks. These parameters apply
to storage resources, the network infrastructure, and organizational policies.
• Several factors need to be considered when planning for asset security. Security
methods have two objectives.
• The first objective is to ensure that the network is easily accessible to authorized
users. It should also be reliable and stable under disparate environmental conditions and
volumes of usage.
• The second objective is to make it very difficult for potential attackers to access and
compromise the system. These methods should provide adequate protection against
unauthorized access to resources and against viruses, worms, Trojans, and other malicious
software programs.
Threats
• Threats are the potential attacks that can be carried out on an IT infrastructure. These
attacks can be classified as active or passive. Passive attacks are attempts to gain unauthorized
access into the system.
• They pose threats to confidentiality of information. Active attacks include data
modification, Denial of Service (DoS), and repudiation attacks. They pose threats to data
integrity and availability. In a modification attack, the unauthorized user attempts to modify
information for malicious purposes.
• A modification attack can target data at rest or data in transit. These attacks pose a
threat to data integrity. Denial of Service (DoS) attacks deny the use of resources to
legitimate users.
• These attacks generally do not involve access to or modification of information on the
computer system. Instead, they pose a threat to data availability.
• The intentional flooding of a network or website to prevent legitimate access to
authorized users is one example of a DoS attack. Repudiation is an attack against the
accountability of the information.
• It attempts to provide false information by either impersonating someone or denying
that an event or a transaction has taken place.
• Table 15-1 describes different forms of attacks and the security services used to
manage them.
Vulnerability
• The paths that provide access to information are the most vulnerable to potential
attacks. Each of these paths may contain various access points, each of which provides
different levels of access to the storage resources.
• It is very important to implement adequate security controls at all the access points on
an access path. Implementing security controls at each access point of every access path
is termed defense in depth.
• Attack surface, attack vector, and work factor are the three factors to consider when
assessing the extent to which an environment is vulnerable to security threats. Attack surface
refers to the various entry points that an attacker can use to launch an attack. Each component
of a storage network is a source of potential vulnerability.
• An attack vector is a step or a series of steps necessary to complete an attack. For
example, an attacker might exploit a bug in the management interface to execute a snoop
attack, modifying the configuration of the storage device so that traffic can also be
accessed from an additional host.
• Work factor refers to the amount of time and effort required to exploit an attack
vector.
• For example, if attackers attempt to retrieve sensitive information, they consider the
time and effort that would be required for executing an attack on a database.
• The preventive control attempts to prevent an attack; the detective control detects
whether an attack is in progress; and after an attack is discovered, the corrective controls are
implemented.
• Preventive controls avert the vulnerabilities from being exploited and prevent an
attack or reduce its impact. Corrective controls reduce the effect of an attack, while detective
controls discover attacks and trigger preventive or corrective controls.
Digital security controls include such things as usernames and passwords, two-
factor authentication, antivirus software, and firewalls.
Cloud security controls include measures you take in cooperation with a cloud
services provider to ensure the necessary protection for data and workloads. If
your organization runs workloads on the cloud, you must still meet corporate
or business policy security requirements as well as industry regulations.
Risk management
Businesses face different types of risks, including financial, legal, strategic, and
security risks. Proper risk management helps businesses identify these risks and find ways to
remediate any that are found.
Companies use an enterprise risk management program to predict potential problems
and minimize losses.
For example, you can use risk assessment to find security loopholes in your computer
system and apply a fix.
Compliance
Compliance is the act of following rules, laws, and regulations. It applies to legal and
regulatory requirements set by industry bodies and also to internal corporate policies.
In GRC, compliance involves implementing procedures to ensure that business
activities comply with the respective regulations.
For example, healthcare organizations must comply with laws like HIPAA that
protect patients' privacy.
BENEFITS OF GRC :
• By implementing GRC programs, businesses can make better decisions in a risk-
aware environment.
• An effective GRC program helps key stakeholders set policies from a shared
perspective and comply with regulatory requirements.
• With GRC, the entire company comes together in its policies, decisions, and actions.
The following are some benefits of implementing a GRC strategy at your organization.
Data-driven decision-making
You can make data-driven decisions within a shorter time frame by monitoring your
resources, setting up rules or frameworks, and using GRC software and tools.
Responsible operations
GRC streamlines operations around a common culture that promotes ethical values and
creates a healthy environment for growth. It guides strong organizational culture development
and ethical decision-making in the organization.
Improved cybersecurity
With an integrated GRC approach, businesses can employ data security measures to protect
customer data and private information. Implementing a GRC strategy is essential for your
organization due to increasing cyber risk that threatens users' data and privacy. It helps
organizations comply with data privacy regulations like the General Data Protection
Regulation (GDPR). With a GRC IT strategy, you build customer trust and protect your
business from penalties.
IMPLEMENTATION OF GRC:
Companies of all sizes face challenges that can endanger revenue, reputation, and
customer and stakeholder interest.
Some of these challenges include the following:
✓ Internet connectivity introducing cyber risks that might compromise data storage
security
✓ Businesses needing to comply with new or updated regulatory requirements
✓ Companies needing data privacy and protection
✓ Companies facing more uncertainties in the modern business landscape
✓ Risk management costs increasing at an unprecedented rate
✓ Complex third-party business relationships increasing risk
WORKING OF GRC :
GRC in any organization works on the following principles:
Key stakeholders
GRC requires cross-functional collaboration across different departments that practice
governance, risk management, and regulatory compliance.
Some examples include the following:
✓ Senior executives who assess risks when making strategic decisions
✓ Legal teams who help businesses mitigate legal exposures
✓ Finance managers who support compliance with regulatory requirements
✓ HR executives who deal with confidential recruitment information
✓ IT departments that protect data from cyber threats
GRC framework
A GRC framework is a model for managing governance and compliance risk in a company.
It involves identifying the key policies that can drive the company toward its goals. By
adopting a GRC framework, you can take a proactive approach to mitigating risks, making
well-informed decisions, and ensuring business continuity.
Companies implement GRC by adopting GRC frameworks that contain key policies that align
with the organization's strategic objectives.
Key stakeholders base their work on a shared understanding from the GRC framework as
they devise policies, structure workflows, and govern the company.
Companies might use software and tools to coordinate and monitor the success of the GRC
framework.
GRC maturity
GRC maturity is the level of integration of governance, risk assessment, and compliance
within an organization.
You achieve a high level of GRC maturity when a well-planned GRC strategy results in cost
efficiency, productivity, and effectiveness in risk mitigation.
Meanwhile, a low level of GRC maturity is unproductive and keeps business units working
in silos.
GRC TOOLS:
GRC tools are software applications that businesses can use to manage policies, assess risk,
control user access, and streamline compliance.
Businesses use the following GRC tools to integrate business processes, reduce costs,
and improve efficiency.
GRC software
GRC software helps automate GRC frameworks by using computer systems. Businesses use
GRC software to perform these tasks:
✓ Oversee policies, manage risk, and ensure compliance
✓ Stay updated about various regulatory changes that affect the business
✓ Empower multiple business units to work together on a single platform
✓ Simplify and increase the accuracy of internal auditing
User management
You can give various stakeholders the right to access company resources with user management
software.
This software supports granular authorization, so you can precisely control who has
access to what information.
User management ensures that everyone can securely access the resources they need to
get their work done.
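A toy sketch of what granular authorization means in practice, with made-up
role and resource names; real user-management software persists such policies
and integrates with corporate directories:

```python
# Each role maps to exactly the (resource, action) pairs it may use.
PERMISSIONS = {
    "hr_exec": {("recruitment_files", "read"), ("recruitment_files", "write")},
    "auditor": {("recruitment_files", "read"), ("finance_reports", "read")},
}

def can_access(role: str, resource: str, action: str) -> bool:
    return (resource, action) in PERMISSIONS.get(role, set())

assert can_access("auditor", "finance_reports", "read")
assert not can_access("auditor", "recruitment_files", "write")  # denied
```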
Auditing
You can use auditing tools like AWS Audit Manager to evaluate the results of integrated
GRC activities in your company.
By running internal audits, you can compare actual performance with GRC goals.
You can then decide if the GRC framework is effective and make necessary improvements.
You must bring different parts of your business into a unified framework to implement
GRC. Building an effective GRC program requires continuous evaluation and improvement.
STORAGE INFRASTRUCTURE MANAGEMENT ACTIVITIES
Availability management
• The critical task in availability management is establishing a proper guideline for all
configurations to ensure availability based on service levels.
• For example, when a server is deployed to support a critical business function, the
highest availability standard is usually required.
• This is generally accomplished by deploying two or more HBAs, multipathing
software with path failover capability, and server clustering. The server must be connected to
the storage array using at least two independent fabrics and switches that have built-in
redundancy. Storage devices with RAID protection are made available to the server using at
least two front-end ports. In addition, these storage arrays should have built-in
redundancy for various components and should support backup as well as local and remote
replication. Virtualization technologies have significantly improved the availability
management task. With virtualization in place, resources can be dynamically added or
removed to maintain availability.
Capacity management
• The goal of capacity management is to ensure adequate availability of resources for
all services based on their service level requirements.
• Capacity management provides capacity analysis, comparing allocated storage to
forecasted storage on a regular basis.
• It also provides trend analysis of actual utilization of allocated storage and rate of
consumption, which must be rationalized against storage acquisition and deployment
timetables.
• Storage provisioning is an example of capacity management.
• It involves activities such as device configuration and LUN masking on the storage
array and zoning configuration on the SAN and HBA components. Capacity management
also takes into account the future needs of resources, and includes setting up monitors
and analytics to gather such information.
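A tiny capacity-forecast sketch in the spirit of the trend analysis described
above; all figures are illustrative:

```python
# Extrapolate the consumption trend to estimate when allocated storage runs out.
def weeks_until_full(capacity_tb: float, used_tb: float,
                     growth_tb_per_week: float) -> float:
    if growth_tb_per_week <= 0:
        return float("inf")  # no growth: capacity never runs out
    return (capacity_tb - used_tb) / growth_tb_per_week

print(weeks_until_full(capacity_tb=100, used_tb=76, growth_tb_per_week=2))  # 12.0
```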
Performance management
• Performance management ensures the optimal operational efficiency of all
components.
• Performance analysis is an important activity that helps to identify the performance of
storage infrastructure components.
• This analysis indicates whether a component is meeting expected performance levels.
Several performance management activities are initiated for the deployment of an
application or server in the existing storage infrastructure.
• Every component must be validated for adequate performance capabilities as defined
by the service levels. For example, to optimize expected performance levels, activities on the
server such as the volume configuration, designing the database, application layout
configuration of multiple HBAs, and intelligent multipathing software must be fine-tuned.
The performance management tasks on a SAN include designing sufficient ISLs in a multi-
switch fabric with adequate bandwidth to support the required performance levels. The
storage array configuration tasks include selecting the appropriate RAID type and LUN
layout, front-end and back-end ports, and LUN accessibility (LUN masking) while
considering the end-to-end performance.
Security Management
• Security management prevents unauthorized access and configuration of storage
infrastructure components.
• For example, while deploying an application or a server, the security management
tasks include managing user accounts and access policies that authorize users to perform
role-based activities.
• The security management tasks in the SAN environment include configuration of
zoning to restrict an HBA’s unauthorized access to the specific storage array ports. LUN
masking prevents data corruption on the storage array by restricting host access to a defined
set of logical devices.
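A toy illustration of a LUN masking check, with made-up WWNs and LUN numbers;
a real array enforces this mapping in firmware at its front-end ports:

```python
# The masking table records which host (by its HBA's WWN) may see which LUNs.
MASKING = {
    "10:00:00:00:c9:00:00:01": {0, 1},  # host A -> LUN 0 and LUN 1
    "10:00:00:00:c9:00:00:02": {2},     # host B -> LUN 2 only
}

def host_may_access(wwn: str, lun: int) -> bool:
    return lun in MASKING.get(wwn, set())

assert host_may_access("10:00:00:00:c9:00:00:02", 2)
assert not host_may_access("10:00:00:00:c9:00:00:02", 0)  # blocked by masking
```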
Reporting
• It is difficult for businesses to keep track of the resources they have in their data
centers, for example, the number of storage arrays, the array vendors, how the storage arrays
are being used, and by which applications.
• Reporting on a storage infrastructure involves keeping track of, and gathering
information from, various components and processes.
• This information is compiled to generate reports for trend analysis, capacity planning,
chargeback, performance, and to illustrate the basic configuration of storage infrastructure
components.
• Capacity planning reports also contain current and historic information about
utilization of storage, file system, database tablespace, and ports.
• Configuration or asset management reports include details about device allocation,
local or remote replicas, and fabric configuration; and list all equipment, with details such
as their value, purchase date, lease status, and maintenance records.
• Chargeback reports contain information about the allocation or utilization of storage
infrastructure components by various departments or user groups. Performance reports
provide details about the performance of various storage infrastructure components.
STORAGE INFRASTRUCTURE MANAGEMENT PROCESSES :
• Storage management refers to the management of the data storage equipment that is
used to store user- and computer-generated data.
• Hence it is a tool or set of processes used by an administrator to keep data and
storage equipment safe.
• Storage management is a process for users to optimize the use of storage devices and
to protect the integrity of data on any media on which it resides. The category of
storage management generally contains different subcategories covering aspects such as
security, virtualization, and different types of provisioning or automation, which
together make up the storage management software market.
Storage management key attributes: Storage management has some key attributes that
are generally used to manage the storage capacity of the system. These are given below:
1. Performance
2. Reliability
3. Recoverability
4. Capacity
PROVISIONING
• This method entails assigning storage capacity by analyzing current capabilities, such
as storage on physical drives or the cloud, and deciding the proper information to store in
each location.
• It's important to consider factors such as ease of access and security when
determining where to store your data.
• Planning where to store data allows organizations to discover whether they have
ample storage space available or whether they should reconfigure their system for better
efficiency.
DATA COMPRESSION
• This is the act of reducing the size of data sets without compromising them.
Compressing data allows users to save storage space, improve file transfer speeds and
decrease the amount of money they spend on storage hardware and network bandwidth.
• Data compression works by either removing unnecessary bits of information or
redundancies within data.
• For example, to compress an audio file, a data compression tool may remove parts of
the file that contain no audible noise.
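Redundancy removal is easy to see with Python's standard-library zlib: highly
repetitive data compresses far better than random-looking data, and nothing is
lost on decompression:

```python
import zlib

text = b"error: disk offline\n" * 1_000  # a very repetitive log file
packed = zlib.compress(text)

print(f"{len(text)} -> {len(packed)} bytes "
      f"({len(text) / len(packed):.0f}:1 ratio)")
assert zlib.decompress(packed) == text   # lossless: data not compromised
```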
DATA MIGRATION
• This method entails moving data from one location to another. This can include the
physical location, such as from one hard drive to another, or the application that uses the data.
• Data migration is often necessary when introducing new hardware or software
components into an organization.
• For example, if a business purchases new computers for its office, it's important to
transfer all data from the old systems to the new ones.
• Important factors to consider while implementing data migration include ensuring
network bandwidth, effective transfer speeds, data integrity and ample storage space for the
new location throughout the transfer.
DATA REPLICATION
• This process includes making one or more copies of a particular data set, as there are
several reasons why a company may want to replicate its data.
• For example, you may wish to create a backup if there's a problem with an original
data set. You may also want to replicate data so you can store it across different locations,
improving the overall accessibility across your network.
• There are two types of data replication: synchronous and asynchronous.
Synchronous data replication is when companies copy any changes to an original data set
into the replicated data set as they occur. This type of replication ensures up-to-date
information but may require more resources than asynchronous replication. Asynchronous
replication occurs only when a professional enters a command into the database, so it is
not an automatic process. With this type, your company has more control over the
resources used to replicate data but may not possess real-time data backups. The sketch
below contrasts the two approaches.
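A minimal in-memory sketch of the difference, using only the Python standard
library; two dicts stand in for the primary and replica sites, and the function
names are made up for illustration:

```python
import queue
import threading

primary, replica = {}, {}
pending: queue.Queue = queue.Queue()

def write_sync(key, value):
    primary[key] = value
    replica[key] = value       # caller waits until the replica is updated too

def write_async(key, value):
    primary[key] = value
    pending.put((key, value))  # returns immediately; the replica lags behind

def replicator():              # background worker applies queued changes later
    while True:
        k, v = pending.get()
        replica[k] = v

threading.Thread(target=replicator, daemon=True).start()
```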
AUTOMATION
• Automation is the process of having tools automatically manage your data. Rather
than updating your data manually, you can use software tools to accomplish this task for you.
• For example, you could use a tool to automatically update a shared database whenever
you make a change on your local computer, rather than requiring manual updates. This
would ensure that the database contains updated information for all users and prevents users
from viewing outdated information if a user forgets to submit changes.
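As a sketch of that idea, a small polling loop (standard library only; the
paths are placeholders, and real automation tools are usually event-driven
rather than polling):

```python
import os
import shutil
import time

def watch_and_sync(local: str, shared: str, interval_s: float = 5.0) -> None:
    """Push the local file to the shared location whenever it changes."""
    last_mtime = 0.0
    while True:
        mtime = os.path.getmtime(local)
        if mtime != last_mtime:          # local change detected
            shutil.copy2(local, shared)  # update the shared copy automatically
            last_mtime = mtime
        time.sleep(interval_s)
```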
DISASTER RECOVERY
• Disaster recovery is a plan companies create for potential scenarios regarding data
issues.
• For example, if the hard drive that stores your data breaks, it's important to have an
effective plan that allows your business to return to normal operations. This plan might
include switching to a backup hard drive, making a new copy of that backup and
purchasing a new primary hard drive.
• Important elements in a disaster recovery plan include speed, data integrity, and
costs. Effective organizations often have plans that decrease technological downtime as
much as possible.