CRISC Certified in Risk and Information Systems Control All-in-One Exam Guide, Second Edition
Chapter 1 Governance
Chapter 2 IT Risk Assessment
Chapter 3 Risk Response and Reporting
Chapter 4 Information Technology and Security
Appendix A Implementing and Managing a Risk Management Program
Appendix B About the Online Content
Glossary
Index
CONTENTS
Acknowledgments
Introduction
Chapter 1 Governance
Organizational Governance
Organizational Strategy, Goals, and Objectives
Organizational Structure, Roles, and Responsibilities
Organizational Culture
Policies and Standards
Business Processes
Organizational Assets
Risk Governance
Enterprise Risk Management and Risk Management Frameworks
Three Lines of Defense
Risk Profile
Risk Appetite and Risk Tolerance
Legal, Regulatory, and Contractual Requirements
Professional Ethics of Risk Management
Chapter Review
Quick Review
Questions
Answers
Chapter 2 IT Risk Assessment
IT Risk Identification
Risk Events
Threat Modeling and Threat Landscape
Vulnerability and Control Deficiency Analysis
Risk Scenario Development
IT Risk Analysis and Evaluation
Risk Assessment Concepts, Standards, and Frameworks
Risk Assessment Standards and Frameworks
Risk Ranking
Risk Ownership
Risk Register
Risk Analysis Methodologies
Business Impact Analysis
Inherent and Residual Risk
Miscellaneous Risk Considerations
Chapter Review
Quick Review
Questions
Answers
Chapter 3 Risk Response and Reporting
Risk Response
Risk and Control Ownership
Risk Treatment/Risk Response Options
Third-Party Risk
Issues, Findings, and Exceptions Management
Management of Emerging Risk
Control Design and Implementation
Control Types and Functions
Control Standards and Frameworks
Control Design, Selection, and Analysis
Control Implementation
Control Testing and Effectiveness Evaluation
Risk Monitoring and Reporting
Risk Treatment Plans
Data Collection, Aggregation, Analysis, and Validation
Risk and Control Monitoring Techniques
Risk and Control Reporting Techniques
Key Performance Indicators
Key Risk Indicators
Key Control Indicators
Chapter Review
Quick Review
Questions
Answers
Chapter 4 Information Technology and Security
Enterprise Architecture
Platforms
Software
Databases
Operating Systems
Networks
Cloud
Gateways
Enterprise Architecture Frameworks
Implementing a Security Architecture
IT Operations Management
Project Management
Business Continuity and Disaster Recovery Management
Business Impact Analysis
Recovery Objectives
Recovery Strategies
Plan Testing
Resilience and Risk Factors
Data Lifecycle Management
Standards and Guidelines
Data Retention Policies
Hardware Disposal and Data Destruction Policies
Systems Development Life Cycle
Planning
Requirements
Design
Development
Testing
Implementation and Operation
Disposal
SDLC Risks
Emerging Technologies
Information Security Concepts, Frameworks, and Standards
Confidentiality, Integrity, and Availability
Access Control
Data Sensitivity and Classification
Identification and Authentication
Authorization
Accountability
Non-Repudiation
Frameworks, Standards, and Practices
NIST Risk Management Framework
ISO 27001/27002/27701/31000
COBIT 2019 (ISACA)
The Risk IT Framework (ISACA)
Security and Risk Awareness Training Programs
Awareness Tools and Techniques
Developing Organizational Security and Risk Awareness Programs
Data Privacy and Data Protection Principles
Security Policies
Access Control
Physical Access Security
Network Security
Human Resources
Chapter Review
Quick Review
Questions
Answers
Appendix A Implementing and Managing a Risk Management Program
Today’s Risk Landscape
What Is a Risk Management Program?
The Purpose of a Risk Management Program
The Risk Management Life Cycle
Risk Discovery
Types of Risk Registers
Reviewing the Risk Register
Performing Deeper Analysis
Developing a Risk Treatment Recommendation
Publishing and Reporting
Appendix B About the Online Content
System Requirements
Your Total Seminars Training Hub Account
Privacy Notice
Single User License Terms and Conditions
TotalTester Online
Technical Support
Glossary
Index
ACKNOWLEDGMENTS
From Peter:
I am immensely grateful to Wendy Rinaldi for affirming the
need to have this book published on a tight timeline. My
readers, including current and future risk managers, deserve
nothing less.
Heartfelt thanks to Wendy Rinaldi and Janet Walden for
proficiently managing this project, facilitating rapid
turnaround, and equipping us with the information and
guidance we needed to produce the manuscript.
Many thanks to Janet Walden and Nitesh Sharma for
managing the editorial and production ends of the project and
to Bart Reed for copyediting the book and further improving
readability. I appreciate KnowledgeWorks Global Ltd. for
expertly rendering my sketches into beautifully clear line art
and laying out the pages. Like stage performers, they make
hard work look easy, and I appreciate their skills.
Heartfelt thanks to Matt Webster (the author of Do No
Harm and former CISO at Galway Holdings) for his
invaluable tech review of the entire manuscript. Matt’s
experience in security leadership and risk management
resulted in many improvements in the manuscript. Thanks
also to others, including Mark Adams.
Many thanks to my literary agent, Carole Jelen, for her
diligent assistance during this and other projects. Sincere
thanks to Rebecca Steele, my business manager and publicist,
for her long-term vision and for keeping me on track.
Bobby and Dawn, thank you for including me and
welcoming me to this project. The first edition of this book
was entirely yours, and I’m honored to have been included in
this edition. I have enjoyed working with you and on this
project. But most important to me: it has been a pleasure
getting to know both of you better. You both have my deepest
respect.
Despite having written more than 40 books, I have
difficulty putting into words my gratitude for my wife,
Rebekah, for tolerating my frequent absences (in the home
office) while I developed the manuscript. This project could
not have been completed without her loyal and unfailing
support and encouragement.
From Dawn:
I continue to be proud to call Bobby Rogers my coauthor
and friend. I couldn’t ask for a better partner in crime.
Many thanks to Peter Gregory for jumping in and
improving our work. Thank you, Peter, for the guidance and
support!
McGraw Hill continues to provide both excellent editors
and excellent people to guide our projects along the way; we
could not have pulled off this project without them and their
consistent support.
A big thank you to our technical editor, Matthew Webster.
You did a great job keeping us on our toes.
Finally, I heartily acknowledge the contribution of my
family and friends who have supported me throughout my
various crazy endeavors. I could not ask for more than the love
and encouragement I receive every day toward pursuing my
goals. Thank you all.
From Bobby:
First, I’d like to thank all the good folks at McGraw Hill
and their associates for guiding us throughout this book,
helping us to ensure a quality product. Wendy Rinaldi, Caitlin
Cromley-Linn, and Janet Walden were awesome to work with,
making sure we stayed on track and doing everything they
could to make this a wonderful experience. We’re very
grateful to them for giving us the chance to write this book and
believing in us every step of the way. Nitesh Sharma of
KnowledgeWorks Global Ltd. was great to work with as
project manager, and I’m happy to work again with Bart Reed,
our copy editor on this project, who always manages to make
me sound far more intelligent with his improvements to my
writing.
Dawn Dunkerley has been one of my best friends for
several years now, and I also consider her one of the smartest
folks in our profession, so I was doubly happy to have her
coauthor this book once again. She has added some fantastic
insight and knowledge to this book; we couldn’t have done it
without her. Thanks much, Dawn!
I would also like to offer a profound thanks to Peter for
agreeing to be our coauthor on this project. Three minds are
definitely better than two, and Peter brought a new insight into
ISACA’s certifications and processes that we did not have for
the first edition. His work in rewriting, revamping,
redesigning, and rearranging the book material to make it
more closely align to ISACA’s exam requirements was a
critically needed improvement for this book to continue to be a
great reference and study guide for the exam.
Matthew Webster deserves some special thanks because, as
the technical editor, he had a difficult job, which was to make
sure we stayed reasonable and technically accurate throughout
the book. Matthew definitely contributed to the clarity,
understanding, and technical accuracy of the text. Thanks for
all your help, Matthew!
Most importantly, I would like to thank my family for
allowing me to take time away from them to write, especially
during the difficult times we live in right now. To my wife,
Barb, my children and their families, Greg, Sarah and Sara, AJ
and Audra, and my grandchildren, Sam, Ben, Evey, Emmy,
and big Caleb, and now, my first great-grandson, little Caleb: I
love all of you.
From Peter, Dawn, and Bobby:
We are so grateful for Matthew Webster’s contributions to
the completeness and quality of this book. Through his
experience in risk management and control, Matthew provided
expert commentary and many suggested changes that made the
book that much better. Thank you, Matthew!
INTRODUCTION
Experience Requirements
To qualify for CRISC certification, you must have completed
the equivalent of three years of total work experience in at
least two of the CRISC domains. Additional details on the
minimum certification requirements, substitution options, and
various examples are discussed next.
Substitution of Experience
Unlike most other ISACA certifications, CRISC offers no
experience waivers or substitutions. You are required to have
three or more years of experience in IT risk management and
IS control, as described in the preceding section.
Exam Questions
Each registrant has four hours to take the multiple-choice
question exam. There are 150 questions on the exam,
representing the four job practice areas. Each question has four
answer choices, from which you must select the single best answer.
You can skip questions and return to them later, and you can
also flag questions that you want to review later if time
permits. While you are taking your exam, the time remaining
will appear on the screen.
When you have completed the exam, you are directed to
close it. At that time, the exam will display your pass or fail
status, with a reminder that your score and passing status are
subject to review.
You will be scored on each job practice area and then
provided one final score. All scores are scaled and range
from 200 to 800; a final score of 450 or higher is required to
pass.
Exam questions are derived from a job practice analysis
study conducted by ISACA. The areas selected represent those
tasks performed in a CRISC’s day-to-day activities and the
background knowledge required to manage IT risk and
implement information systems controls. You can find more
detailed descriptions of the task and knowledge statements at
https://www.isaca.org/credentialing/crisc/crisc-exam-content-
outline.
Exam Coverage
The CRISC exam is quite broad in its scope. The exam covers
four job practice areas, as shown in Table 1.
Continuing Education
The goal of continuing professional education requirements is
to ensure that individuals maintain CRISC-related knowledge
to better manage IT risk and information systems controls.
To maintain CRISC certification, individuals must obtain
120 continuing education hours within three years, with a
minimum requirement of 20 hours per year. Each CPE hour is
to account for 50 minutes of active participation in educational
activities.
Revocation of Certification
A CRISC-certified individual may have their certification
revoked for the following reasons:
• Failure to complete the minimum number of CPEs
during the period.
• Failure to document and provide evidence of CPEs in an
audit.
• Failure to submit payment for maintenance fees.
• Failure to comply with the Code of Professional Ethics,
which can result in an investigation and ultimately
revocation of certification.
If you have received a revocation notice, you will need to
contact the ISACA Certification Department at
[email protected] for more information.
Volunteer
As a nonprofit organization, ISACA relies on volunteers to
enrich its programs and events. There are many ways to help,
and one or more of these volunteer opportunities might be
suitable for you:
• Speaking at an ISACA event Whether you do a
keynote address or a session on a specific topic,
speaking at an ISACA event is a mountaintop
experience. You can share your knowledge and
expertise on a particular topic with attendees, but you’ll
learn some things, too.
• Serving as a chapter board member Local chapters
don’t run by themselves—they rely on volunteers who
are working professionals who want to improve the lot
of other professionals in the local community. Board
members can serve in various ways, from financial
management to membership to events.
• Starting or helping a CRISC study group Whether as
a part of a local chapter or at large, consider starting or
helping a group of professionals who want to learn the
details of the CRISC job practice. We are proponents of
study groups because study group participants make the
best students: they take the initiative to take on a big
challenge to advance their careers.
• Writing an article ISACA has online and paper-based
publications with articles on a wide variety of subjects,
including current developments in security, privacy,
risk, and IT management from many perspectives. If
you have specialized knowledge on some topic, other
ISACA members can benefit from this knowledge if
you write an article.
• Participating in a credential working group ISACA
works hard to ensure that its many certifications remain
relevant and up to date. Experts around the world in
many industries give their time to ensure that ISACA
certifications remain the best in the world. ISACA
conducts online and in-person working groups to update
certification job practices, write certification exam
questions, and publish updated study guides and
practice exams. Peter contributed to the first working
group in 2013 when ISACA initially developed the
CRISC certification exam; he met many like-minded
professionals, some of whom he is still in regular and
meaningful contact with.
• Participating in ISACA CommunITy Day ISACA
organizes a global effort of local volunteering to make
the world a better, safer place for everyone. Learn about
the next CommunITy Day at
https://engage.isaca.org/communityday/.
• Writing certification exam questions ISACA needs
experienced subject matter experts who are willing to
take the time to write new certification exam questions.
ISACA has a rigorous, high-quality process for exam
questions that includes training. Who knows—you
could even be invited to an in-person workshop on
writing exam items. You can find out more about how
this works at https://www.isaca.org/credentialing/write-
an-exam-question.
You can learn about these and many other volunteer
opportunities at https://www.isaca.org/why-isaca/participate-
and-volunteer.
Please take a minute to reflect on the quality and richness
of the ISACA organization and its many world-class
certifications, publications, and events. These are all fueled by
volunteers who made ISACA into what it is today. Only
through your contribution of time and expertise will ISACA
continue in its excellence for future security, risk, privacy, and
IT professionals. And one last thing you can only experience
on your own: volunteering not only helps others but enriches
you as well. Will you consider leaving your mark and making
ISACA better than you found it?
Continue to Grow Professionally
Continuous improvement is a mindset and a lifestyle that is
built into IT service management and information security—
it’s even a formal requirement in ISO/IEC 27001! We suggest
that you periodically take stock of your career status and
aspirations, be honest with yourself, and determine what
mountain you will climb next. If needed, find a mentor who
can guide you and give you solid advice.
While this may not immediately make sense to you, know
this: helping others, whether through any of the volunteer
opportunities listed previously or in other ways, will enrich
you personally and professionally. We’re not talking about
feathers in your cap or juicy items on your résumé, but rather
the growth in character and wisdom that results from helping
and serving others, particularly when you initiated the helping
and serving.
Professional growth means different things to different
people. Whether it’s a better job title, more money, a better (or
bigger or smaller) employer, a different team, more
responsibility, or more certifications, embarking on long-term
career planning will pay dividends. Take control of your career
and your career path—this is yours to own and shape as you
will.
Summary
Becoming and being a CRISC professional is a lifestyle, not
just a one-time event. It takes motivation, skill, good
judgment, persistence, and proficiency to be a strong and
effective leader in the world of IT risk management. The
CRISC certification was designed to help you navigate the
risk management world with greater ease and confidence.
Each CRISC job practice area will be discussed in detail in
the following chapters, and additional reference material will
be presented. Not only is this information helpful in studying
prior to the exam, but it is also meant to serve as a resource
throughout your career as a risk management professional.
CHAPTER 1
Governance
In this chapter, you will:
• Understand the concepts of organizational governance
and how goals and objectives support it
• Learn about structure, roles, and responsibilities
• Analyze how organizational risk culture is facilitated
through the definition of risk appetite and risk tolerance
• Understand the concepts of enterprise risk management,
associated frameworks, and the ethics of risk
management
This chapter covers Certified in Risk and Information
Systems Control Domain 1, “Governance.” The domain
represents 26 percent of the CRISC examination.
Organizational Governance
Governance is the glue that holds all the different framing
elements of an organization together: its mission, strategy,
goals, and objectives. Governance establishes the requirements
an organization must meet and consists of both internal and
external governance. External governance comes in the form
of laws, regulations, professional and industry standards, and
other sources of requirements imposed on the organization
from the outside. Internal governance typically supports
external governance through policies, procedures, and
processes. For example, if a law imposes requirements to
protect certain sensitive data using specific controls or to a
certain standard, then policies and procedures further support
those requirements by formalizing and codifying them within
the organization. Policies and procedures can also be
independent of external governance and simply reflect the
organization’s culture, needs, and values, usually established
by its executive management. In any event, governance is the
controlling factor for the organization; governance keeps the
organization properly focused and ensures that it meets its
compulsory requirements, such as compliance with laws, as
well as performing its due diligence and due care
responsibilities.
In addition to imposing requirements on the organization,
governance also refers to the structure by which the
organization is led and regulated. Again, this could come from
external or internal sources. External sources for governance
could consist of an external board of directors or regulatory
agencies. These entities ensure that the organization is led
from the perspective of responsibility and accountability.
Internal governance comes from internal business leaders and
reflects business drivers not related to externalities. In reality,
good governance typically reflects a balance between both
internal and external business drivers.
The infrastructure framework of the organization—in the
form of strategy, goals, objectives, mission statements, and so
on—supports governance, as we will discuss further in this
chapter.
Organizational Culture
Organizational culture is the term that describes how people
treat each other and how people get things done. Many
organizations establish a set of values that define the norms of
behavior. Terms like respect, collaboration, and teamwork are
often seen in these values. Some organizations will publish
formal value statements and print them for display in lobbies,
offices, and conference rooms.
The way that an organization’s leaders treat each other and
the rest of the organization sets an example for behavioral
norms. Often, these norms reflect those formal values, but
sometimes they may differ. One could say that an
organization’s stated culture and its actual culture may vary a
little, or a lot. The degree of alignment itself is a reflection of
an organization’s culture.
An organization also has a risk culture, which is essentially
how the organization as an entity feels about and deals with
risk. This culture is developed from several sources. First, it
can come from the organization’s leadership, based on their
business and management philosophies, attitudes, education,
and experience. It can also come from the organization’s
governance. Remember that governance is essentially the rules
and regulations imposed on the organization by either external
entities (in the form of laws, for example) or internally by the
organization itself. In any case, the culture of the organization
really defines how the organization feels about risk and how it
treats risk over time. We will talk later about how two
concepts, risk tolerance and risk appetite, support the
organizational risk culture.
Business Processes
The organization’s mission is the reason that it exists, whether
this is producing goods, offering services, and so on. A shoe
manufacturer is in the business of making shoes; therefore, its
mission is to make good quality shoes. Most businesses have a
mission statement that describes their mission. While the
mission is the overall reason for existence, business processes
are the activities that carry out that mission. A shoe
manufacturer’s business processes could be as high level as
manufacturing, sales, and marketing, but even those higher-
level processes are broken down into activities, such as sewing
cloth and leather, developing types and styles of shoes, and
selling them to retailers. These are all processes that support
the business. Each of these business processes also incurs
some level of business risk. There could be risks in the
manufacturing process due to the types of materials or specific
manner of attaching the shoe soles to the rest of the shoe. A
faulty process could mean that the shoes are of lesser quality
than those of competitors or that the styles don’t meet the
demands of consumers. This, in turn, will lead to fewer sales
or more consumer returns. Regardless of whether they are
higher-level processes or more detailed activities and tasks,
each of these is supported by one or more systems or sets of
data. These systems also incur risk, but of a different nature.
The risk incurred by the systems concerns the organization’s
ability to effectively use those systems and data to support the
different business processes. Therefore, IT (or even cybersecurity) risk
directly informs business process risk since it could affect the
ability of the organization to carry out its business processes
and, in turn, its overall mission.
Business processes are “owned” by different managers
within the organization. This means that the business process
owners are responsible for both the day-to-day and long-term
operations and success of the process. This also means,
ordinarily, that they are the primary owners of the business
risk associated with the process. Of course, higher-level
management also bears ownership and responsibility for both
the process and its risk, but day-to-day ownership typically
rests with the people in charge of the business process.
Better organizations will take a formal approach with
regard to the development and management of business
processes. Documents that describe processes, roles and
responsibilities, and key assets may be formally managed with
business process change management and periodic review and
be kept in official repositories to ensure that no unauthorized
changes occur.
Organizations with higher maturity will develop metrics
and even key performance indicators (KPIs) associated with
each business process. This allows management to understand
the quantity and quality of business process output, which can
be used to determine tactical and strategic improvements that
can make processes more effective and efficient. Risk
practitioners must work hand-in-hand with business process
owners to develop these KPIs, as well as associated key risk
indicators (KRIs) and key control indicators (KCIs).
NOTE KPIs, as well as KRIs and KCIs, are covered in more
detail in Chapter 3, where we discuss risk response and
reporting as part of Domain 3.
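To make these indicators a bit more concrete, the following minimal Python sketch is our own illustration; the indicator name, values, and threshold are invented assumptions. It shows one way a risk practitioner might record a KRI for a business process and flag when it breaches the threshold agreed on with the process owner.

# Minimal sketch: a KRI with a threshold agreed on with the process owner.
from dataclasses import dataclass

@dataclass
class KeyRiskIndicator:
    name: str             # what is being measured
    current_value: float  # latest measurement
    threshold: float      # level agreed on with the business process owner

    def breached(self) -> bool:
        """Return True when the indicator exceeds its agreed threshold."""
        return self.current_value > self.threshold

# Example: percentage of orders delayed by system outages this quarter
kri = KeyRiskIndicator(name="orders delayed by outages (%)",
                       current_value=4.2, threshold=3.0)
if kri.breached():
    print(f"KRI '{kri.name}' breached: {kri.current_value} > {kri.threshold}")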
Organizational Assets
Company assets can include physical items, such as computer
and networking equipment, office machines (for example,
scanners and copiers), work facilities, and information
processing centers, as well as nonphysical items, such as
valuable data. Asset identification involves identifying both
types of assets and determining their value. Asset values must
be established beyond the mere capital costs; a true asset
valuation should consider several factors. For example, one
consideration is the cost to repair or recover the asset versus
simply replacing it outright. Often, repairing the
asset may be less expensive in the short run, but the cost of the
different components required to conduct a repair should be
considered. Also, it’s important to remember that this might
only be a temporary solution—one that could come back to
haunt you (and your pockets) in the long run.
Asset management is the collection of activities used to
oversee the inventory, classification, use, and disposal of
assets. Asset management is a foundational activity, without
which several other activities could not be effectively done,
including vulnerability management, device hardening,
incident management, data security, and some aspects of
financial management.
In information security, asset management is critical to the
success of vulnerability management. If assets are not known
to exist, they may be excluded from processes used to identify
and remediate vulnerabilities. Similarly, it will be difficult to
harden assets if their existence is not known. What’s more, if
an unknown asset is attacked, the organization may have no
way of directly realizing this in a timely manner. If an attacker
compromises an unknown device, the attack may not be
known until the attacker pivots and selects additional assets to
compromise. This time lag could prove crucial to the impact of
the incident.
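The following minimal Python sketch (our own illustration; the hostnames and classifications are invented) shows why an asset inventory is foundational: an asset that is not in the inventory simply cannot be scheduled for vulnerability scanning or hardening.

# Minimal sketch: a toy asset inventory supporting vulnerability management.
assets = {
    "web-01":   {"type": "virtual machine",  "classification": "high"},
    "db-01":    {"type": "database server",  "classification": "high"},
    "copier-3": {"type": "office equipment", "classification": "low"},
}

def in_scope_for_scanning(hostname: str) -> bool:
    """An asset can only be scanned and hardened if it is inventoried."""
    return hostname in assets

for host in ["web-01", "shadow-it-box"]:
    if in_scope_for_scanning(host):
        print(f"{host}: scheduled for scanning")
    else:
        print(f"{host}: UNKNOWN asset - cannot be managed")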
Asset Identification
A security management program’s main objective (whether
formally stated or not) is the protection of the organization’s
assets. These assets may be tangible or intangible, physical,
logical, or virtual. Here are some examples of assets:
• Buildings and property These assets include real
estate, structures, and other improvements.
• Equipment This can include machinery, vehicles, and
office equipment such as copiers, printers, and scanners.
• IT equipment This includes computers, printers,
scanners, tape libraries (the devices that create backup
tapes, not the tapes themselves), storage systems,
network devices, and phone systems.
• Virtual assets In addition to the tangible IT equipment
cited, virtual assets include virtual machines and the
software running on them.
• Supplies and materials These can include office
supplies as well as materials used in manufacturing.
• Records These include business records, such as
contracts, video surveillance tapes, visitor logs, and far
more.
• Information This includes data in software
applications, documents, e-mail messages, and files of
every kind on workstations and servers.
• Intellectual property This includes an organization’s
designs, architectures, patents, software source code,
processes, and procedures.
• Personnel In a real sense, an organization’s personnel
are the organization. Without its staff, the organization
cannot perform or sustain its processes.
• Reputation One of the intangible characteristics of an
organization, reputation is the individual and collective
opinion about an organization in the eyes of its
customers, competitors, shareholders, and the
community.
• Brand equity Similar to reputation, this is the perceived
or actual market value of an individual brand of product
or service that is produced by the organization.
Risk Governance
Risk governance has several different aspects. As mentioned
earlier, governance provides the overall requirements of what
the organization must adhere to and could include legal,
ethical, safety, and professional requirements. Risk
governance, then, describes the requirements that the
organization must adhere to in terms of managing both
business and IT risk. Laws, regulations, and other external
governance can dictate portions of risk governance, as can
internal governance, which comes in the form of policy. Risk
governance is also set forth by executive management in the
form of risk appetite and tolerance, risk strategy, and the
various policies that support risk management.
Risk governance includes adopting a risk assessment and
analysis methodology, risk treatment and response options,
risk monitoring processes, and so on. A multitude of risk
governance and management frameworks have been published
to assist in this process, which we will discuss next.
Operational Management
Operational management refers to the tactical management
activities that take place at the business unit or business
process level. At this level, risk is managed on a day-to-day
basis by monitoring control effectiveness as well as any risk
associated with the business process itself or its supporting
systems. This line of defense may be implemented by the risk
practitioner, the business process owner, or even IT and
cybersecurity personnel. This is the layer in which controls are
operated as a part of business processes and often
implemented as a part of information systems.
Risk Profile
A risk profile encompasses an overall asset, organization, or
business process under review and includes detailed
information on all aspects of those items and how they
contribute to, mitigate, or influence risk. The risk profile for a
system, for instance, would present detailed information on all
the different characteristics of the system, its security controls,
the risk assessment and analysis for the system, the risk
responses for the system, and ongoing management for that
risk. Risk profiles periodically change based on a variety of
factors, including the criticality or sensitivity of the system,
the vulnerabilities inherent to the system, the threats that may
exploit those vulnerabilities, and the mitigating security
controls that protect the system. Regulations and societal
norms also influence risk profile, particularly in areas such as
data privacy. The risk profile is used to actively manage
ongoing risk and ensure that it remains within acceptable
levels.
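As a simple illustration (ours, not a prescribed format), a risk profile entry might be represented as follows; the field names and the likelihood-times-impact scoring scale are assumptions used only to show how a profile supports checking that risk remains within acceptable levels.

# Minimal sketch: a risk profile record checked against an accepted risk level.
from dataclasses import dataclass, field

@dataclass
class RiskProfile:
    system: str
    criticality: str                  # e.g., "high", "medium", "low"
    controls: list = field(default_factory=list)
    current_risk_score: int = 0       # e.g., likelihood x impact on a 1-25 scale
    acceptable_risk_score: int = 0    # level management has agreed to accept

    def within_acceptable_level(self) -> bool:
        return self.current_risk_score <= self.acceptable_risk_score

profile = RiskProfile(system="online commerce platform", criticality="high",
                      controls=["WAF", "MFA", "daily backups"],
                      current_risk_score=12, acceptable_risk_score=9)
print("Risk within acceptable level:", profile.within_acceptable_level())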
Chapter Review
Organizational governance drives how an organization
conducts itself within legal, regulatory, and even self-imposed
constraints. Governance focuses on legal compliance, due
diligence and care, and ethical behavior. This governance can
come from both external and internal sources in the form of
laws, regulations, professional standards, and internal policies
and procedures. Risk management is often a requirement
included in governance standards.
Quick Review
• Governance guides the conduct of an organization in
terms of legal and ethical behavior.
• Governance can come from external and internal
sources.
• External governance is imposed through laws,
regulations, professional standards, risk frameworks,
and contract requirements.
• Internal governance comes from internal policies and
procedures.
• The organization’s mission, goals, and objectives must
include and support governance requirements.
• Senior management defines risk appetite and tolerance
for the organization.
• Risk capacity is the amount of loss an organization can
incur without its very existence being threatened.
• Risk appetite is the amount of risk an organization is
willing to take on in pursuit of its mission.
• Risk tolerance is the level of variance from the appetite
an organization is willing to permit for any particular
business venture.
• An organization’s mission, goals, and objectives are
typically defined at the strategic or long-term level.
• Organizational structure can affect an organization’s
ability to deal with risk in terms of its risk culture,
appetite, tolerance, and resilience.
• Organizational business units, which consist of various
business processes, must deal with both business and IT
risk at the process level.
• Business process risk contributes to the overall
organizational risk and is managed at various levels,
starting with the risk owner, who is responsible and
accountable for risk management for a particular set of
business processes or business units.
• An enterprise risk management strategy and program
must be in place to deal with lower-, middle-, and
higher-tier risks. Risk governance drives the enterprise
risk management program.
• Organizational risk culture defines how an organization
feels about and deals with risk and comes from the
organization’s leadership and governance.
• Organizational policies are internal governance that
reflects the management philosophy, culture, and
directives of senior leaders.
• Policies are intended to align with external governance.
• Organizational assets include anything of value to the
organization, including physical equipment, facilities,
human assets, and information.
• Asset management is a collection of activities used to
manage the inventory, classification, use, and disposal
of assets.
• The three lines of defense in managing enterprise risk
are operational management, risk and compliance
management, and internal audit.
• The risk profile consists of detailed information on all
aspects of an asset and how those aspects contribute to,
mitigate, or influence risk.
• The risk profile is used to manage risk for an asset.
• Governance can be imposed by laws, regulations, and
contracts, which are legally enforceable.
• Codes of conduct and ethics dictate the behavior of risk
management professionals.
Questions
1. An organization is subject to healthcare regulations that
govern the protection requirements of individual health
data. Which of the following describes this type of
governance?
A. External
B. Internal
C. Regulatory
D. Professional
2. Your company handles credit card transaction processing
as part of its business processes. Which of the following
best describes the source and type of governance it may
incur because of these business processes?
A. Internal, policies and procedures
B. External, industry standards
C. External, laws and regulations
D. Internal, industry standards
3. Your organization is structured into various departments,
and each of these has its own activities that support the
mission, goals, and objectives of the organization. These
activities are decomposed from a high level and include
various tasks to support the various business units. Which of
the following best describes these activities?
A. Operational management
B. Organizational strategy
C. Business processes
D. Risk management functions
4. An organizational culture that is averse to risk and change
is more likely to have which of the following in terms of
risk appetite and tolerance?
A. High risk appetite, low risk tolerance
B. Low risk appetite, high risk tolerance
C. Low risk appetite, low risk tolerance
D. High risk appetite, high risk tolerance
5. Which of the following roles is the lowest level of
responsibility in terms of risk ownership for a business
process or unit?
A. Organizational CEO or president
B. VP for risk management
C. Risk practitioner
D. Business process owner
6. Which of the following is the level of variation or
deviance an organization is willing to accept for a
particular business venture?
A. Risk appetite
B. Risk tolerance
C. Risk acceptance
D. Risk capacity
7. A small business unit in the production department is
incurring risk for one of its lower-level business
processes. Although this risk is focused on the business
process and not at the organizational level, it must be
accounted for in the overall organizational risk
assessment. At what level should this risk be considered?
A. Organizational risk
B. Operational risk
C. Strategic risk
D. Tactical risk
8. An organization needs to create its internal risk
management program and begins with risk governance.
Which of the following should the organization create
first?
A. Risk strategy
B. Risk management policy
C. Risk assessment procedure
D. Risk management methodology
9. Which of the following describes why the organization
exists?
A. Organizational mission statement
B. Organizational strategy
C. External governance
D. Organizational policy
10. Which of the following activities is considered
foundational, without which other management activities,
such as vulnerability management, incident management,
and data security, could not be accomplished?
A. Configuration management
B. Risk management
C. Asset management
D. Financial management
11. Which of the following is not one of the three domains
listed in ISACA’s Risk IT Framework?
A. Risk Governance
B. Risk Evaluation
C. Risk Assessment
D. Risk Response
12. Which of the following requires that a risk practitioner
perform duties professionally, with due diligence and
care, as required by professional standards?
A. Risk IT Framework
B. NIST Risk Management Framework
C. ISACA Code of Professional Ethics
D. Laws and regulations
13. A risk manager is analyzing a risk item and intends to
recommend risk acceptance as the treatment. Which of the
following must the risk manager
consider to make this determination?
A. Risk awareness
B. Risk capacity
C. Risk tolerance
D. Risk appetite
14. The organization’s legal counsel is considering the
prospect of real estate prices, including office space
leasing costs, increasing significantly in the next five
years. What level of risk management should be used to
manage this risk?
A. Asset
B. ERM
C. Program
D. Market
15. All of the following are factors that influence an
organization’s culture except which one?
A. Published values
B. Policies
C. Leadership behavior
D. Behavioral norms
Answers
1. A. Laws and regulations are a form of external
governance.
2. B. This describes an external source of governance in the
form of industry standards, specifically the Payment Card
Industry Data Security Standard (PCI DSS).
3. C. Business processes are the activities that support the
organizational mission and can be broad or broken down
into detailed activities and tasks.
4. C. An organization that is risk averse likely has both a
low risk appetite and low risk tolerance.
5. D. Although various levels of organizational management
may be responsible and accountable for risk, the lowest
level of risk ownership for a business process or unit is
the business process owner.
6. B. Risk tolerance is the amount of variation or deviation
an organization is willing to accept from its risk appetite
for a particular business venture.
7. D. Tactical risks are those encountered by smaller
production sections—those that carry out the day-to-day
work of the organization. A lower-level business process
may be considered to have a tactical risk.
8. A. The risk strategy for the organization should be
created first because it provides long-term direction for
the risk management program. The risk management
strategy also supports external governance. Policies and
procedures are created afterward to support the strategy
as well as implement external governance.
9. A. The organizational mission statement is developed by
the organization to describe its overall mission, which is
its very reason for existence.
10. C. Asset management is one of the foundational activities
on which other activities depend (vulnerability
management, configuration management, and so on)
because assets must be inventoried and accounted for
first.
11. C. Risk Assessment is not one of the domains covered in
the Risk IT Framework. The three domains are Risk
Governance, Risk Evaluation, and Risk Response.
12. C. The ISACA Code of Professional Ethics establishes
requirements for ethical conduct and behavior that all
certified professionals must adhere to, including the
requirement to perform all duties professionally, with due
diligence and care, as required by professional standards.
13. C. Risk tolerance determines the amount of risk an
organization will accept for any individual risk situation.
14. B. A risk matter such as macroeconomic risk generally
will be managed by an organization’s enterprise risk
management (ERM) program.
15. B. Of the available choices, an organization’s policies are
the least likely to influence an organization’s culture.
CHAPTER 2
IT Risk Assessment
In this chapter, you will:
• Understand the role of risk assessments and risk
analysis in the risk management life cycle
• Learn about the techniques used to identify various
types of risks
• Become familiar with risk identification and risk
analysis techniques such as threat modeling,
vulnerability analysis, control deficiency analysis, root
cause analysis, and risk scenario development
• Understand the concepts and steps taken in a risk
assessment
• Be familiar with the role and structure of a risk register
• Understand the concepts of inherent risk and residual
risk
This chapter covers Certified in Risk and Information
Systems Control Domain 2, “IT Risk Assessment.” The
domain represents 20 percent of the CRISC examination.
IT Risk Identification
IT risk identification comprises the activities performed to
identify various types of risks associated with an
organization’s use of information technology. The types of IT
risks that can be identified fall into these categories:
• Strategic This category represents broad themes,
including whether IT is aligned to the business and
providing services required by the business, and
whether IT can support and respond to changes in
business strategy.
• Operational This category represents many issues,
including IT service management, service quality,
production schedule, resilience, and tactical matters
such as scheduling and availability.
• Cybersecurity This represents issues related to the
ability of internal and external threat actors to
compromise information systems or the information
stored in them.
• Privacy This represents both cybersecurity-related risks
as well as business practices related to the handling of
personal information.
• Supply chain This includes the availability of products
such as laptops, servers, and components, plus services
like tech support, and service providers, including SaaS,
PaaS, and IaaS. Supply chain itself has both operational
and cybersecurity-related risks.
• Compliance This represents IT-related activities that an
organization may be required to perform due to a
regulation, policy, or contract terms.
Security professionals need to train themselves to recognize
new risks. While at times they may be bold and obvious,
sometimes they will be subtle and easily missed for what they
are. Often, significant risks are identified when they are least
expected. In many organizations, IT risks are not apparent
until a threat event occurs. The activities that may lead to the
realization of new risks include the following:
• Audits An audit may identify a deficiency in a business
process or system that warrants more than remediation-
and-forget-it treatment. Often, this requires that a
security analyst or risk manager study the audit
deficiency to understand its root cause. Rather than
think of the deficiency as a one-time (or repeated)
mistake, perhaps the design of the control is the cause,
or something even broader such as the overall design of
a system or set of business processes. Where there’s
smoke, there’s fire—sometimes.
• Security and privacy incidents A security or privacy
incident may reveal a latent problem that has been
waiting to be discovered. The risk may not be directly
related to the incident or its cause. It could be
discovered by accident while an incident responder or
security analyst investigates an incident more deeply.
• Penetration tests While penetration tests can identify
specific vulnerabilities, sometimes it’s essential to read
between the lines and understand the overall theme or
meaning of the penetration test results.
• News and social media articles Information about
breaches, cybercriminal organizations, new tools and
innovations, and other cybersecurity topics can spur a
risk manager to identify a risk inside their organization.
We have all experienced this in our careers; an example
that comes to mind is this: news of a critical website
vulnerability at a competitor’s organization was widely
reported. Our management contacted us to see whether
we had the same or a similar exposure.
• Security advisories Advisories on vulnerabilities and
active attacks draw attention to specific actions and
technologies. A formal cyberthreat intelligence or attack
surface reduction program will subscribe to and
consume all such advisories to determine whether the
organization is vulnerable and then take action if
needed. We consider these advisories to be more tactical
than we would include in overall risk identification.
However, sometimes an advisory will help us realize
that something systemic isn’t quite right in our
organization and that we want to capture that idea while
it’s on our minds.
• Networking Discussions about information technology,
security, or privacy with professionals in other
organizations can trigger the realization of a risk in
one’s own organization. We do caution you on this
point: a risk in one organization, even when similar
practices or technologies are in play, does not
necessarily mean that another organization has the same
risk.
• Whistleblowers A disgruntled or concerned employee
may choose to remain anonymous and disclose
information about a condition or practice that may
represent a genuine risk to the organization.
• Passive observation During the course of work, a risk
manager or other cybersecurity staff may realize a
heretofore undiscovered risk. Sometimes a risk can lie
before us for an extended period of time until some
conversation or thought triggers a new realization.
• Threat modeling This is a particular type of risk
assessment, typically focused on an individual
information system or business process, to identify
likely threat scenarios.
• Secure configuration assessments Configurations that
are insecure or out of compliance can be a source of
critical risks. These may be deviations from configuration
baselines such as those published by the Center for Internet
Security, or weak basic settings such as password strength
or length.
• Governance assessments Governance can have a
significant impact on how well processes run. For example,
if there are no status checks on systems, the systems may
not be configured properly, thus increasing the risk to
the organization.
• Data governance risks For some organizations, how
data is used or shared can be very risky. For example, if
a contract explicitly forbids the sharing of data, but data
is being shared anyway, there can be legal
repercussions. Reviewing the data governance process
is an important part of protecting the data.
• Privacy assessments For some companies, privacy is
becoming a greater concern due to expanding regulation.
Not being aware of applicable laws can jeopardize the
privacy of data and lead to fines and lawsuits.
• Risk-aware culture The extent to which leaders,
management, and staff in an organization are trained to
recognize risk shapes outcomes: better training leads to
more effective security and privacy by design and a
willingness to accept that risks are always present and
can be managed. Security awareness training should
include risk awareness.
• Risk assessments This is the wide gate through which
many risks will be identified and analyzed. We’re
putting this last on the list for a few reasons: risk
assessments generally have a specified scope and are
aligned to a specific framework of controls, so risk
analysts are, by design, looking only for particular types
of risks in specific places. These blinders can sometimes
prevent someone from seeing certain types of risks, or
risks in specific contexts or places.
Our hope here is that you will realize the discovery of risk
can occur in many ways. The best tool the risk manager can
obtain is an open mind, which is free to explore the
possibilities of IT risk in the organization.
Risk Events
Risk events are the specific scenarios that, if they occurred,
would inflict some impact on an organization’s ongoing
operations. A risk event includes a description of one or more
threat scenarios, the identification of the types of threat actors,
statements of vulnerabilities that could enable a threat scenario
to occur, the value of relevant assets, and the probability of
occurrence.
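As an illustration only, the following sketch captures the elements of a risk event described here and derives a simple expected loss from asset value and probability of occurrence. The single-event loss formula and all figures are simplifying assumptions for the example, not a formula prescribed by the CRISC job practice.

# Minimal sketch: a risk event record with a simple expected-loss estimate.
from dataclasses import dataclass

@dataclass
class RiskEvent:
    threat_scenario: str
    threat_actor: str
    vulnerability: str
    asset_value: float   # monetary value of the affected asset
    probability: float   # probability of occurrence in a given year (0.0-1.0)

    def expected_loss(self) -> float:
        return self.asset_value * self.probability

event = RiskEvent(
    threat_scenario="ransomware encrypts the order-processing database",
    threat_actor="organized cybercriminal group",
    vulnerability="unpatched remote-access service",
    asset_value=250_000.0,
    probability=0.10,
)
print(f"Expected annual loss: ${event.expected_loss():,.0f}")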
Threat Modeling
Threat modeling is a risk identification technique that involves
examining every possible threat agent, action or event, attack
vector, and vulnerability for a given system, asset, or process,
and then modeling or simulating how it could progress and the
damage that could occur. Threat modeling has its origins in the
U.S. and U.K. defense industries. A threat assessment
examines how these threats could affect the particular asset,
organization, or system you are looking at, in context, and it
can be done on one of several levels, either simultaneously or
separately. For example, you could perform a threat
assessment on a new system being developed or installed. You
would examine the different threat agents and threats that
could affect that particular system. You could also look at an
organizational process, or even the entire organization itself,
and perform a comprehensive threat assessment. In addition to
scaling a threat assessment, as we just described, you could
look at threats from a particular perspective, performing a
threat assessment specifically for technical threats, physical
ones, or even external operating environment threats. This
relates to the scope of the assessment. For example, you could
look at the technical aspects of a given system or examine
threats inherent to the processes and procedures associated
with a particular business unit. Figure 2-3 shows this scoping
and scaling process.
Figure 2-3 Scoping and scaling a threat assessment
As this figure illustrates, you could scale an assessment to
include only specific systems within some functional regions.
You could scope it by examining not only all the threats that
apply to the technical aspects of systems in those particular
functional areas but also threats that affect associated business
or management processes. You could have any combination of
scope and scale that the organization needs to fulfill its
assessment goals.
Threat landscape is a term that connotes a visual
metaphor for considering all the threats that have been
examined and identified as relevant to a given system or
process. A risk manager might be overheard saying, “The
threat landscape for our online commerce system is
particularly hostile,” meaning there are many significant
threats that need to be considered as controls and defenses are
designed.
Table 2-1 illustrates how threat agents and threats could be
categorized, given factors such as intent, relationships, skills,
and so on. Note that these could affect any number or
combination of elements (processes, systems, and so on)
within an organization. You could categorize threats and threat
agents using these characteristics and develop others that
frame and define threats and threat agents for your
organization.
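As a simple illustration of this kind of categorization (the entries and attributes below are our own inventions, not a reproduction of Table 2-1), threat agents could be recorded and then filtered when scoping a threat assessment.

# Minimal sketch: threat agents categorized by intent, relationship, and skill.
threat_agents = [
    {"agent": "disgruntled insider", "intent": "deliberate", "relationship": "internal",      "skill": "moderate"},
    {"agent": "cybercriminal group", "intent": "deliberate", "relationship": "external",      "skill": "high"},
    {"agent": "careless employee",   "intent": "accidental", "relationship": "internal",      "skill": "low"},
    {"agent": "severe weather",      "intent": "none",       "relationship": "environmental", "skill": "n/a"},
]

# Filter to the agents most relevant when scoping an assessment of an
# internal business process.
internal = [a for a in threat_agents if a["relationship"] == "internal"]
for a in internal:
    print(a["agent"], "-", a["intent"], "intent,", a["skill"], "skill")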
Vulnerability Analysis
Vulnerability analysis is a detailed examination of
vulnerabilities that may exist or vulnerabilities that have been
determined to exist. A vulnerability assessment can give you
information about the weaknesses inherent to one or more
assets. The vulnerability assessment might include examining
all aspects of an asset or capability to determine any
vulnerability it has, including the physical, technical, and
operational vulnerabilities inherent to the asset. For example,
technical vulnerabilities can be related to configuration issues
or lack of hardening on a network device. Physical
vulnerabilities could relate to the lack of physical protection
for the network device. Operational vulnerabilities might
include the lack of backup processes or configuration control
exercised for the device.
Control Gaps
A control gap is a situation where controls either do not exist
or are not sufficient to provide adequate protection for an asset
to reduce either impact or likelihood (and thereby risk) to an
acceptable level. In this situation, after the lack of existence or
effectiveness of the control is established, the organization
must remediate the problem. We won’t go into much depth on
remediation here; that’s the subject of the Risk Response and
Reporting domain, which you’ll read about in Chapter 3.
However, during your analysis, you must identify control gaps
and make recommendations for possible response options that
would close those gaps. You may identify gaps in physical,
technical, or administrative controls, even when other types of
controls perform some security and protection functions. For
example, suppose an organization implements account
management processes, such as creating accounts in a
standardized fashion, verifying identities before user accounts
are created, and so on. In that case, the organization has
operational and technical controls in place that help provide
secure account management. However, if the organization has
no written policy that requires this process and dictates that it
be performed consistently, how can the organization be
assured that these procedures will always be followed
uniformly and in a secure manner? If there is no policy, the
organization leaves it up to individual account operators to
perform this function as they want, regardless of how “most
people do it.” In this case, a control gap would be that,
although the organization is applying some level of control
over this process, there is no written policy ensuring that it is
standardized.
Control Recommendations
Once you’ve identified existing controls, determined how
effective they are, and identified gaps between the current and
desired end states of controls, you should be able to make
sound, risk-based recommendations to senior managers on
how controls should be supplemented, added, or changed.
Recommending controls is a risk-based process; remember
that there is a risk you are trying to minimize, but you must
balance risk reduction against control functionality and
economy of resources. You have several considerations to keep in mind
when making control recommendations:
• What more could be done to supplement or change
existing controls to close the gap between current and
desired end states of risk?
• Should new or different controls be applied to replace
existing ones?
• What are the costs involved in applying additional
controls, and do they exceed the value of the asset or the
costs to replace it?
• Would additional controls require retooling or
significantly changing the infrastructure, processes, or
asset itself?
• Would additional controls require new personnel or
training for current personnel?
• Can additional controls be implemented that also reduce
risks in other areas (economy of use)?
The answers to at least some of these questions will determine
how you make the recommendations to management about
closing the “control gap.” Both the cost of implementation and
the return on investment (ROI) received from implementing
new controls or strengthening existing ones will affect
management’s acceptance of the recommendations. The asset
value, or the impact if the asset were significantly damaged or
lost, will also affect how well the recommendation is received
(a brief cost-benefit sketch illustrating this trade-off follows
the list below). Implementing a control that costs more than
the impact of losing the asset is not usually an economically
sound decision. Organizational levels of risk
appetite and tolerance will also affect control
recommendations since any new or additional control must
reduce risk enough to fall within those levels. In general, when
recommending ways to close the control gap, whether it is by
introducing new controls, modifying existing ones, or
modifying the infrastructure to reduce likelihood or impact,
you should do the following:
• Try to leverage existing controls that can be used across
the board to include additional assets.
• Look for “quick wins” first—controls that are quickly
and inexpensively implemented (such as policies,
training, procedure changes, and so on).
• Prioritize control recommendations by risk; the
greater the risk, the more attention its control gaps
deserve.
• Be realistic in your recommendations: You can’t reduce
risk to absolute zero, you don’t have all the resources
you would like at your disposal, and you don’t have to
offer a 100 percent solution. Sometimes a 70 percent
solution is better than a 0 percent solution.
• Provide alternatives to your primary recommendation
for each control gap. Give management alternatives
based on cost, level of risk reduction, and effectiveness.
Types of Assessments
Several types of assessments can be performed to determine
the presence of risks, threats, or vulnerabilities in a system or
process. Some of the techniques discussed here involve
different ways of thinking about risk, while others employ
manual or automated tools to examine information of some
type.
• Risk assessment An assessment technique used to
identify and classify various risks associated with
systems or processes.
• Gap assessment An assessment of processes and/or
systems to determine their compliance with policies,
standards, or requirements.
• Threat modeling Also known as a threat assessment,
this is a threat-centric technique used to identify specific
threat scenarios that may occur. Threat modeling is
discussed in more detail earlier in this chapter.
• Vulnerability assessment An assessment used to
identify specific vulnerabilities that may be present in
processes and/or systems.
• Maturity assessment An assessment of processes or
capabilities to determine their maturity, generally
according to a maturity standard such as the Capability
Maturity Model Integration (CMMI) or the NIST
Cybersecurity Framework (CSF).
• Penetration test This is an examination of systems,
networks, or applications that identifies exploitable
vulnerabilities. Tools used in a penetration test include
port scanners, sniffers, protocol analyzers, fuzzers,
password crackers, and specialized tools to identify
specific vulnerabilities. A penetration test not only
identifies but also validates a vulnerability by exploiting
it—or demonstrating how it can be exploited if actual
exploitation would bring harm to the asset (important
when pen testing a production system).
• Data discovery A manual or automated activity in
which tools or techniques are used to examine the
contents of a target system to determine the presence of
specific types of data. Data discovery may also involve
an examination of access rights to specific types of data.
• Architecture and design review A manual activity in
which the architecture or design of a process or system
is examined to identify potential weaknesses.
• Code review A manual activity in which software
source code is examined for logic errors and security
vulnerabilities.
• Code scan An automated activity in which a code-
scanning program or tool examines software source
code to identify vulnerabilities and other defects.
• Audit A formal inspection of a control or process to
determine whether it is being followed and meets its
objectives. Audits are discussed briefly in this book and
fully in CISA Certified Information Systems Auditor All-
In-One Exam Guide.
Scope of an Assessment
Before an assessment can begin, its scope needs to be
determined. By this, we mean that the systems, processes,
locations, teams, and/or time periods that are the focus of the
assessment need to be formally established and
communicated.
The opposing forces that influence scope are the desire to
make it large enough to identify hard-to-find risks and the
need to keep it small enough to be completed efficiently in a
reasonable period. A
balance must be reached that enables the assessment to
succeed.
OCTAVE Methodology
As part of a U.S. government contract with Carnegie Mellon
University, the Operationally Critical Threat, Asset, and
Vulnerability Evaluation (OCTAVE) methodology was
developed to assist organizations in identifying and assessing
information security risk. The methodology was initially
developed by the Software Engineering Institute (SEI) at
Carnegie Mellon University in 1999 and has been updated
over the years; its current version, OCTAVE Allegro, was
released in 2007. OCTAVE uses a concept called workshops, made up of
members of an organization, sometimes including outside
facilitators with risk management and assessment expertise.
The methodology has prescribed procedures, including
worksheets and information catalogs to assist organizational
members, drawn from all functional areas of an organization,
to frame and assess risk based on internal organizational
context (infrastructure, governance, business environment, and
so on). OCTAVE has three iterations: the original OCTAVE,
OCTAVE-S, and OCTAVE Allegro.
OCTAVE and OCTAVE-S The original OCTAVE
methodology was released in 1999 and was designed for
organizations with at least 300 employees. The assumptions
for using OCTAVE included that the organization operates and
maintains a multitiered information infrastructure and performs
vulnerability assessments. While OCTAVE is flexible and
allows an organization to tailor its use based on its own
assessment needs, the methodology is performed as three
sequential phases. The first phase covers identifying assets,
threats, protection practices, vulnerabilities, and security
requirements. You can see where this maps to the practices
we’ve described throughout this chapter as determining
threats, identifying assets and their vulnerabilities, and
evaluating existing controls. In the second phase, the members
of the workshop perform assessment activities and evaluate
the infrastructure. In the third phase, the workshop members
develop response strategies for the identified risks.
OCTAVE-S is fundamentally the same as the original
OCTAVE, with a few minor differences. First, OCTAVE-S
was designed to assess smaller organizations, usually with
fewer than 100 people. Second, this iteration relies on internal
personnel with extensive knowledge of the organization and
its infrastructure rather than on outside risk experts or
facilitators. The workshop team is also considerably smaller
and may rely on fewer than ten key experts in the
organization. Finally, because no outside security experts or
risk facilitators assist the team, OCTAVE-S processes and
procedure documents were written to include more detailed
security-related information.
OCTAVE Allegro The OCTAVE Allegro methodology,
introduced in 2007, expands the previous two iterations but
does not necessarily replace them. It includes a more business-
centered and operational risk approach to the assessment. It is
also asset-centered in its approach; it focuses on assets and
their use or misuse, the environment they are used in, and how
threats and vulnerabilities specifically target and affect the
assets. Allegro can be used on a scaled basis; either individuals
or workshop teams can use this method without requiring a
high degree of risk management or assessment knowledge and
experience. OCTAVE Allegro is divided into four phases,
which include eight steps. These phases and steps are
described next.
Phase 1: Establish Drivers Step 1: Establish risk
measurement criteria. This step serves to develop and
establish the organization’s methodology and measurement
criteria used to assess risk. OCTAVE uses a qualitative
assessment methodology and measurements but can use
quantitative methods for certain aspects of the overall process,
such as determining likelihood and impact.
Phase 2: Profile Assets Step 2: Develop an information asset
profile. In this step, the organization develops an asset profile,
which is a collection of information that describes the asset,
including its characteristics, priority, impact on the
organization, and value. This profile also contains security
requirements the asset may have.
Step 3: Identify information asset containers. The
information asset container describes how data is stored,
where it is processed, and how it is transported. Containers are
usually networks and systems, including those that the
organization directly operates and those that it outsources.
Phase 3: Identify Threats Step 4: Identify areas of concern. In
this step, potential risk factors are identified and are used to
develop threat scenarios.
Step 5: Identify threat scenarios. In the context of
OCTAVE, threat scenarios describe categories of actors and
the relevant threats in each category. The scenarios are
typically identified by using a threat tree, which simply maps
actors and scenarios.
Phase 4: Identify and Mitigate Risks
Step 6: Identify risks. In this step, the impact and
likelihood of each threat scenario are identified and measured,
which together inform the risk.
Step 7: Analyze risks. Once impact and likelihood have
been identified and measured, risks are analyzed. During this
step, risk scores are developed. These come from developing
impact and likelihood values, emphasizing impact in
particular, since this is an asset-driven assessment.
Step 8: Select mitigation approach. In this step, risk
response strategies are developed, analyzed, and
recommended.
ISO/IEC Standards
ISO stands for the International Organization for
Standardization, and IEC stands for the International
Electrotechnical Commission. Together, these two
organizations are responsible for a plethora of information
technology and manufacturing standards used worldwide,
including several that apply to information security and risk
management.
ISO/IEC 27005:2018 Standards in the ISO/IEC 27000 series
are all about information security, and the 27005:2018
standard (“Information technology – Security techniques –
Information security risk management”) supports the common
ideas promulgated in both ISO/IEC 27001 and 27002. The
standard defines the entire risk management life cycle,
including detailed risk assessment principles. The standard
doesn't offer specific risk management or assessment
methodologies; however, it does describe a formalized,
structured risk management process, including developing
context, scope, methods, and so on. It also describes the two
primary types of assessment methodologies: qualitative and
quantitative. Additionally, it describes processes used to
develop risk response strategies, communicate with
stakeholders, and monitor risk continuously.
ISO/IEC 31010:2019 ISO/IEC 31010:2019 is where the meat
of the ISO/IEC guidance on risk assessment is located.
This document, "Risk Management – Risk
Assessment Techniques,” articulates and extends the risk
management principles of ISO/IEC 31000:2018 and provides a
concrete overview of the risk assessment process. This
standard describes risk assessment as the combination of risk
identification, analysis, and evaluation, with a precursor step
of establishing the risk context within the organization. The
standard also addresses communication and consultation with
various stakeholders and management, risk treatment
(response), and monitoring risk.
Risk Ranking
When risks have been identified in a risk assessment, it
generally makes sense to display these risks in an ordered list.
Most often, the highest risks appear at the top of the list. For
instance, risks with the most significant impact may appear
first, or a composite risk score may be calculated for each risk
(based on probability, impact, and asset value) and the risk
items sorted by this overall calculation. Risk ranking, then, helps tell the
story of the most critical risks that have been identified.
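To make the idea of a composite score concrete, here is a minimal, illustrative Python sketch (not part of any prescribed CRISC method); the risk names, values, and the simple probability-times-impact-times-asset-value formula are assumptions for demonstration only.

# Illustrative ranking of hypothetical risks by a composite score.
# The scoring formula is an assumption for demonstration purposes.
risks = [
    {"name": "Unpatched web server", "probability": 0.3, "impact": 8, "asset_value": 7},
    {"name": "Lost unencrypted laptop", "probability": 0.1, "impact": 9, "asset_value": 6},
    {"name": "Data center power outage", "probability": 0.05, "impact": 10, "asset_value": 9},
]

for risk in risks:
    risk["score"] = risk["probability"] * risk["impact"] * risk["asset_value"]

# Highest-scoring (most critical) risks appear first, as in a ranked list.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{risk["name"]}: {risk["score"]:.2f}')

In practice, the scoring formula and the attributes being ranked come from whatever assessment methodology the organization has adopted.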
Risk ranking can be portrayed visually, which helps the
reader better understand the universe of risks that have been
identified. Figure 2-8 shows a risk map that visually indicates
the nature and severity of risks.
Risk Ownership
To correctly manage and act on unacceptable risks, each risk is
assigned a risk owner. Generally, the risk owner is the
department head or business unit leader associated with the
business process that is the focus of a risk. It is usually not
appropriate to assign risk ownership to IT, as IT is the steward
of information and information systems and provides services
as directed by department heads and business unit leaders.
Similarly, the organization’s cybersecurity leader (often a
CISO) is not the assignee of risks. Instead, the cybersecurity
leader facilitates risk conversations and decisions with
department heads and business unit leaders, particularly those
whose business processes are associated with individual risks.
In risk management, there are usually several different
“owners” associated with each risk, including the following:
• Risk owner This is usually the department head or
business unit leader associated with the business process
where the risk resides. Even if a risk is related to an IT
system, IT is generally not the risk owner. Rarely, if
ever, is the cybersecurity leader assigned as a risk
owner.
• Risk treatment decision-maker This person makes a
business decision regarding the disposition (treatment)
of an individual risk. Risk treatment is discussed fully in
Chapter 3.
• Risk remediation owner This is the person responsible
for any remediation that has been selected as a part of
risk treatment.
An organization’s risk management program charter (or
other formal document) should define the roles and
responsibilities of each type of risk owner.
These and other facets of risk ownership are generally
indicated in the risk register, which is discussed next.
Risk Register
A risk register is a business record containing information
about business risks and their origin, potential impact, affected
assets, probability of occurrence, and treatment. A risk register
is the central business record in an organization’s risk
management program (the set of activities used to identify and
treat risks).
Together with other records, the risk register serves as the
focal point for the information an organization uses to manage
risk. Other records include information about risk treatment
decisions and approvals, the tracking of projects linked to
risks, risk assessments, and other activities that contribute to
the risk register.
When new risks are identified, they are added to the risk
register so that they can later be analyzed and discussed and
decisions can be made about their disposition.
A risk register can be stored in a spreadsheet, database, or
within an integrated risk management (IRM) tool—formerly
known as a governance, risk, and compliance (GRC) tool—
used to manage risk and other activities in the security
program. Table 2-4 shows a typical risk register entry.
Table 2-4 Sample Risk Register Data Structure
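As a rough illustration only, a single risk register entry could be modeled as a simple record such as the following Python sketch; the field names are assumptions based on the attributes described above and are not the actual columns of Table 2-4 or of any particular IRM/GRC tool.

# Illustrative sketch of a risk register entry; field names are assumed.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskRegisterEntry:
    risk_id: str
    title: str
    description: str
    origin: str                  # e.g., "risk assessment", "audit", "incident"
    affected_assets: List[str] = field(default_factory=list)
    probability: str = "Medium"  # qualitative rating
    impact: str = "Medium"       # qualitative rating
    risk_owner: str = ""         # department head or business unit leader
    treatment: str = "Pending"   # accept, mitigate, transfer, or avoid
    status: str = "Open"

entry = RiskRegisterEntry(
    risk_id="R-042",
    title="Unencrypted laptops",
    description="Laptops lack whole-disk encryption; loss or theft could expose data.",
    origin="risk assessment",
    affected_assets=["Employee laptops"],
    probability="Medium",
    impact="High",
    risk_owner="Business unit leader",
)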
EXAM TIP You will not have to know any particular risk
analysis method in detail for the exam, but you should
understand how the qualitative, quantitative, and
semiquantitative methods work.
Figure 2-13 BIA sample intake form for gathering data about
key processes
Typically, the information collected on intake forms is
transferred to a multi-columned spreadsheet, where
information on all of the organization’s in-scope processes can
be viewed together. This will become even more useful in
subsequent phases of the BCP project, such as the criticality
analysis.
Statements of Impact
When processes and systems are being inventoried and
cataloged, it is also vitally important to obtain one or more
statements of impact for each process and system. A statement
of impact is a qualitative or quantitative description of the
impact on the business if the process or system were
incapacitated for a time.
For IT systems, you might capture the number of users and
the names of departments or functions affected by the
unavailability of a specific IT system. Include the geography
of affected users and functions if that is appropriate. Here are
some sample statements of impact for IT systems:
• Three thousand users in France and Italy will be unable
to access customer records, resulting in degraded
customer service.
• All users in North America will be unable to read or
send e-mail, resulting in productivity slowdowns and
delays in some business processes.
Statements of impact for business processes might cite the
business functions that would be affected. Here are some
sample statements of impact:
• Accounts payable and accounts receivable functions
will be unable to process, impacting the availability of
services and supplies and reducing revenue.
• The legal department will be unable to access contracts
and addendums, resulting in lost or delayed revenue.
Statements of impact for revenue-generating and revenue-
supporting business functions could quantify financial impact
per unit of time (be sure to use the same unit of time for all
functions so they can easily be compared with one another; a
quick normalization sketch follows the examples below). Here are
some examples:
• Inability to place orders for materials will cost at the
rate of $12,000 per hour.
• Delays in payments will cost $45,000 per day in interest
charges and late fees.
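As a quick, hypothetical illustration of normalizing to a single unit of time, the following Python sketch converts the per-day figure above to a per-hour figure so it can be compared directly with the per-hour example; the 24-hour day is an assumption, and an organization might instead use business hours.

# Normalize financial impact statements to a common unit ($ per hour).
# Assumes a 24-hour day; an organization might use an 8-hour business day.
impacts = {
    "Inability to place orders for materials": (12_000, "hour"),
    "Delays in payments": (45_000, "day"),
}

HOURS_PER_DAY = 24

for function, (amount, unit) in impacts.items():
    per_hour = amount if unit == "hour" else amount / HOURS_PER_DAY
    print(f"{function}: ${per_hour:,.0f} per hour")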
As statements of impact are gathered, it might make sense
to create several columns in the main worksheet so that like
units (names of functions, numbers of users, financial figures)
can be sorted and ranked later.
When the BIA is completed, you’ll have the following
information about each process and system:
• Name of the process or system
• Who is responsible for its operation
• A description of its function
• Dependencies on systems
• Dependencies on suppliers
• Dependencies on key employees
• Quantified statements of impact in terms of revenue,
users affected, and/or functions impacted
Criticality Analysis
When all of the BIA information has been collected and
charted, the criticality analysis (CA) can be performed.
Criticality analysis is a study of each system and process, a
consideration of the impact on the organization if it is
incapacitated, the likelihood of incapacitation, and the
estimated cost of mitigating the risk or impact of
incapacitation. In other words, it’s a somewhat special type of
risk analysis that focuses on key processes and systems.
The criticality analysis needs to include, or reference, a
threat analysis. A threat analysis is a risk analysis that
identifies every threat that has a reasonable probability of
occurrence, plus one or more mitigating controls or
compensating controls, and new probabilities of occurrence
with those mitigating/compensating controls in place. To give
you an idea of what this looks like, refer to Table 2-6, which
provides a lightweight example of what we’re talking about.
Table 2-6 Sample Threat Analysis for Identifying Threats and
Controls for Critical Systems and Processes
In the preceding threat analysis, notice the following:
• Multiple threats are listed for a single asset. In the
preceding example, we mentioned just eight threats. For
all the threats but one, we listed only a single mitigating
control. For the extended power outage threat, we listed
two mitigating controls.
• The cost of downtime wasn't listed. For systems or
processes where you have a cost per unit of time for
downtime, you'll need to include it here, along with
some calculations to show the payback for each control
(a simple sketch of such a calculation follows this list).
• Some mitigating controls can benefit more than one
system. That may not have been obvious in this
example. However, in the case of an uninterruptible
power supply (UPS) and an electric generator, many
systems can benefit, so the cost for these mitigating
controls can be allocated across many systems, thereby
lowering the cost for each system. Another example is a
high-availability storage area network (SAN) located in
two different geographic areas; although it’s initially
expensive, many applications can use the SAN for
storage, and all will benefit from replication to the
counterpart storage system.
• Threat probabilities are arbitrary. In Table 2-6, the
probabilities are for a single occurrence in an entire year
(for example, 5 percent means the threat will be realized
once every 20 years).
• The length of the outage was not included. You may
need to include this also, particularly if you are
quantifying downtime per hour or another unit of time.
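To make the probability and payback ideas concrete, here is a minimal, hypothetical Python sketch; the dollar figures, probabilities, and the annualized-loss arithmetic are invented for illustration and represent a common simplification rather than a prescribed method.

# Hypothetical payback calculation for a mitigating control.
# A 5% annual probability corresponds to roughly one occurrence every 20 years.
annual_probability = 0.05          # chance the threat is realized in a given year
print(f"Expected once every {1 / annual_probability:.0f} years")

outage_hours = 8                   # assumed length of the outage
cost_per_hour = 12_000             # assumed cost of downtime
single_loss = outage_hours * cost_per_hour

# Annualized loss expectancy before and after the control.
ale_before = annual_probability * single_loss
residual_probability = 0.01        # assumed probability with the control in place
ale_after = residual_probability * single_loss

annual_control_cost = 3_000        # assumed annualized cost of the control
savings = ale_before - ale_after
print(f"ALE before: ${ale_before:,.0f}, ALE after: ${ale_after:,.0f}")
print(f"Annual savings vs. control cost: ${savings:,.0f} vs. ${annual_control_cost:,.0f}")

If the annual savings exceed the annualized cost of the control, the control pays for itself; otherwise, accepting the risk may be the more economical choice.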
It is probably becoming evident that a threat analysis and
the corresponding criticality analysis can get complicated. The
rule here should be as follows: the complexity of the threat and
criticality analyses should be proportional to the value of the
assets (or revenue, or both). For example, in a company where
application downtime is measured in thousands of dollars per
minute, it’s probably worth taking a few weeks or even months
to work out all of the likely scenarios and a variety of
mitigating controls and then work out which ones are the most
cost-effective. On the other hand, for a system or business
process where the impact of an outage is far less costly, a good
deal less time might be spent on the supporting threat and
criticality analysis.
Inherent Risk
No personal or business activity is entirely free of risk, and
some activities are inherently more risky than others. The
concept of inherent risk expresses the level of risk associated
with a process or activity before applying any protections or
safeguards. In the vernacular, some activities are just riskier
than others.
Take, for example, two personal activities: gardening and
hang gliding. We would state that the inherent risk associated
with hang gliding is significantly higher than that of
gardening. With hang gliding, any number of realistic
scenarios can result in the death of the hang glider. Gardening,
by contrast, is relatively safe, and death by gardening
is highly unlikely. In both gardening and hang gliding, the
probability of risk realization is relatively low, although the
impact of risk realization is decidedly higher for hang gliding.
One approach to risk analysis is assigning inherent risk to a
process or system, and then calculating changes to risk after
protective controls and safeguards are applied. This helps a
risk analyst understand the changes in risk when individual
controls and safeguards are applied, and it better informs
management of the full nature of risk for a given part of the
organization.
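One common simplification of that approach is sketched below in Python; the scores, the assumed control effectiveness, and the multiplicative formula are illustrative assumptions rather than a prescribed method.

# Hypothetical scoring of risk before and after controls are applied.
# All values and the formula are illustrative assumptions.
inherent_likelihood = 0.6     # probability before any safeguards
impact = 8                    # impact rating on a 1-10 scale

inherent_risk = inherent_likelihood * impact

control_effectiveness = 0.75  # fraction of likelihood removed by the control
likelihood_after_controls = inherent_likelihood * (1 - control_effectiveness)
risk_after_controls = likelihood_after_controls * impact

print(f"Risk score before controls: {inherent_risk:.1f}")
print(f"Risk score after controls:  {risk_after_controls:.1f}")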
Residual Risk
The risk management and risk treatment processes tend to
reduce risk, but they rarely eliminate all risk. Instead, risk
mitigation and risk transfer reduce risk to a lower level. This
“leftover” risk is known as residual risk.
Residual risk is best explained with a real-world example.
An organization’s workforce is equipped with laptop
computers running Windows or macOS, neither of which
utilizes whole-disk encryption. Risk analysis suggests that the
risks associated with the theft or disappearance of an
unencrypted laptop computer are quite high, with potential for
a significant security or privacy incident, depending on whose
laptop computer is lost or stolen.
In this scenario, the organization decides to implement
whole-disk encryption on all of its laptop computers. Note that
this measure does nothing to change the probability of laptop
computers being lost or stolen. However, this measure changes
the impact significantly: with whole-disk encryption, the
organization no longer needs to worry about potential data
compromise, although a stolen laptop will still need to be replaced.
The residual risk, in this example, is the financial risk
associated with the funds required to purchase a new laptop
computer, plus any labor needed to make it ready for
employee use.
To continue this example, the residual risk may itself warrant
further analysis and treatment. Analysis of the risks associated with
the need to replace a lost or stolen laptop computer could
result in a recommendation of the purchase of security cables
that are issued to each employee, together with directives that
laptop computers are to be locked to a piece of furniture (or
locked in a safe) when not in the direct control of the
employee. This measure reduces the probability of theft or
loss, but not the impact: a stolen laptop must still be replaced,
but theft is less likely to occur because, at least some of the
time, employees will use the cable locks, reducing the
opportunities for someone to steal the laptops.
Larger and more complex risk scenarios may require
multiple rounds of risk analysis and risk treatment. However,
an organization will eventually reach the point where the
residual risk is low enough that business leaders and
department heads will accept the residual risk.
Often, risk analysis, risk treatment, and residual risk are
more complicated than this. A risk assessment performed on a
business process or information system is likely to identify
multiple risks, which should be considered in whole as well as
individually. Considering all the risks together can make it
easier to compare the various types of risks and risk treatment
options. Sometimes, a single risk
treatment measure may mitigate more than one of the
identified risks. Figure 2-14 visually depicts multiple rounds
of risk analysis and risk treatment until the final remaining risk
is accepted.
Chapter Review
IT risk identification consists of activities performed to
identify various types of risks associated with an
organization’s use of information technology, including
strategic, operational, cybersecurity, privacy, supply chain, and
compliance risks.
The types of activities where risks may be identified
include risk assessments, audits, incidents, penetration tests,
news and social media, advisories, threat modeling, word of
mouth, whistleblowers, and passive observation.
Threat modeling is a risk identification technique that
involves examining every possible threat agent, action or
event, attack vector, and vulnerability for a given system,
asset, or process, and then modeling or simulating how it
could progress and the damage that could occur.
Vulnerability and control deficiency analysis are important
parts of risk analysis, focusing on weaknesses in a system or
process that could result in a potentially serious incident if
exploited by a threat actor.
Root cause analysis, or RCA, is a method of problem-
solving that seeks to identify the root cause of an event,
situation, or problem.
A risk scenario is a potential real-world, business-
impacting event that consists of a threat, threat actor,
vulnerability, and asset.
Risk analysis is a detailed examination of a risk to better
understand basic factors, including the likelihood and impact
of risk occurrence, and the development of a measure that
might be enacted to reduce likelihood or impact.
Several types of assessments can be used to identify risks,
including a gap assessment, threat modeling, vulnerability
assessment, maturity assessment, penetration test, data
discovery, code review, architecture review, design review,
code scan, audit, and risk assessment.
Several techniques are used in a risk assessment to identify
risks, including interviews, documentation review,
observation, and system testing.
Risk ranking is the process of sequencing risks, generally
with the greatest risks appearing first.
A risk owner is the owner of the business process or system
that is the focus of an identified risk.
A risk register is a business record containing information
about business risks and their origin, potential impact, affected
assets, probability of occurrence, and treatment.
A risk analysis is the detailed analysis of a risk that has
been identified in an activity, such as a risk assessment.
Quantitative risk analysis uses concrete (nonsubjective)
numerical values and statistical analysis to calculate the
likelihood and impact of risk. On the other hand, a qualitative
risk analysis involves using subjective scales, such as a
numeric scale from 1 to 10, or subjective values, such as High,
Medium, and Low.
There are several risk analysis techniques, including fault-
and event-tree analysis, factor analysis of information risk
(FAIR), and bow-tie analysis.
A business impact analysis (BIA) is used to identify an
organization’s business processes, the interdependencies
between processes, the resources required for process
operation, and the impact on the organization if any business
process is incapacitated for a time for any reason.
A statement of impact is a qualitative or quantitative
description of the impact on the business if the process or
system were incapacitated for a time.
A criticality analysis is a study of each system and process,
a consideration of the impact on the organization if it is
incapacitated, the likelihood of incapacitation, and the
estimated cost of mitigating the risk or impact of
incapacitation.
Inherent risk is the level of risk associated with a specific
activity or function before applying any protective controls or
safeguards.
Residual risk is the risk that remains after any risk transfer
or risk mitigation that may have been performed.
The materiality of a risk may help to highlight its
importance to management.
Quick Review
• CRISC candidates and risk managers must be familiar
with the vocabulary of terms related to risk assessment
and risk analysis, including asset, vulnerability, threat,
impact, attack, and risk.
• CRISC and IT risk management consider cybersecurity
risk as well as operational, workforce, privacy, and
compliance risk.
• Appendixes D and E in NIST Special Publication 800-
30, “Guide for Conducting Risk Assessments,” offer a
comprehensive list of threat agents and their related
threats.
• Risk, asset, and control owners are not always the same
person, functional area, department, or organization.
• Risk analysis is a detailed, focused examination of a risk
performed as a part of an overall risk assessment.
• The scope of an assessment determines the boundaries
of inquiry and analysis of a process or system.
• Well-known risk assessment standards include NIST SP
800-30 (Guide for Conducting Risk Assessments),
OCTAVE, ISO/IEC 27005 (Information technology –
Security techniques – Information security risk
management), ISO/IEC 31010 (Risk Management –
Risk Assessment Techniques), and ISACA’s Risk IT
Framework.
Questions
1. Awareness of new risks can be obtained through all of the
following activities except which one?
A. Security advisories
B. Privacy incidents
C. Budget reviews
D. Audits
2. In a discussion about the security of endpoint devices,
one security professional described that some endpoints’
antivirus programs are not functioning correctly. What is
this a description of?
A. Threat
B. Vulnerability
C. Risk
D. Threat model
3. A security specialist is examining a particular preventive
control and has determined that the control is not
operating as designed. What term describes the state of
the control?
A. Undocumented
B. Inefficient
C. Fail-open
D. Ineffective
4. A risk analyst reviewing a business process has identified
a control gap. What is the meaning of this conclusion?
A. A control exists but is undocumented.
B. A control does not exist.
C. An earlier risk assessment determined that a control
is not necessary.
D. A new risk has been discovered.
5. Several security professionals are discussing a recent
security incident. The discussion is proceeding through
the phases of the incident and the conditions preceding
the incident in order to understand why those conditions
occurred. What are the security professionals performing?
A. Root cause analysis
B. Incident debrief
C. Incident forensics
D. Reverse engineering
6. What is the primary purpose for risk scenario
development?
A. To identify the highest rated risks
B. To quantify the highest rated risks
C. To prepare an executive briefing
D. To associate risks with business objectives
7. A risk manager wants to better understand the factors in
particular business processes that could lead to an
incident. What activity should the risk manager perform?
A. Risk analysis
B. Threat modeling
C. Vulnerability analysis
D. Root cause analysis
8. Hackers, cybercriminal organizations, disgruntled
employees, and hurricanes are known as what?
A. Threat agents
B. Threat actors
C. Threats
D. Attacks
9. An auditor has contacted an organization’s data center
security manager and has requested a walkthrough. What
is the auditor asking for?
A. A meeting to discuss one or more security controls
B. A tour of the data center
C. An explanation for a recent incident
D. Interpretation of a security report
10. What does NIST Special Publication 800-30 describe?
A. A risk management methodology
B. A risk assessment methodology
C. Security and privacy controls
D. Audit procedures for security and privacy controls
11. A security analyst is finishing up a risk assessment and is
about to identify the risk owner for each risk in the report.
In general, who should be the risk owner for each risk?
A. The board of directors
B. The CISO or CIRO
C. The CIO
D. The related business process owner
12. Which of the following is the best description of a risk
register?
A. A list of threat agents and threat actors
B. A list of identified risks
C. The root cause analysis (RCA) for a security incident
D. A list of likely risk scenarios
13. A risk manager is performing risk analysis on a specific
risk to better understand the risk and to identify potential
remedies. Various aspects of the initial risk and its
potential remedies are scored on a scale of 1–10. What
type of risk analysis is being performed?
A. Qualitative risk analysis
B. Quantitative risk analysis
C. Semiquantitative risk analysis
D. Factor Analysis of Information Risk (FAIR)
14. An organization has performed a risk analysis of a
complex scenario. Two different mitigation strategies
have been approved that will reduce the original risk to a
much lower level. The risk that remains after these
mitigation strategies have been completed is known as
what?
A. Inherent risk
B. Leftover risk
C. Residual risk
D. Accepted risk
15. Which of the following is the best method for managing
residual risk?
A. Subject it to risk analysis and risk treatment.
B. Retain it on the risk register and consider it as
accepted.
C. Retain it on the risk register and label it as closed.
D. Add it to the cumulative total of risk tolerance.
Answers
1. C. Awareness of risks can be obtained through numerous
sources and activities, such as audits, incidents,
advisories, threat modeling, assessments, and news
articles. Budget reviews are an unlikely source of
information about new risks.
2. B. The matter of a failure of a protective control such as
antivirus or a firewall is known as a vulnerability. The
basic terms of risk, such as vulnerability, threat, control,
and risk, are sometimes misused.
3. D. A control that is not operating as designed is
considered ineffective. To reach this conclusion, an
analyst or auditor needs to understand the objective of the
control and determine whether the control as it is
presently operating meets that objective.
4. B. A control gap is a situation where a control does not
exist, but should exist. A control gap can also describe a
situation where a control is not properly designed or
implemented.
5. A. A root cause analysis (RCA) attempts to get to the
ultimate explanation of an incident, by proceeding
backward through time to ask “why” specific conditions
or events occurred.
6. D. Risk scenario development is performed to associate
specific risks with specific business objectives or
business activities. Risk scenario development makes
risks more tangible by describing their potential impact
on the organization in business terms.
7. C. A vulnerability analysis is a detailed examination of a
process or system to discover vulnerabilities that a
credible threat agent could exploit.
8. A. Threat agents are the entities, human or not, that may
perform actions that could cause harm to an organization,
its processes, or its systems.
9. A. A walkthrough with an auditor is a discussion where a
control owner describes the operation of one or more
controls and answers specific questions about them.
10. B. NIST SP 800-30, “Guide for Conducting Risk
Assessments,” is a step-by-step guide to performing
qualitative and quantitative risk assessments. The
standard also briefly describes the overall risk
management life-cycle process to provide context for
individual risk assessments.
11. D. The owner of individual risks should be the business
unit leader or department head that manages business
processes most closely related to those risks. Only in rare
circumstances should risks be assigned to the CIO,
CISO, or CIRO.
12. B. A risk register is a master list of all credible risks that
have been identified and are awaiting further analysis,
treatment, or follow-up.
13. A. Qualitative risk analysis uses a scoring scheme such as
“high-medium-low” or a simple numeric scale such as 1–
3, 1–5, or 1–10.
14. C. Residual risk is the risk that remains after risk
treatment (whether mitigation, transfer, or avoidance) has
been completed.
15. A. The best way to manage residual risk is to include it in
the risk register and then subject it to the usual risk
analysis, risk scenario development, and risk treatment.
CHAPTER 3
Risk Response and Reporting
Every organization has risk. The leaders of smart
organizations, however, have determined how they will deal
with that risk appropriately for their business context. Will
they accept all risks? Will they transfer some risks to a third
party, such as an insurance company? These are some
questions that need to be considered within the risk response
process, as discussed in this chapter.
We will also discuss the major control frameworks within
the field and how they are incorporated into risk response. You
should note that risk is different for every organization; risk
within an educational context is different from a financial
context, which is different from a government context. Also
keep in mind that IT risks are different from business or
mission-oriented risks. It’s important to note that the mission
of the organization helps to shape risk responses as well. This
is why it’s so important that organizations conduct a thorough
risk analysis and understand and prioritize the response
options that are right for them. You should also remember that
the risks to the business, the business goals and objectives, and
the systems supporting those functions should all be
considered throughout the risk response process. It’s easy as
an IT professional to forget that the business objectives are the
top priority, but to maximize efficiency and effectiveness (and
to pass the exam), you should keep that in mind.
NOTE Risk, asset, and control owners are not always the
same person, functional area, or even organization. It’s
important that you identify these particular owners early in the
assessment process and maintain careful coordination and
communication between these and other relevant stakeholders,
within the boundaries of your authority and assessment scope.
Having different types of owners can result in politically
sensitive issues that revolve around resourcing, responsibility,
accountability, and sometimes even blame.
Risk Mitigation
Risk mitigation generally involves the application of controls
that lower the overall level of risk by reducing the
vulnerability, the likelihood of a successful threat exploiting
that vulnerability, or the impact to the asset if the risk were to
be realized. Mitigation actions come in a number of forms; a
policy requiring nightly offsite backups of critical data can
mitigate the risk of data loss, for instance. Similarly, applying
a newly released web browser patch can mitigate the risk of
exploitation. The goal is to get the risk down to a level
considered acceptable by the organization’s leadership. We
will discuss controls and their implementation a bit later in the
chapter, but for now you should understand that you may have
to add new controls; update, upgrade, or even change controls;
or even in some cases change the way your business processes
work. All of these actions are designed to mitigate or reduce
risk.
Risk Sharing
Risk sharing (often referred to as risk transference) entails the
use of a third party to offset part of the risk. In many cases,
risk sharing involves outsourcing some activities to a third
party to reduce the financial impact of a risk event.
Sharing offsite (co-location) assets and contractual obligations
with other entities is one way that organizations implement
risk sharing; a cloud service provider can be used within this
scenario. The cloud provider might be contractually obligated
to assume part of the financial impact in the event of a breach,
but be aware that there is a potential loss of brand goodwill or
other intangible assets that can be difficult to offload. Another
example of risk sharing is the use of an insurance service to
cover the financial impacts (at least partially) of a breach;
however, the intangible losses mentioned previously would
still be present. The point is that risk sharing (or transference)
typically only works with a portion of the risk; it does not
reduce all of the risk. Therefore, multiple risk response options
used concurrently will likely be needed.
Risk Acceptance
There will always be some level of residual risk, no matter
how many controls you implement or the amount of insurance
you purchase. Again, the goal is to get the risk down to a level
considered acceptable by the leadership. Risk acceptance
occurs when the active decision is made to assume the risk
(either inherent or residual) and take no further action to
reduce it. The word active is critical here; risk acceptance is
different from being risk ignorant in that the risk is identified,
alternatives are considered, and a conscious decision is made
to accept it. A good example of this would be when you have
taken all practical steps to reduce the risk of an attack from a
malicious entity. Your organization may have installed
perimeter security devices, strong authentication methods, and
intrusion detection systems, which will reduce the risk
significantly. However, since risk cannot be completely
eliminated, there is always a chance that an attacker may enter
the infrastructure through some other means. This residual risk
may have to be accepted because it might be financially
unwise to invest more resources into further mitigating the
risk. Similarly, a company might choose to willfully forgo
implementing a government-mandated control and accept the
resulting fines or other financial penalties, assuming that those
penalties would cost less than implementing the controls. As
we will also discuss, the value
of the asset may be far less than the cost of implementing any
mitigating controls, so in those cases risk acceptance may be
the right choice.
Note that risk acceptance of any kind should be a formal
management decision, and fully documented, along with
provisions to monitor the risk, typically through the risk
register. As with all risk response options, management should
continue to monitor the risk, since it frequently changes, and
adjust the response as dictated by the changing operating
environment, new technologies, and the threat landscape.
Risk Avoidance
Risk can be avoided through a number of methods. Sometimes
risk can be avoided by simply choosing not to participate in
the activity that incurs the risk. For example, an organization
may decide that a particular business venture or entry into a
new market would incur far more risk than it is willing to take
on, and that it does not need to conduct the activity at all.
Avoiding risk, then, means eliminating the risk associated with
a planned activity by not performing that activity.
Risk avoidance is the option taken when, after all risk
treatment options, including mitigating controls, have been
considered, the level of risk is still not acceptable. An
organization might choose not to take on a proposed project
because of the probability of failure and subsequent loss of
capital, for example. Another scenario might be the
organization choosing to not adopt a cloud solution because of
the level of sensitivity of information that would be stored
within the cloud. In any event, risk avoidance does not simply
mean ignoring the risk, as some people may be led to believe.
Risk avoidance requires careful thought and planning as well
as balancing the level of risk incurred in following a particular
path versus eliminating the risk by not following that path at
all.
Third-Party Risk
In today’s business world, an organization rarely functions
independently from other entities. Most businesses are not
entirely independent and self-sustaining; they depend on
suppliers, service providers, and so on to help them bring their
products and services to market. These third parties often
perform services that are outsourced from the organization
because of a lack of infrastructure or simply a desire to not
perform these functions or services internally. A great example
of a third-party service provider is one that provides cloud
services to businesses, such as data storage, applications, and
even entire infrastructures to organizations that don’t have the
means or the desire to create the internal structures needed to
manage them internally. Sometimes it is more cost effective
for an organization to simply hire a third party to perform a
function or provide a service than it is to expend the resources
to create and maintain all of the equipment and processes
itself.
If you recall the risk treatment options discussed earlier,
transferring (or sharing) risk includes the use of third parties to
bear some of the risk in performing certain business functions.
Insurance is the classic example of transferring risk used in
most risk management training texts, but this instance usually
only provides monetary funds in case of a disaster or liability
scenario. Transferring risk to third parties also includes
outsourcing certain functions and services to third parties, so
this, too, is a standard risk treatment method. The
organization, for example, can reduce its risk of accounting
and budget errors with a small, untrained staff by outsourcing
its accounting functions to a third-party accounting firm.
It’s important to note that while this strategy of dealing with
risk is effective and common, the organization is transferring
only risk, not responsibility. The organization almost always
retains final accountability and responsibility for functions and
services outsourced to a third party. For example, transferring
a business's accounting functions to an outside firm mitigates
risk by transferring some of it to the third party, but the
organization is still ultimately responsible to its stakeholders,
government regulators, and banks, and for its financial
obligations, even if the accounting firm makes an error. As another example, an
organization that makes use of cloud data storage services still
maintains accountability and responsibility for any data stored
with a cloud provider if that provider suffers a breach that
results in the release of sensitive information. While third-
party providers may still incur some level of liability in these
cases, the final liability rests with the organization that
contracted with them. Liability, accountability, and
responsibility are also some of the key risk factors with third
parties that we’ll discuss next.
Third-Party Threats
Like the risk factors mentioned earlier, threats associated with
dealing with third-party providers primarily relate to trust and
the contractual and legal aspects of those arrangements.
Threats include failure to provide services as specified in
contracts, failure to meet performance or security levels,
failure to protect data, and loss of availability for systems or
data to an organization's users. Third-party providers can
certainly succumb to some of the same threats the organization
would, such as natural disasters, so those events can also mean
loss of service or data for the business. Third-party providers
are also susceptible to hacking and malicious insiders, the
same as any organization would be, so threats of data loss or
disclosure are also present with third-party providers.
Third-Party Vulnerabilities
Vulnerabilities that result from associations or dealings with
third-party providers include failure to adequately cover
contingencies in contractual agreements, such as system or
data availability or loss, lack of or inadequate security
protections, and service levels that fall below an acceptable
standard. Furthermore, any vulnerabilities that the third-party
provider has inherent to its own organization are,
unfortunately, indirectly incurred by any organization that
deals with the third party, since those vulnerabilities could
affect data in its custody or services provided to others.
Examples of internal vulnerabilities that could affect
organizations for whom the third party provides services
include infrastructure vulnerabilities (such as patch or
configuration management issues, loss of equipment, or
malicious intrusion by hackers), organizational vulnerabilities
(such as loss of revenue or bankruptcy), and even
technological vulnerabilities (such as lack of sufficient
encryption strengths or strong authentication methods). All
these vulnerabilities could indirectly affect any organization
that has a contractual agreement with a third-party provider.
Exception Management
No matter how standardized security controls are, there are
always exceptions, and you need to have a process in place to
manage those exceptions and keep them minimized. Consider
the example in the previous section where an operating system
patch cannot be applied because of a legacy application that is
critical to the organization’s mission. This mitigation is
important, but less important than the legacy application and
its associated mission. Exception management, as mentioned
earlier, comes into play here; you should evaluate the risk of
implementing the patch and breaking the application, thus
putting the organization’s mission in jeopardy, versus the risk
of not implementing the patch and dealing with the security
ramifications that come with that decision. Management will
have to make a decision regarding which is the riskier choice.
If the choice is to make an exception and not implement the
patch, you need to document this as an accepted exception to
the policy and track it, usually in the risk register. Perhaps the
patch can be implemented in the future, or another mitigation
could come along and make the exception moot. This is
another aspect of risk acceptance; you are asking management
to accept an ongoing risk, as an exception, because the
response cannot be implemented for whatever reason. Either
way, keeping a close eye on the issue is key to not letting these
exceptions slip through the cracks.
Regardless of where issues, security findings, and
exceptions to normal processes, policies, or baselines occur,
these departures should be avoided as much as possible, and
when they cannot be avoided, they must be carefully managed
to reduce the risks they present.
Emerging Technologies
New or emerging technologies can present risks to
organizations simply because there are several different
important considerations when integrating the latest
technologies into the existing infrastructure. Many
organizations have an unfortunate tendency to rush out, buy,
and attempt to implement the latest (and often unproven)
technologies on the market, without careful planning or
consideration of the existing infrastructure and how it will
react to the new technologies. One of the primary
considerations is making a business case for a new technology,
which may be to fill a gap that older technology cannot
address or to provide a capability that the organization now
needs to compete in the market space. In other words, the
company must justify new technologies to make them worth
the risk they could incur.
At the opposite end of the spectrum, many companies are
averse to investing a great number of resources in new
technologies, instead relying on older, tried-and-true
technologies that have always worked in the past. These
organizations fail to consider the changing threat landscape,
and sometimes the need for maintaining a cutting edge in the
marketplace. The threat landscape can render older
technologies obsolete and ineffective in protecting the
company’s information assets. The market a particular
organization participates in can also drive the need to update
its older technologies to those that make it able to produce
goods and services faster, more efficiently, and to better meet
the needs of its customers.
In any event, emerging technologies present challenges to
organizations that must be considered. These challenges
include the emerging risk factors, vulnerabilities, and threats
that go along with those technologies. We will discuss those in
the next few sections.
Emerging Threats
Threats resulting from the introduction of new and emerging
technologies into the existing infrastructure are numerous. If
the organization has failed to adequately plan for the new
technology, these threats can become significant. These
include untested or unproven technologies, non-
interoperability, incompatible security mechanisms, and lack
of suitability of the technology for use in the organization. The
organization could also incur additional cost and require more
resources because of a faulty implementation. These threats
can be minimized by careful planning and integrating new
technologies using a stable systems development life cycle
(SDLC) model.
Additionally, organizations should implement both threat
modeling and threat hunting processes into their security
program when new technologies are considered. Threat
modeling and threat hunting can help an organization identify
not only the generic threats that affect all organizations, but
those specific to its own business model, processes, assets,
vulnerabilities, and risk scenarios involving the new
technology.
Emerging Vulnerabilities
As with threats, vulnerabilities that go along with working
with emerging technologies are numerous as well. General
vulnerabilities could include a lack of trained staff committed
to managing and implementing the new technology. Lack of
adequate project planning is also a serious vulnerability that
could affect the organization’s ability to effectively integrate
new technologies. Another vulnerability could be a weak
support contract or other type of warranty, guarantee, or
support for a new technology or system. Most of these
vulnerabilities appear when new technologies are first
implemented and tend to become mitigated or lessened as the
technologies are integrated into the existing infrastructure, but
they still exist.
Beyond general vulnerabilities, new technologies also have
their own inherent technical vulnerabilities. Often these
vulnerabilities are not discovered until long after a technology
has been implemented. Although vendors are increasingly
security conscious in implementing and designing
technologies from a secure perspective, zero-day
vulnerabilities are often discovered months or even years after
those technologies are installed and operational.
To make matters more challenging, some vulnerabilities are
not detectable by vulnerability management tools. They are
disclosed only by the vendor, and only if the vendor chooses to
disclose them. In these situations, monitoring the vendor's
updates for the affected applications, and including that
monitoring in the vulnerability management process, is all the
more important.
When implementing a new technology, organizations
should make an extra effort toward vulnerability scanning and
assessment, and they should pay particular attention to new
vulnerabilities, even those that seem unrelated to the new
technology. Often these vulnerabilities could be indirectly
related and appear because of interoperability or integration
issues with the infrastructure.
In some cases, it may make sense to perform other types of
validations against new technology—especially if there are
additional risk factors such as large amounts of sensitive data.
Validations may include but are not limited to penetration
testing, secure configuration reviews, and application security
testing.
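For example, a secure configuration review can be partially automated by comparing a new system’s settings against a hardening baseline. The following is a minimal, hypothetical Python sketch; the setting names and baseline values are illustrative only, and a real review would use the vendor’s hardening guide or the organization’s own configuration standard.

    # A minimal, hypothetical secure configuration review check.
    # Baseline values and setting names are illustrative only.
    HARDENING_BASELINE = {
        "tls_min_version": "1.2",
        "default_admin_account_disabled": True,
        "audit_logging_enabled": True,
        "password_min_length": 14,
    }

    def review_configuration(actual):
        """Return findings where observed settings deviate from the baseline."""
        findings = []
        for setting, expected in HARDENING_BASELINE.items():
            observed = actual.get(setting)
            if observed != expected:
                findings.append(f"{setting}: expected {expected!r}, found {observed!r}")
        return findings

    # Example: settings pulled from a newly deployed (hypothetical) system
    new_system = {
        "tls_min_version": "1.0",
        "default_admin_account_disabled": False,
        "audit_logging_enabled": True,
        # password_min_length was never configured
    }

    for finding in review_configuration(new_system):
        print("FINDING:", finding)

Each deviation reported this way becomes a candidate finding for the risk register or for remediation before the technology goes live.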
EXAM TIP The key areas of concern with emerging
technologies are interoperability and compatibility. These two
concerns affect the security, functionality, and performance of
the new technologies when they are installed into an existing
infrastructure.
Control Frameworks
Now that we’ve covered some generalities about controls, it’s
time to look at various control frameworks you might use in
your professional career. Keep in mind that some control frameworks go into more depth than others; a given framework may treat one subject area lightly while covering another in considerable detail. Also keep in mind that some control
frameworks are geared toward specific environments or areas,
and not all frameworks are necessarily geared toward security
specifically; some frameworks apply to a broad general
business or risk management area but may have security
controls built into them for specific needs or direct security
actions from a higher level. Not all control frameworks are
technical in nature either; some frameworks apply to security
and risk from a business process perspective rather than a
technical perspective. Some control frameworks are privacy
oriented and provide a different set of controls. The control
frameworks described in the following sections are just some
of the more popular ones you may encounter in your
professional career as a risk practitioner.
NIST The U.S. Department of Commerce’s National Institute
of Standards and Technology (NIST) has produced several
documents—some of which we have already mentioned—that
relate to risk management. The entire Risk Management
Framework (RMF), and all of its different components, spans
several documents. We’re not going to provide a
comprehensive review of the framework and its related
documents here, but we will examine a particular piece of the
framework: the controls catalog. NIST Special Publication
(SP) 800-53, revision 5, “Security and Privacy Controls for
Information Systems and Organizations,” is the control catalog
supporting NIST’s RMF. The NIST controls are a
comprehensive set of security measures and processes
intended to mitigate and reduce risk from an IT security
perspective. The NIST control catalog is divided up into 20
separate control groups, called families, each related to a
particular aspect of IT security.
The NIST control families span a wide range of subject areas. Each family contains several controls; smaller families (such as the Awareness and Training family) may have as few as six controls, while others contain many controls that go into great detail on security measures. An example is the System and Communications Protection (SC) family, with 51 controls.
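To show how such a catalog is organized, the following minimal Python sketch represents a small slice of two families. The family names and sizes match those just discussed; the individual control identifiers and titles are included only as illustration and should be verified against the current revision of SP 800-53.

    # A minimal sketch of a slice of a control catalog such as NIST SP 800-53.
    # Control identifiers and titles are illustrative; verify against the source.
    CATALOG_SLICE = {
        "AT": {
            "name": "Awareness and Training",
            "controls": {
                "AT-1": "Policy and Procedures",
                "AT-2": "Literacy Training and Awareness",
                "AT-3": "Role-Based Training",
            },
        },
        "SC": {
            "name": "System and Communications Protection",
            "controls": {
                "SC-7": "Boundary Protection",
                "SC-8": "Transmission Confidentiality and Integrity",
                "SC-13": "Cryptographic Protection",
            },
        },
    }

    for family_id, family in CATALOG_SLICE.items():
        print(f"{family_id} ({family['name']}): {len(family['controls'])} controls listed")
        for control_id, title in family["controls"].items():
            print(f"  {control_id} {title}")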
Control Design
Controls are designed and applied to a system, data, process,
or even organizational element to address a weakness or
vulnerability or to counter a specific threat. Often, controls are
designed to address a specific risk response strategy, such as
risk reduction/mitigation, sharing, avoidance, and acceptance.
Controls are applied to reduce the impact and likelihood
elements of risk, as follows:
• Reduce the likelihood of a threat agent initiating a threat
(deterrent, preventative, and detective controls are
examples).
• Reduce the likelihood of a threat exploiting a
vulnerability (preventative and detective controls).
• Reduce the impact to the organization or asset if a threat
exploits a vulnerability (corrective, compensating, and
recovery controls).
• Reduce the vulnerability (deterrent, preventative,
corrective, and compensating controls all could perform
this action).
Note that none of the scenarios listed earlier describes a
case where a control prevents a threat or a threat actor. Threats
and threat actors exist, and controls can’t really prevent their
existence. Controls can, however, affect how they interact with
vulnerabilities and assets. Also note that all these strategies
may be pursued simultaneously in the case of controls that are
well designed and effective, using the concept of economy of
scale and multiple use of controls.
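To make this concrete, the following minimal Python sketch uses a simple likelihood-times-impact scoring model (an assumption for illustration, not a prescribed methodology) to show how a preventative control that lowers likelihood and a corrective control that lowers impact both reduce the resulting risk score.

    # Illustrative only: a simple likelihood x impact model on 1-5 scales,
    # showing how different control functions lower the same risk score.
    def risk_score(likelihood, impact):
        return likelihood * impact

    # Inherent risk before any controls
    inherent = risk_score(likelihood=4, impact=5)             # 20

    # A preventative control reduces the likelihood of exploitation
    after_preventative = risk_score(likelihood=2, impact=5)   # 10

    # A corrective/recovery control reduces the impact if exploitation occurs
    after_corrective = risk_score(likelihood=2, impact=3)     # 6

    print(f"Inherent risk:              {inherent}")
    print(f"After preventative control: {after_preventative}")
    print(f"After corrective control:   {after_corrective}")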
Control Selection
When an organization selects controls to be applied to protect
systems and minimize risk, several considerations come to
mind. First, the organization should look at its governance
requirements and ensure that its controls meet those
requirements, particularly where specific data protection
requirements are indicated. This may include level of
performance or function of a control (encryption strength, for
instance). The organization also has to look at its existing
infrastructure. It is likely to already have some level of
controls in place, although the controls might not be applied
directly to the system in question. Take, for example, an
organization that has industrial control systems and other types
of operational technologies present. Many of these devices
cannot support basic security measures, such as authentication
or encryption; however, as compensating controls, the
organization could implement network segmentation by
putting them in their own physically or logically separated
subnets and install authentication and encryption mechanisms
specifically for those devices, since they do not have their own
built-in capabilities. If existing controls can already fulfill a new control requirement, they should be used to the maximum extent possible.
Second, controls should also be selected based on their level of effectiveness, as indicated by the design parameters we discussed earlier. A control that is minimally effective, even if it meets governance requirements, may still not be desirable if it does not provide the level of security the organization needs. Effectiveness can also mean how well the control reduces risk or ensures compliance. If either of these levels of functionality or performance is not met, the organization should consider strengthening or changing the control.
Third, cost is also an issue with control selection. A control that costs $10,000 to implement but protects only a single system valued at $2,000 is not cost-effective. However, a control that costs $10,000 and protects several systems that individually are not high value but collectively generate $100,000 in revenue for the organization per year is likely cost-effective. Control selection with
regard to cost should include the cost of the control as well as
the cost to maintain it. This includes spare parts and
components, service contracts, and the cost of qualified
personnel to maintain the control. This is where quantitative
data elements, such as asset value (AV), exposure factor (EF),
single loss expectancy (SLE), and annualized loss expectancy
(ALE), which you learned about in earlier chapters, come into
play, since they affect whether a control is worth it financially
to implement.
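The following minimal Python sketch illustrates that comparison using assumed figures: SLE is computed as AV multiplied by EF, ALE as SLE multiplied by the annualized rate of occurrence (ARO), and the control is considered cost-effective when the reduction in ALE exceeds its annualized cost.

    # Illustrative figures only. SLE = AV x EF; ALE = SLE x ARO.
    def sle(asset_value, exposure_factor):
        return asset_value * exposure_factor

    def ale(single_loss_expectancy, annualized_rate_of_occurrence):
        return single_loss_expectancy * annualized_rate_of_occurrence

    asset_value = 100_000        # e.g., annual revenue supported by the systems
    exposure_factor = 0.40       # portion of value lost per incident
    aro_without_control = 0.5    # expected incidents per year without the control
    aro_with_control = 0.1       # expected incidents per year with the control

    ale_before = ale(sle(asset_value, exposure_factor), aro_without_control)
    ale_after = ale(sle(asset_value, exposure_factor), aro_with_control)

    # Purchase cost amortized over five years plus annual maintenance
    annual_control_cost = 10_000 / 5 + 1_500
    risk_reduction = ale_before - ale_after

    print(f"ALE before control:  ${ale_before:,.0f}")
    print(f"ALE after control:   ${ale_after:,.0f}")
    print(f"Annual control cost: ${annual_control_cost:,.0f}")
    print("Cost-effective" if risk_reduction > annual_control_cost else "Not cost-effective")

In this example the control reduces the ALE by $16,000 per year at an annualized cost of $3,500, so it is clearly worth implementing; with different figures the same arithmetic could just as easily argue against the control.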
Ultimately, security and risk practitioners recommend controls to management based on their effectiveness and cost; management must make the decision regarding their implementation. If
the desired controls cannot be implemented, for whatever
reason, both the practitioners and management must consider
selecting appropriate compensating controls, which must be
approved and documented.
Control Analysis
Controls are analyzed at several different points in their life
cycle. They are looked at to ensure they fulfill their basic
functions of asset protection, compliance with governance, and
risk reduction. They’re examined for not only effectiveness
(how well they do their job) but also efficiency (meaning that
they do their job with as little additional effort, resources, and
management as possible). Even if a control is performing its
job functions wonderfully, if the cost to maintain it is
prohibitive, the organization may need to look at changing the
control or its implementation in some fashion.
The most common way to analyze controls is through
controls assessment, where controls are evaluated for their
effectiveness. A control can be evaluated using various
methods, such as interviewing key personnel responsible for
managing and maintaining the control, reviewing the
documentation associated with the control, observing the
control in action, and testing the control through technical
means. These different assessment methods will give a risk
practitioner an idea of how effective the control is in meeting
its primary functions.
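As an illustration, the following minimal Python sketch records the evidence gathered through these four methods for a single control and flags the control when any method turns up an unsatisfactory result. The field names, the example control, and the simple pass/fail rule are assumptions for illustration, not a formal assessment standard.

    # Illustrative assessment record for one control; not a formal standard.
    assessment = {
        "control_id": "AC-17 (Remote Access)",   # example control
        "methods": {
            "interview":      {"performed": True, "satisfactory": True},
            "documentation":  {"performed": True, "satisfactory": True},
            "observation":    {"performed": True, "satisfactory": False},
            "technical_test": {"performed": True, "satisfactory": False},
        },
    }

    # The control is rated effective only if every performed method was satisfactory
    results = [m["satisfactory"] for m in assessment["methods"].values() if m["performed"]]
    effective = all(results) if results else False

    print(f"{assessment['control_id']}: {'effective' if effective else 'deficiencies noted'}")
    for name, result in assessment["methods"].items():
        print(f"  {name}: {'satisfactory' if result['satisfactory'] else 'finding'}")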
Risk assessments for the specific system under review, as
well as the entire organization, will also indicate how well a
control is performing its function. Even if a risk practitioner is
not examining a particular control, levels of increased or
decreased risk for the system itself or the entire organization
would indicate whether or not one or more controls are
effective. Historical and trend analysis methods tell a risk
practitioner how risk has changed over time. They also help
the practitioner predict how risk will change, given changes in
the environment, technology, the threat landscape, and how
effective the control will be. If risk has increased inexplicably,
the practitioner must analyze the different controls designed
and implemented to mitigate that risk.
After controls have been tested and analyzed, results from
those analyses should be recorded on the overall risk register,
particularly with regard to risk the control is supposed to
mitigate. Obviously, management must be informed of any
changes in risk caused by control effectiveness issues.
Control Implementation
Controls aren’t simply dropped into the infrastructure, even when there is urgency to do so. As we have previously discussed, some planning goes into control implementation.
Most of this planning focuses on determining the effectiveness
of the control and ensuring it is designed to protect assets,
assist in compliance, and reduce risk. There are also other
considerations when implementing controls, including the
change management process, configuration management, and
testing.
Implementation Testing
Although controls undergo testing at various points in their life
cycle, testing before implementation is critical because often it
is difficult to go back to a previous control or a point in time
before the control is implemented. This is especially true if the
infrastructure is significantly altered to accommodate the new
control. Consider a network infrastructure that has a new
firewall or other network security device installed. The
network may have to be rerouted and reconfigured and new
rules configured on the network security device. If for some
reason it is installed without testing and then fails, it may
cause serious impact to business operations if users cannot use
the network, get on the Internet, or otherwise are unable to
perform their duties.
Several types of implementation testing should occur on
new controls before they are implemented. One of the first is
interoperability testing. Interoperability is critical since the
new control must work with existing technologies and
infrastructure. If for some reason they do not work together,
this can cause work stoppages or outages and will impact
business operations. A risk analysis conducted before
implementing a new control should take this into account, and
the organization should be reasonably informed about how the
new technology or control will interact with the existing
infrastructure before it is tested and implemented. Another
type of testing involves testing security mechanisms that are
inherent to the new control, such as authentication and
encryption mechanisms. Not only should they be interoperable
with existing infrastructure, but they should also be tested for
function and performance as well as compliance with any
governance. It would not do to install a security control that
has legacy authentication or encryption mechanisms that do
not meet governance requirements, for example, and therefore
fails to protect systems and data.
Testing a control, especially a complex one, before
implementation ensures that it is interoperable, satisfies
technical requirements, meets governance requirements, and
performs and functions as it should. Understanding the
limitations and operating parameters of a complex control
through testing before implementation is in itself a method of
risk reduction.
Quick Review
• Risk response is what is needed after risk has been
appropriately assessed and analyzed.
• Risk and control ownership must be carefully
considered as part of risk response.
• Business process owners are not necessarily risk or
control owners.
• Controls that span the entire organization are called
common controls.
• There are four generally accepted risk responses:
mitigation, sharing or transference, acceptance, and
avoidance.
• Risk mitigation means to reduce risk through the
implementation of controls.
• Risk sharing allows an organization to transfer some of
the risk to a third-party provider or through insurance.
• Risk sharing does not absolve an organization of its
responsibility, accountability, or legal liability.
• Risk acceptance means to take on any leftover risk after
all other risk treatment options have been implemented.
• Risk avoidance does not mean to ignore risk; it simply
means to avoid actions that result in the risk.
• Evaluating risk response options involves considering
several factors, including cost, effectiveness, value of
the asset, the organization, and governance, as well as
other external factors.
• Third-party risk management means that the
organization must manage any risk inherent to third-
party risk sharing, contracts, and so on.
• Third-party risk factors include the same vulnerabilities
and threats that other organizations encounter.
• Considerations with third-party risk include protection
of controls, responsibility, accountability, legal liability,
and data ownership.
• Issues, findings, and exceptions management helps to
manage risk when there are departures from normal
processes, security controls, or baselines.
• Exceptions must be approved by management,
thoroughly documented, and tracked in the risk register.
• Emerging risk must be managed due to a changing
operating environment, emerging technologies, or a
changing threat landscape.
• Emerging technologies should not be adopted until they
have been evaluated for risk.
• The primary risks of emerging technologies include
interoperability and compatibility with existing systems
as well as security mechanisms.
• Controls are security measures or safeguards
implemented to protect systems and data, ensure
compliance with governance, and reduce risk.
• The three types of controls are administrative, technical,
and physical.
• Administrative (managerial) controls are implemented
through policies and procedures.
• Technical (logical) controls are implemented through
hardware and software.
• Physical (operational) controls are implemented through
physical barriers and systems used to protect people,
equipment, data, and facilities.
• The five control functions are deterrent, preventative,
detective, corrective, and compensating.
• Deterrent controls must be known in order to be
effective.
• Preventative controls do not have to be known; they
serve to prevent malicious actions or policy violations.
• Detective controls are used to discover or detect
malicious actions or policy violations.
• Corrective controls are used to remedy an immediate
security issue and are temporary in nature.
• Compensating controls are used in place of a preferred
primary control and are generally longer-term in nature.
• Control frameworks are available to assist organizations
in implementing controls to protect their systems, meet
governance requirements, and reduce risk.
• Which control framework an organization selects may
be based on governance. Some governance vehicles
require specific controls, but others are more flexible.
• Control frameworks include the NIST controls, CIS
controls, PCI DSS, and the ISO/IEC 27001/27002
control catalogs.
• Controls are designed and selected based on their cost
effectiveness, how well they perform and function, how
well they protect assets, how they assist in meeting
compliance requirements, and how well they reduce
risk.
• Controls are tested and evaluated during various points
in their life cycle, including before and after
implementation and during security assessments such as
vulnerability assessments, risk assessments, and
penetration testing.
• Controls can be tested using four primary methods:
interviews with key personnel, documentation reviews,
control observation, and technical testing.
• Change and configuration management are key
processes involved in control implementation.
• Interoperability with the existing infrastructure is a key
concern while implementing a new control.
• Controls must be thoroughly documented in the risk
register.
• Risk responses and risk treatment options should be
reported to management on both a periodic and as-
needed basis.
• Risk practitioners should understand reporting
requirements levied on the organization by both internal
management and external governance.
• Risk practitioners should ensure that all data collected
to build reports and inform management is complete,
accurate, and trustworthy.
• Risk and control monitoring techniques should be part
of the formally adopted risk management methodology
and included in various documents, including risk
strategy, policy, and procedures.
• Risk reporting techniques include narrative reports,
charts, and graphs such as heat maps, scorecards, and
centralized dashboards, among other techniques.
• The risk reporting technique used, as well as the
complexity of the data and the goal of the report, should
be tailored to the audience.
• Key performance indicators are pieces of information
that help measure the performance of controls and other
aspects of the security program as well as show progress
toward the organization’s goals in those areas.
• Key risk indicators show how risk has increased or been
reduced.
• Key control indicators show how controls are
performing and functioning in terms of effectiveness,
compliance, or risk reduction.
Questions
1. You are the business process owner for the manufacturing
department of your company. After a risk assessment,
several key risks have been identified that directly
involve the protection of data within your department.
Both the IT security and physical security departments
have security safeguards in place, or planned for
implementation, to help reduce those risks. Which of the
following most accurately describes ownership in this
scenario?
A. You are the risk owner, and the IT and physical
security departments are the control owners.
B. As the business process owner, you are both the risk
and control owner.
C. The IT and physical security departments own both
the risk and the controls.
D. Executive management owns both the risk and
controls; neither you nor the IT and security
department have any responsibility for that
ownership.
2. As a risk practitioner in a larger organization, you have
been asked to review the company’s risk response options
for a particular risk and make recommendations to the
company’s executive management. For some of the risk,
there are obvious controls that could be implemented, but
this will not completely eliminate all of the risk. None of
the risk can be borne by any third party, and the business
processes involved with the risk are extremely critical.
Which of the following risk treatment options would you
recommend after all possible mitigations have been put in
place?
A. Risk avoidance
B. Risk mitigation
C. Risk acceptance
D. Risk sharing
3. Your company has just established a contract with a third-
party cloud services provider. Based on the contract
language, in the event of a breach, the provider must
disclose details of the event as well as allow an audit of
the security controls that should have protected any data
disclosed during the breach. The contract language as
well as laws and regulations also stipulate that the third-
party provider retains some legal liability for the loss.
Which of the following accurately describes the
responsibility and accountability borne by both your
organization and the third-party provider during this
event?
A. All the responsibility and accountability are placed
on the third-party provider.
B. All of the responsibility and accountability are placed
on the organization.
C. The third-party provider has the responsibility for
protecting the data, but the organization retains
accountability.
D. Both the third-party provider and the organization
bear responsibility and accountability for the loss of
the data as well as legal liability.
4. Which of the following is the major risk factor associated
with integrating new or emerging technologies into an
existing IT infrastructure?
A. Security mechanisms
B. Data format
C. Vendor supportability
D. Interoperability
5. During a vulnerability scan of its systems, your
organization discovers findings that cannot be remediated
without breaking several critical line-of-business
applications. Which of the following should be the path
forward in determining a solution to this problem?
A. You should patch the systems regardless of the
impact of the line-of-business applications since
security is far more critical to the organization.
Business process owners must accept this and
configure their line-of-business applications
accordingly.
B. You must not patch a system that would affect the
line-of-business applications, since this affects the
mission and bottom line of the organization, which is
far more important to the organization than its
security posture.
C. You should employ the risk avoidance response by
not worrying about the security or business
ramifications of the problem, since it is beyond your
control and will waste critical resources in
addressing it.
D. You should evaluate the risk of patching systems and
suffering the impact to the line-of-business
applications versus the security risk if the systems
are not patched and then allow management to make
a decision based on that risk. Any exceptions should
be formally approved and documented.
6. Your company has implemented several new controls to
strengthen security inside its facility. These controls
include stronger steel doors, with new cipher locks, and
two more guard stations to keep unauthorized personnel
from entering sensitive areas. Which of the following
types and functions would those new controls be
considered?
A. Physical, detective
B. Physical, preventive
C. Administrative, preventive
D. Administrative, deterrent
7. Your company is now doing business internationally, and
as a result, one of its key business partners overseas
requires that it adopt international security standards. You
examine several different standards, and you are already
using internally developed controls that may be mapped
to a new standard. Which of the following control
standards is the most appropriate to use when working
with international partners?
A. NIST SP 800-53
B. CIS controls
C. ISO/IEC 27001/27002
D. HIPAA controls
8. Your company is concerned about implementing controls
for a new sensitive system it is about to install. Some
existing controls can be easily used, but the new system
will require more advanced network security devices
since the data sensitivity requirements are much higher
under laws and regulations imposed on the company.
These laws and regulations mandate a higher level of
protection because of the sensitivity of that system and
data. Which of the following is the most important
concern the company must consider when implementing
the new security devices?
A. Cost
B. Compliance
C. Risk reduction
D. Interoperability
9. Your company has installed a new network security
device and is testing it in parallel with the rest of the
infrastructure before cutting over all systems on the
infrastructure to rely on the device. Security mechanisms
within the device meet your strict governance compliance
requirements, and risk analysis shows that it can reduce
risk of an external attack significantly. However, some
but not all critical systems on the network refuse to
communicate with the device, making it difficult to
troubleshoot any issues and stopping critical data from
transiting the network. Which of the following is the most
likely issue you should examine?
A. Interoperability
B. Authentication mechanisms
C. Encryption mechanisms
D. Network protocol issues
10. Your organization is preparing to implement many new
security controls, including upgrading legacy network
security devices. While a thorough risk analysis has been
performed, there is still uncertainty about how the
network devices will interact with the rest of the
infrastructure. Management is concerned with
interoperability issues and the ability to monitor the
installation carefully. Which of the following will assist
in helping the implementation go smoothly?
A. Vulnerability assessment prior to installation
B. Parallel cutover
C. Risk analysis
D. Change and configuration management processes
11. You are collecting data for your annual risk and control
effectiveness report to the company’s board of directors.
You’re gathering data from multiple sources, but you are
concerned about the usefulness of the data used to create
a report. Which of the following is not a concern
regarding the data you’re collecting?
A. Completeness
B. Accuracy
C. Trustworthiness
D. Complexity
12. You are creating a schedule for risk reporting for various
managers and events. Which of the following strategies
should you employ regarding reporting of risk treatment
options and plans?
A. Report on their progress periodically and as needed.
B. Report annually to senior management and quarterly
to middle management.
C. Report results only when the risk treatment options
have been implemented and are complete.
D. Do not report on risk treatment options; only report
on increases or decreases in risk as required by
governance.
13. Which of the following documents should describe the
risk management methodology, to improve risk
assessment and analysis methodologies, adopted by the
organization?
A. Risk management policy
B. Risk management strategy
C. Risk assessment procedures
D. Organizational strategy
14. You are implementing a new method to quickly inform
management of any changes in the overall control
effectiveness or risk posture of the organization. You
want it to be in near real time but only include the critical
information necessary that managers need to make
informed decisions. You also want management to have
the ability to drill down into more detailed information if
needed. Which of the following reporting techniques
would fill these requirements?
A. Heat map
B. Narrative report
C. Centralized dashboard
D. Scorecard
15. You are developing metrics for senior management
regarding control effectiveness and the risk posture of the
organization. You need to create a metric that shows
managers how potential risk has been reduced due to
implementing several new controls over the past six
months. You want to aggregate this measurement into one
indicator. Which of the following would be the most
effective indicator to show this reduction?
A. Key performance indicator
B. Key risk indicator
C. Key management indicator
D. Key control indicator
Answers
1. A. As the business process owner, you have responsibility
and accountability for the risk, while the IT and physical
security departments are the control owners.
2. C. After all other risk reductions and mitigation actions
have been implemented, any residual risk must be
accepted. Risk mitigation has already occurred, and the
scenario states that risk cannot be shared with any third
party. Since these are critical business processes, the risk
cannot be avoided.
3. D. Both the third-party provider and the organization bear
some level of responsibility, accountability, and legal
liability for the data loss since both the contract language
and laws address this situation. However, it is likely that
the organization will ultimately bear the most
responsibility and accountability for the data loss to its
customers.
4. D. Interoperability is the major risk factor associated with
integrating new or emerging technologies into an existing
IT infrastructure. It covers a wide range of factors,
typically including backward compatibility, data format,
security mechanisms, and other aspects of system
integration.
5. D. You should evaluate the risk of patching systems and
suffering the impact to the line-of-business applications
versus the security risk if the systems are not patched and
then allow management to decide based on that risk. Any
exceptions should be formally approved and documented.
You cannot simply ignore the risk, and taking any action
without carefully considering the risk would be
detrimental to the organization, whether it is making the
choice to break the line-of-business applications or
ignoring the security ramifications of the problem.
6. B. The controls are physical because they involve using
physical barriers to secure the interior of the facility. The
controls are also preventive because they prevent people
from entering sensitive, restricted areas. The controls are
not administrative because they do not involve written
policy or procedures established by management.
7. C. ISO/IEC 27001 and 27002 are international standards
developed to ensure interoperability and translation of
security controls across international boundaries. The
other control sets, while they may possibly be used in
areas outside the United States, are unique to the U.S.
8. B. Compliance is actually the most important
consideration in this scenario because the company is
installing a system that is considered sensitive enough to
be regulated by laws and regulations. Since the data
sensitivity requirements under those laws and regulations
require that level of protection, the company has no
choice but to implement those advanced network security
devices, regardless of cost, interoperability, or any other
consideration.
9. A. Interoperability seems to be the issue here. Since
authentication and encryption mechanisms meet
compliance standards, it may be that they are not
interoperable with existing infrastructures. Network
protocols aren’t likely an issue since some devices on the
network can communicate with the new device. All
indications are that the new device may not be
interoperable with some of the other devices, security
mechanisms, or protocols on the network.
10. D. Proper change and configuration management
processes will assist in an orderly implementation and
changeover to the needed network security devices.
Management has the ability to carefully consider, prove,
and monitor changes to the infrastructure, with the ability
to back out the changes according to defined processes if
something goes wrong.
11. D. Data complexity is not an issue since it will be your
job as risk practitioner to distill the data into
understandable information for the board of directors.
You are most concerned with the completeness of the
data, how accurate it is, and whether it comes from a
trustworthy source.
12. A. You should report the progress of risk treatment
options both periodically, as determined by organizational
policy, and as needed, depending on the criticality or
importance of the information and the desires of
management.
13. A. The risk management policy, which supports external
and internal governance, should dictate the risk
management methodology used by the organization.
Often this methodology will include any risk assessment
or analysis methods mandated for use. Both the risk
management strategy and the overall organizational
strategy are long-term, higher-level documents that do not
go into this level of detail. Risk assessment procedures
are step-by-step processes, activities, and tasks used to
perform a risk assessment, but they do not direct overall
risk management methodologies.
14. C. A centralized dashboard can supply all the necessary
critical information managers need to view control
effectiveness and risk posture, all in one location.
Information can be fed to the dashboard in real time, and
managers would have the ability to drill down on
particular data elements to gain more details if needed.
15. B. In this scenario, a key risk indicator (KRI) could be
developed to show the reduction in risk. A key
performance indicator (KPI) would show the level of
performance of a control or other aspect of organizational
security, not necessarily the reduction in risk. A key
control indicator (KCI) would show the effectiveness of a
particular control, not the reduction in risk. Key
management indicators are published by executive
management and cover various metrics they are
concerned with regarding the overall performance of
organizational objectives. They do not indicate a
reduction in risk.
CHAPTER 4
Enterprise Architecture
The enterprise architecture within an organization affects the
business’s risk in several different ways. Aspects of enterprise
architecture risk include interoperability, supportability,
security, maintenance, and how the different pieces and parts
of the infrastructure fit into the systems development life
cycle. The business views IT as an investment of capital funds,
much like it does facilities and other equipment: as a means of
supporting the business mission. Information systems
represent a risk to the business because of interoperability,
supportability, security, and other issues. It costs the
organization money to maintain and support all the IT assets
within the organization, in the form of parts, software licenses,
training for administrators and users, and upgrades. There are
also the intangible aspects of IT, such as business value and
liability. IT systems affect the bottom line of the organization,
so a lot of thought is put into managing risk for them.
Additionally, you should take care to remember that
information technology risk is only a piece of the entire
enterprise risk picture. In the next few sections, we’re going to
talk about different aspects of the information systems
architecture and how they contribute to the overall enterprise
risk in the organization.
Platforms
Platforms are an element of the enterprise infrastructure that
contributes to information security and business risk for
several reasons. First, it costs money to field and simultaneously maintain the different operating systems and environments that come with different platforms. Platforms also introduce risk into
the environment in the form of interoperability, security, and
supportability. A diverse platform environment (with mixed
platforms, such as Windows, Linux, Macs, Unix, and so on)
can affect interoperability with other systems due to different
versions of software, different network protocols, security
methods, and so on. A diverse environment can also affect
supportability because the organization must maintain
different skillsets and a wide knowledge base to support the
diverse platforms.
On the other hand, maintaining a homogeneous
environment can reduce costs, ensure interoperability, and
allow a more common set of security controls and
mechanisms, such as patch management and configuration
management. However, there is even risk involved in a
homogeneous platform environment because of the likelihood
that a vulnerability discovered in one system would also be
shared in many others, offering a wider attack vector for a
potential malicious actor. It is really a matter of the systems
development life cycle as to how and when platforms are
developed, introduced into the infrastructure, implemented,
maintained, and, eventually, disposed of, and there is risk
inherent in all of these different phases, as we’ll discuss in
more detail later.
Software
Software introduces risk simply because, today, it’s so critical
to business operations. Businesses need not only basic word
processing and spreadsheet software but also complex
databases, line-of-business applications, specialized software,
security software, and other types of applications. Software
must be managed within its own life cycle as well; it is
constantly being patched, upgraded, superseded, and replaced
by better, faster software with more features that usually costs
more. Risks that are inherent to managing applications within
an organization include supportability, backward
compatibility, data format compatibility, licensing, and proper
use.
Adding to this complexity are the decisions an organization
makes in terms of the selection of proprietary software, open-
source software, general-purpose commercial software, or
highly specialized software. All these different categories
incur different levels of cost, supportability, licensing, and
feature sets. Interoperability also plays a part in application
risk, as it does with other infrastructure components.
Applications that do not use common data formats or produce
usable output for the organization create the risk of expense or
additional work that goes into transforming data between
incompatible applications. Applications also introduce risk into the business environment through the level of security mechanisms built into them and how effective those mechanisms are in protecting the data residing in the application.
It’s worth mentioning here that web-based applications, in
addition to presenting the same risks as normal client-server
apps, also have their own unique risks. Security is a definite
risk imposed by web-based applications since they often
directly connect to unprotected networks such as the Internet.
Other risks include those that come from the wide variety of
web programming languages and standards available for
developers to use.
Databases
Databases, as a subset of applications, impose some of the same risks that applications and other software do. Additionally, databases incur risks associated with compatibility, privacy, and security, as well as risks unique to database systems such as data aggregation and inference. Unauthorized access and data loss are also huge risks that databases introduce into the enterprise environment.
Operating Systems
Although we discussed platforms in a previous section, it’s
also worth mentioning operating systems as their own separate
risk element in the enterprise infrastructure. The terms
platform and operating system are sometimes used
interchangeably, but, truthfully, a platform is more a hardware
architecture than an operating system categorization.
A platform could be an Intel PC or a tablet chipset, for
example, which are designed and architected differently and
run totally different operating systems. Different operating
systems, on the other hand, could run on the same platform but
still introduce risk into the organization for the same reasons
discussed previously with applications. For example, there are
interoperability and supportability risks and all the other issues
that go hand in hand with the normal operating system life
cycle, such as patch and vulnerability management. Licensing,
standardization, level of user control, and configuration are
also issues that introduce risk into the organizational
computing environment.
Networks
Networks are another aspect of enterprise infrastructure that is
absolutely critical to protecting data. They are most effective
when they are implemented in a way that not only incorporates
the business logic of the organization but also follows the data
flow—taking into account how the software is implemented and integrated with other systems such as databases.
Let’s first take a step back to understand what a network is.
At its most basic level, a network is a mechanism that allows
systems to communicate with each other. The components are
often devices such as switches (for local network
communication) and routers (for communications between
networks). There are a number of protection mechanisms on
networks such as firewalls (which regulate network traffic),
intrusion protection systems (which protect traffic at a deeper
level than firewalls), threat intelligence gateways (which are
designed to protect external networks prior to connecting to
firewalls), and so on. This is only a sampling of the protective
technologies that are network based. The deeper the understanding of these technologies, the more granular a risk assessor’s analysis can be.
The philosophy one uses to set up the location of protective
network devices can be referred to as the architecture. For
example, for sensitive data, most compliance frameworks
require that the data not be stored in the database on the same
local network segment as a web server. This way, if the web
server is compromised, attackers will not have immediate
access to the data. If they sit on the same network, that may be
considered a higher risk in a risk assessment—the
recommendation being to move the database to another
network behind the web server, with firewalls and intrusion
detection systems being the intermediary between the two
local networks.
Another factor to consider is encryption of data in motion.
If a legacy system does not have encryption and it is sending
data to another location on the Internet, there is a risk that anyone between the networks could carry out an on-path attack (formerly known as a “man-in-the-middle attack”). A VPN tunnel can be
set up between the two networks to ensure that at least the
traffic is encrypted as it traverses the networks (or Internet).
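As a related illustration, encryption of data in motion can also be applied at the application layer rather than through a VPN. The following minimal Python sketch, using the standard ssl module and an example host name, simply shows traffic being wrapped in TLS before it leaves the system; it is not a substitute for a network-level VPN tunnel.

    # Minimal sketch: application-layer encryption of data in motion with TLS.
    # The host name is an example placeholder.
    import socket
    import ssl

    def send_over_tls(host, port, payload):
        context = ssl.create_default_context()   # verifies the server certificate
        with socket.create_connection((host, port)) as raw_sock:
            with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
                print("Negotiated protocol:", tls_sock.version())  # e.g., TLSv1.3
                tls_sock.sendall(payload)

    if __name__ == "__main__":
        send_over_tls("example.com", 443,
                      b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")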
In the end, networks and networking can be extremely
challenging and detailed. What we have covered here only
scratches the surface of what is entailed by networks and
networking. It is worth your time to explore this topic in depth
to help create a better network and ultimately add to a risk
assessment.
Cloud
No book on risk would be complete without considering
cloud-based risks. The concept of “Anything as a Service”
(XaaS) means that, for a price, nearly anything a user or
company would use a computing system for can be delivered
to them by a cloud service provider through the cloud
infrastructure—typically through a thin client or web interface.
In such a serverless architecture, a cloud provider manages the
server and associated architecture and allocates resources as
necessary to the end users. This means less upfront investment in hardware and software by the customer, greater scalability, and centralized management of administration and security concerns. Each cloud infrastructure is a little different,
but there are three main types of cloud infrastructure to
consider when looking at risk:
• Software as a Service (SaaS) allows a customer to
essentially lease software, such as applications and
databases, thus enabling rapid rollout to the greater
user community.
• Platform as a Service (PaaS) provides the framework of
an operating system and associated software required to
perform a function (for example, the Linux operating
system and components needed to run a web server).
• Infrastructure as a Service (IaaS) provides the ability to
quickly stand up virtual machines, storage devices, and
other infrastructure that would otherwise require the
purchase of physical devices.
Gateways
A number of other technologies that can be used to protect
organizations can be factored into a risk assessment. For
example, secure e-mail gateways and secure web gateways are
a must-have to protect organizations from web-based threats.
These technologies can help filter malware, block malicious links or websites, prevent phishing, and perform a host of other tasks. They do not fit neatly into the other categories discussed here, so they deserve special mention. A good risk
assessor will learn about these technologies and how they can
help reduce the risks related to the organization. That
knowledge can be applied to the risk assessment process.
IT Operations Management
Managing the enterprise infrastructure is one of the most
work-intensive efforts in an organization, especially in a larger
business. The enterprise infrastructure in an organization
covers a broad scope of areas. Infrastructure, of course, covers
workstations, printers, and other end-user devices. It also
covers the major pieces needed to conduct business, such as
servers, cabling, switches, routers, and a variety of other
network and security devices. Most organizations also include
wireless networks in their supporting equipment. Additionally,
the infiltration of mobile devices into businesses makes this a
new and sometimes difficult area to bring under the
organization’s infrastructure umbrella. Challenges with the
enterprise infrastructure include not only maintenance, upgrades, updates, and implementing new technologies but also some of the more traditional management challenges, such as budgeting, project management, and staffing.
Key areas in IT operations management include server and
infrastructure management, end-user support, help desk,
problem escalation management, and, of course, cybersecurity.
IT managers must maintain and meet any internal agreements
that cover guaranteed service delivery in support of the
different divisions within the organization, as well as service
level agreements (SLAs) with other organizations. While
information security personnel tend to focus more on IT risk,
we find that most of the risk factors, threats, and
vulnerabilities affecting IT also affect all the operations within
the business. In the next few paragraphs, however, we’ll focus
on those that directly affect managing the IT operations in an
organization.
Management of IT operations incurs risk factors such as the
size and complexity of the organization and its IT
infrastructure, the criticality and priority of the different
systems that make up the infrastructure, and the internal
management processes of the organization. Resourcing issues
also affect the management piece of IT operations; staffing the right people, who are trained on the various technologies the organization needs, and creating a budget that maintains the current level of support and allows for future IT growth are examples of resource-related risk factors. Additionally, compliance with legal and governance requirements in the face of increased regulation is another risk factor IT managers must deal with.
Organizational structure also affects IT operations because
the IT infrastructure may be managed on either a centralized
or a decentralized basis—sometimes by one centralized IT
shop, or in the case of larger organizations, by multiple areas
within other divisions that are delegated the task of managing
their own piece of the infrastructure. IT might also be
managed on a functional basis; the accounting and human
resources divisions may have the responsibility for managing
the systems that support their mission. Additionally, the IT
infrastructure may be managed on a geographical basis, in the
case of physically separated locations in large organizations.
All these structural considerations are factors that contribute to
risk in managing IT operations.
Also directly affecting IT operations risk are two key
processes: the change and configuration management
processes. We’ve already discussed risk factors that affect the
SDLC model in an organization, as well as how new and
emerging technologies can affect the existing infrastructure.
These same risk factors also directly affect the daily
management of IT operations simply because they introduce
change into the network. Some of this change can be planned
and carefully introduced, but often there is change—from the
large-scale network side, all the way down to individual
configuration items on hosts—that may have unforeseen
consequences and affect the network in different ways. How
the organization deals with change and configuration
management is a risk factor that affects not only the SDLC but
also the management of day-to-day operations.
Threats that affect the IT operations are like those that
affect other areas of the business and could be external or
internal threats from a variety of sources. Some of these
threats have a ripple effect in that they may first affect other
areas of business and, in turn, affect IT. For example, the
threat of a poor economy may affect the profitability and
sustainment of the business, resulting in cuts, which often
include IT personnel or equipment. Obviously, there are also
direct threats to the daily management of the IT infrastructure.
These include external threats such as hackers, of course, but
also more commonly come as increases in cost to maintain
operations, issues with external service providers, and disaster-
related events (weather, fire, and so on). Internal threats could
include those carried out by malicious insiders or even careless
workers, such as theft, sabotage, accidental equipment
breakage, and so on. Internal threats could also come from
management processes and include budget cuts, loss of trained
personnel, shifts in organizational priorities, and transitioning
from one market segment to another.
Technical vulnerabilities are beyond the scope and purpose
of this book; our focus here is on those associated with the IT
management aspect of the organization. Vulnerabilities
affecting IT management include faulty processes and
procedures, lack of resources, and lack of trained technical
personnel. An end-user population that has not been trained or
held accountable for proper care and treatment of the
infrastructure could also be a vulnerability. Other
vulnerabilities might include a lack of infrastructure
monitoring, a lack of control of sensitive information in IT
systems, and a failure to maintain a stable SDLC throughout
the entire infrastructure.
Project Management
Organizations use project, program, and portfolio management
to oversee and sustain both short- and long-term aspects of
systems and processes. These categories apply to the scope
and scale of different sets of activities or processes within an
organization. Projects are limited duration sets of activities
geared toward a particular goal; programs are ongoing, longer
term, and may also encompass several individual projects as
well as other activities specific to processes that may have an
indefinite duration. Portfolio management is the oversight of
several different programs by a senior person in the
organization. Keep in mind that the major difference between
a project and a program is duration; a project has a definite
beginning and ending, whereas a program is indefinite.
At the core of a project are three primary drivers: the scope
(the amount or range of work), schedule (when work is to be
done and its completion date), and cost (including all the
resources expended toward the completion of the project). All
projects depend on these three elements to ensure success.
Any shortfalls in cost, delays in schedule, or out-of-scope work (also called scope creep) affect the ability of the project to succeed: finishing on time, within budget, and with all the agreed-upon work completed. We’ll discuss the risk factors that can affect all three of these elements next.
Since the three major elements of a project or program are
scope, schedule, and cost, risk factors associated with projects
and programs primarily affect each of these elements. Factors
that affect scope often come from disagreement on exactly
what work must be accomplished, in what order, and who will
do it. These factors should be examined and agreed upon
during the requirements process, where the project is formally
scoped. In complex projects, factors that may affect scope
include the necessity to task workers from different sections or
departments, the exact amount of work to be done, in what
order it will be done, and the priority of each task and subtask.
The political culture within the organization can adversely
affect scope as well since there may be disagreement between
departments or executives on what work is required and whose
responsibility it is to accomplish it.
The scheduling component of a project has several risk
factors as well. The amount of work to be done (scope) affects
the schedule as a risk factor because of the time given to
accomplish a specific amount of work and the resources
allocated to this work. Lack of workers or materials adversely
affects the schedule, as does the availability of equipment,
work location or site, and external factors such as contract
negotiations, government shutdowns, and so on.
Cost risk factors stem from lack of budget control during
the project, including an inaccurate budget or estimation of
cost, unexpected expenditures, a weak economy, and so on.
Project costs are affected by the cost of labor (to include
wages, insurance, and benefits) as well as the cost of supplies
or materials needed to complete the project. Additionally,
since all three of the elements of scope, schedule, and cost are
also risk factors for each other, a delay in schedule or any
extra, out-of-scope work impacts the project budget and costs
as well.
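As a simple illustration, the following Python sketch tracks the three drivers against their baselines so that scope creep, schedule slippage, and cost overruns surface early. The thresholds and figures are assumptions for illustration only.

    # Illustrative tracking of scope, schedule, and cost against baselines.
    project = {
        "scope":    {"planned_tasks": 120, "approved_changes": 8, "unapproved_changes": 3},
        "schedule": {"planned_days": 90,  "forecast_days": 104},
        "cost":     {"budget": 250_000,   "forecast": 272_000},
    }

    alerts = []

    if project["scope"]["unapproved_changes"] > 0:
        alerts.append(f"Scope creep: {project['scope']['unapproved_changes']} unapproved work items")

    slippage = project["schedule"]["forecast_days"] - project["schedule"]["planned_days"]
    if slippage > 0:
        alerts.append(f"Schedule slippage: {slippage} days beyond baseline")

    overrun = project["cost"]["forecast"] - project["cost"]["budget"]
    if overrun > 0:
        alerts.append(f"Cost overrun: ${overrun:,} over budget")

    for alert in alerts:
        print("RISK:", alert)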
Other risk factors for projects include training and staffing the project with the right mixture of personnel: people who have the right skill sets, attitudes, and focus to make the project or program a success. A manager without the right project
management skill set is a risk factor that can affect scope,
schedule, and cost if they do not manage those three critical
elements properly. A technician without the right technical
skills may cost the project more money in training or delay
work unnecessarily due to rework or slow production.
In addition to factors such as scope, cost, and schedule, the indefinite or longer-term duration of a program is itself a risk factor, because you must project how the program will be staffed, funded, equipped, and otherwise managed over a longer, indeterminate period. Therefore, risk
factors could be long term and fluctuate over that time. Factors
such as market, economy, and technology changes will
influence risk in a program much more so than a project,
affecting its scope, schedule, and cost over a longer period.
The threats to a project or program are similar; the major
difference is the duration of the time that the threat represents
potential harm to a project or program. Keep in mind that
threats for a project or program aren’t those you might
traditionally think of as threat sources; they are rarely
malicious in nature and usually come from the business
environment itself. For example, threats to cost are usually
those that involve money or resource issues, such as price
increases, currency value fluctuation, stock market variations,
and so on. These are usually external threats; internal threats to
costs might include a sudden budget restructuring, cutting
funds for a project, bankruptcy, and so on. Threats to the
schedule might include worker shortage, strikes, inadequate
training, delays in contract negotiations, delays from suppliers,
and so on. Scope threats usually involve extra or additional
work due to faulty requirements gathering and decisions about
what falls into the project’s scope and scale.
NOTE Sometimes it’s difficult to distinguish between threats to each of these elements individually and threats that affect all three project elements simultaneously, because any threat affecting one element will usually, even if only indirectly, affect all three.
Since vulnerabilities are weaknesses or a lack of protective
controls, the vulnerabilities found in project and program
management are inherent to weaknesses in their critical
elements and processes. For example, a weakness often found
in the cost element of a project or program is a failure of the
organization to adequately budget for all the resources it
requires for the project. The organization might not list every
piece of equipment or every major supply it needs or consider
labor costs such as overtime in the event of schedule slippage.
It may not adequately negotiate firm prices with a supplier,
who then may raise prices on critical parts or equipment.
These are all vulnerabilities inherent to the budgeting and cost
elements of a project.
Scheduling also has some common vulnerabilities associated with it. If an organization fails to build in allowances for potential scheduling delays, such as work stoppages, dependencies on other organizations to fulfill parts of the schedule, or even holidays, it can have trouble getting the project completed on time. Another scheduling vulnerability is incorrectly estimating the amount of time it takes to perform a task or get a piece of work accomplished.
Vulnerabilities inherent to the scope element of a project
affect how much work must be done, in what order, and by
whom. A work breakdown structure (WBS) is a document, developed by the project manager, that covers exactly this information and more. The WBS describes, in excruciating detail, the work, its subcomponent parts and tasks, what skill sets are needed, and what resources are required to do the job. Other documents may break individual tasks down into much more detail, even describing step by step how a task is accomplished. If the project manager and team do not create these documents, that is a weakness: the work may not be performed exactly as needed, resources may not be adequately allocated, or the work may be performed by unskilled persons. Additionally, the work may not be delivered in the quantity or at the quality the requirements call for. Failing to set requirements at the start of the project, and to periodically check and track the work against those requirements, is another scope vulnerability, since it can affect whether the work meets the established requirements.
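To make the idea concrete, the following minimal Python sketch shows the kind of information a WBS fragment might capture; the task names, skill sets, and hour estimates are purely hypothetical, and a real WBS is far more detailed.

# Hypothetical fragment of a work breakdown structure; a real WBS is far more detailed.
wbs = {
    "1.0 Build customer portal": {
        "1.1 Design database schema": {"skills": ["data modeling"], "hours": 40},
        "1.2 Develop login module": {"skills": ["web development"], "hours": 80},
        "1.3 Security testing": {"skills": ["penetration testing"], "hours": 24},
    }
}

# Rolling up the estimates gives the total effort for the work package.
total_hours = sum(task["hours"] for task in wbs["1.0 Build customer portal"].values())
print(total_hours)  # 144

Even a sketch like this makes gaps visible, such as tasks with no assigned skill set or no effort estimate.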
An additional vulnerability that simultaneously affects all
three project elements is the failure to monitor or control all
these elements on an ongoing basis throughout the life of the
project or program. Scope, schedule, and cost must be
monitored on a continuing basis to ensure that they meet the
original requirements set forth in the project’s charter. The
project or program manager must monitor and control these
things to prevent cost overruns, schedule slippage, and scope
creep. Failure to monitor or control these elements is a serious
vulnerability and could jeopardize the success of the entire
project or program. Table 4-2 gives examples of some of the
risk factors, threats, and vulnerabilities related to project and
program management.
Table 4-2 Examples of Project Risk Factors, Threats, and
Vulnerabilities
Recovery Objectives
Two key areas of concern in business continuity are the
recovery point objective (RPO) and the recovery time objective
(RTO). Both areas can be measured in the context of time. The
recovery point objective is the amount of data that can be lost
based on a time measurement. In other words, if the recovery
point objective is two days, the most amount of data that can
be lost without a serious effect on the organization’s ability to
function would be two days’ worth of data. Note that this
measurement isn’t concerned with the actual amount of data in
terms of quantity, such as gigabytes, for example; instead, it’s
concerned with measuring how many days or hours’ worth of
data can be lost, at most, without impeding the function of the
business. The recovery time objective, also measured in terms of time, is the maximum amount of time the organization can tolerate a system or business process being unavailable; in other words, it is the time within which the function must be recovered after a disruption. This measurement of time could be days, hours, minutes, or even seconds.
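To illustrate how these two objectives are used, here is a minimal Python sketch; the objective values and the measurements from a hypothetical recovery test are assumptions for illustration only.

from datetime import timedelta

# Hypothetical objectives agreed upon by the business (assumed values).
rpo = timedelta(days=2)    # at most two days' worth of data may be lost
rto = timedelta(hours=8)   # service must be restored within eight hours

# Observed values from the last disaster recovery test.
time_since_last_backup = timedelta(hours=30)   # worst-case data loss window
measured_recovery_time = timedelta(hours=10)   # time taken to restore service

print("RPO met:", time_since_last_backup <= rpo)   # True: 30 hours of loss is within the 2-day RPO
print("RTO met:", measured_recovery_time <= rto)   # False: 10 hours exceeds the 8-hour RTO

In this example the backup interval satisfies the two-day RPO, but the measured recovery time misses the eight-hour RTO, which would prompt changes to the recovery strategy.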
Recovery Strategies
The recovery strategy developed by the organization is a result
of careful planning, as well as budgeting and allocating
resources to the continuity management efforts. The recovery
strategy should address the most efficient methods of getting
the business back online and functioning after properly
responding to the critical needs of protecting people and
equipment during the disaster itself. Keep in mind that
recovering the business to an operational state does not
necessarily mean it will be working at 100 percent of its
previous operational capacity; the business may only be able
to get a few key services up immediately following a disaster.
The organization should take a realistic look at the degree of
continuity possible, given the seriousness and scope of the
disaster, its resources, the environment it is working with after
the disaster, and the availability of people, supplies,
infrastructure, facilities, and equipment. All of this should be
considered in the recovery strategy planning.
Plan Testing
Testing the disaster recovery plan is of critical importance;
without testing the plan, the organization won’t really know
whether it’s effective until a disaster occurs. Normally, that’s a
bit too late to find out that you haven’t carefully considered all
the different events that could occur during a disaster and how
you will deal with them. Testing the plan involves making sure
that people know what their responsibilities are, that they are
well-trained, that they have all the equipment and resources
needed to adequately respond to disaster, and that they practice
these responses. Testing the disaster recovery plan will point
out deficiencies in training, equipment, resources, and
response activities.
Data Classification
An organization’s documentation can be voluminous,
comprising a variety of documents of varying levels of value
and importance. Depending on the type of document, the
amount of security and types of procedures used in storing and
distributing that document can greatly vary. Some documents
might be considered public, so they can be posted in a public
forum or distributed freely to anyone. Other documents might
be highly confidential and contain information that only
certain individuals should be allowed to see.
To aid in the document management effort, documents need
to be assigned security classifications to indicate their level of
confidentiality and then labeled appropriately. Each
classification requires different standards and procedures of
access, distribution, and storage. The classification also sets a
minimum standard of privileges required by a user to access
that data. If a user doesn’t have the necessary access privileges
for that classification of data, the user won’t be able to access
it. Typically, access is delineated using subjective levels such
as high, medium, and low. These should be agreed upon by
management based on the data’s sensitivity and the damage to
the organization if the data is subjected to unauthorized access.
Several levels of classification can be assigned, depending
on the type of organization and its activities. A typical
organization might have only two classifications: private and
public. Private classified documents are intended only for the
internal use of the organization and can’t be distributed to
anyone outside the organization. Public documents, however,
would be available to anyone. Government and military
institutions might have several levels of confidentiality, such
as Unclassified, Confidential, Secret, Top Secret, and so on.
Each level of classification represents the level of severity if
that information is leaked. For example, the lowest level
(Unclassified) means that the document is not considered
confidential or damaging to security and can be more freely
distributed (though not necessarily releasable publicly). At the
highest level (Top Secret), documents are highly restricted and
would be severely damaging to national security if they were
to fall into the wrong hands. Each document needs to be
assigned a classification depending on the sensitivity of its
data, its value to the organization, its value to other
organizations in terms of business competition, the importance
of its integrity, and the legal aspects of storing and distributing
that data.
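A minimal sketch of how classification labels can drive access decisions follows; the level names and their ordering are hypothetical and would be defined by your organization’s classification policy.

# Hypothetical classification scheme, ordered from least to most sensitive.
LEVELS = {"public": 0, "private": 1, "confidential": 2, "secret": 3, "top_secret": 4}

def can_access(user_clearance: str, document_classification: str) -> bool:
    """A user may access a document only if cleared at or above its classification."""
    return LEVELS[user_clearance] >= LEVELS[document_classification]

print(can_access("confidential", "private"))   # True
print(can_access("private", "top_secret"))     # False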
Document Disposal
Document disposal can often be a tricky issue. In some cases,
a document needs to be destroyed to avoid future legal or
confidentiality ramifications. In other cases, it’s illegal to
destroy certain documents that are required by law as evidence
for court proceedings. Only your organization’s legal
department can decide on retention and disposal policies for
documents. Once decided on, these policies need to be
communicated to workers to ensure that sensitive documents
are either destroyed or retained as per their classification.
Without proper disposal techniques, the organization is
susceptible to dumpster diving attacks.
Planning
The first phase of the SDLC normally involves some type of
conceptual planning process, where the need for the system is
determined and a general idea is developed of what the system
needs to do and what purpose it will serve. Some models refer
to this as the initiation phase. During this phase, the feasibility of developing or acquiring a system is studied, including considerations such as when the system is needed, how much it might cost, what type of staffing is needed to develop, implement, and maintain it, what other systems it might need to connect to or interface with, and so on. Security and privacy
professionals want to be involved in this phase of the life cycle
so that they can understand the basic concepts of the system
and provide feedback on any potential security or privacy
risks.
Often, it is during the planning phase that a business case is developed to help the organization determine whether the system (or changes to an existing system) is justifiable. A
business case describes the resources required to build (or
change) the system and the economic benefits expected to be
derived from the system.
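As a simple illustration of the economics a business case weighs, the following sketch computes a payback period from hypothetical cost and benefit figures (the numbers are invented for illustration only).

# Hypothetical business-case figures (assumed values).
development_cost = 250_000          # one-time cost to build the system
annual_operating_cost = 40_000
annual_benefit = 180_000            # expected annual savings or revenue from the system

annual_net_benefit = annual_benefit - annual_operating_cost
payback_years = development_cost / annual_net_benefit
print(round(payback_years, 1))      # ~1.8 years to recover the initial investment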
The phrase move to the left or shift left describes the desire
for security and privacy professionals to be involved as early
as possible (as far to the left on a timeline) in the
conceptualization, requirements, and design of a system, in
order to help avoid situations where basic concept or design of
a system may be deeply flawed from a security or privacy
perspective.
Requirements
The next phase of the SDLC is generally the requirements
phase, and this phase is typically named as such in most
models. During this phase, different sets of requirements are
developed. These requirements might include functional
requirements, performance requirements, security
requirements, and business requirements. Requirements dictate, in concrete terms, what a system is supposed to do, how it will do it, to what degree, and what standards it must meet. Without a clear set of requirements, there can’t be
any good traceability back to what the system was supposed to
do in the first place. That’s why it’s very important to try to
accurately cover all the possible requirements and needs the
system may have to meet. This is also where the scope of the
development process is established. Security and privacy
professionals should be invited to develop security and privacy
requirements for the system, to ensure that later phases of the
project will include all required characteristics.
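One common way to preserve that traceability is a requirements traceability matrix. The following minimal sketch uses hypothetical requirement, design, and test identifiers to flag requirements that have no test coverage.

# Hypothetical requirements traceability matrix: each requirement maps to the
# design elements and tests that demonstrate it was met.
traceability = {
    "REQ-001 Encrypt data at rest": {"design": ["DES-014"], "tests": ["TC-101", "TC-102"]},
    "REQ-002 Two-factor login":     {"design": ["DES-007"], "tests": []},
}

# Flag requirements with no test coverage, a common source of scope and quality gaps.
untested = [req for req, links in traceability.items() if not links["tests"]]
print(untested)  # ['REQ-002 Two-factor login']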
Design
The next few phases can be a bit different based on the model
under discussion and whether the product being developed is a
system or software. System developers would move from the
requirements phase into designing the architecture in terms of
developing a general high-level design of how the system is
supposed to work, what major components it might have, and
what other systems it might interface with. Software
developers similarly might design software architecture at a
higher level during this phase. As the SDLC development
process moves forward, this overall architecture is
decomposed down into smaller pieces until finally individual
components or pieces of code can be developed. Again, how
this design process progresses depends on which model is
being used. Security and privacy professionals should be asked
to review the design of the system to ensure that requirements
were properly represented.
Development
After the design phase, a decision must be made as to whether to build the system or software or simply acquire it from a third party. Some systems and software can be bought from other developers, but some must be developed specifically for the end application or end use. If the system is to be acquired from an outside source, the requirements and design specifications are provided to the selected supplier, which is then expected to deliver a system or software that meets them. If the system is to be developed in-house, the next phase of the SDLC comes into play.
During this phase, the software is written, and systems are
assembled from individual components. Organizations can use
tools to examine a system’s source and object code to identify
any security-related defects that could be exploited by an
attacker.
Testing
Following the development phase, many SDLC models
include a test phase. In the test phase, different aspects of the
software and systems are tested. Many different tests can take
place during this phase. Some of these tests might involve unit
testing, which simply means to test an individual component
or unit of software at its most basic level for functionality and
performance. Integration testing, another type of testing,
usually involves testing the overall function and performance
of all the components of the system or piece of software
working together, as they would when assembled. During this
phase, other types of testing could take place, including
interface testing (testing different aspects of the system when
it interfaces with other separate systems in its environment)
and security testing (which may include compliance testing,
vulnerability testing, and even penetration testing). User
acceptance testing involves end users of the system
performing various functions to determine whether they will
accept the system in its current, developed state.
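As a small illustration of unit testing, the following Python sketch tests a single hypothetical function in isolation using the standard unittest module; the function under test and its expected behavior are assumptions for illustration.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percentage_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()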
Disposal
Once a system has been in operation for a while, it may
eventually be superseded or replaced by a more efficient or
updated system. When the system is replaced or removed from
the environment, it is said to have entered the disposal phase.
Some models refer to this also as the retirement phase. During
this phase, the system is taken out of production, dismantled,
and is no longer used. Its components may be reused, or they
may be disposed of.
SDLC Risks
Most of the risk factors inherent to the SDLC affect both the
entire model and each of the different phases separately. Since
the SDLC could be considered to have several different facets,
including a management process, an engineering framework,
and even a methodology, you could say that risk factors that
affect the SDLC are also common to other types of processes.
For instance, earlier, we discussed risk factors that affect
project and program management. These risk factors affect
scope, schedule, and cost. The SDLC suffers from similar risk factors because each of its phases, as well as the SDLC as a whole, has its own scope, schedule, and cost elements. Each of these
phases can also be affected by risk factors that impact the
availability of resources (manpower, funding, equipment,
facilities, and so on), such as management commitment and
organizational structure.
Unique risk factors that affect the SDLC include those that
affect system changes, system configuration throughout the
life cycle, and those that affect releasing different versions of
the system as updates take place. Most of these elements occur
during implementation, maintenance, and sustainment of a
system but also can occur in other phases of the SDLC where
multiple versions of a system are fielded simultaneously, as
well as in models where there is a continuous iterative process
of updating and upgrading a system or software. Risk factors
that affect system changes include how well the change and
configuration management processes work and how changes
are implemented into the system. Change and configuration
management processes that are not well managed are risk
factors, because this could cause changes that are not
documented or not tested for security, functionality, or
performance. Additionally, factors such as the complexity of
the system, how the organization is structured to manage the
systems development process, and how rapidly system changes occur are also potential risk factors.
Threats to the SDLC manifest themselves when the systems produced are not interoperable or compatible with other systems and their environment, or are simply not secure.
Other threats might include systems that do not meet the
requirements they were originally intended to meet. These
threats would affect both the functionality and performance of
the system as well. Threats to the design process include faulty
design specifications as well as faulty requirements input from
the previous requirements phase. Most of the threats to the
SDLC come in the form of mismanagement,
miscommunication, incorrect expectations, and lack of a
commitment of resources or expertise.
Many different vulnerabilities can affect the SDLC; again, a
great many of them have been previously discussed as also
affecting the project or program management process. These
vulnerabilities affect scope, schedule, and cost, of course, but
can also affect the quality of the system or product. In
addition, the system’s security, functionality, and performance
are also affected by these different vulnerabilities. While
probably too many to list here, examples of vulnerabilities that
affect the SDLC start with a lack of firm requirements and
mixed expectations from the different system stakeholders. A
lack of documentation in all phases is also a significant
vulnerability to the entire SDLC, but particularly to the
requirements and design phases. Failure to develop solid
design specifications and systems architecture is a
vulnerability that can result in a faulty design that does not
meet system requirements. In the development phase, lack of
adequate consideration for system interfaces, as well as how
subcomponents fail to support higher-level components or
meet technical requirements, can result in a shoddy system or
software. Failure to test systems properly and thoroughly, as
well as their interaction with their environment, may mean that
serious functional or performance issues with the system are
not discovered until well after it is put into production.
Vulnerabilities in the implementation phase include faulty
implementation, lack of traceability to original requirements,
and failure to properly maintain a sustained system after it has
been put into production. And lastly, vulnerabilities associated
with the disposal or retirement phase of the SDLC can result in
systems that are not properly replaced or not securely disposed
of. This can cost the organization money and potential
liability.
Emerging Technologies
New or emerging technologies can present risks to
organizations simply because there are several different
important considerations when integrating the latest
technologies into the existing infrastructure. Many
organizations have an unfortunate tendency to rush out, buy,
and attempt to implement the latest and greatest technologies
on the market without careful planning or consideration of the
existing infrastructure and how it will react to the new
technologies. One of the primary considerations organizations
must look at is making a business case for the new technology,
which may be to fill a gap that the older technology does not
provide or to provide a capability that the organization must
now have to compete in the market space. In other words, the
organization really must justify new technologies to make
them worth the risk they could incur.
Emerging technologies have several risk factors inherent to
their integration into the existing infrastructure. If the
organization has been able to justify the implementation based
on a true business need for the emerging technology, it must
consider several risk factors. One of the major risk factors is
interoperability with existing infrastructure. Newer technologies don’t always work properly with older systems right out of the box; adjustments may need to be made to the
existing infrastructure to integrate the new technology, or even
bridging technologies may be needed to connect the two
together. Interoperability doesn’t just involve the right
connections; it can involve data formats and flows, security
methods, interfaces into other systems, and changes to
business processes. These considerations, and many others, are
risk factors that must be considered before acquiring and
integrating new technologies.
Another risk factor is security. New technologies may have
security mechanisms that are not necessarily backward
compatible with existing ones. Examples include encryption
algorithms and strengths, identification and authentication
technologies, integrity mechanisms, and even redundant or
backup systems. Additionally, the systems may involve a
learning curve that may intimidate users who must now learn
new security methods for the system. The human factor can be
a weak link in the security chain, so either lack of training or a
lack of adaptability to the new security mechanisms can
introduce risk.
Earlier, when we discussed the SDLC model, we pointed
out that system updates and changes can be risky if not
managed properly. Integrating new technologies into the
environment with older ones can introduce both intentional
and unintended changes into the environment as well,
affecting the stability of the organization’s SDLC with a
particular system. Therefore, change is also a risk factor. Even
in the disposal or replacement phase of the SDLC, introducing
new technologies to replace older ones can be problematic if
not planned and executed properly. New technologies that are
not adequately tested for functionality, performance,
integration, interoperability, and security may not be able to
adequately replace older systems, resulting in extended costs
and possibly even requiring an extension of the older systems’
life cycle.
Threats resulting from the introduction of new and
emerging technologies into the existing infrastructure are
numerous. If the organization has failed to adequately plan for
the new technology, these threats can become significant.
Some of these threats include untested or unproven
technologies, non-interoperability, incompatible security
mechanisms, and suitability of the technology for use in the
organization. The organization could also incur additional
costs and require more resources due to faulty implementation.
These threats can be minimized through careful planning and
by integrating new technologies using a stable SDLC model.
As with threats, vulnerabilities associated with emerging
technologies are numerous as well. Vulnerabilities could
include a lack of trained staff committed to managing and
implementing the new technology. Lack of adequate project
planning is also a serious vulnerability that could affect the
organization’s ability to effectively integrate new technologies.
Another vulnerability could be a weak support contract or
another type of warranty, guarantee, or support for a new
technology or system. Most of these vulnerabilities appear
when a new technology is first implemented and tend to
become mitigated or lessened as the technology is integrated
into the existing infrastructure, but they still exist.
Access Control
As an information security professional, you probably already
know that a security control is a security measure or protection
applied to data, systems, people, facilities, and other resources
to guard them from adverse events. Security controls can be
broken down and categorized in several ways. Access controls
directly support the confidentiality and integrity goals of
security, and indirectly support the goal of availability. Access
control essentially means that we will proactively ensure that
only authorized personnel are able to access data or the
information systems that process that data. Access controls
ensure that only authorized personnel can read, write to,
modify, add to, or delete data. They also ensure that only the
same authorized personnel can access the different information
systems and equipment used to store, process, transmit, and
receive sensitive data.
There are several different types of access controls,
including identification, authentication, and authorization
methods, encryption, object permissions, and more.
Remember that access controls can be administrative,
technical, or physical in nature. Administrative controls are
those implemented as policies, procedures, rules and
regulations, and other types of directives or governance. For
example, personnel policies are usually administrative access
controls. Technical controls are those we most often associate
with security professionals, such as firewalls, proxy servers,
VPN concentrators, encryption techniques, file and folder
permissions, and so on. Physical controls are those used to
protect people, equipment, and facilities. Examples of physical
controls include fences, closed-circuit television cameras,
guards, locked doors, gates, and restricted areas.
In addition to classifying controls in terms of
administrative, technical, and physical, we can also classify
access controls in terms of their functions. These functions
include preventative controls, detective controls, corrective or
remedial controls, deterrent controls, and compensating
controls. All the different controls can be classified as one or
more of these different types of functions, depending on the
context and the circumstances in which they are being used.
Controls were described in more detail in Chapter 3.
Authorization
Authentication to a resource doesn’t automatically guarantee you full, unrestricted access to that resource. Once you are
authenticated, the system or resource defines what actions you
are authorized to take on a resource as well as how you are
allowed to interact with that resource. Authorization is what
happens once you’ve successfully identified yourself and have
been authenticated to the network. Authorization dictates what
you can or can’t do on the network, in a system, or with a
resource. This is usually where permissions, rights, and
privileges come in. In keeping with the concept of least
privilege, users should only be authorized to perform the
minimum actions they need in order to fulfill their position’s
responsibilities. Authorization has a few different components.
First, there is need-to-know. This means that there must be a
valid reason or need for an individual to access a resource, and
to what degree. Second, an individual may have to be trusted,
or cleared, to access a resource. This may be accomplished
through a security clearance process or non-disclosure
agreement, for example.
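A minimal sketch of authorization enforcing least privilege follows; the roles and permissions are hypothetical, and real systems typically rely on a directory service or access management platform rather than a hard-coded table.

# Hypothetical role-to-permission mapping granting only the minimum rights needed.
ROLE_PERMISSIONS = {
    "payroll_clerk": {"read_timesheets", "submit_payroll"},
    "payroll_auditor": {"read_timesheets", "read_payroll_reports"},
}

def is_authorized(role: str, action: str) -> bool:
    """Permit an action only if it is explicitly granted to the user's role."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("payroll_clerk", "submit_payroll"))     # True
print(is_authorized("payroll_auditor", "submit_payroll"))   # False

Note that the auditor role cannot submit payroll, which also illustrates separation of duties.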
Accountability
Accountability means that a person is going to be held
responsible for their actions on a system or with regard to their
interaction with data. Accountability is essentially the
traceability of a particular action to a particular user. Users
must be held responsible for their actions, and there are
different ways to do this, but it is usually ensured through
auditing. First, there must be a unique identifier that is tied
only to a particular user. This way, the identity of the user who
performs an action or accesses a resource can be positively
established. Second, auditing must be properly configured and
implemented on the system or resource. What we are auditing
is a user’s actions on a system or interactions with a resource.
For example, if a user named Sam deletes a file on a network
share, we want to be able to positively identify which user
performed that action, as well as the circumstances
surrounding the action (such as the time, date, from which
workstation, and so on). This can only be accomplished if we
have auditing configured correctly and take the time to review
the audit logs to establish accountability.
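The following minimal Python sketch shows the kind of structured record auditing should capture to support accountability; the field names, file name, and values are hypothetical.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="audit.log", level=logging.INFO, format="%(message)s")

def audit(user: str, action: str, resource: str, workstation: str) -> None:
    """Write a structured audit record tying an action to a unique user identity."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "resource": resource,
        "workstation": workstation,
    }
    logging.info(json.dumps(record))

# Example: record that Sam deleted a file on a network share.
audit("sam", "delete", r"\\fileserver\projects\budget.xlsx", "WKSTN-042")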
NOTE Although related, accountability is not the same thing
as auditing. Accountability uses auditing as just one method to
ensure that the actions of users can be traced to them and that
they are held responsible for those actions. Other methods,
such as non-repudiation, are used as well.
Non-Repudiation
Non-repudiation is closely related to accountability. Non-
repudiation ensures that the user cannot deny that they took an
action, simply because the system is set up such that no one
else could have performed the action. The classic example of
non-repudiation is given as the proper use of public key
cryptography. If a user sends an e-mail that is digitally signed
using their private key, they cannot later deny that they sent
the e-mail since only they are supposed to have access to the
private key. In this case, the user can be held accountable for
sending the e-mail, and non-repudiation is ensured.
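The following minimal sketch illustrates digital signing and verification using the third-party Python cryptography package (an assumption for illustration; any comparable public key library would serve the same purpose).

# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # known only to the sender
public_key = private_key.public_key()        # distributed to recipients

message = b"Approved: transfer $10,000 to vendor 4471"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises InvalidSignature if the message was altered
    print("Signature valid: the sender cannot plausibly deny sending this message")
except InvalidSignature:
    print("Signature invalid: the message was altered or not signed by this key")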
Note that there is no hard and fast rule about mapping
security elements and access controls to security goals; all of
these elements and controls can support any one or even more
than one goal at a time. For example, encryption, a technical
access control, can support both confidentiality and integrity at
the same time.
Frameworks
A framework is a generally overarching methodology for a set
of activities or processes. It may not get into the detailed
processes and procedures; instead, it provides for a 500-foot
view of the general direction and steps used to build a more
detailed program or process. A framework is used as an
overall architecture for a greater effort. A framework has
characteristics that include defined steps and repeatability, and
it can be tailored based on the organization’s needs. In terms of
a risk management framework, you may have a set of general
steps defining how to approach risk management, which
include listing the processes and activities necessary to build
such a program or effort. You would then break down these
larger steps into specific supporting procedures for this effort
based on the needs of your organization and using standards
(described next). Frameworks are typically selected and
adopted at the strategic level of corporate management and
governance.
Standards
A standard is a mandatory set of procedures or processes used
by the organization and usually fits into an overall framework.
Standards often define more detailed processes or activities
used to perform a specific set of tasks. Standards are used for
compliance reasons and made mandatory by an organization or
its governance. The National Institute of Standards and Technology (NIST) standards are mandatory for use by the United States federal government, for instance, but are
published as an option for private organizations and industries
to adopt if they so choose. If an organization adopts the NIST
standards for risk management, for example, the organization
may make them mandatory for use by its personnel. Then all
processes and activities for a given effort within the
organization would have to use and meet those standards.
Some standards define the level of depth or implementation of
a security control or measure. The Federal Information
Processing Standards (FIPS) for cryptography and encryption
are an example of this; they set forth the different levels of
encryption strength for various cryptography applications that
may be required in certain circumstances. So, if you create
security policies and procedures for implementing
cryptography within the organization, the FIPS standard could
tell you to what level those policies and procedures must be
implemented.
Practices
A practice is a normalized process that has been tried and
proven as generally accepted within a larger community.
Practices could also be developed by a standards organization
or a recognized authority regarding a particular subject or
particular process. Professional industry organizations or
vendors often develop practices documents. You might also
see “best practices” promulgated by various industries or
organizations, for example. Practices are not usually
mandatory but could be made mandatory by the corporate
management or other governance if they were so inclined.
The next few sections give more detailed examples of some
of the formal frameworks and standards you should be familiar
with for the exam and in real life as a risk management
professional. We recommend you pay particular attention to
the ones developed and published by ISACA; these will likely
be present in some form on the exam. Of course, in this book,
we’re only going to give you a brief overview of each, and you
should take the time to review the actual standards and
frameworks in depth before you sit for the exam.
Step 1: Prepare
The first step, Prepare, has the purpose of carrying out
essential activities to help prepare all levels of the organization
to manage its security and privacy risks using the RMF. It
involves identifying key risk management roles, establishing
the organizational risk management strategy, determining the
organization’s risk tolerance, conducting an organization-wide risk assessment, developing the organizational strategy for continuous monitoring, and identifying common controls that are already implemented.
Step 2: Categorize
Step 2, Categorize, involves inventorying the types of
information on target systems and assigning categorization
levels to that information based on the level of impact if the
security goals of confidentiality, integrity, and availability
were affected or compromised for the information on the
system. This step uses subjective impact values of high, moderate, and low to rate each of the three goals for a particular
type of information. Types of information processed on the
system could include business-sensitive, financial, protected
health information, and so on. FIPS 199, “Standards for
Security Categorization of Federal Information and
Information Systems,” as well as NIST Special Publication
800-60, “Guide for Mapping Types of Information and
Information Systems to Security Categories,” provide detailed
guidance on categorizing information systems.
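As a minimal illustration of this categorization, the following sketch applies the common high-water mark approach to a single hypothetical information type (the impact ratings shown are invented for illustration).

# Hypothetical impact ratings for one information type, using the low/moderate/high scale.
RANK = {"low": 1, "moderate": 2, "high": 3}

security_category = {"confidentiality": "moderate", "integrity": "high", "availability": "low"}

# The overall (aggregate) categorization is taken as the highest impact value: the high-water mark.
overall = max(security_category.values(), key=lambda level: RANK[level])
print(overall)  # high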
Step 3: Select
Based on these individual values, as well as their aggregate (the highest, or high-water mark, value), the applicable security controls are assigned to the information system in Step 3, Select. This step provides baselines of security controls based on the high, moderate, and low values assigned during step 2. If the aggregate value of the information or system has been rated as high, for example, the high baseline of security controls is
employed for that system. Once a security control baseline has
been established, the organization has the latitude and
flexibility to add or subtract security controls from the baseline
as it sees fit based on different factors, including the
applicability of some controls, the environment the system
operates within, and so on. The selected controls can be found
in the supporting NIST Special Publication 800-53, revision 5,
“Security and Privacy Controls for Information Systems and
Organizations,” which contains a catalog of all the NIST
controls.
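The following sketch illustrates the idea of selecting and then tailoring a baseline; the baselines shown are heavily abbreviated, hypothetical subsets, not the actual SP 800-53 catalogs.

# Hypothetical, heavily abbreviated baselines; SP 800-53 defines the real control catalogs.
BASELINES = {
    "low": {"AC-1", "AC-2", "AU-2"},
    "moderate": {"AC-1", "AC-2", "AC-6", "AU-2", "AU-6"},
    "high": {"AC-1", "AC-2", "AC-6", "AU-2", "AU-6", "AU-10"},
}

def select_controls(categorization: str, add: set = frozenset(), remove: set = frozenset()) -> set:
    """Start from the baseline matching the system categorization, then tailor it."""
    return (BASELINES[categorization] | set(add)) - set(remove)

# Example: a high-categorized system that tailors out one control and adds another.
print(sorted(select_controls("high", add={"PE-3"}, remove={"AU-10"})))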
Step 4: Implement
In Step 4 of the RMF, Implement, the selected controls are applied to the information systems and to the data processed on those systems. This is a large undertaking that can cover a good deal of the life cycle of the system in question, and it may take significant time and resources. In this step, the organization is essentially securing the information system against validated threats and mitigating identified vulnerabilities.
Step 5: Assess
Step 5, Assess, is where a lot of security professionals who
manage certification and accreditation activities or perform
risk assessments come into the picture. During this step, the
controls the organization selects for the information system are
formally assessed, verifying that they were implemented
correctly and validating whether they perform as they were
designed. They are assessed based on their effectiveness in
protecting against the threats they were implemented to protect
against. During this step, the system is assessed in its current
state, with all existing controls and mitigations in place. Based
on the assessment findings, there may be recommendations for
further controls and mitigations, as well as alterations to the
existing security posture for the system. In this step, the level
of risk to the system and its data is normally analyzed and
determined.
Step 6: Authorize
Step 6, Authorize, involves the decision from the entity in
charge to authorize a system to be implemented and put into
operation. This decision is based on various factors, including
the level of risk assessed during step 5, the risk appetite the
organization has settled on, and the tolerance for risk the
organization is willing to accept. The decision to authorize a
system for use may also come with caveats, including
conditional authorization based on the continued mitigation
and reduction of risk by the system or data owner. This
authorization is a formal authority for the system to operate,
made by someone with the legal authority to make that
decision. It is typically in writing and only valid for a specified
period, after which the system must be reassessed for risk and
control compliance.
Step 7: Monitor
Continuous monitoring of security controls defines step 7 in
the RMF; just because an authorization decision is rendered
doesn’t mean the system will now be operated forever without
continually monitoring its security posture for new or changed
risks. Existing controls should be monitored for continued
compliance and effectiveness against identified threats. New
risks will be occasionally discovered for the system as new
threats and vulnerabilities are identified, and the system will
have to be reauthorized after a certain period. Note that the
RMF is a cyclical process; all these steps will be accomplished
again for each system at various times over its life cycle.
ISO 27001/27002/27701/31000
Whereas the NIST RMF is very much an American
framework, the International Organization for Standardization
(ISO) frameworks are used globally. The ISO family is largely
focused on keeping information assets secure, ensuring
privacy, and managing risk across the following major
documents:
• ISO/IEC 27000:2018 Provides the overview of
information security management systems (ISMS).
• ISO/IEC 27001:2013 Specifies the requirements for
establishing, implementing, maintaining, and
continually improving an ISMS within the context of
the organization. It also includes requirements for the
assessment and treatment of information security risks
tailored to the needs of the organization.
• ISO/IEC 27002:2022 Gives guidelines for
organizational information security standards and
information security management practices, including
the selection, implementation, and management of
controls, taking into consideration the organization’s
information security risk environment(s).
• ISO/IEC 27701:2019 Specifies requirements and
provides guidance for establishing, implementing,
maintaining, and continually improving a privacy
information management system (PIMS).
• ISO 31000:2018 Provides guidelines on managing risk
faced by organizations.
Access Control
The following access control principles and policies help
provide a consistent organizational structure and procedures to
prevent internal fraud and corruption in your organization:
• Least privilege The least privilege principle grants
users only the access rights they need to perform their
job functions. This requires giving users the least
amount of access possible to prevent them from abusing
more powerful access rights.
• Separation of duties The separation of duties principle
ensures that one single individual isn’t tasked with high-
security and high-risk responsibilities. Certain critical
responsibilities are separated between several users to
prevent corruption.
• Job rotation Job rotation provides improved security
because no worker retains the same amount of access
control for a position indefinitely. This prevents internal
corruption by workers who might otherwise take
advantage of their long-term position and security
access.
• Mandatory vacation A mandatory vacation policy
requires employees to use their vacation days at specific
times of the year or to use all their vacation days
allotted for a single year. This policy helps detect
security issues with employees, such as fraud or other
internal hacking activities, because the anomalies might
surface while the user is away. Increasingly,
organizations are implementing a policy requiring
mandatory administrative leave for situations in which
an employee is under any sort of investigation, systems
related or otherwise.
Network Security
Several policies provide standard guidelines for implementing
network security principles within an organization and
encompass areas such as the use of the Internet and internal
network, data privacy, security incident response, human
resources (HR) issues, and document security. These policies
are often enforced by technical controls such as data loss
prevention (DLP) tools that monitor for breaches of policy and
issue a report when a breach occurs. Other tools may alert an
administrator to machines joining the network that don’t meet
security requirements (having out-of-date antivirus signatures,
for example) or report to an administrator when an
unauthorized machine has been added to the network or an
inappropriate website has been visited.
Acceptable Use
An acceptable use policy (AUP) is a policy consisting of a set
of established guidelines for the appropriate use of computer
networks within an organization. The AUP is a written
agreement, read and signed by workers, that outlines the
organization’s terms, conditions, and rules for Internet and
internal network use and data protection.
An AUP helps educate workers about the kinds of tools
they will use on the network and what they can expect from
those tools. The policy also helps to define boundaries of
behavior and, more critically, specifies the consequences of
violating those boundaries. The AUP also lays out the actions
that management and the system administrators may take to
maintain and monitor the network for unacceptable use, and it
includes the general worst-case consequences or responses to
specific policy violations.
Developing an AUP for your organization’s computer
network is extremely important for both organizational
security and limiting legal liability in the event of a security
issue. An AUP should cover the following issues:
• Legality The organization’s legal department needs to
approve the policy before it’s distributed for signing.
The policy will be used as a legal document to ensure
that the organization isn’t legally liable for any type of
Internet-related incident and any other transgressions,
such as cracking, vandalism, and sabotage.
• Uniqueness to your environment The policy should be
written to cover the organization’s specific network and
the data it contains. Each organization has different
security concerns—for example, a medical facility
needs to protect data that differs significantly from that
of a product sales organization.
• Completeness Beyond rules of behavior, the AUP
should also include a statement concerning the
organization’s position on personal Internet use on
company time.
• Adaptability Because the Internet is constantly
evolving, the AUP will need to be updated as new
issues arise. You can’t anticipate every situation, so the
AUP should address the possibility of something
happening that isn’t outlined.
• Protection for employees If your employees follow the
rules of the AUP, their exposure to questionable
materials should be minimized. In addition, the AUP
can protect them from dangerous Internet behavior, such
as giving out their names and e-mail addresses to
crackers using social engineering techniques.
The focus of an acceptable use policy should be on the
responsible use of computer networks and the protection of
sensitive information. Such networks include the Internet—for
example, web, e-mail (both personal and business), social
media, and instant messaging access—and the organization’s
intranet. An AUP should, at a minimum, contain the following
components:
• A description of the strategies and goals to be supported
by Internet access in the organization
• A statement explaining the availability of computer
networks to workers
• A statement explaining the responsibilities of workers
when they use the Internet
• A code of conduct governing behavior on the Internet
• A description of the consequences of violating the
policy
• A description of what constitutes acceptable and
unacceptable use of the Internet
• A description of the rights of individuals using the
networks in the organization, such as user privacy
• A description of the expectations for the access and use
of information
• Proper use of social media
• A disclaimer absolving the organization from
responsibility under specific circumstances
• A form for workers to sign indicating their agreement to
abide by the AUP
Note that many organizations’ websites contain an
acceptable use policy or terms of use statement that protects
them from any liability from users of the site.
Social Media
Websites such as Facebook, Twitter, and Instagram are more
popular than ever, and workers often use these sites,
sometimes during the workday, to keep up with friends,
family, and activities. While keeping your workers’ morale
high is a plus, it’s important to limit social media usage at
work, as it can be a hit to overall productivity. Perhaps even
more importantly, workers who are posting negative
comments about your organization, or even posting potentially
private intellectual property, can be a competitor’s dream. For
example, consider a disgruntled employee who begins
tweeting about your organization’s secret spaghetti sauce
recipe, which is then copied by a competitor. Not good!
However, some pleasant scenarios for an organization can
be directly attributed to workers’ social media usage. That’s
why it’s important to determine what level of social media use
your organization is comfortable with while workers are on the
clock. Many organizations have a policy that social media use
during work is only allowed on breaks and lunch hours and
that workers may not discuss or disclose any information
regarding their workplace or intellectual property.
Personal E-Mail
As with social media, many workers have personal e-mail
accounts they may want to keep an eye on throughout the day;
this lets them know that bills are being paid, packages have
been delivered, and so on. Maybe they even use e-mail to keep
up with friends. Although this can be positive for workers’
morale, it is important that an organization understands how
personal e-mail is being used throughout the workday. An
important consideration is a potential threat associated with
sophisticated adversaries in cyberspace who know a great deal
about the organization’s workers and may use their personal e-
mail account for spear phishing and other nefarious activities. If
malware is introduced through this personal e-mail usage
during the workday, it then becomes your problem—assuming
they’re using one of the organization’s systems to read their
personal e-mail. That malware could potentially leak trade or
other secrets about your organization. Again, as with social
media, it is important for an organization to dictate the terms
of how personal e-mail will be used throughout the workday
and whether personal e-mail is allowed to be used on the
organization’s more sensitive systems. For example, it is
generally considered bad practice to allow personal e-mail to
be used on production computers, where malware could have a
catastrophic effect if introduced into the environment.
Privacy
Privacy policies are agreements that protect individually
identifiable information. An organization engaged in online
activities or e-commerce has a responsibility to adopt and
implement a policy to protect the privacy of personally
identifiable information (PII). Increasingly, regulations such as
the European Union’s General Data Protection Regulation
(GDPR), the California Consumer Privacy Act (CCPA), and
the U.S. Health Insurance Portability and Accountability Act
(HIPAA) require a privacy policy that is acknowledged before
use. Organizations should also take steps to ensure online
privacy when interacting with other organizations, such as
business partners. Privacy obligations also extend to an
organization’s use of the PII of its workers.
The following recommendations pertain to implementing
privacy policies:
• An organization’s privacy policy must be easy to find,
read, and understand, and it must be available prior to or
at the time the PII is collected or requested.
• The policy needs to state clearly what information is
being collected; the purpose for which that information
is being collected; possible third-party distribution of
that information; the choices available to an individual
regarding the collection, use, and distribution of the
collected information; a statement of the organization’s
commitment to data security; and what steps the
organization takes to ensure data quality and access.
• The policy should disclose the consequences, if any, of
a person’s refusal to provide information or the refusal
to permit its processing.
• The policy should include a clear statement of what
accountability mechanism the organization uses, such as
procedures for dealing with privacy breaches, including
how to contact the organization and register complaints.
• Individuals must be given the opportunity to exercise
choice regarding how PII collected from them online
can be used when such use is unrelated to the purpose
for which the information was collected. At a minimum,
individuals should be given the opportunity to opt out of
such use.
• When an individual’s information collected online is to
be shared with a third party, especially when such
distribution is unrelated to the purpose for which the
information was initially collected, the individual
should be given the opportunity to opt out.
• Organizations creating, maintaining, using, or
disseminating PII should take appropriate measures to
ensure its reliability and should take reasonable
precautions to protect the information from loss, misuse,
or alteration.
Each organization must evaluate its use of the Internet and
its internal systems to determine the type of privacy policy it
needs in order to protect all involved parties. The privacy
policy will protect the organization from legal issues while raising the comfort levels of employees, customers, and constituents regarding the protection of their information. A privacy policy
should include the following elements:
• Information collection Collect, use, and exchange only
data pertinent to the exact purpose, in an open and
ethical manner. The information collected for one
purpose shouldn’t be used for another. Notify persons of
information you have about them, its proposed use and
handling, as well as the enforcement policies.
• Direct marketing The organization can use only non-
PII for marketing purposes and must certify that the
persons’ personal information won’t be resold to third-
party organizations.
• Information accuracy Ensure the data is accurate,
timely, and complete and has been collected in a legal
and fair manner. Allow people the right to access,
verify, and change their information in a timely,
straightforward manner. Inform people of the data
sources and allow them the option of removing their
names from the marketing lists.
• Information security Apply security measures to
safeguard the data on databases. Establish worker
training programs and policies on the proper handling of
PII. Limit the access to a need-to-know basis on
personal information and divide the information so that
no one worker or unit has the whole picture. Follow all
government regulations concerning data handling and
privacy.
Hiring
When hiring employees for a position within the organization,
the HR department is responsible for the initial employee
screening. This usually takes place during the first interview:
an HR representative meets with the potential employee to
discuss the organization and to get a first impression, gauging
whether this person would fit into the organization’s
environment. This interview generally is personality based and
nontechnical. Further interviews are usually more oriented
toward the applicant’s skill set and are conducted by the
department advertising the position. Both types of interviews
are important because the applicant could possess excellent
technical skills for the position, but their personality and
communications skills might not be conducive to integration
with the work environment.
During the interview process, HR also conducts
background checks of the applicant and examines and verifies
their educational and employment history. Reference checks
are also performed, where HR can obtain information on the
applicant from a third party to help confirm facts about the
person’s past. HR will also verify professional licenses and
certifications. Depending on the type of organization, such as
the government or the military, the applicant might have to go
through security clearance checks or even a credit check,
medical examination, and drug testing.
To protect the confidentiality of organization information,
the applicant is usually required to sign a non-disclosure
agreement (NDA), which legally prevents the applicant from
disclosing sensitive organization data to other organizations,
even after termination of employment. These agreements are
particularly important with high-turnover positions, such as
contract or temporary employment.
When an employee is hired, the organization also inherits
that person’s personality quirks or traits. A solid hiring process
can prevent future problems with new employees.
Termination
The dismissal of workers can be a stressful and chaotic time,
especially because terminations can happen quickly and
without notice. A worker can be terminated for a variety of
reasons, such as performance issues, personal and attitude
problems, or legal issues such as sabotage, espionage, or theft.
Alternatively, the worker could be leaving to work for another
organization. The HR department needs to have a specific set
of procedures ready to follow in case a worker resigns or is
terminated. Without a step-by-step method of termination,
some procedures might be ignored or overlooked during the
process and thus compromise the organization’s security.
A termination policy should exist for each type of situation
where a worker is leaving the organization. For example, you
might follow slightly different procedures for terminating a
worker who’s leaving to take a job in an unrelated industry
than a worker who’s going to work for a direct competitor. In
the latter case, the worker might be considered a security risk
if they remain on the premises for their two-week notice
period, where they could transmit organization secrets to the
competition.
Similarly, terminating a contractor or consultant will
involve different policies and procedures since their
employment relationship is with another organization. Finally,
organizations with one or more labor unions must navigate
those waters according to collective bargaining agreements
and rules.
A termination policy should include the following
procedures for the immediate termination of a worker:
• Securing work area When the termination time has
been set, the worker in question should be escorted from
their workstation area to the HR department. This
prevents them from using their computer or other
organization resource once notice of termination is
given. Their computer should be turned off and
disconnected from the network. When the worker
returns to their desk to collect personal items, someone
should be with them to ensure that they do not take
private organization information. Finally, the worker
should be escorted out of the building.
• Return of identification As part of the termination
procedure, the worker’s identification should be
returned. This includes identity badges, pass cards, keys
for doors, and any other security device used for access
to organization facilities. This prevents the person from
accessing the building after being escorted from the
premises.
• Return of equipment All organization-owned
equipment must be returned immediately, such as
desktops, laptops, cell phones, tablets, organizers, or
any other type of electronic equipment that could
contain confidential information.
• Suspension of accounts An important part of the
termination procedure is the notification to the network
administrators of the situation. They should be notified
shortly before the termination takes place to give them
time to disable any network accounts and phone access
for that worker. The network password of the account
should be changed, and any other network access the
worker might have, such as remote access, should be
disabled. The worker’s file server data and e-mail
should be preserved and archived to protect any work or
communications the organization might need for
operational or legal reasons.
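The account-suspension step above lends itself to automation, which reduces the chance that an access path is overlooked during a hurried termination. The following is a minimal sketch in Python that simply prints each step; the step list, ordering, and user ID are illustrative assumptions, and in a real environment each call would go to the organization's actual directory, telephony, VPN, and mail systems rather than to a print statement.

# Minimal offboarding sketch. Each step only logs here; in practice the
# calls would go to whatever directory, VPN, and mail systems the
# organization actually uses (all step names below are hypothetical).

def step(description: str) -> None:
    print(f"[offboarding] {description}")

def suspend_worker_access(user_id: str) -> None:
    step(f"disable network and phone accounts for {user_id}")
    step(f"change the password for {user_id} to invalidate cached credentials")
    step(f"revoke remote access (VPN, remote desktop) for {user_id}")
    step(f"place the mailbox for {user_id} on operational/legal hold")
    step(f"archive file server data owned by {user_id}")

suspend_worker_access("jdoe")    # "jdoe" is a hypothetical user ID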
EXAM TIP All user access, including physical and network
access controls, needs to be disabled for a worker once they
have been terminated. This prevents the worker from
accessing the facility or network.
Chapter Review
The enterprise architecture within an organization affects the
risk of the business in several different ways. Aspects of
enterprise architecture risk include interoperability,
supportability, security, maintenance, and how the different
pieces and parts of the infrastructure fit into the systems
development life cycle.
Key areas in IT operations management include server and
infrastructure management, end-user support, help desk and
problem escalation management, and, of course, cybersecurity.
Organizations use project, program, and portfolio
management to oversee and sustain both short- and long-term
aspects of systems and processes.
At the core of a project are three primary drivers. These are
scope (amount or range of work), schedule (when work is to
be done and its completion date), and cost (including all the
resources expended toward the completion of the project).
Business continuity is concerned with the careful planning
and deployment of resources involved in keeping the mission
of the business going (that is, keeping the business functioning
regardless of negative events). Disaster recovery is primarily
concerned with the quick reaction involved with preserving
lives, equipment, and data immediately following a serious
negative event.
Resilience refers to the ability of the business to survive
negative events and continue with its mission and function.
Data lifecycle management is a lifecycle process concerned
with the creation, protection, use, retention, and disposal of
business information.
The systems development life cycle, or SDLC, is essentially
the entire life of a system or product, from concept to
development, from implementation to disposal. Security
should be a part of every step in the SDLC, thus ensuring
security by design.
New or emerging technologies can present risks to
organizations because integrating them into an existing
environment raises several important considerations, such as
interoperability, security, and staff familiarity with the technology.
Access controls ensure that only authorized personnel can
read, write to, modify, add to, or delete data. They also ensure
that only the same authorized personnel can access the
different information systems and equipment used to store,
process, transmit, and receive sensitive data.
The three goals of security, known as the CIA triad, are
confidentiality, integrity, and availability. Supporting these
three goals are other elements of security, such as access
control, data sensitivity and classification, identification,
authentication, authorization, accountability, and, finally, non-
repudiation.
Standards, frameworks, and practices relevant to risk
management include NIST RMF, COBIT 2019, and the
various ISO standards.
Data classification policy defines data sensitivity levels, as
well as handling and disposal procedures to ensure adequate
protection of sensitive information.
Industry frameworks, standards, and practices can be
adopted when building an information security program, as
opposed to building these elements from scratch.
The NIST Risk Management Framework (RMF) is a seven-
step methodology that provides for risk management all the
way through the information systems life cycle.
COBIT is a management framework developed by ISACA
and facilitates the governance of enterprise information and
technology. COBIT consists of two layers, governance and
management, which further break down into a total of 40
separate objectives.
The Risk IT Framework comes from traditional risk
management principles of various enterprise risk management
(ERM) standards and describes activities and processes
thought of as best practices.
Information security awareness training ensures an
organization’s workforce is aware of policy, behavior
expectations, and safe computer and Internet usage.
Security policies and procedures are official organization
communications that are created to support those critical
principles of data privacy and data protection.
Effective physical access controls ensure that only
authorized personnel are permitted to access work centers,
processing centers, and other work areas. This contributes to
an environment with better information security as well as
workplace safety.
An organization practices due care by taking responsibility
for all activities that take place in corporate facilities.
An organization practices due diligence by implementing
and maintaining these security procedures consistently to
protect the organization’s facilities, assets, and workers.
Due process guarantees that in the event of a security issue
by a worker, the worker receives an impartial and fair inquiry
into the incident to ensure the worker’s employment rights are
not being violated.
Each organization must evaluate its use of the Internet and
its internal systems to determine the type of privacy policy it
needs in order to protect all involved parties.
An organization’s HR department is an important link
regarding organization and worker security. The HR
department is responsible for hiring employees, tracking
contractors, consultants, and other temporary workers,
ensuring workers conform to company codes and policies
during their term of work, and maintaining organization
security in case of worker termination.
Quick Review
• Two common enterprise architecture frameworks are
The Open Group Architecture Framework (TOGAF)
and the Zachman framework.
• Business continuity and disaster recovery are primarily
concerned with the goal of business resilience.
• Two of the key processes involved in business
continuity planning are conducting a business impact
assessment (BIA) and determining a recovery strategy.
• Two key areas of concern in business continuity are the
recovery point objective (RPO) and the recovery time
objective (RTO).
• Resilience is the ability of an organization to resist the
effects of negative events and its ability to bounce back
after one of these events.
• Data classification refers to policies that define the
proper use, handling, and protection of data at various
sensitivity levels.
• The term systems development life cycle represents an
evolution of the original term software development life
cycle, reflecting an earlier age when most organizations
wrote their own custom business applications.
• Some of the risks associated with emerging technology
involve staff unfamiliarity, resulting in improper
implementation or use.
• Access controls directly support the confidentiality and
integrity goals of security and indirectly support the
goal of availability.
• Identification refers to the act of an individual or entity
presenting valid credentials to a system. Authentication
involves the verification of one’s identity with a
centralized database. Authorization refers to the access
rights or privileges assigned to a user account. Non-
repudiation is a record-keeping property of a system
whereby a user will not be able to deny having
performed some action.
• A framework is a generally overarching methodology
for a set of activities or processes.
• A standard is a mandatory set of procedures or
processes used by the organization and usually fits into
an overall framework.
• A practice is a normalized process that has been tried
and proven as generally accepted within a larger
community.
• Periodic security awareness training can be used to
reinforce and refresh stale knowledge and bring workers
up to date on the latest tools, techniques, and
risk considerations.
• The least privilege principle grants users only the access
rights they need to perform their job functions.
• Separation of duties ensures that one single individual
isn’t tasked with high-security and high-risk
responsibilities.
• Job rotation provides improved security because no
worker retains the same amount of access control for a
position indefinitely.
• Mandatory vacations enable security and audit staff to
examine absent workers’ procedures and records to
identify signs of misbehavior or fraud.
• An acceptable use policy (AUP) is a policy consisting of
a set of established guidelines for the appropriate use of
computer networks within an organization.
• Social media policy governs acceptable use regarding
the sharing of organization information and
representation of the organization.
• The use of personal e-mail is a potential distraction as
well as a risk for information leakage and the
introduction of malware into an environment.
• Privacy policies are agreements that protect individually
identifiable information.
• Human resources should track not only employees but
also temporary workers, contractors, and consultants to
ensure the integrity of access controls.
Questions
1. You are managing a project that involves the installation
of a new set of systems for the accounting division of
your organization. You have just been told that there have
been budget cuts and the project will not be able to
purchase additional equipment needed for the installation.
You now have to find other areas to cut in order to fund
the extra equipment. Which element of project
management is most affected by this threat?
A. Schedule
B. Scope
C. Cost
D. Quality
2. As a risk practitioner in a larger organization, you have
been asked to review the company’s SDLC model for
potential risk areas. The model includes the
Requirements, Design, Development, Implementation,
and Disposal phases. Software and systems are moved
from the development environment immediately into the
production environment and implemented. Which SDLC
phase would you recommend that the business add to
reduce risk of integration or functionality issues as the
system is implemented?
A. Initiation
B. Test
C. Sustainment
D. Maintenance
3. Lack of a well-written work breakdown structure
document can contribute to a vulnerability that affects
which aspect of project management?
A. Cost
B. Schedule
C. Scope
D. Quality
4. Which of the following is the major risk factor associated
with integrating new or emerging technologies into an
existing IT infrastructure?
A. Security mechanisms
B. Data format
C. Vendor supportability
D. Interoperability
5. Which of the following is a short-term process primarily
concerned with protecting personnel, facilities, and
equipment immediately following a disaster or major
incident?
A. Disaster recovery
B. Business continuity
C. Business impact analysis
D. Recovery point objective
6. Which of the following would be a vulnerability that
stems from the business not identifying its critical assets
and processes during the continuity management process?
A. Failure to adequately consider system requirements
during the SDLC
B. Failure to perform a business impact analysis
C. Failure to consider interoperability with integrating
new technologies
D. Failure to maintain redundant systems or data
backups
7. Your business just went through a major storm, which has
flooded your data center. Members of your recovery team
are attempting to salvage equipment as well as locate
critical data backups. No one seems to know exactly what
they’re supposed to do, and they don’t have the right
equipment available to them. Additionally, there is no
coordinated effort within the team to perform specific
tasks. Which of the following vulnerabilities most likely
led up to this scenario?
A. Failure to back up sensitive data
B. Failure to acquire an alternate processing site
C. Lack of a business impact analysis
D. Failure to test the disaster recovery plan
8. After a few incidents where customer data was
transmitted to a third party, your organization is required
to create and adhere to a policy that describes the
distribution, protection, and confidentiality of customer
data. Which of the following policies do you create?
A. Privacy
B. Due care
C. Acceptable use
D. Service level agreement
9. You need to create an overall policy for your organization
that describes how your users can properly make use of
company communications services, such as web
browsing, e-mail, and File Transfer Protocol (FTP)
services. Which of the following policies do you
implement?
A. Acceptable use policy
B. Due care
C. Privacy policy
D. Service level agreement
10. Which of the following security goals is concerned with
ensuring that data has not been modified or altered during
transmission?
A. Confidentiality
B. Availability
C. Integrity
D. Non-repudiation
11. Which of the following is most concerned with ensuring
that users cannot deny that they took an action?
A. Accountability
B. Non-repudiation
C. Auditing
D. Authorization
12. Which of the following describes a set of mandatory
procedures or processes used by an organization?
A. Standard
B. Framework
C. Practice
D. Policy
13. Which of the following frameworks might be used in
business governance and IT enterprise management?
A. NIST RMF
B. COBIT 2019
C. The Risk IT Framework
D. ISO 27001
14. You are implementing an organization-wide risk
management strategy, and you are using the NIST Risk
Management Framework (RMF). You have just
completed step 1 of the RMF, “Categorize information
systems.” Which of the following steps should you
complete next in the RMF sequence?
A. Authorize system.
B. Assess security controls.
C. Continuous monitoring.
D. Select security controls.
15. Which of the following statements most accurately
reflects the effect of information technology (IT) on risk
to the business enterprise? (Choose two.)
A. Information technology is a serious risk to the
mission of the organization.
B. Information technology is used to protect the
organization’s information.
C. Information technology is used to eliminate risk to
the mission of the organization.
D. Information technology is used to generate the
organization’s information.
Answers
1. C. Cost is the element of project management that is most
affected by the threat of not enough funding.
2. B. A test phase introduced into this model would reduce
risk by ensuring that a system or software application
meets performance and functionality standards before it is
introduced into the production environment, potentially
eliminating costly issues before they occur.
3. C. Lack of a well-written work breakdown structure
document can contribute to a vulnerability that affects a
project’s scope.
4. D. Interoperability is the major risk factor associated with
integrating new or emerging technologies into an existing
IT infrastructure. It covers a wide range of factors, which
typically include backward compatibility, data format,
security mechanisms, and other aspects of system
integration.
5. A. Disaster recovery is a short-term process primarily
concerned with protecting personnel, facilities, and
equipment immediately following a disaster or major
incident.
6. B. Failure to perform a business impact analysis could
cause a vulnerability in that the organization would not be
able to adequately identify its critical business processes
and assets.
7. D. Failure to test the disaster recovery plan on a periodic
basis—as well as make sure people are trained and have
the right equipment—can result in poor or ineffective
recovery efforts.
8. A. A privacy policy concerns the protection and
distribution of private customer data. Any company,
especially one engaged in online activities or e-
commerce, has a responsibility to adopt and implement a
policy for protecting the privacy of individually
identifiable information.
9. A. An acceptable use policy establishes rules for the
appropriate use of computer networks within your
organization. The policy describes the terms, conditions,
and rules of using the Internet and its various services
within the company’s networks.
10. C. Integrity is concerned with ensuring that data has not
been modified or altered during transmission or storage.
11. B. Non-repudiation is concerned with ensuring that users
cannot deny that they took a particular action.
12. A. A standard is a set of mandatory procedures or
processes used by an organization.
13. B. COBIT is used in business governance and IT
enterprise management.
14. D. Step 2 of the RMF is “Select security controls” and is
accomplished after information systems have been
categorized.
15. B, D. Information technology is used to generate the
business’s information as well as protect it.
APPENDIX A
Risk Discovery
In Chapter 2, we made a list of the ways in which new IT risks
can be discovered, with complete explanations for each. We’ll
provide just the summary here:
• Audits
• Penetration tests
• Security advisories
• Whistleblowers
• Threat modeling
• Risk assessments
• Security and privacy incidents
• Operational incidents
• News and social media articles
• Professional networking
• Passive observation
• Risk-aware culture
This list should not be construed as complete. Instead, it
should help you understand that there are many ways in which
a risk manager will discover new risks. Put another way, it
would be dangerous to limit the identification of new risks to a
fixed set of inputs, as this could artificially eliminate a source
that is critical for some organizations. Risk discovery is sometimes
methodical and sometimes spontaneous. The spontaneity of
risk discovery compels us to advise you to consider all
possible likely and unlikely sources of risk. We can no more
tell you the types of risk sources to include than we can tell
you which of those sources will be related to your next risk
event.
Rating Risks—Qualitative
Not all risks are created equal. Some may be nearly trivial,
while others represent an existential threat to the organization.
A risk register must contain some columns used to describe
the level of risk for each entry.
The two basic qualitative measures of risk are probability
and impact, as detailed next:
• Probability This specifies the likelihood of an actual
occurrence of the risk. In qualitative terms, probability
is typically expressed as High, Medium, or Low or in
basic numeric terms such as 1 (Low) to 3 (High), 1 to 5,
or 1 to 10.
• Impact This specifies the impact on the organization
due to an actual occurrence of the risk. Like probability,
impact in qualitative risk ratings is expressed as High,
Medium, or Low or on a numeric scale such as 1–3, 1–
5, or 1–10.
Note that, in qualitative risk rating, no attempt at an actual
probability or impact is being made. Instead, the purpose of
qualitative risk ratings is one of triage—these ratings help the
risk manager distinguish entries with higher risks from entries
with lower risks.
Qualitative Terms Help with Ratings
Rating the probability and impact of risks on numeric
scales of 1–3 or 1–5 can, at times, be too abstract. For this
reason, we recommend a list of terms be associated with
these levels for both probability and impact. Using these
terms as a guide can help the risk manager perform more
consistent risk ratings for risk register entries.
For event probability on a scale of 1–5, these terms can
be used:
• 5 = Certain
• 4 = Likely
• 3 = Possible
• 2 = Unlikely
• 1 = Rare
Similarly, these terms can be used for impact:
• 5 = Global
• 4 = Country
• 3 = Department
• 2 = Team
• 1 = Individual
These impact terms describe the scope of impact for a
risk event. The highest-rated events would affect the entire
organization, while the lowest-rated events would affect
only a single worker.
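To make the triage idea concrete, the following is a minimal sketch that maps the probability and impact terms above onto 1-to-5 levels and multiplies them into a simple sorting score for risk register entries. The multiplication and the sample entries are illustrative assumptions, not a prescribed formula; qualitative ratings exist to rank entries for attention, not to estimate real-world loss.

# Qualitative triage sketch: probability and impact terms become 1-5 levels,
# and a simple product is used only to sort risk register entries.

PROBABILITY = {"Rare": 1, "Unlikely": 2, "Possible": 3, "Likely": 4, "Certain": 5}
IMPACT = {"Individual": 1, "Team": 2, "Department": 3, "Country": 4, "Global": 5}

def triage_score(probability_term: str, impact_term: str) -> int:
    """Return a 1-25 score used only to rank entries, highest risk first."""
    return PROBABILITY[probability_term] * IMPACT[impact_term]

# Hypothetical risk register entries: (title, probability term, impact term)
register = [
    ("Laptop theft", "Likely", "Individual"),
    ("Data center flood", "Unlikely", "Department"),
    ("Ransomware outbreak", "Possible", "Global"),
]

for title, p, i in sorted(register, key=lambda e: triage_score(e[1], e[2]), reverse=True):
    print(f"{triage_score(p, i):>2}  {title}")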
Rating Risks—Quantitative
Sometimes, a risk manager is asked to go beyond qualitative
risk rating and provide actual probabilities and financial
impacts of risks. “How much will it cost us if customer data is
stolen?” is a reasonable question that an executive may pose to
a risk manager. Arguably, the risk manager should attempt to
arrive at an estimate to help the executive better understand
the consequences of a security breach.
In our practice, we do not attempt to uplift every item in the
risk register from qualitative to quantitative terms. Doing so
would take a considerable amount of time and not result in
much additional insight. Instead, we embark on quantitative
risk analysis when examining individual risks. We discuss this
further in the section “Performing Deeper Analysis.”
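For a single risk, a quantitative estimate typically rests on the single loss expectancy and annual loss expectancy calculations covered in Chapter 2: SLE = asset value × exposure factor, and ALE = SLE × annualized rate of occurrence. The sketch below walks through that arithmetic with purely hypothetical figures for a stolen customer database.

# Quantitative estimate for one risk, using hypothetical numbers.
# SLE = AV x EF, and ALE = SLE x ARO.

asset_value = 2_000_000.00   # AV: estimated value of the customer database
exposure_factor = 0.30       # EF: fraction of the asset's value lost per incident
annual_rate = 0.25           # ARO: expected occurrences per year (once every four years)

single_loss_expectancy = asset_value * exposure_factor
annual_loss_expectancy = single_loss_expectancy * annual_rate

print(f"SLE: ${single_loss_expectancy:,.2f}")   # SLE: $600,000.00
print(f"ALE: ${annual_loss_expectancy:,.2f}")   # ALE: $150,000.00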
The Risk Register Is a Living Document
The risk register is a living document: even when risks are
reduced, the change is recorded and reflected in the document
rather than the entries simply being deleted. This provides a
history of risk management within the organization. The
contents of the risk register should be managed accordingly.
Risk Register
The risk register itself is considered a detailed, operational
business record. Reporting about the risk register generally
consists of high-level depictions of risk rather than the full
detail. Challenges in reporting risk include the following:
• Trends Changes in the size of the risk register, or in the
aggregate or average risk levels, may be misleading,
particularly in newer risk programs that are still identifying
risks.
• Completeness A risk register should not be considered
a complete record of risk, as there may be numerous
undiscovered and unidentified risks.
New Risks
Risk leaders may opt to disclose the number of, or even some
details about, new risks identified in the reporting cycle. This
reporting is a good indication that the risk program continues
to keep its risk radar operating and open to risk discovery.
Risk Review
Reporting and metrics on risk reviews are an indication of
engagement and involvement. Executives should expect risk
leaders to periodically engage business leaders and department
heads throughout the organization and may want to know how
often (and with whom) this occurs.
Risk Treatment
The rate at which risk treatment decisions are made, and trends
in risk treatment options, show that the risk leader continues
to work through the risk register.
Risk Renewals
As discussed earlier in this appendix, risk acceptance should
not be perpetual but should expire. Risk leaders may want to
publish, as a leading indicator, the number of risks nearing
renewal. Those risks will be redeliberated and new decisions
made about whether to continue accepting the risks for another
year or to take a different approach.
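Producing such a leading indicator is straightforward if each accepted risk in the register carries an acceptance expiration date. The fragment below is a minimal sketch under that assumption; the field names, sample entries, and 90-day horizon are hypothetical.

# Count accepted risks whose acceptance expires within a renewal horizon.
from datetime import date, timedelta

register = [
    {"id": "R-014", "treatment": "accept", "acceptance_expires": date(2024, 7, 1)},
    {"id": "R-022", "treatment": "mitigate", "acceptance_expires": None},
    {"id": "R-031", "treatment": "accept", "acceptance_expires": date(2025, 3, 15)},
]

def nearing_renewal(entries, horizon_days=90, today=None):
    """Return accepted risks whose acceptance expires within the horizon."""
    today = today or date.today()
    cutoff = today + timedelta(days=horizon_days)
    return [e for e in entries
            if e["treatment"] == "accept"
            and e["acceptance_expires"] is not None
            and today <= e["acceptance_expires"] <= cutoff]

print(f"Risks nearing renewal: {len(nearing_renewal(register, today=date(2024, 5, 1)))}")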
Risk Prioritization
In risk reporting, leaders often care only about the top-priority
risks, such as the top 5, 10, or 20 risks. These may not be
precisely the top risks, according to the risk manager.
However, these priorities represent an articulation of risk
appetite and risk tolerance, as well as leadership’s culture and
focus on risk. All too often, risk prioritization with managers
boils down to nothing more than cost, unfortunately.
System Requirements
The current and previous major versions of the following
desktop browsers are recommended and supported: Chrome,
Microsoft Edge, Firefox, and Safari. These browsers update
frequently, and sometimes an update may cause compatibility
issues with the TotalTester Online or other content hosted on
the Training Hub. If you run into a problem using one of these
browsers, please try using another until the problem is
resolved.
Privacy Notice
McGraw Hill values your privacy. Please be sure to read the
Privacy Notice available during registration to see how the
information you have provided will be used. You may view
our Corporate Customer Privacy Policy by visiting the
McGraw Hill Privacy Center. Visit the mheducation.com site
and click Privacy at the bottom of the page.
TotalTester Online
TotalTester Online provides you with a simulation of the
CRISC exam. Exams can be taken in Practice Mode or Exam
Mode. Practice Mode provides an assistance window with
hints, explanations of the correct and incorrect answers, and
the option to check your answer as you take the test. Exam
Mode provides a simulation of the actual exam. The number of
questions, the types of questions, and the time allowed are
intended to be an accurate representation of the exam
environment. The option to customize your quiz allows you to
create custom exams from selected domains or chapters, and
you can further customize the number of questions and time
allowed.
To take a test, follow the instructions provided in the
previous section to register and activate your Total Seminars
Training Hub account. When you register, you will be taken to
the Total Seminars Training Hub. From the Training Hub
Home page, select your certification from the Study drop-
down menu at the top of the page to drill down to the
TotalTester for your book. You can also scroll to it from the
list of Your Topics on the Home page, and then click the
TotalTester link to launch the TotalTester. Once you’ve
launched your TotalTester, you can select the option to
customize your quiz and begin testing yourself in Practice
Mode or Exam Mode. All exams provide an overall grade and
a grade broken down by domain.
Technical Support
For questions regarding the TotalTester or operation of the
Training Hub, visit www.totalsem.com or e-mail
[email protected].
For questions regarding book content, visit
www.mheducation.com/customerservice.
INDEX
A
acceptable use policies (AUPs), 169–170
acceptance, risk, 84–85, 200–201
access control
non-repudiation, 158
PCI DSS, 100
principles and policies, 167–168
types, 155–156
account suspension, 175
accountability
auditing relationship, 13
overview, 158
risk ownership, 83
third-party risk, 87
adaptability issues in AUPs, 169
administrative controls, 32, 94
advisories, 24
aggregation of data, 108–109
ALE (annual loss expectancy), 60–62, 103
analysis
controls, 31–37, 103–104
data, 108–109
risk. See risk analysis methodologies
analysis worksheets, 197–198
annual loss expectancy (ALE), 60–62, 103
annualized rate of occurrence (ARO), 60
Anything as a Service (XaaS), 131
architecture and design reviews, 42
ARO (annualized rate of occurrence), 60
assess step in RMF, 161
assessments
data collection for, 42–43
defined, 40
IT risk. See IT risk assessment
scope, 42
types, 41–42
asset-level risk registers, 193
asset value (AV)
control selection factor, 103
qualitative risk analysis method, 191
risk analysis, 56
assets
collecting and organizing data for, 9–10
description, 26, 156
examples, 7–8
identifying in risk analysis, 56, 58
IT risk assessment, 43–44
management systems, 9
organizational, 6–10
sources of data for, 8–9
attacks, defined, 26
attainability in KPIs, 111–112
audience for training programs, 166
audits
accountability, 158
description, 42
new risk identification, 23
overview, 13
risk register information source, 55
AUPs (acceptable use policies), 169–170
authentication, 157
authorization, 157–158
authorize step in RMF, 161–162
AV (asset value)
control selection factor, 103
qualitative risk analysis method, 191
risk analysis methodology, 56
availability
description, 154–155
health data, 73
key performance indicators, 111–112
avoidance, risk, 85
awareness training programs, 165–167
B
BAAs (business associate agreements), 87
background checks in hiring, 173
battlefield allegory, 185–186
Bayesian analysis method, 66
BCDR. See business continuity and disaster recovery (BCDR)
management
BCP (business continuity planning) projects, 67–68
BIA. See business impact analysis (BIA)
bottom-up risk scenario development, 39
bow-tie analyses, 65–66
brainstorming and risk identification, 25
brand equity assets, 8
building assets, 7
business alignment, 45
business associate agreements (BAAs), 87
business context in risk analysis, 56
business continuity and disaster recovery (BCDR)
management
business impact analysis, 66, 141
overview, 140–141
plan testing, 142
recovery objectives, 141
recovery strategies, 141–142
resilience and risk factors, 142–144
business continuity planning (BCP) projects, 67–68
business impact analysis (BIA)
criticality analysis, 69–70
inventory key processes and systems, 67
overview, 66, 141
recovery objectives, 70
statements of impact, 68–69
business impact in risk analysis, 58
business leaders, risk register reviews with, 195–196
business processes
asset organization by, 9
organizational governance, 5–6
BYOD devices, 184, 193
C
CA (criticality analysis), 69–70
California Consumer Privacy Act (CCPA), 171
categorize step in RMF, 160–161
causal analysis step in root cause analysis, 37
CCBs (change control boards), 89
CCPA (California Consumer Privacy Act), 171
Center for Internet Security (CIS), 97–98, 100
centralized dashboards for controls, 110
change control boards (CCBs), 89
change management, 89–90
controls, 104
IT operations management, 136
SDLC, 151
change risk in emerging technologies, 153
chief information security officers (CISOs), 201
chronology step in root cause analysis, 37
CIS (Center for Internet Security), 97–98, 100
CISOs (chief information security officers), 201
classification
assets, 58
data, 145–146, 156–157
clearance for authorization, 158
cloud services
enterprise architecture, 131
risk, 184
COBIT 2019, 162–164
Code of Professional Ethics, 15
code reviews, 42
code scans, 42
codes of conduct and ethics, 174
collection of data, 108–109
Committee of Sponsoring Organizations of the Treadway
Commission (COSO), 163
common controls, 36, 82
common controls providers, 36
Common Vulnerability Scoring System (CVSS) score, 32
communication between risk register layers, 194
compatibility in emerging technologies, 153
compensating controls, 94–95
completeness issues
acceptable use policies, 169
risk registers, 203
complexity increases, 184
compliance management, 12–13
compliance risks, 23
concepts in IT risk assessment, 40–45
conduct, codes of, 174
confidentiality
data classification, 145–146, 157
health data, 73
information security, 128
overview, 154–155
configuration assessments, 24
configuration management
controls, 104
IT operations management, 136
overview, 89–90
SDLC, 151
consultants
risk register information source, 56
tracking, 173–174
contingency responses, 203
continuity. See business continuity and disaster recovery
(BCDR) management
contractors, tracking, 173–174
contractual requirements, 14–15
controls
analysis, 103–104
change and configuration management, 104
deficiency analysis overview, 30–31
design, 101–102
effectiveness evaluation, 31–34, 106
frameworks, 98–101
gaps, 34
key control indicators, 113
key performance indicators, 110–112
key risk indicators, 112–113
monitoring techniques, 109
overview, 93
ownership, 36, 82–83
recommendations, 34–35
reporting techniques, 109–110
security, 155–156
selecting, 102–103
standards, 96–101
testing, 104–106
types and functions, 93–96
corrective actions in root cause analysis, 37
corrective controls, 94–95
COSO (Committee of Sponsoring Organizations of the
Treadway Commission), 163
cost
controls, 103
pressure on, 184
project management, 137–139
risk response options, 85
countermeasure controls, 96
critical processes risks, 143
criticality analysis (CA), 69–70
culture in organizational governance, 4
currency of technology in risk scenario development, 40
CVSS (Common Vulnerability Scoring System) score, 32
cyber insurance, 202
cybercrime, 183
cybersecurity risks, 23
D
dashboards for controls, 110
data
availability, 155
classification, 145–146, 156–157
collecting for assessment, 42–43
destruction policies, 147, 155
discovery, 42
retention policies, 146–147
sensitivity, 156–157
data governance risks, 24
data in motion, 131
data lifecycle management
overview, 144–145
standards and guidelines, 145–146
data processing in risk reporting, 108–109
databases
enterprise architecture, 130
networks, 130–131
risk analysis, 56
Delphi method, 66
denial of service, 155
design
controls, 101–102
SDLC phase, 149–150
destruction of data, 147, 155
detective controls, 94–95
deterrent controls, 94–95
development phase in SDLC, 150
differentiation step in root cause analysis, 37
direct marketing, privacy policies for, 172
discovery
data, 42
risk, 188–192
disposal
documents, 146
hardware, 147
SDLC phase, 151
distance risks in BCDR, 143
documents and documentation
classification, 145–146
control implementation, 104–105
disposal, 146
handling, retention, and storage, 146
reviews in IT risk assessment, 43
due care, 171
due diligence, 171
due process, 171
E
e-mail, 170–171
EF (exposure factor) in control selection, 103
effectiveness
controls, 102–103, 106
risk response options, 85
effort in KRIs, 112
electric generators, 70
emerging risk
managing, 90–93
technologies, 152–153
encryption for networks, 131
enhance responses, 203
enterprise architecture
cloud, 131
databases, 130
frameworks, 132–135
gateways, 132
networks, 130–131
operating systems, 130
overview, 128–129
platforms, 129
security architecture, 135
software, 129–130
enterprise risk management (ERM), 10–11
entity-level risk registers, 193
equipment
description, 7
recovery strategies, 141–142
returning in termination, 175
ERM (enterprise risk management), 10–11
ethics codes, 15, 174
evaluation
controls effectiveness, 31–34, 106
defined, 40–41
risk analysis, 59
event-tree analysis, 64–65
events
defined, 26
overview, 25–26
types, 28
vocabulary terms, 26–27
exception management, 89–90
exploits
description, 26
risk responses, 203
exposure factor (EF) in control selection, 103
external audits as risk register information source, 55
F
Factor Analysis of Information Risk (FAIR), 65
families, control, 97, 99
fault-event-tree analysis, 64–65
Federal Information Processing Standards (FIPS), 160
financial system asset inventory, 8
findings, managing, 89–90
FIPS (Federal Information Processing Standards), 160
five whys in root cause analysis, 37–38
formal risk register reviews, 195
frameworks
controls, 98–101
enterprise architecture, 132–135
information technology and security, 159
IT risk assessment, 45–51
risk management, 10–11
G
gap assessments, 41
gateways in enterprise architecture, 132
General Data Protection Regulation (GDPR), 171
geography issues
asset collection and organization, 9
business continuity and disaster recovery, 143
goals in organizational governance, 2–3
governance
assessments, 24
organizational. See organizational governance
overview, 1–2
questions, 17–20
review, 15–17
risk, 10–15
governance level in COBIT, 163–164
governance, risk, and compliance (GRC) tools, 53
government classified documents, 145–146, 157
guards, 168
guidelines for data lifecycle management, 145–146
H
hardware disposal, 147
Health Insurance Portability and Accountability Act (HIPAA)
controls requirements, 96
OCR involvement, 73
privacy issues, 171
heat maps for controls, 110
hiring employees, 173
human resources
codes of conduct and ethics, 174
hiring, 173
terminating, 174–175
tracking, 173–174
I
IaaS (Infrastructure as a Service), 131
identification
assets, 6–8
and authentication, 157
risk, 22–25
identification step in root cause analysis, 37
ignoring risk, 202
impact
business continuity and disaster recovery, 144
business impact analysis, 68–69
description, 26
key risk indicator criteria, 112
qualitative risk analysis, 190
risk analysis, 58–59
implement step in RMF, 161
implementation testing for controls, 104–105
incidents
new risk identification, 23
risk register information source, 55
indicators
key control, 113
key performance, 110–112
key risk, 112–113
industry development as risk register information source, 55
information assets, 7
information policies for privacy, 172
information sources for risk registers, 55–56
information technology and security
access control, 155–156, 167–168
accountability, 158
authorization, 157–158
business continuity and disaster recovery management,
140–144
confidentiality, integrity, and availability, 154–155
data lifecycle management, 144–147
data sensitivity and classification, 156–157
emerging technologies, 152–153
enterprise architecture. See enterprise architecture
frameworks, 159
human resources, 173–175
identification and authentication, 157
IT operations management, 135–137
network security, 168–172
non-repudiation, 158–159
overview, 127–128
physical access security, 168
project management, 137–140
questions, 178–182
review, 175–178
security and risk awareness training programs, 165–167
security concepts, 154
security policies, 167
standards, 159–165
systems development life cycle, 147–152
Information Technology Infrastructure Library (ITIL), 162–163
Infrastructure as a Service (IaaS), 131
inherent risk
description, 26
IT risk assessment, 71
innovation in cybercrime, 183
integrated risk management (IRM) tools, 53
integration testing in SDLC, 150
integrity, 155
intellectual property assets, 7
intent threat category, 30
intentional threats in BCDR, 143
interface testing in SDLC, 150
internal audits as risk register information source, 55
International Organization for Standardization (ISO)
frameworks, 162
interoperability
controls, 105
emerging technologies, 153
interviews
asset identification, 8
hiring, 173
IT risk assessment, 43, 106
inventories
financial system assets, 8–9
key processes and systems, 67–68
IoT devices, 184
IRM (integrated risk management) tools, 53
ISACA
COBIT, 162–164
Code of Professional Ethics, 15
key risk indicators, 112–113
Risk IT Framework, 11–12, 49–51, 164–165
system testing, 45
ISO (International Organization for Standardization)
frameworks, 162
ISO/IEC 27000, 162
ISO/IEC 27001, 100–101, 162
ISO/IEC 27002, 101, 162
ISO/IEC 27005, 49
ISO/IEC 27701, 162
ISO/IEC 31000, 162
ISO/IEC 31010, 49
ISO/IEC standards overview, 48–49
issues management, 89–90
IT equipment assets, 7
IT operations management, 135–137
IT risk assessment
analysis and evaluation overview, 40
business impact analysis, 66–70
concepts, 40–45
inherent and residual risk, 71–72
miscellaneous risk considerations, 72–73
overview, 21–22
questions, 75–79
review, 73–75
risk analysis methodologies, 56–66
risk events, 25–28
risk identification, 22–25
risk ownership, 51–52
risk ranking, 51
risk registers, 52–56
risk scenario development, 38–40
standards and frameworks, 45–51
threat landscape, 29–30
threat modeling, 28–30
vulnerability and control deficiency analysis, 30–38
IT systems portfolios for asset identification, 8
ITIL (Information Technology Infrastructure Library), 162–163
J
job rotation, 168
K
key control indicators (KCIs)
business processes, 6
controls, 113
key performance indicators (KPIs)
business processes, 6
controls, 110–112
key processes and systems inventories, 67
key risk indicators (KRIs)
business processes, 6
controls, 112–113
L
laws as risk register information source, 56
layers in risk registers, 193–194
least privilege principle, 157–158, 167
legal requirements, 14–15
legality issues in AUPs, 169
likelihood
description, 27
risk analysis, 58–59
Likert scale in qualitative risk analysis, 62
lines of defense, 12–13
logical controls, 94
loss event frequency in FAIR, 65
M
management buy-in for training programs, 166
management level in COBIT, 163–164
managerial controls, 94
mandatory vacations, 168
market factor risks in BCDR, 143
maturity assessments, 41
maximum tolerable downtime (MTD), 70
measurability of key performance indicators, 111
methods threat category, 30
military classified documents, 145–146
minutes, 187
mission risks in BCDR, 143
mitigation of risk
deeper analysis, 200
follow up, 204
process, 83–84
risk analysis, 58
monitor step in RMF, 162
monitoring
controls, 109
risk, 106–108
MTD (maximum tolerable downtime), 70
multifactor authentication, 157
N
National Institute of Standards and Technology (NIST)
business continuity and disaster recovery, 142
control effectiveness, 33–34
control framework, 98–99
control guidelines and standards, 96–98
quantitative risk analysis, 62–63
risk assessment guidelines, 45–47
Risk Management Framework, 11–12, 97, 160–162
Special Publication 800-17, 97
Special Publication 800-30
quantitative risk analysis, 62–63
risk assessment guidelines, 45–47
Special Publication 800-34, 142
Special Publication 800-37, 11–12
Special Publication 800-53, 33–34, 97–99, 161
Special Publication 800-60, 161
standards, 159–160
natural threats in BCDR, 143
NDAs (non-disclosure agreements), 173
need-to-know authorization, 158
network security
acceptable use policies, 169–170
due care, due diligence, and due process, 171
overview, 168
personal e-mail, 170–171
privacy, 171–172
social media, 170
networks
enterprise architecture, 130–131
new risk identification, 24
new risks, reporting, 203
news articles for risk identification, 23
NIST. See National Institute of Standards and Technology
(NIST)
non-disclosure agreements (NDAs), 173
non-repudiation, 158–159
O
objectives in organizational governance, 2–3
objectives threat category, 30
observation in IT risk assessment, 43
OCR (Office of Civil Rights), 73
OCTAVE (Operationally Critical Threat, Asset, and
Vulnerability Evaluation) methodology, 47–48
Office of Civil Rights (OCR), 73
online data for asset identification, 8
operating systems in enterprise architecture, 130
operational controls, 32, 94
operational criticality in qualitative risk analysis method, 191
operational management, 12
operational risks, 23
Operationally Critical Threat, Asset, and Vulnerability
Evaluation (OCTAVE) methodology, 47–48
organizational governance
assets, 6–10
business processes, 5–6
culture, 4
overview, 2
policies and standards, 5
strategy, goals, and objectives, 2–3
structure, roles, and responsibilities, 3–4
organizational units, asset organization by, 9
outsourcing risks in BCDR, 143
ownership of risk and control, 36, 51–52, 82–83
P
PaaS (Platform as a Service), 131
passive observation for new risk identification, 24
Payment Card Industry Data Security Standard (PCI DSS),
99–100
penetration tests
description, 41
new risk identification, 23
personal e-mail, 170–171
personally identifiable information (PII), 171
personnel assets, 7
PHI (protected health information), 73
physical access security, 168
physical controls, 32, 94
physical location risks in BCDR, 143
PII (personally identifiable information), 171
plan testing in BCDR, 142
planning phase in SDLC, 149
Platform as a Service (PaaS), 131
platforms in enterprise architecture, 129
PMBOK (Project Management Body of Knowledge), 162
policies
acceptable use, 169–170
access control, 167–168
organizational governance, 5
privacy, 171–172
retention and destruction, 146–147
security, 167
termination, 174–175
portfolios in project management, 137
practices, 160
preparation step in RMF, 160
preventive controls, 94–95
PRINCE2 (PRojects IN Controlled Environments 2), 163
prioritization, risk, 204
privacy
assessments, 24
network security, 171–172
risks, 23
private classified documents, 145–146
probability
description, 27
qualitative risk analysis method, 190–191
probable loss magnitude in FAIR, 65
processes
asset organization by, 9
organizational governance, 5–6
risk management programs, 187
professional ethics, 15
program charters, 186
program-level risk registers, 193
programs
project management, 137
risk management, 186–188
training, 165–167
project management, 137–140
Project Management Body of Knowledge (PMBOK), 162
PRojects IN Controlled Environments 2 (PRINCE2), 163
property assets, 7
protected health information (PHI), 73
protection issues in acceptable use policies, 169
public documents, 145–146
publishing, 203–204
Q
qualitative analysis method
deeper analysis, 198
risk analysis, 59–60, 62–63
risk management programs, 187
risk ratings, 190–192
quantitative analysis method
deeper analysis, 198
risk analysis, 59–63
risk management programs, 187
risk ratings, 192
R
ranking risk, 51
ratings
qualitative analysis, 190–192
quantitative analysis, 192
RCA (root cause analysis)
key risk indicators, 113
steps, 36–37
RE (Risk Evaluation) domain, 11, 49–51
real-time data, 108–109
recommendations for risk treatment, 199–203
records, 7
recovery objectives in business impact analysis, 70
recovery point objectives (RPOs), 60–61, 141
recovery strategies, 141–142
recovery time objectives (RTOs), 60–61, 141
regulations
asset organization by, 10
overview, 14–15
pressure on, 183–184
risk register information source, 56
relationship threat category, 30
relevancy
key performance indicators, 111–112
risk scenario development, 39
reliability in key risk indicators, 112
renewals, risk, 204
reporting, risk. See risk reporting
reputation assets, 8
requirements phase in SDLC, 149
residual risk
description, 27
IT risk assessment, 71–72
risk acceptance, 84
resilience in BCDR, 142
responsibilities in organizational governance, 3–4
results threat category, 30
retention
data, 146–147
documents, 146
retirement phase in SDLC, 151
return of identification in termination, 175
return on investment (ROI) in control recommendations, 35
reviewing risk registers, 194–196
risk
description, 26
emerging technologies, 152–153
SDLC, 151–152
risk acceptance, 84–85, 200–201
risk analysis, description, 27
risk analysis methodologies
asset identification, classification, and valuation, 58
bow-tie analyses, 65–66
Factor Analysis of Information Risk, 65
fault- and event-tree analysis, 64–65
likelihood determination and impact analysis, 58–59
miscellaneous, 66
overview, 56–58
qualitative analysis, 59
quantitative analysis, 59–62
risk appetite
deeper analysis, 198
description, 13–14
risk analysis, 58
risk assessments
description, 27
IT. See IT risk assessment
risk register information source, 55
risk avoidance, 202
risk-aware culture, 24
risk awareness training programs, 165–167
risk derivation in FAIR, 65
Risk Evaluation (RE) domain, 11, 49–51
risk events
overview, 25–26
types, 28
vocabulary terms, 26–27
risk factors
business continuity and disaster recovery, 142
emerging, 91–92
risk governance
enterprise risk management and risk management
frameworks, 10–11
ISACA Risk IT Framework, 49
legal, regulatory, and contractual requirements, 14–15
lines of defense, 12–13
overview, 10
risk identification, 22–25
Risk IT Framework
key risk indicators, 112–113
overview, 164–165
process areas, 49–50
risk landscape, 178–182
risk management
description, 27
frameworks, 10–11
overview, 12–13
Risk Management Framework (RMF)
controls, 97–99
steps, 11–12, 160–162
risk management life cycle
deeper analysis, 196–198
publishing and reporting, 203–204
risk discovery, 188–192
risk registers, 193–196
risk treatment recommendations, 199–203
risk management programs
processes, 186–187
purpose, 187–188
risk managers, 194
risk mitigation
deeper analysis, 200
follow up, 204
process, 83–84
risk analysis methodology, 58
risk ownership
considerations, 36
IT risk assessment, 51–52
risk prioritization, 204
risk profiles
description, 13
ISACA Risk IT Framework, 50
risk ranking in IT risk assessment, 51
risk ratings
qualitative analysis, 190–192
quantitative analysis, 192
risk registers
columns, 189–190
data structures, 52–55
description, 27, 188
information sources, 55–56
layers, 193–194
managing, 192
reports, 203
reviewing, 194–196
types, 193
updating, 56
risk renewals, 204
risk reporting
controls, 109–110
data processing, 108–109
key control indicators, 113
key performance indicators, 110–112
key risk indicators, 112–113
monitoring techniques, 109
overview, 81–82, 106–108
questions, 119–125
review, 114–118
treatment plans, 108
risk response
acceptance, 84–85
avoidance, 85
control design. See controls
ISACA Risk IT Framework, 49
managing, 89–93
mitigation, 83–84
options, 83–86
overview, 81–82
ownership, 82–83
questions, 119–125
review, 114–118
sharing, 84
third-party risk, 86–88
risk reviews in reports, 203
risk scenario development
bottom-up, 39
considerations, 39–40
overview, 38
top-down, 38–39
risk tolerance
deeper analysis, 198
description, 13–14
risk analysis methodology, 58
risk transfer, 201–202
risk treatment
description, 27
recommendations, 199–203
reports, 203
RMF (Risk Management Framework)
controls, 97–99
steps, 11–12, 160–162
ROI (return on investment) in control recommendations, 35
roles in organizational governance, 3–4
root cause analysis (RCA)
key risk indicators, 113
steps, 36–37
RPOs (recovery point objectives), 60–61, 141
RTOs (recovery time objectives), 60–61, 141
S
S.M.A.R.T. guidelines, 111–112
SaaS (Software as a Service), 131
safeguard controls, 96
SANs (storage area networks), 70
SANS Top 20 Critical Security Controls, 100
scaling process in threat modeling, 29
scenario components in FAIR, 65
schedules in project management, 137–139
scope
assessment, 42
project management, 137–139
threat modeling, 29
scorecards for controls, 110
SDLC. See systems development life cycle (SDLC)
secure configuration assessments, 24
securing work area in termination, 174
security
emerging technologies, 153
policies, 167
tools, 185
training programs, 165–167
security advisories, 24
security architecture in enterprise architecture, 135
security concepts in information technology and security, 154
security controls, 155–156
security incidents
new risk identification, 23
privacy, 23
risk register information source, 55
security professionals shortage, 185
security scans for asset identification, 9
security testing in SDLC, 150
SEI (Software Engineering Institute), 47–48
select step in RMF, 161
semiquantitative values, 62
sensitivity
asset organization by, 9
data, 156–157
key risk indicators, 112
risk reporting information, 204
separation of duties principle, 148, 167
service level agreements (SLAs)
IT operations management, 136
third-party risk, 87
service providers, asset organization by, 9
sharing risk, 84
single-factor authentication, 157
single loss expectancy (SLE)
control selection factor, 103
quantitative risk analysis, 60–62
skills threat category, 30
SLAs (service level agreements)
IT operations management, 136
third-party risk, 87
SLE (single loss expectancy)
control selection factor, 103
quantitative risk analysis, 60–62
social media
policies, 170
risk identification, 23
Software as a Service (SaaS), 131
Software Engineering Institute (SEI), 47–48
software in enterprise architecture, 129–130
specificity in key performance indicators, 111
staff
project management, 138
risk register reviews with, 195
stakeholders in risk environment, 106–107
standards
controls, 96–101
data lifecycle management, 145–146
information technology and security, 159–165
IT risk assessment, 45–51
NIST. See National Institute of Standards and Technology
(NIST)
organizational governance, 5
statements of impact in business impact analysis, 68–69
storage area networks (SANs), 70
storage of documents, 146
strategic risks, 22
strategies
organizational governance, 2–3
recovery, 141–142
structure in organizational governance, 3–4
supply chain risks, 23
suspension of accounts, 175
systems
IT risk assessment, 43–44
testing, 44–45
systems development life cycle (SDLC)
design phase, 149–150
development phase, 150
disposal phase, 151
implementation and operation, 150
overview, 147–148
planning phase, 149
requirements phase, 149
risks, 151–152
separation of duties, 148
test phase, 150
T
tailgating, 168
technical controls, 32, 94
technical data, 108–109
technologies, emerging, 91
telework, 184
temps, tracking, 173–174
termination, employee, 174–175
testing
controls, 104–106
penetration, 23, 41
recovery plans, 142
SDLC, 150
system, 44–45
The Open Group Architecture Framework (TOGAF)
COBIT, 162
enterprise architecture, 132–133
third-party risk
business continuity and disaster recovery, 143
factors, 87
managing, 88
overview, 86–87
threats, 88
vulnerabilities, 88
threat actors, 26
threat intelligence as risk register information source, 55
threat landscape, 29–30
threat modeling
description, 26, 41
new risk identification, 24
overview, 28–30
threat realization, 26
threats
business continuity and disaster recovery, 143
criticality analysis, 69
description, 26
emerging, 92, 153
IT operations management, 136–137
project management, 138–140
risk analysis, 56
third-party, 88
time sensitivity risks in BCDR, 143
timeliness in key performance indicators, 111–112
TOGAF (The Open Group Architecture Framework)
COBIT, 162
enterprise architecture, 132–133
Top 20 Critical Security Controls, 100
top-down risk scenario development, 38–39
tracking employees, 173–174
training
project management, 138
security and risk awareness programs, 165–167
treatments
options, 198
plans, 108
recommendations, 199–203
reports, 203
trends in risk registers, 203
triage apparatus
qualitative risk analysis, 192
risk management programs as, 187
U
unintentional threats in BCDR, 143
uninterruptible power supplies (UPSs), 70
uniqueness issues in AUPs, 169
unit testing in SDLC, 150
updating risk registers, 56
UPSs (uninterruptible power supplies), 70
user acceptance testing in SDLC, 150
V
vacations, mandatory, 168
validation of data, 108–109
valuation of assets, 58
virtual assets, 7
vocabulary terms for risk, 26–27
vulnerabilities
business continuity and disaster recovery, 144
description, 26
emerging technologies, 92–93, 153
IT operations management, 137
project management, 139–140
risk analysis, 56
SDLC, 152
third-party risk, 88
vulnerability and control deficiency analysis
control effectiveness, 31–34
control gaps, 34
control recommendations, 34–35
overview, 30–31
risk and control ownership, 36
root cause analysis, 36–37
vulnerability assessments
description, 41
risk register information source, 55
W
walkthroughs in IT risk assessment, 43
whistleblowers for new risk identification, 24
work breakdown structures (WBSs), 139
worksheets, analysis, 197–198
X
XaaS (Anything as a Service), 131
Z
Zachman framework, 133–135