Security Automation and Continuous Monitoring WG          D. Waltermire
Internet-Draft                                                      NIST
Intended status: Informational                             D. Harrington
Expires: May 24, 2014                                 Effective Software
                                                       November 20, 2013

      Endpoint Security Posture Assessment - Enterprise Use Cases
                      draft-ietf-sacm-use-cases-05
Abstract

This memo documents a sampling of use cases for securely aggregating
configuration and operational data and evaluating that data to
determine an organization's security posture.  From these operational
use cases, we can derive common functional capabilities and
requirements to guide development of vendor-neutral, interoperable
standards for aggregating and evaluating data relevant to security
posture.
Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on May 24, 2014.
Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction
2. Endpoint Posture Assessment
   2.1. Use Cases
        2.1.1. Define, Publish, Query and Retrieve Content
        2.1.2. Endpoint Identification and Assessment Planning
        2.1.3. Endpoint Posture Attribute Value Collection
        2.1.4. Posture Evaluation
        2.1.5. Mining the Database
   2.2. Usage Scenarios
        2.2.1. Definition and Publication of Automatable Configuration
               Guides
        2.2.2. Automated Checklist Verification
        2.2.3. Detection of Posture Deviations
        2.2.4. Endpoint Information Analysis and Reporting
        2.2.5. Asynchronous Compliance/Vulnerability Assessment at
               Ice Station Zebra
        2.2.6. Identification and Retrieval of Repository Content
        2.2.7. Content Change Detection
        2.2.8. Others...
3. IANA Considerations
4. Security Considerations
5. Acknowledgements
6. Change Log
   6.1. -04- to -05-
   6.2. -03- to -04-
   6.3. -02- to -03-
   6.4. -01- to -02-
   6.5. -00- to -01-
   6.6. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use-cases-00
   6.7. waltermire -04- to -05-
7. References
   7.1. Normative References
   7.2. Informative References
Authors' Addresses
1. Introduction
Our goal with this document is to improve our agreement on which
problems we're trying to solve.  We need to start with short, simple
problem statements and discuss those by email and in person.  Once we
agree on which problems we're trying to solve, we can move on to
propose various solutions and decide which ones to use.

This document describes example use cases for endpoint posture
assessment for enterprises.  It provides a sampling of use cases for
securely aggregating configuration and operational data and
evaluating that data to determine the security posture of individual
skipping to change at page 4, line 41
operator or an application might trigger the process with a
request, or the endpoint might trigger the process using an
event-driven notification.

   QUESTION: Since this is about security automation, can we drop
   the User and just use Application?  Is there a better term to
   use here?  Once the policy is selected, the rest seems like
   something we definitely would want to automate, so I dropped
   the User part.
2.  An operator/application selects one or more target endpoints to
    be assessed.

3.  An operator/application selects which policies are applicable to
    the targets.

4.  For each target:

    A.  The application determines which (sets of) posture attributes
        need to be collected for evaluation.

        QUESTION: It was suggested that mentioning several common
        acquisition methods, such as local API, WMI, Puppet, DCOM,
        SNMP, CMDB query, and NEA, without forcing any specific
        method would be good.  I have concerns this could devolve
        into a "what about my favorite?" contest.  OTOH, the
        charter does specifically call for use of existing
        standards where applicable, so the use cases document
        might be a good neutral location for such information, and
        might force us to consider what types of external
        interfaces we might need to support when we consider the
        requirements.  It appears that the generic workflow
        sequence would be a good place to mention such common
        acquisition methods.

    B.  The application might retrieve previously collected
        information from a cache or data store, such as a data store
        populated by an asset management system.

    C.  The application might establish communication with the
        target, mutually authenticate identities and authorizations,
        and collect posture attributes from the target.

    D.  The application might establish communication with one or
        more intermediary/agents, mutually authenticate their
        identities and determine authorizations, and collect posture
        attributes about the target from the intermediary/agents.
        Such agents might be local or external.

    E.  The application communicates target identity and (sets of)
        collected attributes to an evaluator, possibly an external
        process or external system.

    F.  The evaluator compares the collected posture attributes with
        expected values as expressed in policies.

        QUESTION: Evaluator generates a report or log or
        notification of some type?
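The following non-normative Python sketch illustrates one way the
generic workflow above might be realized.  All names and data
structures shown here are hypothetical and do not represent any SACM
data model, protocol, or architecture.

   # Non-normative sketch of the generic workflow above.
   POLICY = {                      # expected values selected in steps 1-3
       "os_version": "6.2",
       "firewall_enabled": True,
   }

   ENDPOINTS = {                   # simulated posture data per target
       "host-1": {"os_version": "6.2", "firewall_enabled": True},
       "host-2": {"os_version": "6.1", "firewall_enabled": False},
   }

   def collect(target, attribute_names):
       """Steps 4.B-D: gather the needed posture attributes."""
       posture = ENDPOINTS[target]
       return {name: posture.get(name) for name in attribute_names}

   def evaluate(collected, policy):
       """Steps 4.E-F: compare collected values with expected values."""
       return {name: collected.get(name) == expected
               for name, expected in policy.items()}

   for target in ENDPOINTS:                    # step 2: selected targets
       attrs = collect(target, POLICY.keys())  # step 4.A-D
       print(target, evaluate(attrs, POLICY))  # step 4.E-F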
2.1. Use Cases

The following subsections detail specific use cases for assessment
planning, data collection, analysis, and related operations
pertaining to the publication and use of supporting content.
2.1.1. Define, Publish, Query and Retrieve Content
This use case describes the need for content to be defined and
published to a data store, as well as queried and retrieved from the
data store for the explicit use of posture collection and evaluation.
It is expected that multiple information models will be supported to
address the exchange of endpoint metadata and the collection and
evaluation of endpoint posture attribute values.  It is likely that
multiple data models will be used to express these information
models, requiring specialized or extensible content data stores.
The building blocks of this use case are:
Content Definition: Defining the content to drive collection and
evaluation.  This may include evaluating existing stores of
content to find content to reuse, as well as creating new
content.  Developed content will be based on available data
models, which may be standardized or proprietary.
Content Publication: The capability to publish content to a content
data store for further use.  Published content may be made
publicly available, or access to it may be restricted based on an
authorization decision using authenticated credentials.  As a
result, the visibility
of content to an operator or application may be public,
enterprise-scoped, private, or controlled within any other
scope.
Content Query: An operator or application should be able to query a
content data store using a set of specified criteria. The
result of the query will be a listing of content matching the
query criteria.  The
query result listing may contain publication metadata (e.g.,
create date, modified date, publisher, etc.) and/or the full
content, a summary, snippet, or the location to retrieve the
content.
Content Retrieval: The act of acquiring one or more specific content
entries. This capability is useful if the location of the
content is known a priori, perhaps as the result of a request
based on decisions made using information from a previous
query.
Content Change Detection: An operator or application needs to
identify content of interest that is new, updated, or deleted
in a content data store which they have been authorized to
access.
These building blocks are used to enable acquisition of various
instances of content based on specific data models that are used to
drive assessment planning (see section 2.1.2), posture attribute
value collection (see section 2.1.3), and posture evaluation (see
section 2.1.4).
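As a non-normative illustration of how the Content Definition,
Publication, Query, and Retrieval building blocks might fit together,
the following Python sketch models a content data store as a simple
in-memory list.  The field names and API shown are hypothetical.

   # Non-normative sketch of a content data store.
   from datetime import datetime, timezone

   STORE = []   # stands in for a content data store

   def publish(content_id, body, publisher):
       """Content Publication: add an entry with publication metadata."""
       STORE.append({"id": content_id, "body": body,
                     "publisher": publisher,
                     "created": datetime.now(timezone.utc)})

   def query(**criteria):
       """Content Query: list metadata for entries matching all criteria."""
       return [{k: e[k] for k in ("id", "publisher", "created")}
               for e in STORE
               if all(e.get(k) == v for k, v in criteria.items())]

   def retrieve(content_id):
       """Content Retrieval: fetch a specific entry by its identifier."""
       return next(e for e in STORE if e["id"] == content_id)

   publish("checklist-001", "<checklist .../>", "vendor-a")
   listing = query(publisher="vendor-a")     # metadata listing only
   full = retrieve(listing[0]["id"])         # full content on demand
   print(listing[0]["id"], full["body"])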
2.1.2. Endpoint Identification and Assessment Planning
This use case describes the process of discovering endpoints,
understanding their composition, identifying the desired state to
assess against, and calculating what posture attributes to collect to
enable evaluation. This process may be a set of manual, automated,
or hybrid steps that are performed for each assessment.
The building blocks of this use case are:
Endpoint Discovery: The purpose of discovery is to determine the
type of endpoint to be posture assessed.
QUESTION: Is it just the type? Or is it to identify what
endpoint instances to target for assessment using metadata such
as the endpoint's organizationally expected type (e.g.,
expected function/role, etc.)?
Identify Endpoint Targets: Determine the candidate endpoint target(s)
to perform the assessment against. Depending on the assessment
trigger, a single endpoint may be targeted or multiple
endpoints may be targeted based on discovered endpoint
metadata. This may be driven by content that describes the
applicable targets for assessment. In this case the Content
Query and/or Content Retrieval building blocks (see section
2.1.1) may be used to acquire this content.
Endpoint Component Inventory: To determine what applicable desired
states should be assessed, it is first necessary to acquire the
inventory of software, hardware, and accounts associated with
the targeted endpoint(s). If the assessment of the endpoint is
not dependent on the component inventory, then this capability
is not required for use in performing the assessment. This
process can be treated as a collection use case for specific
posture attributes. In this case the building blocks for
Endpoint Posture Attribute Value Collection (see section 2.1.3)
can be used.
Posture Attribute Identification: Once the endpoint targets and
component inventory are known, it is then necessary to determine
which posture attributes need to be collected to perform
the evaluation. If this is driven by content, then the Content
Query and/or Content Retrieval building blocks (see section
2.1.1) may be used to acquire this content.
QUESTION: Are we missing a building block that determines what
previously collected data, if any, is suitable for evaluation and
what data needs to be actually collected?
At this point the set of posture attributes to use for evaluation
is known, and the corresponding values can be collected if necessary (see
section 2.1.3).
2.1.3. Endpoint Posture Attribute Value Collection
This use case describes the process of collecting a set of posture
attribute values related to one or more endpoints. This use case can
be initiated by a variety of triggers including:
1. A posture change or significant event on the endpoint.
2. A network event (e.g., endpoint connects to a network/VPN,
specific netflow is detected).
3. A scheduled or ad hoc collection task.
The building blocks of this use case are:
Collection Content Acquisition: If content is required to drive the
collection of posture attribute values, this capability is
used to acquire this content from one or more content data
stores. Depending on the trigger, the specific content to
acquire might be known. If not, it may be necessary to
determine the content to use based on the component inventory
or other assessment criteria. The Content Query and/or Content
Retrieval building blocks (see section 2.1.1) may be used to
acquire this content.
Posture Attribute Value Collection: The accumulation of posture
attribute values. This may be based on collection content that
is associated with the posture attributes.
Once the posture attribute values are collected, they may be
persisted for later use or they may be immediately used for posture
evaluation.
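The following non-normative Python sketch illustrates posture
attribute value collection driven by collection content.  The
attribute names, the simulated endpoint, and the result layout are
hypothetical and imply no particular collection protocol.

   # Non-normative sketch of posture attribute value collection.
   from datetime import datetime, timezone

   COLLECTION_CONTENT = ["os_version", "installed_patches",
                         "firewall_enabled"]

   SIMULATED_ENDPOINT = {"os_version": "6.2",
                         "installed_patches": ["KB1", "KB2"],
                         "firewall_enabled": True}

   def read_attribute(name):
       """Stand-in for a local API, SNMP, WMI, or similar data path."""
       return SIMULATED_ENDPOINT.get(name)

   def collect(target_id, content):
       """Accumulate the posture attribute values named by the content."""
       return {"target": target_id,
               "collected_at": datetime.now(timezone.utc).isoformat(),
               "attributes": {name: read_attribute(name)
                              for name in content}}

   snapshot = collect("host-1", COLLECTION_CONTENT)
   print(snapshot)   # may be persisted or passed directly to evaluation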
2.1.4. Posture Evaluation
This use case describes the process of evaluating collected posture
attribute values representing actual endpoint state against the
expected state selected for the assessment. This use case can be
initiated by a variety of triggers including:
1. A posture change or significant event on the endpoint.
2. A network event (e.g., endpoint connects to a network/VPN,
specific netflow is detected).
3. A scheduled or ad hoc evaluation task.
The building blocks of this use case are:
Posture Attribute Value Query: If previously collected posture
attribute values are needed, the appropriate data stores are
queried to retrieve them. If all posture attribute values are
provided directly for evaluation, then this capability may not
be needed.
Evaluation Content Acquisition: If content is required to drive the
evaluation of posture attribute values, this capability is
used to acquire this content from one or more content data
stores. Depending on the trigger, the specific content to
acquire might be known. If not, it may be necessary to
determine the content to use based on the component inventory
or other assessment criteria. The Content Query and/or Content
Retrieval building blocks (see section 2.1.1) may be used to
acquire this content.
Posture Attribute Evaluation: The comparison of posture attribute
values against their expected results as expressed in the
specified content. The result of this comparison is output as
a set of posture evaluation results.
Completion of this process represents a complete assessment cycle as
defined in Section 2.
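The following non-normative Python sketch illustrates posture
evaluation as the comparison of collected posture attribute values
against expected values expressed in evaluation content.  The rule
format shown is hypothetical and is not a SACM data model.

   # Non-normative sketch of posture evaluation.
   EVALUATION_CONTENT = [
       {"attribute": "os_version",        "operator": "equals",
        "expected": "6.2"},
       {"attribute": "firewall_enabled",  "operator": "equals",
        "expected": True},
       {"attribute": "installed_patches", "operator": "contains",
        "expected": "KB2"},
   ]

   COLLECTED = {"os_version": "6.2",
                "firewall_enabled": False,
                "installed_patches": ["KB1", "KB2"]}

   def evaluate(collected, rules):
       """Compare actual values against expected values in the rules."""
       results = []
       for rule in rules:
           actual = collected.get(rule["attribute"])
           if rule["operator"] == "equals":
               passed = actual == rule["expected"]
           else:  # "contains"
               passed = rule["expected"] in (actual or [])
           results.append({"attribute": rule["attribute"],
                           "pass": passed})
       return results

   print(evaluate(COLLECTED, EVALUATION_CONTENT))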
2.1.5. Mining the Database
This use case describes the need to analyze previously collected
posture attribute values from one or more endpoints. This is an
alternate use case to Posture Evaluation (see section 2.1.4) that uses
collected posture attribute values for analysis processes that may
do more than evaluate expected vs. actual state(s).
The building blocks of this use case are:
Query: Query a data store for specific posture attribute values.
Change Detection: An operator should have a mechanism to detect the
availability of new posture attribute values or changes to
existing values.  The timeliness of detection may vary from immediate to
on demand. Having the ability to filter what changes are
detected will allow the operator to focus on the changes that
are relevant to their use.
QUESTION: Does this warrant a separate use case, or should this be
incorporated into the previous use case?
2.2. Usage Scenarios
In this section, we describe a number of usage scenarios that utilize
aspects of endpoint posture assessment. These are examples of common
problems that can be solved with the building blocks defined above.
2.2.1. Definition and Publication of Automatable Configuration Guides
A vendor manufactures a number of specialized endpoint devices.  They
also develop and maintain an operating system for these devices that
enables end-user organizations to configure a number of security and
operational settings.  As part of their customer support activities,
they publish a number of secure configuration guides that provide
minimum security guidelines for configuring their devices.

Each guide they produce applies to a specific model of device and
version of the operating system and provides a number of specialized
configurations depending on the device's intended function and what
add-on hardware modules and software licenses are installed on the
device.  To enable their customers to evaluate the security posture
of their devices to ensure that all appropriate minimal security
settings are enabled, they publish an automatable configuration
checklist using a popular data format that defines what settings to
collect using a network management protocol and appropriate values
for each setting.  They publish these checklists to a public content
repository that customers can query to retrieve applicable checklists
for their deployed specialized endpoint devices.

Automatable configuration checklists could also come from sources
other than a device vendor, such as industry groups or regulatory
authorities, or enterprises could develop their own checklists.
This usage scenario employs the following building blocks defined in
Section 2.1.1 above:
Content Definition: To allow content to be defined using
standardized or proprietary data models that will drive
Collection and Evaluation.
Content Publication: Providing a mechanism to publish created
content to a content data store.
Content Query: To locate and select existing content that may be
reused.
Content Retrieval: To retrieve specific content from a content data
store for editing.
While each building block can be used in a manual fashion by a human
operator, it is also likely that these capabilities will be
implemented together in some form of a content editor or generator
application.
2.2.2. Automated Checklist Verification
A financial services company operates a heterogeneous IT environment.
In support of their risk management program, they utilize vendor
provided automatable security configuration checklists for each
operating system and application used within their IT environment.
Multiple checklists are used from different vendors to ensure
adequate coverage of all IT assets.

To identify what checklists are needed, they use automation to gather
an inventory of the software versions utilized by all IT assets in
the enterprise.  This data gathering will involve querying existing
data stores of previously collected endpoint software inventory
posture data and actively collecting data from reachable endpoints as
needed utilizing network and systems management protocols.
Previously collected data may be provided by periodic data
collection, network connection-driven data collection, or ongoing
event-driven monitoring of endpoint posture changes.
Using the gathered hardware and software inventory data and
associated asset management data that may indicate the
organizationally defined functions of each endpoint, the appropriate
checklist content is queried, located, and downloaded from vendor and
third-party content repositories.  This content is cached locally to
reduce the need to download the checklist content multiple times.

Driven by the setting data provided in the checklist, a combination
of existing configuration data stores and data collection methods are
used to gather the appropriate posture attributes from each endpoint.
Specific data is gathered based on the defined enterprise function
and software inventory of each endpoint.  The data collection paths
used to collect software inventory posture will be used again for
this purpose.  Once the data is gathered, the actual state is
evaluated against the expected state criteria in each applicable
checklist.  The results of this evaluation are provided to
appropriate operators and applications to drive additional business
logic.

Checklists could include searching for indicators of compromise on
the endpoint (e.g., file hashes); identifying malicious activity
(e.g., command and control traffic); detecting presence of
unauthorized/malicious software, hardware, and configuration items;
and other indicators.

A checklist can be assessed as a whole, or a specific subset of the
checklist can be assessed, resulting in partial data collection and
evaluation.

Checklists could also come from sources other than the application or
OS vendor, such as industry groups or regulatory authorities, or
enterprises could develop their own checklists.
While specific applications for checklist results are out of scope
for current SACM efforts, how the data is used may illuminate
specific latency and bandwidth requirements.  For this purpose, uses
of checklist assessment results may include, but are not limited to:

o  Detecting endpoint posture deviations as part of a change
   management program, including changes to hardware and software
   inventory (including patches), changes to configuration items, and
   other posture aspects.

o  Determining compliance with organizational policies governing
   endpoint posture.

o  Searching for current and historic signs of infection by malware
   and determining the scope of infection within an enterprise.

o  Informing configuration management, patch management, and
   vulnerability mitigation and remediation decisions.

o  Detecting performance, attack, and vulnerable conditions that
   warrant additional network diagnostics, monitoring, and analysis.

o  Informing network access control decision making for wired,
   wireless, or VPN connections.

This usage scenario employs the following building blocks defined in
Section 2.1 above:

Endpoint Discovery: The purpose of discovery is to determine the
   type of endpoint to be posture assessed.

Identify Endpoint Targets: To identify what potential endpoint
   targets the checklist should apply to based on organizational
   policies.

Endpoint Component Inventory: Collecting and consuming the software
   and hardware inventory for the target endpoints.

Posture Attribute Identification: To determine what data needs to be
   collected to support evaluation, the checklist is evaluated
   against the component inventory and other endpoint metadata to
   determine the set of posture attribute values that are needed.

Collection Content Acquisition: Based on the identified posture
   attributes, the application will query appropriate content data
   stores to find the "applicable" data collection content for
   each endpoint in question.

Posture Attribute Value Collection: For each endpoint, the values
   for the required posture attributes are collected.

Posture Attribute Value Query: If previously collected posture
   attribute values are used, they are queried from the
   appropriate data stores for the target endpoint(s).

Evaluation Content Acquisition: Any content that is needed to
   support evaluation is queried and retrieved.

Posture Attribute Evaluation: The resulting posture attribute values
   from previous Collection processes are evaluated using the
   evaluation content to provide a set of posture results.
2.2.3. Detection of Posture Deviations
Example Corporation has established secure configuration baselines
for each different type of endpoint within their enterprise,
including: network infrastructure, mobile, client, and server
computing platforms.  These baselines define an approved list of
hardware, software (i.e., operating system, applications, and
patches), and associated required configurations.  When an endpoint
connects to the network, the appropriate baseline configuration is
communicated to the endpoint based on its location in the network,
the expected function of the device, and other asset management data.
It is checked for compliance with the baseline, indicating any
deviations to the device's operators.  Once the baseline has been
established, the endpoint is monitored for any change events
pertaining to the baseline on an ongoing basis.  When a change occurs
to posture defined in the baseline, updated posture information is
exchanged, allowing operators to be notified and/or automated action
to be taken.
Like the Automated Checklist Verification usage scenario (see section
2.2.2), this usage scenario supports assessment of checklists.  It
differs from that scenario by monitoring for specific endpoint
posture changes on an ongoing basis.  When the endpoint detects a
posture change, an alert is generated identifying the specific
changes in posture, allowing a delta assessment to be performed
instead of the full assessment described in the previous scenario.
This usage scenario employs the same building blocks as Automated
Checklist Verification (see section 2.2.2).  It differs slightly in
how it uses the following building blocks:

Endpoint Component Inventory: Additionally, changes to the hardware
   and software inventory are monitored, with changes causing
   alerts to be issued.

Posture Attribute Value Collection: After the initial assessment,
   posture attributes are monitored for changes.  If any of the
   selected posture attribute values change, an alert is issued.

Posture Attribute Value Query: The previous state of posture
   attributes is tracked, allowing changes to be detected.

Posture Attribute Evaluation: After the initial assessment, a
   partial evaluation is performed based on changes to specific
   posture attributes.

This usage scenario highlights the need to query a data store to
prepare a compliance report for a specific endpoint and also the need
for a change in endpoint state to trigger Collection and Evaluation.
2.2.4. Endpoint Information Analysis and Reporting
Freed from the drudgery of manual endpoint compliance monitoring, one
of the security administrators at Example Corporation notices (not
using SACM standards) that five endpoints have been uploading lots of
data to a suspicious server on the Internet.  The administrator
queries data stores for specific endpoint posture to see what
software is installed on those endpoints and finds that they all have
a particular program installed.  She then queries the appropriate
data stores to see which other endpoints have that program installed.
All these endpoints are monitored carefully (not using SACM
standards), which allows the administrator to detect that the other
endpoints are also infected.

This is just one example of the useful analysis that a skilled
analyst can do using data stores of endpoint posture.
This usage scenario employs the following building blocks defined in
Section 2.1 above:

Posture Attribute Value Query: Previously collected posture
   attribute values are queried from the appropriate data stores
   for the target endpoint(s).

QUESTION: Should we include other building blocks here?

This usage scenario highlights the need to query a repository for
attributes to see which attributes certain endpoints have in common.

2.2.5. Asynchronous Compliance/Vulnerability Assessment at Ice Station
       Zebra
A university team receives a grant to do research at a government
facility in the arctic.  The only network communications will be via
an intermittent, low-speed, high-latency, high-cost satellite link.
During their extended expedition they will need to show continued
compliance with the security policies of the university, the
government, and the provider of the satellite network, as well as
keep current on vulnerability testing.  Interactive assessments are
therefore not reliable, and since the researchers have very limited
funding they need to minimize how much money they spend on network
skipping to change at page 16, line 10
The collection request is queued for the next window of connectivity.
The deployed assets eventually receive the request, fulfill it, and
queue the results for the next return opportunity.

The collected artifacts eventually make it back to the university,
where the level of compliance and vulnerability exposure is
calculated and asset characteristics are compared to what is in the
asset management system for accuracy and completeness.
Like the Automated Checklist Verification usage scenario (see section
2.2.2), this usage scenario supports assessment of checklists.  It
differs from that scenario in how content, collected posture values,
and evaluation results are exchanged due to bandwidth limitations and
availability.  This usage scenario employs the same building blocks
as Automated Checklist Verification (see section 2.2.2).  It differs
slightly in how it uses the following building blocks:

Endpoint Component Inventory: It is likely that the component
   inventory will not change.  If it does, this information will
   need to be batched and transmitted during the next
   communication window.

Collection Content Acquisition: Due to intermittent communication
   windows and bandwidth constraints, changes to collection
   content will need to be batched and transmitted during the next
   communication window.  Content will need to be cached locally
   to avoid the need for remote communications.

Posture Attribute Value Collection: The specific posture attribute
   values to be collected are identified remotely and batched for
   collection during the next communication window.  If a delay is
   introduced for collection to complete, results will need to be
   batched and transmitted in the same way.

Posture Attribute Value Query: Previously collected posture
   attribute values will be stored in a remote data store for use
   at the university.

Evaluation Content Acquisition: Due to intermittent communication
   windows and bandwidth constraints, changes to evaluation
   content will need to be batched and transmitted during the next
   communication window.  Content will need to be cached locally
   to avoid the need for remote communications.

Posture Attribute Evaluation: Due to the caching of posture
   attribute values and evaluation content, evaluation may be
   performed at both the university campus as well as the
   satellite site.

This usage scenario highlights the need to support low-bandwidth,
intermittent, or high-latency links.
2.2.6. Identification and Retrieval of Repository Content

In preparation for performing an assessment, an operator or
application will need to identify one or more content data stores
that contain the content entries necessary to perform data collection
and evaluation tasks.  The location of a given content entry will
either be known a priori, or known content repositories will need to
be queried to retrieve applicable content.

To query content it will be necessary to define a set of search
criteria.  These criteria will often utilize a logical combination of
publication metadata (e.g., publishing identity, create time,
modification time) and content-specific criteria elements.  Once the
criteria are defined, one or more content data stores will need to be
queried, generating a result set.  Depending on how the results are
used, it may be desirable to return the matching content directly, a
snippet of the content matching the query, or a resolvable location
to retrieve the content at a later time.  The content matching the
query will be restricted based on the authorized level of access
allowed to the requester.

If the location of content is identified in the query result set, the
content will be retrieved when needed using one or more content
retrieval requests.  A variation on this approach would be to
maintain a local cache of previously retrieved content.  In this
case, only content that is determined to be stale by some measure
will be retrieved from the remote content store.

Alternately, content can be discovered by iterating over content
published with a given context within a content repository.  Specific
content can be selected and retrieved as needed.

This usage scenario employs the following building blocks defined in
Section 2.1.1 above:
Content Query: Enables an operator or application to query one or
   more content data stores for content using a set of specified
   criteria.

Content Retrieval: If content locations are returned in the query
   result set, then specific content entries can be retrieved and
   possibly cached locally.
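As a non-normative illustration of the caching variation described
above, the following Python sketch retrieves a content entry remotely
only when the locally cached copy is missing or judged stale.  The
one-day staleness threshold and the function names are hypothetical
examples, not requirements.

   # Non-normative sketch of content retrieval with a local cache.
   from datetime import datetime, timedelta, timezone

   CACHE = {}                      # content_id -> (retrieved_at, body)
   MAX_AGE = timedelta(days=1)     # arbitrary staleness measure

   def retrieve_remote(content_id):
       """Stand-in for a retrieval request to a remote content store."""
       return "<content id='%s'/>" % content_id

   def get_content(content_id):
       now = datetime.now(timezone.utc)
       entry = CACHE.get(content_id)
       if entry and now - entry[0] < MAX_AGE:
           return entry[1]                  # fresh enough: use the cache
       body = retrieve_remote(content_id)   # otherwise fetch remotely
       CACHE[content_id] = (now, body)
       return body

   print(get_content("checklist-001"))      # remote retrieval
   print(get_content("checklist-001"))      # served from the local cache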
2.16. Repository Interaction - Filtered Delta Assessment 2.2.7. Content Change Detection
Before heading out on a road trip, a rep checks out an iOS tablet An operator or application may need to identify new, updated, or
computer from the IT department. Before turning over the laptop the deleted content in a content repository for which they have been
IT administrator first initiates a quick assessment to see if any new authorized to access. This may be achieved by querying or iterating
vulnerabilities that potentially yield remote access or local over content in a content repository, or through a notification
privilege escalation have been identified for that device type since mechanism that alerts to changes made to a content repository.
the last time the device had had a full assessment.
2.17. Direct Human Retrieval of Ancillary Materials. Once content changes have been determined, data collection and
evaluation activities may be triggered.
Preceding a HIPAA assessment the local SSO wants to review the HIPAA This usage scenario employs the following building blocks defined in
regulations to determine which assets do or do not fall under the Section 2.1.1 above:
regulation. Following the assessment he again queries the content
repository for more information about remediation strategies and
employee training materials.
2.18. Register with repository for immediate notification of new
      security vulnerability content that matches a selection filter

Interested in reducing the exposure time to new vulnerabilities and
compliance policy changes, the IT administrator registers with his
subscribed content repositories to receive immediate notification of
any changes to the vulnerability and compliance content that applies
to his managed assets. Receipt of notifications triggers an immediate
delta assessment against those assets that potentially match.

Alternately, content can be discovered by iterating over content
published with a given context within a content repository. Specific
content can be selected and retrieved as needed.

This usage scenario employs the following building blocks defined in
Section 2.1.1 above:

Content Query:  Enables an operator or application to query one or
   more content data stores for content using a set of specified
   criteria.

Content Retrieval:  If content locations are returned in the query
   result set, then specific content entries can be retrieved and
   possibly cached locally.
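A minimal, non-normative sketch of these two building blocks against
an in-memory stand-in for a content repository follows. The entry
fields and identifiers are illustrative assumptions; a real
deployment would query one or more remote content data stores.

   class ContentRepository:
       def __init__(self, entries):
           # Each entry is a dict carrying an identifier, descriptive
           # metadata, and the content payload itself.
           self.entries = entries

       def query(self, criteria):
           # Content Query: return the identifiers (locations) of all
           # entries whose metadata matches every criterion exactly.
           return [e["id"] for e in self.entries
                   if all(e.get(k) == v for k, v in criteria.items())]

       def retrieve(self, content_id):
           # Content Retrieval: fetch one specific entry by identifier.
           return next(e for e in self.entries if e["id"] == content_id)

   repo = ContentRepository([
       {"id": "chk-001", "type": "checklist",
        "applies_to": "ios-tablet", "data": "..."},
       {"id": "vuln-042", "type": "vulnerability",
        "applies_to": "ios-tablet", "data": "..."},
   ])

   # Query for content applying to one endpoint classification, then
   # retrieve (and locally cache) each matching entry.
   cache = {cid: repo.retrieve(cid)
            for cid in repo.query({"applies_to": "ios-tablet"})}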
2.2.7. Content Change Detection

An operator or application may need to identify new, updated, or
deleted content in a content repository which they have been
authorized to access. This may be achieved by querying or iterating
over content in a content repository, or through a notification
mechanism that alerts to changes made to a content repository.

Once content changes have been determined, data collection and
evaluation activities may be triggered.

This usage scenario employs the following building blocks defined in
Section 2.1.1 above:

Content Change Detection:  Allows an operator or application to
   identify content changes in a content data store which they have
   been authorized to access.

Content Retrieval:  If content locations are provided by the change
   detection mechanism, then specific content entries can be
   retrieved and possibly cached locally.
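The following sketch illustrates a polling-based approach to change
detection; a deployment could equally rely on push notifications
from the repository. The snapshot format and names are assumptions
made for illustration only.

   def detect_changes(previous_index, current_index):
       # Compare two {content_id: version} snapshots of a repository
       # and report the added, updated, and deleted identifiers.
       added = [c for c in current_index if c not in previous_index]
       updated = [c for c in current_index
                  if c in previous_index
                  and current_index[c] != previous_index[c]]
       deleted = [c for c in previous_index if c not in current_index]
       return added, updated, deleted

   previous = {"vuln-041": 1, "chk-001": 3}
   current = {"vuln-041": 2, "chk-001": 3, "vuln-042": 1}

   added, updated, deleted = detect_changes(previous, current)

   # New or updated content triggers retrieval of that content and a
   # delta assessment of the assets to which it applies.
   for content_id in added + updated:
       print("retrieve and reassess against:", content_id)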
2.2.8. Others...

Additional use cases will be identified as we work through other
domains.
3. IANA Considerations

This memo includes no request to IANA.

4. Security Considerations

This memo documents, for Informational purposes, use cases for
security automation. While it is about security, it does not affect
security.

5. Acknowledgements
skipping to change at page 13, line 21 skipping to change at page 18, line 51
5. Acknowledgements
The National Institute of Standards and Technology (NIST) and/or the
MITRE Corporation have developed specifications under the general
term "Security Automation" including languages, protocols,
enumerations, and metrics.
Adam Montville edited early versions of this draft.

Kathleen Moriarty and Stephen Hanna contributed text describing the
scope of the document.

Steve Hanna provided use cases for Search for Signs of Infection,
Remediation and Mitigation, and Endpoint Information Analysis and
Reporting.

Gunnar Engelbach provided the use case about Ice Station Zebra, and
use cases regarding the content repository.

Gunnar Engelbach, Steve Hanna, Chris Inacio, Kent Landfield, Lisa
Lorenzin, Adam Montville, Kathleen Moriarty, Nancy Cam-Winget, and
Aron Woland provided use case text for various revisions of this
draft.
6. Change Log

6.1. -04- to -05-
Changes in this revision are focused on section 2 and the subsequent
subsections:
o Moved existing use cases to a subsection titled "Usage Scenarios".
o Added a new subsection titled "Use Cases" to describe the common
use cases and building blocks used to address the "Usage
Scenarios". The new use cases are:
* Define, Publish, Query and Retrieve Content
* Endpoint Identification and Assessment Planning
* Endpoint Posture Attribute Value Collection
* Posture Evaluation
* Mining the Database
o Added a listing of building blocks used for all usage scenarios.
o Combined the following usage scenarios into "Automated Checklist
Verification": "Organizational Software Policy Compliance",
"Search for Signs of Infection", "Vulnerable Endpoint
Identification", "Compromised Endpoint Identification",
"Suspicious Endpoint Behavior", "Traditional endpoint assessment
with stored results", "NAC/NAP connection with no stored results
using an endpoint evaluator", and "NAC/NAP connection with no
stored results using a third-party evaluator".
o Created new usage scenario "Identification and Retrieval of
Repository Content" by combining the following usage scenarios:
"Repository Interaction - A Full Assessment" and "Repository
Interaction - Filtered Delta Assessment"
o Renamed "Register with repository for immediate notification of
new security vulnerability content that match a selection filter"
to "Content Change Detection" and generalized the description to
be neutral to implementation approaches.
o Removed out-of-scope usage scenarios: "Remediation and Mitigation"
and "Direct Human Retrieval of Ancillary Materials".

Updated acknowledgements to recognize those who helped with editing
the use case text.
6.2. -03- to -04-
Added four new use cases regarding content repository.

6.3. -02- to -03-

Expanded the workflow description based on ML input.

Changed the ambiguous "assess" to better separate data collection
from evaluation.

Added use case for Search for Signs of Infection.

Added use case for Remediation and Mitigation.
skipping to change at page 14, line 22 skipping to change at page 21, line 5
third-party evaluator.

Added use case for Compromised Endpoint Identification.

Added use case for Suspicious Endpoint Behavior.

Added use case for Vulnerable Endpoint Identification.

Updated Acknowledgements

6.4. -01- to -02-

Changed title

removed section 4, expecting it will be moved into the requirements
document.

removed the list of proposed capabilities from section 3.1

Added empty sections for Search for Signs of Infection, Remediation
and Mitigation, and Endpoint Information Analysis and Reporting.

Removed Requirements Language section and rfc2119 reference.

Removed unused references (which ended up being all references).

6.5. -00- to -01-

o Work on this revision has been focused on document content
  relating primarily to use of asset management data and functions.

o Made significant updates to section 3 including:

  * Reworked introductory text.

  * Replaced the single example with multiple use cases that focus
    on more discrete uses of asset management data to support
skipping to change at page 15, line 37 skipping to change at page 22, line 20
"Deconfliction of Asset Identities". "Deconfliction of Asset Identities".
* Expanded the subsections for: Asset Identification, Asset * Expanded the subsections for: Asset Identification, Asset
Characterization, and Deconfliction of Asset Identities. Characterization, and Deconfliction of Asset Identities.
* Added a new subsection for Asset Targeting. * Added a new subsection for Asset Targeting.
* Moved remaining sections to "Other Unedited Content" for future * Moved remaining sections to "Other Unedited Content" for future
updating. updating.
6.5. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use-cases-00 6.6. draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use-cases-00
o Transitioned from individual I/D to WG I/D based on WG consensus o Transitioned from individual I/D to WG I/D based on WG consensus
call. call.
o Fixed a number of spelling errors. Thank you Erik! o Fixed a number of spelling errors. Thank you Erik!
o Added keywords to the front matter. o Added keywords to the front matter.
o Removed the terminology section from the draft. Terms have been o Removed the terminology section from the draft. Terms have been
moved to: draft-dbh-sacm-terminology-00 moved to: draft-dbh-sacm-terminology-00
skipping to change at page 16, line 27 skipping to change at page 23, line 11
is important.

  * Added new sections, partially integrated existing content.

  * Additional text is needed in all of the sub-sections.

o Changed "Security Change Management" to "Endpoint Posture Change
  Management". Added new skeletal outline sections for future
  updates.

6.7. waltermire -04- to -05-

o Are we including user activities and behavior in the scope of this
  work? That seems to be layer 8 stuff, appropriate to an IDS/IPS
  application, not Internet stuff.

o I removed the references to what the WG will do because this
  belongs in the charter, not the (potentially long-lived) use cases
  document. I removed mention of charter objectives because the
  charter may go through multiple iterations over time; there is a
  website for hosting the charter; this document is not the correct
 End of changes. 70 change blocks. 
293 lines changed or deleted 598 lines changed or added
