<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY RFC2119 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml">
<!ENTITY RFC2629 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2629.xml">
<!ENTITY RFC7632 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.7632.xml">
<!ENTITY I-D.ietf-sacm-requirements SYSTEM "http://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-sacm-requirements.xml">
<!ENTITY critical-controls SYSTEM "http://www.counciloncybersecurity.org/critical-controls/">
<!ENTITY charter-ietf-sacm-01 SYSTEM "https://datatracker.ietf.org/doc/charter-ietf-sacm/">
]>
<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<!-- used by XSLT processors -->
<!-- For a complete list and description of processing instructions (PIs),
please see http://xml.resource.org/authoring/README.html. -->
<!-- Below are generally applicable Processing Instructions (PIs) that most I-Ds might want to use.
(Here they are set differently than their defaults in xml2rfc v1.32) -->
<?rfc strict="yes" ?>
<!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC) -->
<?rfc toc="yes"?>
<!-- generate a ToC -->
<?rfc tocdepth="4"?>
<!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references -->
<?rfc symrefs="yes"?>
<!-- use symbolic references tags, i.e, [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?>
<!-- sort the reference entries alphabetically -->
<!-- control vertical white space
(using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="no" ?>
<!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?>
<!-- keep one blank line between list items -->
<!-- end of list of popular I-D processing instructions -->
<rfc category="info"
docName="draft-coffin-sacm-vuln-scenario-latest"
ipr="trust200902">
<!-- category values: std, bcp, info, exp, and historic
ipr values: full3667, noModification3667, noDerivatives3667
you can add the attributes updates="NNNN" and obsoletes="NNNN"
they will automatically be output with "(if approved)" -->
<!-- ***** FRONT MATTER ***** -->
<front>
<title abbrev="SACM Vuln Scenario">SACM Vulnerability Assessment Scenario</title>
<!-- Another author who claims to be an editor -->
<author fullname="Christopher Coffin" initials="C.C."
surname="Coffin">
<organization>The MITRE Corporation</organization>
<address>
<postal>
<street>202 Burlington Road</street>
<!-- Reorder these if your country does things differently -->
<city>Bedford</city>
<region>MA</region>
<code>01730</code>
<country>USA</country>
</postal>
<phone/>
<email>[email protected]</email>
<!-- uri and facsimile elements may also be added -->
</address>
</author>
<!-- Another author who claims to be an editor -->
<author fullname="Brant Cheikes" initials="B.C."
surname="Cheikes">
<organization>The MITRE Corporation</organization>
<address>
<postal>
<street>202 Burlington Road</street>
<!-- Reorder these if your country does things differently -->
<city>Bedford</city>
<region>MA</region>
<code>01730</code>
<country>USA</country>
</postal>
<phone/>
<email>[email protected]</email>
<!-- uri and facsimile elements may also be added -->
</address>
</author>
<!-- Another author who claims to be an editor -->
<author fullname="Charles Schmidt" initials="C.S."
surname="Schmidt">
<organization>The MITRE Corporation</organization>
<address>
<postal>
<street>202 Burlington Road</street>
<!-- Reorder these if your country does things differently -->
<city>Bedford</city>
<region>MA</region>
<code>01730</code>
<country>USA</country>
</postal>
<phone/>
<email>[email protected]</email>
<!-- uri and facsimile elements may also be added -->
</address>
</author>
<author fullname="Daniel Haynes" initials="D.H."
surname="Haynes">
<organization>The MITRE Corporation</organization>
<address>
<postal>
<street>202 Burlington Road</street>
<!-- Reorder these if your country does things differently -->
<city>Bedford</city>
<region>MA</region>
<code>01730</code>
<country>USA</country>
</postal>
<phone/>
<email>[email protected]</email>
<!-- uri and facsimile elements may also be added -->
</address>
</author>
<author fullname="Jessica Fitzgerald-McKay"
initials="J.M." surname="Fitzgerald-McKay">
<organization>Department of Defense</organization>
<address>
<postal>
<street>9800 Savage Road</street>
<city>Ft. Meade</city>
<region>Maryland</region>
<country>USA</country>
</postal>
<email>[email protected]</email>
</address>
</author>
<author fullname="David Waltermire" initials="D.W."
surname="Waltermire">
<organization>National Institute of Standards and
Technology</organization>
<address>
<postal>
<street>100 Bureau Drive</street>
<city>Gaithersburg</city>
<region>Maryland</region>
<code>20877</code>
<country>USA</country>
</postal>
<email>[email protected]</email>
</address>
</author>
<date year="2016"/>
<!-- Meta-data Declarations -->
<area>General</area>
<workgroup>SACM</workgroup>
<!-- WG name at the upperleft corner of the doc,
IETF is fine for individual submissions.
If this element is not present, the default is "Network Working Group",
which is used by the RFC Editor as a nod to the history of the IETF. -->
<keyword>todo</keyword>
<!-- Keywords will be incorporated into HTML output
files in a meta tag but they have no effect on text or nroff
output. If you submit your draft to the RFC Editor, the
keywords will be used for the search engine. -->
<abstract>
<t>This document provides a core narrative that walks
through an automated enterprise vulnerability
assessment scenario. It is aligned with the SACM use
cases and begins with an enterprise ingesting
vulnerability description data, followed by identifying
endpoints on the network and collecting and storing information
about them to enable posture assessment,
and finally ends with assessing these
endpoints against the vulnerability description
data to determine which ones are affected.
Processes that specifically overlap between this
scenario and SACM use cases will be noted where
applicable. Specifically, the relationship between
this document and the SACM use case building block
capabilities and the usage scenarios will be
covered.</t>
</abstract>
</front>
<middle>
<section title="Scope">
<t>The purpose of this document is to describe a
detailed scenario for vulnerability assessment, and
identify aspects of this scenario that could be used
in the development of an information model. This
includes classes of data, major roles, and a
high-level description of role interactions.
Additionally, this scenario intends to inform
engineering work on protocol and data model
development. The focus of the document is entirely
intra-organizational and covers enterprise handling
of vulnerability description data. The document does
not attempt to cover the security disclosure itself
and any prior activities of the security researcher
or discloser, nor does it attempt to cover the
specific activities of the vendor whose software is
the focus of the vulnerability description data
(i.e., the vulnerable software).</t>
<t>For the purposes of this document, the term
"vulnerability description data" is intended to
mean: "Data intended to alert enterprise IT
resources to the existence of a flaw or flaws in
software, hardware, and/or firmware, which could
potentially have an impact on enterprise
functionality and/or security." For the purpose of
this scenario, such data also includes information
that can be used to determine (to some level of
accuracy, although possibly not conclusively)
whether or not the flaw is present within an
enterprise, when compared to information about the
state of the enterprise's endpoints. For those who
are familiar with current security practices and
terminology, the term "vulnerability description
data" is synonymous with a security bulletin or
advisory.</t>
<t>This document makes no attempt to provide a
definition of a normalized data format (e.g., an
industry standard) for vulnerability description
data, although nothing precludes the development
of such a format. Also,
it does not attempt to define procedures by which a
vulnerability discoverer coordinates the release of
vulnerability description data to other parties.</t>
</section>
<section title="Assumptions">
<t>A number of assumptions must be stated in order to
further clarify the position and scope of this
document.</t>
<t>
<list style="symbols">
<t>The document begins with the assumption that
the enterprise has received vulnerability
description data, and that the data has already
been processed into a format that the
enterprise's security software tools can
understand and use. In particular, this
document: <list style="symbols">
<t>Does not discuss how the enterprise
identifies potentially relevant
vulnerability description data.</t>
<t>Does not discuss how the enterprise
collects the vulnerability description
data.</t>
<t>Does not discuss how the enterprise
assesses the authenticity of the
vulnerability description data.</t>
<t>Does not discuss parsing of the
vulnerability description data into a usable
format.</t>
</list>
</t>
<t>The document assumes that the enterprise has a
means of identifying enterprise endpoints. This
could mean identifying endpoints as they join
the network, actively scanning for connected
endpoints, passive scanning of network traffic
to identify connected endpoints, or some other
method of accounting for the presence of all
endpoints in the enterprise. The document also
does not distinguish between physical endpoints
and virtualized endpoints.</t>
<t>The document assumes that the enterprise has a
means of extracting relevant information about
enterprise endpoints. Moreover, this extracted
information is expressed in a format that is
compatible with the information extracted from
the vulnerability description data. The
document: <list style="symbols">
<t>Does not specify how relevant information
is identified.</t>
<t>Does not specify the mechanics of how
relevant information is extracted from the
data sources (such as the endpoint
itself).</t>
<t>Does not specify how extracted endpoint
information and vulnerability description
data is normalized to be compatible.</t>
</list>Note that having a means of extracting
relevant information about enterprise endpoints
is within the scope of the SACM Endpoint
Security Posture Assessment process. For the
purposes of this document, this sub-process is
assumed to exist.</t>
<t>The document assumes that all information
described in the steps below is available in the
vulnerability description data and serves as the
basis of this assessment. Likewise, the document
assumes that the enterprise can provide all
relevant information about any endpoint needed
to perform the described analysis. The authors
recognize that this will not always be the case,
but these assumptions are taken in order to show
the breadth of data utilization in this
scenario. Less complete information may require
variations to the described steps.</t>
<t>The document assumes that the enterprise has a
policy by which assessment of endpoints based on
vulnerability description data is prioritized.
The document: <list style="symbols">
<t>Does not specify how prioritization
occurs.</t>
<t>Does not specify how prioritization impacts
assessment behaviors.</t>
</list>
</t>
<t>The document assumes that the enterprise has a
mechanism for long-term storage of vulnerability
description data and endpoint assessment
results (e.g., a data repository).</t>
<t>This document assumes that the enterprise has a
procedure for reassessment of endpoints at some
point after initial assessment. The document:
<list style="symbols">
<t>Does not specify how a reassessment would
impact individual assessment behaviors.
(i.e., it is agnostic as to whether the
assessment procedure is the same regardless
of whether this is the first or a subsequent
assessment for some set of vulnerability
description data.)</t>
<t>Does not provide recommendations or
specifics on reassessment intervals.</t>
</list>
</t>
</list>
</t>
</section>
<section
title="Endpoint Identification and Initial (Pre-Assessment) Data Collection">
<t>The first step in this scenario involves
identifying endpoints and collecting the basic
or minimum set of system information attributes
from them, such as operating system type
and version. Further examples of system
information and attributes can be found below in the
section titled Endpoint Data Collection. This identification occurs
prior to the receipt of any specific vulnerability
description data and is part of the regular, ongoing
monitoring of endpoints within an enterprise. This
process is not meant to report on, or gather data for
any specific vulnerabilities. The information gathered
during this step could be applied in many enterprise
automation efforts. Specifically, in addition to
vulnerability management, it could be used by
configuration and license management tasks. All of
the information collected during this step is stored
in a central location such as a Repository.</t>
<t>This activity involves the following sub-steps:</t>
<section title="Identification">
<t>Prior to any other steps, the identification of
endpoints must occur. This involves locating (at
least virtually) and distinguishing between
endpoints on the network in a way that allows each
endpoint to be recognized in future interactions
and selected for specific treatment. This not only
allows later steps to determine the scope of what
endpoints need to be assessed, but also allows for
the unique identification of each endpoint. Unique
and persistent endpoint IDs are used to allow for
endpoints to be tracked over time and between
sensors as well as allow for proper counts of
assets during inventories and other similar
collections. Endpoint identity can be established
by collecting certain attributes that allow for
unique and persistent tracking of endpoints on the
enterprise network. Examples include, but are not
limited to, IP address, MAC address, FQDNs,
pre-provisioned identifiers such as GUIDs or
copies of serial numbers, certificates, hardware
identity values, or similar attributes. It is
important to note that the persistency of these
attributes will likely vary depending on the
enterprise. For example, a statically assigned IP
address is much more persistent than an IP address
assigned via DHCP.</t>
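<t>As a non-normative illustration, the Python sketch
below shows one way an implementation might model such
an endpoint identity record; the field names, the
persistence ordering, and the sample values are
assumptions made for this example and are not defined
by this scenario.</t>
<figure>
<artwork><![CDATA[
# Illustrative sketch only; field names, the persistence
# ordering, and sample values are assumptions, not part of
# this scenario.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EndpointIdentity:
    enterprise_id: str                # unique, persistent ID
    provisioned_guid: Optional[str] = None  # high persistence
    hardware_serial: Optional[str] = None
    certificate_fingerprint: Optional[str] = None
    mac_address: Optional[str] = None
    fqdn: Optional[str] = None
    ip_address: Optional[str] = None  # low persistence (DHCP)

    def correlation_keys(self) -> List[str]:
        """Identifiers available for correlating sensor
        reports, ordered from most to least persistent."""
        candidates = [self.provisioned_guid,
                      self.hardware_serial,
                      self.certificate_fingerprint,
                      self.mac_address, self.fqdn,
                      self.ip_address]
        return [c for c in candidates if c is not None]

# Reports sharing a persistent identifier can be attributed
# to the same endpoint even if its IP address has changed.
ep = EndpointIdentity(enterprise_id="ep-0001",
                      provisioned_guid="hypothetical-guid-1",
                      ip_address="192.0.2.10")
print(ep.correlation_keys())
]]></artwork>
</figure>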
<section title="SACM Use Case Alignment">
<t>This sub-step aligns with the Endpoint Discovery,
Endpoint Characterization, and Endpoint Target
Identification building block capabilities,
because the purpose of this sub-step is to
discover, identify, and
characterize all endpoints on an enterprise
network.</t>
</section>
</section>
<section title="Processing Artifacts">
<t>Processing artifacts, such as the date and time
the collection was performed, should be collected
and stored. This timestamp is extremely important
when performing later assessments, as it is needed
for data freshness computations. The organization
may develop rules for stale data and when a new
data collection is required. This metadata is also
helpful in correlating information across multiple
data collections. This includes correlating both
pre-assessment data and secondary assessment data
(sections 4.3 Endpoint Data Collection and 6.2
Secondary Assessment).</t>
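<t>A minimal, non-normative sketch of how the stored
collection timestamp might feed such a staleness rule
is shown below; the 72-hour maximum age is an assumed,
enterprise-defined policy value.</t>
<figure>
<artwork><![CDATA[
# Illustrative staleness check; the 72-hour threshold is an
# assumed, enterprise-defined value, not something specified
# by this scenario.
from datetime import datetime, timedelta, timezone
from typing import Optional

MAX_AGE = timedelta(hours=72)

def collection_is_stale(collected_at: datetime,
                        now: Optional[datetime] = None,
                        max_age: timedelta = MAX_AGE) -> bool:
    """True when the stored collection is too old to use in
    an assessment and a new collection should be triggered."""
    now = now or datetime.now(timezone.utc)
    return (now - collected_at) > max_age

# Example: a collection performed nine days ago is stale.
last = datetime(2016, 3, 1, 12, 0, tzinfo=timezone.utc)
now = datetime(2016, 3, 10, tzinfo=timezone.utc)
print(collection_is_stale(last, now=now))
]]></artwork>
</figure>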
</section>
<section title="Endpoint Data Collection">
<t>The enterprise should perform ongoing collection
of basic endpoint information such as operating
system and version information, and an installed
software inventory. This information is collected
for general system monitoring as well as its
potential use in activities such as vulnerability
assessment.</t>
<t>Some examples of basic information to collect about endpoints
in this pre-assessment process could include:</t>
<t>
<list style="symbols">
<t>Endpoint type - traditional (e.g.,
workstation, server, etc.), network
infrastructure (e.g., switches, routers,
etc.), mobile (e.g., cell phones, tablets,
laptops, etc.), and constrained (e.g.,
industrial control systems, Internet of
Things, etc.)</t>
<t>Hardware version/firmware - e.g., BIOS
version, firmware revision, etc.</t>
<t>Operating system - e.g., Windows, Linux, Mac
OS, Android</t>
<t>Operating system attributes - e.g., version,
patch level, service pack level,
internationalized or localized version,
etc.</t>
<t>Installed software inventory - Would include
the software names and versions and possibly
other high-level attributes. Could be used to
quickly determine endpoint applicability when
new vulnerability description data
arrives.</t>
</list>
</t>
<t>Some additional and more advanced information to
collect from endpoints in this pre-assessment
process could include:</t>
<t>
<list style="symbols">
<t>Open ports and enabled services - This would
include applications listening for incoming
connections on open ports as well as services
that are starting, running, suspended, or
enabled to run pending some event.</t>
<t>Operating system optional component inventory
- some operating systems have optional components
that can be installed but may not show up as
separate pieces of software (e.g., web and FTP
servers, demo web pages, shared libraries, etc.).
Note that the same can occur within third-party
applications.</t>
<t>Endpoint location - physical location (e.g.,
department, room, Global Positioning System
(GPS) coordinates, etc.) and logical location
(e.g., the network infrastructure endpoints,
such as switches or wireless access points, to
which the endpoint is connected).</t>
<t>Purpose - describes how the endpoint is used
within the enterprise (e.g., end-user system,
database server, public web server, etc.)</t>
<t>Criticality - an enterprise-defined rating
(possibly a score) that helps determine the
criticality of the endpoint. If this endpoint
is attacked or lost, what is the impact to the
overall enterprise?</t>
</list>
</t>
<t>It is important to note that some of these
attributes may exist natively on the endpoint
whereas other attributes may be assigned by a
human, computed, or derived from other data and
may or may not be available for collection on the
endpoint.</t>
<t>Furthermore, the possibility should be left open
for enterprises to define their own custom queries
and algorithms to gather and derive
enterprise-specific attributes that are deemed of
interest to regular enterprise operations.</t>
<t>In addition to collecting these attributes,
metadata about the attributes should also be
collected which could include: <list>
<t>Data origin - where the data originated
from</t>
<t>Data source - what provided the data</t>
<t>Date and time of collection - when the data
was collected</t>
</list>
</t>
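<t>The non-normative sketch below shows one way the
collected attributes and their collection metadata
(origin, source, and time of collection) could be
represented; the structure, field names, and sample
values are assumptions made for this example.</t>
<figure>
<artwork><![CDATA[
# Illustrative record for pre-assessment collection results;
# the shape, field names, and sample values are assumptions
# for this example only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class CollectedAttribute:
    name: str               # e.g. "operating_system"
    value: str
    data_origin: str        # where the data originated from
    data_source: str        # what provided the data
    collected_at: datetime  # date and time of collection

@dataclass
class EndpointRecord:
    endpoint_id: str
    attributes: List[CollectedAttribute] = field(default_factory=list)

record = EndpointRecord(endpoint_id="ep-0001")
record.attributes.append(CollectedAttribute(
    name="operating_system",
    value="ExampleOS 10.3",            # hypothetical name
    data_origin="ep-0001",
    data_source="inventory-sensor-7",  # hypothetical sensor
    collected_at=datetime.now(timezone.utc)))
]]></artwork>
</figure>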
<section title="SACM Use Case Alignment">
<t>This sub-step aligns with the Data Publication
building block capability because this section
involves storage of endpoint attributes within an
enterprise Repository. This sub-step also aligns
with the Endpoint Characterization and Endpoint
Target Identification building block capabilities
because it further characterizes the endpoint
through automated and possibly manual means. There
is direct alignment with the Endpoint Component
Inventory, Posture Attribute Identification, and
Posture Attribute Value Collection building block
capabilities since the purpose of this sub-step is
to perform an initial inventory of the endpoint
and collect basic attributes and their values.
Lastly, there is alignment with the Collection
Guidance Acquisition building block capability,
as the inventory and collection of endpoint
attributes would be directed by some type of
enterprise or third-party guidance.</t>
</section>
</section>
<section title="Implementation Examples">
<t>Within the SACM Architecture, the Internal and External
Collector components could be used to allow enterprises to
collect posture attributes that demonstrate compliance with
enterprise policy. Endpoints can be required to provide posture
attributes, which may include identification attributes to
enable persistent communications.</t>
<t>The SWID Message and Attributes for IF-M standard
defines collection and validation of software identities
using the ISO Software Identification Tag Standard. Using this
standard, the identity of all installed software including the
endpoint operating system, could be collected and used
for later assessment.</t>
<t>The OVAL Definitions Model provides a data model that can be
used to specify what posture attributes to collect as well as
their expected values which can be used to drive an assessment.</t>
<t>The OVAL System Characteristics Model can be used to
capture information about an endpoint. The model is
specifically suited to expressing OS information, endpoint
identification information (such as IP and MAC addresses),
and other endpoint metadata.</t>
</section>
</section>
<section title="Vulnerability Description Data">
<t>The next step in the Vulnerability Assessment
scenario begins after vulnerability description data
has been received and processed into a form that can
be used in the assessment of the enterprise. As a
part of the enterprise process for managing
vulnerability description data, the enterprise
should store all received and processed
vulnerability description data in a Repository.
The stored vulnerability description data can be
used and compared with later vulnerability
description data for the purpose of duplicate
detection and in some cases, guidance on how to
handle similar issues.</t>
<t>All vulnerability description data should be
assigned an internal tracking ID by the enterprise
as a first step as this helps compensate for the
fact that incoming vulnerability description data
might not have a global identifier when it is
received, and might never be assigned one.</t>
<t>High-level vulnerability description data metadata
to store would include:</t>
<t>
<list style="symbols">
<t>Ingest date and time - the date and time that
the vulnerability description data was received
by the enterprise.</t>
<t>Date and time of vulnerability description data
release (i.e., publication or disclosure date
and time) - Some older vulnerability description
data may be ingested long after publication.
This can be useful when reviewing historical
enterprise information to (potentially) identify
the period when a particular endpoint was first
assessed as vulnerable. Sometimes this
information will help to differentiate between
similar vulnerability description data.</t>
<t>Version - the version or iteration of the
vulnerability description data according to the
author, if applicable.</t>
<t>External Vulnerability Description Data ID(s)
(if applicable) - any external or third-party
IDs assigned to the vulnerability description
data should be tracked. There could be multiple
IDs in some cases (e.g., vendor bug id, global
ID, discoverer's local ID, third-party
vulnerability database ID, etc.).</t>
<t>Severity Score (if available) - these may be
useful for later mitigation prioritization.</t>
</list>
</t>
<t>In addition to the described metadata, the raw or
original vulnerability description data would be
stored along with the specific information extracted
from it that is to be used in the applicability and
assessment process.</t>
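<t>As a non-normative illustration, the sketch below
shows a record combining the internal tracking ID, the
metadata listed above, and the raw data as received;
the structure and sample values are assumptions made
for this example.</t>
<figure>
<artwork><![CDATA[
# Illustrative record for stored vulnerability description
# data; field names mirror the metadata above, but the
# structure and sample values are assumptions only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class VulnDescriptionRecord:
    internal_id: str                  # enterprise tracking ID
    ingest_time: datetime             # when it was received
    release_time: Optional[datetime]  # disclosure date, if known
    version: Optional[str]            # author version, if any
    external_ids: List[str] = field(default_factory=list)
    severity_score: Optional[float] = None
    raw_data: bytes = b""             # original data as received

record = VulnDescriptionRecord(
    internal_id="vdd-2016-0042",       # hypothetical internal ID
    ingest_time=datetime(2016, 3, 10, 14, 30),
    release_time=datetime(2016, 3, 8),
    version="1.0",
    external_ids=["vendor-bug-1234"],  # hypothetical external ID
    severity_score=7.5)
]]></artwork>
</figure>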
<section title="SACM Use Case Alignment">
<t>This step aligns with the Data Publication and Data
Retrieval building block capabilities because this
section details storage of vulnerability description
data within an enterprise Repository and later
retrieval of the same.</t>
</section>
<section title="Implementation Examples">
<t>The Common Vulnerability Reporting Framework (CVRF)
is an XML-based language that attempts to standardize
the creation of vulnerability report documentation.
Using CVRF, the enterprise could create automated
tools, based on the standardized schema, that
extract the relevant information needed for later
assessments and assessment results.</t>
</section>
</section>
<section title="Endpoint Applicability and Assessment">
<t>When new vulnerability description data is received
by the enterprise, applicable enterprise endpoints
must be identified and assessed. Endpoints are first
examined using the already obtained pre-assessment
data. If this is not sufficient to determine endpoint
applicability, a secondary data collection for
additional data and attributes may be performed to
determine status with regard to the vulnerability
description data.</t>
<section title="Applicability">
<t>The applicability of an endpoint and its
vulnerability status can, in many cases, be
determined entirely by the existence of a
particular version of installed software on the
endpoint. This data may have been collected in the
pre-assessment data collection. If the
applicability and vulnerability status of an
endpoint can be determined entirely by the
pre-collected data attribute set, no further data
collection is required.</t>
<t>Other cases may require specific data (e.g., file
system attributes, specific configuration
parameters, etc.) to be collected for the
assessment of particular vulnerability
description data. In these cases, a secondary,
targeted vulnerability assessment is required.
Administrators may want to evaluate applicability
to the vulnerability description data iteratively.
Specifically, the process would compare against
pre-collected data first (easy to do and the data
is stored in a Repository), and then if needed,
query endpoints that are not already excluded from
applicability for the additional required data
(i.e., a "fast-fail" model). To do this, the criteria for
determining applicability must be separable, so
that some conclusions can be drawn based on the
possession of partial data.</t>
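<t>The sketch below gives a minimal, non-normative
rendering of this "fast-fail" flow; the data shapes
(simple dictionaries of attribute names to values), the
three-valued outcome, and the helper names are
assumptions introduced for the example.</t>
<figure>
<artwork><![CDATA[
# Non-normative sketch of the iterative ("fast-fail")
# applicability check; the data shapes and helper names are
# assumptions for this example.
from typing import Callable, Dict, Optional

Verdict = Optional[bool]   # True/False, or None if unknown

def assess_endpoint(precollected: Dict[str, str],
                    evaluate: Callable[[Dict[str, str]], Verdict],
                    collect_more: Callable[[], Dict[str, str]]):
    """evaluate() returns True/False when the available
    attributes suffice to decide, or None otherwise."""
    # Step 1: cheap check against attributes already held
    # in the Repository.
    verdict = evaluate(precollected)
    if verdict is not None:
        return "vulnerable" if verdict else "not applicable"

    # Step 2: only endpoints not already excluded receive a
    # targeted, secondary collection of the extra attributes.
    combined = {**precollected, **collect_more()}
    verdict = evaluate(combined)
    if verdict is None:
        return "undetermined"
    return "vulnerable" if verdict else "not applicable"

# Example: a separable check that needs a library version
# missing from the pre-collected inventory (all names and
# values are hypothetical).
def example_check(attrs):
    if attrs.get("product") != "ExampleApp":
        return False      # fast fail on the inventory alone
    version = attrs.get("libexample_version")
    return None if version is None else version == "1.0.2"

print(assess_endpoint({"product": "ExampleApp"},
                      example_check,
                      lambda: {"libexample_version": "1.0.2"}))
]]></artwork>
</figure>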
<section title="SACM Use Case Alignment">
<t>This sub-step aligns with the Data Retrieval,
Data Query, and Posture Attribute Value Query
building block capabilities because, in this
sub-step, the process is attempting to determine
the vulnerability status of the endpoint using the
data that has previously been collected.</t>
</section>
</section>
<section title="Secondary Assessment">
<t>If the applicability and vulnerability status of
an endpoint cannot be determined by the
pre-assessment data collection, a secondary and
targeted assessment of the endpoint will be
required. A secondary assessment may also be
required in the case that data on-hand (either
from pre-assessment or from prior secondary
assessments) is stale or out-of-date.</t>
<t>The following data types and attributes are
examples of what might be required in the case of
a secondary and targeted assessment:</t>
<t>
<list style="symbols">
<t>Specific files and attributes - e.g., file
name, version, size, write date, modified
date, checksum, etc. Some vulnerabilities may
only be distinguishable through the presence
or absence of specific files or their attributes.</t>
<t>Shared libraries - Some vulnerabilities will
affect many products across multiple vendors.
In these cases the vulnerability may apply to
a shared library. Under these circumstances,
product versions may be less helpful than
looking for the presence of one or more
specific files and their attributes.</t>
<t>Other software configuration information (if
applicable) - e.g., Microsoft Windows registry
queries, Apple configuration profiles, GConf,
Proc filesystem, text configuration files and
their parameters, and the installation paths.
Sometimes vulnerabilities only affect certain
software configurations and in some cases
these are not the default configurations.
Certain configuration attributes can be used
to determine the current configuration
state.</t>
</list>
</t>
<t>Note that the secondary assessment described here
does not need to be a pull assessment that is
initiated by the server. The secondary assessment
could also be part of a push to the server when
the endpoint detects a change to a vulnerability
assessment baseline.</t>
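<t>A non-normative sketch of what a targeted
secondary-collection request might carry is shown
below; the attribute names, file path, registry key,
and identifiers are hypothetical examples.</t>
<figure>
<artwork><![CDATA[
# Illustrative shape for a targeted secondary-collection
# request; the attribute names, file path, registry key,
# and identifiers are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FileQuery:
    path: str              # file whose presence/attributes matter
    attributes: List[str]  # e.g. version, size, checksum

@dataclass
class SecondaryCollectionRequest:
    endpoint_id: str
    vuln_internal_id: str  # links to the driving guidance
    file_queries: List[FileQuery] = field(default_factory=list)
    config_queries: List[str] = field(default_factory=list)

request = SecondaryCollectionRequest(
    endpoint_id="ep-0001",
    vuln_internal_id="vdd-2016-0042",
    file_queries=[FileQuery(path="/usr/lib/libexample.so",
                            attributes=["version", "checksum"])],
    config_queries=["HKLM\\Software\\Example\\Setting"])
]]></artwork>
</figure>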
<section title="SACM Use Case Alignment">
<t>This sub-step aligns with the Data Publication
building block capability because this section
details storage of endpoint attributes within an
enterprise Repository. The sub-step also aligns
with the Collection Guidance Acquisition building
block capability since the vulnerability
description data (guidance) drives the collection
of additional endpoint attributes.</t>
<t>This sub-step aligns with the Endpoint
Characterization (both manual and automated) and
Endpoint Target Identification building block
capabilities because it could further characterize
the endpoint through automated and possibly manual
means. There is direct alignment with the Endpoint
Component Inventory, Posture Attribute
Identification, and Posture Attribute Value
Collection building block capabilities since the
purpose of this sub-step is to perform additional
and more specific component inventories and
collections of endpoint attributes and their
values.</t>
</section>
</section>
<section title="Implementation Examples">
<t>Within the SACM Architecture, the assessment task
would be handled by the Evaluator component. If pre-assessment
data is used, this would be stored on and obtained from a
Data Store component.</t>
<t>Within the SACM Architecture, the Internal and External
Collector components could be used to allow enterprises to
collect posture attributes that demonstrate compliance with
enterprise policy. Endpoints can be required to provide posture
attributes, which may include identification attributes to
enable persistent communications.</t>
<t>The SWID Message and Attributes for IF-M standard
defines collection and validation of software identities
using the ISO Software Identification Tag Standard. Using
this standard, all installed software including the
endpoint operating system could be collected and stored for later
assessment.</t>
<t>The OVAL Definitions Model provides a data model that can be
used to specify what posture attributes to collect as well as
their expected values which can be used to drive an assessment.</t>
<t>The OVAL System Characteristics Model can be used to
capture information about an endpoint. The model is
specifically suited to expressing OS information, endpoint
identification information (such as IP and MAC addresses),
and other endpoint metadata.</t>
</section>
</section>
<section title="Assessment Results">
<t>Assessment results present the results of an
assessment, along with sufficient context so a human
or machine can make the appropriate response. This
context might include a description of the issue
provided by the vulnerability description data, the
endpoint attributes that indicate applicability, or
other information needed to respond to the results
of the assessment. Data in this step is stored for
auditing and forensic purposes.</t>
<t>The following details are important to track in
assessment results. Note that information may be
"included" by providing pointers to other records
stored in a Repository (e.g., vulnerability
description data, endpoint data, etc.).</t>
<t>
<list style="symbols">
<t>Date and time of assessment - The date and time
that the assessment was performed. To understand
when the data was compared against the
vulnerability description data and what
conclusions were drawn.</t>
<t>Data collection/attribute age - The age of the
data used in the assessment to make the endpoint
status determination.</t>
<t>Endpoint ID - The endpoint itself must be
identified for tracking results over time.</t>
<t>Vulnerability description data ID(s) - May include
both the internally defined ID as well as one or
more externally defined IDs if they exist. The
internally assigned ID allows linkage to the
correct vulnerability description data. If
available, external IDs provide a "pivot point"
to additional external information.</t>
<t>Vulnerable software product(s) - Identifies the
software products on the endpoint that resulted
in the endpoint being declared applicable. Since
some vulnerability description data identify
vulnerabilities in multiple products, this will
help identify the specific product (or products)
found to be vulnerable in the endpoint
assessment.</t>
<t>Endpoint vulnerability status - The endpoint
status based on the vulnerability description
data. Does the vulnerability exist on the
endpoint?</t>
<t>Vulnerability description - Not needed for
automated assessment but probably should be
included for human review. Its inclusion helps a
human user understand the vulnerability assessment
results within the application front end or
interface.</t>
<t>Vulnerability remediation - Similar to the above,
remediation or vendor patch information would be
useful for a human response. In many cases, this
information may be a part of the description
information described above. Note that patch
information may change over time as vendor
patches are superseded.</t>
</list>
</t>
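<t>The non-normative sketch below collects the fields
listed above into a single result record; the field
names follow that list, but the overall structure is an
assumption made for this example.</t>
<figure>
<artwork><![CDATA[
# Illustrative assessment-result record; the fields follow
# the list in this section, but the structure itself is an
# assumption made for this example.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class AssessmentResult:
    assessed_at: datetime            # date/time of assessment
    attribute_age: timedelta         # age of the data used
    endpoint_id: str                 # the assessed endpoint
    vuln_ids: List[str]              # internal plus external IDs
    vulnerable_products: List[str]   # products found applicable
    vulnerability_status: str        # e.g. "vulnerable" or not
    description: str = ""            # context for human review
    remediation: str = ""            # patch/remediation info
]]></artwork>
</figure>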
<section title="SACM Use Case Alignment">
<t>This step aligns with the Data Publication and Data
Retrieval building block capabilities because this
section details storage of vulnerability assessment
results within an enterprise Repository and later
retrieval of the same.</t>
</section>
<section title="Implementation Examples">
<t>The OVAL Results Model provides a data model to encode
the results of the assessment, which could then be stored
in a Repository and later accessed. The assessment results
described in this scenario could be stored and later
accessed using the OVAL Results Model. Note that the use of
the OVAL Results Model for sharing results is not recommended
per section 7.3 of the
<xref target="draft-hansbury-sacm-oval-info-model-mapping-01">
OVAL and the SACM Information Model</xref>.</t>
<t>Within the SACM Architecture, the generation of
the assessment results would occur in the Report Generator
component. Those results might then be moved to a
Data Store component for later sharing and retrieval as
defined by SACM.</t>
</section>
</section>
<!-- Possibly a 'Contributors' section ... -->
<section anchor="IANA" title="IANA Considerations">
<t>This memo includes no request to IANA.</t>
</section>
<section anchor="Security"
title="Security Considerations">
<t>This document provides a core narrative that walks
through an automated enterprise vulnerability
assessment scenario and is aligned with SACM
"Endpoint Security Posture Assessment: Enterprise
Use Cases" <xref target="RFC7632"/>. As a result,
the security considerations for <xref
target="RFC7632"/> apply to this document.
Furthermore, the vulnerability description data may
provide attackers with useful information such as
what software an enterprise is running on their
endpoints. As a result, organizations should
properly protect the vulnerability description data
they ingest.***TODO IS THIS COVERED BY
RFC7632???***</t>
</section>
</middle>
<!-- *****BACK MATTER ***** -->
<back>
<!-- References split into informative and normative -->
<!-- There are 2 ways to insert reference entries from the citation libraries:
1. define an ENTITY at the top, and use "ampersand character"RFC2629; here (as shown)
2. simply use a PI "less than character"?rfc include="reference.RFC.2119.xml"?> here
(for I-Ds: include="reference.I-D.narten-iana-considerations-rfc2434bis.xml")
Both are cited textually in the same manner: by using xref elements.
If you use the PI option, xml2rfc will, by default, try to find included files in the same
directory as the including file. You can also define the XML_LIBRARY environment variable
with a value containing a set of directories to search. These can be either in the local
filing system or remote ones accessed by http (http://domain/dir/... ).-->
<references title="Informative References">
<!-- Here we use entities that we defined at the beginning. -->
<!--&RFC2629;--> &RFC7632;
&I-D.ietf-sacm-requirements;
<!-- A reference written by by an organization not a person. -->
<reference anchor="critical-controls">
<front>
<title abbrev="Critical Security Controls"
>Critical Security Controls, Version 5.1</title>
<author>
<organization abbrev="Council on CyberSecurity"
>Council on CyberSecurity</organization>
</author>
<date/>
</front>
</reference>
<reference anchor="charter-ietf-sacm-01">
<front>
<title abbrev="Charter">Charter, Version
1.0</title>
<author>
<organization abbrev="SACM">Security Automation
and Continuous Monitoring</organization>
</author>
<date month="July" year="2013"/>
</front>
</reference>
<reference anchor="draft-hansbury-sacm-oval-info-model-mapping-01">
<front>
<title abbrev="OVAL and SACM Info Model">OVAL and the SACM Information Model</title>
<author>
<organization abbrev="SACM">Security Automation
and Continuous Monitoring</organization>
</author>
<date month="November" year="2015"/>
</front>
</reference>
</references>
<section title="Change Log">
<section title="Changes in Revision 01"
anchor="changes-in-revision-01">
<t>Clarification of the vulnerability description
data IDs in sections 4 and 6.</t>
<t>Added "vulnerability remediation" to the Assessment
Results and Data Attribute Table and Definitions
sections.</t>
<t>Added Implementation Examples to Endpoint
Identification and Initial (Pre-Assessment) Data
Collection, Vulnerability Description Data,
Endpoint Applicability and Assessment, and
Assessment Results sections.</t>
<t>Added an example to vulnerability description data
in the scope section.</t>
<t>Added a sentence to clarify vulnerability
description data definition in the scope section.</t>
<t>Added data repository example for long-term storage
scope item.</t>
<t>Added sentence to direct reader to examples of basic
system information in endpoint identification section.</t>
<t>Split the examples of information to collect in the
pre-assessment collection section into a basic and
advanced list.</t>
<t>Added examples of data stored in the repository in
the Assessment Results section.</t>
<t>Added sentence for human-assigned attributes in
the Future Work section.</t>
<t>Replaced "vulnerability report" to "vulnerability
description data" because the term report was
causing confusion. Similarly, replaced "assessment
report" with "assessment results".</t>
<t>Replaced "Configuration Management Database
(CMDB)" with "Repository" which is SACM's term for
a data store.</t>
<t>Replaced endpoint "Role" with "Purpose" because
"Role" is already defined in SACM. Also, removed
"Function" because it too is already defined in
SACM.</t>
<t>Clarified that the document does not try to
define a normalized data format for vulnerability
description data although it does not preclude the
creation of such a format.</t>
<t>Included additional examples of software
configuration information.</t>
<t>Clarified the section around endpoint
identification to make it clear that the
attributes used to correlate and identify endpoints
are both persistent and unique. Furthermore, text
was added to explain how the persistency of
attributes may vary. This was based on knowledge
gained from the Endpoint ID Design Team.</t>
<t>Updated the Security Considerations section to
mention those described in <xref target="RFC7632"
/>.</t>
<t>Removed text around Bring Your Own Device (BYOD).
While important, BYOD just adds complexity to this