At that moment, it occurred to me that there is even a very literal link
from security to software process. I pulled out the CMM paper to explain
the concept of organizational maturity and realized that it applies exactly
to the kinds of security practices I've observed in industry.
There aren't any guides in the security literature to *evaluating* the
effectiveness of security policies and their management. There are
checklists and heuristics, but not the kind of 'empirically-based process
improvement' measures found in software engineering.
It's doubly ironic given the physical colocation of CERT and SEI at CMU
that this doesn't appear (at first blush) to have ever been explored. It
would be a useful complement to the 'Orange Book' for trusted computing
bases -- the social, as opposed to technological standards and practices.
The other aspect of applying CMM to security is its metamodel: rather than
yet another checklist of 'levels of security' -- f(x) -- we can talk about
organizational control of security -- f'(x). Heck, even as a pure
gedankenexperiment, it should serve to justify (or parody?) the CMM.
Security is just another capability.
My question to you both, as advisors, is first, what can, and second, what
should be done with this insight? Is it a useful one?
"security-by-crisis". Incidents, when discovered (by accident) are
addrressed locally and ad-hoc. Proactive security processes, if any, are
bounded within single systems (just web servers or unix logins), single
projects or single modules (in sw development, leads to separate security
monitors). Little awareness of the trust relationships which are being
expressed as configuration files and cryptographic settings ("hey, we have
a firewall, right?"). Whole enterprise relies on a few stars or gurus:
individuals are the alpha and omega of security.
example: my personal network of machines. (hey! there could be a Personal
Security Process to further amuse WSH :-) Like CMM-SW, we expect a shocking
number of organizations to be at this level.
The next level up aims for a "disciplined process". There are formal
policies and procedures for incidents and normal processes ("how to add a
new user"). There is some awareness that security policy is a whole (don't
protect a web server without patching ftp and smtp holes, too). There is a
team responsible for security (nominal centralization). Basic tools have
been installed to audit and visualize security software. Industry standards
(like RFCs and CERT advisories) are available and applied. Security
architectures may still differ from project-to-project, though.
example: W3C claims to have policies which eventually get implemented, but
ad-hoc and incompletely. Like CMM-SW, we expect most organizations to be at
this level.
The next level up aims for a "standard and consistent process". Key is a
coherent plan for the technology and for its management. Written policies
are obeyed and modified, not merely compiled. Lessons are learned and
applied across domains and projects. Metrics are in place to measure
effectiveness. Security advisory board or similar review body actually
judges such measurements. This level of team is capable of developing new
secure tools beyond just applying existing ones.
example: a reasonably scalable campus IT office, like Schiller's at MIT.
Active monitoring of attacks, processes in place for handling incidents,
statistical observation of large operations like new-student activation.
Like CMM-SW, L3 is a benchmark for 'professional' security management: the
highest level typically encountered in the press or among peers.
The next level up aims for a "predictable process". The measures of L3 are
used as goals and can be predicted, especially to track vulnerable systems
and estimate risk of new projects. Incident reports and experience are
formally codified in organization-wide libraries. Security staff has
sign-off authority on all development within the organization.
example: a Telco with complete procedures (and staffs) and active learning
of new attacks and fraudulent behavior. Statistical *control* of security
breaches. Management responsibility for trust decisions. Integrated analysis
of related systems (no backdoors). Like CMM-SW we expect only a few, very
visible organizations within a few key industries to achieve this (best
analogy: aerospace control software).
The next level up aims for a "continuously improving process". They are
most likely to evolve within an environment of changing trust requirements
and changing roles [a telco, by contrast, is always defending a single
model: only callers can access their traffic and only the company can
access the billing data]. From SW, the analogy to "defect prevention" is
incident prevention through rigorous analysis of each failure. New
processes and tools are introduced only with credible cost-benefit analyses.
example: military C4I settings. new ways to compartmentalize and *change*
trust relationships as the system evolves, or to adapt to perceived threat
level. Not just awareness of *what* the trust policies are and how to
enforce them, but the *why*, which allows refactoring and improvement. Like
CMM-SW, only a handful of organizations worldwide will be at this level,
deemed too expensive by most. A further complication is that these
organizations, in addition to being secure, are *secret*.
--- NOTICE: CMM, Capability Maturity Model, IDEAL, Personal Software Process, and PSP are service marks of Carnegie Mellon University.
--- Rohit Khare /// Graduate Student /// UC Irvine Computer Science firstname.lastname@example.org /// Work: (714) 824-3100 /// Home: (714) 823-9705
[Urgent? (617) 960-5131 still works to page me]