Multiple single-level

Multiple single-level or multi-security level (MSL) is a means of separating different levels of data by using separate computers or virtual machines for each level. It aims to give some of the benefits of multilevel security without requiring special changes to the operating system or applications, at the cost of additional hardware.

The drive to develop MLS operating systems was severely hampered by the dramatic fall in data processing costs in the early 1990s. Before the advent of desktop computing, users with classified processing requirements had to either spend a lot of money for a dedicated computer or use one that hosted an MLS operating system. Throughout the 1990s, however, many offices in the defense and intelligence communities took advantage of falling computing costs to deploy desktop systems classified to operate only at the highest classification level used in their organization. These desktop computers operated in system high mode and were connected with LANs that carried traffic at the same level as the computers.

MSL implementations such as these neatly avoided the complexities of MLS, but purchased that technical simplicity at the cost of inefficient use of space and equipment. Because most users in classified environments also needed unclassified systems, they often had at least two computers and sometimes more (one for unclassified processing and one for each classification level processed). In addition, each computer was connected to its own LAN at the appropriate classification level, meaning that multiple dedicated cabling plants had to be installed, at considerable cost in both installation and maintenance.

Limits of MSL versus MLS

The obvious shortcoming of MSL (as compared to MLS) is that it provides no way of intermixing data of different classification levels. For example, the notion of concatenating a SECRET data stream (taken from a SECRET file) with a TOP SECRET data stream (read from a TOP SECRET file) and directing the resultant TOP SECRET data stream into a TOP SECRET file is unsupported. In essence, an MSL system can be thought of as a set of parallel (and collocated) computer systems, each restricted to operation at one, and only one, security level. Indeed, the individual MSL operating systems may not even understand the concept of security levels, since they operate as single-level systems. For example, while one of a set of collocated MSL operating systems may be configured to affix the character string "SECRET" to all of its output, that OS has no understanding of how its data compares in sensitivity and criticality to the data processed by a peer OS that affixes the string "UNCLASSIFIED" to all of its output.
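As an illustration of this distinction, the following Python sketch (hypothetical, not drawn from any particular product) contrasts what an individual MSL peer "knows" (a single fixed output label) with the dominance ordering over labels that mixing levels would require; the class, function, and level values are assumptions introduced only for illustration.

```python
# A minimal sketch (hypothetical) contrasting an MSL peer's view of labels
# with the cross-level knowledge that mixing data streams would require.

# Each MSL peer only knows the one label it stamps on its output; it has no
# ordering relating that label to any other peer's label.
class MslPeer:
    def __init__(self, output_label: str):
        self.output_label = output_label          # e.g. "SECRET"

    def emit(self, data: str) -> str:
        return f"[{self.output_label}] {data}"    # blindly affix the fixed label

# Mixing levels (e.g. appending SECRET data to a TOP SECRET stream) requires a
# dominance ordering over labels, knowledge that lives outside every peer.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

def combined_label(*labels: str) -> str:
    # The label of a combined stream must dominate every input label.
    return max(labels, key=lambda lbl: LEVELS[lbl])

assert combined_label("SECRET", "TOP SECRET") == "TOP SECRET"
```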

Any operation that spans two or more security levels must therefore use methods external to the MSL "operating systems" themselves, and generally requires human intervention, termed "manual review". For example, an independent monitor (not in Brinch Hansen's sense of the term) may be provided to support migration of data among multiple MSL peers (e.g., copying a data file from the UNCLASSIFIED peer to the SECRET peer). Although no strict requirements in federal legislation specifically address the concern, it would be appropriate for such a monitor to be quite small, purpose-built, and supportive of only a small number of very rigidly defined operations, such as importing and exporting files, configuring output labels, and other maintenance/administration tasks that require handling all the collocated MSL peers as a unit rather than as individual, single-level systems. It may also be appropriate to use a hypervisor software architecture, such as VMware, to provide the set of peer MSL "OS" in the form of distinct, virtualized environments supported by an underlying OS that is accessible only to administrators cleared for all of the data managed by any of the peers. From the users' perspective, each peer would present a login or X display manager session logically indistinguishable from the underlying "maintenance OS" user environment.
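The sketch below illustrates, in Python, the sort of narrowly scoped operation such a monitor might expose. The directory layout, level ordering, and manual-review flag are assumptions introduced for illustration, not features of any named product.

```python
# A minimal sketch, assuming a hypothetical monitor of the kind described
# above: a small, purpose-built program offering only a few rigidly defined
# operations over the collocated MSL peers.
import shutil
from pathlib import Path

LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

# Each peer is modeled here simply as a directory tree owned by that peer
# (an assumed layout, chosen only to keep the example concrete).
PEER_ROOT = {lvl: Path("/peers") / lvl.lower().replace(" ", "_") for lvl in LEVELS}

def migrate_file(name: str, src_level: str, dst_level: str, reviewed: bool = False) -> None:
    """Copy a file from one MSL peer to another.

    'Uphill' copies (low to high) are permitted automatically; 'downhill'
    copies require that a human manual review has already taken place.
    """
    if LEVELS[dst_level] < LEVELS[src_level] and not reviewed:
        raise PermissionError("downgrade requires manual review")
    shutil.copy2(PEER_ROOT[src_level] / name, PEER_ROOT[dst_level] / name)

# Example: replicate an UNCLASSIFIED file up to the SECRET peer.
# migrate_file("memo.txt", "UNCLASSIFIED", "SECRET")
```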

Advances in MSL

The cost and complexity involved in maintaining distinct networks for each level of classification led the National Security Agency (NSA) to begin research into ways in which the MSL concept of dedicated system high systems could be preserved while reducing the physical investment demanded by multiple networks and computers. Periods processing was the first advance in this area, establishing protocols by which agencies could connect a computer to a network at one classification, process information, sanitize the system, and connect it to a different network with another classification. The periods processing model offered the promise of a single computer but did nothing to reduce multiple cabling plants and proved enormously inconvenient to users; accordingly, its adoption was limited.

In the 1990s, the rise of virtualization technology changed the playing field for MSL systems. Suddenly, it was possible to create virtual machines (VMs) that behaved as independent computers but ran on a common hardware platform. With virtualization, NSA saw a way to preserve periods processing at a virtual level: by performing all processing within dedicated, system-high VMs, the physical system no longer needed to be sanitized between levels. To make MSL work in a virtual environment, however, it was necessary to find a way to securely control the virtual session manager and ensure that no compromising activity directed at one VM could compromise another.

MSL solutions

NSA pursued multiple programs aimed at creating viable, secure MSL technologies leveraging virtualization. To date, three major solutions have materialized.

Both the NetTop and Trusted Multi-Net solutions have been approved for use. In addition, Trusted Computer Solutions has developed a thin-client product, originally based on the NetTop technology concepts through a licensing agreement with NSA. This product is called SecureOffice® Trusted Thin Client™ and runs on the LSPP configuration of Red Hat Enterprise Linux version 5 (RHEL5).

Three competing companies have also implemented MILS separation kernels.

In addition, there have been advances in the development of non-virtualization MSL systems through the use of specialized hardware, resulting in at least one viable solution.

Philosophical aspects, ease of use, flexibility

It is interesting to consider the philosophical implications of the MSL "solution path." Rather than providing MLS capabilities within a classical OS, the chosen direction is to build a set of "virtual OS" peers that can be managed, individually and as a collective, by an underlying real OS. If the underlying OS (let us introduce the term maintenance operating system, or MOS) is to have sufficient understanding of MLS semantics to prevent grievous errors, such as copying data from a TOP SECRET MSL peer to an UNCLASSIFIED MSL peer, then the MOS must be able to: represent labels; associate labels with entities (here we rigorously avoid the terms "subject" and "object"); compare labels (rigorously avoiding the term "reference monitor"); distinguish between those contexts where labels are meaningful and those where they are not (rigorously avoiding the term "trusted computing base" [TCB]); and so on. One readily perceives that the MLS architecture and design issues have not been eliminated, merely deferred to a separate stratum of software that invisibly manages mandatory access control concerns so that superjacent strata need not. This concept is none other than the germinal architectural concept (taken from the Anderson Report) underlying DoD-style trusted systems in the first place.
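The following Python sketch illustrates the minimum label machinery such a MOS would need, assuming a conventional hierarchical-level-plus-compartments label. The representation, the copy_allowed helper, and the level values are assumptions for illustration, not a description of any real maintenance OS.

```python
# A minimal sketch of label representation, association, and comparison,
# of the kind a maintenance OS (MOS) would need to mediate between MSL peers.
from dataclasses import dataclass, field

LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

@dataclass(frozen=True)
class Label:
    level: str
    compartments: frozenset = field(default_factory=frozenset)

    def dominates(self, other: "Label") -> bool:
        # Dominance: a higher-or-equal level AND a superset of compartments.
        return (LEVELS[self.level] >= LEVELS[other.level]
                and self.compartments >= other.compartments)

def copy_allowed(src: Label, dst_peer: Label) -> bool:
    # Data may flow to a peer only if that peer's label dominates the source's.
    return dst_peer.dominates(src)

ts = Label("TOP SECRET")
u = Label("UNCLASSIFIED")
assert copy_allowed(u, ts)        # uphill replication is permissible
assert not copy_allowed(ts, u)    # TOP SECRET -> UNCLASSIFIED must be refused
```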

What has been positively achieved by the set-of-MSL-peers abstraction, however, is a radical restriction of the scope of MAC-cognizant software mechanisms to the small, subjacent MOS. This has been accomplished at the cost of eliminating any practical MLS capabilities, even the most elementary ones, such as a SECRET-cleared user appending an UNCLASSIFIED paragraph, taken from an UNCLASSIFIED file, to his SECRET report. The MSL implementation would obviously require every "reusable" resource (in this example, the UNCLASSIFIED file) to be replicated across every MSL peer that might find it useful—meaning either much secondary storage needlessly expended or an intolerable burden on the cleared administrator able to effect such replications in response to users' requests for them. (Of course, since the SECRET user cannot "browse" the system's UNCLASSIFIED offerings other than by logging out and beginning an UNCLASSIFIED session afresh, this demonstrates yet another severe limitation on functionality and flexibility.) Alternatively, less sensitive file systems could be NFS-mounted read-only so that more trustworthy users could browse, but not modify, their content. However, the MSL peer OS would have no actual means of distinguishing (via a directory listing command, for example) that the NFS-mounted resources are at a different level of sensitivity than the local resources, and no strict means of preventing illegal uphill flow of sensitive information other than the brute-force, all-or-nothing mechanism of read-only NFS mounting.

To demonstrate just how severe a handicap this crude form of "cross-level file sharing" actually is, consider the case of an MLS system that supports UNCLASSIFIED, SECRET, and TOP SECRET data, and a TOP SECRET-cleared user who logs into the system at that level. MLS directory structures are built around the containment principle, which, loosely speaking, dictates that higher sensitivity levels reside deeper in the tree: commonly, the level of a directory must match or dominate that of its parent, while the level of a file (more specifically, of any link thereto) must match that of the directory that catalogs it. (This is strictly true of MLS UNIX; alternatives that support different conceptions of directories, directory entries, i-nodes, etc.—such as Multics, which adds the "branch" abstraction to its directory paradigm—tolerate a broader set of implementations.) Orthogonal mechanisms are provided for publicly shared and spool directories, such as /tmp or C:\TEMP, which are automatically—and invisibly—partitioned by the OS, with users' file access requests automatically "deflected" to the appropriately labeled directory partition. The TOP SECRET user is free to browse the entire system, his only restriction being that—while logged in at that level—he may only create fresh TOP SECRET files within specific directories or their descendants. In the MSL alternative, where any browsable content must be specifically, laboriously replicated across all applicable levels by a fully cleared administrator—meaning, in this case, that all SECRET data must be replicated to the TOP SECRET MSL peer OS, while all UNCLASSIFIED data must be replicated to both the SECRET and TOP SECRET peers—one can readily perceive that the more highly cleared the user, the more frustrating his timesharing computing experience will be.
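The containment rules just described can be stated compactly in code. The sketch below is a paraphrase of the text, assuming a simple three-level lattice and a hypothetical check function; it is not taken from any MLS UNIX implementation.

```python
# A minimal sketch of the containment principle: a directory's level must
# match or dominate its parent's, while a file's level must match that of
# the directory cataloging it.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

def dominates(a: str, b: str) -> bool:
    return LEVELS[a] >= LEVELS[b]

def containment_ok(parent_dir_level: str, child_level: str, is_directory: bool) -> bool:
    if is_directory:
        # Directories may only grow more sensitive as the tree deepens.
        return dominates(child_level, parent_dir_level)
    # A file (i.e. any link to it) must match the level of its directory.
    return child_level == parent_dir_level

assert containment_ok("SECRET", "TOP SECRET", is_directory=True)       # deeper = more sensitive
assert not containment_ok("SECRET", "UNCLASSIFIED", is_directory=True) # cannot go downhill
assert not containment_ok("SECRET", "TOP SECRET", is_directory=False)  # file must match its directory
```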

In a classical trusted-systems-theoretic sense—relying upon terminology and concepts taken from the Orange Book, the foundation of trusted computing—a system that supports MSL peers could not achieve a level of assurance beyond (B1). This is because the (B2) criteria require, among other things, both clear identification of a TCB perimeter and the existence of a single, identifiable entity that has the ability and authority to adjudicate access to all data represented throughout all accessible resources of the ADP system. In a very real sense, then, the application of the term "high assurance" as a descriptor of MSL implementations is nonsensical, since the term "high assurance" is properly limited to (B3) and (A1) systems—and, with some laxity, to (B2) systems.

Cross-domain solutions

MSL systems, whether virtual or physical in nature, are designed to preserve isolation between different classification levels. Consequently, unlike MLS systems, an MSL environment has no innate ability to move data from one level to another.

To permit data sharing between computers working at different classification levels, such sites deploy cross-domain solutions (CDS), which are commonly referred to as gatekeepers or guards. Guards, which often leverage MLS technologies themselves, filter traffic flowing between networks; unlike a commercial Internet firewall, however, a guard is built to much more stringent assurance requirements and its filtering is carefully designed to try to prevent any improper leakage of classified information between LANs operating at different security levels.
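The sketch below gives a simplified picture of the kind of release check a guard performs, assuming a "dirty word" list combined with a label comparison. Real guards apply far more elaborate, formally accredited rule sets; the function, word list, and levels here are assumptions for illustration only.

```python
# A minimal sketch of guard-style filtering between networks at different levels.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}
DIRTY_WORDS = {"SECRET", "TOP SECRET", "NOFORN"}   # illustrative markings only

def guard_permits(message: str, src_level: str, dst_level: str) -> bool:
    if LEVELS[dst_level] >= LEVELS[src_level]:
        return True                                 # flow uphill is always allowed
    # Releasing data to a lower-level network: reject anything bearing markings
    # (or other configured indicators) of higher classification.
    return not any(word in message.upper() for word in DIRTY_WORDS)

assert guard_permits("weather report", "SECRET", "UNCLASSIFIED")
assert not guard_permits("Ref: TOP SECRET annex", "TOP SECRET", "UNCLASSIFIED")
```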

Data diode technologies are used extensively where data flows are required to be restricted to one direction between levels, with a high level of assurance that data will not flow in the opposite direction. In general, these are subject to the same restrictions that have imposed challenges on other MLS solutions: strict security assessment and the need to provide an electronic equivalent of stated policy for moving information between classifications. (Moving information down in classification level is particularly challenging and typically requires approval from several different people.)
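The policy a data diode embodies can be modeled as below, assuming a fixed low side and high side. In real deployments the one-way guarantee is physical (enforced in hardware), so this Python model only illustrates the intended rule, not how it is actually enforced.

```python
# A minimal sketch of a data diode's policy: the low side may only send,
# and the high side may only receive.
class DataDiode:
    def __init__(self, low_level: str, high_level: str):
        self.low, self.high = low_level, high_level
        self._channel: list[bytes] = []

    def send(self, src_level: str, data: bytes) -> None:
        if src_level != self.low:
            raise PermissionError("only the low side may transmit")
        self._channel.append(data)

    def receive(self, dst_level: str) -> bytes:
        if dst_level != self.high:
            raise PermissionError("only the high side may receive")
        return self._channel.pop(0)

diode = DataDiode("SECRET", "TOP SECRET")
diode.send("SECRET", b"sensor feed")
assert diode.receive("TOP SECRET") == b"sensor feed"
```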

As of late 2005, numerous high-assurance platforms and guard applications have been approved for use in classified environments. N.b. that the term "high-assurance" as employed here is to be evaluated in the context of DCID 6/3 (read "dee skid six three"), a quasi-technical guide to the construction and deployment of various systems for processing classified information, lacking both the precise legal rigidity of the Orange Book criteria and the underlying mathematical rigor. (The Orange Book is motivated by, and derived from, a logical "chain of reasoning" constructed as follows: [a] a "secure" state is mathematically defined, and a mathematical model is constructed, the operations upon which preserve secure state so that any conceivable sequence of operations starting from a secure state yields a secure state; [b] a mapping of judiciously chosen primitives to sequences of operations upon the model; and [c] a "descriptive top-level specification" that maps actions that can be transacted at the user interface (such as system calls) into sequences of primitives; but stopping short of either [d] formally demonstrating that a live software implementation correctly implements said sequences of actions; or [e] formally arguing that the executable, now "trusted," system is generated by correct, reliable tools [e.g., compilers, librarians, linkers].)
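As a concrete illustration of step [a], the state model most closely associated with the Orange Book lineage is the Bell-LaPadula model. A simplified, textbook rendering of its two state conditions is sketched below, where L(x) denotes the security label of x and the ordering is label dominance; this is an illustrative paraphrase, not a quotation from the criteria or from DCID 6/3.

```latex
% Simplified Bell-LaPadula "secure state" conditions (textbook form).
\begin{align*}
  \text{simple security property: } & s \text{ may read } o \implies L(s) \geq L(o) \\
  \text{star property: }            & s \text{ may write } o \implies L(o) \geq L(s)
\end{align*}
```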
