
Access Authentication

Two-factor authentication is based on something a user possesses (such as a CAC card) and something a user knows (a password).

The CAC provides 64KB of data storage and memory on a single integrated circuit chip. It embeds a person's Public Key Infrastructure (PKI) certificate (from the National Security Agency) and also carries a magnetic stripe and bar codes. This enables cardholders to sign documents digitally, encrypt emails, and establish secure online network connections.

CAC authorizations originate from the Authentication Data Repository (ADR). The ADR is part of the Defense Enrollment Eligibility Reporting System (DEERS), a service of the Defense Manpower Data Center (DMDC). The DMDC Identity Authentication Office (IAO) provides web services to customers needing authentication approval; each approval must then be synchronized with Component human resources applications, which finally deliver the CAC. *

The CAC also requires a CAC reader attached to a computer or smart phone. The CAC reader installation process is cumbersome. **



The information stored on a CAC cannot be used alone for access authorization without the entry of a password. Since passwords can be cracked and are hard to revoke or invalidate, automatic password generation devices are preferred in most cases involving SECRET or higher classifications.

The preferred way of obtaining such a password is to generate it by means of a security token, a physical device that serves as the second factor in a two-factor authentication method.



Security tokens are used to confirm a user's identity electronically. An internal battery allows them to generate a new random password every sixty seconds. The system that confirms the token-generated code must contain additional software for secure synchronization with the data contained on the CAC. There is great diversity in the methods, device types and vendors available for authorization synchronization. ***
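
As an illustration of how such tokens work, the sketch below derives a time-based one-time code from a shared secret and the current time, following the general TOTP approach (RFC 6238). The secret value and the 60-second interval are illustrative assumptions, not a description of any particular DoD token.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared secret.

    The token and the verifying server both hold `secret`; because each side
    computes the same HMAC over the current time step, the codes match
    without the code itself ever being transmitted in advance.
    """
    time_step = int(time.time()) // interval          # changes every `interval` seconds
    msg = struct.pack(">Q", time_step)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    shared_secret = b"illustrative-shared-secret"     # hypothetical value for the example
    print("Current one-time code:", totp(shared_secret))
```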

SUMMARY

DoD access authorization methods are vulnerable. The actual revocation of a CAC card is not a real-time event and is not performed by the DMDC but by Components, which rely on diverse and inconsistent personnel applications. Ideally, the revocation of a CAC card could be triggered from a Network Operations Center (NOC) instantly. In practice, the time that elapses between when a revocation is initiated and when it can be acted upon is inconsistent with the risks of retaining access privileges for an unauthorized person.

The difficulty in achieving real time synchronization between the IAO, ADR, DEERS and the Component personnel systems is perhaps the primary reason why the dependability of access authorizations will remain a security risk. From a networking standpoint an on-line connection between IAO and a NOC is feasible.  The greatest obstacle here is the continued absence of a DoD-wide integrated personnel database.

The management of virtual desktop and smart phone clients from data centers offers an opportunity to simplify the management of software that controls CAC readers. However, the greatest gains would accrue from enabling the real-time connectivity between NOC controls and the DEERS databases.  



* http://www.cac.mil/Authenticating.html
** http://www.militarycac.com/files/SCR331FirmwareUpdateProcedure.pdf
*** http://en.wikipedia.org/wiki/Security_token 

Password Cracking

Password cracking is the process of discovering passwords from data that has been archived or transmitted by a computer system. A common approach is to repeatedly try guesses for the password. In cyber operations the purpose of password cracking is to gain unauthorized access to a system, or as a preventive measure to check for easily crackable passwords.
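
A minimal sketch of this guess-and-check approach is shown below: it hashes candidate passwords from a word list and compares each hash with a stored one. The word list and the unsalted SHA-256 hashing are illustrative assumptions; real crackers such as those listed below handle many hash formats, salting and far larger dictionaries.

```python
import hashlib

def crack(stored_hash: str, wordlist: list[str]) -> str | None:
    """Try each candidate password and return the one whose hash matches."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == stored_hash:
            return candidate
    return None

if __name__ == "__main__":
    # Hypothetical example: the stored hash corresponds to the weak password "letmein".
    target = hashlib.sha256(b"letmein").hexdigest()
    guesses = ["password", "123456", "qwerty", "letmein", "admin"]
    print("Recovered password:", crack(target, guesses))
```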

The top ranking password cracking software packages, out of a large collection, are as follows:

Cain & Abel is a password recovery tool for Microsoft operating systems. It allows easy recovery of various kinds of passwords by sniffing the network, cracking encrypted passwords using Dictionary, Brute-Force and Cryptanalysis attacks, recording VoIP conversations, decoding scrambled passwords, recovering wireless network keys, revealing password boxes, uncovering cached passwords and analyzing routing protocols. *

John the Ripper is a fast password cracker, currently available for Unix, Windows, DOS, BeOS, and OpenVMS. Its primary purpose is to detect weak Unix passwords. **

Hydra is a software project developed by "The Hacker's Choice" (THC) that uses dictionary attacks to test for weak or simple passwords on one or many remote hosts running a variety of services. THC-Hydra offers the most fully developed network password brute forcing. ***

L0phtCrack offers hash extraction from 64-bit Windows, multiprocessor algorithms and password recovery. ****

Password strength is a measure of the effectiveness of a password in resisting guessing and brute-force attacks. It estimates how many trials an attacker who does not have direct access to the password would need, on average, to guess it correctly. The strength of a password is a function of length, complexity, and randomness.

It is usual to estimate password strength in terms of information entropy, measured in bits, a concept from information theory. A password with, say, 42 bits of strength calculated in this way would be as strong as a string of 42 bits chosen randomly, say by a fair coin toss. Put another way, a password with 42 bits of strength would require 2^42 attempts to exhaust all possibilities during a brute-force search.
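
As a rough sketch of this estimate: for a password drawn uniformly at random from an alphabet of N symbols, each character contributes log2(N) bits, and a brute-force search needs on the order of 2 raised to that number of guesses. The character-set sizes below are illustrative.

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy for a password chosen uniformly at random."""
    return length * math.log2(alphabet_size)

if __name__ == "__main__":
    # Illustrative character sets: digits only vs. mixed-case letters plus digits.
    for label, alphabet in [("digits", 10), ("letters+digits", 62)]:
        bits = entropy_bits(12, alphabet)
        print(f"12 chars from {label}: {bits:.1f} bits, "
              f"~{2 ** bits:.2e} guesses to exhaust")
```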

SUMMARY
In cyber operations it is mandatory for the monitoring software at the Network Operations Centers (NOCs) to run periodic verifications, for every user security classification, of how easily passwords can be cracked. DoD must include in every application a Password Assistant window that reflects the implementation of security assurance policies. As a general rule, cyber operations will require passwords of at least twelve characters made up of numbers and letters.
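
A minimal sketch of the kind of check a Password Assistant could apply under such a policy appears below. The twelve-character, letters-and-digits rule matches the guideline stated above; the exact parameters are assumptions for illustration.

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Check the illustrative policy: minimum length, with both letters and digits."""
    return (len(password) >= min_length
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"[0-9]", password) is not None)

if __name__ == "__main__":
    for pw in ["secret", "correcthorse7battery"]:
        print(pw, "->", "accepted" if meets_policy(pw) else "rejected")
```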


*http://www.oxid.it/cain.html
**http://www.openwall.com/john/
***http://www.darknet.org.uk/2007/02/thc-hydra-the-fast-and-flexible-network-login-hacking-tool/
****http://www.l0phtcrack.com/

Network Operations Center (NOC) Monitoring Software

With the growth in the size of computer networks the monitoring of operations rises in importance. There are a large number of networks that manage hundreds or even thousands of servers.

For example, a 2010 server census shows the following server counts: Intel: 100,000; OVH: 80,000; SoftLayer: 76,000; Akamai Technologies: 73,000; 1&1 Internet: 70,000; Facebook: 60,000; Rackspace: 63,996. * Large cloud services control enormous operations: Google has more than 800,000 servers and Microsoft more than 300,000.

Such large aggregations of equipment require the monitoring of up-time, latency, capacity and response time. To take advantage of technological expertise, of control over fail-over in case of a defect, and of very large economies of scale, server operators rely on Network Operations Centers (NOCs) to monitor their networks as well as all of the attached assets.
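
A minimal sketch of the kind of probe that such monitoring automates is shown below: it attempts a TCP connection to each host and records whether the host is up and how long the connection took. The host list is illustrative; production NOC tools wrap thousands of such checks in scheduling, alerting and escalation.

```python
import socket
import time

def check_host(host: str, port: int = 443, timeout: float = 3.0) -> tuple[bool, float]:
    """Return (is_up, latency_in_ms) for a single TCP connection attempt."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, (time.monotonic() - start) * 1000.0
    except OSError:
        return False, float("inf")

if __name__ == "__main__":
    # Illustrative targets only; a real NOC would read these from its configuration.
    for host in ["www.example.com", "www.wikipedia.org"]:
        up, latency = check_host(host)
        status = f"UP  {latency:.0f} ms" if up else "DOWN"
        print(f"{host:25s} {status}")
```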

The software used by large NOCs is always customized for proprietary configurations. However, numerous vendors make NOC monitoring technology available. To illustrate the scope and some of the features of NOC monitoring, we will use the open source Nagios software as deployed by Wikimedia. **

In the Nagios status display, Wikimedia had 405 servers up and 7 down, out of 1,960 servers listed (not all in active use).



A wide range of diagnostic routines and statistics is available to operators for taking remedial action. NOC personnel can narrow any review down to a detailed examination of the status of each server.



SUMMARY

The tools that are available for NOC operations will represent a major investment for DoD. In the case of NMCI these tools represented a major share of “intellectual capital” that the Navy had to pay for as a part of the migration to NGEN where NOC software will be government owned.

Any future DoD network will have to consider tight integration between NOC operations and the Global Information Grid. The personnel and the software in the NOC represent the first line of defenses for assuring the security of cyber operations.



*http://www.datacenterknowledge.com/archives/2009/05/14/whos-got-the-most-web-servers/
**http://www.nagios.org/

“White” and “Black” Clouds - The Conficker Case

According to Network World magazine (http://www.networkworld.com/community/node/58829), one of the biggest computing networks anywhere is the collection of computers controlled by the Conficker worm. In March 2010 Conficker controlled 6.4 million computer systems across 230 top-level domains, with more than 18 million CPUs and 28 terabits per second of bandwidth. Conficker operations could therefore be classified as a "black" cloud. There is no reason why successors to Conficker will not reappear in different forms in the future.

In comparison, the biggest legitimate (i.e. "white") cloud provider is Google. It is made up of 500,000 systems, 1 million CPUs and 1,500 gigabits per second (Gbps) of bandwidth. Amazon comes in second with 160,000 systems, 320,000 CPUs and 400 Gbps of bandwidth, while Rackspace offers 65,000 systems, 130,000 CPUs and 300 Gbps. Microsoft's Azure is so far only in a start-up mode of cloud computing services.

The Google, Amazon, Azure and Rackspace "white" clouds have very little in common with the Conficker "black" cloud. They operate in completely different ways. The difference lies in the ways they expose themselves to security vulnerabilities.

Conficker attacks any computer, anywhere, that is not correctly defended, whether server, desktop or laptop. This lack of sufficient defense applies to a significant share of the global population of over half a billion devices. Such attacks are launched from diverse origins attributed to "hackers".

In contrast, the customers of Google, Amazon, Rackspace and Azure initiate and then manage their connections in well-defended computing environments. The difference between "white" and "black" clouds lies in the security measures applied against threats.

Google, Amazon, Rackspace and Azure clouds are coordinated by methods that incorporate software and hardware offering elaborate protective measures for security assurance. Botnets, as well as most virus attack mechanisms, do not target the well-funded and well-defended clouds of the "white" cloud firms.

Conficker and similar botnets function by exploiting millions of back doors that can be identified in operating systems such as Microsoft Windows. Security countermeasures must then deal with a long list of known Windows flaws as well as with human errors in the defense of exposed computers.

The vastness of the Conficker operations requires users to purchase, from a diversity of security vendors, protective devices to operate their IT systems. Individual owners of systems must become knowledgeable of the perils of malware, such as botnets like Conficker, when they decide to protect their own computing infrastructure.

SUMMARY
The relocation of a firm's computer operations from a vulnerable "black cloud" environment into a better-defended "white cloud" has the advantage of lowering the costs of computer security. With the rapid escalation in the capabilities of attackers, organizations such as DoD can set up their own protected "private clouds," which can maintain a "white" network with lower risks and greater efficiency.

Stuxnet – An Example of Cyber Attack Capabilities

Technical analysis shows that Stuxnet consists of two separate malware attacks. These attacks are considerably different. One runs on Siemens S7-315 controllers and is fairly simple. Attack two runs on S7-417 controllers and is much more complex. Technical analysis shows that both attacks were developed using different tools. *

It appears that attacks one and two were deployed in combination as an all-out cyber strike against installations that manage process control devices. ** The following are some of the characteristics of Stuxnet, which in complexity exceeds anything seen before:

1. Stuxnet targets the sabotage of process control equipment that is isolated from the Internet, but otherwise connected to selected administrative systems. It manipulates specific processes. The effects are completely hidden from the operators.
2. The deployment of Stuxnet suggests that the originators of the attacks must have possessed detailed insider knowledge about the operations of the target equipment.
3. The Stuxnet attack was combined with conventional hacker skills used to overcome primary defenses, such as stealing certificates.
4. Stuxnet is custom-designed by experts who have detailed Siemens process control knowledge. They are not amateurs engaged in the adaptation of off-the-shelf attack software.
5. To be worth all the enormous effort devoted to launching the attacks, the target for Stuxnet had to be of high value.
6. The central flaw in defending against Stuxnet is a total dependence on well-documented Siemens software. Software instructions manuals, including maintenance instructions, for Siemens controllers are downloadable and therefore completely exploitable.
7. Stuxnet’s attack software versions are reusable. Unlike explosives, they can be used over and over again. The vulnerabilities that Stuxnet exploits cannot be “patched” by Siemens. In effect, Stuxnet can be viewed as multiple zero-day attacks wrapped into several packages that aim at specific targets, such as Windows.

Stuxnet is most likely going to be the best-studied piece of malware for a long time. The attackers must know this. Therefore, the whole attack only makes sense within a very limited timeframe. After Stuxnet is analyzed, this specific form of attack will not work any more. It is a one-shot weapon that will most likely be reloaded, with modifications.

SUMMARY
From the standpoint of DoD this has ominous implications. Stuxnet-like attacks should be seen as custom-made weapons designed to take down well-defended targets at a critical time.

Stuxnet is designed first to install itself on Internet-connected administrative computers. If undetected, it can use the already-corrupted platforms to infect other computers that were previously considered invulnerable because they do not connect to the Internet. Getting through the external perimeter is difficult, but well within the current state of the hacking art.

If a similar attack were launched against DoD's critical cyber operations, DoD would suffer from its dependence on readily exploitable software already implanted in both the external and the internal defense rings.

The software currently in use by DoD operations is mostly unclassified. The current proliferation of contractor-managed systems is very large. As compared with the complexity and sophistication that was necessary to breach relatively small defenses that protected standardized Siemens process controllers, the diversity of DoD targets will make it relatively easy to locate a wide range of attack opportunities.

*http://www.langner.com/en/2010/11/19/the-big-picture/
**http://en.wikipedia.org/wiki/Distributed_control_system 

Untraceable Sources of Malware

By far the greatest threat to the commercial, economic and political viability of the Global Information Infrastructure will come from information terrorists. Information terrorism has ceased to be an amateur effort and has migrated into the hands of well-organized, highly trained expert professionals. Information terrorist attacks can be expected to become a decisive element of any combined threat to the economic and social integrity of the international community. Nations whose lifeline becomes increasingly dependent on information networks should realize that there is no sanctuary from information-based assaults. Commercial organizations, especially in telecommunications, finance, transportation and power generation, offer choice targets for massive disruption.

The introduction of Anonymous Re-mailers into the Internet has altered the capacity to balance attack and counter-attack, or crime and punishment. The widespread use of, and easy access to, the capacity to launch anonymous (i.e. untraceable) messages and software is a development that warrants attention when mounting defenses in cyber operations (Strassmann and Marlow, 1996). *

One of the most pervasive techniques for hiding the source of malware is the Tor anonymity network.** It is a system composed of client software and a network of servers that can hide information about users' locations and other factors that might identify them. Use of this system makes it more difficult to trace Internet traffic to the user, including visits to Web sites, online posts, instant messages, and other communication forms. It is intended to protect users' personal freedom, privacy, and ability to conduct confidential business by keeping their Internet activities from being monitored. The software is open source and the network is free of charge to use.

Tor works by relaying communications through a network of systems run by volunteers in various locations. Because the Internet address of the sender and the recipient are not both readable at any step along the way (and in intermediate links in the chain, neither piece of information is readable), someone engaging in network traffic analysis and surveillance at any point along the line cannot directly identify which end system is communicating with which other. Furthermore, the recipient knows only the address of the last intermediate machine, not the sender. By keeping some of the network entry points hidden, Tor is also able to evade many Internet censorship systems, even ones specifically targeting Tor. ***
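
A conceptual sketch of the layered ("onion") encryption that makes this possible appears below: a message is wrapped in one layer of encryption per relay, so each relay can remove only its own layer and learn only the next hop. This is a simplification using symmetric keys from the third-party cryptography package, not Tor's actual protocol or key exchange.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical three-hop circuit: each relay holds one symmetric key.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys: list[bytes]) -> bytes:
    """Encrypt the message once per relay, innermost layer first (exit relay last)."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def relay_unwrap(onion: bytes, key: bytes) -> bytes:
    """Each relay strips exactly one layer; it never sees the plaintext or the full route."""
    return Fernet(key).decrypt(onion)

if __name__ == "__main__":
    onion = wrap(b"request to destination", relay_keys)
    for i, key in enumerate(relay_keys):          # the packet passes through relays in order
        onion = relay_unwrap(onion, key)
        print(f"relay {i} forwarded {len(onion)} bytes")
    print("exit relay delivers:", onion)
```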

SUMMARY
DoD cyber operations must be protected against incoming transactions that originate from any anonymous source. Even though anonymous transactions can be routed through an unsuspecting insider, strong authentication of every source can protect the DoD network. In the case of a hostile insider acting as a conveyor of anonymous traffic, the only defenses are techniques monitored by counterintelligence personnel.

*http://www.strassmann.com/pubs/anon-remail.html
**http://www.torproject.org/
***http://en.wikipedia.org/wiki/Tor_(anonymity_network)

Botnet Attacks

Networks of compromised computers controlled by a central server, better known as botnets, are the preferred tools for online criminals.

Hackers can use these co-opted systems to churn out spam, host malicious code, hide their tracks on the Internet, or flood a corporate network to cut off its access to the Web.

Whenever a new botnet appears, researchers race to reverse engineer the software it installs on a victim's machine, and to decode the way each bot communicates with the controlling server. Because these communications are often encrypted, such analyses can take weeks or months.
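
Even when the content of the command-and-control traffic is encrypted, its timing can give a bot away: many bots "beacon" to their controller at nearly regular intervals. The sketch below flags hosts whose outbound connection times to a given destination are suspiciously periodic; the timestamps and thresholds are illustrative assumptions.

```python
import statistics

def looks_like_beaconing(timestamps: list[float],
                         max_jitter_seconds: float = 2.0,
                         min_events: int = 5) -> bool:
    """Flag a series of connection times whose intervals are nearly constant."""
    if len(timestamps) < min_events:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(intervals) < max_jitter_seconds

if __name__ == "__main__":
    # Hypothetical outbound-connection timestamps (seconds) for two hosts.
    regular_host = [0, 60.2, 120.1, 179.9, 240.3, 300.0]   # checks in every ~60 s
    normal_host = [0, 13.0, 95.5, 260.0, 301.2, 407.8]     # irregular human traffic
    print("regular_host beaconing?", looks_like_beaconing(regular_host))
    print("normal_host beaconing? ", looks_like_beaconing(normal_host))
```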

Launching botnet attacks is easy; the tools are readily available to individuals either for free or for a license fee paid to a criminal group. For instance, a hacker group known as UpLevel developed Zeus, a point-and-click program for creating and controlling a network of compromised computer systems, also known as a botnet.

The latest version of this software, which can be downloaded for free and requires very little technical skill to operate, is one of the most popular botnet platforms for spammers, fraudsters, and people who deal in stolen personal information.

Some of the best-known recent bots are BredoLab with 30 million infections, Mariposa with 12 million infections, Conficker with over 10 million infections and Zeus with over 3 million infections. There are a hundred others, often minor but rapidly launched variations of widely deployed botnets.

SUMMARY
A botnet's originator (aka "bot herder" or "bot master") can control a group of bots remotely, thereby magnifying the severity of an attack. There are numerous techniques for defeating bot attacks or preventing a bot from being implanted. Almost all such techniques depend on the rapidity with which a bot attack is detected, identified and then deflected.

From the standpoint of DoD, a substantial reduction of the "attack surface" (for example, the elimination of "fat" clients) will reduce the number of computers on which a bot attack can be deployed.

Wireless Connectivity for DoD

With an increased dependency on connectivity to cloud computing, browser-based personal computers and close to 100% systems availability, DoD will have to resort to wireless networks. To deliver high-performance communication services and redundancy in circuits, DoD wireless will also have to depend on "mesh" networking.

The shift to wireless connections to the Internet is also driven by the increased dependence on a mobile workforce that uses multiple computing devices to stay informed.

DoD will have to depend on WiMax (Worldwide Interoperability for Microwave Access). This is a telecommunications protocol that provides fixed and fully mobile Internet access with an expected capacity of one gigabit per second, and further increases in speed are expected.

WiMax can provide mobile broadband connections, offer a wireless alternative to cable and DSL for "last mile" broadband access, handle data, telecommunications (VoIP) and IPTV services, and connect to the Internet.

There are numerous devices that enable connectivity to a WiMax network, including chips embedded in personal computers. A WiMax cell tower has a range of up to 50 km, though higher speeds and greater reliability are attainable at shorter ranges.

The preferred way of deploying WiMax would be by means of mesh networks that can offer not only enhanced reliability but also improved bandwidth capacity.
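
A small sketch of why mesh topologies improve reliability is shown below: the mesh remains connected after any single node failure, while a star topology depends entirely on its hub. The topologies are toy examples, and the third-party networkx package is assumed to be available for the connectivity calculation.

```python
import networkx as nx  # pip install networkx

def build_star(n: int) -> nx.Graph:
    """Every node connects only to a central hub (node 0)."""
    return nx.star_graph(n - 1)

def build_mesh(n: int) -> nx.Graph:
    """A ring with chords: each node links to its neighbors and to the node two away."""
    g = nx.Graph()
    for i in range(n):
        g.add_edge(i, (i + 1) % n)
        g.add_edge(i, (i + 2) % n)
    return g

if __name__ == "__main__":
    for name, graph in [("star", build_star(8)), ("mesh", build_mesh(8))]:
        # Node connectivity = minimum number of node failures that can split the network.
        print(f"{name}: survives {nx.node_connectivity(graph) - 1} arbitrary node failure(s)")
```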


Summary
Placing increased reliance on WiMax mesh networks offers advantages in comparison with the costs of physical wiring. As DoD shifts to a mobile workforce, the availability of wireless connectivity becomes mandatory for security and interoperability reasons.

Most DoD manpower is concentrated in a few installations where the range of WiMax is sufficient to reach most personnel. From a security standpoint WiMax can potentially offer greater security than WiFi, and it is also more effective for use in expeditionary deployments. Mobile communication services, such as those provided by BlackBerry devices, will ultimately be displaced by DoD-managed wireless networks that connect to collaboration applications without intermediaries.

From a planning standpoint it will be necessary to include WiMax (and its successors) in the design of DoD networks, especially in planning how the “last mile” connections are established.

Browser-Based Disruption of IT

The Google Chromium OS starts with the assumption that you don't need an Operating System (OS) to do anything other than run a browser. No other applications or services are available on the personal computer, whether it is a desktop, laptop or smart phone. Everything needed is on the net and can be reached and retrieved through the browser. That's why the Chromium OS and the Chromium browser share the same name.

Consumers have several choices when it comes to managing computers. The principal operating systems for personal computers are the variations of MacOS, the variations of Windows or the variations of Unix.  In the case of mobile devices, the principal operating systems are variations of Windows Mobile, BlackBerry, Apple IOS, NokiaOS, Symbian and Android. If all releases and variations are accounted for, there are well over one hundred OSs in place. They are incompatible or require elaborate workarounds to be interoperable.

Chrome OS now proposes to change all of this. It is a stripped-down operating system with minimum features. It represents a radical overhaul in the way computers work: there is no desktop and there are no files or folders.
While the Chrome OS won't hit the market for months, there is advance information available that offers a preview:

1. The slimmed-down Chrome OS is intended to run on computers that use flash chips for storage instead of hard drives. The result is a cold boot in less than 25 seconds, including the time it takes to log in. After that a computer can restart in less than two seconds.
2. All of its applications are Web-based, so users don't need to buy program discs or install applications.
3. Downloads are small and incremental. They are pushed out automatically to users.
4. Web applications are available from an app store. Active applications run as separate icons or can be retrieved from tabs.
5. The feature that is relevant to DoD is the incorporation of the Trusted Platform Module (TPM). The TPM specifications define a secure cryptoprocessor that can store the cryptographic keys that protect information, implemented as a "TPM chip" or "TPM Security Device".


This capability should make a system impermeable to viruses, bots and all malware.
6. A Chrome OS security review is also available. *

SUMMARY
The theory behind Chrome OS is that users already depend on continuous access to the Internet. Network connectivity is essential for every operating system, if only for software updating. User habits also show that, on average, users spend more time connected to the Internet or to a remote data center server than offline.

Chrome OS doesn't run native applications, doesn't allow access to a local file storage system and doesn't include drivers for external devices. That means that if a user needs a printer, it must be available as a network service.

Whether Chrome OS will be widely accepted remains to be seen. As networks become more reliable, acquire higher capacity and demonstrate superior security the idea of browser-based systems will be also pursued by other browser vendors. Such solutions can be expected to overcome much of the complexity of existing architectures while materially reducing costs.

From the standpoint of DoD, the prospect that a large share of its computing will be browser-based rather than PC-based warrants attention. It is the availability of secure and reliable network connectivity from cloud computing that will make browser-based computing feasible. To assure uninterrupted connectivity, increased reliance on wideband wireless systems will also be a consequence of rethinking how DoD networks are structured.


* http://www.chromium.org/chromium-os/chromiumos-design-docs/security-overview

Reducing Federal Data Centers by 38% in Four Years

Vivek Kundra, the US CIO, has just announced an ambitious program to reduce the number of Federal Data Centers, currently 2,094, by 800 by 2015, a cut of 38%. (see http://cio.gov/documents/25-Point-Implementation-Plan-to-Reform-Federal%20IT.pdf).

The number of data centers in the Federal Government is actually larger than 2,094, (see http://pstrassmann.blogspot.com/2010/12/how-many-data-centers-in-dod.html).

Though Kundra's plan is driven by pending budget cuts, the metric of counting how many data centers are eliminated can be misleading. Only dollar reductions in Operations and Maintenance costs are a proper measure of true reform.

Having presided during my career over data center consolidations at Kraft, Xerox and DoD involving hundreds of such attempts, I would be cautious about committing to 38% data center consolidations so fast.

Large data centers characteristically support a variety of customers. Each customer is connected to its servers, with unique services provided in each case. One cannot just rip out a collection of servers and mainframes without carefully re-engineering whatever links to them. These connections are mostly managed by sundry small contractors who provide maintenance and operating services, each using the best of the custom-made technologies they have available.

In any data center one will find multiple versions of operating systems, homegrown security appliances, unique communication processing methods, incompatible device interfaces and customized network management consoles. It is unlikely that all of the procedures in place are fully documented.

Resident contractors retain much of the know-how for operating the data centers, particularly under condition of failure. Potentially, the Government Data Centers can include up to 2,814 different server versions connected to 1,811 listed versions of clients that are managed by 1,210 versions of operating systems. If folded into any cloud, all of this diversity will have to be reduced drastically. How many of these are present in the existing Federal Data Centers is not known but my guess is that our legacy systems remain a large depository of software that has not been upgraded for years.

Every cloud contractor attempting to merge data centers into a vendor's Infrastructure-as-a-Service or Software-as-a-Service environment will first have to contend with the complexity of migrating workloads out of an existing data center to a cloud vendor that operates in a streamlined way. Each cloud vendor handles customers in tightly prescribed ways in order to preserve the greatest possible lock-in on the customer's business.

SUMMARY
Dictating cuts in data centers without also extending the consolidation to desktop hardware, desktop software and communications processing is risky.

Before proceeding with large-scale mergers that may reduce the number of data centers further, consideration should be given to the disruption caused during any migration. Any plan to reform federal information technology data center operations should first invest in the engineering that would set the standards for how the new Federal data center environment will function.

Wiki Leaks and Cyber Operations

The casualty of the Wiki Leaks document dump will be the Defense Department's latest concept of pushing vital information down to the front lines. For instance, in the Navy's concept of Information Dominance operations it will be the lower-ranking officers and enlisted men who are expected to sort out relevant battlefield views from the masses of information that had heretofore been laboriously sifted through layers of intelligence staffs.

Wiki Leaks has now jeopardized the doctrine of making data broadly available at the fighting level. The shocked reception of Wiki Leaks at the highest DoD levels should not come as a surprise. War-fighting units were granted greater access to information without a corresponding reorganization of how DoD networks and applications would operate under increased transparency.
 
DoD must now deal with conflicting objectives. On one hand, soldiers in forward operating bases should have all of the information that could affect their operations.

On the other hand, making information indiscriminately available is very risky, particularly if intelligence from other agencies (such as the Department of State) or Allies is also involved.

DARPA has just launched the Cyber Insider Threat (CINDER) project to use increased surveillance to make it difficult for troops to funnel classified material to hostile sources. Unfortunately, increased surveillance does not offer an answer, on account of the enormous number of transactions as well as the huge number of people involved.

The answer to the data leak problem lies, as it always has, in compartmentalization. There is a scope, a "boundary of relevancy," that surrounds all military and civilian personnel. The "need to know" conditions change rapidly, depending on the location, mission and functions performed.

The present personnel systems are not designed to track, without delay, how an individual's "boundary of relevancy" changes. Even simple personnel events, such as revocation of a CAC card, take too much time and are administratively ponderous. Short-term re-assignments of a person's scope of security access are very difficult to make as conditions change. Intelligent surveillance of anomalous transactions that do not match a security profile is beyond the scope of current technological capabilities in data processing and data mining.

SUMMARY
There are good and workable technology solutions available for overcoming the unauthorized exfiltration of information from DoD operations. The cyber insider problem can be solved through re-engineering the speed with which access authorization files are granted and updated.

The monitoring of transactions and files must also be shifted from reliance on the security of millions of desktops and over 700 data centers to a few pools of servers that can be monitored and archived.

Denial of Service (DoS) Tools

DoS (Denial of Service) attacks render a computer service incapable of responding to service requests in a timely manner.* DoS software is a tool that can be used by cyber attackers, hackers, sysadmins and spammers.**

DoS operates by corrupting routing devices, electronic mail or Domain Name System (DNS) servers with the following effects:
1. Consume computing resources, such as bandwidth, disk space, or processor time;
2. Disrupt configuration information, such as routing information;
3. Disrupt state information, such as through the unsolicited resetting of TCP sessions;
4. Obstruct the communication channels between the intended users and the target that has been disrupted.

DoS can be induced by methods such as:
1. A “Trojan” has been installed and activated;
2. A “Ping of Death” is generated. This launches a very large Internet Control Message Protocol (ICMP) packet so that the buffer on a server overflows;
3. A SYN Flood takes place, in which SYN packets continue to be sent, tying up the service until each handshake times out. SYN is part of the TCP/IP three-way handshake performed when a connection is established.

[Figure omitted: a simplified version of a DoS attack.]

There are many variations in how DoS can be activated:***
1. Flooding the ICMP
2. Teardrop attacks
3. Peer-to-Peer attacks
4. Permanent damage attacks
5. Application level flooding
6. Distributed attack
7. Reflected attacks
8. Degradation of service attack
9. Blind denial of service.

Firewalls and system patches can defend against DoS attacks that use malformed packets. However, if the DoS attack saturates bandwidth there is very little a defender can do except shut down and reroute transactions to an alternative site while activating "Snort" software.

Snort is a network intrusion detection system (NIDS) that has the ability to perform real-time traffic analysis and packet logging on Internet Protocol (IP) networks. Snort performs protocol analysis, content searching and content matching. The program can also be used to detect probes or attacks, including, but not limited to, operating system fingerprinting attempts, common gateway interface attacks, buffer overflows, server message block probes, and stealth port scans.
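
A minimal sketch of the kind of real-time traffic analysis such tools perform appears below: it counts TCP SYN packets per source address over a short capture window and flags sources that exceed a threshold. It relies on the third-party scapy packet library and an arbitrary illustrative threshold; it is a teaching sketch, not a substitute for Snort.

```python
from collections import Counter
from scapy.all import sniff, IP, TCP  # pip install scapy; usually needs root privileges

SYN_THRESHOLD = 100   # illustrative: SYNs from one source within the capture window
syn_counts = Counter()

def count_syn(packet) -> None:
    """Count packets that carry only the TCP SYN flag (half-open connection attempts)."""
    if packet.haslayer(TCP) and packet.haslayer(IP) and packet[TCP].flags == "S":
        syn_counts[packet[IP].src] += 1

if __name__ == "__main__":
    # Capture TCP traffic for 30 seconds, then report possible SYN-flood sources.
    sniff(filter="tcp", prn=count_syn, store=False, timeout=30)
    for source, count in syn_counts.most_common():
        if count > SYN_THRESHOLD:
            print(f"possible SYN flood from {source}: {count} SYN packets")
```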

DoS software is easily available from many sources and can be downloaded from web pages such as:
A. The DoS Project's "trinoo" distributed denial of service attack tool by David Dittrich from the University of Washington.****
B. “knight.c”, a powerful downloadable DoS client.*****
C. DoSHTTP 2.5.1, which can be used for Distributed Denial of Service (DDoS) attacks.******
D. LOIC, which performs a distributed denial-of-service (DDoS) attack on a target site by flooding the server with TCP packets, UDP packets, or HTTP requests with the intention of disrupting the service of a particular host. Downloadable from http://sourceforge.net/projects/loic/.

SUMMARY
From the standpoint of DoD, DoS attacks represent the most serious threat to maintaining continuity of operations without disruption. In case of warfare there is no question that an adversary's first move would be to launch DoS attacks on DoD networks. The purpose would be to interfere with command and control communications. How defenses would be mounted is beyond the scope of this blog.


*  http://staff.washington.edu/dittrich/misc/ddos/ 
** http://www.nmrc.org/pub/faq/hackfaq/hackfaq-05.html. NMRC is the Nomad Mobile Research Centre.
***  http://en.wikipedia.org/wiki/Denial-of-service_attack
**** http://staff.washington.edu/dittrich/misc/trinoo.analysis.txt
***** http://packetstormsecurity.org/distributed/knight.c
****** http://www.bestsoftware4download.com/software/t-free-doshttp-download-tblabqto.html

How Many Data Centers in DoD?

According to Vivek Kundra, the Federal Chief Information Officer, the DoD was operating 772 data centers as of July 30, 2010. *

For this count, a data center was defined as follows:
• Any room that is greater than 500 square feet and devoted to data processing; and,
• Meets one of the tier (I, II, III & IV) classifications defined by the Uptime Institute.

The problem with the room-size criterion is that, with modern server technology, the space required to house large computing power is minimal. For instance, a single rack-mounted IBM eX5, with mainframe-class computing power, will occupy only a fraction of the space in a 20x25-foot room.

Defining a data center as meeting Tier I classification criteria would qualify installations that are currently not included in Kundra's count. Tier I installations do not require a raised floor and do not need a source of uninterrupted power supply. Tier I data centers operate with only a single system and have no redundancy. They have multiple single points of failure. A Tier I data center can be placed almost anywhere in a temperature controlled office environment or in a shipping container.

In the last two years Sun Microsystems, Hewlett-Packard, Dell, Microsoft and SGI have offered complete data centers in standard 40x8 ft and 20x8 ft shipping containers, occupying 320 and 160 square feet respectively. Such data centers have the capacity of up to 29.5 petabytes of storage and up to 46,080 CPU cores of processing power. These data centers would not be included in any of the OMB surveys!

Perhaps the greatest omission in the DoD data center count is installations that are operated by contractors. For instance, the data centers for NMCI, which are owned and operated by HP/EDS, are not included. That could understate the amount of computing dedicated to the support of DoD.

SUMMARY
The DoD count of what is defined as a “data center” most likely understates the actual number of operations that should be considered for consolidation. Current rack-mounted technologies offer a large multiple of the computing power that was available in much larger configurations less than ten years ago. With oversight dictated by project budgets, the urgency of short implementation schedules will drive the installation of powerful computer configurations unaccounted for by the Federal Datacenter Consolidation Initiative Report.

From the standpoint of cost, economies of scale, pooling of resources, more effective use of personnel and the concentration of security expertise the DoD data center count needs to be revised before it can serve as a basis for long range planning.

* http://www.cio.gov/pages.cfm/page/OMB-Asks-Agencies-to-Review-Data-Center-Targets

George Mason U. COURSE: AIT 690.007 – CYBER OPERATIONS

Professor Paul A. Strassmann
Office: Room 5359, Engineering Bldg, Fairfax Campus
phone: 203-966-5505; email: pstrassm@gmu.edu ;
website: http://www.strassmann.com
web for AIT 690.007: https://sites.google.com/site/gmucyberoperations/
On line office Hours: Th 5:30-7:00, or for appointment call 203-966-5505
____________________________________________________________________________
Introduction
This course provides graduate students with an overview of current Cyber Operations issues. The course content deals with topics that are of primary interest to the Department of Defense and its contractors who will be carrying much of the responsibility for actual implementation of cyber-related systems.
The subject matter covered will be diverse and will cover the following:

Session 1: The Economics of Cyber Operations – In classroom
Session 2: Organization for Cyber Operations - online
Session 3: Cyber Networks – Internet - online
Session 4: Cyber Networks – DoD Networks - online
Session 5: Legacy Applications for Cyber Operations - online
Session 6: Desktop Virtualization for Cyber Operations - online
Session 7: Data Center Clouds for Cyber Operations - online
Session 8: Semantic Software in a Cyber Environment - online
Session 9: Open Source Software - online
Session 10: Data Storage and Systems Reliability - online
Session 11: Attacks on Cyber Networks - online
Session 12: Security of Cyber Operation - online
Session 13: Social Networks – in classroom

Included for Each Session:
A. Slide presentation, 50+ slides; B: Required Reading – four published papers; C: Study assignment of four questions to be submitted in next class. Students will be graded for work submitted for each class; D: Submission of Cyber Operations case study in lieu of Final Exam. Will be graded; E: A recorded transcript will be available for each session; F: An approx 200-page reference text will be published for this class.
G: Classes will be held from 7:20 to 9:20 PM on the following dates: January 27, 2011; February 3; February 10; February 17; February 24; March 3; March 10; March 24; March 31; April 7; April 14; April 21; April 28. The Final Exam will be a compilation of results from assignments handed in for each class.

Seminar Structure
 This class is a seminar, which means that you must come to class prepared to discuss the week’s readings that will be posted and available for downloads on https://sites.google.com/site/gmucyberoperations/.

A seminar works best when each participant is a contributor in the form of comments that can be posted on a WebEx page.  I have very high expectations of each student in the class to participate, as reflected in the following grading:

Grades in this class will be calculated as follows:
Class participation: 30%
Completed class work assignments: 70%
Weekly Activity

Desktop Virtualization and Virtual Desktops

The basic challenge of virtualization, as it affects desktops, is to come up with a clean way of separating a desktop from the files and the preferences that belong to a user.

Virtual Desktop Infrastructure (VDI)
With VDI, a user's PC desktop is located on a virtual machine in the datacenter. The user can then move between display devices at different locations: the same desktop image shows up on each thin client (or fat client operating in a “thin mode”).

The VDI image, however, remains a monolithic file in which the user's files and the Windows desktop are mashed together. The user may have only a few GB of personal files, but they are entangled with a Windows operating system that is 50GB or more.

Virtual Desktop
In a Virtual Desktop there is a clean separation between the user's files (with all their settings and preferences) and the Windows operating system. This is accomplished by creating a user “Profile”, which can be centrally stored and is completely separate from the Windows operating system.

In order to use the desktop, one must reattach the “Profile” to the desktop instance of the Windows operating system. In such a case the desktop instance of the Windows operating system is completely generic, standard issue from Microsoft. Because the standard issue has no personalization, it is easier to manage: only one master copy is needed.

The separate “Profile” is much cleaner, smaller, and easier to protect than a full virtual desktop.

SUMMARY
Virtualization offers two options for managing desktops. How they are implemented will depend on how legacy systems are migrated. The Virtual Desktop should be seen as the easier solution.

In the case of DoD the economic benefits from desktop virtualization exceed the benefits from server virtualization. Potentially, well over a million desktops can be migrated into a much lower-cost environment. For this reason the choice of how to convert to virtual desktops running on thin clients is important.

Navy Prepares to Take An Important First Step

It is the objective of the U.S. Navy’s Information Dominance Corps to manage a global network that delivers instant integration of military data across a number of separate specializations such as geographic, intelligence, logistics and manpower, as well as provide information about red or blue forces. The semantic Web will be the engine needed to power the effort.

These objectives create an unprecedented demand for the retrieval of unrelated data from sources that are diverse and not interoperable. Such data now is stored in files that have inconsistent coding. The existing files are organized in contract-mandated projects that answer only inquiries that are limited to their respective enclaves. For answers that combine weapons, geography or logistics, Information Dominance Corps (IDC) analysts must surf through several databases, which are neither synchronized nor compatible.

Currently, the IDC has to depend on human analysts to use judgment in the interpretation of scattered facts. That is not easy because the analysts have to deal with different vocabularies, undocumented data definitions and dissimilar formats. Therefore an enormous effort is expended in the cross-referencing of disparate data repositories and to reconcile data sources that describe the identical event, but are coded differently. With the inclusion of tens of thousands of sensors and with the presence of thousands of computing devices in the global Navy/Marine Corps network, the number of analysts that would be required for sifting through all this data would exceed whatever is manageable and surely affordable.

To overcome manpower limitations in the future, the IDC will have to resort to semantic Web technologies to assemble and correlate data that would support operating needs. The semantic methods are techniques that rely on the extraction of the meaning of data from their related context. Such context is obtained by linking to each original data source a long list of related information. These are called data ontologies.

Ontological references are formal statements that describe a particular data element. The texts of ontology statements are linked to their respective data in a standard format. In this way they become readable as computer-addressable data entries. As a result, all data files end up as strings of ontologies that are linked to their respective data sources, which reveals the logical relationships. This arrangement makes it possible for computers to search and retrieve relationships and connections to data sources. It connects the scattered dots of seemingly random military data. It reveals the hidden meaning of transactions.

In a mature semantic Web, gigabytes are devoted to linking ontology statements for descriptions of only a few bytes of original data. The adoption of ontology-based semantics requires the construction of computing facilities that house huge amounts of computing and storage capacity. The handling of such enormous amounts of data requires data centers that possess economies of scale in capital cost while conserving energy that otherwise would swamp most of the available generating capacity. Such data centers can cost as much as $1 billion.

Ontologies can be generated automatically by analysts browsing through logically related information in multiple databases searching for information, but primarily for unformatted text that has been placed on disks in a retrievable format. Indexing text by some sort of a numerical coding schema is not of much use. Indexing relies on pinpoint identification of each data element either from its numerical value or from words used as keywords. Index methods are precise, but cannot discover relationships that have not been tagged previously. They are useless in the case of foreign languages or with new vocabularies.

The difference between the index and the semantic methods is that data retrieved by index methods must ultimately depend on human intervention to extract knowledge from a huge number of possible choices. For semantic extractions, the available data would be examined by computers and only then presented as a small number of results for further examination by human operators.

The purpose of the semantic Web is to make it possible for the IDC network to connect useful information from tens of thousands of databases automatically. The warfighters then can be shown what possible actions they could take. With the adoption of semantic methods, IDC will not be looking for thousands of uncorrelated search results, as is the case right now. It would receive answers in the form of a few priority-ranked findings.

The IDC computing environment should consist of a distributed but highly redundant global network. Various nodes of this network should collect information from every platform that acts as a data collector, such as desktops, laptops, smart phones, battlefield texting communications, unmanned aircraft video images, satellite pictures and radar tracking. A selection from this data would become available to appropriate persons because the network would possess situational awareness about each warfighter.

The ultimate objective is to endow everyone with the capacity to compile, assess and exploit information in support of decisions. Only a semantic approach in which the computer network links data to its local situation can deliver that outcome.

The semantic approach makes it possible for computers to “understand” what is dispersed among widely distributed files. Only machine-readable data can be used to sift through every file that could possibly reveal what otherwise is hidden. Only by means of automated software agents will IDC analysts be able to support information dominance.

Ultimately, the data collected by IDC will require the recovery and storage of information from tens of thousands of connected devices. This data would be placed in petabytes worth of rapidly retrievable files, growing into exabytes in less than a decade. It would require the offering of high reliability levels—100 percent with automatic failover—when supporting combat. All of the data, in different data centers, would have to be accessible—in less than 250 milliseconds—for recovery from multiple files. This would make IDC information universally discoverable and accessible while maintaining assured levels of security.

The IDC network requirements are demanding. They exceed, by a wide margin, the existing capacities. The initial operational capability would call for processing more than 100,000 transactions per second. The capacity for handling these transactions would have to grow exponentially with time because it would be carrying high-bandwidth graphics, images and color video. Such transmissions consume multiple megabytes of carrier capacity per transaction. Consequently the bandwidth to and from the IDC channels would have to be measured ultimately in terms of thousands of gigabytes per second.

After the receipt of the raw data into the IDC files, linked supercomputers would have to screen the inputs for further analysis. Software then would be deployed to pre-process inquiry patterns in order to identify standard queries so that typical questions can be answered without delay. One of the liabilities of semantic methods is the enormous amount of computation that is required to deliver useful results. The preprocessing workloads on the IDC supercomputers vastly would exceed what is needed for the handling of simple messages.

The projected size of IDC data files that support semantic processes is likely to exceed currently available space by a large multiplier. At maturity, it would require storing a stream of data totaling at least a thousand terabytes per hour or more than 20 petabytes per day, which is comparable to the processing load of the search engine Google. Google and IDC differ only in that the Navy requires higher system uptime and much higher security levels to support warfare conditions.

The key tools for constructing and using the semantic Web are the Extensible Markup Language (XML), the Resource Description Framework (RDF) and the Web Ontology Language (OWL). The management of these standards is under the guidance of the World Wide Web Consortium (W3C). The term semantic Web refers to the W3C’s vision of how data should be linked on the Web. Semantic Web technologies are methods that enable people to create data collections, build meta data vocabularies and write rules for the handling of related data. These three techniques are now labeled as Web 3.0 solutions.

XML is the protocol for recording data for Web accessibility. It is the format in which all data is recorded.

RDF is the model for assuring data interchange on the Web. RDF facilitates data merging and correlation even if the underlying recording schemas differ. RDF supports learning about data recording patterns over time without requiring the data identification to be changed. RDF forms graph views of recorded information, which is useful in presenting easy-to-understand relationships among data sources.

OWL is a family of languages for authoring ontologies. OWL would represent the knowledge about the events and their respective relationships as they apply to IDC operations. They form an added layer of meaning on top of the existing Web service protocols. Although many of the OWL descriptions can be obtained by automatic means that use mathematical algorithms, ultimately it will take a human analyst to find the applicable IDC relationships. This can be done only if everyone shares a common vocabulary for describing shared knowledge for the IDC enterprise. However, commercial software packages already are available that support the formation of OWL-compliant semantic relationships, which should speed up the adoption of these methods.
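
As a small illustration of these building blocks, the sketch below records a few facts as RDF triples under a hypothetical ontology namespace and then retrieves a relationship with a SPARQL query. It uses the third-party rdflib package; the namespace, class names and data are invented for the example and do not represent any actual IDC vocabulary.

```python
from rdflib import Graph, Literal, Namespace  # pip install rdflib
from rdflib.namespace import RDF

# Hypothetical ontology namespace for the example.
IDC = Namespace("http://example.org/idc-ontology#")

g = Graph()
g.bind("idc", IDC)

# Record two observations as subject-predicate-object triples.
g.add((IDC.contact42, RDF.type, IDC.RadarContact))
g.add((IDC.contact42, IDC.observedBy, IDC.sensor7))
g.add((IDC.contact42, IDC.latitude, Literal(36.85)))
g.add((IDC.sensor7, RDF.type, IDC.Sensor))
g.add((IDC.sensor7, IDC.platform, Literal("UAV-A")))

# A SPARQL query that follows the links: which platforms observed radar contacts?
query = """
    SELECT ?contact ?platform WHERE {
        ?contact a idc:RadarContact ;
                 idc:observedBy ?sensor .
        ?sensor  idc:platform ?platform .
    }
"""
for contact, platform in g.query(query, initNs={"idc": IDC}):
    print(f"{contact} was observed from platform {platform}")
```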

After sufficient experience is accumulated by means of analyst-aided data mining of transactions, many of the ontology templates can be reused so that the labor cost of maintaining the semantic Web can decrease. The IDC databases then can be organized as specialized Web services such as those that produce information for target selection. Such services then can fuse data from dozens of sensors and the latest geographic images as well as data about available weapons, and they could be deployed aboard ships.

Data ontologies are likely to become the method for applying semantic-based applications to IDC operations within the next decade. The enormous expansion of IDC data, especially with the sharing of sensor, logistic and personnel information, will make the semantic-based retrievals of information an absolute economic necessity.

Ultimately, ontologies will form the foundation on which other advanced methods, such as fuzzy logic, artificial intelligence, neural networks and heuristic searching can be adopted. Those are reasons why the use of the semantic Web should be seen only as another but very important steppingstone in the evolution of computer-based reasoning that cannot be delayed.

SUMMARY

The semantic Web should be viewed as the latest extension to the current Web. The semantic Web advances searching methods from inquiries that are based on structured data to producing results that answer uncorrelated questions even if they are in the form of colloquial sentences. The semantic Web therefore should be seen as an enhancement to the already existing methods that are available for accessing information over the Internet.

Semantic methods overcome the current limitations of separate and disjointed Web pages that cannot be collected readily for the assembly of enterprisewide information except through human intervention. The semantic Web advances the IDC from connecting Web pages by means of the analysts’ eyes to connecting the underlying data by means of computers. It advances IDC analysts from sifting through piles of computer listings to using computers to identify a few possible answers.

Army’s Private Cloud Goal Is Praiseworthy but Problematic

On June 25, 2010, the Army issued a request for proposals for the migration of information technologies into a cloud environment. A statement of work defines this as the “Army’s Private Cloud.” The contract reportedly could total $249 million over five years, or an average of $50 million per year. When one compares the proposed spending with the Army’s fiscal year 2009 information technology budget of $7.8 billion, the project accounts for only 0.6 percent of the Army’s budget. That is a modest start for moving in the direction in which commercial firms already are progressing at an accelerated pace.

The central technology of the Army’s Private Cloud, known as APC2, is virtualization. Commercial firms will have fully absorbed this approach within the next five years and will be advancing further. Meanwhile, the Army will be working on what can be construed, at best, as a pilot program that uses only the features associated with the early stages of cloud operations. Nevertheless, the APC2 program has several worthwhile goals.

First, the Army will reduce the number of data centers from more than 200 to fewer than 20. Such reductions are readily achievable with mature virtualization techniques: servers originally set up in support of individual applications are pooled for large gains in capacity utilization. The payback from such efforts offers a return on investment of more than 50 percent, with a break-even point of less than a year. Whether the Army needs five years to achieve this consolidation should be examined in view of the president’s memorandum of September 14, 2010, which requires rapid reductions in information technology costs.

Although the Army Program Executive Officer Enterprise Information Systems (PEO EIS) will promote the adoption of the cloud technologies, it is not clear how this can be accomplished. Migration to cloud computing calls for the education of an entire generation of Army information technology personnel. APC2 represents a reorientation of the ways in which the Army acquires and operates networks. Migration toward cloud computing is largely accepted now as the future direction of information technology by the Marine Corps, which has already achieved major savings from server consolidation and streamlining of applications and should be used as an example. The adoption of cloud computing will not be achieved primarily through the PEOs, who are largely acquisition executives, but through education of Army military and civilian executives, starting with general officers and with senior executive service personnel.

The awarding of contracts to use commercial computing capacity and to acquire containerized data centers is a good idea. However, the price tag for acquiring these data centers is out of range of the planned spending levels. Modular data centers fully configured to military specifications for power, air conditioning, security and failover capability almost certainly are unaffordable.

APC2 will use pay-for-use private cloud capacity instead of acquiring equipment and paying separately for consulting services. Commercially operated private clouds may have adequate security to run low-risk business applications. Unfortunately, all communications would depend on the Internet, which is vulnerable. From the standpoint of cyberwarfare, it is unlikely that commercial private clouds can meet the demanding security requirements for military applications. Therefore, APC2—based on commercial services—cannot be seen by the Army as a prototype for pursuing its ultimate cloud goals.

APC2 will employ best-of-breed, commercially available services under short-term contracts. Best-of-breed clouds are a good requirement, except that nearly every large cloud provider seeks a near-permanent hold on its customers. Whatever applications are placed on a private cloud will eventually have to be moved into a Defense Department enterprise environment under the ultimate control of U.S. Cyber Command. Without such coordination, APC2 choices will be limited.

Contractors will own and operate all facilities, including all hardware and software provisioning. Their responsibilities include assurance of network connectivity; application migration; security assurance; provision of virtual operating environments; capacity planning and forecasting/trending for growth; and configuration and management of customized servers, storage, security and networking devices. Contractors also hold responsibility for disaster recovery and business continuity planning and execution of services; migration planning, scheduling, coordination and implementation; support for continuity of operations; system administration and monitoring services; network uptime and network availability guarantee; vulnerability and incident management; and access identification and authentication. They also must oversee the following areas: service desk and service request management; incident management; problem management; change management; release management and configuration management.

Lastly, Attachment 11 to the APC2 statement of work sets the maximum recovery time objective at four hours. Perhaps that is tolerable because APC2 would handle only low-priority applications; from a cyber operations standpoint, however, such delays are not tolerable. Only highly redundant, multiple data centers can deliver 99.9999 percent failover capability. Whether any commercial contractor can achieve that within the budget limitations is not clear.
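
To make the contrast concrete, the following sketch computes the annual downtime permitted at several availability levels; a single four-hour recovery already consumes far more than the yearly budget implied by availability figures in the "nines."

```python
# Sketch: annual downtime allowed at several availability levels, compared
# with a four-hour recovery objective. Simple arithmetic only.
SECONDS_PER_YEAR = 365 * 24 * 3600

for availability in (0.999, 0.9999, 0.99999, 0.999999):
    downtime_minutes = (1 - availability) * SECONDS_PER_YEAR / 60
    print(f"{availability:.4%} availability allows {downtime_minutes:8.1f} minutes of downtime per year")

# A single four-hour outage is 240 minutes, more than four times the
# annual budget at 99.99 percent availability (about 53 minutes).
print(f"Four-hour recovery = {4 * 60} minutes")
```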

The Army is handing over to an APC2 contractor, in addition to hardware and software operations, an all-inclusive list of systems management functions. The operational roles of the Army’s information technology personnel are not visible. It is not clear how the Army can remain fully accountable for the delivery of computing performance and for the conformity with demanding cybersecurity requirements. Whether any contractor can deliver everything that is required within an affordable pay-as-you-use pricing structure is questionable.

Though the commitment to proceed with cloud computing is long overdue and highly commendable, how the Army will migrate to cloud computing remains unresolved.

SUMMARY
A contractor role that provides the Army mostly with reports and status checklists, without direct operational oversight, is inconsistent with the goal of making cyber operations an integral part of information warfare, which requires making them organic to Defense Department components. The way in which the request for proposals is written assumes that cloud computing can be handled as a back-office acquisition to be outsourced. That may not be the way in which the Defense Department can proceed.



Corruption of Internet Routing Tables

The rapid growth and fragmentation of Internet routing tables is one of the most significant threats to the integrity of Internet transmissions.

In one incident, about 15% of the world’s Internet traffic was redirected through a set of servers owned by China Telecom. Popular websites such as dell.com, cnn.com and amazon.de were re-routed through Chinese networks before reaching their destinations. The condition lasted for about 18 minutes. The cause was a prefix hijack by one or more routers. Whether it was intentional is unknown, but such routing accidents are all too common.

Routers tell packets of data which way to go. Organizations run private networks between their various locations. When an e-mail is sent from one private network to another, the router “decides” that those packets should not go out over the Internet but should instead travel within the corporate private network. An e-mail sent by the same person to a potential customer, however, would be sent out via the Internet. In order to know where to send things, routers maintain data about other networks, known as “routing tables.” If a routing table contains incorrect information, misrouting will occur.
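
The following is a minimal sketch of that lookup logic, using Python's standard ipaddress module and made-up prefixes and next hops: the routing table is consulted and the most specific (longest) matching prefix determines where the packet goes.

```python
# Minimal sketch (made-up prefixes and next hops): a routing table as a list
# of (prefix, next_hop) pairs, with lookup by longest matching prefix.
import ipaddress

routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"), "internet-gateway"),      # default route
    (ipaddress.ip_network("10.0.0.0/8"), "corporate-private-wan"),
    (ipaddress.ip_network("10.20.0.0/16"), "branch-office-link"),
]

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific prefix containing destination."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    best_net, best_hop = max(matches, key=lambda entry: entry[0].prefixlen)
    return best_hop

print(next_hop("10.20.3.7"))     # branch-office-link   (most specific match)
print(next_hop("10.9.1.1"))      # corporate-private-wan
print(next_hop("198.51.100.5"))  # internet-gateway     (only the default matches)
```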

Experts consider this rapid growth and fragmentation of core routing tables to be one of the most significant threats to the long-term stability and scalability of the Internet.

It is the Border Gateway Protocol (BGP) that decides where to forward IP packets to ensure they reach their correct destination network. The BGP table, which can be found on all Internet routers, contains all of the network "prefixes" – the IP address blocks assigned to any given network – active on the Internet at any given time. Over the years, as Internet usage has grown exponentially and the number of organizations coming online has increased, the number of networks advertised through BGP has swollen dramatically. In the last five years, it has more than doubled to almost 350,000 today. The number of routing table entries could hit two million in the next 10 years.
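
As a rough check of the growth rates implied by these figures (the rates are derived here for illustration and are not quoted from any source):

```python
# Rough check of the growth rates implied by the figures cited above.
current_prefixes = 350_000
doubling_years = 5
past_rate = 2 ** (1 / doubling_years) - 1          # "more than doubled in five years"
print(f"Implied recent growth: about {past_rate:.0%} per year")

projected_prefixes = 2_000_000
horizon_years = 10
future_rate = (projected_prefixes / current_prefixes) ** (1 / horizon_years) - 1
print(f"Growth needed to reach two million in ten years: about {future_rate:.0%} per year")
```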

The danger is that, although BGP is the de facto protocol for inter-domain routing on the Internet, routing occurs without any check of whether the originator of a route is authorized to announce it. The global routing system is made up of autonomous systems (AS). Each autonomous system decides, unilaterally and even arbitrarily, to trust everything it hears from any other AS, to use that information without validation, and to pass it along to its other peers. This is often called “routing by rumor.”
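
The following sketch, with invented prefixes and autonomous system numbers, shows how unvalidated announcements combine with longest-prefix matching: a bogus, more-specific announcement is accepted without question and immediately attracts the traffic.

```python
# Minimal sketch (invented prefixes and AS numbers): BGP-style "routing by
# rumor". Announcements are accepted without any check that the originating
# AS is authorized, and a more-specific bogus prefix wins the lookup.
import ipaddress

bgp_table = {}   # prefix -> origin AS, as learned from peers

def accept_announcement(prefix: str, origin_as: int) -> None:
    """Install whatever a peer announces -- no validation of the origin."""
    bgp_table[ipaddress.ip_network(prefix)] = origin_as

def best_origin(destination: str) -> int:
    addr = ipaddress.ip_address(destination)
    candidates = [(net, asn) for net, asn in bgp_table.items() if addr in net]
    return max(candidates, key=lambda entry: entry[0].prefixlen)[1]

accept_announcement("203.0.113.0/24", origin_as=64500)    # legitimate origin
print(best_origin("203.0.113.10"))                        # 64500

# A hijacker announces two more-specific halves of the same block ...
accept_announcement("203.0.113.0/25", origin_as=64666)
accept_announcement("203.0.113.128/25", origin_as=64666)
print(best_origin("203.0.113.10"))                        # 64666 -- traffic diverted
```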

Efforts are under way to secure BGP-based routing. The IETF has chartered a working group that is developing the Resource Public Key Infrastructure (RPKI), which provides authentication of who is authorized to originate a route to a given block of addresses.
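
The following is a minimal, non-cryptographic sketch of the origin-validation idea, with invented data; actual RPKI validation relies on signed objects and maximum prefix lengths, which are omitted here.

```python
# Minimal sketch (invented data, cryptography and max-length rules omitted):
# RPKI-style origin validation. Before a route is accepted, the announced
# prefix/origin pair is checked against published Route Origin Authorizations.
import ipaddress

# Route Origin Authorizations: which AS may originate routes for which block.
roas = [
    (ipaddress.ip_network("203.0.113.0/24"), 64500),
]

def origin_state(prefix: str, origin_as: int) -> str:
    net = ipaddress.ip_network(prefix)
    covering = [(roa_net, asn) for roa_net, asn in roas if net.subnet_of(roa_net)]
    if not covering:
        return "not-found"   # no ROA covers this prefix
    if any(asn == origin_as for _, asn in covering):
        return "valid"
    return "invalid"         # covered by a ROA, but wrong origin AS

print(origin_state("203.0.113.0/24", 64500))   # valid
print(origin_state("203.0.113.0/25", 64666))   # invalid -- would-be hijack rejected
print(origin_state("198.51.100.0/24", 64500))  # not-found
```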

SUMMARY
The authentication of inputs to BGP tables is not merely a matter of changing standards. It will influence how router hardware will have to function and how messages with BGP instructions are distributed and secured.

NMCI Economics for the Next Five Years

As of March 2008, NMCI included more than 363,000 computers, serving more than 707,000 Sailors, Marines and civilians at 620 locations in the continental United States, Hawaii and Japan, making it the largest internal computer network in the world.* On September 30, 2010, the NMCI contract ended and the new Continuity of Services Contract (COSC) began. Under the COSC, the Navy retains the same scope of NMCI services with HP, but the network becomes a government-owned, contractor-supported, managed services environment. The cost of COSC is $3.3 billion over five years, plus $1.788 billion for the Navy to buy intellectual property from HP and $1.6 billion to buy the already installed equipment from HP (most of it originally acquired from Dell).

The total Navy/MC cost for continuing NMCI, in an as-is format, for another five years is then $6.7 billion.**

The total Navy/MC costs will then average $3,691 per seat per year, unless the number of seats increases. This does not include whatever the Navy may have to spend in the future to upgrade equipment now in place.
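
As a quick check of this arithmetic (a sketch using only the cost components and seat count cited above; the per-seat average uses the rounded $6.7 billion total):

```python
# Quick check of the per-seat figure from the cost components cited above.
cosc_services = 3.3e9            # five-year COSC services contract
intellectual_property = 1.788e9  # purchase of intellectual property from HP
installed_equipment = 1.6e9      # purchase of installed equipment

total_five_year = cosc_services + intellectual_property + installed_equipment
print(f"Five-year total: ${total_five_year / 1e9:.1f} billion")   # rounds to $6.7 billion

seats = 363_000
years = 5
per_seat_per_year = 6.7e9 / years / seats   # using the rounded $6.7 billion total
print(f"Average per seat per year: ${per_seat_per_year:,.0f}")    # about $3,691
```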

Although the accomplishments of NMCI in consolidating applications and improving security are very good, there is a question whether the projected five-year NMCI costs are reasonable in view of current budget limitations. Can the Navy and the MC make further reductions in the costs of COSC over the next five years?

A Total Cost of Ownership model that examines the savings from a virtualized and mostly thin-client environment for 400,000 computers suggests the approximation shown in the Figure below. Neither cost estimate includes contractor and Navy/MC overhead or the expense of security that meets evolving DoD standards:***



The total potential estimated savings are then:



SUMMARY
With the Navy in control of the computer assets and of the intellectual property, there is an opportunity to make a number of further cost reductions over the coming five years. How much of the savings can be realized depends on the provisions of the fixed-price COSC contract with HP as well as on the organization of the NMCI network.

It is noteworthy that a large share of the NMCI costs is embedded in the wide distribution of subcontracts to small businesses. NMCI has exceeded the minimum 40 percent small-business objective set for the program, including 5 percent for small disadvantaged businesses, 5 percent for women-owned small businesses and nearly 1.5 percent for HUBZone small businesses. The 40 percent set-aside includes the participation of lower-tier small businesses under HP's large business partners, who define small-business utilization within their subcontracting initiatives.**** The presence of a large number of largely local subcontracts represents a structural risk to any attempt to cut costs.


* http://en.wikipedia.org/wiki/Navy/Marine_Corps_Intranet
** Shachtman, N., HP Holds Navy Network ‘Hostage’ for $3.3 Billion, Wired, August 31, 2010
*** http://roitco.vmware.com/vmw/SummaryAnalysis/Index
**** http://h10134.www1.hp.com/sites/nmci/smallbusiness/