
Managing DoD IT Security in a Cloud

Moving DoD applications to commercial cloud services that operate outside of the Department of Defense perimeter will give rise to concerns about the adequacy of security assurance. Presently DoD data centers rely on network perimeter barriers that are supposed to exclude transactions from outsiders. Whether these barriers are effective, in view of hundreds of DoD data centers, is arguable. Contractors manage many DoD data centers and are staffed by personnel who are neither military nor civilian employees.

Nevertheless, the prospect of transferring computing services to an external commercial firm will have to be subjected to stringent security rules. A security incident that might be overlooked inside DoD could result in a revocation of security certification in the case of a cloud services provider.

DoD CIOs recognize that cloud computing offers cost savings, flexible capacity management and failover features that are advantages compared with the current DISA data centers. The CIOs are now asking whether the externalization of a workload to computing clouds will degrade security. Will the auditors reject commercial cloud computing because its security cannot be proven?

DoD data centers achieve security by locking up server farms as well as associated electric power inside a physical enclave. Software controls are installed that include:
- Perimeter firewalls;
- Demilitarized zones (DMZ) for isolating incoming transactions;
- Network segmentation to reduce risks;
- Intrusion detection devices and software for monitoring compliance with security policies.

At present there are hundreds of firms selling computer hardware appliances and software packages for data center security. The problem with such devices is not only their high cost. Much effort is expended in integration and testing of servers that support individual applications. That adds to the overhead of maintaining hardware/software configurations for separate applications because the workload in the data center is not pooled. As security threats rise, data center management keeps adding separate security management devices, thus increasing not only operating costs but also the delays that are incurred as transactions snake their way through multiple security barriers.

The accumulation of various security measures and devices increases the fragility of systems and adds to potential vulnerabilities. Each of the DoD data centers will ultimately end up with security protection measures that are unique in how they are implemented. Therefore they are not amenable to coordinated oversight. It is this variety that prompted the Commander of USCYBERCOM, Gen. Keith Alexander, to state "We have no situational awareness ... key defense IT systems remain exposed to remote sabotage."

In cloud computing the providers of services gain from the efficiencies of virtualization. Virtual machines from multiple organizations are co-located on physical resources but without any crosstalk that can jeopardize security. Virtualization is therefore the key technology that enables the migration of applications into a cloud environment where security is provided mostly through the hypervisor that controls separate virtual machines.  A third-party security appliance can be connected to the hypervisor. In this way consistent security services can be provided to every virtual machine even if they use different operating systems.

One must stop viewing protection of applications at the data center or server level as the basis for achieving security. Instead, we have to view each individual virtual computer, with its own operating system and its own application, as fully equipped to benefit from pooled security services.

A data center may house hundreds and even thousands of virtual computers. Security in a cloud can be achieved by protecting virtual computers through the hypervisor on which they reside. In this way every virtual computer can be assigned policies that carry its protection safeguards as well as its security criteria (such as the grant of access privileges). For instance, when a virtual machine is moved from a DISA data center to a cloud, the security of the relocated virtual machine will not be compromised. Multi-tenancy of applications from diverse sources is now feasible since the cloud can run diverse applications in separate security enclosures, each with its own customized security policies.
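
To make this concrete, here is a minimal sketch of a security policy that travels with the virtual machine rather than with the data center that hosts it. The class and field names are hypothetical illustrations, not the API of any particular hypervisor product.

```python
# A minimal sketch of a security policy that travels with the virtual machine
# rather than with the data center that hosts it. The class and field names
# are hypothetical illustrations, not the API of any particular hypervisor.
from dataclasses import dataclass

@dataclass
class SecurityPolicy:
    allowed_roles: set          # who may use the application inside this VM
    firewall_rules: list        # per-VM packet filtering rules
    encryption_required: bool   # whether traffic to the VM must be encrypted

@dataclass
class VirtualMachine:
    name: str
    operating_system: str
    policy: SecurityPolicy      # the policy is part of the VM definition itself

class Hypervisor:
    """Pooled security service: enforces each VM's own policy on any host."""
    def __init__(self, site: str):
        self.site = site
        self.vms = []

    def admit(self, vm: VirtualMachine):
        # The same checks run whether the host is a DISA center or a commercial
        # cloud, so relocating the VM does not weaken its protection.
        self.vms.append(vm)

    def authorize(self, vm: VirtualMachine, user_role: str) -> bool:
        return user_role in vm.policy.allowed_roles

policy = SecurityPolicy({"logistics_officer"}, ["deny all inbound except 443"], True)
vm = VirtualMachine("supply-app-01", "Linux", policy)
for host in (Hypervisor("DISA data center"), Hypervisor("commercial cloud")):
    host.admit(vm)
    print(host.site, host.authorize(vm, "logistics_officer"))   # True at both sites
```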

One of the characteristics of cloud computing is its offering of self-service access to computing power. In traditional data centers administrative access to servers is usually managed by on-premises staff. Adding an application calls for an elaborate process of testing and integrating the application within its own security enclave. This process is time-consuming because it calls for alignment with diverse settings.

In cloud computing the addition of a new application is streamlined. Integration with security measures can be instant and seamless because the hypervisor already supports most of the security services. If a virtual computer can port its own security when moving from one cloud to another, the migration effort can be reduced.

SUMMARY
Security services can be pooled and standardized in a cloud environment to support a large number of virtual machines. Such pooled services can be managed to give DoD much improved shared security awareness.

The management and monitoring of enterprise-wide security will still remain a demanding task. However, as compared with the current diversity in security methods, the transfer of applications into the cloud environment will reduce costs and simplify the administration of security.

Whether DoD can rapidly implement its own private cloud, or whether it will have to rely on commercial cloud providers, is a budgeting as well as a timing issue. Given the current funding limitations and a shortage of qualified talent, DoD could rely on commercial firms for most cloud computing services while retaining direct oversight over security. This could be accomplished by managing all security appliances and policies from DoD Network Control Centers that would be staffed by DoD personnel.

Network Situational Awareness

Businesses today are tasked with managing hundreds of mainframe computers and ensuring they remain available, secure and at their proper configurations. At the same time the management of personal computers, servers, laptops and smart phones grows even more complex. Organizations are faced with higher maintenance costs and greater risk from security threats.

Software is now available with built-in intelligence that can identify which devices are not in compliance with enterprise policies and recommend security fixes and timely software updates for hundreds of thousands of machines in seconds. This software automates the most labor-intensive tasks for complex global networks, saving time, labor and expense.

The Total Network Awareness software offers real-time visibility and control for globally distributed computing devices. It provides a single management platform that gives organizations total network visibility, control and automation across every computing device in order to manage critical applications over the entire systems lifecycle. It identifies operational vulnerabilities, opportunities for greater energy efficiency and gaps in security compliance.

Here are some of the functions that Total Network Awareness can offer:
1. Distributes and manages a client's anti-virus, anti-malware, end point firewalls and network access control software.
2. Offers a single view of thousands of devices running on the entire network and gets real-time reporting of operational status.
3. Manages power consumption by being able to automatically configure and shut down devices when inactive. Monitors printing usage to reduce costs.
4. Performs asset discovery, software and license management, deployment of Operating System upgrades and patches.
5. Delivers continuous enforcement of security policies for both desktop and roaming computers, regardless of network connection status.
6. Identifies rogue assets that enter the network and takes steps to eliminate them or otherwise locate them for remediation or removal.
7. Performs security patch management and applies security updates for major operating systems and common commercially available applications.
8. Defines and assesses client compliance with security configuration baselines (a sketch of this idea follows the list).
9. Complies with the National Institute of Standards and Technology (NIST) Security Content Automation Protocol (SCAP) certification.
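
As an illustration of item 8, here is a minimal sketch of a compliance check that compares each device's reported settings against a security baseline and flags deviations. The device records and baseline values are invented for the example; this is not the interface of any particular product.

```python
# A minimal sketch of baseline compliance assessment: compare each device's
# reported settings against a security baseline and flag deviations.
# Device data and baseline values are invented for illustration.
baseline = {
    "antivirus_signature_age_days": 1,   # signatures no older than one day
    "firewall_enabled": True,
    "os_patch_level": "2010-09",
}

devices = [
    {"name": "laptop-0013", "antivirus_signature_age_days": 0,
     "firewall_enabled": True,  "os_patch_level": "2010-09"},
    {"name": "server-0421", "antivirus_signature_age_days": 9,
     "firewall_enabled": False, "os_patch_level": "2010-06"},
]

def assess(device: dict) -> list:
    """Return the list of baseline checks that the device fails."""
    failures = []
    if device["antivirus_signature_age_days"] > baseline["antivirus_signature_age_days"]:
        failures.append("anti-virus signatures out of date")
    if device["firewall_enabled"] != baseline["firewall_enabled"]:
        failures.append("endpoint firewall disabled")
    if device["os_patch_level"] < baseline["os_patch_level"]:
        failures.append("operating system patches missing")
    return failures

for d in devices:
    problems = assess(d)
    status = "COMPLIANT" if not problems else "NON-COMPLIANT: " + "; ".join(problems)
    print(d["name"], status)
```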

SUMMARY
With over ten million devices connected to the DoD network it is necessary to have complete visibility of the status of every device. Network Situational Awareness software is now available from a number of vendors, though its functionality and performance vary. The decision on how to architect the management and control of DoD networks will be one of the highest-priority decisions to be made because it will set the direction of systems deployment for decades to come.

Because the installation of Network Situational Awareness software calls for a long-term investment and requires the deployment of specialized manpower, the ultimate objective for DoD should be to make such installations in only a small number of redundant Network Control Centers.

The Future of Virtualization

According to data from International Data Corporation (IDC) an increasing number of applications are now being deployed on virtual servers. That is a clear indication that operating systems such as Windows or Linux are no longer as important as they were. In a virtual environment traditional operating systems do not see the underlying hardware directly. The task of mediating access to the hardware in a data center is largely being taken over by a new layer of software: the hypervisor. Through virtualization the hypervisor takes over not only the management of processing resources but also the management of storage pools and the organization of networking resources.

Operating systems are by no means dead. But they are gradually becoming less relevant when it comes to orchestrating server hardware while providing added services such as the management of security. For legacy applications Operating Systems will still remain, but will be placed on top of the Hypervisor that will take over a number of functions previously performed by Windows.

Meanwhile, end users are slowly moving to applications that run from online services and not as a locally hosted application on “fat” clients. The end user operating system is less likely to be Windows. Increasingly, users depend on devices such as Thin Clients, Apple iPads or a variety of smart phones that do not run on Microsoft's once-ubiquitous operating system.

With the advent of a new architecture customers will be served not only from public clouds but also from private clouds, operated as a hybrid environment in which applications can run from either public or private data centers that are redundant and highly reliable. In such a setting tools will be available that allow instant portability of applications to wherever they can deliver the most economical services.

When planning the evolution to new directions of how to organize DoD enterprise computing, the following steps should be taken:

First, migrate server-side applications that rely on a legacy Windows OS (such as Microsoft Exchange). These servers run many applications today and will continue to do so for many years. They should be virtualized and moved into IaaS (infrastructure as a service) clouds to gain savings from improved hardware utilization and reductions in energy consumption.

Each of the migrated applications will continue operating with one or more standard components (such as an Oracle database or an Apache web server). Each of the applications will also run its own separate Windows OS, usually on a single physical machine, even though the applications will be virtualized. The problem with such an arrangement is that it still carries much of the same overhead as before. Making 500 physical machines virtual still requires the management of 500 virtual machines because each still has its own OS, which needs to be patched, maintained and monitored for viruses. At this stage of evolution the OS maintenance costs will not be reduced to the maximum level that is attainable.

As a second step, start migrating to a PaaS (platform as a service) cloud environment. Google App Engine and Salesforce.com are good examples of PaaS, though DoD will have to build its own unique and proprietary platform to support cyber operations. All DoD applications will run on an extensible and secure infrastructure. It will not matter whether the DoD applications are running with Windows, Red Hat, or Solaris. Instead, all applications will run either on an open VMware hypervisor or on a proprietary platform (such as Google App Engine or Salesforce.com), provided it can be made secure. In the DoD PaaS environment the operating system will remain invisible to the applications. The problem is that PaaS will ultimately require the re-writing of portions of the code, so the majority of legacy enterprise systems will not be able to take advantage of all PaaS features. However, all new DoD web applications (such as those organized as a Service Oriented Architecture, SOA) can be written with PaaS in mind in order to start realizing large reductions in development and operating costs.
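
To show what "the operating system remains invisible to the applications" can mean in practice, here is a minimal sketch of a platform-agnostic web handler written to the standard Python WSGI convention. It is only an illustration of the PaaS idea, not a DoD platform interface; any WSGI-compliant platform could host it on whatever OS or hypervisor it chooses.

```python
# A minimal sketch of a PaaS-style application: the code implements only the
# platform's calling convention (here the standard WSGI interface) and makes
# no operating-system calls, so the platform decides where and on what OS it
# runs. Illustrative only; not a DoD platform API.
def application(environ, start_response):
    """A platform-agnostic web handler: no file paths, no OS services."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Status request acknowledged\n"]

# Exercise the handler directly, the way any WSGI-compliant platform would.
def demo_start_response(status, headers):
    print(status, headers)

body = application({"REQUEST_METHOD": "GET", "PATH_INFO": "/status"}, demo_start_response)
print(b"".join(body))
```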

Summary
Organizations should start migrating from client-server designs to IaaS cloud computing and then to the PaaS cloud. Along the way short-term cost reductions will become available to fund the entire migration. It will take laying out at least a ten-year evolutionary path to accomplish the transformation of DoD computing from where it is now to where it can operate in a cloud environment. The burdens of the existing legacy applications, as well as the technical and managerial risks, are too large to overcome in a single leap to an architecture that is cheaper, more robust and secure enough to cope with cyber warfare.

Authentication of Access Credentials

The authentication of access identity is the key to all information security. Without identification of an individual’s access to DoD files and without assurance that there is an appropriate authorization for accessing records, it is not possible to protect DoD in the age of cyber warfare.

The problem with identity authentication (see draft of DoDI 8520) is its complexity. The following issues need further resolution:

1. There are eight “Sensitivity Levels” for establishing Authentication Credential Certificates. There are seven “Entity Environments” into which an Authentication Credential Certificate must fit. The combinatorial number of public and government organizations that can offer acceptable credentials is very large.

2. The number of organizations that offer Authentication Credential Certificates accepted by DoD is over fifty. Each of these offers two or more levels of Authentication Credentials. Therefore, a DoD Component or Agency must be ready to grant access to well over 20,000 applications from access entries that are authenticated and certified by a combination of well over 5,000 possible authentication sources (see the arithmetic sketch after this list). Without a central authority to certify authentication it is not possible to track whether all authentications are valid.

3. In each case a Component or Agency must screen incoming requests to determine whether the originating person has conformed to the elaborate certification rules that are enumerated in several non-DoD publications (which are not completely consistent in detail). Incoming requests must be further judged as to whether the circumstances under which access privileges are requested (e.g. situational awareness) apply to a particular log-on session.
  
4. Delegating access authentication to Partner Networks leaves open a gap as to the processes for their certification. Who certifies whether a partner network conforms to the DoD-defined “sensitivity levels” and the DoD-specified “entity environments” remains unknown.

5. Executing the proposed policy provisions represents an administrative burden, especially for access from smart phones (which will represent the majority of accesses in the future) and from off-site locations. Getting generic authorizations from a component Designated Approval Authority (DAA) is time consuming. Particularly onerous would be the process of revoking such privileges when an employee is terminated or when the location or equipment configuration changes.

6. The policy singles out non-Microsoft operating systems for added DAA approval of identity credentials. To single out non-Windows operating systems for an added DAA action is unworkable, particularly as virtual computing (through hypervisors) takes over many of the functions of the Microsoft operating system.

7. Identity Authentication Under Non-standard Conditions or During Contingency Operations must be obtained in real time (or within a matter of a few hours). The Instructions are vague as to what it takes to issue new identity credentials or to replace lost or stolen ones. The statement that “Temporary procedures should incorporate best security practices” is unacceptable because under conditions of cyber warfare this is exactly the condition under which an attacker would be scanning DoD systems.

8. The provision that information system administrators and network administrators will establish, publish, and execute procedures for identity authentication to systems under non-standard conditions creates a by-pass to these Instructions and cannot be endorsed as a consistent DoD policy.

9. Offering biometrics as only one of the options in authentication is inconclusive. Biometric identification should be a part of a DoD policy. Biometrics must be specified for any implementation. Offering biometrics only as an option, at the discretion of an information systems owner, weakens the execution of Policy.

10. These Instructions do not mention access to social computing that uses NIPRNET. Since social media also use access identification codes, it is not clear under what category such authentication codes will be classified and stored.

11. Waiving compliance of policy on Authentication Credentials is contrary to DoD’s current emphasis to protect its cyber operations. Obtaining authentication credentials from authorized sources is rapid and inexpensive. This policy should therefore mandate full compliance and not offer waivers.

12. The DoD identity authentication policy does not address interoperability of credentials with organizations such as the FBI, CIA, DHS or FEMA. For instance, the FBI has just awarded a contract to BAE Systems (a British company with HQ in the UK) to provide certification and accreditation services to ensure the confidentiality and privacy of FBI computer systems. How BAE will make information security risk assessments technically interchangeable with DoD remains to be seen.
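
To put the scale described in point 2 in perspective, here is a back-of-envelope calculation using only the figures quoted above. One plausible reading of "well over 5,000 possible authentication sources" is the product of the sensitivity levels, entity environments, credential providers and credential levels; this is my own illustrative arithmetic, not a computation taken from the draft Instruction itself.

```python
# Illustrative arithmetic only, using the figures quoted in points 1 and 2
# above; one plausible reading of "well over 5,000 possible authentication
# sources" is the product of these four factors.
sensitivity_levels = 8        # "Sensitivity Levels" in the draft Instruction
entity_environments = 7       # "Entity Environments"
credential_providers = 50     # organizations whose certificates DoD accepts (at least)
levels_per_provider = 2       # each provider offers two or more credential levels

combinations = (sensitivity_levels * entity_environments
                * credential_providers * levels_per_provider)
print(combinations)                    # 5600 -- "well over 5,000"

applications = 20_000                  # DoD applications that must honor the credentials
print(combinations * applications)     # 112,000,000 possible credential/application pairings
```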

SUMMARY

NOW: There are eight different keys produced by over fifty locksmiths to fit into seven different doors that sometimes change the location of the lock. The locksmiths do not always make identical keys. Once the keys are issued, nobody keeps track of who has been issued what kind of key. When a key is lost, it takes a while to invalidate it, and burglars will seek out such keys. Once a high-quality key is issued, it can be used to get through all of the low-quality doors, even if some of these doors lead into forbidden rooms.

FUTURE: It would be much easier to issue only one (or maybe two) keys to everyone. All intelligence would be in the door. The door will not open unless a person is instantly authorized to enter.  It is now feasible to  make doors intelligent.

The proposed DoD policy for identity authorization reflects an inability to impose a consistent application architecture across DoD applications. A much simpler approach can place the authority for accepting or rejecting access privileges on the application itself rather than on the grant of generic access privileges to designated individuals. Such an approach would greatly simplify access authorizations, but would require the placement of access permission into the application server or into the hypervisor.
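
Here is a minimal sketch of that alternative: the application (or the hypervisor hosting it) evaluates each log-on against its own rules instead of honoring generic credentials granted in advance. The roles, sensitivity levels and environments are hypothetical illustrations, not DoD-defined values.

```python
# A minimal sketch of application-level access decisions: the application
# evaluates every log-on against its own rules. Roles, levels and
# environments here are invented for illustration.
APPLICATION_RULES = {
    "personnel-records": {
        "required_role": "hr_officer",
        "max_sensitivity_level": 3,     # this application accepts levels 1-3
        "allowed_environments": {"niprnet", "vpn"},
    },
}

def grant_access(app: str, role: str, sensitivity_level: int, environment: str) -> bool:
    """Access decision made at the application, evaluated at every log-on."""
    rules = APPLICATION_RULES[app]
    return (role == rules["required_role"]
            and sensitivity_level <= rules["max_sensitivity_level"]
            and environment in rules["allowed_environments"])

print(grant_access("personnel-records", "hr_officer", 2, "niprnet"))   # True
print(grant_access("personnel-records", "hr_officer", 5, "niprnet"))   # False
print(grant_access("personnel-records", "contractor", 2, "internet"))  # False
```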

How to Deal with Botnets

A Botnet is a collection of software agents, or robots, that run autonomously and automatically. The term is most commonly associated with malicious software and is derived from the Czech author Karel Čapek’s word “robot,” which was introduced in his 1920 play R.U.R. http://www.gutenberg.org/etext/13083 .

Botnets are operated by a “Botnet Herder” who acquires bot-creation software from one of many commercially advertised sources as well as from clandestinely developed proprietary sources, such as from http://www.newfreedownloads.com/find/bot.html . Bots are often ‘rented’ as a service to third parties for sending out spam messages or for attacking computers for theft of passwords or credit card information.

To be successful a botnet attack starts with the “botnet launch” phase, which is always based on the deployment of a zero-day virus (the virus signature is not yet contained in any of the virus detection software or in firewall screens). The attack target is usually the Microsoft operating system or the Microsoft browser, although Firefox is increasingly targeted as well. Lately, the attack mode has shifted to the exploitation of less robust social computing applications such as Facebook and particularly Twitter, which are propagation vectors of infected messages. It used to be true that replacing the Microsoft OS with Linux reduced risks from botnets, but that is not the case any more because of the proliferation of social media as a malware carrier.

For instance, on September 10, 2010 ealert@gmu.edu reported that an e-mail message would contain the W32.Imsolk.B@MM malware. The recipient of this e-mail would then be asked to download a document (“Here you have, just for you”) alleged to be a PDF file, but which is in fact a disguised attack.

An attack can be launched from anywhere in the world and will persist for not more than a few days, and often only a few hours, which is sufficient time before countermeasures are found and anti-virus fixes can be installed. Any attack will consist of multiple (often hundreds of) variants of the bot because the attackers can never be sure which version will be successful.

We can assume that NSA will (or can) attempt to intercept the botnet launch server by attacking it through jamming or with other countermeasures. I have no idea how successful that can be, though I am skeptical whether that can work because the attacker can instantly change the herder’s site. The herder can also use “anonymous remailers” or similar camouflage to disguise its origin. In the case of DoD I would certainly not rely on any NSA intercepts of botnet sources as a defensive measure.

After the “botnet launch” phase succeeds, and the attacker has created a bot that self-replicates or is triggered by an unsuspecting user to launch itself, the herder can proceed with triggering the bots to perform their missions, though herder intervention is not needed in the case of denial-of-service attacks. Often the time between the “botnet launch” phase and the “botnet actuation” phase can be hours or days. A successful botnet launch may succeed in planting itself in a few hundred thousand computers. There are over 500 million computers connected to the Internet. It takes less than 0.1% of machines to be initially infected for a very successful attack.

From a DoD standpoint you should be aware that you could already have “sleeping” bots embedded in your systems, ready to be triggered by a herder, particularly if this creates a tactical advantage. Particularly damaging can be a sleeping bot that has implanted spyware that generates only a few transactions, which cannot be detected. How to ferret out such dormant cases is a lengthy subject, to be discussed when we have time. The source of the “sleepers” can be insiders (accidental or otherwise). As long as you do not have a complete forensic record of all transactions, including those of contractors, you will always be exposed to an insider risk. This is another situation that warrants further discussion.

After the “botnet launch” phase is done, the most frequent use of botnets is to distribute spam, mostly in the form of spammed e-mail (70% of e-mail now is spam). Spam can drive advertising but I would not discount the likelihood that it is used in cyber warfare as a probe to collect intelligence. I would always take the appearance of any spam whatsoever inside my secure systems as evidence that I have been compromised. Furthermore, I would mandate that every single spam case inside the secure network would have to be reported and then forensically analyzed.

The greatest danger from bots arises from their ability to receive commands from a herder to launch a denial-of-service attack against a designated remote target, such as DoD routers, switches or servers with an IP address that can be detected (that is easy). Since bots can infect tens of thousands of machines, thus creating a specific “botnet,” the herder can induce the bots to generate huge volumes of traffic, such as flooding DoD with spam, e-mail or denial-of-service transactions.

For instance, an estimated 12 million computers were infected with the “Conficker” botnet in 2008, with a capacity to generate up to 10 billion transactions/day. Since its detection, “Conficker” has modified itself into several hard-to-detect variants http://www.microsoft.com/security/worms/conficker.aspx . “Conficker” variations are already inside DoD and pop up more often than anyone is ready to admit. I have a long table of successful bots, each with a unique name and unique signature (Mariposa – 12 million bots, Kraken – 9 billion transactions/day capacity, etc.).

Preventive Measures
When I was at NASA I received a botnet attack on one of the thirteen Internet root servers (one of the most guarded servers in the entire global Internet) housed by NASA as a most trusted custodian. All we could do was shut down the root server for an hour and reboot back to an assured configuration. The botnet attack stopped and went elsewhere looking for other targets. My only concern was that even a failed attack could have been nothing but an intelligence-gathering op.

If a machine receives a denial-of-service attack from a botnet, there are few choices of what can be done. Given the general geographic dispersal of botnets and their spread over millions of Internet sites, it is impossible to identify a pattern of offending machines. The sheer volume of IP addresses does not lend itself to the filtering of individual cases. Administrators can configure newer firewall equipment to take protective actions against a botnet attack by using information obtained from passive fingerprinting. The problem is that the number of virus signatures against which firewalls can guard is enormous. Symantec and McAfee are reported to be tracking close to a million intrusion-detection signatures and update them daily http://securityresponse.symantec.com/business/resources/articles/article.jsp?aid=20090511_symc_malicious_code_activity_spiked_in_2008 . There are preventive measures utilizing attack rate-based intrusion prevention systems implemented with specialized hardware. You will be besieged with vendors offering you all sorts of hardware and software fixes.
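
A minimal sketch of the rate-based idea, assuming an invented threshold and window: count requests per source address over a sliding window and flag sources that exceed the limit. Real appliances do this in specialized hardware with much richer signals (passive fingerprints, signatures); this only illustrates the principle.

```python
# A minimal sketch of rate-based intrusion detection: count requests per
# source address over a sliding window and flag sources that exceed a
# threshold. The window and threshold values are invented for illustration.
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

recent = defaultdict(deque)   # source IP -> timestamps of recent requests

def is_suspected_bot(source_ip: str, now: float) -> bool:
    times = recent[source_ip]
    times.append(now)
    while times and now - times[0] > WINDOW_SECONDS:
        times.popleft()                       # drop requests outside the window
    return len(times) > MAX_REQUESTS_PER_WINDOW

# Simulated traffic: one address floods, another behaves normally.
for i in range(150):
    flooding = is_suspected_bot("203.0.113.7", now=i * 0.05)   # 20 requests/second
normal = is_suspected_bot("198.51.100.4", now=5.0)
print("203.0.113.7 flagged:", flooding)   # True
print("198.51.100.4 flagged:", normal)    # False
```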

Botnets often use free DNS hosting services such as DynDns.org http://www.dyndns.com/ , No-IP.com, and Afraid.org to point traffic to sub-domains that will then harbor the bots. Some large Internet companies (such as AT&T, Verizon, etc.) purge their domains of these sub-domains, but that is not helpful because a military cyber-attacker will bypass all that.

Several security companies such as Afferent Security Labs, Symantec, Trend Micro, FireEye, Simplicita and Damballa have announced offerings to stop botnets. While some, like Norton AntiBot, are aimed at consumers, most are aimed at protecting enterprises and the ISPs rather than the users http://en.wikipedia.org/wiki/Botnet . So far as I am concerned, such measures are band-aids and inadequate for protecting DoD's sensitive operations.

Meanwhile Sandia National Laboratories is working on how to control botnets by running huge penetration simulations. When I worked on your task force almost two years ago the key person from Sandia had by far the best grasp of the bot situation, but I have no idea what they have done with it.

An Approach
Since botnet infections come from the Internet, you must keep most of DoD completely isolated from the Internet.  There is no other choice. I do not know of a safe filter that would assure you that no bot would ever get through.

However, you can manage access to the Internet from DoD by setting up separate and completely isolated links for Internet access and particularly for social systems, as I discussed with you before.

One way of assuring security against the inevitable botnets is to allow internal network IP addresses to be accessible only through a limited number of router access points. This would force all incoming Internet traffic to obtain its Border Gateway Protocol (BGP) route addresses exclusively from a DoD-owned and operated computer, which would be under 24/7 surveillance. If a bot somehow gets through (it will, inevitably), your manned surveillance, combined with investments in expensive tools that are affordable to MDA for only a limited number of routers, should reduce this threat. Since your Internet connections will all be controlled, you can always resort to occasional “fumigation” of your systems, such as dictating a reboot to a secure configuration, even if this means losing transactions. I also favor using Linux instead of the Microsoft OS, as well as browsers such as Chrome, to reduce the attack surface offered to the most popular bot attacks. Of course, if a large share of your Internet-connected population is in the form of thin clients, your “fumigation” efforts will become simpler and less disruptive.
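
A minimal sketch of the access-point idea, with documentation-example addresses standing in for the handful of DoD-managed gateways: traffic that does not enter through an approved router is dropped and logged for forensic review.

```python
# A minimal sketch of controlled Internet access points: only traffic that
# arrives through one of a small number of managed gateway routers is
# accepted; everything else is dropped and logged for forensics.
# Addresses are documentation examples, not real DoD gateways.
APPROVED_GATEWAYS = {"192.0.2.1", "192.0.2.2", "192.0.2.3"}   # 24/7-surveilled routers

quarantine_log = []

def admit(packet: dict) -> bool:
    """Accept a packet only if it entered through an approved gateway."""
    if packet["ingress_router"] in APPROVED_GATEWAYS:
        return True
    quarantine_log.append(packet)     # every bypass attempt is recorded
    return False

print(admit({"src": "198.51.100.9", "ingress_router": "192.0.2.2"}))   # True
print(admit({"src": "203.0.113.50", "ingress_router": "203.0.113.1"})) # False
print(len(quarantine_log), "packet(s) held for forensic analysis")
```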

Forcing access to DoD exclusively through managed routing tables at your Internet access entry points represents a challenge and would call for reconfiguration of your network topology. But the result would be an increase in control over the handling of every bit of incoming traffic, including bots. If you are forced to offer “social computing” and Internet access to Google and similar sources, you have no choice but to protect these channels by enforcing isolation.

Of course all of the other necessary precautions must already be in place for your operations. You cannot trust anyone to follow a policy of never using a thumb drive on a secure machine. Your USB ports must be sealed, or continually monitored. The same applies to any removable drives. Altering any part of the removable memory, including swapping or changing configurations, can take place only under 100% independently verified surveillance. If your people wish to telecommute or work from home, they will have to use only approved devices that have been fully protected and encrypted. Under no circumstances can you allow the use of any private devices to conduct any DoD business. For instance, contractors should not be allowed to use their firm’s computers to touch any of DoD's communication points.

Summary
Consider the above a preliminary answer. I would need to know more before recommending that you set up a path to the Internet that is under surveillance by the network control center. Such a path must be completely separate from DoD secure communications. As long as you have even the slightest connection to the Internet, regardless of how small, the enemy will ultimately find it and implant bots.

I am also concerned whether your security policies are adequate. I have read most of the DoD security policies, but they are generic and not articulated in plain English as to what is prohibited and what individuals can do. The existing policies were written for lawyers and contract officers, not for people who can spend only a limited amount of time on practicing secure behavior. A simple checklist of do’s and don’ts that someone can check off and sign every quarter would go a long way to improve resistance to bot attacks.

You cannot stop bots. You can only control them by means of separate networks where you can afford accepting failures that you contain by more reliable human behavior.

Cyber Defenses and the DoD Culture

According to Air Force LTG William Lord, 85 percent of cyberoperations are in defense. That being the case, how should the Defense Department protect its network and computer assets? A 2009 RAND Corporation report on cyberdeterrence asserts “…most of the effort to defend systems is inevitably the ambit of everyday system administrators and with the reinforcement of user vigilance.” The report also states “…the nuts and bolts of cyberdefense are reasonably well understood.”

Such views encapsulate the current thinking about cyberdefense, that such activity is primarily a back office service or a compliance matter. But these views are pernicious. They accept existing systems as they are, other than advocating for improved implementation methods. RAND does not admit that the current hardware, software and networks within the Defense Department are obsolete and dysfunctional. The department continues to operate within a culture that does not acknowledge that its computer systems are not suited for the age of cyberwarfare.

Defense Department leadership appears to be viewing cyberdefense issues primarily as a matter of policy and strategy that can be fixed incrementally. That is not possible. Cyberdefense deficiencies have become deeply rooted as a result of the defective ways in which the Defense Department acquired IT over the past decades. Cyberdefense flaws are inherently enterprise-wide and are mostly not application specific.

The Defense Department has not as yet confronted what it will take to make systems and networks sufficiently secure. According to DEPSECDEF William Lynn, the department operates over 15,000 networks and over 700 data centers. The total number of named systems programs in 2009 was 2,190 (Air Force 465, Army 215, Navy 972 and Agencies 538). Each of these programs was further subdivided into subcontracts, some of which are legislatively dictated. Hardly any of the subcontracts share a common data dictionary, or data formats or software implementation codes.

The IT environment at the Defense Department is fractured. Instead of using shared and defensible infrastructure, over 50 percent of the IT budget is allocated to paying for hundreds and possibly for thousands of mini-infrastructures that operate in contractor-managed enclaves. Such proliferation is guaranteed to be incompatible and certainly not interoperable.

Over 10 percent of the total Defense Department IT budget is spent on cyberdefense to protect a huge number of vulnerability points. The increasing amount of money spent on firewalls, virus protection and other protective measures is not keeping up with the rapidly rising virulence of the attackers.

Take the case of the Navy/Marine Corps Intranet, which accounts for less than 4.8 percent of Defense Department IT spending. The NMCI contains approximately 20,500 routers and switches, which connect to 4,100 enterprise servers at four operations centers that control 50 separate server farms. Since the NMCI represents the most comprehensive security environment in the Defense Department, one can only extrapolate what could be the total number of places that need to be defended. Vulnerability points include hundreds of thousands of routers and switches, tens of thousands of servers and hundreds of server farms. There are also over six million desktops, laptops and smart phones with military, civilian, reserves and contractor personnel, each with an operating system and at least one browser that can be infected by any of the 2,000 new viruses per day. From a security assurance standpoint, such proliferation of risks makes the Defense Department fundamentally insecure.
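
To indicate how such an extrapolation might look, here is a back-of-envelope calculation of my own (not from the original sources) that scales the NMCI equipment counts by its share of Defense Department IT spending. Spending share is a crude proxy, so the results are rough illustrations of order of magnitude only.

```python
# A back-of-envelope extrapolation (mine, not from the text): if the NMCI
# represents roughly 4.8 percent of Defense Department IT spending, scaling
# its equipment counts by spending share suggests the order of magnitude of
# the department-wide attack surface.
nmci_share = 0.048
nmci_routers_and_switches = 20_500
nmci_servers = 4_100

print(round(nmci_routers_and_switches / nmci_share))   # ~427,000 routers and switches
print(round(nmci_servers / nmci_share))                # ~85,000 servers
```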

Defense Department leadership is aware that cyberoperations are important. JCS Chairman Adm. Mike Mullen said that cyberspace changes how we fight. Gen. Keith B. Alexander, the head of the Cyber Command, said that there is a mismatch between technical capabilities and our security policies.

Meanwhile, the interconnectivity of Defense Department systems is rising in importance. For instance, the Navy’s Information Dominance Corps views its information environment as being able to connect every sensor to all shooters. Information dominance makes no distinction between logistic, personnel, finance, commander or intelligence data because all of it must be available for fusing into decision-making displays. This calls for connectivity as well as real-time interoperability of millions of devices.

After decades of building isolated applications, the Defense Department has now arrived at an impasse with regard to cyberdefenses just as the demand for enterprise-wide connectivity is escalating. Unfortunately, nobody in top leadership has identified the funded program that will remedy the inherent deficiencies in cyberdefenses. Prior efforts to do that, such as the Joint Task Force for Global Network Operations (JTF-GNO) and the Joint Functional Component Command for Network Warfare (JFCC-NW) were disbanded. Right now, there are no adequate budgets in place for reducing the widely exposed “cyberattack vulnerability surface.” As yet there is no unified enterprise system design or architecture that offers cybersecurity that works across separate Defense Department components at an affordable cost.


Defense Department IT budgets are now fully mortgaged to support ongoing operations and maintenance, while most large development funds are still paying for continuation of programs that were started years ago. With regard to the concerns I’ve raised in my previous post, here are some ideas on what should be done:

The Defense Department should proceed with the rapid consolidation of its communication infrastructure to generate cash that will pay for the merger of costly applications. SECDEF Robert Gates observed correctly on August 9 that “…all of our bases, operational headquarters and defense agencies have their own IT infrastructures, processes, and applications. This decentralization results in large cumulative costs, and a patchwork of capabilities that create cyber vulnerabilities and limit our ability to capitalize on the promise of information technology.”

Defense Department communications also cannot depend on the routers and servers that are a part of the public Internet. Instead, the department should switch to computing “on the edge” that utilizes government-controlled assets. Communication costs are the largest single component of the Defense Department’s IT budget and can be reduced materially.

The Defense Department should proceed with the consolidation of its servers and pack them through virtualization into a small number of fully redundant (and instant fail-over) data centers. Greater than 50 percent savings are available in operating costs, with payback periods of less than one year. Adopting platform-as-a-service cloud technologies will make that possible. Switching to network operated computing devices (thin clients) and to open source desktop software can also produce additional large savings.

The Defense Department should complete its data standardization efforts that were started in 1992 and mandate compliance with an enterprise-wide data dictionary. It should proceed with the standardization of meta-data definitions of all Defense Department data elements. The organization for accomplishing that is already in place.

The Defense Department should mandate the acceptance of an all-encompassing systems architecture that would dictate to Program Executive Officers (PEOs) how to acquire computing services and to contractors how to build new application software. The current DoD Architecture Framework (DoDAF) as well as the OSD-published architecture directives have not been accepted by the Services and should be superseded.

From a cyberdefense standpoint, the Defense Department should set up network control centers that would apply state-of-the art monitoring techniques for complete surveillance of all suspect incoming as well as outgoing transactions. One-hundred percent end-to-end visibility of all Defense Department communications is an absolutely required capability for security assurance as well as for total information awareness.

The recent reassignment of the Network & Information Integration (NII) from the Office of the Secretary of Defense to the Defense Information Systems Agency (DISA) can be seen as an indication that a combination of policy and execution of enterprise-wide communications will be forthcoming. The Cyber Command now controls DISA. There is hope that DoD will finally have an organization that has the charter to deliver working cyberdefenses.

However, the combination of NII, DISA, NSA and the Cyber Command is insufficient. Cyberdefense inadequacies are embedded in the proliferation of applications and in the fracturing of the infrastructure. They can be found in the absence of funding to launch a rethinking of how to manage cyberdefenses in the decades to come.

A different cybersecurity culture needs to be diffused throughout the Defense Department. It will have to view cyberdefenses not as a bandage to be selectively applied to a patchwork of applications. The new cybersecurity must become an inseparable feature of every computer technology that enables our operations.

Edge Computing for DoD Clouds

Most of the existing cloud computing services are accessed over the Internet and thus rely on this unpredictable and insecure medium. For DoD the Internet will remain an unreliable platform because it adversely impacts the performance of applications and services that run on top of it while exposing transactions to breaches in security. To realize the full potential of cloud computing the Department of Defense will have to overcome the security, performance, reliability and scalability flaws of the Internet by adopting network practices that avoid the use of the Internet altogether.

The operating value of a cloud service is a function of the speed of applications, the latency of responses, uptime reliability and security assurance. All cloud offerings will be at the mercy of any Internet bottlenecks. An Internet-based Software-as-a-Service (SaaS) offering may not meet a customer’s requirements because of the rapid swapping of software routines that it requires. For these reasons it will be necessary to speed up communications to and from the cloud so that cloud computing can meet the demanding information handling requirements of DoD.

The original applications of cloud computing, such as those available over Forge.mil from DISA, concentrate on public cloud services, which are accessible over the public Internet. Such offerings provide quick economies of scale in computing and data storage. They provide flexible, pay-as-you-go benefits for testing and for application prototyping. However, the current DISA probes into cloud computing are mostly small-scale tests.

Driven by security and control needs the DoD must now start planning for the adoption of internally managed private clouds as a way of achieving the efficiencies of cloud computing. The scale of DoD’s $34 billion/year information technology costs makes the shifting to its own internal cloud infrastructure not only feasible but also affordable. By implementing cloud computing behind DoD’s firewalls it will be possible to pool and share computing resources across different applications, departments, or functions without dependence on the Internet for carrying most of its business.

The implementation of private clouds within DoD, on account of ponderous acquisition rules, will require capital investments for added computing hardware. It will require additions to its internal expertise, which is likely to be the greatest holdup.

While implementing cloud computing DoD will still remain partially dependent on the Internet because it will have to support workers at widely dispersed geographic locations where the Global Information Grid (GIG) does not reach.  The DoD private cloud will also have to be interoperable with a variety of vendor services, such as for telecommuting or for mobile communications, which are Internet based.

Consequently, almost all DoD cloud infrastructures will be in the form of a hybrid cloud. This means that DoD will have to provide for a wide range of secure applications that run across a combination of public, private as well as non-cloud environments. DoD will also have to continue running most of its high security applications using totally isolated networks, which are separated from the DoD hybrid cloud with security barriers.

Ultimately, DoD will end up operating all of its business applications and most of its war-fighting applications within its own private cloud. In this way it will realize a reduction in costs while enhancing security. The DoD hybrid approach will also have the ability to continue taking advantage of a wide range of public cloud services whenever that can be economically justified and securely extracted.

The Middle Mile
The infrastructure that supports cloud computing can be split into three links:
1. The first mile (e.g. originating infrastructure);
2. The last mile (e.g. the end user’s connectivity to the Internet, at destination);
3. The middle mile (e.g. the paths over which data travels back and forth across the Internet between the origin server and the end user).
Each of these links contributes in different ways to the performance and reliability problems of cloud networks.

First mile bottlenecks are well understood and remain entirely under the management of DoD components. Perhaps the biggest first mile challenge lies in the ability to scale the locally administered and contractor managed infrastructures, such as access gateways, network switches or redundant connections, to meet variable levels of demand. To achieve such improvements is difficult but manageable. Arranging for communications over satellites from ships at sea or in delivering computing support to mobile infantry units will require special attention.

Configuring the first mile will always be costly when dealing with the problem of how to provide capacity for occasional transaction peaks. The current approach is to provide first mile infrastructures that are underutilized as well as insufficiently redundant for fail-over in cases of component failures. Cloud computing can correct most of such deficiencies by pooling first mile resources across multiple DoD components. There will be governance problems in finding workable solutions to these challenges.

The last mile of Internet traffic is conducted over Local Area Networks (LANs) and Wide Area Networks (WANs) at multi-megabyte/sec broadband speeds. In DoD the LANs are locally managed, usually by contractors. Compliance with DoD-wide standards for how to install local circuits in a redundant mode will be mandatory. The last mile is unlikely to become a DoD cloud bottleneck provided that this is centrally directed and centrally funded.

This leaves the middle mile, which constitutes the infrastructure that comprises the public Internet as well as GIG circuits that have been subcontracted for conveyance over the Internet. The middle mile will always be a heterogeneous network that is owned by many competing firms, awarded in the form of multiple contracts from different acquisition organizations. The DoD networking contracts now cover hundreds or thousands of circuit miles and are estimated to include at least 15,000 separate networks that are often not interoperable.

The entire Internet is actually composed of 13,000 different networks each providing access to a small subset of end users. The largest of these networks accounts for only about 8% of end user access traffic. This means that the DoD Internet dedicated circuits will be ultimately tied to the performance of the Internet as a whole because transactions will have to be routed through several vendor networks. The vulnerability of this arrangement is large.  It includes tens of thousands of connection points between the first mile and the last mile as transactions hop through several links on their way to their ultimate destination. This arrangement is too fragile for the DoD enterprise to depend on in the age of cyber warfare.

Historically, DoD has invested heavily into the first mile and into the last mile. These investments were made from a wide range of funding sources, at thousands of locations. Separate contracts were used to buy different technology solutions because everything was paid out of over 2,000 individual projects plus from an untold number of local installation maintenance budgets.

The DoD’s middle mile has remained a no man’s land between the GIG, a variety of dedicated private networks (including expensive satellite links) and traffic routed over the Internet that is purchased by the Services. The funding for the largest share of the middle mile was always budgeted centrally, mostly through DISA. How much an Agency rather than a Service should spend for the DoD backbone circuits is a debatable matter. Spending for expansion as well as for new technologies to meet rising demands for capacity must be resolved in favor of completely central funding if cloud computing is accepted as the way DoD operates its networks.

The consequence of an inadequate middle mile capacity for DoD is unknown. At present there are no policies that define the standards for DoD-wide network metrics. Packet losses, service degradation, uneven performance and down times are unknown. If operations of a DoD enterprise cloud are to materialize, there must be complete end-to-end situational awareness of the metrics of every single component that makes up the DoD cloud.

DoD personnel are now accustomed to low-latency connectivity directly from their homes. They have a perception that the response-time and availability bottlenecks, which are largely middle mile problems, are a reflection of the deteriorating performance of DoD systems. While the leadership of DoD is increasingly vocal about the decisive importance of cyber operations, the military and civilian people who experience a degradation of the quality of their information technologies do not have confidence that DoD systems can perform the job that is needed. This is why cloud computing must be chosen as the best means for overcoming the current lack of credibility about DoD information technologies.

TCP as One of the Middle Mile Problems
Architected for reliability rather than efficiency, the Transmission Control Protocol (TCP), which is the Internet’s primary communications protocol, is a principal hindrance to middle mile transmission performance. TCP requires multiple round-trips between any two communicating parties to set up and tear down connections. This is especially detrimental for the performance of SaaS and PaaS applications, as these require small as well as rapid back-and-forth communications.

Long distances between communicating parties can lead to low throughput, a penalty that grows as file sizes become larger and the number of “hops” between Internet links increases. This is because TCP allows only small amounts of data to be sent at any time before pausing and waiting for an acknowledgment from the receiving end. Network latency, the time it takes a single data packet to travel across multiple links on the network, will rapidly translate into a huge delay in the case of any failure at any router on the path from origin to destination. Communication latency, which is tied to the distance between the source and the end user, will rise whenever the TCP protocol repeats retransmissions as a transaction progresses towards its destination. This is a critical issue for those considering IaaS and SaaS solutions, especially in cases where rapid and reliable responses are essential.
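
A back-of-envelope sketch of why round-trip time dominates: the classic single-connection TCP bound, in which throughput cannot exceed the window size divided by the round-trip time, shows how the same connection slows as the end points move farther apart. The 64 KB window is an assumed common default, not a measured DoD figure.

```python
# The classic single-connection TCP bound: throughput <= window / RTT.
# The 64 KB window is an assumed common default, used only to illustrate
# how round-trip time in the middle mile limits performance.

def tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on single-connection TCP throughput in megabits/second."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

WINDOW = 64 * 1024   # 64 KB receive window

for rtt_ms in (10, 50, 100, 200):   # e.g. nearby edge server vs. distant data center
    mbps = tcp_throughput_mbps(WINDOW, rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> at most {mbps:5.1f} Mbit/s per connection")

# RTT  10 ms -> at most  52.4 Mbit/s per connection
# RTT 100 ms -> at most   5.2 Mbit/s per connection
```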

For example, Amazon hosts its EC2 services in just three US datacenters and in a single European datacenter. Applications that are not mission-critical perform well provided that the user is close to a data center and the size of files is not large. However, for mission-critical applications and very large files the approach offered by Amazon is often too slow to support users. It suffers from delays in the Internet’s middle mile. Performance and reliability are also likely to fall short of what is needed in support of war fighter applications. Therefore running cloud services for DoD’s global reach from a few datacenters located in the USA is not suitable.

Distributed Clouds On the Edge
Locating the cloud computing infrastructures in a distributed manner overcomes the problems of the middle mile. A distributed architecture — where servers are located at the edge of the DoD network, close to end users — avoids the middle mile bottlenecks. It enables the delivery of LAN-like responsiveness for cloud applications that would be running over DISA direct communication links. These would connect data centers to the edge servers, which would be located in immediate proximity to users.

Spreading the DoD workload over tens of thousands of edge servers that house frequently accessed replicas of code from central applications is technologically and economically feasible because powerful servers with twelve terabytes of capacity are now available for $5,000.

Summary
At present the largest share of cloud computing services is in the form of clouds based on centralized architectures. The drawbacks of such solutions are outages and delays in the support of mission critical businesses.

The migration of DoD applications to the cloud, which may take a decade, will require many changes in DoD’s network architecture, such as modifying IaaS, PaaS and SaaS software for increased security, application acceleration, fast synchronization with master files and for rapid recovery in case of local failures. Such changes will require software modifications that can take advantage of already proven solutions. To accomplish such a change DoD will need to place servers near points of use for most of its transactions.

As cloud computing takes over, the DoD network environment will appear as an enterprise-wide hybrid solution that supports millions of DoD users securely, economically and reliably.

 

Synchronization of Servers “On the Edge”

Remote sites such as Army bases, ships afloat or Expeditionary forces present challenges for how to architect DoD for the delivery of enterprise-wide computing services. Management complexity, inadequate infrastructure and a lack of administrative resources prevent DoD from operating a consistent, scalable and secure computing and communication service. Placing virtualized “edge servers” at thousands of local DoD installations should be the preferred way of dealing with these issues.

Most of DoD’s existing computing power now resides outside of DISA’s mega centers. Geographically distributed servers and desktops have been hard to manage, difficult to protect, and costly to maintain. When new computing facilities are needed, they must be obtained through headquarters, causing lengthy turnaround times for acquiring added capacity. Inconsistent hardware platforms and operating system variants make it impossible to provide enterprise-wide support for applications. Limited local budgets inhibit investment in business continuity solutions or in redundant hardware, which increases downtime. Consequently the management of distributed computing is performed by hundreds of subcontractors who do not have the funding to conform to increasingly costly security policies. Meanwhile, local site management has the incentive to requisition excessive capacity for its computing needs. As a result assets can be under-utilized and manpower over-staffed, while at other places service quality can suffer.

By distributing virtualized copies of servers to remote sites (to the “edge”) while managing the master virtual computers centrally, organizations can leverage existing data center resources, reducing cost and downtime. Administrators at network control centers can deploy copies of servers as well as desktops, in the form of virtual machines, across thousands of sites in minutes to respond to local needs. Administrators can also apply remote management methods, including patch management, to maintain high levels of service across remote and branch offices, which reduces the local need for specialized personnel. Under conditions of failure the central administrators can relocate any virtual computer to a back-up site, thus eliminating both scheduled and unscheduled downtime. Hard-to-maintain security protection can then be administered centrally, so that rapid responses to new security threats can be distributed to every “edge” location instantly.
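
The workflow described above can be sketched roughly as follows. The helper functions, template name and site identifiers are stubs and placeholders, not any real product’s API; an actual deployment would go through a virtualization vendor’s management SDK.

```python
# Minimal sketch (stubbed, hypothetical API): push a master VM template to many
# edge sites in parallel from a central network control center.
from concurrent.futures import ThreadPoolExecutor

MASTER_TEMPLATE = "hardened-server-baseline"          # assumed golden image name
EDGE_SITES = [f"site-{n:04d}" for n in range(1, 9)]   # placeholder site IDs

def clone_template(template: str, target_site: str) -> dict:
    """Stub: a real SDK call would copy the template's virtual disks to the site."""
    return {"site": target_site, "template": template, "state": "cloned"}

def power_on(vm: dict) -> dict:
    """Stub: a real SDK call would boot the cloned virtual machine."""
    vm["state"] = "running"
    return vm

def deploy_to_site(site: str) -> dict:
    """Clone the centrally managed template to one edge site and start it."""
    return power_on(clone_template(MASTER_TEMPLATE, site))

if __name__ == "__main__":
    # Deploy in parallel so thousands of sites can be reached in minutes.
    with ThreadPoolExecutor(max_workers=32) as pool:
        for result in pool.map(deploy_to_site, EDGE_SITES):
            print(result)
```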

By managing thousands of distributed servers centrally, the network control staff can minimize on-site setup and trips to remote sites, which are often hard to reach. Servers and desktops can be upgraded, patched and backed up from enterprise-level network control centers, increasing success rates while reducing testing costs.

Though different “edge” servers will host different virtual machines, each with a different mix of applications, templates will be available for distributing copies of virtual machines in order to preserve consistent ways of managing widely dispersed operations. Establishing a limited number of standardized deployment platforms across DoD will simplify troubleshooting, patch management, hardware refresh cycles, upgrades, migrations and support of legacy operating systems.

With thousands of DoD “edge” locations, each operating location will run only a limited number of applications. Consequently security will be greatly improved. Security compromises from insiders will be restricted to a small number of exposures, and it will be hard to launch an insider attack from an “edge” server against the data centers. Better equipped and expertly staffed defenders at the data center will detect unauthorized accesses automatically.

When hardware fails on the “edge”, the network control center can rapidly recover and restart virtual machines at an alternative site, thereby reducing unplanned downtime. Since all virtual computers are encapsulated as complete systems, it is easy to replicate an entire “edge” set-up from the data center. Unlike a physical environment, a virtualized remote operation does not need to duplicate the remote office infrastructure in the data center for disaster recovery. All that is needed is for DoD to have the capacity to host virtual machines and a recent copy of what was contained in an “edge” installation.
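
A hedged sketch of that recovery path follows; the snapshot records and helper logic are placeholders for whatever backup catalog and virtualization interfaces an actual site would use.

```python
# Minimal sketch (placeholder data, not a real backup API): when an edge site
# fails, find the most recent central copy of each of its virtual machines and
# restart them at a designated backup site.
from datetime import datetime

SNAPSHOTS = [  # illustrative records of centrally stored edge-site copies
    {"site": "site-0007", "vm": "file-server", "taken": datetime(2010, 5, 1, 2, 0)},
    {"site": "site-0007", "vm": "file-server", "taken": datetime(2010, 5, 2, 2, 0)},
    {"site": "site-0007", "vm": "mail-relay",  "taken": datetime(2010, 5, 2, 2, 5)},
]

def latest_snapshots(site: str) -> dict:
    """Return the newest stored copy of every VM that ran at the failed site."""
    newest = {}
    for snap in SNAPSHOTS:
        if snap["site"] == site:
            vm = snap["vm"]
            if vm not in newest or snap["taken"] > newest[vm]["taken"]:
                newest[vm] = snap
    return newest

def recover(site: str, backup_site: str) -> None:
    for vm, snap in latest_snapshots(site).items():
        # A real system would restore the VM image and boot it via its SDK.
        print(f"restarting {vm} (copy from {snap['taken']}) at {backup_site}")

recover("site-0007", backup_site="site-0042")
```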

“Edge” servers can also be constructed as plug-and-play virtual appliances for rapid deployment. The software at a new site should boot in a matter of minutes. Not only does this decrease the time it takes to start up remote offices, it also decreases potential support issues related to incorrectly configured software or hardware. Ultimately DoD should have the capacity to set up a new “edge” server operation as quickly as it takes to set up a power supply – in a matter of hours rather than weeks or months.
In distributed deployments of DoD systems to the “edge”, losing Wide Area Network connectivity to the data center will not disrupt operations at the “edge” location. Because the basic functions of virtualization are already present locally, the “edge” server will continue to run without interruption. The virtual software will continue to serve the local users’ client devices until operations are automatically re-synchronized once the connection is re-established. This capability is critically important under combat conditions, on ships at sea, or when a cyber-attack temporarily succeeds.
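
One way to picture this disconnection tolerance is an edge server that keeps accepting local transactions, queues them, and replays the backlog once the WAN link returns. The sketch below is an assumed design, not an existing product; the transport callback and transaction fields are placeholders.

```python
# Minimal sketch (assumed design): queue local updates while the WAN is down
# and replay them to the central master when connectivity is restored.
import json
import time
from collections import deque

class EdgeSyncBuffer:
    def __init__(self):
        self.backlog = deque()          # in production this would be persisted to disk

    def record(self, transaction: dict) -> None:
        """Serve the user locally and remember the update for later synchronization."""
        self.backlog.append(json.dumps(transaction))

    def replay(self, send_to_master) -> int:
        """Push queued transactions upstream; stop (and retry later) if the link drops."""
        sent = 0
        while self.backlog:
            item = self.backlog[0]
            if not send_to_master(item):    # send_to_master is a caller-supplied transport
                break                       # link still down; keep the backlog intact
            self.backlog.popleft()
            sent += 1
        return sent

# Example use with a stand-in transport that pretends the link is up:
if __name__ == "__main__":
    buffer = EdgeSyncBuffer()
    buffer.record({"user": "edge-client-17", "op": "update", "ts": time.time()})
    print("replayed", buffer.replay(lambda payload: True), "transactions")
```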

Summary

DoD, with its varied and diverse operations, must find a cost-effective solution that will scale with the rapidly changing needs of cyber operations. By virtualizing application packages out to “edge” locations DoD will be able to improve security while managing its complex environment. The new architecture, which places most of DoD’s processing on the “edge”, can deliver flexibility, availability and protection.

Managing Access from the Internet to DoD

The Internet is connected to DoD addresses via intermediate network devices known as routers. A router is a special-purpose dedicated computer that receives a transmission on one of its incoming Internet links, makes a routing decision, and then forwards the packet to one of its outgoing links. The routing decision is based on the current state of the connecting links as well as on the priorities that have been attributed to the various links, so that the selection of the next connection is efficient. Each router maintains a routing table, populated by means of the Border Gateway Protocol (BGP), to keep track of the path to the next network destination. Consequently, routing tables never remain static but change dynamically as conditions change in real time.

[Figure: Cisco network routers]

There are thousands of routers on the Internet path to DoD network devices. The routing tables are BGP-derived data structures held in fast memory on each router. These tables store the routes and metrics that direct the selection of a particular network IP destination, and they are updated in real time with the conditions of all mediating network connections. A unique Autonomous System Number (ASN) is allocated to each network that ultimately leads transactions to a DoD location. As of July 2009 about 320,000 BGP prefixes had been issued for connections between networks. Each router then attaches suffix numbers designating its proximate connections.

The management of routing tables is automated for instant adaptation and for assuming additional functions, such as security operations in which reverse path verification is sometimes feasible.

Routers carry communication packets between the IP address of a message’s origin and its final destination, which in the case of DoD could be any of millions of addresses. When a router receives an incoming packet, it passes it to the next router, a step known as a “hop”. The next router repeats this process, and so on, until each packet reaches its final destination, often after eight to twenty “hops”.
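
The per-packet forwarding decision can be illustrated with a small sketch: look the destination up in a routing table and pick the most specific (longest-prefix) matching entry. The table below is a toy example using documentation address ranges, not real DoD or Internet routing data.

```python
# Minimal sketch of a router's forwarding decision: longest-prefix match
# against an illustrative routing table.
import ipaddress

ROUTING_TABLE = {
    "0.0.0.0/0":         "upstream-ISP",     # default route
    "198.51.100.0/24":   "peer-router-A",    # documentation prefixes as examples
    "198.51.100.128/25": "peer-router-B",
}

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [
        (net, hop) for net, hop in ROUTING_TABLE.items()
        if addr in ipaddress.ip_network(net)
    ]
    # The most specific route (largest prefix length) wins.
    best = max(matches, key=lambda item: ipaddress.ip_network(item[0]).prefixlen)
    return best[1]

print(next_hop("198.51.100.200"))   # -> peer-router-B, since the /25 beats the /24
```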

This entire process depends on the information in the routing tables stored in each router. Corruption of any one table on the path from origin to destination will lead to network malfunctions, which makes routing tables a preferred attack target. If the routing recalculations are maliciously modified, the routing table will contain wrong entries that corrupt an Internet-mediated transaction. Since thousands of routers direct traffic to DoD, the proliferation of routing tables along its paths, always managed by third parties, keeps DoD vulnerable.

The primary source of vulnerability of routing protocols is the lack of verification of routing information obtained from proximate routers. Each router must obtain information from other routers to form a database that reflects its surrounding network topology, and each router periodically exchanges status data with its neighbors. However, the routers cannot verify the correctness of the data they receive. Injected false routing information therefore propagates from one router to another, compromising the integrity of all routers along a given path. The DoD vulnerability is magnified by the multiple “hops” that take place across several routers from origin to destination.
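
The propagation of unverified routes can be shown with a toy model. In the sketch below (illustrative only, not a BGP implementation) each router blindly accepts any advertisement that claims a shorter path and re-advertises it, so a single false announcement spreads across the whole topology.

```python
# Toy model (not real BGP): routers accept, without verification, any
# advertisement offering a shorter path, then pass it on to their neighbors.
NEIGHBORS = {                      # illustrative four-router topology
    "R1": ["R2"], "R2": ["R1", "R3"], "R3": ["R2", "R4"], "R4": ["R3"],
}

# Each router's current best route to the victim prefix: (path_length, next_hop)
routes = {r: (10, "legitimate-origin") for r in NEIGHBORS}

def advertise(origin: str, claimed_length: int) -> None:
    """Flood a (possibly false) route advertisement from `origin` outward."""
    routes[origin] = (claimed_length, "self")          # the attacker's own entry
    frontier = [(origin, claimed_length)]
    while frontier:
        router, length = frontier.pop()
        for neighbor in NEIGHBORS[router]:
            if length + 1 < routes[neighbor][0]:       # "shorter path" wins, unchecked
                routes[neighbor] = (length + 1, router)
                frontier.append((neighbor, length + 1))

advertise("R4", 0)   # R4 falsely claims to originate the prefix
print(routes)        # every router now forwards traffic for the prefix toward R4
```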

To attack routers a hostile source requires information about how the network is configured and where the routers are logically located. It is easy to find the default IP values that reveal the destination addresses on a network path; numerous commercial trace-route programs are available to do that. There are also other attack tools available either from commercial sources or as software under development by increasingly sophisticated information warfare organizations.
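
As an illustration of how little tooling such reconnaissance requires, the sketch below simply wraps the standard traceroute utility found on most Unix-like systems; the target host name is a placeholder.

```python
# Minimal sketch: enumerate the intermediate routers ("hops") on the path to a
# host by invoking the standard traceroute utility. The target is a placeholder.
import subprocess

def list_hops(target: str = "www.example.com") -> list[str]:
    """Run traceroute with numeric output (-n) and return its lines."""
    result = subprocess.run(
        ["traceroute", "-n", target],
        capture_output=True, text=True, check=False,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    for line in list_hops():
        print(line)
```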

Summary

One way of assuring DoD security is to allow DoD network IP addresses to be reachable only through a limited number of (or only one) BGP routing tables. This would force all incoming DoD traffic to obtain its addresses exclusively from a DoD-owned and DoD-operated computer. Outsiders would be unable to scan DoD IP addresses for the purpose of launching an attack.

Forcing access to DoD exclusively through DoD-managed routing tables represents a challenge and would call for a reconfiguration of its network topology. The result would be an increase in control over the handling of every bit of incoming traffic.

Cloud Management Tools

Cloud Management Tools offer a service catalog of the computing, data storage and network resource pools of all clouds that are accessible by means of the Open Virtualization Format (OVF) standard. This catalog can be used to describe deployment policies that define the quality of service, capacity and security of large computing complexes.

Cloud Management Tools make possible the creation of a “Virtual Data Center” that can embrace hundreds of diverse Software-as-a-Service, Infrastructure-as-a-Service and Platform-as-a-Service offerings. They create a catalog from which a wide range of services can be selected and then automatically provisioned.
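
The catalog idea can be sketched as a simple data structure. The entries, prices and policy fields below are hypothetical, intended only to show how a self-service request might be matched against published offerings.

```python
# Minimal sketch (hypothetical catalog entries): match a self-service request
# against published cloud offerings and pick the cheapest one that satisfies
# the requested security level and capacity.
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    name: str
    kind: str                 # "IaaS", "PaaS" or "SaaS"
    security_level: int       # higher is stricter; values here are illustrative
    vcpus: int
    price_per_hour: float

CATALOG = [
    CatalogEntry("disa-private-iaas", "IaaS", security_level=3, vcpus=8,  price_per_hour=0.90),
    CatalogEntry("public-iaas-east",  "IaaS", security_level=1, vcpus=8,  price_per_hour=0.40),
    CatalogEntry("public-iaas-west",  "IaaS", security_level=2, vcpus=16, price_per_hour=0.75),
]

def provision(kind: str, min_security: int, min_vcpus: int) -> CatalogEntry:
    """Return the cheapest catalog entry meeting the policy; raise if none fits."""
    candidates = [e for e in CATALOG
                  if e.kind == kind and e.security_level >= min_security
                  and e.vcpus >= min_vcpus]
    if not candidates:
        raise LookupError("no catalog entry satisfies the request")
    return min(candidates, key=lambda e: e.price_per_hour)

print(provision("IaaS", min_security=2, min_vcpus=8))   # -> public-iaas-west
```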

IT personnel without expertise in detailed coding, and even technology-savvy business users, have the skills necessary to use Cloud Management Tools. Bringing the business user closer to the IT provisioning process makes the Cloud Management Tools a significant enhancement in the management of business systems. They will turn IT into an easy provisioning platform that meets business computing needs without the lag time and complexity that have so far been associated with the acquisition of dedicated computer assets.

Under the Cloud Management Tools, the diverse physical infrastructures listed in the cloud catalog can be viewed and shared in a multitenant, isolated fashion by departments or outside organizations that need not know about each other. The result is a cloud resource that can be seen either as private or as public, as a customer wishes. The Cloud Management Tools enable a wide range of service providers or affiliated IT departments to create the appearance that they are delivering services directly, even though the services are assembled as a hybrid offering from several sources, including third-party applications.



For example, an organization such as the Navy can set up a Private Secure Cloud with its own infrastructure as well as an application platform. As long as affiliated organizations such as the Army and the Air Force remain interoperable, the entire Department of Defense can be seen as a cooperating organization. However, DoD cannot be self-contained and self-sufficient. It must increasingly draw on suppliers’ Secure Private Clouds, which can provide DoD with Software-as-a-Service. This would require full compliance by all parties with standards such as OVF.

DoD will also have to access Secure Public Clouds, which operate their own platforms but would also have to be interoperable. Ultimately there will be thousands of public clouds needed to support DoD operating requirements, such as those provided by FedEx, Wal-Mart, banks, travel agencies, airlines, food suppliers and health providers.

Security comes in the form of several offerings that must be tightly coupled with the Cloud Management Tools, which can certify security assurance for every external connection. A variety of commercial security tools are available for use as centrally administered virtual appliances or services. For every connection to an external cloud, IT managers will have to ensure that all connections, and particularly access to public clouds, can be protected and isolated with technology that is an integral part of the virtualization infrastructure.

With the availability of Cloud Management Tools as the master control mechanism over a firm's collection of clouds, the computer industry has entered a new era in how it organizes its computing. One can compare the recent availability of the Cloud Management Tools with the introduction of Microsoft's pervasive PC operating systems, beginning with MS-DOS in 1981 and later Windows. Over a period of more than twenty-five years the Microsoft OS enabled users to abstract the management of personal computers from personal involvement with the technically difficult controls of increasingly complex hardware features. The dominance of the Microsoft OS is now fading, increasingly displaced by hypervisors, which make it possible to abstract the management of hardware and software from direct user intervention.

Summary

Over the next decade the focus of IT executives will pass from the management of physical assets to an emphasis on choices among cloud services. IT executives will concentrate on taking advantage of the universal “Cloud”, which views all computing not as dedicated fixed assets but as an accessible utility that delivers computer capacity as well as application services at a demand-driven variable cost. Cloud Management Tools should therefore be seen as a meta-operating system that makes it possible for any organization to draw on all available global IT resources. Cloud Management Tools are the precursor of new ways of delivering computing in the decades to come.

Corporate IT departments will ultimately split into the physical side and the user side, with separate organizations and budgets for each. The physical side will continue to own and operate the private cloud hardware and communications in a firm's data center. For example, the Defense Information Systems Agency (DISA) will perform in this role for DoD. DISA will deliver secure raw capacity, published as a commodity offering with guaranteed levels of security, service levels and transaction pricing. Local administrative management will continue to be responsible for LANs and desktops, including thin clients.

The user side will increasingly use self-service tools to deploy applications from published catalogs based on policies, service levels and pricing. Users will be able to choose between internal and externally hosted capacity on the basis of competitive pricing, which will lower the cost of computing because of the huge economies of scale that cloud service providers enjoy.

The OVF Virtualization Standard

For cloud computing to become a platform- and technology-independent offering, the IT industry is adopting a number of interoperability standards. The key standard is the Open Virtualization Format (OVF) for the packaging and distribution of virtual machines. OVF offers the following:

Enables optimized distribution - OVF enables the portability and distribution of virtual appliances. In addition to support for compression for more efficient package transfers, OVF supports industry standard content verification and integrity checking, and provides a basic scheme for the management of software licensing.

Provides a simple, automated user experience – OVF offers a simple virtual machine installation process. Metadata in the OVF file can be used to validate the entire virtual package being transferred and to verify whether each virtual machine can be installed. Compatibility with the local virtual hardware is also verified.

Supports both single and multi virtual machine configurations – Software developers can configure complex multi-tiered services consisting of multiple interdependent virtual appliances.

Enables portable VM packaging - OVF is virtualization platform independent, while also enabling platform-specific enhancements to be captured. It supports the full range of virtual hard disk formats used for virtual machines, and is extensible to deal with future formats that are developed.

Affords vendor and platform independence - OVF does not rely on the use of a specific host platform, virtualization platform, or guest operating system.

Supports localization – OVF supports user visible descriptions in multiple locales, and supports localization of the interactive processes during installation of an appliance.

The OVF standard is not tied to any particular hypervisor or processor architecture. The proposal has been submitted to the Distributed Management Task Force (DMTF). OVF is endorsed by Dell,  HP, IBM, Microsoft, VMware and XenSource.

There is also the Virtual Machine Disk (VMDK) standard for the transfer of virtual disks. VMDK encodes only a single virtual disk from a virtual machine. A VMDK does not contain information about the virtual hardware of a machine, such as the CPU, memory, disk, and network information.
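
As a concrete illustration, the sketch below parses a minimal OVF 1.x descriptor with Python's standard library and lists the files and virtual systems it declares. The embedded descriptor is a trimmed, assumed example rather than a complete, validated OVF package.

```python
# Minimal sketch: read an OVF 1.x descriptor (an XML "envelope") and list the
# files and virtual systems it declares. The embedded descriptor is illustrative.
import xml.etree.ElementTree as ET

OVF_NS = {"ovf": "http://schemas.dmtf.org/ovf/envelope/1"}

SAMPLE_DESCRIPTOR = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="disk1" ovf:href="appliance-disk1.vmdk"/>
  </References>
  <VirtualSystem ovf:id="edge-appliance">
    <Info>An example virtual appliance</Info>
  </VirtualSystem>
</Envelope>
"""

def summarize(descriptor_xml: str) -> None:
    root = ET.fromstring(descriptor_xml)
    for f in root.findall(".//ovf:File", OVF_NS):
        print("file:", f.get("{http://schemas.dmtf.org/ovf/envelope/1}href"))
    for vs in root.findall(".//ovf:VirtualSystem", OVF_NS):
        print("virtual system:", vs.get("{http://schemas.dmtf.org/ovf/envelope/1}id"))

summarize(SAMPLE_DESCRIPTOR)
```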

This text was extracted largely from http://www.vmware.com/appliances/getting-started/learn/ovf.html

Summary


Cloud offerings must be interoperable to offer hybrid solutions in which private as well as public clouds can offer a variety of services. OVF makes that feasible.


Centralized Desktop Management


Total desktop management can be delivered from data centers to local or remote users as a managed service, at a reduced total cost of ownership. Standardization is achieved by downloading desktops from a single approved image held on data center servers. For improved performance, such as rapid response time, a standard remote-display protocol is applied. Customers can then access their personal desktops, including data, applications and settings, without delay while using attached peripherals.

Individual desktops remain hosted on virtual servers in a data center, where the utilization of pooled assets – such as processors, files or communications – can be balanced for efficiency.

Centralized desktop management advantages are:

1. Central management of pooled resources reduces capital costs. Each $1 reduction in capital costs brings $6 - $8 in reduced operating costs (see the sketch after this list).
2. Capital and operating costs for desktops can be cut by more than 50% through deployment of thin clients.
3. The costs of electricity and air conditioning are reduced through pooling efficiencies.
4. Security is enhanced through the concentration of security appliances and methods.
5. Troubleshooting can be performed remotely, saving on-site visits.
6. Desktops can operate without interruption through instant relocation in case of failure.
7. Desktops follow the customer when the customer moves or switches to a different device technology.
8. Desktops can be detached for off-line use and re-synchronized upon reconnection to the network.
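
The cost relationships cited in items 1 and 2 can be made concrete with a small, hedged calculation; the fleet size and unit cost below are assumptions for illustration, not DoD figures.

```python
# Illustrative arithmetic only: the fleet size and unit cost are assumptions,
# not DoD figures. Items 1 and 2 above state that each $1 of avoided capital
# cost avoids roughly $6-$8 of operating cost, and that thin clients can cut
# desktop costs by more than half.
DESKTOPS = 10_000
CAPITAL_SAVED_PER_DESKTOP = 300        # assumed $ saved per seat by pooling/thin clients
OPEX_MULTIPLIER_LOW, OPEX_MULTIPLIER_HIGH = 6, 8

capital_saved = DESKTOPS * CAPITAL_SAVED_PER_DESKTOP
opex_saved_low = capital_saved * OPEX_MULTIPLIER_LOW
opex_saved_high = capital_saved * OPEX_MULTIPLIER_HIGH

print(f"capital avoided:   ${capital_saved:,.0f}")
print(f"operating avoided: ${opex_saved_low:,.0f} to ${opex_saved_high:,.0f}")
```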
    
The potential savings from centralized desktop management can be considerable (source: alinean.com).


Centralized control also helps ensure compliance with industry and government regulations such as HIPAA, SOX and government mandates by enabling centralized control of desktops and software access. End users gain access only with credentials, and access can be restricted on the basis of the authentication method used. The most important feature is the maintenance of locked-down desktops (for example, blocking all USB ports) without restricting access to applications, which can be encapsulated in fully encrypted files. IT staff can then be assured of full security compliance. Desktop virtualization also makes it possible to monitor software licenses and to integrate with existing policy-based usage mechanisms such as Active Directory and LDAP.
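
A simplified sketch of such a policy check follows; the roles, rules and authentication labels are hypothetical placeholders standing in for whatever directory-driven (for example Active Directory or LDAP) policy a real deployment would enforce.

```python
# Minimal sketch (hypothetical policy, not a directory product's API): decide
# what a virtual desktop session may do based on who the user is and how the
# user authenticated. Roles and rules here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class SessionPolicy:
    allow_login: bool
    allow_usb: bool
    allowed_apps: tuple

def build_policy(role: str, auth_method: str) -> SessionPolicy:
    """Stronger authentication unlocks more; USB stays blocked for everyone."""
    if auth_method not in ("smart_card", "password"):
        return SessionPolicy(False, False, ())
    if role == "clinician" and auth_method == "smart_card":
        return SessionPolicy(True, False, ("records_app", "email"))
    if role == "contractor":
        return SessionPolicy(True, False, ("email",))
    return SessionPolicy(auth_method == "smart_card", False, ("email",))

print(build_policy("clinician", "smart_card"))
print(build_policy("contractor", "password"))
```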

A special case is the management of migration to Windows 7. Upgrading thousands of desktop devices is costly and time consuming. Windows XP applications will not automatically be compatible with Windows 7. Additionally, many organizations have custom applications driving their businesses, and recoding and recertifying those applications for Windows 7 is costly; most organizations will have to rely on external vendors to provide new compatible applications. Virtualizing existing Windows applications removes the dependency of the applications on the underlying operating system. A single application can then run across multiple Windows operating systems. Once the applications are virtualized, they can be moved to a complete virtual desktop environment by totally separating the operating system from the underlying hardware.
   
Summary

The insertion of a virtualization layer into the desktop environment should be seen as an extension of the operating system. It delivers functions that an individual operating system cannot execute, such as the management of disparate operating systems on the same server. One can anticipate a diminishing role for operating systems as virtualization software takes over hardware and communications functions.