
Integration of Controls for Large Cloud Data Centers

Performance, capacity and configuration management become closely connected as the size of a data center increases. Traditional tools and processes designed for stand-alone environments running on separate servers do not work in a highly virtualized cloud setting. When the size of a data center exceeds several hundred servers, the tight integration of pooled capacity and the fail-over of computing and memory assets require automated controls. Uptime is achieved by making real-time reallocation of capacity feasible.

When cloud operations support tens of thousands of devices whose processing, memory, storage and telecommunications are pooled as services, the installation of automated controls is essential. Human operators cannot cope with the rapidity and complexity of such operations. Further growth of cloud computing will therefore be constrained not by the availability of computing assets, but by the inherent limitations of how such assets are managed. Extracting capacity utilization of at least 80% from rapidly changing equipment configurations can be accomplished only if the entire data center is viewed as a single shared pool that can instantly adapt to changing demands.
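As a rough illustration of what such automated control amounts to, here is a minimal sketch of a rebalancing loop over a shared pool; the `Host` class, the 80% target and the headroom are illustrative assumptions, not any vendor's interface.

```python
# Hypothetical sketch of an automated capacity-reallocation loop for a shared pool.
# Thresholds and object names are illustrative assumptions, not a vendor API.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_used: float   # cores in use
    cpu_total: float  # cores available

    @property
    def utilization(self) -> float:
        return self.cpu_used / self.cpu_total

def rebalance(pool: list[Host], target: float = 0.80, headroom: float = 0.10) -> list[str]:
    """Suggest moves that keep every host near the target utilization."""
    actions = []
    donors = [h for h in pool if h.utilization > target + headroom]
    receivers = [h for h in pool if h.utilization < target - headroom]
    for hot in donors:
        for cold in sorted(receivers, key=lambda h: h.utilization):
            surplus = (hot.utilization - target) * hot.cpu_total
            spare = (target - cold.utilization) * cold.cpu_total
            moved = min(surplus, spare)
            if moved <= 0:
                continue
            hot.cpu_used -= moved
            cold.cpu_used += moved
            actions.append(f"move {moved:.1f} cores of load from {hot.name} to {cold.name}")
    return actions

pool = [Host("a", 95, 100), Host("b", 40, 100), Host("c", 82, 100)]
for action in rebalance(pool):
    print(action)
```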


The change in the scale of cloud data center operations makes it necessary to overhaul the way computing is organized. The new data centers require that all computing, storage and communications assets combine to offer customers not only full uptime, but also short latencies, because devices depend on on-line responses. What was perhaps tolerable to a user who could always pass accountability for poor service to company staff is not tolerable in a cloud data center, where commercial pay-per-use services enforce delivery of superior service level agreements. The security assurance staff must also support unprecedented levels of reliability.

A number of vendors offer data center management control software, for instance IBM Tivoli, HP OpenView, EMC|SMARTS and VMware vCenter. The power of these tools depends on the ability to monitor and analyze performance metrics regardless of source. Preventing vendor lock-in requires that such software be vendor and data agnostic. Such software must scale up to support the collection and analysis of millions of metrics per hour. Such scalability applies regardless of whether the metrics are collected from a single, massive cloud or from many smaller services that are affiliated with the central cloud through processing “on the edge”.

Because fail-over is also arranged across separate operations, central management control software must also be able to employ ‘remote collectors’. This feature allows it to securely tap into performance data across firewalled environments as well as geographically separated multi-datacenter deployments.

The analytics of management control software reflect the manner in which the normal behavior of each performance metric is determined. The software must be able to analyze any performance metric, because experience with millions of indicators has shown that data behave in widely disparate ways.


It is inadequate to use a single method to characterize “normal” behavior by assuming that data will follow a ‘bell-shaped curve’. It is insufficient to trigger alerts when a metric reaches two or three standard deviations from the average. Monitors must support a variety of allowable intervals that define ranges of acceptable behavior; departures from those ranges trigger an alert. Here are examples of methods that will reveal exceptional levels of performance (a sketch of several of these checks follows the list):

• Departure from linear behavior (e.g., sudden peaks in disk utilization). Monitoring defenses on a ship may require tracking in minutes when there is exposure to a missile attack.
• Two-state (e.g., on/off) availability of a service. Detection of a tracking signal by a UAV must be instant.
• Discrete value behavior detection (e.g., ‘number of database user connections’). Detection of a sudden rise in the number of transactions may indicate an incipient denial-of-service attack.
• Cyclical pattern behavior detection (e.g., weekly, monthly, etc.). A mid-month rise in financial transactions may signal a hacker attack.
• Non-time-series, ‘sparse’ data behavior, such as outliers. A rapid decline in communications may be an indicator of failure.
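A minimal sketch of how several of these baselines might be expressed, assuming simple in-memory metric histories; the function names and thresholds are illustrative, not the interface of any of the monitoring products named above.

```python
# Illustrative anomaly checks for different metric behaviors; thresholds are assumptions.
import statistics

def exceeds_linear_trend(history: list[float], latest: float, tolerance: float = 1.5) -> bool:
    """Flag a sudden peak relative to a simple linear extrapolation of recent values."""
    if len(history) < 2:
        return False
    slope = (history[-1] - history[0]) / (len(history) - 1)
    predicted = history[-1] + slope
    return latest > predicted * tolerance

def two_state_outage(latest_up: bool) -> bool:
    """On/off availability: any transition to 'off' is an alert."""
    return not latest_up

def discrete_jump(history: list[int], latest: int, factor: float = 3.0) -> bool:
    """Flag a sudden rise in a discrete count, e.g. database connections or transactions."""
    baseline = statistics.median(history)
    return baseline > 0 and latest > factor * baseline

def cyclical_deviation(same_period_history: list[float], latest: float, sigmas: float = 3.0) -> bool:
    """Compare against the same hour/day/month in prior cycles rather than a global mean."""
    mean = statistics.mean(same_period_history)
    sd = statistics.stdev(same_period_history)
    return sd > 0 and abs(latest - mean) > sigmas * sd

# Example: a weekly-cyclical metric compared only against prior Mondays at 09:00.
prior_mondays = [120.0, 115.0, 130.0, 118.0, 125.0]
print(cyclical_deviation(prior_mondays, latest=410.0))  # True: far outside its own cycle
```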

When problems are building up in a computing service, the first signs of abnormal behavior will show up as deviant performance metrics associated with an application. With sufficiently sophisticated automated detection and alert monitoring staff, it is possible to observe the abnormality and use it as an early warning of potential trouble.

It is important to recognize that automated monitoring cannot conclusively tell whether any one metric is behaving abnormally. In operations some metrics will always show abnormality at any given time. That is inconsequential system ‘noise’, and all complex systems generate some of it. The objective is to learn a computer network’s typical noise level and then take whatever action is necessary to detect noise levels that are potentially dangerous. The sensors will have to be sufficiently diverse that confirming a critical event requires the simultaneous detection of multiple adverse indicators.
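One simple way to express the “multiple simultaneous indicators” rule is an N-of-M confirmation over the individual checks above; the indicator names, time window and required count below are illustrative assumptions.

```python
# Illustrative N-of-M confirmation: treat isolated anomalies as noise, and raise a
# critical event only when several independent indicators fire in the same window.
from collections import defaultdict

def confirm_critical(events: list[tuple[int, str]], window: int = 60, required: int = 3) -> list[int]:
    """events: (timestamp_seconds, indicator_name). Return window start times that confirm."""
    buckets = defaultdict(set)
    for ts, indicator in events:
        buckets[ts // window].add(indicator)
    return [bucket * window for bucket, names in sorted(buckets.items()) if len(names) >= required]

noise = [(10, "disk_peak")]                                   # a single indicator: ignore
attack = [(100, "disk_peak"), (110, "conn_jump"), (115, "cycle_break")]
print(confirm_critical(noise + attack))                       # [60] -> only the clustered window
```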

SUMMARY
The installation of a system of controls and monitoring for large data centers warrants top executives' attention prior to proceeding with plans to implement cloud computing projects.

Models for Winning the Race with Smarter Machines?


The Winter 2012 issue of the Sloan Management Review addresses the question of how to win the race with ever-smarter machines. New business models are required. How well does this article answer how to acquire such models?

Examples of innovative business models are offered from Zara, Staples, the University of Washington, Assurant Solutions and CVS. In the case of Zara, the innovation is in leveraging the opinions of managers, with computers acting in an auxiliary manner. In the case of Staples, the primary objective is to evaluate personal views. In the case of the University of Washington, computers engage results from participants. Assurant Solutions speeds up communications. CVS streamlines the ordering process.

In each of the cited cases computers are deployed in auxiliary ways, improving only a part of the business process. This is done for firms that represent only a minute fraction of the global economy. In each case it took well-focused management to implement the desired changes for processes that are only a part of what constitutes a whole.

What, then, are the models for “winning the race”? First, it is a matter of scope. There are now close to a billion “smart machines”. They are located almost exclusively in pre-industrial and emerging-industrial countries where the examples cited in the article will not apply for decades to come. Brynjolfsson and McAfee do not deal with that.

The most neglected part of this article is its omission of how the suggested innovations can be implemented. In the cited examples management was able to concentrate on incremental improvements by organizing to make smart investments in technology in a limited area. Only when executives have sufficient influence to combine separate functions can a new unified process take hold. Winning the race is primarily an organizational challenge. The technology is readily available.

Technology is cheap and universally accessible. It takes unified management, not access to computers, to start winning the race for progress. It took the unifying long-term leadership of executives such as Bezos (Amazon) and Smith (FedEx) to push information technologies into race-winning innovations. Executives like Bezos and Smith are a rare occurrence. They are the exception, not the rule, in guiding the development of computerization.

Today’s business world is still fractured into millions of organizational enclaves. Unification, even of small parts of an enterprise, is proceeding at a speed measured in decades and human generations, not at the pace dictated by the months of Moore’s Law.

The pace of the race with smarter machines, in pre-industrial and emerging-industry enterprises, will be dictated by political methods, not by entrepreneurial means. The race with smarter machines will increasingly be managed by the power of government, because the benefits of computerization will continue to accrue primarily to the economic elite and not to the population as a whole.

China's Cyber Thievery Is National Policy—And Must Be Challenged


NOTE: On account of its importance, this is the first time I have copied a complete editorial as required reading material. From the January 27, 2012 Wall Street Journal:

Only three months ago, we would have violated U.S. secrecy laws by sharing what we write here—even though, as a former director of national intelligence, secretary of homeland security, and deputy secretary of defense, we have long known it to be true. The Chinese government has a national policy of economic espionage in cyberspace. In fact, the Chinese are the world's most active and persistent practitioners of cyber espionage today.

Evidence of China's economically devastating theft of proprietary technologies and other intellectual property from U.S. companies is growing. Only in October 2011 were details declassified in a report to Congress by the Office of the National Counterintelligence Executive. Each of us has been speaking publicly for years about the ability of cyber terrorists to cripple our critical infrastructure, including financial networks and the power grid. Now this report finally reveals what we couldn't say before: The threat of economic cyber espionage looms even more ominously.

The report is a summation of the catastrophic impact cyber espionage could have on the U.S. economy and global competitiveness over the next decade. Evidence indicates that China intends to help build its economy by intellectual-property theft rather than by innovation and investment in research and development (two strong suits of the U.S. economy). The nature of the Chinese economy offers a powerful motive to do so.

According to 2009 estimates by the United Nations, China has a population of 1.3 billion, with 468 million (about 36% of the population) living on less than $2 a day. While Chinese poverty has declined dramatically in the last 30 years, income inequality has increased, with much greater benefits going to the relatively small portion of educated people in urban areas, where about 25% of the population lives.

The bottom line is this: China has a massive, inexpensive work force ravenous for economic growth. It is much more efficient for the Chinese to steal innovations and intellectual property—the source code of advanced economies—than to incur the cost and time of creating their own. They turn those stolen ideas directly into production, creating products faster and cheaper than the U.S. and others.

Cyberspace is an ideal medium for stealing intellectual capital. Hackers can easily penetrate systems that transfer large amounts of data, while corporations and governments have a very hard time identifying specific perpetrators.

Unfortunately, it is also difficult to estimate the economic cost of these thefts to the U.S. economy. The report to Congress calls the cost "large" and notes that this includes corporate revenues, jobs, innovation and impacts to national security. Although a rigorous assessment has not been done, we think it is safe to say that "large" easily means billions of dollars and millions of jobs.

So how to protect ourselves from this economic threat? First, we must acknowledge its severity and understand that its impacts are more long-term than immediate. And we need to respond with all of the diplomatic, trade, economic and technological tools at our disposal.

The report to Congress notes that the U.S. intelligence community has improved its collaboration to better address cyber espionage in the military and national-security areas. Yet today's legislative framework severely restricts us from fully addressing domestic economic espionage. The intelligence community must gain a stronger role in collecting and analyzing this economic data and making it available to appropriate government and commercial entities.

Congress and the administration must also create the means to actively force more information-sharing. While organizations (both in government and in the private sector) claim to share information, the opposite is usually the case, and this must be actively fixed.

The U.S. also must make broader investments in education to produce many more workers with science, technology, engineering and math skills. Our country reacted to the Soviet Union's 1957 launch of Sputnik with investments in math and science education that launched the age of digital communications. Now is the time for a similar approach to build the skills our nation will need to compete in a global economy vastly different from 50 years ago.

Corporate America must do its part, too. If we are to ever understand the extent of cyber espionage, companies must be more open and aggressive about identifying, acknowledging and reporting incidents of cyber theft. Congress is considering legislation to require this, and the idea deserves support. Companies must also invest more in enhancing their employees' cyber skills; it is shocking how many cyber-security breaches result from simple human error such as coding mistakes or lost discs and laptops.

In this election year, our economy will take center stage, as will China and its role in issues such as monetary policy. If we are to protect ourselves against irreversible long-term damage, the economic issues behind cyber espionage must share some of that spotlight.

Mr. McConnell, a retired Navy vice admiral and former director of the National Security Agency (1992-96) and director of national intelligence (2007-09), is vice chairman of Booz Allen Hamilton. Mr. Chertoff, a former secretary of homeland security (2005-09), is senior counsel at Covington & Burling. Mr. Lynn has served as deputy secretary of defense (2009-11) and undersecretary of defense (1997-2001).

“Macro Clouds” and “Micro Clouds” for DoD


Cloud computing does not require running every application from a large data center. It is possible to split applications to run locally and securely on micro clouds that are re-synchronized whenever they can connect to the macro clouds. The lack of continuous connectivity between limited-purpose battlefield devices and central commands will then no longer be seen as a hurdle to the adoption of cloud computing. It is also possible to run a functional application, such as logistics, human resources or finance, as a micro cloud that reconnects with the DoD enterprise macro cloud only as needed.
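A minimal sketch of the split-and-resynchronize idea, assuming a hypothetical local queue and macro-cloud sync endpoint; the class, method names and URL are illustrative, not a DoD or vendor design.

```python
# Hypothetical micro-cloud node: work locally while disconnected, queue changes,
# and re-synchronize with the macro cloud whenever connectivity returns.
import json, time, urllib.request

class MicroCloud:
    def __init__(self, macro_url: str):
        self.macro_url = macro_url      # e.g. the enterprise macro-cloud sync endpoint
        self.pending = []               # locally recorded transactions awaiting upload

    def record(self, record: dict) -> None:
        """Accept work locally regardless of connectivity."""
        record["recorded_at"] = time.time()
        self.pending.append(record)

    def try_resync(self) -> bool:
        """Push queued records when a connection to the macro cloud is available."""
        if not self.pending:
            return True
        try:
            body = json.dumps(self.pending).encode()
            req = urllib.request.Request(self.macro_url, data=body,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=5):
                self.pending.clear()    # macro cloud acknowledged the batch
            return True
        except OSError:
            return False                # still disconnected; keep working locally

node = MicroCloud("https://macro.example.mil/sync")   # illustrative URL
node.record({"item": "inventory_update", "qty": 12})
node.try_resync()                                     # retried later if it fails
```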

Micro cloud servers will be able to operate at forward locations in support of war fighters at locations where real-time connectivity is not available or desirable. Limited cloud applications must operate in the battle space, where local forces need only a small set of pre-loaded applications and geographically limited data. Similarly, logistic micro clouds can run in isolation in a warehouse until they must reconnect with military demands for inventory data.

Through micro clouds the benefits of macro clouds can be extended to troops in the battlefield wherever network connectivity is unreliable or lacks the capacity to support feature-rich media. Micro clouds can be securely authorized as small servers running on devices as small as a high-capacity universal serial bus thumb drive attached to a laptop computer or to a shirt-pocket smart phone.

From an architectural standpoint, the size of a micro cloud can also be defined by usage: functional applications can be designed to match the scope of operations rather than be limited by the available bandwidth.

SUMMARY
Micro clouds are inexpensive, since they can be hosted on consumer-grade computing devices. They can be secure, since the macro clouds can not only download applications for limited use and for a limited time, but also implant in the micro clouds security restrictions that are re-verified when re-synchronization takes place.

A DoD architectural design that views separate parts of the enterprise as an agglomeration of micro cloud components also offers additional conceptual advantages. Individual ships, separate submarines or even entire expeditionary units can start organizing their systems as diverse clouds which will nevertheless remain connected as a part of an overall DoD Platform-as-a-Service design.

Structuring DoD systems for easy separation into micro clouds and then for re-integration into larger enterprise clouds offers a path to system interoperability. In terms of cyber operations, all of the DoD macro cloud is ultimately composed of hundreds of micro clouds!

What matters is the ability of DoD/OSD to impose on the entire enterprise a structure of standards and designs that permits the pursuit of enormous diversity while imposing full compliance, so that all macro clouds can split into micro clouds and all micro clouds can re-integrate into macro clouds. When that happens, DoD systems will surely be interoperable.

A New Task for DoD: Connecting Internet to "Things"


The sizing of the DoD cloud environment may be shaped by the rapid advent of the “Internet of Things” (IOTH) in the next ten years. IOTH is defined as: “A Wireless Web of Devices Managed by Cloud Intelligence.” In IOTH every round of artillery will be tracked from the munitions depot to the gun that fires it. Every inventoried avionics part will be located and found wherever it may be stored. Every crate full of armor vests will be identified and accounted for. IOTH will trace, indefinitely, billions of items that are currently maintained in inventory but extremely difficult to monitor as they move.

Although computers have always been embedded in physical devices as controllers, the significant change taking place now is the ability to connect even the most inexpensive devices, such as cheap Radio Frequency Identification (RFID) tags, to the Internet. What has changed is the potential for connecting vast numbers of DoD “things” to the Internet. The number of required connections exceeds by several orders of magnitude the number of items currently monitored by DoD systems.

Cloud computing, which can be defined as “… Internet-scale services hosted in massive datacenters,” enables ubiquitous web searches and access to hosted software. It also provides the analytics that enable mobile devices to adapt and personalize behavior, for example, by using their GPS location to find the most efficient way of hauling items from a depot. The cloud is the glue that binds the Internet of Things. It makes cooperation possible by means of ubiquitous networks, shared data and cloud-based agents. IOTH offers benefits in regulating the load on the communications grid and broadens the deployment of applications.
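As a rough illustration of the cloud acting as the shared index for RFID-tagged items, here is a minimal sketch; the event fields and the in-memory index are assumptions used to show the idea, not a DoD schema.

```python
# Illustrative model of cloud-side tracking for RFID scan events.
from dataclasses import dataclass, field

@dataclass
class ScanEvent:
    tag_id: str        # RFID tag on the item (e.g., a crate of armor vests)
    location: str      # reader location: depot, truck, warehouse, forward unit
    timestamp: float

@dataclass
class TrackingIndex:
    last_seen: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def ingest(self, event: ScanEvent) -> None:
        """Cloud-side ingestion: keep full history and the current location of every tag."""
        self.history.append(event)
        self.last_seen[event.tag_id] = (event.location, event.timestamp)

    def locate(self, tag_id: str):
        """Answer 'where is this item now?' from the shared index."""
        return self.last_seen.get(tag_id)

index = TrackingIndex()
index.ingest(ScanEvent("TAG-0001", "munitions depot", 1.0))
index.ingest(ScanEvent("TAG-0001", "gun battery 3", 2.0))
print(index.locate("TAG-0001"))   # ('gun battery 3', 2.0)
```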

The cloud also offers platforms for building complex services that make Internet-connected devices far more than up-to-date replacements for the previous generation of “dumb” devices. The cloud can store data that is always accessible, in real time, to a large number of separate processes. It provides computing resources sufficient for the meshing of several applications into a coherent picture.

However, the IOTH computers embedded in individual devices will always be limited by cost, power, and size constraints, which in turn will bound the versatility and sophistication of the software that can run directly on them. There is no question that DoD will have to move in the direction of IOTH in order to replace the large number of existing logistics applications, which are neither interoperable nor efficient.

SUMMARY
The deployment of IOTH technologies raises many challenges in security and privacy, particularly as network-connected devices are open to malicious attack. Although improvements in hardware and software can raise barriers for increased security, DoD will have to make changes in policy in order to put in place mechanisms that will enforce the safeguarding of billions of devices located anywhere on the globe.

Are Virtual Servers Secure?


Data center consolidation is now a key goal of DoD CIOs. With close to one hundred thousand servers, virtualization has become the most expedient technical path to downsizing computing services. Whether hosting several servers on one computer will reduce the number of data center sites remains to be shown. It will require a re-design of networking before the shrinkage into dozens of computers makes it possible to support millions of desktops from only a limited number of locations.

Shrinking thousands of workloads into hundreds of virtual computers greatly increases the complexity of the computing environment. It creates new security risks, which the consolidated environment must address. There is no question that any migration of applications to a much smaller number of platforms will magnify the exposure to compromises. DoD cannot tolerate increasing security risks even if large cost savings are available. Potential reductions of up to 70% in the number of servers cannot be used as an offset against the rising costs of security and protection.

Traditional approaches offered a measure of security through the sheer number of attack points presented to an adversary. The multiplicity of data centers, each managed individually, provided a measure of protection because targets were hard to find. However, virtualization now reduces diversity through the consolidation of processes and practices. Targets are now much larger and offer an adversary the opportunity to collect compromising results from a whole collection of applications.

What used to be sufficient in dealing with a fractured legacy environment of only a few dedicated servers cannot cope with an environment where a single pool supports dozens or even hundreds of applications. For instance, in a virtualized server pool applications will dynamically relocate not only during normal operations, but also whenever fail-over conditions dictate a shift of processing to a completely different set of servers. A security breach that was previously contained to an isolated location will now propagate across a multiplicity of sites, opening and shutting down as capacity optimization dictates. If a virtual host computer is compromised, the consequences can be catastrophic.

Virtualization creates a hypervisor layer, which clouds the visibility of communications among virtual machines. In a well-developed virtualized environment a single hypervisor could manage dozens of virtual servers, continually re-arranging the assignment of devices so that a security breach may go undetected. For instance, firewalls, which used to be assigned individually, now act only as a barrier for a cluster of applications and not for each separate virtual machine.
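A toy model of the contrast between one firewall acting as a barrier for a whole cluster and per-virtual-machine rules that still inspect VM-to-VM traffic; the policies and names are purely illustrative, not any hypervisor's interface.

```python
# Toy contrast between one firewall rule protecting a whole cluster and per-VM rules
# that follow each virtual machine; entirely illustrative.
cluster_policy = {"scope": "cluster-A", "allow_inbound": [443]}   # one barrier for all VMs

per_vm_policies = {
    "vm-web":  {"allow_inbound": [443]},
    "vm-db":   {"allow_inbound": [5432], "allowed_sources": ["vm-web"]},
    "vm-mail": {"allow_inbound": [25, 993]},
}

def traffic_allowed(src_vm: str, dst_vm: str, port: int) -> bool:
    """With per-VM policies, intra-cluster (VM-to-VM) traffic is still inspected."""
    policy = per_vm_policies.get(dst_vm, {})
    if port not in policy.get("allow_inbound", []):
        return False
    sources = policy.get("allowed_sources")
    return sources is None or src_vm in sources

print(traffic_allowed("vm-web", "vm-db", 5432))   # True: explicitly permitted
print(traffic_allowed("vm-mail", "vm-db", 5432))  # False: blocked even inside the cluster
```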

SUMMARY
Data center consolidation is now proceeding. It concentrates on server virtualization as the preferred method for achieving quick capacity utilization benefits. Unfortunately, it will take more than the application of hypervisors to a cluster of virtual computers to reduce the number of data centers. Server virtualization represents a persistent vulnerability. Cutting down the number of data centers will require changes in how DoD computing is organized and particularly in how security is managed.

Utility Cost Structure for Cloud Services


DISA has just announced the Global Content Delivery Service (GCDS) cost structure for fiscal year 2012. It features a one-time annual fixed fee for services, with no recurring monthly costs. Is such a fixed cost consistent with commercial practices?

GCDS will cover all computing costs for DISA services, whether the service is to download the latest security patches, check webmail, view information on portals, support decision making or analyze geospatial data. How DISA will calculate the amount of the annual fee, and how the units of service will be defined, was not specified. The question is whether a user will be able to make a competitive comparison between DISA and a commercial offering.

For example, the Microsoft Windows Azure cloud bills only when an application is deployed. When developing and testing, developers remove compute instances that are not being used in order to minimize compute-hour billing. Pay-as-you-go prices, along with the resources provided for each unit of usage, are listed in detailed pricing tables; for example, a two-core CPU with 3.5 GB of memory costs $0.24 per hour of usage.

The most widely used cloud service is Amazon EC2, which bills only for direct usage, on an hourly basis. Customers pay only for what they use. There is no minimum fee. The prices are based on regions and on the configuration of servers, such as $0.085 per hour for Linux and $0.12 per hour for Windows.

An examination of the pricing offered by hundreds of other cloud services firms repeats the pattern set by Microsoft and Amazon. There is quite a bit of variability in how charges are metered, but the principle of “utility” pricing holds for all firms. Everyone follows the pay-as-you-go approach.
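A minimal sketch of the comparison between pay-as-you-go metering and a fixed annual fee, using the hourly rates quoted above; the usage profile and the fixed fee are hypothetical assumptions, not DISA or vendor figures.

```python
# Comparison of pay-as-you-go metering with a fixed annual fee, using the hourly
# rates quoted above; the usage profile and the fixed fee are assumptions.
AZURE_2CORE_HOURLY = 0.24      # $/hour, 2-core / 3.5 GB instance (quoted above)
EC2_LINUX_HOURLY   = 0.085     # $/hour (quoted above)
EC2_WINDOWS_HOURLY = 0.12      # $/hour (quoted above)

def annual_utility_cost(rate_per_hour: float, instances: int, hours_per_day: float) -> float:
    """Pay only for deployed hours: the user keeps the incentive to shut capacity off."""
    return rate_per_hour * instances * hours_per_day * 365

# Hypothetical workload: 20 instances needed only 10 hours per day.
usage = dict(instances=20, hours_per_day=10)
print(f"Azure 2-core: ${annual_utility_cost(AZURE_2CORE_HOURLY, **usage):,.0f}/year")
print(f"EC2 Linux:    ${annual_utility_cost(EC2_LINUX_HOURLY, **usage):,.0f}/year")
print(f"EC2 Windows:  ${annual_utility_cost(EC2_WINDOWS_HOURLY, **usage):,.0f}/year")

HYPOTHETICAL_FIXED_ANNUAL_FEE = 50_000.0   # stand-in for a fixed allocation; not a DISA price
print("Fixed fee charged even if usage drops to zero:",
      f"${HYPOTHETICAL_FIXED_ANNUAL_FEE:,.0f}/year")
```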

SUMMARY
The strategic direction of DoD computing towards cloud computing has now been set by OSD policy. Through the pooling of computing capacity, customers would be able to choose where to process their workloads. This includes using either DoD internal or commercial options.

DISA has been designated as the “preferred option” for DoD computing. In this setting a mixture of both private as well as public processing will be used depending on economics and on security.

To make cost comparisons, DoD customers will have to weigh, in each instance, tradeoffs between current operating costs and capital investments in application development. How such tradeoffs can be made when DISA offers annual fixed-price allocations is not clear. The economics of computing dictates pay-as-you-go utility pricing. That is the only way users can get a direct incentive to offset application improvement efficiencies against potential operating cost reductions.

The absence of a unit cost pricing structure in DoD is one of the deterrents to cost reduction. Infrastructure maintenance and security account for 27% of the FY12 IT budget. If this overhead expense is collected as an annual levy (in effect a tax), there is no incentive to make cost reductions. In contrast, a commercial services firm has good reasons to keep investing in overhead cost reductions, since every improvement shows up as a profit improvement. There is no accounting reason why DoD IaaS or PaaS cloud services should not follow the identical policy; the expense of usage accounting can be negligible.


Status Report on GSA Cloud Services

The Cloud First policy, announced by the U.S. Chief Information Officer in February 2011, mandated that agencies start moving applications to the cloud by June of 2012.

The General Services Administration (GSA) was authorized early in 2011 to offer a variety of cloud contract vehicles. An Apps.gov web page was then opened, offering cloud storage, virtual computing and web hosting services. Apps.gov also offered a wide range of business apps, productivity apps, social media apps and FedRAMP, a government-wide program that dictates a standardized approach to security assessment, authorization, and monitoring of cloud products and services.

GSA then awarded a contract for Google Apps in December 2010. By October 2011 GSA had successfully moved 17,000 e-mail users to Google Apps for Government, a secure, cloud-based e-mail and collaboration platform. GSA officials have stated that using a cloud-based system will reduce the cash costs of e-mail operations by 50 percent.

In May 2011 GSA released a request for quotation to provide government agencies with access to secure, cost-efficient cloud-based email solutions. The RFQ was for the first of GSA’s Integrated Email as a Service cloud offerings, designed to increase the speed of agency adoption, deployment, and implementation of cloud technologies. This would allow agencies to purchase cloud services without the added cost of infrastructure maintenance, lowering the cost of government email and collaboration services by offering Software-as-a-Service (SaaS) solutions.

The National Oceanic and Atmospheric Administration (NOAA) has now completed moving 25,000 employees and contractors to Google Apps under the GSA contract. NOAA issued the request for proposals in January 2011 and made the award in June to Google and its partners. NOAA employees are now working with the latest technologies, such as environmental monitoring satellites and high-tech weather forecasting tools. All e-mail, collaboration and document management functions were moved to a unified Google platform in just six months, except for access authentication privileges, which were retained. The estimated savings are about 50%.

In September 2011 the Department of Homeland Security became the next federal agency to award a task order using a GSA Blanket Purchase Agreement (BPA) for cloud computing. Although the contract award is limited ($5 million over five years), this established an important precedent.

SUMMARY
The GSA BPAs have opened the doors for agencies to proceed with a rapid introduction of cloud computing for “commodity” applications, such as e-mail and collaboration systems. The GSA process also appears to be compliant with the recent Congressional guidance.

The success of the GSA and NOAA migrations to Google is proof that conversion of legacy e-mail is not necessary; a more direct migration path into cloud computing allows for a rapid transformation of applications.

The current effort by DISA to move the Army’s e-mail to a standard Microsoft environment is on hold on account of Congressional directions. From a short-term standpoint, continuing e-mail consolidation using a Microsoft solution offers an advantage because of the close entanglement of Microsoft software with local adaptations. However, from the standpoint of OSD policy, which mandates greater interoperability with other competitive options, the current DISA plans require a re-examination.

How Will the DISA “First” Data Center Strategy Work?

There were 6,100 servers in the DISA Defense Enterprise Computing Centers (DECC).   The Air Force, Army and the Defense Logistics Agency have adopted the “DISA first” strategy. DISA will be considered for application and data hosting before pursuing any other solution.

The prospect of budget reductions is now driving the efforts to eliminate redundant data center facilities. Data center consolidation is also offering opportunities to streamline network architecture and to improve network security. DISA has now assumed the responsibility for playing a key role for managing DOD’s data center consolidation strategy.

According to the OSD CIO there were 67,246 servers operating in DoD. The question is how to fit approximately 90% of all enterprise servers into DECCs that currently deliver only 10% of the total server capacity.

The existing servers at DECCs handle about 3,000 isolated applications but do not operate as a cloud. Virtualization of server computing is proceeding, though the pooling of disk space, controls and communications is not done. Transferring servers from the services and agencies will require the restructuring of applications so that all computing can be pooled in a shared cloud. The capacity of DECCs to absorb additional workloads at lower cost has yet to be demonstrated, as has the economic and technical feasibility of proceeding with massive consolidations.

The computing capacity at agencies and services is meanwhile growing at an extremely fast pace. For instance, close to 100 operating UAVs require 500 megabytes per second of bandwidth, five times the total bandwidth used by the entire U.S. military during the 1991 Gulf War. Theoretically this adds up to 180 petabytes per hour to be tracked and stored somewhere. That storage vastly exceeds the available DISA storage capacity of about five petabytes. While DoD speaks about a rapid pace of consolidation, the ability to achieve it while reducing capital and operating costs still needs to be reflected in reduced budgets for 2012-2015.

SUMMARY
Consolidation of DoD data centers into far more efficient and secure environments, primarily in DISA, is the stated policy of the OSD CIO. However, provisions in the 2012 National Defense Authorization Act are likely to hamper ongoing efforts to start the migration with the transfer of the Army’s email to hosting by DISA. Although the obstacles may be organizational and political, the technical difficulties of executing the stated policy are likely to be very large.

The capital and operating costs are likely to shift the execution of the entire data center consolidation program from DECCs to commercial firms that can offer cloud services on demand and at competitive rates. The important task for DoD will be to engineer the cloud environment so that workloads can be relocated for competitive reasons. The current DISA direction to proceed with a completely proprietary Microsoft solution for the Army must demonstrate that such flexibility will be preserved.

The FY 2012 IT Budget for DoD

The OMB-prepared analysis of the FY2012 IT budget for DoD offers new insights into existing spending. An understanding of the structure of IT spending is important for gaining realistic insight into how the just-announced strategic directions can be achieved.

The key insight for FY2012 is an increase of 5% in IT spending, not a decrease. The following table shows the changes:

/FIGURE 1/

 The following shifts in spending are significant:
1. The shift of spending from services to agencies is continuing. 40% of total DoD spending and 36% of all development are in agencies. Any proposed consolidation of applications must concentrate on the diversity of programs that are widely dispersed in a variety of agency organizations.
2. The Army shows a large increase in IT spending whereas the Air Force shows a remarkable decrease in the costs of ongoing operations. It appears that the Air Force is making good progress in the consolidation of shared applications.
3. The Navy shows a 57% increase in development costs. Since the Navy continues to operate with a mature NMCI system and NGEN is just getting started, it is hard to understand the reasons for such an increase.

The projected $38.4 billion of DoD spending does not include the payroll costs of the uniformed and civilian workforce. According to the DoD CIO, there are approximately 170,000 personnel classified as supporting information technology operations. Conservatively priced, this would add more than $17 billion to the total IT expense, or 44%. This manpower is by far the single largest cost component, far exceeding the expense for computer hardware. In planning cost reductions the primary focus should therefore be on headcount reductions.
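A back-of-the-envelope check of that arithmetic, assuming a conservative fully loaded cost of roughly $100,000 per person; the per-person figure is an assumption chosen to be consistent with the totals cited.

```python
# Back-of-the-envelope check of the manpower figures cited above; the per-person
# fully loaded cost is an assumption consistent with the stated totals.
it_budget_billion = 38.4            # FY2012 DoD IT budget cited above
it_personnel = 170_000              # personnel supporting IT operations, per the DoD CIO
assumed_cost_per_person = 100_000   # conservative fully loaded annual cost (assumption)

payroll_billion = it_personnel * assumed_cost_per_person / 1e9
print(f"Estimated IT payroll: ${payroll_billion:.1f} billion")                              # ~$17 billion
print(f"As a share of the reported IT budget: {payroll_billion / it_budget_billion:.0%}")   # ~44%
```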

One also needs to consider the relative size of this manpower because it equals the headcount of the entire projected Marine Corps force. When DoD re-examines its “tooth-to-tail” ratios, the information workforce must be seen as a major opportunity to decrease the number of support personnel.

Missing from the DoD IT budget are most intelligence costs, such as the expenses of the DIA, NSA and a variety of national security functions. Since the future of DoD depends on leveraging intelligence efforts with warfare and a variety of cyber operations, a partial exclusion of such spending removes from OSD oversight a critical component of enterprise networks.

A functional examination of DoD IT spending raises many questions about the organization of its projects. OMB divides IT spending into several categories:

/FIGURE 2/
 
1. 481 programs in Information and Technology Management consume half of the IT budget. In commercial terms that is usually classified as IT “overhead”. It deploys a variety of applications used primarily to deal with the proliferation of contractual relationships. For instance, the DoD Controller keeps track of IT spending with more than 5,000 expense line items. There is no question that any consolidation program should start with a sharp focus on how to reduce such expenditures.
2. A surprising discovery is found in the 656 supply chain management programs with a $3 billion budget – many with limited budgets – to keep track of asset records. A reduction in such systems should be seen not only as a way of reducing costs, but also as a means for streamlining the workflow so that tracking materials is simple.
3. The fact that only 28% of programs support Defense and National Security, which is the core business of IT, suggests that the reporting is incomplete and that the bulk of information technologies are classified as “weapons”, which takes them out of the IT classification. For instance, there are huge expenses for avionics systems or for missile defense. Though most of these costs deal with hardware and software, they are nevertheless defined as weapons rather than IT and are therefore excluded as a military capital cost.

SUMMARY
The FY2012 budget identifies line items that would affect the sequence of execution of an enterprise strategy for DoD. The current IT organization faces an enormous task of “cleaning up” the accumulation of up to thirty years of localized proliferation of programs that keep consuming funds for support and maintenance.

Close to $14 billion per year of development funds in FY2012 will have to be re-directed to generate short-term savings while steering programs in the desired direction, as outlined in http://pstrassmann.blogspot.com/2012/01/new-information-systems-directions-for.html.

Whether such funds will be available after austerity budgets become effective after FY2013 is not known.

There are many choices of where to start. However, the $19.7 billion of programs in Information and Technology Management appears to offer the greatest potential. First, it is a functional area where the OSD CIO has unquestioned authority. Second, it does not intrude on National Defense or National Security mission-oriented programs or on functions that are tightly coupled to military operations. Third, it is the IT management programs that support the current proliferation of program initiatives. Seizing control over the administrative processes that perpetuate past practices will enhance the ability of central management to steer the development budgets in the desired directions and toward a more consolidated approach.


Does DoD Have an Adequate IT Strategy?

According to the newly released IT strategy documents, the “Enterprise Computing Centers” (ECCs) become the default location for the over 60,000 DoD servers in use. Servers that do not fit into a small number of ECCs will remain in “Area/Regional Processing Centers” and in “Installation Processing Centers” that will be granted exceptions from consolidation. This entire migration should be mostly complete sometime after 2015.

The proposed streamlining of most of DoD’s data center capacity on such a short schedule is unprecedented. The only comparable effort was dictated by DMRD 918 in 1992, and it was not completed until ten years later. Though the original plans projected the folding of 122 data centers into five DISA-operated services, the total number of data centers outside of DISA grew enormously as components found it attractive to operate their own computing facilities.

The fundamental flaw in the implementation of DMRD plans was its sole concentration on the consolidation of data center locations, with insufficient regard to the streamlining of related collateral processes, applications and communications. Just consolidating data centers was an inadequate strategy.

To make the ambitious data center consolidations feasible, DoD will have to include in its plans the problems associated with the termination of hundreds of contractors that currently deliver local data center support services. This includes a significant share of locally managed “set-aside” contractors, which are primarily minority-owned firms. Congressional intervention to keep local contractors employed will inhibit the proposed strategy.

The latest IT strategy has added simultaneous consolidations of network controls as well as the elimination of individual networks. These are essential steps, but introduce an enormous effort to alter existing long-term contract relationships for 15,000 networks. The entire GIG 2.0 connectivity will have to be reconfigured.

The new IT strategy also adds a replacement program for the multiplicity of existing security programs, network control centers and help desks. Such a substitution will create turmoil among the staffs now operating such services, because the existing security arrangements represent a diversified patchwork of local adaptations offering a large variety of security solutions.

The new IT strategy changes end-user services at the same time, such as central coordination of all testing, certification and procurement of information technology. This includes a centralized approach to administering a new generation of hardware and software purchases while imposing innovative application development platforms on contractor operations. Whether the existing contractual arrangements can accept such changes on the proposed schedule is doubtful. Software development practices of hundreds of contractors are difficult to alter while maintenance of existing code must continue without a flaw.

The new IT strategy proposes to address the methods used in connecting over seven million desktops that somehow must interact with the new data center configuration of virtual servers with fail-over capabilities. Shifting millions of computers and smart phones to become virtual devices requires a redesign in switching and in software, which involves substantially more than just changing hardware.

The new IT strategy proposes shifting much of the existing technology to web-based desktop and smart phone productivity suites. Divestment of existing hardware while keeping customers operating without interruption is going to be difficult on account of the time that will have to be devoted to retraining. Implementation schedules will have to be extended unless large support staffs are available to administer dual operations in the interim.

The new IT strategy also wishes to pursue a parallel approach to systems reconfiguration, with the integration of voice and video for all types of devices, including mobile computers. How that can be sequenced without disruption is a formidable task that could take more than a decade to complete.

The failure of DMRD 918 was its neglect of applications and data services. Proceeding with data center consolidation without synchronization of interoperable applications is perilous on an accelerated schedule. Any IT plan conceived in isolation, without prior assurance of close cooperation from clerical and administrative bureaucracies, needs examination. An effort to achieve standardization and unification of data definitions across DoD components has been in place in DISA since 1993, but so far has managed to make only minor progress.

There are also technical issues that need to be considered before accepting the proposed strategy. As yet the Office of the DoD CIO has not published a comprehensive and all-inclusive reference enterprise architecture that would support the proposed overhaul of systems. There are no technical standards in place for a federated enterprise solution that delegates the roles of military services and agencies into a support position. The consequence of uprooting existing commitments, especially for multi-billion-dollar programs with multi-year schedules, has not been detailed.

The work that needs to be done in the competitive selection of a limited set of development platforms is still awaiting completion. From an acquisition standpoint this may consume most of the time available. DoD, with its FY12 projected IT budget of $38 billion, is more than ten times larger than the IT budgets of the largest commercial organizations. Dictating the adoption of a limited set of open source software development platforms in DoD will create an upheaval among software supplier firms. Congressional interventions will slow down vendor selection for an extended time.

Agreements on how to implement the concept of application development in which every function is accepted by all components after being tested by only one are still to be worked out. This may be one of the stickiest issues for reaching agreement across all components.

The long-lingering effort to assure enterprise-wide binding acceptance of MetaData should be completed. There are at least 3,000 individual systems now in place, each with its own separately maintained information stores. Without such acceptance, venturing into a consolidated environment where data stores become a pooled service is too risky.

Widespread acceptance of certified code from development platforms such as Forge.mil should be improved. So far, only a negligible share of DoD programs has benefited from the use of pre-fabricated software code.

The endorsement of digital signatures now requires enterprise-wide implementation. To proceed with shared enterprise-level processing on the current schedule requires DoD-wide agreements about accepting enterprise-level messaging and collaboration applications.

SUMMARY
The newly released IT strategy documents are certainly commendable. However, from the standpoint of the speed of implementation the risks are too great. We have counted over twenty risks, each with a capacity to inhibit progress of the entire proposed strategy.

As a rule, individual program managers can always concentrate on delivering results with only a small number of known risks, for projects that have a limited budget. However, in this case, which is the most ambitious proposal for a total reconfiguration of DoD IT ever conceived, the known as well as the unknown risks are just too great to accept the proposed rapid schedule. The history of on-time and on-budget performance of IT projects shows that the larger the scope of any effort, the greater the likelihood that neither schedule nor results will follow the original plans.

As has been always the case before, IT reform depends on the leadership of the key IT executives, on the capabilities of the workforce, on the support of the contractors and on the skills of the technologists to guide DoD into a completely different information environment.

The existing strategic plan has not given sufficient consideration to the prevailing social situations (also called “politics”). It does not include an analysis of what DoD organizations are capable of executing. The strategy is too extensive, trying to solve too many of the existing problems all at once. It moves too fast while engaging in multiple simultaneous radical innovations. As proposed, the new strategy needs more work to show how many of the projected results can be delivered in the foreseeable future.

Objections to Cloud Computing Security

Security vulnerability is the most frequently voiced objection to cloud computing. Everyone will readily attribute greater efficiency and effectiveness to platform or software as a service. However, security assurance is always cited as an issue for which safeguards are not adequately specified. Such objections reflect an insufficient understanding of the far more demanding technical capabilities that the security of cloud computing requires.

From a policy standpoint the following views on the security issues are applicable: [1]
• Consolidation into a limited number of clouds enables secure services because the number of data centers exposed to attack is much smaller than the hundreds of existing sites.
• With tightly controlled identity authorizations as well as access privileges, information can be made securely accessible to all.
• Deploying enterprise-wide standard identity and access management protocols extends security protection from the network to the data stored on servers.
• DoD networks can be better protected from threats, both internal and external, by the ability to block a much smaller number of potential gaps in the information infrastructure.
• Concentrating the limited number of staff, as well as the costly forensic software, engaged in computer network defense makes it possible to anticipate attacks.
• Tightly managed assurance processes, counter-intelligence, expert security management and automated command structures will ensure that military networks remain available at all times.
• The smaller number of standard cloud environments can ensure an ability to recover instantly from any attack.

SUMMARY
The security assurance of a cloud-based DoD environment is a highly technical issue. What is currently practiced as safeguarding of highly distributed operations does not apply under conditions that would prevail in a consolidated cloud-based environment.

Answering the objections to cloud computing requires the installation of unprecedented countermeasures as computing assets become concentrated into a vastly smaller number of targets. From a policy standpoint, as noted above, cloud-based computing can be protected. It will, however, take a very large and costly effort to proceed with implementation.



[1] Signed_ITESR_6SEP11. Version 1.0 – 6 SEP 2011

New Information Systems Directions for the DoD

We have a new DoD IT Enterprise Strategy and Roadmap. The strategy has just been signed by the DEPSECDEF as well as by the OSD CIO. (1) This makes it the highest-level statement of IT directions in over two decades. The new strategy calls for an overhaul of the policies that guide DoD information systems. Implementation now becomes a challenge in an era when funding for new systems development declines.

The following illustrates some of the key concepts that require a complete reorientation of how DoD manages information technologies:

1. New policy: DoD personnel will have seamless access to all authorized information, enabling the creation, location, use and sharing of information. Access will be through a variety of technologies, including special-purpose mobile devices.
Current condition: Seamless access to information is presently not possible. DoD personnel use computing services in 150 countries, 6,000 locations and in over 600,000 buildings. This diversity requires standardization that would be difficult to make available.
Conclusion: Extremely hard to do. Requires a change in the way DoD systems are configured.

2. New policy: Commanders will have access to information available from all DoD resources, enabling improved command and control, increasing speed of action, and enhancing the ability to coordinate across organizational boundaries or with mission partners.
Current condition: Over 15,000 uncoordinated networks prevent access that offers increased speed as well as real-time coordination.  Consolidation of all of the networks under centrally managed network control centers becomes a key requirement for further progress.
Conclusion: This becomes an extremely difficult undertaking. Can be done, but would require a complete reconfiguration of the GIG.

3. New policy: Individual service members and government civilians will be provided with a standard IT user experience, enabling them to do their jobs and providing them with the same look, feel, and access to information on reassignment, mobilization, or deployment.
Current condition: DoD systems depend on over seven million devices for input and for display of information. There may be millions of unique and incompatible formats for the delivery of user experiences.
Conclusion: Remedying format incompatibilities requires replacing all the existing interfaces with standard software. That becomes a multi-billion-dollar task, though shifting costs from low-cost thin clients to a highly reliable cloud makes this option feasible.

4. New policy: Common identity management, access control, authorization, and authentication schemes are necessary to permit access based on a user’s credentials.
Current condition: This policy calls for the adoption of shared networks as well as the revision of access privileges that are currently included in close to 70,000 servers.
Conclusion: The workflow between the existing personnel systems and the access authorization authorities must be revised. Overhauling the systems access privilege granting process will require a change in organizational relationships. This policy can be implemented rapidly and at a low cost.

5. New policy: Common DoD-wide services, applications, and tools will be broadly usable across the DoD, thereby minimizing duplicate efforts, reducing data fragmentation and translation, and reducing the need for retraining when users are reassigned, mobilized, or deployed.
Current condition: This policy cannot be executed within the organizational and funding structures currently in place.
Conclusion: Standardization of applications and of software tools will necessitate junking much of the code already in place, or temporarily preserving it as virtualized legacy code. Reducing data fragmentation would require full implementation of the DoD MetaData directory, currently in a decade-long development program. This policy will most likely be the most costly part of the entire new strategy. It may take a decade to implement.

6. New Policy: Streamlined IT acquisition processes must support rapid fielding of capabilities, inclusive of enterprise-wide certification and accreditation of new services and applications.
Current conditions: Presently there are over 10,000 operational systems in place, controlled by hundreds of acquisition personnel. There are 79 major projects (with current spending of $12.3 billion) that have been ongoing for close to a decade and that have a proprietary technology deeply ingrained.
Conclusion: Disentangling DoD from several billion dollars' worth of non-interoperable software can be done by changing OSD policy and obtaining Congressional approval.

7. New Policy: Consolidated operations centers will provide pooled computing resources and bandwidth as needed. Standardized data centers will make it easier to access, reallocate, and monitor resources.
Current conditions: The existing number of data centers, estimated at over 770, represents a major challenge in consolidation without major changes in the software that currently occupies over 65,000 servers.
Conclusion: Can be done by shifting the workload to commercial Infrastructure-as-a-Service suppliers, but under tight DoD control so that the workload remains portable.

SUMMARY
There is no question that the new OSD IT policy is in line with what are the requirements of the new military environment. The problem is how to implement the transition, because the financial, technical and organizational hurdles are challenging.

The idea of reprogramming 10,000 operational systems into a standard environment, with standard desktops, is neither affordable nor technically executable on an acceptable schedule. DoD will have to consider radically new ways to achieve the goals of the new policies.

One of the options is to shift DoD systems to a Platform-as-a-Service environment where a standard DoD enterprise infrastructure supports multiple systems, even virtualized legacy applications. Another option is to migrate “commodity applications” such as document processing, collaboration and e-mail to Software-as-a-Service offerings.


 (1)  Signed_ITESR_6SEP11. Version 1.0 – 6 SEP 2011