
Acquisition of Cloud Software and Services


There are now thousands of firms offering “cloud” services. They range from large suppliers, such as Amazon, Google and Salesforce, to specialized enterprises such as Terremark and Savvis. The scope of these firms is potentially global, since they can deliver their products anywhere. These firms can be cloud brokers, cloud managers, cloud operators (SaaS, PaaS or IaaS), cloud platform vendors, software providers (VMware, Citrix, Microsoft) or cloud hardware suppliers (IBM, Dell, HP).

Switching a firm’s highly customized data center operations to any combination of Private, Public or Hybrid clouds will decrease operating costs as well as deliver greater security, more effective utilization of capacity and improved availability. The evidence that cloud computing delivers such gains is undisputed. However, realizing the benefits of cloud computing will require major changes in the way computing is managed.

How to migrate into a cloud-based computing environment is a decision that every chief information officer is facing at this time. One can proceed incrementally by starting with the progressive virtualization of in-house servers. Such gains can be made quickly as the number of computing platforms is reduced.

Alternatively, a firm can transfer part of its computing workload to a services provider, such as by outsourcing its e-mail or accounting. Costs will be reduced and the quality of the service will improve. The problem with all such moves is that a firm’s IT operations commit to an incremental improvement without engaging on a path that would lead to much greater information effectiveness for the entire enterprise.
    
Here is a partial list of issues that occur when a firm pursues incremental cloud migration. In each case this involves acquisition of services from suppliers who offer only partial technology solutions:

1. Every one of the thousands of vendors will attempt to “lock in” its customers to a progression that favors its proprietary offerings, but only for the contracted work.
2. A proprietary contract will limit access to public cloud services. It is unlikely that such arrangements will be interoperable with proprietary private cloud services.
3. Data centers from different parts of an organization are likely to pursue incompatible cloud applications.
4. Security policies, back-up and fail-over capacity will be either inconsistent or not achievable.
5. The sharing of capacity and managerial control cannot be implemented. Large supervisory staffs will remain in place.

Cloud vendors have become specialized. They offer a variety of services, each providing some sort of a “cloud” solution but never a complete answer to enterprise needs. The chart referenced in [1] illustrates the variety of firms and the published standards that are currently available to guide cloud acquisition.


A customer should be able to select from that list those suppliers that comply with open standards, such as the Open Virtualization Format (OVF). [2] The OVF specification describes an open, secure, portable, extensible format for the packaging and distribution of software to be run on virtual machines. OVF has arisen from the collaboration of key vendors in the industry and is accepted in industry forums as a future standard for portable virtual machines.
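
As an illustration, the sketch below inspects an OVF descriptor to see which files and virtual systems a package declares. It is a minimal sketch only: the file name “appliance.ovf” is an assumption, and the namespace used follows the OVF 1.x envelope schema published by the DMTF.

```python
# Minimal sketch: inspect an OVF descriptor to see what a package contains.
# Assumes a local "appliance.ovf" descriptor exported by some vendor tool (hypothetical).
import xml.etree.ElementTree as ET

OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"   # OVF 1.x envelope namespace

tree = ET.parse("appliance.ovf")
envelope = tree.getroot()

# List the files (disk images, manifests) referenced by the package.
for ref in envelope.iter(f"{OVF_NS}File"):
    print("file:", ref.get(f"{OVF_NS}href"), "size:", ref.get(f"{OVF_NS}size"))

# List the virtual systems the package describes.
for vs in envelope.iter(f"{OVF_NS}VirtualSystem"):
    print("virtual system:", vs.get(f"{OVF_NS}id"))
```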

The Open Cloud Computing Interface (OCCI) comprises a set of open, community-led specifications. [3] OCCI is a protocol and API for a range of cloud management tasks. OCCI was initiated to create a remote management API for IaaS services, allowing for the development of interoperable tools for common tasks including deployment, autonomic scaling and monitoring. It has evolved into a flexible API with a focus on integration, portability and interoperability while offering extensibility. The current release of the Open Cloud Computing Interface is suitable for serving many other cloud models.
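
To make the idea of an interoperable management API concrete, here is a minimal sketch of querying an OCCI endpoint over its HTTP rendering. The endpoint URL is an assumption, and a real provider would also require authentication; the request pattern follows the OCCI HTTP rendering described in [3].

```python
# Minimal sketch of talking to an OCCI endpoint over its HTTP rendering.
# The endpoint URL is hypothetical; adjust (and add credentials) for a real provider.
import requests

ENDPOINT = "https://occi.example.com"   # hypothetical IaaS provider exposing OCCI

# Discover what the provider supports (categories/kinds) via the query interface.
caps = requests.get(f"{ENDPOINT}/-/", headers={"Accept": "text/plain"})
print(caps.text)

# List existing compute resources as a plain list of URIs.
computes = requests.get(f"{ENDPOINT}/compute/", headers={"Accept": "text/uri-list"})
for uri in computes.text.splitlines():
    print("compute resource:", uri)
```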

SUMMARY
The advancements in cloud computing have led to a proliferation of vendor offerings as vendors reorganize to support cloud computing that is interoperable and integrated. The most advanced aspect of this turmoil is the adoption of software-defined networking, as the functions of network switching and routing are relocated from hardware devices to software-managed servers. A similar evolution is now taking place in the shift from security hardware “appliances” (stand-alone firewalls and malware tracking) to software-defined capabilities. In this race towards increased integration of cloud software with layers of server computing, rather than adding hardware devices, the leading firm is VMware, which has announced the open source “Cloud Foundry” as a new product. [4]

Budget pressures as well as increases in the demand for computing services have put pressure on chief information officers to accelerate the conversion to cloud computing. The thousands of firms now offering cloud solutions need to be examined for a demonstration that their offerings have a well-defined migration path for delivering long-term gains. As one of the selected acquisition criteria, the demonstration of such an evolutionary path will also have to include compliance with published standards as well as the absence of proprietary solutions.


 [1] http://blogs-images.forbes.com/kevinjackson/files/2012/08/CloudTechSpectrum_Vendors_v21.png
 [2] http://dmtf.org/sites/default/files/standards/documents/DSP0243_1.1.0.pdf
 [3] http://occi-wg.org/
 [4] http://www.cloudfoundry.com/

Defense Business Board Report on Cloud Computing


The Defense Business Board (DBB) is one of the highest-level committees advising the Secretary of Defense. [1]  Its report on “Data Center Consolidation and Cloud Computing” warrants attention as an indication of what policy directions DoD should be following. [2]

The purpose of this note is to comment on DBB findings:
1. The Department’s FY12 budget for IT is reported as $38.5 billion, $24 billion of which is dedicated to infrastructure. Those numbers are incomplete since they do not include the payroll of 90,000 military and civilian employees, worth over $10 billion. Nor do they include the time expended by employees in administrative, support, training and idle time associated with over 3 million on-line users, amounting to at least $3,000 per capita per year, or $9 billion. [3] From the standpoint of potential DoD cost reduction targets, the total direct IT cost should be considered to be at least $58 billion. In addition there are also collateral management costs, such as excessive purchasing due to long procurement cycles, high user support costs to maintain separate systems and high labor costs due to inefficient staff deployment.
2. The report lists over 772 data centers in place, with a data center defined as having more than 500 sq. ft. Such a count understates the number of data centers, since the servers of a modern computing facility can be accommodated in less than 200 sq. ft. Therefore, the number of data centers eligible for consolidation is well over a thousand. Consolidating equipment represents the least of the expense. Most of the cost is in the re-alignment of files, communications and contract arrangements.
3. The DBB does not recognize that the cost of the infrastructure, or 62% of the total, is broken up into thousands of separate programs. Hardly any of the programs are interoperable from a logical or physical standpoint. The largest component of the infrastructure is $9.9 billion for telecommunications, largely managed by DISA. These costs are managed as an allocation of total costs rather than through transaction fees, as is generally accepted commercial practice and as was originally recommended as DoD policy in 1993 through DMRD 918.
4. The cost savings and benefits from streamlining DoD IT are not presented in the form of cost justifications. For instance, the re-wiring and software redesign costs for data center consolidation involve a major restructuring to fit enterprise-level standards for over 3,000 programs. Business cases that would support such an effort have not been completed.
5. No recommendations have been made on how to deal with the loss of local operational control over applications. The re-assignment of dedicated staffs or contractors remains under local control. The task of restructuring major programs does not account for the large number of sub-contractors in each instance, nor for the up to 30% of the value of each program distributed to a multitude of small business operators who have embedded unique features and functions into applications.
6. The DBB has not spelled out a migration process for simplifying operations through standards, interoperable software and telecommunications that are tightly coupled with applications.
7. The entire cloud software implementation will be determined by policies that dictate the migration process at the enterprise and not component levels. The speed of migration will determine the rate at which savings can be realized. If the migration takes too long, the conversion into the cloud environment will most likely never pay off.
8. There will be huge reductions in manpower if the most efficient cloud computing policy is chosen for DoD. The required skill levels, especially as employment shifts from operations to development, will make it more difficult to recruit replacements. If contractors and sub-contractors are included, the total manpower affected is on the order of 300,000. The DBB has not addressed how this issue can be resolved.
9. The report offers a table with potential cost savings. DoD cannot use such data as benchmarks in the absence of any details of how such efficiencies can be realized. Projected savings of 70-90% call for a radical re-architecting of the DoD approach to management. For instance, in the absence of any discussion of how the reduction of application development time to 4 days can take place, there is no explanation of how changes to existing acquisition policies would make such projections credible. Without an indication of what type of up-front investments are necessary, any ROI forecast is without support.
10. The statement that “properly designed cloud systems can be more secure” has no merit. How the DoD enterprise would protect 7 million connected devices against insider compromises or violations of security policy is not clear.
11. The admonitions that DoD needs strong governance and leadership, a clear strategy, a well-articulated “concept of operations” as well as the removal of policy barriers are self-evident. Such assertions are without merit in the absence of specific recommendations.
12. The four-step migration sequence, with cloud acceptance as the last phase after all other rationalizations have taken place, is unrealistic. DoD must start with a well-articulated cloud architecture before proceeding with incremental migration. Incremental progress, without an overall plan, will limit progress to partial, local improvements in the status quo.
13. DoD progress toward cloud computing cannot be achieved through hundreds of separate pilot programs. The limits on future funding call for a concentrated effort. Progress cannot come from program-based short-term savings, but only from a radical overhaul of the existing infrastructure, where the largest inefficiencies exist at this time.
14. The recommendation that the DoD CIO have veto power over IT spending but engage component CIOs as chief implementers while leveraging DISA runs into conflict with Title 10 responsibilities. Without addressing this issue, the DBB report is “toothless”.
15. Applying a sequenced approach in which data center consolidation is the high-priority action addresses the least profitable initiative, with dubious payoffs. It keeps the implementation of cloud computing in the hands of the components and not primarily with the DoD enterprise. The strategic direction of DoD should aim towards enterprise-level cloud computing that mandates application consolidation as well as the enterprise-wide adoption of virtual personal appliances.

 SUMMARY
The DBB report is incomplete. It does not offer actionable solutions. It only raises policy-level questions, which is insufficient. As components formulate their FY13-FY18 budget requests, they will find nothing in this report to guide the re-alignments needed to advance DoD towards cloud computing.


 [1] http://dbb.defense.gov/charters.shtml
 [2] http://dbb.defense.gov/pdf/Final%20IT%20Report%20with%20Tabs_FF9D.pdf
 [3] Gartner Research Note G00208726, 11/2010



The Deployment of Virtual Device Interfaces (VDI)


Desktop and smart-phone virtualization allows organizations to adopt a centralized approach to managing the configuration of computing devices and thereby greatly reduce costs. By decoupling applications, data and the operating system from devices, and by moving these components into a pooled center, a streamlined, secure way to manage distributed devices becomes feasible. Computing devices can then be centrally managed and desktop customers can realize many benefits.

VDI can manage tens of thousands of end-user devices from a centralized administrative interface, from which it allows provisioning, configuration management, connection brokering, policy enforcement, performance monitoring and application assignment. VDI increases security and compliance by moving data into a computing center, centrally enforcing endpoint security and streamlining security countermeasure processes. Most important, VDI makes it possible to install security services against “spear-phishing” attacks that would otherwise go undetected.

VDI offers economic advantages. Centralizing the infrastructure makes it less costly for IT staff to provision, maintain and monitor desktop images across their entire life cycle while decreasing support calls and reducing end-user downtime. The Total Cost of Ownership (TCO) of an unmanaged computing device is $5,795/year. The comparable cost for a VDI device is $3,310. For instance, the DoD population of more than 3 million devices suggests a potential direct cost reduction of one billion dollars per year. When the major savings from end-user costs (administration, training, repairs and downtime) are added, the potential gains increase by $6.5 billion per year.

For smaller firms, potential savings of $2,500,000 per year could be realized for every 1,000 computing devices.
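
The arithmetic behind these figures can be checked with a few lines, using only the TCO numbers and device counts quoted above.

```python
# Worked arithmetic from the TCO figures quoted above.
unmanaged_tco = 5795          # $/device/year, unmanaged device
vdi_tco = 3310                # $/device/year, VDI-managed device
savings_per_device = unmanaged_tco - vdi_tco        # $2,485 per device per year

# Smaller firm: every 1,000 devices.
print(1_000 * savings_per_device)        # roughly $2.5 million/year, as stated above

# DoD scale: more than 3 million devices.
print(3_000_000 * savings_per_device)    # roughly $7.5 billion/year in total potential gains,
                                         # of which the text attributes about $1B to direct
                                         # costs and about $6.5B to end-user costs
```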

VDI delivers a consistent user experience across locations and devices, over the LAN and WAN, with lower latency and much higher uptime reliability. Users can connect to the VDI environment from a wide range of devices, including desktops, thin or zero clients, and mobile devices. Mobile users can access their VDI desktops even if disconnected from the network, provided that they re-synchronize their applications afterwards. A software configuration management console enables IT administrators to centrally administer thousands of VDI desktops from a single image for management, provisioning and deployment.

VDI is installed on a virtual infrastructure, which includes the virtual machine hypervisors and the management center used to create and manage the virtual machines. End users open VDI clients on endpoint devices to log in to their desktops, which are “views” of virtual machines such as Windows desktops. Users can access their desktops from a variety of endpoint devices where the VDI client is installed, such as Macintosh, Windows and Linux computers, thin clients, zero clients, iPads, and Android-based tablets.

To install VDI, the following are necessary: cloud network and storage connections; Microsoft Active Directory and domain controllers; and hypervisors. The VDI connection server then authenticates client users through the integrated Windows Active Directory, which connects the users to their virtual desktops. Users can also connect directly to the central desktop. For remote connections, a range of security servers stands as protection between the clients and the internal network.
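
As a concrete illustration of the authentication and brokering flow just described, here is a minimal Python sketch using the ldap3 package. The domain controller name, domain, desktop pool and helper function are assumptions made for illustration; they do not describe any particular vendor’s product.

```python
# Minimal sketch of a VDI connection-brokering step (illustrative only).
# Assumes the ldap3 package and a reachable Active Directory domain controller;
# the host name, domain and desktop pool below are made-up examples.
from ldap3 import Server, Connection, NTLM

DESKTOP_POOL = {"win-pool": ["vdi-desktop-001", "vdi-desktop-002"]}   # hypothetical pool

def broker_desktop(username: str, password: str, pool: str = "win-pool") -> str:
    """Authenticate against AD, then assign the first free desktop in the pool."""
    server = Server("dc01.example.mil")                  # hypothetical domain controller
    conn = Connection(server, user=f"EXAMPLE\\{username}",
                      password=password, authentication=NTLM)
    if not conn.bind():                                  # AD rejected the credentials
        raise PermissionError("Active Directory authentication failed")
    conn.unbind()
    if not DESKTOP_POOL.get(pool):
        raise RuntimeError("no desktops available in pool")
    return DESKTOP_POOL[pool].pop(0)                     # entitle the user to a desktop

# Example use: desktop = broker_desktop("jdoe", "secret")
```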

Each VDI virtual machine desktop has within it an operating system, a VDI agent, the user profile (“persona”) and the installed applications. From the VDI administrator console it is then possible to view all VDI components.

VDI ultimately requires the adoption of a standard protocol so that an organization can operate seamlessly with a single common platform from the desktop to the datacenter. That enables private and public cloud-based desktop services across a variety of hybrid cloud services. Proprietary VDI protocols from firms such as IBM, Microsoft, Oracle and VMware offer VDI capabilities which, however, are in most cases not interoperable.

SUMMARY
Installing the VDI environment can be a complex, multi-step process depending on the options chosen:
1. The VDI host infrastructure must be installed.
2. VDI view agents, inclusive of templates, must be set up and integrated.
3. Microsoft Active Directory and Domain Controller services must be installed.
4. The VDI composer database and SSL security certificates must be added.
5. VDI connection servers must be loaded onto dedicated physical machines.
6. Configuration of VDI transfer software, such as Windows applications, must be completed.
7. Desktop pools of hardware need to be set up.
8. Security services require installation.
9. The entitlement of individuals to their respective desktops must be designated.
10. Network connections are required for customized configurations.
11. Personal profiles must be installed.

If VDI is installed into a private cloud that captures a wide range of existing configurations (Windows, Linux, etc.), the conversion will be costly and the payback will take a long time. If the VDI conversion takes place after a migration to thin clients has already occurred, it will be easier.
The adoption of VDI does not necessarily have to be made into a private cloud environment. It could be implemented as a hosted service that already includes VDI as a standard offering.

The current DoD policy of rapidly migrating thousands of diverse and customized configurations presents an enormous challenge. Achieving major cost reductions in short order will require direction from an enterprise architectural level, not from the standpoint of thousands of existing individual programs that will have to be harmonized.



Containing User-Activated Security Threats


The single largest security threat at this time comes through network breaches. Employees are the direct targets of adversaries whose objective is to penetrate networks to gain access and then exploit it to do further damage. The most successful attack is spear-phishing employees with e-mail containing links to malicious sites. The adversary tricks an employee into becoming an accomplice to a network breach every time they click on a link that looks innocent but hides an attack. Every employee is therefore a potential point of weakness in security.

A well-designed attack has a high chance of success. Every employee is a potential contributor to a security breach, from the intern to the chief executive. The adversaries also know that internal network security protecting the many incoming transactions is, for all practical purposes, non-existent. After gaining access to a single machine, an attacker can move laterally to seek out the keys to the entire network. This is a problem that demands a sophisticated technology solution to aid the internal security team in identifying and then isolating the adversary while protecting the network.

At present, infections are usually detected weeks or even months after the fact. By the time damage can be contained, the adversary has had ample time to both access the network and steal sensitive data. While one attack is being cleaned up, the adversaries are already launching another penetration.

Most of the existing counter-measures are reactive technologies. They require a list of known bad malware or websites in order to detect or block malware. These technologies no longer work against today’s adversaries, who morph their signatures while bringing websites up and down on an instant basis. Malware authors produced seven million brand-new variants in the first quarter of 2012 (https://portal.mcafee.com/downloads/). Malware authors are also utilizing polymorphic techniques in which malware mutates instantly to evade detection. The reactive defense perimeter has been shrinking while vendor-provided anti-virus protection detects less than 19% of new incursions.
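
The weakness of signature-based blocking can be shown in a few lines. The sketch below uses harmless placeholder bytes in place of a real sample; it only demonstrates that a trivial mutation changes the hash and therefore escapes a reactive blocklist.

```python
# Minimal sketch of why signature (hash) blocklists fail against polymorphic malware.
# The "payload" bytes are a harmless stand-in for a real sample.
import hashlib

known_bad_hashes = set()

payload = b"...original malicious payload bytes..."          # placeholder sample
known_bad_hashes.add(hashlib.sha256(payload).hexdigest())    # signature recorded by a vendor

mutated = payload + b"\x00"                                   # trivial one-byte mutation

def is_blocked(sample: bytes) -> bool:
    """Reactive check: block only samples whose hash is already on the list."""
    return hashlib.sha256(sample).hexdigest() in known_bad_hashes

print(is_blocked(payload))   # True  - the known variant is caught
print(is_blocked(mutated))   # False - the mutated variant slips through, behavior unchanged
```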

The existing anti-malware paradigm must now change. It must evolve from protecting assets that are statically placed behind layered defenses to protecting those assets wherever they may be. The employee has now become the primary target. Every one of the multiple mobile computing devices must be guarded. According to US-CERT, in the first quarter of FY2012 phishing and malicious websites accounted for 58% of direct attacks against employees who clicked to permit access.

One traditional means of protection is to build a better network firewall. Firewalls are designed to stop inbound threats to services that should not be available to an outsider. Unfortunately, firewalls are ineffective here since they block only inbound attacks. Browser malware is initiated by outbound requests that pass through the firewall after a user clicks to admit them. The attacker therefore doesn’t need to try to penetrate the network. The employee pulls the malware in from the inside!

While application whitelisting is effective at preventing standalone malware, more than half of attacks exploit known applications, including the browser, document readers and document editors. Increasingly, Microsoft Office applications are among the most vulnerable and most widely used. These applications present a rich environment for attackers to exploit vulnerabilities. They also provide fertile ground for adversaries to dupe users into clicking on links and opening social applications such as Facebook and LinkedIn. As malware exploits those applications, the cyber adversary gains a foothold in the enterprise. The malware then has access to that machine, to the data on that machine, and to all network devices to which that machine is connected.

For example, two of the most widely reported recent attacks – on RSA and on the Iranian nuclear site – were initiated through the penetration of employees’ computers. In each case an infected transaction was inadvertently admitted. This enabled further attacks to proceed even though extraordinary security protection was already in place.

SUMMARY
Over the past few years it was believed that a breach admitted onto a desktop couldn’t be stopped. After-the-fact detection offered the only means of prevention. Reactive, list-based reject approaches could not stop direct threats. Intruders had to be detected first, but the question remained how to identify an intruder.

A new approach takes the most highly targeted, unprotected applications in a network (such as the Web browser, PDF reader, Office suite, .zip files and e-mail) and places them into a separate virtualized computer. Every time one of these applications is opened, or any time an attachment comes from outside the network, a completely separate virtual machine environment is created. By creating such environments, all malware – whether zero-day or already known – is tagged and prevented from using the host as a pathway for a breach. It remains completely isolated on its own VM.
When an infection is detected inside such a controlled environment, the user is alerted so that the tainted transaction can be discarded and the environment rebuilt to a clean state. Forensic details are then captured to feed such intelligence into the security infrastructure’s surveillance.
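
The per-task isolation described above can be sketched as the control flow below. The DisposableVM class is a hypothetical stand-in for whatever micro-virtualization product is chosen; it is not a real API, and the file name at the end is a made-up example.

```python
# Illustrative control flow for per-task isolation in disposable virtual machines.
# DisposableVM is a hypothetical stand-in for a vendor's isolation product, not a real API.
class DisposableVM:
    """Stand-in for a disposable micro-VM spawned for every untrusted task."""
    def __init__(self, template: str) -> None:
        self.template = template
        self.infected = False            # would be set by behavioral monitoring in a real product
    def open(self, item_path: str) -> None:
        print(f"Opening {item_path} inside an isolated VM ({self.template})")
    def capture_forensics(self) -> dict:
        return {"template": self.template, "infected": self.infected}
    def destroy(self) -> None:
        print("Discarding the VM; the host was never exposed.")

def open_untrusted(item_path: str) -> None:
    vm = DisposableVM(template="hardened-desktop")   # a fresh VM for every item
    try:
        vm.open(item_path)                           # browser tab, PDF, Office doc, .zip, e-mail
        if vm.infected:                              # any malware stays confined to this VM
            print("Tainted item detected:", vm.capture_forensics())
    finally:
        vm.destroy()                                 # nothing persists to attack the host

open_untrusted("quarterly_report.pdf")               # hypothetical incoming attachment
```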

Achieving this kind of anti-phishing protection will require a massive conversion of millions of existing DoD desktop and mobile devices to operate through Virtual Device Interfaces (VDI).

The Navy’s NGEN Program is Contrary to DoD Policy


The original $8.8 billion NMCI contract with EDS expired in September 2010. Three years before that, the NGEN (Next Generation Enterprise Network) replacement program was launched. By 2011 NGEN had spent $432 million on work preparing for the transition. With NGEN now scheduled to start in 2014, the original NMCI contract has meanwhile been supplemented by an additional $5.5 billion.

Does NGEN live up to the promise of providing the Navy and the MC with information superiority that meets 21st century requirements? We do not think so. What we are getting is a rehash of what is now an obsolete approach.

When EDS took over NMCI, it assembled thousands of disaggregated networks and turned them into a unified program with a common level of security and service. However, the Navy and the MC did not gain an understanding of what made up NMCI because EDS held that knowledge. There was no understanding of what the system cost, how many people it took to run it, and what its contributions to users were. When the continuity-of-services contract took over in 2010, NMCI was broken up and divided into services. One segment was transport: the wires, fibers, routers and switches on the bases and local area networks. For wide area networks, DISA offered services. Help desk, e-mail, data centers, video teleconferencing, voice-over-IP and the deployment of end-user devices were organized separately. In addition there was a hardware segment, mostly the hardware on people’s desks, as well as a software segment that delivers the software for end-users and the software required to operate the network. As a result, NMCI was broken into 38 services. The total cost of NMCI was finally known, but still without a comprehension of how it all fit together.

The objective of NGEN is not only to know what the pieces of NMCI cost but also how they fit together, which should make it possible to compete the separate pieces and parts. The forthcoming RFP competes the transport and enterprise services portions separately. This includes a 35% share of the total award for small business, which further complicates the entire bidding process.

The latest version of the NGEN RFP was released on May 9, 2012, allowing bidders additional weeks for questions. Final proposals are due July 18. The   source selection process will then proceed to completion in February 2013. At that time, there will be a contract award to begin executing the transition plan starting in April 2014.

The Navy and the MC have already purchased the lion’s share of the infrastructure—routers, switches, cables, as well as computer hardware—to reduce costs. The government now owns the infrastructure as well as the NMCI intellectual property describing how the network operates. A corpus of 450,000 documents is available to bidders to guide their directions. By owning the infrastructure, by purchasing the government rights to the NMCI intellectual property and by making that intellectual property available to industry, NGEN is now largely defined to operate similarly to NMCI.

SUMMARY
NGEN proposes to replicate the NMCI infrastructure. It imitates the NMCI operating methods. It hopes to reduce server costs through virtualization, which can deliver only minor savings. Despite its age of over 15 years, this approach does not represent innovation but a reversion to cold-war thinking.
NGEN directions deviate from the following small selection of the OSD strategic directions:

1. Individual programs will not design and operate their own infrastructures to deliver computer services. NGEN persists in operating a program-level infrastructure.
2. DoD will operate an enterprise-level cloud-computing infrastructure. NGEN will be used only by the Navy/MC.
3. DoD will make it possible to rapidly construct and then deploy applications. NGEN has been broken up into 38 separate services. This requires extensive integration before a new application can be launched.
4. Global data and cloud services will be available regardless of any DoD access point or device. NGEN will be built to support primarily the Navy and MC.
5. The OSD CIO will be responsible for the Enterprise Architecture that will define how the DoD cloud is designed, operated and consumed. NGEN is architected to imitate NMCI.
6. DoD will implement enterprise file storage to enable global access to data by any authorized user, from anywhere and from any device. Enterprise-level data interoperability is not a NGEN objective.
7. DoD-wide computing will not be limited to Components but will extend to others, such as the rest of the Federal government, mission partners and commercial vendors. Universal connectivity is not included in NGEN.

The latest DoD “Cloud Computing Strategy” mandates system implementation that differs from the directions that NGEN is taking. NGEN is continuing with a relatively low level of funding because it preserves much of the current NMCI infrastructure and does not make a decisive commitment to the major cost reductions available through cloud computing. To conform to OSD directions, NGEN will have to re-examine its current approach.

Consolidate Count of Applications, not Count of Data Centers


The Army will achieve much bigger savings from eliminating application duplication and from preparing applications for movement to cloud computing than from physical data center consolidation. To date, the Army has identified 16,000 applications running at its posts and camps. The challenge is working out how to devise application modernization and consolidation.

So far the Army’s data center consolidation efforts have been a “forklift operation,” moving servers from one location to another. That is costly but without demonstrable payoffs.

The Army will not show major savings from cutting data centers by merely relocating servers. The savings are in eliminating the duplicated maintenance and support costs of local applications, usually performed by local contractors.

Just how many data centers the Army has is still unknown. It has been estimated that the Army currently has about 500 data centers, where a center is defined as a facility of 300 square feet or larger fully devoted to data processing. A data center is now defined as any closet, room, floor or building for the storage, management and dissemination of data and information.

SUMMARY
The costs of IT are not in servers, which are not expensive, but in the expense for support and maintenance labor. Any effort that concentrates on the numerical elimination of the data center count – especially if that count is magnified through changing definitions – will lead to misleading conclusions. Data center elimination should be measured by the reduction in total operating costs, not by counting installations. This makes application consolidation a much greater challenge, as code has to be transported into virtual computing that can accept compatible policy implementations, such as security.

What is the Age of DoD Silos?


Last month we reported that there were 2,904 separately funded FY12 IT budgets. Many of these are set up to operate their own, incompatible networking, storage, servers, operating systems, middleware or control commands.

Silos have a long history in DoD. They often stretch over decades. During a long development time they create distinctive formats that keep reducing interoperability with other solutions. The enclosed tabulation of some long-term projects, which account for 18% of total IT spending, illustrates spans of up to 35 years during which program managers and contractors keep developing silo-specific features.


Over a decade, any IT investment will lock in unique codes and interface formats. Programs will be continually re-written to keep up with changing information technologies. Maintaining connections will require continuous modification of the supporting infrastructures of two or more silos. Format translations and compatibility bridges for files will have to be constructed and maintained. That adds large amounts of support cost and increases the problems of maintaining security. Contractors will be kept permanently busy just keeping the various reporting arrangements consistent.

DoD cannot afford to support the continuous stream of maintenance costs associated with decades-long software development cycles. As an immediate remedy, it is now in a position to acquire software that will accelerate the adoption of Infrastructure-as-a-Service (IaaS) solutions. There are now hundreds of cloud services firms that offer such technologies, though they range from proprietary (such as Amazon, Microsoft and Google) to open source (such as VMware) solutions.
Instead of keeping up separate infrastructures for each silo, DoD can start migrating to a much smaller number of infrastructures. This can be done by evolutionary migration. Each legacy silo application can be “encapsulated” into a virtual package so that it can run on its own virtual computer.
Such virtual computers can take advantage of pools of shared servers, of shared disk memory and of a shared communications environment. Capacity utilization will then increase, as illustrated in the sketch below. Security policies will be enforceable across an entire range of virtual computers. The conversion to IaaS services will become one of the principal means of delivering the projected reduction in the number of data centers that is presently receiving widespread attention.
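
The utilization gain from pooling can be illustrated with a small calculation. The workload sizes and server capacity below are invented numbers; the packing logic is a simple first-fit placement, not any particular product’s scheduler.

```python
# Illustrative arithmetic: consolidating encapsulated silo workloads onto a shared pool.
# Workload sizes and server capacity are invented for illustration only.

SERVER_CAPACITY = 100                                # arbitrary capacity units per server
silo_workloads = [12, 35, 20, 8, 27, 15, 40, 10]     # average load of each encapsulated app

# Before: each silo runs on its own dedicated server.
dedicated_servers = len(silo_workloads)
dedicated_utilization = sum(silo_workloads) / (dedicated_servers * SERVER_CAPACITY)

# After: workloads are packed onto a shared pool (simple first-fit placement).
pool = []                                            # remaining capacity of each pooled server
for load in sorted(silo_workloads, reverse=True):
    for i, free in enumerate(pool):
        if free >= load:
            pool[i] -= load
            break
    else:
        pool.append(SERVER_CAPACITY - load)          # start a new pooled server

pooled_utilization = sum(silo_workloads) / (len(pool) * SERVER_CAPACITY)

print(f"dedicated: {dedicated_servers} servers at {dedicated_utilization:.0%} utilization")
print(f"pooled:    {len(pool)} servers at {pooled_utilization:.0%} utilization")
```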

Though a reduction in the number of data centers will cut expenses for brick-and-mortar facilities as well as for managerial overhead, the major gains will come from the pooling of processing capacity for better utilization and from the sharing of disk memories, which will reduce the required disk space. Hard-to-quantify gains will also come from the consolidation of security services. Formerly costly means of security enforcement, such as expert manpower and specialized security appliances, will now be available for more consistent control of security measures.

Instead of elongating project schedules for individual silos, DoD should be able to evolve in very short order to a much smaller number of enterprise infrastructures, each subject to central controls for assuring data and communications interoperability. Existing silo budgets will have to curtail further spending on separate infrastructures in order to fund a much smaller number of pooled enterprise solutions. Such a migration could start in the next fiscal year by shifting the processing of parts of some applications from legacy silos to a limited number of commercial public clouds, from where they would be supported as “hybrid cloud” solutions without users seeing much of a difference. After sufficient experience is gained, parts of such solutions could then relocate into DoD-owned and -operated private clouds.

DoD will have to find a method for shifting funds from increasingly obsolete legacy programs to Joint enterprise projects that offer a shared infrastructure. Such a move will allow DoD components to concentrate on applications, but without their prohibitively expensive custom-made infrastructures.
The original 1992 intent in creating DISA was to make it a shared provider of enterprise services. The fiscal mechanism for delivering such services has long been available in the form of working capital funds, which can be used to charge individual users not through allocations of fixed costs but through fees for services used. Transaction-based pricing will have to be instituted in this new environment so that components can make competitive comparisons as they shift cloud workloads between cloud services in a hybrid environment.
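
To make the pricing distinction concrete, the sketch below contrasts fixed-cost allocation with transaction-based, fee-for-service chargeback. The rates, transaction counts and component names are invented for illustration only and do not represent actual DoD figures.

```python
# Illustrative comparison of fixed-cost allocation vs. transaction-based chargeback.
# All dollar figures, rates and usage counts below are made-up examples.

TOTAL_FIXED_COST = 9_900_000_000            # e.g. an annual infrastructure budget to recover
COMPONENT_USAGE = {"Army": 40_000_000,      # transactions consumed per year (hypothetical)
                   "Navy": 25_000_000,
                   "Air Force": 35_000_000}
FEE_PER_TRANSACTION = 75.00                 # hypothetical working-capital-fund rate

def allocation_charges(total_cost: float, usage: dict) -> dict:
    """Split the total cost evenly across components, regardless of actual consumption."""
    share = total_cost / len(usage)
    return {name: share for name in usage}

def transaction_charges(fee: float, usage: dict) -> dict:
    """Charge each component only for the transactions it actually consumed."""
    return {name: fee * count for name, count in usage.items()}

print(allocation_charges(TOTAL_FIXED_COST, COMPONENT_USAGE))
print(transaction_charges(FEE_PER_TRANSACTION, COMPONENT_USAGE))
# Under fee-for-service, a component that shifts workload to a cheaper cloud sees its
# bill fall, which is what makes competitive comparisons between services possible.
```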