
The Advent of OpenFlow Protocols

The Open Networking Foundation (ONF) has just been organized to create protocols that would make it possible for firms to control the processing of Internet transactions on switches and routers. ONF includes leading firms involved in Internet networking, such as Broadcom, Brocade, Ciena, Cisco, Citrix, Dell, Deutsche Telekom, Ericsson, Facebook, Force10, Google, Hewlett-Packard, I.B.M., Juniper, Marvell, Microsoft, NEC, Netgear, NTT, Riverbed Technology, Verizon, VMWare and Yahoo. *

The new ONF protocol, named “OpenFlow,” will radically change how communication networks operate in the future. The protocol will become an industry standard as an add-on feature to the existing IPv4 and IPv6 protocols. OpenFlow code will also be embedded in network controllers, switches and routers. The first router that uses OpenFlow is already in place. Prototype installations have been running at over a dozen universities since 2008.

The objective of ONF is to make networks programmable in much the same way that individual computers can be programmed to perform specific tasks. This represents a major departure from the current approach, in which Internet switches and routers are pre-defined and cannot be modified to accommodate dynamic fluctuations as the traffic on networks keeps changing. OpenFlow focuses on controlling how packets are forwarded through network switches and routers.

In the past, one key component of any system could not be programmed: the network that connected the computing nodes. Under OpenFlow it will now be possible to customize networks to the applications that are actually being run.

OpenFlow protocols and associated software should open up the hardware and software systems that control the flow of Internet data packets, systems that have until now been closed and vendor proprietary. This should trigger a new round of innovation focused principally on emerging cloud computing systems, which require a variety of network services that are not currently available.

For instance, OpenFlow will permit setting up on-demand “express lanes” for voice and data traffic that is mission critical. Software will allow several fiber optic backbones to be combined temporarily for particularly heavy information loads and then have the circuits automatically separate when a data rush hour is over. Another use of OpenFlow will be load balancing across an entire network, so that diverse data centers can shuttle workloads among themselves and performance does not deteriorate.

OpenFlow will be an open interface for remotely controlling the forwarding tables in network switches, routers, and access points. Based on such capabilities, user firms will be able to build networks with a much wider scope, especially networks involving wireless communications. For example, OpenFlow will enable more secure default fail-overs, wireless networks with smooth handoffs, scalable data centers, host mobility, more energy-efficient allocation of resources and the ready deployment of improvised new networks.
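
To make the idea concrete, the sketch below shows in plain Python what a flow-table entry and its lookup might look like conceptually. The field names, priorities and port numbers are illustrative assumptions; this is not the OpenFlow wire protocol or any controller’s actual API.

    # Illustrative sketch of an OpenFlow-style flow table (not a real controller API).
    # A controller pushes match/action rules; the switch applies the highest-priority match.

    FLOW_TABLE = [
        # Give mission-critical voice traffic an "express lane" out port 1
        {"match": {"ip_proto": "udp", "dst_port": 5060}, "action": ("forward", 1), "priority": 100},
        # Default rule: send unmatched packets to the controller for a decision
        {"match": {}, "action": ("to_controller", None), "priority": 0},
    ]

    def lookup(packet):
        """Return the action of the highest-priority rule whose match fields
        all appear in the packet with the same values."""
        for rule in sorted(FLOW_TABLE, key=lambda r: -r["priority"]):
            if all(packet.get(k) == v for k, v in rule["match"].items()):
                return rule["action"]

    print(lookup({"ip_proto": "udp", "dst_port": 5060, "src": "10.1.1.5"}))   # ('forward', 1)

The point is that these rules live in software and can be rewritten by a remote controller at any time, which is what makes the network programmable.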

SUMMARY
OpenFlow warrants immediate attention even though large-scale implementation may be three to five years away. It will alter the architecture of networks such as the GIG to a significant extent. The programmability of networks will change the role of the GIG from a mere transmission medium to a component that becomes an active part of the design for DoD. Such planning should start soon. OpenFlow-capable equipment will have to be provided for any acquisition that will have a life of well over ten years.


* http://www.nytimes.com/2011/03/22/technology/internet/22internet.html?_r=1&partner=rss&emc=rss

Client Based or Server Based Security?

According to the DoD CIO the Secret Internet Protocol Router Network (SIPRNET) connects approximately two thousand DoD locations, with up to 500,000 users. Every SIPRNET connection is physically protected and cryptographically isolated. Each authorized user must have a SECRET-level clearance or higher. *

Following the unauthorized release of classified documents (e.g. WikiLeaks), the Director of the Defense Intelligence Agency stood up an Information Review Task Force to assess the security of DoD SIPRNET data. The task force found that units maintained an over-reliance on removable electronic storage media; that the processes for reporting security incidents were inadequate; and that there was a limited capability to detect and monitor anomalous behavior (e.g. exfiltration of data).

DoD is now proceeding with the installation of the Host Based Security System (HBSS), provided by COTS vendors, by June of 2011. This will provide central monitoring and control over all computers and their configurations. HBSS includes a Device Control Module (DCM), which can be used to disable the use of all removable media. However, 48,000 to 60,000 computers will be exempted from DCM restrictions and will be able to continue relying on removable media.

DoD will also continue to issue a Public Key Infrastructure (PKI)-based identity credential on a hardened smart card, which is more robust than the Common Access Card (CAC) used on unclassified networks. The PKI cards require positive identification of anyone who is accessing data. This rollout is to be completed by mid-2013.

The key to HBSS, and also its weakness, is the ultimate dependency on a human policy auditor who sets up the access restriction policies. The policy auditor will receive real-time messages to aid in the recognition of any actions outside the limits set by policy.

 Despite the strengthening of controls, the detection of insider compromises still depends on audits performed by human operators located at many separate SIPRNET locations.

The problem is how to identify selected events as “anomalies” against security policies, that is, as indicators of questionable behavior. Though the implementation of HBSS strengthens the monitoring of information retrieval, tens of thousands of individuals will still have the potential to exfiltrate classified information through “write” actions that are not, strictly speaking, unauthorized. How to identify such instances remains an unresolved challenge. HBSS may provide more tools, but it cannot prevent another WikiLeaks incident from happening.
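
As an illustration of the kind of rule a Policy Auditor might lean on, the sketch below flags hosts whose daily removable-media “write” volume far exceeds their own historical baseline. The hostnames, volumes and threshold are invented for illustration; HBSS itself does not work this way out of the box.

    # Hypothetical policy check: flag hosts whose daily "write" volume to removable
    # media far exceeds their own historical baseline. Thresholds are illustrative.

    from statistics import mean, stdev

    def flag_anomalies(daily_mb_by_host, history_by_host, sigmas=3.0):
        """Return hosts whose write volume today exceeds baseline mean + sigmas * stdev."""
        flagged = []
        for host, today_mb in daily_mb_by_host.items():
            history = history_by_host.get(host, [])
            if len(history) < 2:
                continue  # not enough history to establish a baseline
            threshold = mean(history) + sigmas * stdev(history)
            if today_mb > threshold:
                flagged.append((host, today_mb, round(threshold, 1)))
        return flagged

    history = {"host-17": [20, 35, 25, 30, 28], "host-42": [5, 4, 6, 5, 7]}
    today = {"host-17": 22, "host-42": 900}    # host-42 wrote 900 MB today
    print(flag_anomalies(today, history))      # flags host-42, not host-17

Even a rule this simple still needs a human to decide whether the flagged activity is a legitimate mission task or an exfiltration attempt, which is exactly the dependency noted above.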

SUMMARY
The implementation of HBSS imposes large additional costs on DoD operations. The installation and maintenance of HBSS software is labor intensive. It is a task that adds to the work of the hundreds of contractors that already maintain SIPRNET configurations. That cannot be done without more funding and without additional headcount.

Training is not trivial; HBSS requires at least a two-day course. Staffing HBSS Policy Auditor positions that watch operations around the clock will require incremental staff with a higher grade of skills. Personnel records will have to be expanded to include descriptions of what tasks an individual is permitted to perform and under what conditions, which is subject to frequent change. Administrative policies and processes will have to be put in place to determine who should (and should not) have access.

Clearly, HBSS is a costly short-term “patch” on an already overburdened system. Whether DoD will be able to staff and then deploy a sufficient number of Policy Auditors has not yet been included in plans for FY12-13. How the auditors will be supplied with the intelligence necessary for the discovery of anomalies is yet to be established.

So far the HBSS fix has not examined how to evolve from the currently proposed short-term improvement to a longer-term solution. Instead of relying on each SIPRNET enclave to set up its own rules for auditing its desktops and laptops, a “cloud” method for the management of secure networks offers lower cost options.

The key to such an approach is to give up on adding more COTS software to already overburdened desktops. HBSS is a very expensive and manpower-intensive desktop-centric solution. It is unlikely to be implemented by FY13 on a timely basis because of budget and organizational limitations.

Instead of adding HBSS to tens of thousands of existing desktops, security monitoring and auditing should be relocated to a few hundred virtual servers on a secure private cloud. Central policy can then be administered more economically and consistently from a handful of Network Control Centers (NCC). Specialized headcount at the NCCs can operate with less manpower and with much greater reliance on advanced diagnostic software.

Security should be added to a few servers, rather than to many desktops. Cloud servers can host sophisticated surveillance software more effectively and can be deployed much faster.  


* http://hsgac.senate.gov/public/index.cfm?FuseAction=Hearings.Hearing&Hearing_ID=0c531692-c661-453a-bc97-654be6eb7d00

Why the GIG Warrants Top Priority

The debate about the future of DoD networks, and how they can be delivered over the Global Information Grid (GIG), revolves around the question of whether the GIG could rely on Internet-based connectivity. The scope of the GIG is enormous: it encompasses over 10,000 routers and 10 million hosts, including wireless connections. It includes a wide range of nodes and link types as well as human-portable and battery-powered devices. The GIG provides capabilities from all operating locations (bases, posts, ships, camps, stations, facilities, mobile platforms, and deployed sites). The GIG also provides interfaces to coalition, allied, and non-DoD users and systems.

The GIG overarching policy makes clear that its objectives are all-inclusive of every application, anywhere.  It does not accept the acquisition of IT capabilities as stand-alone systems. It rejects designs that are defined, engineered, and implemented one pair at a time – an approach that focuses on system or platform capabilities rather than on mission capabilities.

Instead, all DoD systems shall be based on a shared GIG. It will be built on a common communications and computing architecture that provides a full range of information services for all security classifications and information handling needs. *

GIG data shall be shared and exchanged through common interoperable standards based on the IPv6 common network protocol. This will allow all types of data to move seamlessly over the GIG’s diverse transport layer, which includes landline, radio, and space-based elements. It means that every network link in DoD must be interoperable from a protocol standpoint. That is not the case at present; the diversity of the existing network protocols is not known.

The GIG supports mission-critical operations. For complete security it must use IPv6 security formats (e.g. IPsec). Connectivity to the GIG depends not only on land circuits and wireless links but also on Radio Frequency (RF) and satellite connections. The GIG must be a trusted network comprising units in the field (e.g. army companies or ships) that need to be seamlessly connected to the GIG even while they are mobile.

The GIG will operate in accordance with common metrics, measurements, and reporting criteria. ** Originally, the GIG was conceived as a federation; that is, ownership, control, or management of the GIG (people, processes, and hardware/software) was distributed throughout the DoD. *** That approach did not work, though it was reaffirmed by Instructions from the Chairman of the Joint Chiefs of Staff in December 2008. Instead, the implementation of the GIG concept is now in the hands of USCYBERCOM.

In planning the evolution to the long-term GIG objectives does DoD really require a totally enclosed Intranet (such as NMCI) to assure its security in the interim? Is it possible to make the Internet sufficiently secure so that the costly acquisition of a variety of dedicated circuits, such as for the NGEN transition, is not necessary? Does it make sense for the individual services to continue with the contracting for dedicated networks that provide services to only a limited set of applications?

Though dedicated transmission lines can speed up communications and reduce the latency of transmissions between major hubs, the costs of connecting all DoD locations must be planned as part of the GIG and not on a stand-alone basis. The vulnerability of the Internet to security compromises (which include the corruption of LANs, WANs, intermediate switches and routers) is well understood. DoD will therefore have to resort to specially configured VPNs (Virtual Private Networks) to protect its transmissions. This cannot be done for local networks alone, but must be engineered so that the protocols can be imposed universally.

DoD must develop and install DoD-specific VPN implementations because VPNs are a method for using the existing Internet infrastructure to provide secure access to every IP address. This avoids the expense of implementing dedicated networks that carry only DoD traffic and that work only for individual contracts. The public Internet offers enormous redundancy, with highly distributed links that can overcome local circuit failures in a way that cannot be achieved economically by other means. The Internet is more resilient against failure than any Intranet that could be designed, except at an exorbitant cost. However, any reliance on the Internet must first be engineered for enhanced security that is approved by NSA, and only then imposed as a uniform GIG solution.

A DoD version of a VPN will encapsulate all transactions between any two points using NSA-approved cryptographic methods. Cryptography will then keep all data transfers private from intrusion by any other internal or external source. It will also safeguard against security breaches that could happen during transmission, until a transaction reaches its final destination where it can be decrypted.
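
The encapsulation idea can be illustrated with a few lines of Python using the third-party cryptography package: the payload is encrypted at the sending edge and only decrypted at the receiving edge, so the untrusted backbone carries nothing but ciphertext. This is only a conceptual sketch; an actual GIG VPN would rely on NSA-approved IPsec suites and PKI-managed keys, not this library.

    # Minimal sketch of the encapsulation idea behind a VPN tunnel. Intermediate
    # routers and switches see only ciphertext.

    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()     # in practice, keys come from PKI, not in-band
    sender = Fernet(shared_key)
    receiver = Fernet(shared_key)

    plaintext = b"SIPRNET transaction payload"
    ciphertext = sender.encrypt(plaintext)           # what the untrusted backbone carries
    assert receiver.decrypt(ciphertext) == plaintext  # recovered only at the trusted edge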

There are several different classifications, implementations and uses for VPN solutions, which include compliance with additional restrictions set by NIST and validated by the NSA. There are standard protocols that determine how the “tunneling” of traffic takes place and how it can be inspected by DoD Network Control Centers. There is code that DoD will have to add to support intrusion-proof procedures end to end, throughout the entire transmission sequence. The tunnel’s termination point, i.e., the customer edge, will finally authenticate the legitimate recipient while still remaining subject to USCYBERCOM controls.

The most important requirements of a VPN are cryptographic protocols that block intercepts and that allow sender as well as recipient authentication to preserve message integrity. These include IPsec (Internet Protocol Security), which was originally developed as a requirement for IPv6. Until that protocol is implemented (see http://pstrassmann.blogspot.com/2011/02/are-ipv4-addresses-exhausted.html), the IPv4 Layer 2 Tunneling Protocol could be used as a substitute, though that is not recommended.

SUMMARY
VPNs play a central role in the GIG, the combined network-of-networks being developed not only by DoD but also by other US government agencies to support the communication needs of the security, defense and intelligence communities. The GIG architecture can be viewed as having two main components: trusted edge networks and a large backbone core consisting of a combination of both trusted and untrusted network segments. In order to achieve privacy and integrity of the data crossing the backbone, edge networks must use consistent VPN gateway protocols to encrypt traffic as it passes through thousands of routers and switches.

VPNs will reduce network costs because they avoid dependence on the dedicated lines that connect offices to private Intranets during the transition from the current state to the point where the GIG will ultimately provide all connections.

Meanwhile, DoD will require the use of dedicated fiber optic connections primarily for back-up and fail-over traffic among data centers. DoD may also find it advantageous to acquire dedicated fiber optic links to servers “on the edge” as a way of reducing the number of “hops” that the public Internet imposes.  Whether such connections are acquired for exclusive DoD uses is a matter of economics as well as of security. In any case, such links will all have to be subjected to the discipline dictated by standard GIG protocols.

Meanwhile, the DoD is struggling to assure its minimum acceptable network security. When asked, in Congressional testimony, how he would grade the U.S. military's ability to protect its networks, Gen. Keith Alexander, commander of U.S. Cyber Command, said he would give it a “C”. For an essential combat capability nothing but an “A+” should be acceptable.  ****

When one examines the priority of all of the issues that affect the conduct of IT in DoD there is no question that proceeding with the implementation of the GIG is on the top of any list of actions that warrant the greatest attention.

A personal note: DISA's role in DoD information management expanded with the implementation, in September 1992, of several Defense Management Report Decisions (DMRD), most notably DMRD 918. DMRD 918 created the Defense Information Infrastructure (DII), now more commonly understood as the GIG. Strassmann was one of the principal authors of DMRD 918.


* DoD Directive 8100.1, 11/21/2003
** DoD Instruction 8410.02, 12/19/2008
*** cio-nii.defense.gov/docs/GIGArchVision.pdf
**** http://defensesystems.com/articles/2011/03/17/cyber-command-head-rates-military-cyber-defense.aspx?admgarea=DS



Cloudonomics Principles

A small New Zealand firm has just published a list of “cloudonomics” principles. The list offers useful guidance on how to think about the adoption of cloud computing. * The following is a redacted version of “cloudonomics”:

Cloudonomics Principle #1
Cloud services cost less even though their unit costs are more.
Although cloud services’ unit costs are higher when they are used, they cost nothing when they are not used.
Customers save money by replacing fixed infrastructure with clouds because all workloads are spiky.
The savings apply whenever the peak-to-average load ratio is greater than the utility premium.
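
A back-of-the-envelope comparison shows how that arithmetic works. The prices and hourly loads below are invented purely for illustration.

    # Owning capacity sized for the peak versus renting cloud capacity sized to
    # the actual load. All figures are made up.

    OWNED_COST_PER_SERVER_HOUR = 0.10   # amortized cost of owned, always-on capacity
    CLOUD_COST_PER_SERVER_HOUR = 0.25   # higher unit price, but billed only when used

    hourly_load = [20, 25, 30, 200, 180, 40, 25, 20]   # servers needed, hour by hour

    owned_cost = max(hourly_load) * len(hourly_load) * OWNED_COST_PER_SERVER_HOUR
    cloud_cost = sum(hourly_load) * CLOUD_COST_PER_SERVER_HOUR

    print(f"own for the peak: ${owned_cost:.2f}, rent on demand: ${cloud_cost:.2f}")
    # Here the peak-to-average ratio (200 / 67.5, about 3) exceeds the utility
    # premium (0.25 / 0.10 = 2.5), so on-demand comes out cheaper: $135 vs $160.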

Cloudonomics Principle #2
On-demand trumps forecasting. 
Short-term forecasting is often wrong. 
Long-term forecasting is always wrong.
The ability to scale workload up and down to meet unpredictable demand allows for cost optimization.

Cloudonomics Principle #3
The peak of a sum of workloads is never greater than the sum of the peaks.
Individual enterprises deploy capacity to handle their peak demands. Under this strategy, the total capacity deployed is the sum of these individual peaks.
However, since clouds can reallocate resources across many enterprises with different peak periods, a cloud needs to deploy less capacity.
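
A tiny numeric example, with made-up workloads whose peaks fall in different hours, makes the point:

    # Principle #3 in numbers: three workloads, peaks at different times.

    a = [10, 80, 10, 10]
    b = [10, 10, 90, 10]
    c = [70, 10, 10, 10]

    sum_of_peaks = max(a) + max(b) + max(c)                    # capacity if each owns its own
    peak_of_sum = max(x + y + z for x, y, z in zip(a, b, c))   # capacity a shared cloud needs

    print(sum_of_peaks, peak_of_sum)   # 240 vs 110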

Cloudonomics Principle #4
Average unit costs are reduced by distributing fixed costs over a larger number of units of output. Larger cloud providers can therefore achieve economies of scale.
Superiority in numbers is the most important factor in safeguarding cyber security. 
Cloud operations coupled for fail-over have the scale and the personnel to fight rogue attacks.

Cloudonomics Principle #5
Organizations derive competitive advantage from responding to changing business conditions faster than the competition.
With cloud scalability and instant availability of capacity at comparable cost, a business can accelerate its information processing and decision-making.
The latency (response time) of transactions depends on the dispersion of processing sites.
Reduced latency is increasingly essential in cyber operations (less than 200 milliseconds).
A cloud computing provider is able to field more distributed computing nodes (servers on the edge), and hence deliver lower latency, than an individual enterprise would want to deploy.
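
Rough propagation arithmetic shows why dispersion matters; the figures below assume light in fiber travels at roughly 200,000 km/s and ignore every other source of delay.

    # Propagation delay alone grows with distance to the nearest processing site.

    def round_trip_ms(distance_km, speed_km_per_s=200_000):
        return 2 * distance_km / speed_km_per_s * 1000

    for km in (100, 2_000, 10_000):
        print(f"{km:>6} km: ~{round_trip_ms(km):.0f} ms round trip")
    #    100 km: ~1 ms    2000 km: ~20 ms    10000 km: ~100 ms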

Cloudonomics Principle #6
The reliability of a system increases with the addition of redundant, geographically dispersed components such as data centers. Cloud Computing vendors have the scale and diversity to do so.
A data center is a very large fixed facility. Data centers tend to remain in locations for a variety of reasons such as where the company was founded, where they got a good deal on property or where politics dictates it. 
A Cloud service provider can locate sites optimally.

SUMMARY
The consolidation of over 100 DoD data centers into the current number in DISA gave little consideration to aggregating these facilities into workload-sharing operations. At the time (1992), software and operating standards were not available. As one of the principal policy makers who drove that data center consolidation (DMRD 918), the author can now reconsider what the next step in DoD data center management should be.

The “cloudonomics” principles proposed by an astute researcher offer useful guidance for everyone who is planning to execute the Federal CIO’s guidance on data center consolidation.

* http://diversity.net.nz/papers/

Benchmarking Virtual Cloud Services

Virtualization makes it possible to run multiple operating systems on a single server. It reduces capital costs by increasing efficiency: less hardware is required and fewer administrative personnel are needed. It ensures that applications will perform with the highest availability and performance. It enables business continuity through improved disaster recovery. It delivers high availability throughout the datacenter. It improves desktop management and desktop control, with faster application deployment and fewer support calls due to application conflicts.

The leading firm in virtualization is VMware. The leading consulting firm, Gartner, offers the following comparisons of the relative strength of companies that sell virtualization software licenses:


Terremark is a leading VMware customer and publishes prices for its vCloud Express service. Terremark is a $350 million provider of IT infrastructure services with twelve datacenters in the United States, Europe and Latin America. Its rates can be used as a benchmark for comparisons with other cloud data center rates (such as those from DISA). The unit list prices are as follows:


What, then, are the potential savings that can be gained from the virtualization of 10,000 servers? The Total Cost of Ownership (TCO) calculator will be used to make the computations. *

Virtualization enables a substantial reduction in the number of servers, with consequent reductions in operating manpower, energy and infrastructure costs. A 75% cost reduction over a five-year period is achievable, with a break-even time of less than one year.
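
The arithmetic behind such a claim can be sketched as follows, using invented absolute dollar figures; the actual numbers come from the TCO calculator cited below.

    # Back-of-the-envelope check on "75% over five years, break-even under a year".
    # All dollar amounts are illustrative assumptions, not calculator output.

    annual_cost_before = 80.0   # $M/year to run 10,000 physical servers (assumed)
    annual_cost_after = annual_cost_before * 0.25     # the claimed 75% reduction
    one_time_investment = 35.0  # $M for new hosts, licenses and migration (assumed)

    annual_savings = annual_cost_before - annual_cost_after
    break_even_years = one_time_investment / annual_savings
    five_year_net = 5 * annual_savings - one_time_investment

    print(f"break-even after ~{break_even_years:.1f} years, "
          f"five-year net savings ~${five_year_net:.0f}M")
    # break-even after ~0.6 years, five-year net savings ~$265M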


The largest cost reductions are realized from the elimination of Capital Expense (CapEx), as servers with more cores and more processors replace configurations that cannot operate at high levels of utilization because they cannot share computing power. Operating Expense (OpEx) reductions are found almost entirely in the reduction of personnel. “Other” cost reductions come from substantial reductions in the cost of electricity for computers and air conditioning.


 
A further breakdown of cost reductions is shown in the following table. There is also a reduction in the number of desktops, since the personnel headcount is also cut.


Power and cooling energy savings are highlighted in the following table:



SUMMARY
Server virtualization savings are attractive, though they represent only one stage in the process of migrating operations into a cloud. Without a plan that also coordinates the realignment of client devices and increases security and uptime reliability, the virtualization savings would carry an element of risk that is not present in ongoing operations. However, the most important issue concerns how to configure the new data processing environment. Virtualization would reduce the number of servers from 10,000 to only 521.

Where and how the new computers will be located requires rethinking the management of much smaller data centers operated by a much smaller complement of operating personnel.

Server virtualization should therefore be seen not merely as a technical means to achieve the consolidation of computing, but primarily as a managerial challenge: how to start the migration into the cloud environment.


* http://roitco.vmware.com/vmw/. This calculator is the result of work done by the Alinean Corporation. Strassmann was founder and member of the original Board of Directors.

Is the NMCI Replacement Too Risky?

The Government Accountability Office (GAO) has just published a report (GAO-11-150) noting that the first increment of the Navy Marine Corps Intranet (NMCI) replacement, the Next Generation Enterprise Network (NGEN), will cost about $15.6 billion, or $4.7 billion more than other alternatives.

The first increment of NGEN (for FY 2011-2015) will provide capabilities comparable to NMCI, with additions such as greater information assurance. It will offer greater DON network control, which will be taken back from existing contractors. The projected total cost of NGEN is about $50 billion through fiscal year 2025.

The key difference in the projected NGEN plans is the proposed delivery of this program through 21 contractual relationships, as compared with the 3 contractual relationships at this time. * NGEN will also be executed using large amounts of contract set-asides. **

The principal objections by the GAO are as follows:
1. None of the alternative plans matched the currently proposed program.
2. Cost estimates were not credible.
3. Measures for assessing the plans were insufficient.
4. The proposed implementation is riskier.
5. There is no reliable implementation schedule.
6. The program was approved despite a lack of requirements.

SUMMARY
The NGEN program was established in 2007 and has so far spent $432 million. It is supposed to provide the foundation for the future Naval Networking Environment (NNE), based on a shared enterprise architecture and common standards that have not yet been published.

The GAO report is deficient in that it does not comment on any performance metrics, such as uptime availability, latency or quality of service, which are key to evaluating whether NGEN is superior to NMCI.

Although the Navy disagrees with several GAO recommendations, including advice to stop further work on NGEN, the GAO report raises sufficient warnings to justify concerns about the current directions of NGEN.


* Reservations about multiple NGEN contracts are noted in http://pstrassmann.blogspot.com/2011/01/can-multiple-acquisitions-support-cyber.html
** http://pstrassmann.blogspot.com/2010/12/nmci-economics-for-next-five-years.html

DoD Social Media Policy Remains Unaltered

The Defense Department has just reauthorized the social media guidelines for another year [Directive-Type Memorandum (DTM) 09-026]. Accordingly, the NIPRNET will continue to be configured for easy access to insecure Internet-based offerings for several million computing devices.

This will include access to social media such as YouTube, Facebook, MySpace, Twitter and Google Apps. The DTM states that DoD commanders and Agency heads will continue defending their computers against all malicious activity.

Without prescribing how malware defenses will be applied, there is a question of how effective a generic DTM can be when it allows the widespread use of social media without specific guidelines on how to defend the networks.

The widespread use of social media can no longer be stopped or curtailed. In remote locations and on long rotations, the network time spent on social media can exceed the traffic for conducting DoD business operations. For troop morale, free access to social media is a necessity.

Without a defined policy on how to assure security, social media will continue to make DoD networks insecure. To demonstrate this vulnerability we will use the most pervasive social medium, Facebook, to illustrate the dangers to NIPRNET.

According to data from the security company BitDefender, there is harmful content behind about 20 percent of posts on Facebook news feeds. BitDefender said about 60 percent of attacks on Facebook stem from threatening third-party apps. * Most of the infectious software originates from thousands of independent developers who often sell such software for a fee. By clicking on infected links, users risk having all sorts of viruses downloaded to their computers. **

People who are tweeting can install, from their friends’ Facebook accounts, a variety of bots. These bots have access to all of the data of anyone connected to a hacked account. Facebook accounts can then be linked with more people in a social circle, opening up new opportunities for identity fraudsters to launch further attacks.  ***
  
In late October, a particularly malicious piece of malware called Koobface resurfaced on Facebook. Like the original strain of the Koobface virus, it spreads via Facebook messages. The messages usually have clickable subject lines like "Is this you in the video?" or something similar. When users click on such a message, they are brought to a third-party site where a link is waiting. Open the link and their computer turns into a zombie that can be commanded to execute more damaging procedures.

SUMMARY
With hundreds of data centers and thousands of servers, the attacks transmitted through social media can no longer be stopped. What is required now is a policy that dictates the technical means for isolating such attacks.

Social media transactions should be completely isolated and segregated. User displays should be partitioned so that all public Internet traffic communicates exclusively with dedicated servers. In this way infected communications will be shuttled into partitions from which further propagation of malware will not affect the conduct of DoD operations. However, such solutions will require a major overhaul of how networks are organized.
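
A minimal sketch of the segregation idea follows: classify each outbound request by destination and send social-media traffic to an isolated partition that never touches the servers carrying DoD business traffic. The domain list and partition names are assumptions for illustration, not DoD policy.

    # Illustrative routing decision for segregating public social-media traffic.

    SOCIAL_MEDIA_DOMAINS = {"facebook.com", "youtube.com", "twitter.com", "myspace.com"}

    def choose_partition(destination_host):
        """Route social media to a throwaway partition; everything else stays on
        the business partition."""
        root = ".".join(destination_host.lower().split(".")[-2:])
        if root in SOCIAL_MEDIA_DOMAINS:
            return "isolated-public-partition"
        return "business-partition"

    print(choose_partition("www.facebook.com"))   # isolated-public-partition
    print(choose_partition("mail.army.mil"))      # business-partition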

The use of social media by DoD personnel cannot be stopped. What is needed is an architecture that separates the insecure environment from the secure one, so that security can be assured.


* http://www.pcmag.com/article2/0,2817,2373281,00.asp
** http://www.bbc.co.uk/news/technology-11827856 
*** http://blogs.computerworld.com/17418/security_warnings_whether_or_not_you_plan_to_drink_and_drive_a_keyboard_this_weekend

 

Cyber Operations Without Satellite Connectivity

Every cyber operations contingency plan must include a case in which the use of most satellite connectivity is lost. Whether such a failure is caused by adversary intervention, malicious jamming or mechanical failure is irrelevant. Military operations depend on the availability of assured bandwidth. A contingency plan must also assume conditions of failure in which only degraded capacity is available to convey the minimum essential bandwidth. Partial communications must allow Network Control Centers to cut off low-priority transactions while keeping up SIPRNET, or to support only channels that deliver mission-specific messages.

There are several options that could be considered as a replacement for satellite channels:

1. Use of high-altitude vehicles. An example of this solution is the Global Observer Unmanned Aerial Vehicle (GOUAV) or its successors, including solar-powered planes. GOUAV has already conducted test flights. It will fly at an altitude of between 55,000 and 65,000 feet for 5 to 7 days. Longer times aloft are likely to be feasible in the future. That is above the weather and above sustained conventional air traffic. That height also helps because of the laws of physics, which allow an aircraft at that altitude to cover a circular area on the surface of the earth up to 600 miles in diameter.

The goal for Global Observer is a payload capacity of more than 1,000 pounds. It uses liquid hydrogen fuel and fuel cells that drive 8 small rotary engines. It can handle communications intercepts over a wide area. In the future it will have the capacity to augment the transmission of telecom bandwidth. It could be designed to act in lieu of GPS satellites, which are particularly vulnerable to very low cost jamming devices.  ** Multiple communication and remote sensing applications have already been demonstrated, including high definition broadcast (HDTV) video and third generation (3G) mobile voice, video and data using an off-the-shelf mobile handset.

The unit costs of loitering airplanes that act as store-and-forward stations make their deployment attractive. They are mobile and can be repositioned to avoid interference. Without crews, they are easy to replace. They can be launched anywhere in the world, as and when needed.
There are also other technologies available, such as the Boeing X-37B Orbital Test Vehicle, that could be used to place steerable communications platforms in orbit.

High-altitude vehicles (HAVs) offer many advantages over satellites, whether those are in low polar orbits (100 to 1,000 miles) or in stationary equatorial orbits 22,300 miles above the earth. HAVs can be launched quickly to support a military mission as needed, rather than holding a persistent spot in space. HAVs can be steered and can be recovered for technology enhancement.
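
The 600-mile coverage figure cited above can be checked with simple line-of-sight geometry: the radio horizon from altitude h is roughly the square root of 2Rh for Earth radius R, ignoring atmospheric refraction.

    # Line-of-sight check of the ~600-mile coverage diameter from ~60,000 feet.

    from math import sqrt

    R_miles = 3_959
    altitude_miles = 60_000 / 5_280          # about 11.4 miles up
    horizon = sqrt(2 * R_miles * altitude_miles)
    print(f"radius ~{horizon:.0f} miles, diameter ~{2 * horizon:.0f} miles")
    # radius ~300 miles, diameter ~600 miles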

2. Use of a combination of ground based and wireless “mesh networks”. Mesh networking is a type of networking where each node must not only capture and disseminate its own data, but also serve as a relay for other sensor nodes, that is, it must collaborate to propagate the data in the network. * In effect a wireless mesh network replicates the structure of the land-based and hard-wired Internet.

A mesh network can be designed using a routing technique: the message propagates along a path by hopping from node to node until the destination is reached. To keep all its paths available, a routing network must allow for continuous connections and reconfiguration around broken or blocked paths, using self-healing algorithms. As a result, the network is reliable, since there is often more than one path between a source and a destination, and those paths can include both wired and wireless links.

The nodes on a mesh can be stationary, such as transmission towers, buoys or aerostats, or mobile, such as ships or UAVs. The connectivity between mesh nodes can vary depending on bandwidth, antenna design and available power. For instance, High Frequency (HF) or microwave links can be set up so that a fleet of ships can be positioned in a way that assures delivery of transactions via mesh-like connections to their ultimate destination.

Mesh networks can also be deployed tactically. It is possible to deploy low-cost unmanned helicopters (such as the Navy’s MQ-8B Fire Scout) or long-endurance helicopters (such as the Boeing A160, with a 2,500-mile range). Swarms of these vehicles can be launched to act as store-and-forward communication nodes. At a moment’s notice they can offer instant connectivity for rapid-attack expeditionary forces.
Mesh networks have the advantage that they can be built as an extension of existing networks that use optical fibers. The construction of a mesh network can be done as an evolutionary program, without obsoleting legacy networks already in place. In this way they can function as redundant hybrid mesh connections, with existing ground links or HF ship-to-ship and ship-to-shore links routing transactions through the most cost-effective path.
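
The self-healing routing described above can be sketched in a few lines: compute a hop-by-hop path across the mesh, and recompute it when a link fails. The node names and topology are made up for illustration.

    # Minimal sketch of self-healing mesh routing: shortest path by hop count,
    # recomputed after a link failure.

    from collections import deque

    def route(links, src, dst):
        """Breadth-first search for a hop-by-hop path; returns None if unreachable."""
        graph = {}
        for a, b in links:
            graph.setdefault(a, []).append(b)
            graph.setdefault(b, []).append(a)
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])

    mesh = [("ship-A", "buoy-1"), ("buoy-1", "shore"), ("ship-A", "uav-7"), ("uav-7", "shore")]
    print(route(mesh, "ship-A", "shore"))                        # ['ship-A', 'buoy-1', 'shore']
    degraded = [l for l in mesh if l != ("buoy-1", "shore")]     # buoy link jammed
    print(route(degraded, "ship-A", "shore"))                    # reroutes via uav-7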

3. Orbital replacement for the NASA Shuttle. The second version of the Boeing/Air Force X-37B has been orbiting at an elevation below the International Space Station. Depending on whom you talk to, the space plane could be a prototype commando transport or (most likely) a spy satellite, though it can have a multipurpose mission because it can be steered. Potentially it could launch, repair or reposition US or other satellites in low orbit. It could sneak up on and disable or steal enemy satellites. Its pickup-bed-sized payload bay is particularly enticing to observers.
 
SUMMARY
The U.S. military cannot depend on satellites for support of its expeditionary forces. Even if degraded to the transmission of only essential messages, multiple and fully redundant communications paths must become available. Unfortunately, the existing proliferation of network connections costs too much. It is also not interoperable. Its use of the available spectrum is profligate. Making communication options available that dispense with most of the satellite links will require a different approach to network design. It will require funding that will not be available unless costs can be extracted from the existing communication arrangements (see the diagram below of antennas on an aircraft carrier).



Right now it is not apparent that the centerpiece of DoD’s networks – the Global Information Grid (GIG) – has provided for such eventualities. Taking on the task of providing satellite-less bandwidth for its ships and expeditionary forces should be a priority task for the Navy’s N2/N6 organization.



* http://en.wikipedia.org/wiki/Mesh_networking
** http://www.newscientist.com/article/dn20202-gps-chaos-how-a-30-box-can-jam-your-life.html?full=true

Why So Many Data Centers?

The U.S. Government Accountability Office (GAO), in its March 2011 report GAO-11-318SP, identified opportunities to reduce potential duplication in government programs. These programs are usually managed by separate bureaucracies and will most likely operate separate databases, separate servers and possibly separate data centers or server farms.

We have extracted from the GAO report a sample listing of duplicate government operations as an indicator of why the Federal Government has 2,094 Federal data centers in place, according to the CIO of the Federal Government, Mr. V. Kundra:

Fragmented Food Safety: 15 Agencies
DoD Organization to Respond to Warfighter Needs:  31 Departments
DoD Organizations for Improvised Explosive Detection: Several
DoD Organizations for Intelligence, Surveillance and Reconnaissance: Numerous
DoD Organizations for Purchase of Tactical Vehicles: Several
DoD Organizations for Prepositioning Programs: Several
DoD Business Systems Modernization: 2,300 separate investments
Fragmented Economic Development Programs: 80 separate programs as illustrated in the following table:


Fragmented Surface Transportation Programs: 100 separate programs
Fragmented Federal Fleet Energy Programs: in 20 separate Agencies
Fragmented Enterprise Architectures: in more than 27 major Agencies
Fragmentation of Federal Data Centers: 2,094 data centers
Fragmentation Data on Interagency Contracting: Excessive duplication
Ineffective Tax Expenditures and Redundancies: 173 major programs
Modernization of Electronic Health Record: Multiple efforts in VA and DoD
Integration with Nationwide Public Health: 25 major systems
Biodefense Responsibilities: 12 Agencies
FEMA Operations: 17 major programs
Arms Control and Nonproliferation: Two separate bureaus
Domestic Food Assistance Programs: 18 programs
Homelessness Programs: Over 20 programs
Transportation of Disadvantaged Persons: 10 Separate Agencies


Employment and Training Programs: 47 Separate programs
Teacher Quality Programs: 82 Separate programs
Financial Literacy Efforts: 56 programs by 20 different Agencies
Farm Program Payments: Huge number of diverse programs
Improper Payments by 20 Agencies on 70 Programs (partial list):


SUMMARY:
The 345-page GAO report, summarized only in part in this blog, offers a clue that simple data center consolidation, through virtualization of servers, is not easily accomplished. The various Agencies and bureaus that operate computers in support of their activities are often bound by legislative dictates that control how a program is executed. The idea of data center consolidation involves much more than applying simple technical solutions.









The New CIO as a Risk Manager


The primary role of the CIO ten to twenty years from now will be risk management. This includes network security, protection of privacy and confidentiality, the safeguarding of the databases and the assurance of service availability.

Server farms, currently managed by the CIO, will be deployed as private "clouds" operated by large computer services enterprises that enjoy the economies of scale of billion-dollar data centers. CIOs will cease to have direct operating control over hardware, as the management of “cloud” assets becomes a specialized commercial service that requires highly skilled personnel whom individual firms will find difficult to retain.

Enterprise private “clouds” will sit in completely secure enclaves and be operated for a fixed fee. However, there will also be “hybrid” cloud configurations that provide peak-load, testing and fail-over capacity as a variable cost. The distribution and sharing of workloads will be a way to reduce fixed expenses.

A significant reduction in the scope of a CIO’s responsibilities will come from the shift to distributing computing power by means of wireless connectivity. This will largely replace the existing hard-wired connections, routers and switches. The large cost of the upkeep and re-installation of the existing cabling in ceilings and ducts will be eliminated, while security will be improved. Customers will be increasingly mobile, insisting on Gigabit connections to their handheld devices that match the capacity they presently receive at their desktops and laptops.

The attention of CIOs will shift to assuring the reliability of the firm’s own network as well as connectivity with customers and suppliers. Much higher levels of service availability (approaching 100.0%) cannot be achieved through more robust equipment. Uptime reliability can be delivered only through redundancy and real-time fail-over methods. Even the largest enterprises will have to rely on hybrid data center connections that kick in whenever a firm's own private "cloud" requires added capacity. As the demand for service level quality rises (which includes latency), the CIO will be engaged in the engineering of network performance under conditions of failure.
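
The arithmetic behind that statement is straightforward: with independent failures, n redundant sites that each offer availability a yield a combined availability of 1 - (1 - a)^n. The figures below are illustrative.

    # Why redundancy, not sturdier single components, gets availability toward 100%.

    def combined_availability(single, n):
        return 1 - (1 - single) ** n

    single = 0.99            # one site at "two nines"
    for n in (1, 2, 3):
        print(n, f"{combined_availability(single, n):.6f}")
    # 1 0.990000   2 0.999900   3 0.999999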

Enterprises will also have to buy peak-load capacity, because the average 24/7 utilization of assets will remain low despite virtualization. Peak-load sharing will also be used to run modifications to existing applications in parallel, in separate partitions, until the robustness of a new version of the software has been tested under live conditions. CIOs will have to manage such shifts in computer capacity and start dealing with capacity brokers who sell spot machine cycles. A capacity and service level center, under the control of the CIO, will look like the dispatch operations now used in the management of electric power networks.

Commercial vendor services (from billion-dollar data centers) will have economies of scale and a level of reliability that no major company will be able to match. Data centers with more than 500,000 servers can deliver a minute of computer processing power for less than a penny.
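
A quick sanity check of the penny-per-minute figure, using an assumed all-in annual cost per server:

    # Illustrative cost per server-minute for a hyperscale operator.

    all_in_cost_per_server_year = 3_000           # hardware, power, space, staff (assumed)
    minutes_per_year = 365 * 24 * 60
    print(f"${all_in_cost_per_server_year / minutes_per_year:.4f} per server-minute")
    # about $0.0057, i.e. just over half a cent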

The principal role of the CIO, as a high-level advisor to top executives, will be based on full accountability for total, end-to-end information security. Networks will be re-engineered for protection from incoming malware and for safeguarding outgoing communications by means of strong encryption. CIOs will have to create a link with personnel systems, which authorize and then authenticate individuals for system access. CIOs will have to extend this role to include the read/write/modify privileges assigned to designated persons.
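
A sketch of what such a privilege model might look like, with illustrative roles and actions rather than any particular product’s schema:

    # Hypothetical read/write/modify privilege model tied to the personnel system.

    PRIVILEGES = {
        "analyst":   {"read"},
        "editor":    {"read", "write"},
        "custodian": {"read", "write", "modify"},
    }

    def is_allowed(person_role, action):
        """Check whether a role carries a given privilege."""
        return action in PRIVILEGES.get(person_role, set())

    print(is_allowed("analyst", "write"))     # False
    print(is_allowed("custodian", "modify"))  # True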

The centerpiece of the CIO’s job will not be the data center, but the Network Control Center (NCC). Highly specialized personnel will occupy seats that monitor and evaluate network connections, computer processing power, the functioning of all devices and the security of incoming as well as outgoing transactions. Such surveillance takes place around the clock. Unless there is an overriding cost issue, the NCC should never be outsourced. Its personnel will require an in-depth understanding of the firm’s operations. It will become a human resources platform for career development in the enterprise.

How future CIOs will develop or acquire software is hard to predict. Meanwhile, highly specialized contractors will build custom software, because software development is becoming an increasingly specialized task that requires exceptionally high-priced talent.

SUMMARY
The job of the future CIO will change radically in the next 10 to 20 years. It will not depend on the direct ownership or control of hard assets (data centers) or soft assets (programmers). Except in cases where special security mandates ownership, all information technology will be either contracted or purchased.

The core function of a CIO will be risk management, broadly defined as the prevention of information technology failures, the avoidance of operational non-performance and the stopping of security incidents.

Threats are up exponentially. Service level requirements are rising rapidly. The costs of failure can be catastrophic.

When the CEO encounters a risk that somehow relates to information technologies, they will still have to turn to a CIO. It will have to be a different CIO.