
Hosted Clouds for DoD?

Internally developed, DoD component-conceived private clouds are not likely to happen.

DoD does not have the software talent and cannot afford to acquire it in the next few years. DoD does not have the capital or the time to construct several billion-dollar cloud data centers. DoD migration into the cloud environment can only be incremental. There are thousands of legacy applications that can be relocated to the cloud only gradually. DoD must rely on a hybrid approach during the transition of legacy systems from the current environment. During such a transition the capacity of any cloud operation would be vastly under-utilized.

The structure of current DoD operations shows a great diversity in meeting security requirements. From a less demanding security standpoint, there is $4.6 billion of business applications that may be easier to relocate. War fighting applications, ranging from NIPRNET to SIPRNET, account for $11.7 billion. These are demanding and must migrate into a cloud only under conditions that guarantee no loss of data. In addition there are applications that are compartmented and would always remain off the cloud, running on DoD organic assets. However, the overarching problem for DoD is the $15 billion devoted to its infrastructure. DISA and diverse Agencies have attempted to support every security requirement for every DoD application but have not yet succeeded in delivering a unified approach.
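A rough tally makes these shares concrete. The sketch below uses only the dollar figures quoted in this post, plus the FY11 total of $36.3 billion cited elsewhere on this blog; the "remainder" category is inferred arithmetic, not a figure from any source:

```python
# Rough breakdown of DoD IT spending by security posture, in $ billions.
# Segment figures are from the text; the FY11 total of $36.3B is cited
# elsewhere on this blog. The remainder is inferred, not sourced.
segments = {
    "business applications (easier to relocate)": 4.6,
    "war fighting (NIPRNET to SIPRNET)": 11.7,
    "infrastructure": 15.0,
}
total_it_budget = 36.3

for name, cost in segments.items():
    share = cost / total_it_budget
    print(f"{name}: ${cost}B ({share:.0%} of the FY11 IT budget)")

remainder = total_it_budget - sum(segments.values())
print(f"remainder (compartmented and other): ${remainder:.1f}B")
```

On these assumptions, infrastructure alone is over 40% of the IT budget, which is why it dominates the discussion below.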

One of the options that DoD should explore now is the hosting of its diverse requirements in commercial operations that deliver secure hosted clouds. In such arrangements commercial firms offer IaaS, PaaS or SaaS services. DoD would run network control centers and manage the high performance LANs and WANs that are necessary to access the Internet from DoD controlled client computers. The consequence of pursuing such policies would be a hybrid arrangement. It would lend itself to evolutionary migration while relieving DoD of the need to acquire the intellectual property for cloud operations. It would also avoid huge investments in capital assets.

Another option is for DoD to hire a leading contractor to develop and build the required cloud capacity. The cost of such a program would exceed some of the largest weapon projects, with attendant risks. None of the existing IT contractors has experience in building cloud operations. While DISA is trying to position itself as the provider of DoD computing, it would be burdened by the same limitations as the IT contractors.

SUMMARY
How hosted DoD clouds would function will require the institution of new policies. Here are a few guidelines:

1. Business applications and generic services such as e-mail, open source office applications, collaboration systems, calendars, etc. could proceed along lines that have already been established by GSA. *
2. The concept of operations for applications that exclude warfare would be IaaS. Multiple vendors, each with several data centers, would offer hybrid and interoperable services. This would allow DoD to relocate applications as needed. DoD Services would continue to exercise control through component-specific methods.
3. The concept of operations for applications that include warfare, but not intelligence, would be PaaS. DoD Services would continue to exercise control through component-specific operations, except that databases would not be hosted commercially.

Shifting the cost of cloud software and cloud capital from DoD to secure commercial vendors offers a path by which security can be increased while costs are reduced.

Carrying out such a plan will require an oversight organization. With the relocation of the position of the Assistant Secretary of Defense for Networks and Information Integration from the Office of the Secretary of Defense to DISA (which is managed by USCYBERCOM), the accountability for the planning of network defenses is in place.

With the rising emphasis on cloud computing as the solution to security while budgets are shrinking, the need for action is here.



*http://pstrassmann.blogspot.com/2011/02/google-docs-are-step-into-cloud.html

Google Docs Are a Step Into the Cloud

Unisys, partnering with Google and Section 8(a) contractors (Tempus Nova and Acumen Solutions), will deliver Google cloud-computing services to the General Services Administration (GSA). GSA's 15,000 employees will switch from desktops and laptops hosted on local servers to network-hosted applications operated from Google data centers. GSA is among the first federal agencies to move into cloud computing. * Microsoft and IBM also competed for the contract but lost for unspecified reasons.

The Unisys fixed-price contract is $6.7 million, or $90/employee seat/year. GSA will still have to provide client computers, plus high-bandwidth LAN connectivity to the Internet. The high responsiveness of Google will make possible the replacement of high-cost “fat” clients with low-cost “thin” clients for substantial additional savings.

We do not have a benchmark for comparing the GSA cloud deployment with current client/server operations. However, we do have data on the cost per seat for the NMCI project. ** Although NMCI does include on-site management of equipment, from the standpoint of functionality the services are comparable in terms of the most frequently used features. The NMCI costs are either $1,818/employee seat/year or $2,230/employee seat/year if the full five-year costs to the Navy are included. The disparity between NMCI and GSA cloud costs is just too large to be left without further examination.
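The arithmetic behind the disparity can be sketched directly from the figures above. One assumption is needed: a five-year contract term, which is what makes the $6.7 million total consistent with the quoted $90/seat/year:

```python
# Seat-cost comparison between the GSA cloud contract and NMCI,
# using the figures quoted in the text.
contract_total = 6_700_000   # Unisys fixed-price contract, dollars
seats = 15_000               # GSA employees
years = 5                    # assumed term, consistent with the $90/seat/year figure

gsa_seat_year = contract_total / seats / years
nmci_seat_year = 1_818       # NMCI cost per seat per year (lower estimate)

print(f"GSA cloud: ${gsa_seat_year:.0f}/seat/year")
print(f"NMCI: ${nmci_seat_year}/seat/year")
print(f"NMCI costs roughly {nmci_seat_year / gsa_seat_year:.0f}x more per seat")
```

Even against the lower NMCI estimate, the ratio is on the order of twenty to one, which is why the comparison demands further examination.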

Any way one accounts for the costs, GSA’s IT budget will realize a huge reduction in operating expenses and the elimination of most of its IT support personnel. GSA will also avoid substantial future capital investments for servers. In return GSA will receive e-mail, word processing, spreadsheets, presentation software, collaboration applications, as well as a wide range of diverse services that Google makes available at no cost. The applications migrated into the Google cloud represent the majority of GSA’s computing needs.

The source selection documents for choosing Google are not available, and therefore we cannot say whether it was the price, the migration cost or security requirements that were the basis for the vendor selection. We only know that the major objection to the engagement of Google came from Microsoft, who were offering their online Business Productivity Online Suite, consisting of Microsoft Exchange Online for e-mail and calendaring; Microsoft SharePoint Online for portals and document sharing; Microsoft Office Communications Online for presence availability; and Office Live Meeting for web and video conferencing. Microsoft argued that Google's applications would not interoperate easily, if at all, with the diverse applications already in place at GSA.

The differences between Google and Microsoft applications will be hard to ever reconcile. Google offers solutions that are based on open source programs, using published Apps APIs (Application Program Interfaces). What GSA is buying is a Software-as-a-Service solution in which all application development and maintenance costs are already included in Google's bills.

An examination of the interoperability between Google documents, spreadsheets and presentation software and Microsoft Office applications found them to be compatible. There appears to be no valid reason why Google Apps cannot coexist with any Microsoft-linked applications that remain hosted on GSA servers.

Microsoft’s applications are tightly wedged into their operating system environment. From a security standpoint, Microsoft’s application software, operating system and browser are also under continual attack. Hundreds of bugs are discovered every year. Until the fixes are installed, there is always a period during which Microsoft programs remain vulnerable. No such vulnerabilities have as yet been attributed to Google.

SUMMARY
The large savings available from the migration to Google Apps for the most frequently used GSA workloads offer a relatively easy and fast path into cloud computing. Other applications, such as database-intensive uses, can be scheduled for transfer later or remain hosted on clouds that specialize in applications such as Oracle database services. There is no reason to suppose that GSA cannot operate in the future in a hybrid cloud environment where some applications are run by Google, some are run on other clouds and some remain hosted within GSA.

An important consideration in the choice of Google is the opportunity for GSA to disentangle itself from near-total dependence on Microsoft. GSA would now have an opportunity to choose from diverse computing clouds, where services can be competed primarily on least cost as well as on the highest levels of security.


*http://www.gsa.gov/portal/content/208417
** http://pstrassmann.blogspot.com/2010/12/nmci-economics-for-next-five-years.html 

Fictitious Identities on the Internet

The attractive person you encounter on Facebook, MySpace, LinkedIn, Nexopia, Bebo, Friendster, Orkut or many other social web sites outside of the U.S.A. could actually be a fake.

There are many ways of constructing such fictitious individuals, including persons invented by a government agency. *

An Internet domain name can be registered for as little as 99 cents/year for an <.info> domain or for $4.99/year for a <.com> domain. ** An operator can create a large number of personas, replete with background, history, supporting details, resumes, pictures and a cyber presence that is technically, culturally and geographically credible.
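At those registration prices, operating fake personas at scale is trivially cheap. A small arithmetic sketch, using only the per-domain prices quoted above (the persona counts are illustrative):

```python
# Annual domain-registration cost for a stable of fake personas,
# at the per-domain prices quoted in the text. Persona counts are
# illustrative assumptions, not figures from any source.
INFO_PRICE = 0.99   # $/year for a .info domain
COM_PRICE = 4.99    # $/year for a .com domain

def annual_cost(n_personas: int, price_per_domain: float) -> float:
    """Cost of giving each persona its own registered domain."""
    return n_personas * price_per_domain

for n in (10, 1_000):
    print(f"{n:>5} personas on .info domains: ${annual_cost(n, INFO_PRICE):,.2f}/year")
    print(f"{n:>5} personas on .com domains:  ${annual_cost(n, COM_PRICE):,.2f}/year")
```

A thousand distinct, individually addressable personas cost less than a single laptop per year, which is what makes the abuses described below so easy to scale.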

Such fakes enable an operator to display a number of different online personalities from the same workstation. This can be done without fear of being discovered. E-mail, blog and collaboration applications can appear to originate from any part of the world for interactions through conventional online services or social media platforms. The fake includes user-friendly indications that maximize situational awareness, such as displaying real-time local information or weather.

Communications from fake personalities can have a wide range of motivations. These include sexual enticement, accusations of misconduct, fictitious reports, bullying, slander or libel. The possibilities of abuse are limitless, especially if the allegations originate from different sources that appear to be credible. Fake sources are also ideal for spreading propaganda and can be used to spread misinformation about political matters. If a faker’s bona fides are questioned, a variety of references can be provided from multiple fake addresses.

SUMMARY
Except in cases where certification is authenticated by a government-issued identity document, such as a CAC card in the case of DoD, the origins of Internet communications will remain untraceable.

With the proliferation of fake Internet personalities, protective measures will have to be taken. For instance, in the case of DoD social computing, a government-issued identity certification may have to be issued to safeguard communications between the military and private addresses.

In the case of commercial communications, the existing certification authorities, such as those operated by Verisign, would have to require additional authentication of an individual by confirming the validity of a government-issued driver’s license or passport. This would create unprecedented traffic on the Criminal Justice Information Network (CJIN) currently used by law enforcement agencies.

Fake personalities on the Internet are emerging as a new threat to communication. Right now there are too many easy ways to establish Internet personalities. In due course this risk will have to be contained.


* http://www.bnet.com/blog/technology-business/so-why-does-the-air-force-want-hundreds-of-fake-online-identities-on-social-media-update/8728
** http://order.1and1.com/xml/order/Home;jsessionid=D5C95BB2FA9082F5EABCA0776C314EE2.TCpfix142b 

Modular Development is Not the Answer

The “25 Point Implementation Plan to Reform Federal Information Technology Management” issued by the U.S. Chief Information Officer on December 9, 2010 states that “… OMB found that many IT projects are scheduled to produce the first deliverable years after work begins, in some cases up to six years later. In six years, technology will change, project sponsors will change, and, most importantly, program needs will change. Programs designed to deliver initial functionality after several years of planning are inevitably doomed.”

The House Armed Service Committee has further elaborated on the current situation in DoD:  *
1. Only 16% of IT programs are completed on time and on budget.
2. 31% are canceled before completion.
3. The remaining 53% are late and over budget, with typical cost growth exceeding the original budget by more than 89%.
4. Of the IT programs that are completed, the final product contains only 61% of the originally specified features.

To deal with this situation the Office of the Federal CIO advocates the adoption of a modular approach to systems development, defined as “… delivery in shorter time frames … based on releases of high level requirements and then refined through an iterative process.” This would require deploying systems in release cycles no longer than six to twelve months, with initial deployment to end users no later than 18 months after the program is authorized.

The problem with such guidance is its disregard of the typical time-lines for information technology projects. Constructing the metadata for DoD databases is continuous. It is never finished and requires continuing investment.

Developing a communications infrastructure for DoD takes decades. It may then take additional decades to migrate into a new communications environment.

DoD has core business applications, such as those in finance or human resources, that remain in place with little change for many years.

Tactical applications have an extremely short life, which may be only a few hours while a mission lasts.

Attempting to solve DoD’s development problems with rapid paced modular development does not recognize that enterprises need to solve some long-term problems before short-term solutions can be implemented.

The following levels illustrate the differences in the timing of programs:

Global:
Global standards take a very long time to evolve. DoD must select only a minimum of technical standards and enforce them across all applications. Emphasis should be placed on assuring interoperability with the least migration cost. Proprietary solutions must be avoided.

Enterprise:
Enterprise standards must be followed. Control over databases, shared communications, security and survivability should never be embedded in an individual development program. Enterprise directions should hold steady for decades. A DoD cloud is clearly a shared enterprise program, not a functional investment.

Process:
Functional processes, especially in business applications, should not be managed as modular releases, but planned and funded as multi-year programs. Core features of functional processes should be extracted by Services and Agencies as “plug-in” add-ons.

Business:
Business applications should be built only for unique and specific uses. There should not be multiple logistics, personnel or financial systems within a Service or an Agency.

Application:
Application development and maintenance can be decentralized to meet local needs. Standard modular code should be used to compose programs that take advantage of an in-place infrastructure, thus minimizing the investment required to deliver results. It is only here that the six-to-twelve-month modular development schedule would apply.

Local:
Local applications should be composed from parts available from the Enterprise, Functional and Business levels. Depending on the capabilities of the DoD networks, local applications should be assembled in a very short time, often in hours.

Personal:
Personal applications should be totally separated from the DoD business to protect privacy. They should be subject to only records management controls.

SUMMARY
DoD projects that last more than six years, or that are terminated only to be restarted again, reflect the prevailing program management practices. What we have are programs that attempt to develop their own unique infrastructure, with little dependence on Enterprise or Process services. Such an approach is expensive and time consuming. The situation is compounded by limited enforcement of global standards.

DoD programs should not be cut into pieces that are launched incrementally. Programs should be fitted into an overall architecture in which complexity is handled by programs that have multi-year stability. Short-term execution could then depend on pre-fabricated modules that require only minimal amounts of new code, while most of the code is drawn from the long-lasting shared infrastructure.

* www.esi.mil/Uploads/HASCPanelReportInterim030410%5B1%5D.pdf

Hybrid Clouds for Cloud Migration

On December 9, 2010 Vivek Kundra, the U.S. Chief Information Officer, announced a 25-point implementation plan to reform federal information technology management. * One of the key initiatives will be a strategy to accelerate the adoption of cloud computing across the government.

Each Agency CIO will be required to identify three services and create a project plan for migrating each of them to cloud solutions and retiring the associated legacy systems. Of the three, at least one of the services must fully migrate to a cloud solution within 12 months and the remaining two within 18 months. Such migrations will not be allowed to function as isolated “silo” environments, but will have to be interoperable as well as portable in compliance with standards, yet to be published. The planned migration into the cloud environment will have to be deployed rapidly, while generating cost reductions.

The question is how Agencies will accomplish the stated goals. Migrate one Agency branch at a time? Migrate one application at a time? Or move just the work of one component as a token effort to comply with numerical targets? How much migration to a cloud environment will be seen as a success? Will five, ten or twenty percent of an Agency's budget be housed as a cloud service after a year?

The solution to these questions can perhaps be found in the concept of migration to a hybrid cloud environment. Software has just become available that provides a link between internal and external clouds, moving virtual machines between a hosted service (public clouds) and an organization's own internal systems (private clouds). Agencies will now have the choice of first migrating into a secure public cloud those applications that can be easily standardized, such as e-mail, calendars, collaboration utilities, group communications, documents, spreadsheets, presentations, videos, slide shows, browsers and project management. These would then become Government Standard Utility Applications (GSApps).

Standard applications represent a significant share of the cost of operations of any Agency. Whatever variability in these applications may exist at present is a reflection of the way they were acquired rather than of the functionality they perform. Consequently, migration to GSApps is feasible, while provisions can be made to add features in a few isolated cases where that can be justified.

GSApps would then be delivered in a Software-as-a-Service (SaaS) mode, which would relieve the government of the large staffs currently devoted to software development, maintenance and upgrading, as well as the operating personnel that at present mind the in-house servers performing these functions. Such reductions in manpower should also be seen as a major gain in government security, since GSApps are hosted in highly automated and security-assured environments. SaaS would also replace server farms, which would represent a step in the direction of reducing the number of data centers.

If the government embraces GSApps, they would be guaranteed to be interoperable while achieving enormous immediate cost reductions. Of course, compliance with FISMA and conformity with industry security standards would become a prerequisite for choosing the hosting services. Uptime and low latency would have to be assured. Redundant files with the records of all transactions would also have to be provided as a part of the service, including features such as de-duplication.
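De-duplication of the redundant transaction records mentioned above typically works by hashing each record's content and storing each unique body only once. A minimal sketch of the idea, not any vendor's actual implementation:

```python
import hashlib

# Minimal content-addressed de-duplication sketch: each unique record
# body is stored once, keyed by its SHA-256 digest; duplicate records
# add only a reference to the existing body. Record contents are
# illustrative placeholders.
store = {}   # digest -> record body (unique bodies only)
refs = []    # transaction log holds digests, not full copies

def save(record: bytes) -> str:
    digest = hashlib.sha256(record).hexdigest()
    store.setdefault(digest, record)   # store the body only if unseen
    refs.append(digest)                # always log the reference
    return digest

for rec in [b"payroll batch 1", b"payroll batch 1", b"payroll batch 2"]:
    save(rec)

print(f"{len(refs)} records logged, {len(store)} unique bodies stored")
```

The redundant transaction files required of the hosting service can therefore be kept at a fraction of their naive size, since repeated content costs only a digest.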

Migration from the prevailing environment, which is now based on Microsoft Exchange or Lotus Notes, would be relatively easy to execute, since several hosting vendors offer conversion software for the translation of addresses and files. Since most of the hosted applications already imitate the visual experience that customers are used to, minimal training would be necessary.

SUMMARY
A hybrid migration into the cloud environment appears to be the best approach for complying with the directions outlined by Kundra. A properly configured and reliable SaaS can relieve Agencies of perhaps up to half of the transaction workload, while reducing servers and operating personnel. However, such a move will require the adoption of standard cloud connector software, which will make it possible to subsequently add Infrastructure-as-a-Service and Platform-as-a-Service cloud adoption for applications that require DoD to retain complete control over its applications and its databases.


*http://www.govtech.com/enterprise-technology/Vivek-Kundra-Unveils-Federal-IT-Reform-Plan.html

Sufficient Policy for Information Technology?

A search of the official Department of Defense web site for DoD Directives, Instructions and Administrative Instructions found 5,058 citations for “ASD(NII)” as well as 115 citations for “CIO”. *

Since system interoperability and system security are the key requirements for cyber operations, the following is a partial list of policies that provide Directives and Instructions for implementation:

DODD 4630.05; INTEROPERABILITY AND SUPPORTABILITY OF INFORMATION TECHNOLOGY (IT) AND NATIONAL SECURITY SYSTEMS (NSS).
DODD 5015.2; DOD RECORDS MANAGEMENT PROGRAM.
DODD O-5100.30; DEPARTMENT OF DEFENSE (DoD) COMMAND AND CONTROL (C2).
DODD S-5100.44; DEFENSE AND NATIONAL LEADERSHIP COMMAND CAPABILITY (DNLCC).
DODD 8000.01; MANAGEMENT OF THE DEPARTMENT OF DEFENSE INFORMATION ENTERPRISE.
DODD 8115.01; INFORMATION TECHNOLOGY PORTFOLIO MANAGEMENT.
DODD 8190.1; DOD LOGISTICS USE OF ELECTRONIC DATA INTERCHANGE (EDI) STANDARDS.
DODD 8320.02; DATA SHARING IN A NET-CENTRIC DEPARTMENT OF DEFENSE.
DODD 8320.03; UNIQUE IDENTIFICATION (UID) STANDARDS FOR A NET-CENTRIC DEPARTMENT OF DEFENSE.
DODD 8500.01E; INFORMATION ASSURANCE (IA).
DODD O-8530.1; COMPUTER NETWORK DEFENSE (CND).
DODD 8570.01; INFORMATION ASSURANCE (IA) TRAINING, CERTIFICATION, AND WORKFORCE MANAGEMENT.
DODI 1025.3; ADMINISTRATOR, NATIONAL SECURITY EDUCATION PROGRAM.
DODI 4630.8; PROCEDURES FOR INTEROPERABILITY AND SUPPORTABILITY OF INFORMATION TECHNOLOGY (IT) AND NATIONAL SECURITY SYSTEMS (NSS).
DODI 4650.01; POLICY AND PROCEDURES FOR MANAGEMENT AND USE OF THE ELECTROMAGNETIC SPECTRUM.
DODI 5205.13; Defense Industrial Base (DIB) Cyber Security/Information Assurance (CS/IA) Activities.
DODI 8100.04; DOD UNIFIED CAPABILITIES (UC).
DODI 8110.1; MULTINATIONAL INFORMATION SHARING NETWORKS IMPLEMENTATION.
DODI 8115.02; INFORMATION TECHNOLOGY PORTFOLIO MANAGEMENT IMPLEMENTATION.
DODI 8410.02; NETOPS FOR THE GLOBAL INFORMATION GRID (GIG).
DODI 8420.01; COMMERCIAL WIRELESS LOCAL-AREA NETWORK (WLAN) DEVICES, SYSTEMS, AND TECHNOLOGIES.
DODI 8500.2; INFORMATION ASSURANCE (IA) IMPLEMENTATION.
DODI 8510.01; DOD INFORMATION ASSURANCE CERTIFICATION AND ACCREDITATION PROCESS (DIACAP).
DODI 8523.01; Communications Security (COMSEC).
DODI O-8530.2; SUPPORT TO COMPUTER NETWORK DEFENSE (CND).
DODI 8560.01; COMMUNICATIONS SECURITY (COMSEC) MONITORING AND INFORMATION ASSURANCE (IA) READINESS TESTING.
DODI 8910.01; Information Collection and Reporting. 
DTM 09-013; Registration of Architecture Descriptions in the DoD Architecture Registry System (DARS).
DTM 09-026; Responsible and Effective Use of Internet-based Capabilities.

SUMMARY
The DoD Directives and Instructions are comprehensive. They cover, in detail, every topic related to system interoperability and system security. Consequently, any flaws in cyber operations are not due to an absence of policy but to a lack of implementation.

* http://www.dtic.mil/whs/directives/corres/dir.html

The Known Unknowns in Cyber Operations

Former Secretary of Defense Donald Rumsfeld believes that it is the known unknowns that can hurt you. Here is a compilation of a few situations where facts are not available:

1. How many “bots” (hostile implanted software) are there in DoD's millions of computers?
2. How many of the estimated three million DoD desktops and laptops have open ports that are accessible to malware attacks?
3. How many servers hosting critical applications or data do not have verified backups?
4. How many DoD computer devices have a USB port into which a thumb drive can be inserted without detection?
5. What fraction of DoD client devices experience access downtime greater than half an hour?
6. How many DoD computers have boot times greater than ten minutes?
7. How many military, civilian or contractor employees perform any part of classified work on their personal computers that do not require a CAC card for identification?
8. For how many days do military, civilian and contractors retain access authorization to DoD networks after they have been terminated from their position?
9. How many social computing transactions take place over NIPRNET?
10. What share of traffic conveyed over DoD networks is from social applications?
11. How many individuals have more than one .mil address?
12. How many downloads to YouTube and similar sites from DoD sites are recorded for subsequent forensic analysis?
13. How many communications from DoD sites to sites not on .mil addresses are screened for inappropriate contents, such as pornography?
14. Are DoD-originated e-mails, regardless of source, filed and retained for compliance with the Federal Records Retention regulations?

According to Rumsfeld there are also “unknown unknowns” that can potentially inflict even greater damage. We do not know how to make such a list.

SUMMARY
Unknown unknowns are potentially exploitable flaws for launching cyber attacks. Keeping track of failed implementations offers a sobering perspective on situations that warrant attention.


Operating in Cyberspace

Cyberspace issues are receiving attention at the highest policy levels. The Deputy Secretary of Defense has published a ten-page article. * A deputy commander of the U.S. Cyber Command wrote a five-page article on how to operate in the cyberspace military domain. **

These articles deal with organization, doctrine and strategic concepts for defending DoD. They do not address the characteristics of what is to be defended. What is missing is a realistic assessment of the current status of DoD’s FY11 $36.3 billion spending for the information technologies that constitute its cyberspace.

The current hardware, software and networks within the Defense Department are obsolete and dysfunctional. The department continues to operate with a culture that does not as yet acknowledge that its computer systems are technically unsuited for operations in the age of cyber warfare.

The existing cyber defense deficiencies are deeply rooted in the ways the Defense Department has acquired information technologies over the past decades. The existing flaws are enterprise-wide and pervasive. Regardless of how much money is spent on cyber security protection, most of it is inadequate to make the existing proliferation of networks secure.

The total number of DoD systems projects in FY10 was 5,300. *** Each of these programs is subdivided into subcontracts, many of which are legislatively dictated. The total number of DoD data centers was 772, which makes their defenses unaffordable. ****

The information technology environment in the Defense Department is fractured. Instead of funding a comprehensive and defensible infrastructure (which presently consumes 57% of the total information technology budget), money is spread over thousands of mini-infrastructures that operate as separate silos, almost entirely managed by contractors. Such fragmentation is guaranteed to be incompatible and indefensible.

Over ten percent of the total Defense Department IT budget is spent on cyber defenses that protect tens of thousands of points of vulnerability. The increasing amount of money spent on firewalls, virus protection and other protective measures cannot keep up with the rapidly rising virulence of the attackers.

Hardly any of the subcontracts share a common data dictionary, data formats or software implementation codes. As a result, the systems are interoperable only with difficulty. Except for isolated cases, DoD systems cannot support the coordination of information needed to launch cyber countermeasures. What is in place is not only vulnerable, but also inadequate in meeting the operational requirements for 21st century information dominance, which are low latency (less than 200 milliseconds) and close to 100.0% availability.

The Internet, the primary conduit for cyber attacks, is connected to Department of Defense networks through over a hundred thousand routers and switches, which connect to tens of thousands of servers located in hundreds of separate locations. In addition there are over six million desktops, laptops and smart phones, each with an operating system and browser that can be compromised by any of the two thousand new infections per day. These risks make DoD fundamentally insecure. They will persist unless the underlying information technology infrastructure is overhauled.

Any Pentagon cyber strategy that does not also include a re-design of the underlying technology infrastructure, beginning with remedying the existing deficiencies, will miss what needs to be done.


* Lynn, The Pentagon’s Cyberstrategy, Foreign Affairs, September/October 2010
** Leigher, W.E., Learning to Operate in Cyberspace, Proceedings of the U.S. Naval Institute, February 2011 
*** http://www.whitehouse.gov/omb/e-gov/
**** http://www.cio.gov/pages.cfm/page/OMB-Asks-Agencies-to-Review-Data-Center-Targets 

Denial of Service Attacks Now at 100 Gbps

Arbor Networks, in its just-published Infrastructure Security Report, states that 2010 saw an increase in the severity of Distributed Denial of Service (DDoS) attacks. For the first time a 100 Gbps attack was reported. *

That represents a dramatic escalation in the amount of traffic piled onto a network in order to shut it down.


Since the most frequently deployed defense against DDoS is to shut down the computer links that have been jammed, a 100 Gbps attack can unleash a large number of damaging transactions before all connections are finally severed.

The delays between DDoS detection and the actual shutdown can be seen from survey results of 111 technical network managers at Internet Service Providers (ISPs).


Shutting down and then restarting a network hit by DDoS is not automatic (13% of responses). It can be a time-consuming affair.

The network defenders also suffer from a scarcity of qualified personnel. Standing sentry duty in a data center can be a position that is hard to fill.



DDoS attacks are launched from “bot” computers that have implanted programs capable of launching attacks against designated IP addresses. Attacks occur when the controller (known as the “herder”) of a “botnet” triggers the release of a rapid sequence of messages.

It is interesting to speculate how many “bots” would be necessary to generate a simultaneous stream of 100 Gbps traffic.

Over 50% of the observed Internet attack traffic in the last quarter of 2010 originated from 10 countries, with the USA, Russia and China accounting for 30%. ** The global average Internet connection speed is now about 2 Mbps, though averages range as high as 14 Mbps (South Korea) and 7 Mbps (Delaware). Therefore, delivering a 100 Gbps attack would take anywhere from about 7,000 to 50,000 bots.
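The bot arithmetic above can be checked with a short calculation. This is a sketch using the attack size and per-bot connection speeds quoted above; the function name is illustrative, and each bot is assumed to contribute its full connection speed, which makes the result a lower bound on botnet size.

```python
# Estimate how many bots are needed to generate a given aggregate attack rate,
# assuming each bot contributes its full connection speed.

def bots_needed(attack_gbps: float, per_bot_mbps: float) -> int:
    """Bots required to sustain attack_gbps at per_bot_mbps per bot."""
    return round(attack_gbps * 1000 / per_bot_mbps)

# 100 Gbps attack from bots on ~2 Mbps (global average) connections:
print(bots_needed(100, 2))   # 50000
# Same attack from fast (~14 Mbps, South Korea average) connections:
print(bots_needed(100, 14))  # 7143
```

Either figure is small compared with the million-node botnets described below, which is why attacks above 100 Gbps are plausible.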

Botnets have been known to grow into large collections. The Dutch police found a 1.5 million-node botnet. The Norwegian ISP Telenor disbanded a 10,000-node botnet. In July 2010, the FBI arrested a “herder” responsible for an estimated 12 million computers in a botnet. ***

One can therefore conclude that assembling DDoS-capable botnets is well within the scope of malware operators. The chances that future attacks will exceed 100 Gbps are high.


SUMMARY
With an estimated 15,000 networks in place, according to DEPSECDEF Lynn, DoD is vulnerable to more powerful and most likely more frequent denial of service attacks. Defending against them is a matter of tradeoffs among three options: relying on highly trained people, investing in automated shut-offs, or acquiring fail-over capabilities.

The defense of 15,000 individual networks against DDoS by human operators is neither affordable nor executable.

A defense that depends on automatic shut-offs would require retrofitting existing software with such features. It is unlikely that there is either the time or the money to do that.

The best option is to set up DoD data centers with virtual servers that can fail-over to one or more back-up servers whenever a DDoS hits. That would require migration into a virtualized environment, which is likely to show relatively fast paybacks and which can be executed by means of hypervisor software.
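The fail-over approach can be sketched in a few lines. This is a minimal illustration, not the actual DoD mechanism: the server names and health check are hypothetical, and in a real virtualized deployment the hypervisor or load balancer performs the probing and migration, not application code.

```python
# Sketch: direct traffic to the first healthy server in a priority-ordered
# list. A DDoS-saturated primary fails its health check, so traffic
# fails over to a back-up virtual server.

def pick_server(servers, is_healthy):
    """Return the first server passing the health check, or None if all fail."""
    for server in servers:
        if is_healthy(server):
            return server
    return None  # every server is down: sever links and alert operators

# Usage with stubbed health checks; the primary is "down" as if under DDoS:
servers = ["vm-primary", "vm-backup1", "vm-backup2"]
down = {"vm-primary"}
print(pick_server(servers, lambda s: s not in down))  # vm-backup1
```

The point of the sketch is that fail-over is a routing decision, which is why it can be automated in a virtualized environment without retrofitting each application.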


* http://www.arbornetworks.com/report

** Akamai State of the Internet, 2010
*** http://en.wikipedia.org/wiki/Botnet



Are IPv4 Addresses Exhausted?

On June 9, 2003 the DoD/OSD CIO issued a memorandum setting a DoD goal of completing the transition from IPv4 addresses to IPv6 addresses by FY08 for all inter- and intra-networking. This was necessary to enable the transition of all National Security Systems and the Global Information Grid (GIG), to be completed by FY07. DISA would act as the Central Registration Authority for all DoD systems.

The directed transition to IPv6 by FY08 never happened, except at minor installations.

On September 28, 2010 the Federal Chief Information Officer issued a memorandum stating that “the Federal government is committed to the operational deployment of Internet Protocol version 6 (IPv6)”. * Agencies and Departments of the Federal Government will have to upgrade externally facing servers (such as web services, email, DNS and ISP services) to use IPv6 by the end of FY12. For internal applications that communicate with public Internet servers, the upgrade to IPv6 is to be implemented by the end of FY14.

The major benefit to be derived from migrating from IPv4 to IPv6 is the much larger address space. IPv6 also offers improved routing and enhanced security, especially in how transactions are handled within Internet routers and switches. For instance, IPv6 reduces the complexity of Internet services by eliminating the reliance on Network Address Translation (NAT) technologies. IPv6 also enables added security services for end-to-end mobile communications.

With the continued growth of new DoD facilities, the question is whether IPv4 addresses can be converted easily enough for DoD systems to remain fully interoperable while improving communications performance.

DoD has so far used only about half of all the IPv4 addresses assigned to it. As of February 2008 there were over 200 million IP addresses still available to DoD, which should sustain communications for some time. ** If DoD proceeds with its adoption of IPv6 it would acquire 42 million billion billion billion IP addresses – enough, by one popular comparison, to give each grain of sand on earth 90 billion IP addresses. Such a number is nice to have, but the question is whether funds are available to make the conversion to IPv6 with the urgency that has been dictated.
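The address-space figures above can be verified with simple arithmetic. One caveat: treating the cited “42 million billion billion billion” as a /13 IPv6 block is an inference from the number itself, not an official allocation record.

```python
# IPv4 vs IPv6 address-space sizes, and the DoD figures cited above.
ipv4_total = 2 ** 32        # 4,294,967,296 addresses (~4.3 billion)
ipv6_total = 2 ** 128       # ~3.4e38 addresses

dod_ipv4 = 134_200_000      # DoD's cited IPv4 holdings
dod_ipv6 = 2 ** (128 - 13)  # a /13 block: ~4.2e34 addresses, matching the
                            # "42 million billion billion billion" figure

print(f"DoD share of IPv4 space: {dod_ipv4 / ipv4_total:.1%}")  # 3.1%
print(f"DoD IPv6 addresses under a /13: {dod_ipv6:.2e}")
```

The calculation also confirms the claim that DoD holds a disproportionate share of IPv4 space: about 3% of all addresses for a single organization.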

The alleged shortage of IPv4 addresses is the result of allocations made over 30 years ago. For instance, IBM, HP, Apple and Ford each received a block of 16.8 million addresses. Xerox, with only 53,500 employees, "owns" 16.8 million IP numbers. DoD was one of the largest recipients of IP addresses and now holds 134.2 million IP numbers.

DoD has now backed off IPv6 implementation even though upgrading from IPv4 to IPv6 would allow for better network mobility, faster mission execution and the widespread adoption of Radio Frequency Identification (RFID) tags. *** Although some DoD components have already started migrating to IPv6, the differences between applications staying on IPv4 and those communicating over IPv6 will increase the complexity of network software. A mixed environment will require DoD to add interoperability capabilities to every IPv4 location until all IPv4 addresses are finally retired.

At this time there are no major funded programs for proceeding with the IPv4-to-IPv6 conversion on a tight schedule. OSD policy has now saddled all programs, whether legacy or new, with the need to acquire additional transformation software and hardware while in transition to IPv6. That will surely take longer than the policies have dictated. Migrating to IPv6 involves much more than just re-setting the protocol options on a single device:


Fixing a complex environment such as DoD's would require revisions and upgrades throughout the entire network, plus exhaustive testing. To achieve verified compliance, companies must pass over 450 tests that inspect core IPv6 functionality. *******

Transition hardware and software is available from several vendors, but it is questionable whether current budgetary limits will permit spending money on projects with only a transitory life. **** To maintain interoperability during the conversion from IPv4 to IPv6, thousands of DoD locations will need the capability to translate between IPv4 and IPv6 addresses, as illustrated below.
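One of the standard transition encodings, the IPv4-mapped IPv6 address, can be illustrated with Python's standard ipaddress module. The 192.0.2.x address below is an RFC 5737 documentation example, not a real DoD address, and this sketch shows only the address encoding, not the routing equipment that performs the translation.

```python
import ipaddress

# An IPv4 host address (documentation range, used for illustration)
v4 = ipaddress.IPv4Address("192.0.2.1")

# Embed it as an IPv4-mapped IPv6 address of the form ::ffff:a.b.c.d
mapped = ipaddress.IPv6Address(f"::ffff:{v4}")
print(mapped)

# The original IPv4 address is recoverable, which is what lets
# dual-protocol equipment carry IPv4 traffic inside IPv6 addressing.
assert mapped.ipv4_mapped == v4
assert int(mapped) & 0xFFFFFFFF == int(v4)  # low 32 bits hold the IPv4 value
```

Because the mapping is mechanical and reversible, the complexity it adds lies not in the encoding itself but in deploying and maintaining the translation software at every affected site.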


Address transformation will add complexity to every site that communicates either within DoD or externally.

Meanwhile, implementing IPv6 remains a technically demanding task. Due care must be taken to ensure that existing communications are not impeded as more software is placed into the path of every transaction. GIG IPv6 network performance will also have to improve, especially for auto-configuration, prioritization, converged voice and video, multicast and mobility. A recent survey by Arbor Networks shows the following difficulties with IPv6 implementation: *****


In more than half of the 111 reports from network technicians, inadequate IPv4 vs. IPv6 feature parity had to be overcome with software fixes.

The global registry of IP addresses, the Internet Assigned Numbers Authority (IANA), indicates further shrinkage of available IPv4 addresses, but by no means exhaustion. Only the regional registry that assigns Internet addresses to China and India (APNIC) expects to use up its address pool by the end of 2011. With reallocation from poorly utilized address pools elsewhere in the world, however, more than adequate IPv4 numbers remain available globally for the indefinite future.

Meanwhile, Internet Service Providers (ISPs) are already upgrading network switches and routers to handle IPv6 addresses while retaining the capacity to process every IPv4 address. The availability of such dual address handling therefore imposes no short-term urgency on DoD to complete the IPv4-to-IPv6 conversion.

SUMMARY
IPv4 allots 32 bits to the Internet Protocol address and supports 4.3 billion addresses. IPv6 uses a 128-bit address and supports a practically infinite number of addresses. As of the end of 2010 only 533 million unique IP addresses had been assigned. ****** Though the USA currently holds 26.4% of the global IP population, it has obtained more than 50% of the IP addresses, while quickly growing China is exhausting its allocation. Clearly, there are enough IP addresses on average; they have simply been misallocated. An immediate rush into IPv6 therefore cannot be justified, provided that IANA can take corrective action.

Given the poor progress in IPv6 implementation, DoD contractors will have every incentive to continue enhancing IPv4 capabilities rather than work on the conversion to IPv6.

IPv6 is not necessarily more secure than IPv4, provided that IPv4's added security fixes are installed; such security features are now available from a number of sources. From the standpoint of DoD applications there will be few practical differences in security protection once the fixes are implemented. Therefore, keeping IPv4 in place makes sense unless DoD decides to proceed with a full implementation of RFID, which is not the case right now on account of the enormous initial costs.

There is another option open: the IPv6 Native Dual Stack solution, now in testing. It can access services natively over both IPv6 and IPv4, so users do not need any IPv6 or IPv4 tunneling, translating, or NAT solutions; access to both protocols takes place directly at high speed. When the Dual Stack solution is ready, DoD may save money by avoiding costly software fixes.

Despite the high-level policy mandates promulgated in 2003 and in 2010, the IPv4-to-IPv6 conversion will not happen very soon. It will require redirecting how future DoD networks are upgraded before DoD internal and external networks can start communicating using identical address formats.

The best choice for DoD is to make the adoption of IPv6 a requirement for any upgrade, rather than to confront every Component with fixed, immediate deadlines.

* http://www.networkworld.com/newsletters/frame/2010/111510wan1.html 
**http://royal.pingdom.com/2008/02/13/where-did-all-the-ip-numbers-go-the-us-department-of-defense-has-them/ 
*** http://www.defense.gov/news/newsarticle.aspx?id=59780
****http://www.datatekcorp.com/index.php/ipv6-portal/ipv6-resources/117-sbir-work 
***** http://www.arbornetworks.com/index.php?option=com_content&task=view&id=1034&Itemid=525
****** Akamai State of the Internet, 2010.  
******* http://searchcio.bitpipe.com/data/document.do?res_id=1285853030_482&asrc=EM_DWP_13517395&uid=6482191