
DoD Interoperability Through Web Services?

Any Web Service can be accessed over any Internet or Intranet link. To use a Web Service one needs only a standard web browser and standard ways of writing code, so that a Web Service can be executed on any of the millions of personal computing devices in DoD.

Web Services offer large economic advantages.  A variety of Services can be retrieved and re-used. Several Services can be combined into innovative web applications (“mash-ups”). Services can be updated and maintained in pooled servers to reduce latency.

Web Services can be used without installing application software on millions of technologically diverse computing devices. If an enterprise design allows it, diverse desktops, laptops or smart phones in different organizations can start cooperating without human intermediaries.

Web Services make it possible for technologically different user devices to operate across technologically completely different communications infrastructures. Web Services allow technologically different generations of servers to share databases that can also operate in legacy environments.

With DoD’s >700 data centers, >15,000 networks and >3 million personal devices, interoperability from any person to any person is possible – provided that enterprise standards are strictly enforced. Unfortunately, DoD does not have actionable standards in place. Interoperability across more than 7,000 major application “silos” is therefore not feasible at this time, and this condition is a hindrance in the pursuit of information “dominance”.

The enterprise reuse of Web Services depends on the ability of systems to describe and publish what functionality is available to customers. That is why a Web Service Registry is essential. Such a Registry would allow DoD components to organize access to the available Web Services. It would provide the means for publishing, then discovering and finally accessing the available Web Services.

A Web Service registry must comply with the Universal Description Discovery and Integration (UDDI) standard. * A Web Service Registry is a compilation of information in the form of the Web Services Description Language (WSDL). ** Both are product-independent standards set up by international consortia. UDDI and WSDL describe Web Services and how they can be used.
UDDI supports the description, publication, and discovery of any organization that offers a Web Service. It describes what services are available. UDDI defines the technical details of how to access such services. UDDI defines how services are organized. UDDI data also explains how data is structured and how the data models are stored. Search and lookup entries are identified. Publish, delete and update events are delineated. A DoD UDDI Registry would include information about each component, such as what MetaData is available at each Web Service, the identification of business processes, the platforms on which applications operate and the various access protocols.
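
The publish, discover and access roles of such a Registry can be illustrated with a toy, in-memory sketch in Python. This is not the UDDI API itself; the class names, fields and example endpoints are hypothetical, but the fields map loosely onto UDDI concepts (businessEntity, businessService, bindingTemplate, tModel).

```python
from dataclasses import dataclass

@dataclass
class ServiceEntry:
    """One registry record: who offers the service and how to reach it."""
    provider: str       # organization offering the service (UDDI "businessEntity")
    name: str           # service name (UDDI "businessService")
    access_point: str   # endpoint URL (UDDI "bindingTemplate")
    wsdl_url: str       # pointer to the WSDL interface description ("tModel")

class Registry:
    """Toy in-memory registry supporting publish / discover lookups."""
    def __init__(self):
        self._entries: list[ServiceEntry] = []

    def publish(self, entry: ServiceEntry) -> None:
        self._entries.append(entry)

    def discover(self, keyword: str) -> list[ServiceEntry]:
        kw = keyword.lower()
        return [e for e in self._entries if kw in e.name.lower()]

# Hypothetical example entry and lookup.
reg = Registry()
reg.publish(ServiceEntry("Navy", "PersonnelLookup",
                         "https://example.mil/personnel",
                         "https://example.mil/personnel?wsdl"))
hits = reg.discover("personnel")
print(hits[0].access_point)
```

A real UDDI registry adds taxonomies, authorization and replication on top of this basic publish/discover pattern, but the lookup flow is the same.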

WSDL provides a model as well as XML formats for describing what a Web Service offers. A service description in WSDL separates functionality from details such as how and where the service is offered. While the abstract description includes types and an interface, details include bindings, which include available implementations of the interfaces at point-of-use.
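
As a concrete illustration, the following Python sketch uses only the standard library to pull the service, port and endpoint out of a minimal, hand-written WSDL 1.1 fragment. The service name and URL are made up for the example; only the WSDL and SOAP-binding namespaces are real.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical WSDL 1.1 fragment: one service, one port,
# one SOAP address binding the abstract interface to a concrete endpoint.
WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
                       xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <service name="DemoService">
    <port name="DemoPort" binding="DemoBinding">
      <soap:address location="https://example.mil/demo"/>
    </port>
  </service>
</definitions>"""

ns = {"wsdl": "http://schemas.xmlsoap.org/wsdl/",
      "soap": "http://schemas.xmlsoap.org/wsdl/soap/"}

root = ET.fromstring(WSDL)
for service in root.findall("wsdl:service", ns):
    for port in service.findall("wsdl:port", ns):
        addr = port.find("soap:address", ns)
        # The service/port pair is the abstract description;
        # soap:address is the point-of-use binding detail.
        print(service.get("name"), port.get("name"), addr.get("location"))
```

This separation is exactly what makes registries useful: a consumer can discover the abstract interface and only later resolve where a concrete implementation is offered.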

SUMMARY
The adoption of UDDI and WSDL is an essential but only partial step towards DoD interoperability. To obtain a systems architecture that meets the challenges of information warfare, DoD has to put in place many other standard methods. Nevertheless, the adoption of Web Services is by far the most preferred approach for ultimately achieving enterprise-wide interoperability of computer-based communications.

Delivering interoperability will require the institution of a diversity of additional practices, which are an integral part of the Service Oriented Architecture (SOA). Most likely there will be applications in DoD that will never appear as a Web Service. There are still many issues that need resolution, especially with regard to governance, that will inhibit further progress. Enforceable guidelines from the Office of the Secretary and a commitment from USCYBERCOM are still lacking at this time. What is missing is an uncompromised approach to achieve strict standardization for all applications. No current program makes visible when syntactic, semantic, and organizational interoperability of information systems will be achieved.

The efforts launched in DISA under the Net-Centric Enterprise Services (NCES) in the last six years were a promising start, but have not resulted in much progress. NCES has now been terminated.
Web Services offer: Reduced cost of maintenance; Reduced cost of new development; Agility to respond to new business needs; Reuse of Legacy Systems; Abstraction and isolation of any platform dependence; Cost-benefit analyses for trading off legacy reuse, legacy migration, and new development.
Web Services make it possible to break systems into loosely coupled applications and infrastructure elements. This decreases the attack surface and enhances security.

DoD cyber operations require real-time sharing of data across all Components.  For this reason most applications and all critical data will have to become available by means of Web Services. There are no other known options for achieving that.


* http://uddi.org/pubs/uddi_v3.htm#_Toc85907968
** http://www.w3.org/TR/wsdl 

Comparing VISA and DoD I.T.

Analysis of commercial operations offers interesting insights into how DoD information technologies could become more efficient. Although VISA’s mission is completely different from DoD’s, there are differences in practice that can explain why the VISA and DoD budgets are so far apart. As DoD will be looking for cost reductions in I.T. spending, there are lessons to be learned from VISA operations that could have merit in planning for DoD improvements.

VISA makes available data about its “Network, EDP and Communications” costs. * For the year ending on September 30, 2010, total I.T. expenses were $425 million. VISA information technology expenses are therefore only 1.3% of the total cost of DoD information technologies.

Although VISA’s processes are much different from DoD’s, from the standpoint of speed, security, reliability, flexibility and scalability the VISA operations can offer useful lessons in how to design and manage a wide-ranging information complex.

Here are the major differences between VISA and DoD:
1. VISA operates globally from three data centers, DoD from 772.
2. VISA data centers are redundant and provide for fail-over in real time. Most DoD data centers are not backed up.
3. VISA network uptime is close to 100.0%. DoD uptime availability is not measured.
4. VISA manages the software and configuration management for the entire world from only two locations. DoD does that from at least 2,200 separate projects.
5. VISA provides a global infrastructure and leaves it to individual financial institutions to manage their operations and input terminals, as long as they conform to centrally dictated standards. DoD is reported to have 15,000 communication infrastructures, each of which is attempting to achieve complete integration down to desktops, laptops and smart phones.
6. There are only two carefully managed software updates for the VISA infrastructure per year. DoD software updates are as needed, whenever and wherever that is affordable.
7. A single VISA executive group controls VisaNet budgets and priorities in quarterly reviews. In DoD the management over budget is widely dispersed so that planning, development, testing, installation and operation is separate both in organization and in timing.

VISA can deliver a formidable collection of services for a fraction of DoD costs because its organization and its concept of operation are completely different.

The following illustrates what VISA delivers for the money it spends: **
1. Every day, VISA processes up to 1.8 billion credit card entries and has the capacity to handle over 20,000 transactions per second. The number of DoD daily transactions is not more than a tenth of this amount.
2. VISA accepts cards at 1.7 million locations. DoD supports not more than a tenth of this.
3. VISA processes entries for 15,700 financial institutions. The DoD network interfaces with not more than a tenth of that.
4. VISA processed at peak time more than 200 million authorizations per day. The peak load on DoD, under warfare conditions, is unknown but would not be comparable.
5. VISA operates globally from three synchronized data centers linked by 1.2 million miles of optical lines. The DoD GIG does not permit real time synchronization of data centers because it has limited capacity for that.

VISA shows the following operating characteristics:

Fast – On average, transactions are processed in less than a second. This includes providing business-critical risk information to merchants and banks. DoD applications will average a latency that is much greater. DoD latencies are not measured and not tracked.

Secure – VISA employs multiple defense layers to prevent breaches, combat fraud and render compromised card data unusable. These defense layers include data encryption, network intrusion detection and neural network analysis.

Real-time risk scoring capabilities are the result of more than 30 years of monitoring transaction patterns and applying sophisticated risk management technologies during the authorization process. Risk analysis methods detect unusual spending patterns and flag possible fraud in real time. These examine 40 separate transaction aspects and 100 or more known fraud patterns which they weigh against a profile of all of the cardholder’s transactions from the last 90 days. The result is an instantaneous rating of a specific transaction's potential for fraud, which is then passed to the card issuer. The card issuers, based on their own proprietary criteria, decide to accept or to decline transactions. DoD does not have the forensic assets in place to apply  “artificial intelligence” screening methods either to infiltration or exfiltration of its traffic.
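
The idea of weighing a transaction against a cardholder’s recent profile can be sketched very simply. The following Python fragment is a hypothetical stand-in for VISA’s proprietary scoring: it flags a transaction whose amount deviates from the rolling history by more than a few standard deviations. The threshold and the single “amount” factor are illustrative assumptions, not VISA’s actual 40-factor method.

```python
from statistics import mean, pstdev

def risk_score(amount: float, history: list[float], threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates sharply from the
    cardholder's recent spending profile (a stand-in for the ~90-day
    profile described above). Returns True if the transaction looks risky."""
    if len(history) < 2:
        return False                      # not enough history to judge
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return amount != mu               # flat history: any deviation is unusual
    return abs(amount - mu) / sigma > threshold

profile = [20.0, 35.0, 18.0, 42.0, 25.0]  # hypothetical recent purchases
print(risk_score(5000.0, profile))        # large outlier -> flagged
print(risk_score(30.0, profile))          # ordinary purchase -> not flagged
```

A production system would combine dozens of such factors and known fraud patterns into a weighted score passed to the issuer, but the shape of the decision is the same: compare the live transaction against a learned profile, in real time.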

Reliable – VISA runs multiple redundant systems to ensure near-100% availability. A self-correcting network detects transmission faults and triggers recovery. For DoD a real time redundancy is not affordable. Up-time reliability is not measured. In fact, standards for up-time reliability measurement and reporting do not exist.

Flexible – VISA supports a diversity of payment options, risk management solutions and a number of different information products and services. This includes more payment methods as well as a choice of access and controls. In DoD the GIG is only a telecommunications carrier, with limited capacity. The GIG does not include a capacity to vary its functionality.

Scalable – VISA processed over 92 billion transactions per year, each settled to a choice of currencies such as penny, peso, ruble or yen. This is accomplished in over 50 languages. On a peak single day last year, VISA processed more than 200 million authorization transactions. VISA stress tests show the capacity to process close to a billion transactions per day. DoD network scalability is fractured and therefore has a very limited capacity.

VISA authorization transactions can be complex. The following is a simplified description of the authorization and payment processes. VISA offers to Issuers a wide range of collection plans and features, such as customer loyalty programs, which add more steps to the following sequence:

1. The Cardholder swipes a credit card into millions of VISA-compatible card readers or accounting machines. Hundreds of different manufacturers make these devices, each with different software. These devices are located even at the most remote locations in the world.
2. The authorization transaction is checked, secured and encrypted by the Merchant’s software.
3. It is passed to the Acquirer — usually a merchant's bank — where the Cardholder’s account is credited after checking and verification using bank-specific software.
4. The Acquirer reimburses the Merchant instantly after verifying the authorization request. The purchase is authorized at the point of sale.
5. The encrypted authorization is then passed from tens of thousands of Acquirers to one of three VisaNet global data centers, where every authorization transaction is subject to further risk analysis, security verification and protection services.
6. VisaNet then passes the authorization transaction to one of hundreds of Issuers, which is the Cardholder’s bank. The Issuer collects from the Cardholder’s account by withdrawing funds if a debit account is used, or through billing if a credit account is used. After the funds are successfully transferred, the approved transaction is returned to its origin, where it is displayed in different formats.
7. If the Cardholder’s account is overdrawn, the sequence of the entire process is reversed and the credit authorization is withdrawn.
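
The seven-step flow above can be sketched as a chain of functions. Everything here is a simplification for illustration: the “securing” step, the risk rule and the balance check are placeholders, not VisaNet logic.

```python
def merchant_secure(txn: dict) -> dict:
    """Step 2: merchant software checks and secures the authorization."""
    return {**txn, "secured": True}

def acquirer_forward(txn: dict) -> dict:
    """Steps 3-5: the Acquirer verifies the request and forwards it to VisaNet."""
    assert txn["secured"], "unsecured transaction rejected"
    return {**txn, "acquirer_ok": True}

def visanet_screen(txn: dict) -> dict:
    """Step 5: VisaNet applies risk analysis before routing to the Issuer."""
    txn["risk_ok"] = txn["amount"] < 10_000   # toy risk rule, purely illustrative
    return txn

def issuer_decide(txn: dict, balance: float) -> str:
    """Steps 6-7: the Issuer approves if funds exist; otherwise the flow reverses."""
    if txn["risk_ok"] and balance >= txn["amount"]:
        return "approved"
    return "declined"

# One authorization traveling the whole chain.
txn = visanet_screen(acquirer_forward(merchant_secure({"amount": 120.0})))
print(issuer_decide(txn, balance=500.0))
```

The point of the sketch is the end-to-end pipeline: each party performs one well-defined step, and the whole chain can be measured and tuned as a single workflow, which is exactly what VISA does and DoD does not.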

The entire workflow of credit card authorization from start to finish takes place over the public Internet, or over dedicated optical lines, in encrypted format using the “tunneling protocol” in conformity with VISA dictated standards. By using “tunneling” the VisaNet can receive and transmit over incompatible trusted networks, or provide a secure path through untrusted networks.

In the case of DoD applications it is impossible to track, evaluate or measure end-to-end performance. The DoD architecture has not been designed for assigning separate and distinct roles to the required standards, to the functions of the infrastructure, to the roles of enterprise systems and to the missions that have been delegated for completely decentralized control.

SUMMARY
VisaNet is not just a network service. It can be best described as a global cooperative organization that reaches directly into each of its 15,700 financial institutions with software upgrades, standards enforcement, compliance verifications, security assurance and diagnostic help. VisaNet is a confederation of banks whose participation is voluntary, since competitive offerings are also available.

Perhaps the most important single insight to be gained from the VISA environment is a focus on applying systems engineering to the credit card network in its entirety from points of entry to the processing of authorizations in banks.  VISA views its business as an integrated continuum that requires continuous tuning as technologies, features and networks change. For instance, VISA tracks the latency (response times) and up-time availability in every link. VISA deploys network engineers who work closely with application designers and data center operators to shave microseconds from transactions.

Perhaps the greatest economies of scale are gained from a complete centralization of control over the management of the software infrastructure of VisaNet. While leaving the complete management of banking software in the hands of each of the 15,700 financial institutions, VISA continuously implements enhancements to its global payment network from a central location. There are two major system upgrades each year for the entire network. Each of these upgrades is a carefully choreographed event, which involves collaboration with each of the financial institutions, merchants and processors around the world. An average system upgrade requires some 155,000 person-hours. In each case up to 100,000 lines of code are changed, creating 50,000 application upgrades each year.

The VISA approach is different from current DoD practices, where developers, infrastructure operators and the managers of the client environment are severed from each other without synchronized integration of the parts.

VISA operates in close coordination between IT management and business executives. Business managers control the budget and dictate how to make trade-offs between schedule, cost and features.
In VISA, computer networks are treated as an integrated and seamless workflow that is continually maintained and upgraded. In contrast, the DoD approach is to tear asunder planning, engineering, software implementation, testing, installation, infrastructure operations and data processing. Nobody is in charge of the entire workflow from conception to the delivery of results.

DoD is trying to create and manage something that is fundamentally an inseparable process. DoD systems are a collection of subdivided efforts that are time-separated into contractually organized parts. Such an approach is not affordable any more.



* http://investor.VISA.com/phoenix.zhtml?c=215693&p=quarterlyearnings 
** http://corporate.VISA.com/about-VISA/technology-index.shtml

Scalability of Systems

When confronted with the redesign of a huge DoD system the issue of adequate scalability for the handling of transactions will always come up. When planning for the deployment of large enterprise-level programs Program Executive Officers (PEOs) are apprehensive about the scale of what is proposed.

PEOs will argue that the capacity and the inflexibility of the DoD infrastructure will limit the scalability of any central application. If the network, its switches and routers have limited capacity, then any central computer service will not scale. Distributed processing, with local operations, will more likely deliver the desired level of service.

The organization of the application code and databases will also influence how much centralization is possible. If local operations, in the Services and the Agencies, retain control over versions of the proposed software, a Service Oriented Architecture (SOA) will be hard to implement. DoD-wide management of an over-arching SOA design will be too difficult to implement.  Excessive variability cannot be scaled without added costs for maintenance manpower.

Disregarding likely objections, systems scalability must first be defined in terms of transaction processing capacity. If a system does not have the power to handle customer transactions, PEOs will be justified in breaking up programs because they are not “scalable”.

The metric for evaluating scalability is the number of transactions per minute processed against a database (usually Oracle). A December 2010 report, "Oracle Databases on vSphere Workload Characterization Study", shows that a relatively small and cheap virtualized server configuration (<$200,000) will handle 505,000 transactions per minute. * Such capacity exceeds the average Oracle database processing capacity by more than a factor of 100.
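
A quick back-of-the-envelope conversion puts the cited figure in perspective. The 505,000 transactions-per-minute number is from the study referenced above; the arithmetic below is simply a restatement of it at other time scales.

```python
TPM = 505_000                 # transactions per minute, from the cited study

tps = TPM / 60                # sustained transactions per second
per_day = TPM * 60 * 24       # sustained transactions per day

print(f"{tps:,.0f} transactions/second")
print(f"{per_day:,} transactions/day at sustained load")
```

At roughly 8,400 transactions per second, a sub-$200,000 configuration sustains over 700 million transactions per day, which is why the text concludes that raw processing power is no longer the binding constraint.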


Transaction-processing power is now available at a very low cost. Therefore, PEOs will have to start looking for other ways to deliver more effective results.

SUMMARY
The availability of a relatively inexpensive system with the capacity to process 500,000 transactions per minute suggests that it could accommodate tens of thousands of administrators engaged in related activities simultaneously. Even with multiple redundant operations for fail-safe purposes, it is unlikely that DoD can generate a transaction demand that such systems could not affordably meet.

Concerns about the scalability of DoD systems to handle application workloads are misplaced. Any restrictions on processing power will arise from the inability of DoD networks to support traffic, or from small-scale contracts that will result in an excessive variability in design.

New virtualized technologies and the availability of inexpensive multi-core servers favor the centralization of information processing. Concerns about the practical limits on scalability have no merit.

The economics of data processing favors consolidation. Arguing against that is the fear that central operations will result in unmanageable delays as well as in budget over-runs. Questions about scalability are therefore matters that concern how systems are managed rather than which technology is deployed.

* www.vmware.com/.../pdf/partners/oracle/Oracle_Databases_on_VMware_-_Workload_Characterization_Study.pdf

Can Multiple Acquisitions Support Cyber Operations?

The Navy is proceeding with NGEN acquisitions that divide a potentially $14.5 billion program initially into five contracts: (1) Security Operations, which may be targeted for small business; (2) Infrastructure such as LANs/WANs; (3) Enterprise Services such as supporting and migrating existing thick-client architecture to server farms; (4) Hardware purchases; and (5) Software (purchased through existing contract vehicles). In addition the Navy will depend on DISA for long haul networks, such as provided by GIG. *

Dividing a huge program into separate acquisitions makes it manageable from an acquisition standpoint. However, the question remains how will separate programs add up for delivery of a low cost and high performance capability that supports the Navy’s Information Dominance objectives?

There are tradeoffs to be made between security defenses, infrastructure architecture, configuration of personal computers and servers, operations of data centers and software to support applications. For instance the first line of defense for security assurance can be located either on desktops, in servers, in data centers or be hosted in network switches and routers. The decision where to concentrate security will affect how the Navy network will be structured to protect the core of information operations, which is the safeguarding of databases.

There is a difference whether fail-over redundancy is placed in the LANs, in the WANs or in the GIG-connected data centers. Hardware purchases will be affected by decisions whether to consolidate the hardware under the control of a few Network Operations Centers (NOCs) or to distribute it across the “edge” of hundreds of local network connections for distributed monitoring.

What software to acquire, and in accordance with what standards, will have by far the greatest effect on the costs of NGEN. If local small businesses make the decisions about security software, this will dictate a much larger diversity that in turn will generate the need for more support personnel. Local software options would then have to be accommodated by a greater variety of ways in which critical databases would be managed for enterprise-wide interoperability.

The decision to shift personal computing to greater reliance on server farms, especially where wireless connectivity is involved, will influence how data centers will be organized. That will influence the reliance on the capacity of the GIG to support real-time transactions.

The timing of the five proposed acquisitions is also critical. Navy will be making NGEN acquisitions in the next four years. Whatever is acquired will have a life of well over ten years even though the life of the data contained in the hosted databases will extend over many decades. Meanwhile, the hardware technologies will be changing at a fast rate, with the price/performance of computing equipment improving significantly every 18 months.

NGEN will end up with multiple five-year contracts, mostly to be awarded in 2012. Each acquisition will lock in solutions that meet detailed contractual requirements defined up to three years earlier. How incremental contracts can support rapidly changing cyber operations well beyond 2020 is not clear. How five short-term and contractually separate acquisitions can preserve essential Navy data for over 50 years without costly re-programming is not apparent. How the planned division of NGEN into five parts can preserve the long-term knowledge capital embodied in applications without creating new generations of “legacy” solutions in the future is not clear.

From requirements to final award, the NGEN acquisitions will take over three years. After the contracts are in place, dislodging incumbents will be extremely difficult because variability in software will inevitably creep in.

What ultimately dictates the success of NGEN is not the constantly evolving network needs, the rapidly changing security countermeasures or the infrastructure that will surely be changed by USCYBERCOM. How NGEN will progress will not be decided by the easy availability of commercial “cloud” data centers or the ready availability of commodity hardware to support millions of personal computers and tens of thousands of servers.

What will ultimately persist is the quality of the support for a rapid turnover workforce. What endures is the training of NGEN operators. NGEN should be seen not as an information technology solution but as a means for the preservation of intellectual capital of the Navy. What matters is the capacity of Navy personnel to learn how to respond to increasingly complex situations where a total dependence on information systems has become an integral part of all information warfare.

The human factors of NGEN are by far more important than information technologies. The military and civilian operators will always cost a large multiple of the expenses for IT that supports them. What matters is not the wiring or the hardware, but the way software and its supporting databases are deployed.

From the standpoint of the preservation and the capacity to manage a deluge of new data, the conservation of databases will dictate the economics of how to secure, network and manage applications.  Data and their corresponding MetaData have an extremely long life (well over 50 years) and represent the single most costly life-cycle element of NGEN.

From a functional standpoint the NGEN application software in place will have a life that outlasts many times any changes in microprocessors or of electric connections.

From the standpoint of innovation software has to become readily available in a matter of days and not years.

From the standpoint of ease of learning, any NGEN software must be offered in a form that is consistent over a wide range of applications and over an extended time period.

For these reasons the acquisition of software from an existing huge catalogue of vendor software may not support the objectives laid out for Information Dominance. The choice of how to manage NGEN software, not its networks, not its hardware, becomes the over-arching foundation of how to construct a system that will allow the Navy to attain information dominance.

SUMMARY

In view of the experience with migration of software across different technologies my examination of the proposed Navy NGEN acquisition plans now gives rise to concerns about the architecture and design of these programs.  NGEN planners should shift their attention from acquiring physical assets to thinking through how to deliver to the Navy a software environment that will avoid the current proliferation of projects that in the future will require high maintenance plus continuous conversions of legacy codes. The following questions will have to be answered:

Has adequate attention been given to tradeoffs that will minimize the total costs to the Navy, especially during the migration of applications from one hardware environment to the next generation of technologies?

How will a preference in favor of software usability affect the interoperability, reliability and delivered service quality experienced by customers during a transition that may take more than ten years?

How will short-term savings in increasingly cheaper information technology devices be balanced against long-term expenses for operating and user personnel that are rising?



* http://www.orgcom.com/resources/upcoming-federal-opportunities/86-next-generation-enterprise-network-ngen.html






Protecting Databases With a Software Vault

A Database Vault (DBV) is designed to provide a separate layer of protection around a database application. *  Its purpose is to prevent access to data especially from highly privileged users, including Data Base Administrators (DBAs), application owners, hackers and cyber attackers.

A DBV introduces into the database environment the ability to define data domains, to specify applicable command rules, to assign who can access data, what data they can access, and the specific conditions that must be met in order to grant such access.

Databases consist of data domains, which define collections of data in the database to which security controls must be applied. They can consist of database objects such as a single table or multiple tables, procedures, programs, an entire application, or multiple applications. In an enterprise scenario, for example, data domains separate the data used by one department from that used by another. In the case of DoD the Army, Navy, Marine Corps and Air Force would have respective DBAs define and control the assignment of domains.

A DBV defines the rules and control processes for how users can execute database management statements, including within which domains and under what conditions they may do so. Command rules leverage individual factors, or combinations of factors, such as identifying individuals and their access characteristics, in order to restrict access to data. Built-in factors include authentication method, identification type, enterprise identity, geographic origin, language, network protocol, client IP, database hostname, domain, machine, and others. In addition to these, custom factors can be defined. Restrictive factors can be assigned to all users, including DBAs. Multi-factor authentication rules are supported. For example, a certain action could be restricted to being allowed only from a specific IP address within a specified time range.
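
The multi-factor example in the last sentence can be sketched as a simple predicate. This is an illustrative model only; the function name and parameters are hypothetical and do not reflect any vendor's DBV rule syntax.

```python
from datetime import time

def rule_allows(client_ip: str, when: time,
                allowed_ips: set[str], start: time, end: time) -> bool:
    """Hypothetical multi-factor command rule: permit the action only from
    a specific set of IP addresses AND only inside a time window. Both
    factors must pass; failing either one denies the statement."""
    return client_ip in allowed_ips and start <= when <= end

allowed = {"10.1.2.3"}                                # hypothetical trusted host
print(rule_allows("10.1.2.3", time(14, 30), allowed, time(8, 0), time(18, 0)))
print(rule_allows("203.0.113.9", time(14, 30), allowed, time(8, 0), time(18, 0)))
```

A real DBV evaluates many such factors (protocol, hostname, enterprise identity, custom factors) before each protected statement, but each rule reduces to a conjunction of predicates like this one.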

To protect the database from even high privileged users such as DBAs, the vault includes a definition of the separation of duty in which the DBV is separated from DBA functions. The database vault information itself is protected by its own secure domain, which prevents tampering and therefore must be kept on physically separate servers. The database vault software requires that the DBV managers assume the responsibility for the creation of all new data domains in the database. This will then override all existing accounts with the create user privilege.

Finally, a built-in reporting mechanism provides reports, including those that detail who has access to what data and whether there were any attempted violations.

SUMMARY
A DoD enterprise level database will be accessed by possibly hundreds of applications originating from diverse Components. It will contain petabytes of data that are updated and accessed in real time. There is no question that such a database will become the target of choice for any cyber attack. Consequently, extraordinary precautions will be taken to offer protection from any unauthorized access to specific data domains, whether they come from external or internal sources.

Creating a Database Vault protection mechanism must be mandatory for mission critical cases. By this means DoD obtains not only an assured layer of protection but also creates a well-defined separation between the roles of the DBV, the DBA and the auditor or the supervising military personnel. All reporting of violations of restrictions occurring in the Database Vault would have to be routed as secure messages directly to those who are accountable for the data vault. Under defined conditions all alterations to the database could be then restricted automatically until human intervention authorizes what steps can be taken next.

* http://products.enterpriseitplanet.com/security/security/1146070533.html

Database Protection Against Insiders

The innermost cores of DoD systems are the databases. In cyber attacks, viruses can be implanted in applications, denial of service can block networks, or malicious code can produce false results. In all of these cases there are ways in which reconstitution can take place. However, if an adversary degrades a shared database from which applications draw data, recovery is difficult. A petabyte database may contain tens of millions of data elements, which are updated at millisecond speeds. If the database integrity attack is designed to be gradual and progressive, users will receive results that only a few will question. At a critical point users will stop trusting their computer screens and resort to other means of improvising what to do without computer aid.

The greatest threat to DoD cyber operations is not an external attack but subversion by an insider. The motivation for corrupting a major database is immaterial; whether it comes from malice, a disgruntled employee or an enemy operative is irrelevant. What ultimately matters is that at a critical moment an act of cyber warfare will disable warfare operations.
  
A number of database vendors offer “Database Vault” protection software. The question is whether it offers safeguards in cases where the personnel performing database administration (DBA) tasks are themselves the threat.

When databases are corrupted, the greatest risks do not arise from technical failures, against which elaborate safeguards are known to exist. The most important questions in protecting databases are these: who is in charge of real-time policing of the actions taken by DBAs? What is the chain of responsibility for real-time countermeasures? What is the role of auditors and administrators in seeing to it that adequate safety processes are in place? Through what separate chains of command do the various actors in database security report?

The problem of safeguarding databases is compounded by the fact that database software is one of many applications running in the same data center environment. Numerous versions of Unix and Windows will access the same database. DBAs, auditors, oversight administrators and a large number of end users will run millions of queries per hour against a shared database. Users only retrieve data from databases that are “owned” by the DBAs. Without appropriate safeguards, however, too many individuals may end up with access to a database. Unless such access is controlled and fully accounted for, damage to the database could arise from many sources.

There are many methods for attacking databases. One of the most persistent is the “trojanization” of code. To trojanize a software product, one of the diverse, high-turnover contractor employees does not even have to write an entire backdoor into an application. Instead, a malicious developer can purposefully write code containing an exploitable flaw, such as a buffer overflow, that lets an attacker take over the machine while a database application is being restarted. Such a purposeful flaw acts just like a backdoor. If the trojan slips past the DBA, the malware developer is the only one who knows about the hole and can exploit it to control the database at a time of his choosing. For this reason the DBA will have to see that the interactions between applications and the database are totally isolated.
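A buffer overflow is a C-language flaw, but the same idea can be shown in spirit in a few lines of Python: an innocuous-looking “convenience feature” that quietly gives whoever knows about it arbitrary code execution. The function and config values here are hypothetical, invented purely for illustration:

```python
# A purposeful flaw masquerading as a feature: evaluating a raw config
# value "so the config file can hold ints, lists, etc." In reality it
# lets anyone who can write a config line run arbitrary code.

def load_setting(raw_value):
    # Looks helpful in code review; acts as a backdoor in effect.
    return eval(raw_value)  # the "purposeful flaw"


print(load_setting("60 * 5"))   # 300 -- appears to be a convenience
# A knowing insider could instead supply something like:
#   load_setting("__import__('os').system('...')")
# which would execute an arbitrary command on the database host.
```

Nothing in the code says “backdoor”, which is exactly why such flaws can slip past a DBA or a code review.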

The DBA will always have the access privileges to attach a debugger to any database process, record all operations, reset functions and modify how the database system works. There are many libraries and tools for doing so, and each vendor provides its own proprietary tools for the purpose. Numerous other open source and proprietary tools are also available for extracting or modifying data, such as DUDE (Database Unloading by Data Extraction). *

Under extraordinary circumstances the DBAs must be able to recover and reconstitute data files. That can be a complex and time-consuming procedure, especially if the database was encrypted. Such recovery must take place under the surveillance of qualified personnel. An attacker can always induce a system failure and apply changes to the database software during the recovery without it appearing suspicious.

SUMMARY
It has been reported that DoD presently operates over 750 data centers and runs thousands of applications. There must be thousands of databases passing information back and forth for interoperability. The risk that at least one of these databases will become a source of infection is a security exposure. Software protection, such as a properly administered “vault,” can offer protection in such cases. However, the software “vault” must not be part of an application; it must be a DBA responsibility.

Personnel with DBA responsibilities will remain the holders of the most critical role in the management of DoD information assets. Though a long list of security measures must be put in place, the elimination of insider compromise cannot be achieved by procedures alone. DBAs, as well as all personnel auditing the functions of the DBA, deserve at least a SECRET-level clearance; a TOP SECRET clearance is warranted for all DBA-related positions involving warfare. Whether such increases in security can be achieved with the current DoD reliance on contractor personnel remains to be demonstrated.

The current proliferation of diverse databases cannot be fixed in the short run. The best one can do is to reduce the number of DBAs in DoD, assure their security clearances, increase the share of civil service personnel and conduct the oversight of database surveillance procedures exclusively through military officers.

* http://www.ora600.nl/introduction.htm

Cracking Passwords With a Rented Computer

With a few institutional exceptions, most amateur attackers have always been prevented by limited computing resources from performing the billions of computations required to break passwords. That has now changed. Powerful computing capacity has become readily accessible to anyone in the world for only a small expense. Cyber crime costs less and is easier to commit. Password-cracking software is readily available; all it takes is sufficient computing power to apply it, such as the WPA Cracker. *

The new computing services are based on specialized semi-conductors, which offer Graphics Processing Unit (GPU) capabilities. These have recently begun making computational inroads as a replacement for general-purpose microprocessors that are used for more conventional information processing. GPUs have migrated into computationally intensive applications such as oil exploration, scientific image processing, linear algebra, image reconstruction and stock options pricing determination.

GPU-assisted servers, previously available only in supercomputers, can now be rented as a cloud hosting service. ** For instance, a commercial hosting service can be programmed to attack the Secure Hash Algorithm (SHA), one of a number of cryptographic hashes published by the National Institute of Standards and Technology as a U.S. Federal Information Processing Standard. SHA-1 is a 160-bit hash function designed by the National Security Agency (NSA) as part of the Digital Signature Algorithm. In less than an hour, the available “cracking” software can examine very large tables using several hundred “cloud service” clusters containing GPU processors.
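The brute-force principle itself fits in a few lines. The toy sketch below hashes every candidate password with SHA-1 until one matches a stolen digest; rented GPU clusters do exactly this, only billions of times faster and against far larger keyspaces:

```python
# Toy brute-force: try every candidate in a small keyspace until the
# SHA-1 digest matches. Real attacks use GPUs and much larger keyspaces.
import hashlib
import itertools
import string


def crack_sha1(target_hex, charset=string.ascii_lowercase, max_len=4):
    """Return the password whose SHA-1 digest is target_hex, or None."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha1(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None


stolen = hashlib.sha1(b"war").hexdigest()   # pretend this digest was stolen
print(crack_sha1(stolen))                    # war
```

A lowercase 4-character space is under half a million candidates, trivially searched on a laptop; each added character or character class multiplies the work, which is precisely the gap that cheap rented GPU capacity has closed.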

SUMMARY
It is easy to use Amazon EC2 services to run decryption jobs on GPU servers. All it takes is a simple log-on and a credit card. Amazon charges 28 cents per minute for such services, and it may take only a few minutes to break a password. For such a low price, less than the cost of a round of rifle ammunition, DoD is now exposed to attacks from many more aggressors.


* http://www.wpacracker.com/
** http://www.infoworld.com/t/data-security/amazon-ec2-enables-brute-force-attacks-the-cheap-447

Applicability of DISA DMZ to Cyber Operations

The Defense Information Systems Agency (DISA) has just announced the creation of a demilitarized zone (DMZ) for unclassified DoD applications. The objective of the DMZ is to control access and improve security between the public Internet and the Unclassified but Sensitive IP Router Network (NIPRNet). Implementation will take about two years and is supposed to roll out across an estimated 15,000 DoD networks.

In computer security terms, the DoD DMZ will be an isolated sub-network that processes all intra-enterprise transactions for an estimated four million-plus client computers before exposing them to any untrusted network such as the Internet.

The purpose of the DoD DMZ is to add an additional layer of security to DoD local area networks (LANs) and wide area networks (WANs). An external attacker will then have access only to the perimeter defenses of the DMZ. This requires that none of DoD's 15,000 networks have any computer ports, whether on client computers or on servers, exposed to access from the Internet, with the exception of designated DoD web-based applications. It is expected that the DoD DMZ will deflect almost all known attack methods. However, it will still be left to human operators to discover and deal with the anomalies detected by monitoring software.

As cyber attack methods evolve, it is likely that there will be a shift from software-based, automatic detection methods toward increased reliance on the human intelligence of the guardians of the DMZ.

Under conditions of a concentrated cyber attack, the number of transactions that must be processed and passed through the DoD DMZ can approach tens of thousands of events per minute. The capacity of a DMZ must therefore be designed to handle exceptionally large peak traffic. The increased complexity of zero-day attacks will place an added burden on the diagnostic methods that are in place.

The servers most vulnerable to external attack are those providing services to users who do business outside their local networks, such as e-mail, web and Domain Name System (DNS) servers. Because of their increased potential for compromise, such server clusters will have to be placed in their own sub-network to contain an intruder who succeeds in attacking them. Servers within the DMZ will therefore be granted only limited connectivity to designated servers on the internal network as an added precaution.

Communication among servers within the DMZ may also have to be restricted. This allows servers within the DMZ to provide services to both the internal and external networks, while allowing DMZ operators to cut off traffic when intervening controls indicate that traffic among DMZ servers or with external networks has been compromised.
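The restriction just described is, at bottom, a default-deny allowlist: a DMZ server may talk only to explicitly designated hosts and ports, and everything else, including DMZ-to-DMZ traffic, is refused. The host names and ports below are invented for illustration:

```python
# Default-deny flow policy: only explicitly designated flows are allowed.
ALLOWED_FLOWS = {
    ("dmz-web-1",  "internal-app-1", 8443),  # web front end -> app tier
    ("dmz-mail-1", "internal-smtp",  25),    # mail relay -> internal SMTP
}


def flow_permitted(src, dst, port):
    """A flow is permitted only if it appears on the allowlist."""
    return (src, dst, port) in ALLOWED_FLOWS


print(flow_permitted("dmz-web-1", "internal-app-1", 8443))  # True
print(flow_permitted("dmz-web-1", "internal-db-1", 1521))   # False
print(flow_permitted("dmz-web-1", "dmz-mail-1", 25))        # False (DMZ-to-DMZ)
```

Cutting off compromised traffic then amounts to removing entries from the allowlist, which is an operational action rather than a redesign of the network.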

Simultaneously with the creation of the DMZ, DISA is also implementing a DoD central command center (DCC). The DCC will provide continuous oversight of DISA's network as well as of 13 subordinate regional operations centers. The center will employ a mix of 220 contractors, civilian employees and military personnel, and is expected to be fully operational when DISA moves to Ft. Meade late in 2011.

SUMMARY
The construction of the DCC and the creation of the NIPRNet DMZ are milestone events in building more defensible cyber operations for DoD. These are the right moves in the right direction, and an indication that, under the direction of USCYBERCOM, DISA is progressing in its support of cyber operations.

It remains to be shown how technically effective the new DMZ will be. By creating one or more sub-networks that screen incoming and outgoing traffic, DISA will be adding delay (latency) to all of its transactions. Some transactions will be dropped and will therefore require positive confirmation for critical messages, which will increase traffic volume.

Current NIPRNet e-mails already show delays, which will surely increase as additional layers of security monitoring are added. If the new DMZ is an add-on to the already existing security methods, the compounding effects are likely to slow all traffic further.

The DCC or its subordinate points of control will have to deal with requests for access to Internet portals from NIPRNet computers via the DMZ. From an administrative standpoint, maintaining a directory of permitted access privileges could represent a large workload.

How the new DMZ will deal with SIPRNet communications, which can tolerate less latency, is not known. DISA will ultimately have to disclose the technical design of its DMZ and how it will handle peak loads, and show how the DMZ will interact with the assurance software already in place on existing networks.

Whether the ultimate Network Operations Center (NOC) for DoD, now renamed the DCC, can carry out the task of acting as the sentry of last resort for cyber operations remains to be demonstrated. The DCC will have to deal not only with the 13 subordinate regional operations centers under DISA's control, but also with a large number of Component NOCs, each functioning under a different concept of operations and deploying different software.

Whether a workforce of only about 220 has the capacity to coordinate the designs of multiple Component NOCs while also operating in a high-alert mode 24/7 is open to question. If the DCC is the hub of DoD-wide cyber operations, the presence of contractors runs contrary to the objective of making cyber warfare a combat capability of the USA.

Uptime Performance for Cyber Operations

The reliability of end-to-end transaction processing for cyber operations is one of the most important metrics dictating the design of networks. Under conditions of information warfare, seconds, not minutes, will matter.

It is necessary to reach agreement on how to measure systems uptime. The reliability of a network cannot be isolated within the Army, Navy, Marine Corps or Air Force. Under conditions of information operations, the uptime of a DoD network will be the combined result of the uptime of every participating network.

The calculation of network uptime using undefined “average” metrics is misleading. Is uptime averaged over minutes, hours or days? Is it measured at the user's keyboard or at the data center? Is it measured as the number of transactions that exceed a standard, or as the number that fall below a defined threshold? Or will the network operators resort to surveying a random sample of users to gauge satisfaction? Will such a sample be taken at maximum peak load or during average business hours?

The following illustrates a valid approach to measuring uptime:


1. The time interval over which uptime is measured must be specified. That could be seconds or hours, depending on user-tolerable response times. In the illustration the downtime increments were chosen as five minutes, but any interval could be used.
2. The number of transactions (users or seats, in this case) that miss a defined standard must be counted. That standard could be more than 200 milliseconds (for a Google search) or less than five minutes when downloading geographic data.
3. The SLA (Service Level Agreement) non-performance standard is defined not as uptime but as downtime exceeding five minutes. 99% uptime sounds good until you realize that it allows, on average, 87.6 hours of downtime per year.
4. Overall system performance can be examined as a frequency of failures (green or red), or as a summary over a 30-minute period.
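The 87.6-hour figure in point 3 follows directly from the arithmetic; a quick sketch (the uptime percentages are examples, not DoD targets):

```python
# How much downtime per year an "N nines" uptime SLA actually permits.
HOURS_PER_YEAR = 365 * 24  # 8,760

for uptime in (0.99, 0.999, 0.9999):
    downtime_hours = (1 - uptime) * HOURS_PER_YEAR
    print(f"{uptime:.2%} uptime -> {downtime_hours:.1f} hours down per year")
```

At 99% the allowance is 87.6 hours per year; each additional “nine” cuts it by a factor of ten, which is why the SLA must state the percentage, the measurement interval and the measurement point together.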

When designing for network reliability one must consider whether the network has a single point of failure or whether it is redundant. Cascaded single points of transaction processing show the following downtimes:



If cyber operations use a redundant design (two identical system processes running in parallel), then overall system reliability shows remarkable uptime improvement. Where automatic fail-over assures low failure rates, that approach should be pursued for all critical applications. Virtualization makes fail-over economically feasible:


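The contrast between cascaded and redundant designs follows from standard reliability arithmetic: components in series multiply their availabilities, while two parallel copies fail only when both fail. The component availabilities below are illustrative, not DoD figures:

```python
# Series vs. parallel availability arithmetic.

def series(*availabilities):
    """Availability of components chained in series (all must be up)."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result


def parallel(a, n=2):
    """Availability of n redundant copies (system fails only if all fail)."""
    return 1 - (1 - a) ** n


# Five cascaded single points of failure, each 99.5% available:
chain = series(*[0.995] * 5)
print(f"cascaded:  {chain:.4f}")    # ~0.9752, i.e. ~217 hours down per year

# One such component duplicated with automatic fail-over:
dup = parallel(0.995)
print(f"redundant: {dup:.6f}")      # 0.999975, i.e. ~13 minutes down per year
```

The asymmetry is the whole argument for fail-over: chaining degrades availability multiplicatively, while duplication attacks the failure probability itself.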

SUMMARY
Contractor-defined DoD Service Level Agreements are inconsistent in their definitions as well as in their calculation of uptime/downtime metrics. With increased dependence on multi-Component interoperability it is necessary to standardize the uptime evaluation methods. That will make it possible to start predicting the reliability of complex networks in the systems engineering of cyber operations.

Linux for DoD?

Prime Minister Vladimir Putin signed a 20-page executive order requiring all public institutions in Russia to replace proprietary software, developed by companies like Microsoft and Adobe, with free open-source alternatives by 2015. Such a move will save billions of dollars in licensing fees, but Mr. Putin's motives are not strictly economic. In all likelihood, his real fear is that Russia's growing dependence on proprietary software, especially programs sold by foreign vendors, has implications for the country's national security. Free open-source software, by its nature, is less likely to feature secret back doors. *

There are also indications that China, Saudi Arabia, Turkey and Iran are attempting to switch from proprietary software made in the USA to open source software, primarily Linux.

The potential for reaping substantial savings is not the primary incentive for decoupling from US vendors. The motivation is to contain, within security-defined boundaries, the vulnerability to Internet-conveyed attacks, and to manage the exfiltration of information by internal sources.

The Russian government will now start managing its open source Linux environment so that it can add security features to a private, limited version of the operating system. This may also include control over additional software that provides security features.

Can DoD attempt the standardization of operating systems by adopting a security-enhanced version of Linux? Can USCYBERCOM implement such a change as a way of improving network security while reducing costs? Will the DoD Components oppose such a move?

It turns out that over 40% of total DoD IT spending is already in the hands of Agencies, not the Components. The Agencies control about twice as much IT money as remains with the Components.

By far the greatest share of Agency IT spending is in DISA, now controlled by USCYBERCOM, which is therefore in a position to dictate the formation of a secure DoD infrastructure. Such a move would have economic as well as security advantages, since DoD infrastructure costs are now over 50% of total IT spending and increasingly divert funds from innovation to security assurance. The reason DoD IT infrastructure spending is so high, and why so little money is available for innovation, is that there are now hundreds of duplicate infrastructures within the Components, plus the additional cost of supporting the DISA infrastructure.

The role of the Components would then have to be restricted to the development and operation of applications, all riding on a shared DoD infrastructure that provides most of the required security features. The Components would have to stop funding projects in which each develops its own customized infrastructure and security protection.

SUMMARY
The time has come for DoD to start considering the adoption of an open source operating system, such as one of the versions of Linux. What would make a DoD Linux unique are the security add-ons that would remove most of the rapidly changing security features from application servers and from client computers.

The responsibility for implementing such a change should rest with USCYBERCOM. A controlled version of a DoD-specific operating system is likely to offer a much smaller “security risk surface” than generic software that is readily available for examination and exploitation by attackers.



* http://online.wsj.com/article/SB10001424052748704415104576065641376054226.html?mod=googlenews_wsj

App Stores

Amazon, Google and Apple cloud services are pursuing a new approach to delivering applications to customers. Application store developer programs enable the downloading of pre-tested applications to desktops, laptops or hand-held devices. In this way even individual developers can design and then market apps to tens of millions of customers.

The entire App Store concept is based on making use of the reliable infrastructures of Amazon, Google and Apple. Developers can benefit from the services offered by the various App Stores. Technical tools, including application development methods, are offered through a proprietary portal. Apps are instantly delivered to customers using convenient self-service account-management tools. At this writing there were some 500,000 Google apps, 600,000 Apple apps and 1,000 Amazon apps.

The sheer number of apps available today makes it hard for customers to find high-quality, relevant products – and developers similarly struggle to get their apps noticed. Each App Store has merchandising features designed to help customers find and discover relevant products from a vast selection of readily available pre-tested programs.

It costs developers only $25 per year to participate in Amazon's program; Google's costs $90 per year. Typically, the cloud vendor instantly pays the developer 70% of the list price of an app.

For instance, the Google Apps Marketplace offers products that have been reviewed both by Google and by customers, with ranked ratings shown for each app. The full text of user reviews is included, along with detailed specifications and feature lists. Prices range from free offerings to very modest annual subscriptions. Apps can either be downloaded to a local computing device or run on hosted “clouds.” An example of a Google Apps listing is shown below:


SUMMARY
The availability of computer applications from App Stores is a significant new development that alters how systems should be acquired. By acquiring ready-to-use, rated applications, customers can bypass the extremely costly process of custom application development. Most importantly, costly application code testing and verification can largely be avoided, provided an application has received trusted acceptance from reviewers. Applications from stores can be hosted either on the DoD infrastructure or on commercial infrastructures such as those provided by Microsoft, Amazon, Google or Apple. All of the required security, reliability, data-management, fail-over and performance-assurance features would then be provided as services from the infrastructure rather than from the applications.

DoD needs to offer a wide selection of standard, tested applications to displace the hundreds of local applications currently custom-programmed by contractors to meet local needs. DoD apps will also have to meet portability programming standards so that they can be hosted on the DoD infrastructure without changes in configuration.

The products of App Stores will bring closer the time when applications can be added to DoD networks in a fraction of the time, and at a fraction of the cost, that they presently take.

The DISA Offering of Cloud Services

The Defense Information Systems Agency has just announced that it is uniquely positioned to become the leading provider of cloud computing services to the Defense Department for both unclassified and classified data. *

There are at least 36 major variations in how cloud services can be implemented. The number of possible evaluation options, each requiring an analytical investment, is therefore very large.

For instance, the Cloud Security Alliance (CSA) has published 192 cloud evaluation guidelines. The criteria include: Cloud Computing Architecture; Governance; Risk Management; Legal Matters; Compliance and Audit; Lifecycle Management; Portability and Interoperability; Operations; Business Continuity; Disaster Recovery; Incident Response; Remediation; Encryption; Key Management and Access Management. **

The European Network and Information Security Agency (ENISA) has listed 53 vulnerabilities that impact 23 information assets in 35 risk categories. ENISA details 7 Policy and Organizational Risks; 12 Technical Risks; 4 Legal Risks and 20 Risks Not Specific to the Cloud. These risk classes are further subdivided into an elaborate taxonomy that covers topics such as: Personnel security; Supply-chain assurance; Operational security; Software assurance; Patch management; Network architecture; Host architecture; Resource provisioning; Authorization; Identity provisioning; Key management; Encryption; Data and Services Portability; Business Continuity Management; Incident management; Physical security and Environmental controls. ***

Any evaluation of cloud computing should consider acquisition options from qualified cloud firms such as Amazon; AT&T Synaptic Hosting; BlueLock Virtual Cloud Computing; Enomaly; GoGrid; Google; Hosting.com; Microsoft Azure; NetSuite; Logica; Rackspace Cloud; RightScale; Salesforce.com; Terremark vCloud Express and Unisys Secure Cloud. Each offers combinations of features and functions with varying degrees of “lock-in” to its offerings. Among these considerations, the technical role of hypervisors warrants special attention.

SUMMARY
For DoD to launch into cloud services will require a major effort to define exactly what services DISA will offer, what the costs will be, and what provisions will be made to answer the questions raised by the CSA and ENISA.

DISA will have to announce exactly what its offering will be in order to gain a large share of IT data center services, estimated to be worth at least $10 billion per year and now largely operated by contractors.


* http://www.nextgov.com/nextgov/ng_20110103_7911.php?oref=topstory
** Cloud Security Alliance, Security Guidance for Critical Areas of Focus in Cloud Computing V2.1, December 2009. http://www.cloudsecurityalliance.org/
*** ENISA, Benefits, Risks and Recommendations for Information Security (125 pages), and Cloud Computing Information Assurance Network (24 pages), November 2009, http://www.enisa.europa.eu/

Data Center Consolidation is Feasible

According to Vivek Kundra, the US CIO, there were 2,094 Federal Data Centers. DoD operated 772 data centers. In each case, a “data center” was defined primarily as any room that is greater than 500 square feet and devoted to data processing.

The 500-square-foot definition no longer holds up as the technologies shrink. It is now possible to fit into a 20x8-ft. shipping container a formidable capacity of up to 29.5 petabytes of storage and up to 46,080 CPU cores of processing power.

The economies of scale of data centers of 300,000 to 500,000 sq. ft. show a dramatic lowering of information processing costs, huge decreases in operating expenses and reductions in staff, while also improving the reliability and latency of what would then become a “server farm.” Examples of the construction of such huge data centers can be seen at firms such as Apple and Facebook:


Apple data center in Maiden, North Carolina


Facebook data center in North Carolina

There are many new firms that build and equip data centers as investment ventures. They build highly efficient large facilities and then lease them either as totally dedicated facilities for a particular organization or as “cages” available for partial occupancy. In the case of companies such as Apple or Facebook, the computer configuration is standardized to meet the company's proprietary system architecture. Partial-occupancy cages offer the customer greater flexibility to install specific software.

To illustrate with only a small sample of firms: the data center venture firm Sabey is currently building a 350,000 sq. ft. data center for Dell. * CoreSite offers several locations with finished “wholesale” data center space for users who seek turnkey space that can be deployed quickly. ** Equinix operates 22 data centers in the USA, 7 in Europe and 5 in Asia; some of the larger Equinix data centers exceed 200,000 sq. ft. *** Rackspace operates 9 data centers, including managed hosting. **** Interxion operates 28 sites in Europe. *****

SUMMARY
US Government, and particularly DoD, data centers have been constructed over the past thirty years. They do not reflect the economies of scale that have since become available, primarily on account of ample optic transmission capacity. Meanwhile the cost of hardware has shrunk, while the costs of electricity and operating manpower have risen steadily to meet an exponential growth in the demand for services.

The existing data centers have their origins in separate contracts that dictated how computing facilities would be organized. It is clear that the current proliferation of data centers cannot support the increasing demands for security and reliability. With capital costs for the construction of economically viable data centers now approaching $500 million, it is unlikely that the needed capital will be available given the looming DoD budget cuts.

The current FY10 O&M (operations and maintenance) costs of IT are $25.1 billion. This makes the leasing of DoD-dedicated data centers, constructed by any one of many commercial firms, affordable.

The total DoD user population is about eight million. Its workload would be only a fraction of that of high-frequency-use firms such as Facebook, which processes transactions for over 600 million customers in less than 200 milliseconds. Even with provisions for redundancy, the consolidation of computing services into half a dozen Facebook-like data centers is a realistic option.


* http://www.sabey.com/real_estate/data_centers_main.html
**http://www.datacenterknowledge.com/archives/category/crg-west/ 
***http://www.equinix.com/data-center-expertise/platform-equinix/ 
****http://www.rackspace.com/managed_hosting/private_cloud/index.php
*****http://www.interxion.com/About-Interxion/ 

Measuring Transaction Latency

The goal of Information Superiority is to deliver a capability where every data collector makes information available, in real time, for use by all other warfare nodes, and every sensor is connected via standard interfaces to make local data globally accessible. *

What metric will confirm that Information Superiority is achieved?

The simple answer is that DoD will have to operate a network where the latency (defined as request-to-response time) will always meet the required response time within tightly defined statistical control limits.

In warfare situations the required latency will be dictated by the speed of the response to react to a threat. In an administrative situation the latency will be dictated by the workflow of business processes.
Unfortunately, in DoD there is no separation between the latency of warfare transactions and the processing of administrative transactions; they are intermingled on the same networks. Only networks dedicated to the control of weapons can be exempted from DoD network latency standards.

In any discussion of DoD networks, the latency metric must become one of the primary criteria for the design of circuits, data centers, security and applications software. The latency of an end-to-end system is the result of the interactions of every one of its components. For this reason the current separation between the management of systems acquisition and ongoing operations is not tenable: acquisition and operations are interlinked by performance and cannot be treated separately.

The adoption of standard latency metrics for all of DoD is mandatory. With thousands of applications operating on thousands of networks, connecting hundreds of data centers and millions of personal computers, system responses will be paced by the latency of the slowest application.

Standard DoD latency metrics should mimic best commercial practices. Since the scope of Google is comparable to that of DoD, an illustration is in order.

Google shows latency results for various grades of service in one-minute increments: **

[Chart: Google App Engine serving latency, in milliseconds]
The latency in this case varies from 70 milliseconds to a single peak of 240 milliseconds, with a median of 120 milliseconds (0.120 seconds). That is comparable to the speed of a keystroke. For all practical purposes it can be considered near-instantaneous.

Latency has now become one of the key parameters for network architects and designers. Both Google and Facebook consider reduced latency one of the prime influences on user acceptance. The race to cut latency is particularly intense in the financial services industry, where direct optical links have been constructed to reduce the latency of financial trades to 13 milliseconds. ***

SUMMARY
Current DoD practice either overlooks or minimizes the importance of measuring transaction performance. For instance, one of the largest DoD systems is satisfied with reporting only multi-day “user satisfaction” indicators based on random-sample questionnaires. Such measurements are unsatisfactory because latency is an event attached to each individual transaction. It must be measured over a statistically complete set that includes every instance.
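Capturing every instance, rather than a sample, means instrumenting the transaction path itself. A minimal sketch, assuming a Python service, is a timing wrapper that records the latency of every call; the handler name and payload are hypothetical.

```python
import functools
import time

def record_latency(log):
    """Decorator that records the latency of every call, in
    milliseconds. Every instance is captured, not a sample."""
    def wrap(fn):
        @functools.wraps(fn)
        def timed(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                log.append((time.perf_counter() - start) * 1000.0)
        return timed
    return wrap

latencies = []

@record_latency(latencies)
def handle_transaction(payload):
    # Stand-in for a real transaction handler (hypothetical).
    return payload.upper()

for p in ["a", "b", "c"]:
    handle_transaction(p)

print(len(latencies))  # one measurement per transaction
```

With every instance logged, control-limit and percentile statistics can be computed over the complete population rather than inferred from questionnaires.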

Transaction latency will have to become one of the key design and operating parameters of cyber operations if DoD networks are to meet information warfare objectives.



 *http://www.insaonline.org/assets/files/NavyInformationDominanceVisionMay2010.pdf 
**http://code.google.com/status/appengine/detail/serving-java/2011/01/02#ae-trust-detail-static-get-small-nogzip-java-latency 
*** http://www.nytimes.com/2011/01/02/business/02speed.html 

Desktop Virtualization

Desktop virtualization offers extraordinary payoffs that could cut total U.S. Defense Department information technology spending by up to 12 percent. Depending on legacy configurations, numerous approaches are available to achieve that rapidly—it is not a “bridge too far.” The technology is mature; it is a path that already has been paved by thousands of commercial firms.

Proceeding with desktop virtualization calls for altering the information technology infrastructure, which establishes how data centers connect via communication networks to millions of user devices. It calls for an architecture that is extensible to meet the diverse needs of the U.S. Army, Navy, Marine Corps and Air Force. Projects to install desktop virtualization must enable a migration path from the costly “as is” configurations to what will evolve into a low-cost “to be” environment.

Desktop virtualization can reduce the Defense Department’s information technology spending substantially. The Department’s client computer population exceeds three million devices. Applying desktop virtualization to this population delivers operating savings as well as capital cost reductions.

Summary
The purpose of desktop virtualization is to free information technology management from more than three decades of labor-intensive client computing that was device-centered rather than network-centered. The Defense Department should now set a course that shifts the support of user computing to enterprise clouds, which can serve client computing from a much smaller number of data centers, over the network, to a much larger number of thin- and zero-client end-user devices.

The savings from desktop virtualization are attractive. The technology for installing it is mature. Thousands of commercial firms have demonstrated how to do it successfully. There is no reason why the Defense Department should not proceed with desktop virtualization without further delay.

For more details see  http://strassmann.com/pubs/afcea/2011-01.html