How to Acquire Enterprise Systems?


Much has been written about the problems with the acquisition of large DoD systems. An accepted view is that the existing acquisition processes are “broken”.

There are, however, useful lessons to be learned from cases where complex systems have been delivered on time and under budget. Here are a few rules:

Acquisitions Succeed Because of Sound Engineering.
Compliance with elaborate policies, guidelines and instructions that dictate how systems are built and operated is unlikely to give assurance that a system will be delivered on time, on budget and with all of the features that the users requested. There is a long list of GAO reports that attest to the consistent failures of programs, each of which followed the thousands of documents required to comply with practices dictated by DoD Directives 5000.1 and 5000.2.

Acquisition programs do not fail because they do not comply with practices dictated by DoD policies. Non-compliance is a consequence, not a cause. Failures are attributable to insufficient engineering leadership, to a lack of engineering expertise and the absence of good engineering practices.

The primary causes of program failure are technical and lie in the ways programs are organized. The primary skill of Program Executive Officers (PEOs) should not be how to steer an acquisition through a maze of regulations and procedural restrictions. The PEO’s job is to make technology work and to deliver it with acceptable risks.

Program cost accounting and progress schedules are only indicators of how well the engineering and the organization of work are proceeding. Measuring a PEO’s success takes more than what is accomplished during the acquisition phase. What matters are the follow-on operations and maintenance costs, as well as the delivery of capabilities that, with enhancements and upgrading, will keep a system viable for many decades after the PEO has moved on. Life-cycle operations and maintenance costs will always exceed the acquisition costs by a large multiplier.

Acquisitions Succeed Because Mistakes Are Avoided
Acquisitions do not fail because of a few bad decisions. Nobody is perfect and mistakes will happen. They fail because major errors add up and form a devastating failure-multiplier. It is the continuous aggregation of often well-intentioned measures that ultimately adds up to a mess. That can sometimes be corrected with more money or by accepting degraded performance, but hardly ever through an improved schedule.

Programs will fail when multiple flaws accumulate. It is the creep of defective engineering that will ultimately cripple any program. Two to five fundamental flaws can be fixed with budget supplements, schedule slippage or by deleting features. More than ten flaws are hard to overcome. An excessive accumulation of what I call “sins of commission” results in either redefining what needs to be done or letting the entire venture fade quietly into oblivion.

Here is a list of twenty-six things to avoid:

Eleven Sins of Program Managers:
- Never schedule the completion of a program investment past its financial break-even point. Break-evens should be less than two years for each development increment.
- Never plan a program that takes longer than half of its useful technology life. These days, technology life is less than five years.
- Never assume that the completion of all of the acquisition cycle milestones will guarantee the delivery of originally planned operating results. The farther the milestones are spread the greater the probability of changes in original requirements.
- Never fail in the delivery of projected metrics for operation and maintenance, such as cost, latency, reliability and availability.  Performance metrics should be used to keep track of the expected results during the entire program cycle.
- Never assume that multiple organizations that depend on a system will reach agreement about the proposed features or the definitions of data.
- Never trust requests for major system changes to be paid by someone who has no accountability for results.
- Never commit to project schedules that take longer to implement than your customer’s average time to the next reorganization. In DoD the time between reorganizations is about two years.
- Never hire consultants to deliver undefined requirements subject to unclear specifications on a time-and-materials basis.
- Never consider a program complete unless you have acceptance from a paying customer who is also directly accountable for results.
- Never add major new requirements to a project after the budget and the schedule are fixed.
- Never test a pilot program by relying on excess support manpower to demonstrate that the system will function for a selected small number of senior executives who have limited requirements. Pilot programs should always be tested at locations where operating personnel can fall back on back-up solutions.
Eight Sins of Technologists:
- Never adopt a new technology unless the top management in charge of operations understands intimately how it works.
- Never design a program that will simultaneously deliver new communications networks, new data centers, new databases and new concepts of operation as an integrated package.
- Never introduce a totally new and untested technology for immediate deployment.
- Never deploy totally innovative software simultaneously with totally innovative hardware.
- Never code a system in a programming language that is unknown to most of your staff.
- Never give programmers access to the network control center consoles that manage computer services.
- Never automate a critical process that does not have a human over-ride switch.
- Never rely on 100% reliability of a single communication link to retrieve data from a critical database.
Seven Sins of Program Planners:
- Never consider a system acquisition program to be complete without a full-scale pilot test.
- Never assume the recovery of any system from complete failure without frequent re-testing of fail-over readiness.
- Never depend on critical delivery dates without proof that a vendor has done it before.
- Never design a system that will not be able to operate under severely degraded operating conditions.
- Never consolidate data centers that operate with obsolete technology, dissatisfied customers and a management that is technically incompetent.
- Never convert an old application or database to a new one without being able to retrace your steps in case of failure.
- Never engage services of a contractor who has a program manager who lives in a trailer hitched to a pick-up truck (I did that, to my regret).

Build a Little, Test a Little
The above flaws are only a partial list. Systems managers will always discover new ways to stumble. In the DoD there is unlimited opportunity for adding to the inventory of misfortunes.

The best way to prevent a persistent accumulation of flaws is not to engage in the development of applications that take much longer than two years to implement after a prototype pilot has succeeded. The installation of applications, in progressive increments, must also fit into an overall design to assure interoperability. This calls for mandatory compliance with enforceable standards.

All applications must be sufficiently small and modular so that they can be allowed to fail. When that happens, most of the parts can then be re-purposed for restarting a system at a lower cost, because the price of all technologies keeps dropping.

DoD systems programs should never pursue a monolithic approach for the construction of a new telecommunications infrastructure, a new operating environment and new databases. New applications should never be developed to fit into their own dedicated telecommunication network, a separate operating environment and certainly not a self-contained database. Applications can be launched fast, but networks, operating environments and databases take a long time.

The telecommunications infrastructure should be an enterprise service. It should be designed incrementally so that it will support applications when they demand added services. Building a global grid calls for long-term funding and for a completely different management structure than what is required for applications.

The operating environment, and particularly the data centers, should be an enterprise service that can be built incrementally, so that its capacity, security and redundancy can support applications when they come on stream. A mix of government controlled and commercially managed operations calls for financial arrangements that transfer most of the technology risks to equipment vendors.

The databases are the keys to achieving the interoperability and the security of DoD systems. These should be constructed based on shared standards as well as enterprise-wide metadata and not on application-derived definitions.

Finding Payoffs
The acquisition of enterprise systems by DoD must recognize that the total life cycle costs of present investments in the DoD infrastructure can extend for many decades. What counts for such long-term investments is the discounted net present value of cash flows for both the infrastructure and the applications.

From a financial analysis standpoint, the discount factor for an application investment should be low if it is implemented rapidly. The discount interest rate would be a small multiple of the short-term Treasury bond rate.

The discount factor for investments in the DoD infrastructure must recognize the long time it takes to put communications and an operating environment in place while information technology changes at a rapid pace. Consequently, the discount interest rate for infrastructure investments will be a large multiple of the long-term Treasury bond rate. On balance, that favors spending money on infrastructure investments that speed up the implementation of applications at the lowest possible cost.

When viewed from the standpoint of life cycle costs, investments in new applications must also be traded off against long-term expenses for operations and maintenance. That will always favor spending money on accelerating the installation of applications.
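To make the tradeoff concrete, here is a minimal, purely illustrative sketch of the discounted-cash-flow comparison described above. The cash flows, time horizons and discount rates are invented assumptions, not DoD figures; the point is only that a high discount rate penalizes long-lead infrastructure spending far more than a rapidly implemented application.

```python
# Illustrative sketch only: compares the net present value (NPV) of a fast-payoff
# application against a slow infrastructure build-out. All cash flows (in $M) and
# discount rates are hypothetical assumptions, not DoD figures.

def npv(rate, cash_flows):
    """Discounted net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows))

# Application: $10M spent up front, $6M of benefits per year for 4 years,
# discounted at a low rate because it is implemented rapidly (assumed 5%).
application = npv(0.05, [-10, 6, 6, 6, 6])

# Infrastructure: $40M spread over 4 years, benefits arriving only later,
# discounted at a higher rate to reflect long lead times (assumed 15%).
infrastructure = npv(0.15, [-10, -10, -10, -10, 15, 15, 15, 15])

print(f"Application NPV:    {application:6.1f} $M")   # positive
print(f"Infrastructure NPV: {infrastructure:6.1f} $M")  # negative at this rate
```

With these assumed numbers the application shows a clearly positive NPV while the slow infrastructure build-out turns negative, which is the arithmetic behind favoring infrastructure spending only where it accelerates application delivery.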
Therefore, DoD policies should enable life-cycle tradeoffs between how systems are acquired and how they are operated. DoD policies must also guide how investments in the infrastructure are made in comparison with applications.
The principal flaws in the implementation of current acquisition policies are:

1. The PEOs are not accountable for total life-cycle operations and maintenance (O&M) costs of systems. Systems acquisition budgets are funded separately from O&M. There is no visibility of what ongoing costs of applications may be.
2. The costs of the DoD infrastructure (currently consuming over 56% of total IT costs), which includes networks and the operating environment, cannot be traded off against the life cycle costs of applications. Infrastructure costs are widely distributed into enclaves such as DISA and DLA while each Service keeps building additions to its own infrastructure.
3. DoD IT costs are broken up between multiple organizations, each attempting to garner funds to attain self-sufficiency.
4. Only a small fraction of DoD IT life cycle costs is subject to a formal budgetary process where tradeoffs in investment vs. O&M are made. Meanwhile, individual project managers are left to their own resources to concentrate on keeping their acquisition costs below budget.

Summary
The DoD systems acquisition processes can improve by accepting the primacy of engineering over management by procedure. The acquisition of computer applications will gain from avoiding the twenty-six managerial “sins” listed above.

However, the current acquisition policies will still fall far short of delivering what DoD needs. In the absence of enterprise-level tradeoffs between acquisition and O&M investments, IT is unmanageable. The acquisition of IT systems differs from the acquisition of weapons. O&M costs, inclusive of military, civilian and contractor labor (which do not show up as IT costs), overwhelm the acquisition costs, which are watched closely.

While the demands as well as the costs to support cyber operations are rising, the budgets are shrinking. Cost cuts and additional outsourcing will be insufficient to shift DoD from an emphasis on kinetic weapons to technologies that support information-based warfare.

The fundamental flaws are not Directives 5000.1 and 5000.2. It is the funding mechanism that has allowed DoD to accumulate over 15,000 disjointed networks, close to a thousand data centers and over 5,000 information technology projects that are not interoperable. The current method of funding projects has resulted in thousands of databases that are incompatible and therefore cannot support information warfare.

The DoD IT infrastructure has excessive costs while its performance is inferior compared with commercial practices. Applications are redundant while their performance is inadequate.

The fundamental flaw in the management of DoD systems is not how acquisition is conducted but how money is spent on the DoD information infrastructure. At present DoD operates an expensive and ineffective infrastructure. In FY10 it cost $19.5 billion, or 58% of total IT spending. This is expensive because it supports 1,900 projects that reside in different self-contained budgets, managed by separate organizations. Such diversity does not permit the making of tradeoffs between short-term, low-risk projects and long-term, high-risk infrastructure investments. As a result, all tradeoffs are made only locally, on a sub-optimal scale. Project management is then burdened with elaborate restrictions that try to achieve economic trade-offs by administrative means that ignore sound engineering methods.





Can DoD Manage the Delivery of GIG Objectives?


DoD operations continue to be hampered by the lack of interoperability. In order to run war operations in the last decade, DOD had to patch together disparate systems and networks. DoD has also been retrofitting systems after they are fielded to keep field operations working. This approach has been very expensive. It has been insufficient in meeting DOD’s stated goal of achieving a networked force where soldiers, weapon systems, platforms and sensors are linked and able to function jointly.

DOD has been looking to the Global Information Grid (GIG) to solve the interoperability problems since 2002. * But progress to date has fallen short of its objectives.

The GIG is a large and complex set of technology programs intended to provide an Internet-like connectivity to every device, including wireless and radio. It is supposed to allow users at any location to access data on demand from anywhere. Its purpose is to enable the sharing of information in real time. GIG should enable collaboration in decision-making regardless of which military service is the source of information.  The GIG would link weapon systems for greater joint command of battle situations as the US dependency on information-based warfare is rising rapidly.

According to a 2006 GAO report the GIG infrastructure will cost approximately $34 billion through 2011, though the rising costs of information assurance will increase that amount. ** How much of the current annual IT costs of $36.5 billion is allocated to communications is not clear. However, the duplication in over 150 DoD networks is increasingly shifting the costs of information management from applications that support the warfighter to the underlying infrastructure.

DOD’s investment in the GIG extends beyond development of the core network circuits. The purpose of the GIG is to integrate the majority of DoD weapon systems and application systems into a comprehensive network. Accomplishing these objectives involves reaching agreement on common standards and aligning systems with GIG-like services.

There are three decision processes that have so far impeded progress in advancing the GIG:
1. The Joint Systems process, which DOD uses to identify, assess, and prioritize military capability needs, has not come up with an architecture and design that can be the basis on which to build a functioning GIG;
2. The Planning, Programming, Budgeting, and Execution process, which guides how DOD allocates resources, has not been able to develop an acceptable fiscal and governance mechanism for funding enterprise-level investments;
3. The Defense Acquisition System, which governs how DOD acquires weapon and information technology systems, has not been reformed to support a GIG-like venture in which the technologies are subject to rapid change.

DOD’s decentralized management approach does not fit the GIG. It is not designed for the development of a large-scale Joint integration effort, which depends on a high degree of coordination and cooperation. Though the GIG calls for clear leadership and the authority to control budgets across organizational lines, no one is in charge of the GIG. There is no requisite authority or accountability for delivering GIG results.

The Office of the Secretary of Defense assigned overall leadership responsibility for the GIG to the DOD CIO, to include responsibility for developing, maintaining, and enforcing compliance with the GIG architecture; advising DOD leadership on GIG requirements; and providing enterprise-wide oversight of the development, integration, and implementation of the GIG. However, the DoD CIO has practically no influence on investment and program decisions by the military services and defense agencies, which determine investment priorities and manage program development efforts. Consequently, the services and defense agencies are unable to align their spending plans with GIG objectives.

DOD’s decision-making processes are not structured to support crosscutting, department-wide integration efforts. The existing processes were established to support discrete service- and platform-oriented programs rather than joint, net-centric, never-ending programs. This situation remains in place to this day. The Joint Capabilities Integration and Development System (JCIDS) process has been in place for almost a decade and has produced a large collection of policy papers but not much else. In the absence of collateral budgetary, PPBE and Acquisition process changes, JCIDS plans have limited use.

For instance, the DOD’s acquisition process continues to move programs forward only if there is sufficient advance knowledge that technologies can work as intended. At the current extremely rapid rate of technological change, information systems investments will become obsolete by the time the entire multi-phase (five-plus years) Acquisition process can ever be completed.

Joint, net-centric capabilities depend on the delivery of several related acquisition programs. This calls for rapid-turnaround integration on at least a quarterly basis, while an acquisition process clocked in years rather than weeks is not suited for managing interdependencies among diverse programs, especially when cooperation from several services and agencies is needed immediately to correct software defects.

SUMMARY
The Global Information Grid has been seen as the cornerstone of information superiority, as a key enabler of net-centric warfare, and as a basis for defense transformation. The GIG’s many systems were expected to make up a secure, reliable network to enable users to access and share information. Communications satellites, next-generation radios, and a military installations-based network with significantly expanded bandwidth were supposed to pave the way in which DOD expects to achieve information superiority over adversaries. The focus of the GIG was to ensure that all systems could connect to the network based on common standards and protocols. Some progress has been made but only at the price of rising costs and the increasing disconnection between the technologies of DoD and commercial IT.

Increased budgetary pressures are starting to modify DoD's use of the term "GIG". It is undergoing changes as new concepts emerge, such as Cyberspace Operations, GIG 2.0 or the Department of Defense Information Enterprise (DIE). Such ideas are in the process of revising the original version of the GIG, which delivered mostly circuit bandwidth but little else.

However, unless the way Joint Systems requirements are defined is revised, the Planning, Programming and Budgeting processes are reformed and Acquisition is restructured, the existing management processes will remain inadequate for delivering the desired integration and interoperability goals.

* DoD Directive 8100.1
** GAO-06-211

Google Identifies Difficulties in Detecting Web-based Malware


Google engineers analyzed four years' worth of data comprising 8 million websites and 160 million web pages from its Safe Browsing service, which warns users when they hit a website loaded with malware. Google said it displays 3 million warnings of unsafe websites to 400 million users a day.

The detection process is becoming more difficult due to evasion techniques employed by attackers that are designed to stop their websites from being flagged as bad.

The company uses a variety of methods to detect dangerous sites. It can test a site against a "virtual machine honeypot" where it can examine malware. It can make a record of an attack sequence. Other methods include ranking a website by reputation based on its hosting infrastructure, and another line of defense is antivirus software.

One of the ways hackers get around detection is to require the victim to perform a mouse click. This is a kind of social engineering attack, since the malicious payload appears only after a person interacts with the browser.

Browser emulators can be confused by attacks when the malicious code is scrambled, a method known as obfuscation. Google is also encountering "IP cloaking," where a malicious website will refuse to serve harmful content to certain IP ranges, such as those known to be used by security researchers. Google found that some 200,000 sites were using IP cloaking.

Antivirus software programs rely on signatures as one method to detect attacks. That software often misses code that has been "packed," or compressed in a way that it is unrecognizable but will still execute. Since it can take time for anti-virus vendors to refine their signatures and remove ones that cause false positives, the delay allows the malicious content to stay undetected.
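As an illustration of why signature matching misses packed code, here is a minimal sketch. The "payload" and "signature" below are harmless invented placeholders, and packing is simulated with ordinary zlib compression; real packers are more elaborate, but the effect on byte signatures is the same.

```python
# Minimal sketch of why byte-signature matching misses "packed" content.
# The payload and signature below are invented placeholders, not real malware.
import zlib

payload = b"<script>eval(unescape('%64%72%6f%70%70%65%72'))</script>"
signature = b"eval(unescape"          # naive byte signature a scanner might use

packed = zlib.compress(payload)       # "packing": compress or otherwise encode the code

print(signature in payload)           # True  -> unpacked payload is detected
print(signature in packed)            # False -> packed payload slips past the signature
print(zlib.decompress(packed) == payload)  # True -> yet it still unpacks to the same code
```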

While anti-virus vendors strive to improve detection rates, in real time they cannot adequately detect malicious content. Attackers use anti-virus products as test-beds before deploying malicious code.

SUMMARY
Malware detection software is progressing, but attackers are learning too. Interception of suspicious web pages is available, but is still insufficient. The best defense remains extreme personal caution when opening any messages.



Blog based on http://tech.slashdot.org/story/11/08/19/1328237/Google-Highlights-Trouble-In-Detecting-Malware?utm_source=headlines&utm_medium=email

Apache Hadoop – Ordering Large Scale Diverse Data


Apache Hadoop is open source software for consolidating, combining and analyzing large-scale data. It is a software library that supports distributed processing of vast amounts of data (terabytes and petabytes) across huge clusters of computers (thousands of nodes). It scales from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the software is designed to detect and handle failures at the application layer. It delivers a service on top of computer clusters, each of which may be prone to failures.

Relational database software excels at storing workloads consisting of structured data. Hadoop solves a different problem: fast, reliable analysis of structured data as well as unordered complex data. Hadoop is deployed alongside legacy IT systems to combine old data with new incoming data sets.

Hadoop provides reliable data storage through the Hadoop Distributed File System (HDFS) and high-performance parallel data processing through a technique called MapReduce.

Hadoop runs on commodity servers. Servers can be added or removed from a Hadoop cluster at will. A Hadoop server cluster is self-healing. It can run large-scale, high-performance processing jobs despite system changes.

Dozens of open source firms participate in the upgrading and maintenance of Hadoop/MapReduce. Critical bug fixes and new features are added to a public repository, which is subject to rigorous tests to ensure software reliability. All major firms that offer cloud computing services already employ Hadoop/MapReduce. *

A Map/Reduce job splits input data into independent chunks, which are processed as separate tasks in a completely parallel manner. The Map/Reduce software sorts the outputs of the individual “maps” on separate servers, which are then fed into the reduce process. The software takes care of scheduling tasks, monitoring progress and re-executing any failed tasks.
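Hadoop's native MapReduce API is Java-based; the following Python sketch, written in the spirit of Hadoop Streaming, only illustrates the map, shuffle/sort and reduce phases described above on a toy word-count problem, with the framework's shuffle step simulated in memory.

```python
# Toy illustration of the map -> shuffle/sort -> reduce flow described above.
# In a real Hadoop job each phase runs distributed across many servers; here the
# framework's grouping of map output by key is simulated with a sort + groupby.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Map: emit (word, 1) for every word in an input line."""
    for word in line.split():
        yield word.lower(), 1

def reducer(word, counts):
    """Reduce: sum the counts collected for one word."""
    return word, sum(counts)

lines = ["network traffic log", "network intrusion log", "traffic spike"]

# Map phase: each input chunk could be processed on a different server.
mapped = [pair for line in lines for pair in mapper(line)]

# Shuffle/sort phase: the framework groups map output by key before reducing.
mapped.sort(key=itemgetter(0))

# Reduce phase: one reducer call per distinct key.
for word, group in groupby(mapped, key=itemgetter(0)):
    print(reducer(word, (count for _, count in group)))
```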

The compute nodes and the storage nodes are identical. The Map/Reduce framework and the Hadoop Distributed File System run on the same set of servers. This configuration allows Hadoop to schedule tasks on the nodes where data is already present, resulting in high bandwidth across each cluster.

The Map/Reduce framework consists of a single master JobTracker and of separate TaskTrackers, one for each cluster node. The master is responsible for scheduling the jobs' component tasks on the individual servers, monitoring them and re-executing any failed tasks.

Applications specify the input/output locations and supply the map and reduce functions via implementations of the appropriate interfaces. These, and other job parameters, comprise the job configuration for each application.

SUMMARY
The masses of data, such as is currently tracked at multiple DoD network control centers, cannot be analyzed by existing relational database software. In addition, access to multiple web sites to extract answers to customized queries requires a new architecture for organizing how data is stored and then extracted.

The current DoD incoming traffic is too diverse. It shows high real time volume peak loads. The text, graphics and video content are unstructured. They do not fit the orderly arrangements for filing of records into pre-defined formats. The bandwidth that is required for the processing of incoming messages, especially from cyber operations and from intelligence sources, calls for the processing of data in a massively parallel computer in order to generate sub-second answers.

The conventional methods for processing information, such as the existing multi-billion-dollar Enterprise Resource Planning (ERP) systems, rely on a single massive master database for support.

A new approach, pioneered by Google ten years ago, relies on Hadoop/MapReduce methods for searching through masses of transactions that far exceed the volume currently seen in support of conventional business data processing.
With the rapid expansion of wireless communication from a wide variety of personal devices, DoD messages subject to processing by means of massively parallel computers will exceed the conventional workload of legacy applications.

DoD is now confronted with the challenge of not only cutting the costs of IT, but also of installing Hadoop/MapReduce software in the next few years. In this regard the current emphasis on the reduction in the number of data centers is misdirected. The goal for DoD is to start organizing its computing as a small number of massively parallel computer networks, with processing distributed to thousands of interconnected servers. Cutting the number of data centers without a collateral thrust for software architecture innovation may be a road that will only increase the obsolescence of DoD IT assets as Amazon, Baidu, Facebook, EBay, LinkedIn, Rackspace, Twitter and Yahoo forge ahead at an accelerating pace.

Meanwhile DoD is wrestling with how to afford funding the completion of projects started after FY01. DoD must start carving out a large share of its $36 billion+ IT budget to make sure that FY13-FY18 investments can catch up with the rapid progress now made by commercial firms.

After all, DoD is still spending more money on IT than anyone else in the world!


* http://wiki.apache.org/hadoop/PoweredBy

Project Einstein for Network Security


The Einstein Program is an intrusion detection system that monitors the network gateways of U.S. government agencies for unauthorized Internet traffic. Einstein 1 examined network traffic while Einstein 2 can look at the content of incoming and outgoing transactions. *

In 2007 an upgraded version of Einstein 2 was required for all government agencies except the Department of Defense and the intelligence agencies. That excludes 60% of all US government IT spending.

By 2008 Einstein was deployed in fifteen of the nearly six hundred agencies. Given such slow progress, the Department of Homeland Security (DHS) has asked for $459 million for FY12 to fund the installation of Einstein 3 and increase agency participation. Congress may not, however, support enlarged Einstein funding.

Einstein is the result of the E-Government Act of 2002. It is under the management of DHS, which is responsible for safeguarding all civilian agencies, which have over 2 million users. Einstein involves the centralization of all connections to the Internet in order to perform consolidated real-time intrusion prevention for all incoming and outgoing communications.

It supports 4 Federal Computer Incident Response Centers (FedCIRC).

Einstein 3 uses an intrusion prevention system to block all malware from ever reaching government sites.

The technical problems with Einstein implementations are as follows:
1. While Einstein 2 is only partially implemented, the testing of Einstein 3 has not yet started.
2. It is unlikely that Einstein 2 or 3 will have the capacity to defend against denial-of-service (DoS) attacks. Criminal bot masters can now rent out as many as 5 million bots. Government cyber attackers can command more than that. Potentially, each bot can generate up to 10 MB of traffic per second, which adds up to an onslaught of roughly 50 terabytes per second on a single IP address (see the sketch after this list). Defending against that does not scale.
3. One way of detecting intrusion anomalies is through correlation. New intrusions are compared with prior cases. Unless supercomputers are employed for this purpose, Einstein does not have the capacity to make correlations for a network that serves two million users.
4. Einstein depends on the authentication of signatures from trusted as well as untrusted commercial sources. That is not acceptable.
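A quick back-of-the-envelope check of the denial-of-service figures in item 2 above; the bot count and per-bot rate are the numbers cited in the text, the rest is unit conversion.

```python
# Back-of-the-envelope check of the DoS figures in item 2.
# Inputs (5 million bots, 10 MB/s per bot) come from the text; the rest is unit conversion.
bots = 5_000_000
per_bot_mb_per_s = 10                      # megabytes per second per bot (assumed sustained)

total_mb_per_s = bots * per_bot_mb_per_s   # 50,000,000 MB/s
total_tb_per_s = total_mb_per_s / 1_000_000

print(f"Aggregate flood: {total_tb_per_s:,.0f} TB/s "
      f"({total_tb_per_s * 8:,.0f} Tb/s) aimed at one IP address")
```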

SUMMARY
It is unlikely that Einstein can be expected to protect the civilian sector of the government against cyber attacks. Current discussions promoting extensions of Einstein into the US critical infrastructure (electricity, energy, communications, etc.) have little merit.


* Einstein, http://en.wikipedia.org/wiki/Einstein_(US-CERT_program)
** Communications of the ACM, August 2011, p.30 

Requirements for a DoD-wide E-mail System


In August 2011 the U.S. Army's transition to a single enterprise e-mail system was put on hold to work out technical problems. With only 9% of the migration completed (88,000 accounts), it is not surprising that the Army has found that the existing e-mail services need “cleaning up”. The Army has designated its enterprise e-mail as a Software-as-a-Service (SaaS) cloud solution, though it is only a "hosted" cloud solution.

A lack of standardization has been the main problem as the Army tries to install an enterprise e-mail program with DISA as the implementation manager. The Army e-mail would then be used as a prototype to be extended to more than five million DoD mailboxes. However, it is unlikely that such standardization can ever be achieved.

A single, standard common operating picture replacing all e-mail versions would facilitate conversion. However, that does not appear to be what DISA and the Army are doing. To accommodate diversity for rapidly changing mobile technologies, the Army is installing additions to the hosted e-mail. Such an approach is not likely to succeed because it will be difficult to maintain.

The DISA/Army DoD enterprise e-mail system may be flawed on account of the following:
1. The term SaaS refers to business software that runs exclusively on cloud servers, rather than on-premises at a customer site. The vendor provides a service that can be subscribed to and accessed over the Internet rather than a physical product that customers have to install and manage on their own. But the Army SaaS is not SaaS. It leaves parts of legacy software on desktops, laptops or smart-phones. Users receive an exact copy of what they have previously used, without change. Consequently, the standard software had to be modified to cope with a variety of conditions.
2. What DISA is delivering is a “hosted” solution with custom features, not standard low cost SaaS software that offers only pre-defined features.
3. A “hosted” e-mail can’t offer the benefits of real SaaS because it expends too many resources in maintaining multiple versions of both its own software as well as a broad matrix of supporting infrastructure.
4. Army customers insist on retaining different versions of what is already in place.  They are not able to share infrastructure and operational resources to the extent that real SaaS vendors can.

The Army SaaS needs to proceed with a program that has the following characteristics:
1. All customers share a single version of the e-mail software. This is a “multi-tenant” architecture;
2. All customers share the identical IT infrastructure and operational resources, using only browser Internet access;
3. Updates for features or program fixes are included with the service at no extra charge for every e-mail user;
4. World-class security for data center operations, applications, and data is concentrated at the SaaS cloud level;
5. Service level guarantees including 99.9999% uptime, backup, and disaster recovery;
6. Ongoing maintenance and performance tuning is performed by the SaaS vendor, without user involvement;
7. There are no perpetual licenses, only pay-as-you-go charges;
8. The system can be configured to adapt to the needs of individual customers without compromising the standard system (a minimal sketch follows this list). In order to serve many clients, most multi-tenant SaaS solutions offer configuration options to meet different needs. Configuration options are captured separately from the standard e-mail offering. The SaaS vendor will often guarantee configuration options. On-premise customizations are not supported.
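The following sketch illustrates the multi-tenant idea in items 1 and 8: every customer runs the same standard software, and only a small, separately stored configuration differs per tenant. The class, option and version names are invented for illustration and do not describe any actual DISA or vendor offering.

```python
# Sketch of the multi-tenant idea in items 1 and 8: one shared code base, with
# per-tenant options captured outside it. All names here are hypothetical.
from dataclasses import dataclass, field

STANDARD_FEATURES = {"webmail", "calendar", "cac_login", "mobile_sync"}

@dataclass
class TenantConfig:
    """Per-tenant options stored separately from the standard offering."""
    tenant: str
    mailbox_quota_gb: int = 4
    retention_days: int = 90
    enabled_features: set = field(default_factory=lambda: set(STANDARD_FEATURES))

def provision(config: TenantConfig) -> dict:
    """Every tenant gets the same software version; only configuration varies."""
    unknown = config.enabled_features - STANDARD_FEATURES
    if unknown:
        # On-premise style customizations are rejected, not merged into the code base.
        raise ValueError(f"Customization not supported in multi-tenant SaaS: {unknown}")
    return {"software_version": "enterprise-email 1.0",   # single shared version
            "tenant": config.tenant,
            "quota_gb": config.mailbox_quota_gb,
            "retention_days": config.retention_days}

print(provision(TenantConfig("army.mil", mailbox_quota_gb=10)))
```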

SUMMARY
The current Army and DISA effort to install a standard e-mail system, which is potentially a prototype for all of DoD, is stalled. The current solution, based on an extension of existing Microsoft services, is attempting to slide an e-mail replacement on top of the existing multiplicity of legacy solutions. Such migration may not be economically justified, as evidenced by Congressional refusal to support the budget for this program.

Perhaps a more decisive cutover approach may be not only feasible but also executable in less time.






New TDL-4 Botnet Threats

The TDL-4 botnet is a collection of Trojans with the capacity to inflict damage through increased technical sophistication as well as improved commercial exploitation. * A botnet consists of compromised computers connected to the Internet that are used mostly for malicious purposes. When a computer becomes compromised, it becomes part of a botnet. Botnets are usually controlled via standards-based network protocols such as Internet Relay Chat (IRC). TDL-4 uses the KAD peer-to-peer network for managing its control communications.

Millions of personal computers have been infected. The TDL-4 botnet is sneaky, evasive, hard to detect and difficult to disinfect. TDL-4 is the fourth generation of the TDL malware. TDL-4 packs all kinds of tricks to conceal itself deep within hard drives, evading most virus-scanning software as well as more proactive detection methods. It communicates in encrypted code, and contains a rootkit program that allows an operator access to a computer while hiding itself from the user, network administrators and automated security measures.

TDL-4 is malicious because it facilitates the creation of a botnet--a network of infected computers that can be used in concert to carry out tasks like distributed denial-of-service attacks, the installation of adware and spyware, or spamming. It currently has 4.5 million machines under its control and counting. The infecting file is usually found lurking around adult sites, pirated media hubs, and video and media storage sites.

The TDL-4 malware originators have extended the program functionality to encrypt communications between bots and the botnet command and control servers. The controllers of TDL have created a botnet that is protected against countermeasures and antivirus companies. Antivirus vendor Kaspersky has suggested that TDL-4 has installed nearly 30 different malicious programs onto the PCs it controls.

TDL-4 installs itself into the master boot record (MBR), which makes it difficult for the Operating System or any antivirus or security software to detect its code. Once inside a personal computer, TDL-4 takes up residence in the MBR, which means it can run before the computer is actually booted up. This MBR is rarely combed over by anti-virus software giving TDL added invisibility. Then, TDL-4 runs its own anti-virus program. It contains code to remove around 30 of the most common malicious programs, wiping an infected machine clean of everyday malware that might draw a user’s attention or cause an administrator to take a closer look. It can then download whatever malicious software it wants to in the place of the deleted programs. This version of TDL-4 also has added modules, which can be used to hide other malicious cyber actions.

An advanced encryption algorithm ensures that security and anti-virus products are unable to ‘sniff’ packets that it sends out onto the network. This helps to cloak information that is being sent from Command and Control (C&C) servers, and the information being returned by the TDL-4 Trojan.

Any attempt to take down the regular C&Cs can be circumvented by updating the list of C&Cs. Any C&C has a means to directly communicate over the encrypted channel to any host, so that it is virtually indestructible.

TDL-4's controllers use the botnet to plant additional malware on PCs, rent it out to others to conduct spam and phishing campaigns or for distributed denial-of-service attacks.
You cannot buy the source code. You can only rent time on a botnet service built with the TDL-4 toolkit, in essence replicating the business model of Software-as-a-Service.

The owners of the rootkit go to great lengths to make sure that its turf, literally the millions of computers that make up its army, is protected from other rogue malware. The defense mechanism includes its own antivirus to take out competing malware and eliminate the risk of potential conflicts, as well as the use of public P2P networks to link the slave computers to Command and Control servers.

The TDL-4 network is rented out at a high price to criminal organizations. With a rising number of PCs working for them, the owners of TDL-4 can launch impressive spamming and phishing campaigns, which can rake in fees. TDL-4 can be also used to plant other malicious pieces of malware, including “spybots”, hijacking toolbars, and even fake antivirus software. When a contract runs out, TDL-4 can remove these programs easily. TDL-4 is also removing the competition (malware it doesn’t sanction) while opening captured computers to software it prefers. It’s definitely the cyber version of organized crime and the start of a Mafia cyber war.

The continual development of the TDL-4 network, its advanced tactics, and its wide dispersal is the work of a concentrated criminal network with thousands of dollars devoted to development of its cyber operations. “Partner-programs”, most often operating through websites offering adult content, bootleg videos, or file storage, are paid $20-$200 for every 1000 computers they infect with TDL. Kaspersky estimates that each version of TDL costs its controllers about $250,000 to set up their network. Daily revenue from a botnet the size of TDL-4 can be in the many tens of thousands of dollars.

SUMMARY
At one point the “Conficker” Trojan was going to destroy the entire Internet as we knew it, but it is now contained. TDL-4 will continue to confound and frustrate security experts for years but this too shall pass, causing damage meanwhile. The problem is that the TDL-4 continues to evolve as defenses become more capable. TDL is multigenerational persistent malware, with new attack forms getting launched as profits from botnets keep rising.

* IEEE Computer, August 2011, p.16



Sandboxing Offers Security for Social Computing


Sandboxing protects a system by limiting what an application can do, such as accessing files on disk or other resources. Limiting the capabilities of an app to just those operations needed for viewing social computing messages will keep the rest of a system secure in the event that a message or an app is compromised.

Exploitation by a single virus is what makes it possible for downloaded malware to corrupt an entire machine. Web browsers and their plug-ins can be infected by Web pages. A malicious PDF or Word document can become a conveyor of infection. Firewalls, anti-malware software and other products aren't much help in cases of "spear-phishing" or zero-day attacks. Social computing communications, such as messages received over Facebook or Twitter, are one of the principal sources of malware, since they usually originate from the personal computers of family members of DoD personnel.

If a DoD person, using a secure desktop, laptop or smart phone, receives a social computing message, one can never be sure that the message is not also acting as a conveyor of malware. The right solution is to place all incoming traffic that originates from addresses other than .mil (from any unauthorized source) directly into a sandbox where it can be examined, but not transferred anywhere on the DoD network.
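A minimal sketch of that routing rule, assuming a message-handling hook where the sender's domain can be inspected; the function and queue names are invented for illustration only.

```python
# Minimal sketch of the routing rule above: traffic from any address outside .mil
# is diverted to an isolated sandbox queue instead of the normal delivery path.
# Function and queue names are hypothetical.

def route_message(sender_address: str, message: bytes, deliver, sandbox):
    """Send .mil traffic on the normal path; quarantine everything else."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    if domain == "mil" or domain.endswith(".mil"):
        deliver(message)            # trusted origin: normal delivery
    else:
        sandbox(message)            # untrusted origin: examine in isolation only

inbox, quarantine = [], []
route_message("analyst@army.mil", b"daily report", inbox.append, quarantine.append)
route_message("friend@example.com", b"funny video link", inbox.append, quarantine.append)
print(len(inbox), "delivered,", len(quarantine), "sandboxed")
```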

A sandbox is an isolated zone designed to run applications in a confined execution area where all functions can be tightly controlled, if not prohibited. Any installation, modification, or deletion of files and/or system information is restricted. From a software security standpoint, sandboxes provide an extremely limited code base. A sandbox prevents any decision-making on the user's behalf except to examine the incoming message. This protection is invisible and cannot be changed by the recipient.

Sandboxes should also be used to control the downloading of "applets" from diverse libraries such as Apple, Google and Amazon. Any such download would be automatically routed to a user's sandbox until network control can test, verify and legitimize the new application.

SUMMARY
All sandboxes must run as isolated virtual computers on separate servers that are controlled within an IaaS or PaaS cloud environment, on a private DoD cloud. Under no circumstance should DoD allow the creation of sandboxes on client desktop or laptop machines. The user's machine will then display the contents of the sandbox as a separate and isolated virtual-desktop window, which will prohibit pasting or cutting sandbox text or data unless authorized to do so by the network control center.

New Roles for the CIO

On August 8, 2011 the Director of OMB issued a memorandum for the purpose of enlarging the roles of the government’s Chief Information Officers. Its objective is to change the roles of Agency level CIOs from just policymaking to portfolio management for all IT.

Does the OMB memorandum change materially the roles of the CIO?

The OMB memorandum adds to the CIO responsibilities, as defined by the Clinger-Cohen Act of 1996, a recommendation that CIOs should work with the Chief Financial Officers and Chief Acquisition Officers as well as with the Investment Review Boards (IRBs). * Such coordination should have the goal of terminating one third of all underperforming IT investments by June 2012. Though this objective is useful, it cannot be construed as an enlargement of the CIO's role as portfolio manager for all of IT. The job of eliminating underperforming systems was always one of the principal CIO tasks.

The OMB memorandum adds to the CIO responsibilities the mission of managing “commodity IT”. CIOs are advised to pool agency purchasing power to improve the use of commodity IT. For instance, this concerns dealing with e-mail, collaboration tools, human resources or administration. To achieve that, CIOs should rely on “enterprise architectures” and the use of shared commercial services instead of standing up separate services. Although these recommendations are commendable, the government does not yet have an enterprise architectural design in place. It has been unsuccessful in organizing efforts in which commercial shared services are used in the government. Though efforts have been made to organize a pooled e-mail service in the Army, Congress has denied such funding. In the absence of established methods for pooling “commodity IT” funds, this enlargement is not executable.

The OMB memorandum adds to the CIO responsibilities a “program management” mission. This is largely a personnel management function. In the absence of administrative rules it is not apparent how a CIO, without authority, can conduct annual performance reviews of component CIOs. He cannot be accountable for the performance of IT program managers, especially where such personnel report to Acquisition officers. There is no way CIOs can carry out the OMB-dictated “program management” responsibilities in a manner that enlarges their authority.

The OMB memorandum assigns to the CIO the primary responsibility for implementation of information security programs that support the assets and missions of an agency. Such authority is subject to an examination of implementation in “CyberStat” sessions conducted by the Department of Homeland Security. In the absence of a qualified staff or the funding needed to carry out such responsibility, this enlargement of CIO responsibilities lacks an understanding of how security responsibilities are managed in agencies such as DoD, which accounts for more than half of total government IT spending. The proposed enlargement in security matters is not sufficiently explained to be credible.

The OMB memorandum requires the CIOs to participate in cross-agency portfolio management through the Federal CIO Council (CIOC).  The objective would be to reduce duplication of IT spending across agency boundaries. In the absence of changes in the budgeting processes it is not clear how the CIOC can take actions that would pool agency funds into a multi-agency program. The CIOC is a committee without fiscal power.  It cannot be seen as the basis for the enlargement of the powers of a CIO.

SUMMARY
Memorandum M-11-29 from the Director of OMB makes an attempt to increase the power of the federal CIOs. However, the memorandum lacks substance. Other than increasing the coordination between CIOs there is no evidence that the powers of the CIO would change in any way.

* http://www.whitehouse.gov/sites/default/files/omb/memoranda/2011/m11-29.pdf





Enterprise E-mail for DoD?

The Army reports* spending over $400 million annually in operating costs to support organization-specific e-mail systems. That supports 1.6 million mailboxes at a cost of $250 per mailbox, which does not include the costs of communications (reported within the DISA budget) or development costs.

An examination of available Software-as-a-Service (SaaS) e-mail services shows that richly featured enterprise e-mail services are already available for prices as low as $8/seat, inclusive of several mailboxes per user. There are also hosted e-mail services available at higher prices and with a range of features that far exceed current DoD requirements.

Most importantly, all of the available cloud SaaS e-mail services report less than 0.001% downtime.
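For perspective, converting that figure into minutes: 0.001% downtime corresponds to 99.999% availability, or roughly five minutes of outage per year.

```python
# Converting the quoted downtime figure into minutes per year (plain arithmetic).
minutes_per_year = 365 * 24 * 60         # 525,600 minutes
downtime_fraction = 0.001 / 100          # 0.001% expressed as a fraction
print(f"{downtime_fraction * minutes_per_year:.1f} minutes of downtime per year")
```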

An estimate of possible savings from migration to a standard DoD enterprise-wide e-mail comes to more than $1 billion in operating cost reductions.
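A back-of-the-envelope sketch of how such a savings estimate might be built. The Army figures ($400 million a year for 1.6 million mailboxes) come from the text and the DoD-wide count of roughly 5 million mailboxes from the earlier post; the assumed commercial SaaS rate of $50 per mailbox per year is an illustrative placeholder, not a quoted price.

```python
# Back-of-the-envelope sketch of the DoD-wide e-mail savings estimate.
# Army figures come from the text; the SaaS rate below is an illustrative assumption.
army_opex = 400_000_000
army_mailboxes = 1_600_000
cost_per_mailbox = army_opex / army_mailboxes             # $250 per mailbox per year

dod_mailboxes = 5_000_000
assumed_saas_rate = 50                                    # assumed $/mailbox/year, illustrative

current_cost = dod_mailboxes * cost_per_mailbox           # ~$1.25B per year
saas_cost = dod_mailboxes * assumed_saas_rate             # ~$0.25B per year
print(f"Current cost ~${current_cost/1e9:.2f}B, "
      f"SaaS cost ~${saas_cost/1e9:.2f}B, "
      f"saving ~${(current_cost - saas_cost)/1e9:.2f}B per year")
```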

What steps can be taken to deliver such savings?

The initial migration steps toward a cloud-hosted SaaS service, offered by DISA, are now taking place. The Army is replacing Army Knowledge On-Line (AKO) with Microsoft's proprietary web-based offering. With only 4% of mailboxes moved by the end of June 2011, the Army has experienced outages of e-mail service of over five hours. The cause of such outages is not clear, though the processing capacity of the nine DISA DECCs to maintain better than 99.999% availability is yet to be demonstrated.

DISA is migrating individual legacy mailboxes with the purpose of delivering e-mail without any effect on the user. To achieve that objective, modifications were added to the Microsoft standard offering. Consequently, even small variations in the Active Directory must be "cleaned up" before conversion can take place. That is hard to do. Active Directories are maintained at over 300 separate sites for the Army, each with a slightly different variation in software implementation. DISA is required to conduct the transfer of existing records, all past e-mails, all documents and all attachments without alteration.

The challenge of making a smooth transition of so many variables is not manageable given the condition of the legacy systems. As a result, undocumented exceptions had to be handled by adding help-desk personnel. The Army also found that it operated e-mail systems with inconsistent firewalls, had problems with variations in how Common Access Cards were implemented and had integration issues with different versions of Microsoft Vista, Outlook and Exchange.**

There are also added complexities, such as variations in local licensing agreements and security processes that make any migration of diverse legacy e-mails to a standard e-mail environment too difficult to achieve given the limitations on time and funding.

The Army's migration to the DISA cloud for the delivery of enterprise e-mail services is supposed to be a prototype for the rest of DoD to follow. That is receiving ample attention. For instance, a Congressional Armed Services committee has cut the Army's e-mail services plans by 98% of the $85.4 million FY12 request until better justification of spending plans is received, though some of the underlying technical issues have so far not received sufficient attention. At this rate the realization of a standard enterprise e-mail system for DoD, operated exclusively by DISA, is receding into the distant future.

SUMMARY
If the Army's approach to an enterprise e-mail system is to serve as a prototype for DoD, the migration from a customized to an enterprise standard must be simplified. It is inconceivable how the enormous variety of existing e-mail implementations within the Air Force, Navy, Marine Corps as well as within a multiplicity of Agencies can be wedged into a DoD-wide standard SaaS e-mail. The existing e-mail services have too many changes and modifications to be ported into a single standard environment without a huge expenditure for the coding of local fixes and for conversion software.

Consideration should be given to choosing a single low-cost, open source, highly secure, DoD-interoperable and upgradeable SaaS system as a standard. It should be extensible for additional features such as collaboration, information sharing and document management.

E-mails have a limited shelf life of only a few days. DoD components could convert to a private, secure SaaS cloud almost instantly, with only a short switch-over period. For archival purposes, DoD components could then temporarily operate dual e-mail systems until DoD standard processes take over all e-mail functions. If any of the selected archival records require retention, conversion utilities could be used to do that at a fraction of the enormous cost of the current scheme, which imposes backward compatibility for all legacy e-mails on the entire migration.


*DefenseSystems.com, July 2011, page 22
** Signal, August 2011, p. 10

From Network-Centric to End-Point Centric Defenses

DoD has over 10,000 networks in place. These are subject to changing attacks. In addition there are thousands of roaming wireless users as well as millions of desktops, laptops and smart phones. These devices must be protected for assured security.

It is not feasible to protect all of these points of vulnerability during transmission, even with encryption. Along the way, from point of origin to point of destination, there are hundreds of routers and switches that can be compromised. Since networks are connected, huge amounts of effort must be invested to provide universal security for all communications.

With traffic encrypted at the transport or data layer, network-based inspection for compromises is unrealistic, uneconomic and cannot be implemented. Keeping all of the network devices secure is unmanageable under current budgetary and manpower limitations.
Shifting security controls to the endpoint makes it possible to inspect all traffic irrespective of the technologies that are in place. Therefore, in the case of DoD, endpoint security becomes the most effective way of assuring secure delivery of all transactions. A diversity of threat countermeasures can be made available at the endpoints, as contrasted with the generic protection needed at all network levels.

Sophos Labs reports that there are more than 95,000 individual pieces of malicious code every day. A new infected Web page appears every few seconds. The content-based detection techniques that have been used for the past 30 years as network-centric defenses are now becoming ineffective against the mass of malicious code. In contrast, at the endpoint the visibility of the applications, data, behaviors and system uses can be used to make better decisions and to achieve better protection.

SUMMARY
The net effect of shifting from network-centric defenses to endpoint security makes it necessary for DoD to adopt private Platform-as-a-Service (PaaS) clouds as the architecture of information.

Individual firewalls and virus protection at the desktop, laptop or smart phone level are economically unaffordable. Endpoint security, at the PaaS server level, can manage security for thousands of virtual desktop computers with maximum efficiency.

The transfer from an emphasis on network-based security to endpoint security will not be easy. The organizations that manage these two different regimes are managerially separated and have separate budgets. It will require setting up an organizational framework for making tradeoffs about where to spend money for assuring the greatest possible protection of DoD systems.

2011 State of Virtualization and Cloud Computing

SearchDataCenter.com has just released survey results about the status of virtualization (http://searchdatacenter.techtarget.com/feature/State-of-virtualization-and-cloud-computing-2011) from 1,000 typical organizations.

The primary use of virtualization is to consolidate servers (59%). Only 15% of firms are using virtualization for cloud computing, which suggests that the endpoint of virtualization is being approached only slowly. Only 6% have no plans for virtualization. Some of the most popular uses for server virtualization are to improve disaster recovery (DR) and workload availability, dynamically allocate computing resources and maintain a standardized set of "golden images" that can be used to quickly deploy new virtual machines.

Top virtualization product deployments:
VMware 69%
Microsoft 12%
Citrix 4%
While Microsoft Hyper-V and Citrix XenServer remain relevant products, they trail vSphere by a considerable margin.

In 2011, 50% of respondents reported running fewer than 10 workloads per physical server, 35% run 10 to 20 workloads per server, 10% run 21 to 30 workloads per server, and 5% run more than 30 workloads per server.

Organizations can successfully deploy virtualization on almost any modern server, though the composition of “standard” hardware platforms has shifted in recent years to blades and large rack units. In 2011, 30% of respondents deploy virtualization on blade servers, 29% use large rack servers (2U and larger), 15% use 1U servers and 3% use other large SMP machines.

Hardware vendors for virtualization are Dell (43%), HP (35%) and IBM (13%); Oracle and Cisco account for only 5%. The dominant applications running on virtual servers are Web servers (71%) and databases (58%).

According to the DCD 2011 findings, cloud adoption is growing slowly: 64% of firms have not yet deployed cloud computing, while only 36% have deployed it or are in the process of doing so.

The ability to retain the initial investment in IT within the firm has been the single most compelling reason for private cloud adoption in 2011. Many businesses also leverage a private cloud for DR and business continuity. Beyond that, self-service and automation make a private cloud an attractive venture.

SUMMARY

New VMs are fast and easy to create, and the hardware is essentially free. If you need to duplicate your production environment to test a new application deployment, just use a VM tool to create and run a new VM in seconds. But once the job is done, what happens to the VM? Sitting on a disk somewhere in the physical infrastructure, it is easy to forget, and it will continue to consume valuable storage and processing resources without providing a meaningful return. This leads to virtual machine sprawl. For this reason, the current DoD focus exclusively on virtualization is likely to fall short of what can be achieved.
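
One practical countermeasure is a periodic sweep that flags VMs which have been powered off or untouched for a long time. The Python sketch below assumes a simple exported inventory (name, power state, last-used date) rather than any particular hypervisor API; the records and the 90-day threshold are illustrative only.

# Sketch: flag candidate "sprawl" VMs from an exported inventory (hypothetical format).
from datetime import datetime, timedelta

inventory = [
    {"name": "test-app-01", "powered_on": False, "last_used": "2011-02-15"},
    {"name": "prod-web-01", "powered_on": True,  "last_used": "2011-08-30"},
]

def stale_vms(vms, max_idle_days=90, today=None):
    # A VM is a sprawl candidate if it is powered off and unused past the cutoff.
    today = today or datetime.utcnow()
    cutoff = today - timedelta(days=max_idle_days)
    return [vm["name"] for vm in vms
            if not vm["powered_on"]
            and datetime.strptime(vm["last_used"], "%Y-%m-%d") < cutoff]

print(stale_vms(inventory))   # candidates for archiving or deletion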

There are, however, some common themes worth noting about when DoD should avoid virtualization. When DoD manages a proliferation of small environments with only a few servers or business applications, there is neither the justification nor the skill to add the technology. Any operation outfitted with older, proprietary or internally developed applications would not be suitable for virtualization. DoD efforts to consolidate data centers (and marginally small servers) will find it difficult to realize major cost reductions without migrating legacy servers into cloud operations.

Trojans Inside Computer Hardware

Globalization of the semiconductor industry and associated supply chains has made integrated circuits increasingly vulnerable to Trojans placed inside a microprocessor that executes designed microcode. A Trojan is a destructive program that masquerades as an application: the software initially appears to perform a desirable function for the user prior to installation, but then steals information or performs illegal system functions.

Vulnerabilities in the current integrated circuit (IC) development process have raised serious concerns about possible threats from hardware Trojans to military, financial, transportation, and electrical power systems.

An adversary can introduce a Trojan through an IC that will disable or destroy a system at some specific future time. Alternatively, an attacker can design a wire or some IC components to survive the testing phase but fail before the expected lifetime. A hardware Trojan can also covertly cause a system to leak confidential information.

Trojans can be implemented as hardware modifications to application-specific integrated circuits (ASICs), commercial off-the-shelf (COTS) parts, microprocessors, microcontrollers, network processors, or digital signal processors (DSPs), or as firmware modifications, for example to field-programmable gate array (FPGA) bit streams.

To ensure that an IC used by a client is authentic, either the developer must make the IC design and fabrication processes trustworthy or the client must verify the IC for trustworthiness. Because the former approach requires a trusted design center and foundry, it is expensive and economically infeasible given current trends in the globalization of IC design and fabrication. On the other hand, verifying trustworthiness requires a post-manufacturing step to validate conformance of the fabricated IC to the original functional and performance specifications, nothing more and nothing less.

Most Trojan detection methodologies assume the existence of secure reference ICs, which are obtained by arbitrarily selecting chips from a large batch of fabricated ICs and thoroughly testing them. This procedure assumes that Trojans are inserted into random ICs, but to do so, an attacker must use a different set of masks for selected chips, making such an effort unattractive. It is more viable for an attacker to insert a stealthy Trojan into every fabricated IC that passes manufacturing tests and trust validations, obviating the need for additional expensive masks. This raises the challenge of detecting Trojans in ICs without relying on a proven secure IC.
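
A simple way to picture this post-manufacturing validation, and its limits, is to drive the fabricated part and a golden functional model with the same test vectors and compare outputs. The Python sketch below simulates both sides; the 16-bit adder, the trigger pattern and the number of trials are all invented, and the example also shows why purely random vectors can miss a stealthy trigger.

# Sketch: compare a fabricated IC's responses against a golden functional model
# over random test vectors (both sides are simulated here for illustration).
import random

def golden_model(a, b):
    # Reference behavior per the original specification: a 16-bit adder.
    return (a + b) & 0xFFFF

def device_under_test(a, b):
    # Stand-in for the fabricated part. A stealthy Trojan may misbehave only on a
    # rare trigger pattern (hypothetical values below), which random vectors can miss.
    if a == 0xBEEF and b == 0xF00D:
        return 0
    return (a + b) & 0xFFFF

def conformance_check(trials=10000, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        a, b = rng.randrange(1 << 16), rng.randrange(1 << 16)
        if device_under_test(a, b) != golden_model(a, b):
            return False, (a, b)
    return True, None

# Very likely reports conformance even though a Trojan is present,
# because the trigger pattern is almost never exercised by random testing.
print(conformance_check())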

Current design methodologies provide multiple opportunities to insert Trojans that can go undetected. It is important to incorporate new design-for-trust strategies that prevent attackers from inserting Trojans into a design as well as effectively detect Trojans in fabricated circuits. ICs must be designed such that undetected changes are nearly impossible.

COTS components are commonly used in today’s systems. These components are usually designed and fabricated offshore and thus cannot be trusted. The challenge is to develop testing methodologies that consider COTS components’ specifications and functionality without having access to their internal structure. The internal details of components are no longer supplied by the original equipment manufacturer.

SUMMARY
Hardware has become a vulnerable link in the chain of trust in computing systems, and this weakness must be overcome. The problem of hardware security has gained significant attention during the past several years. The assumption that hardware is trustworthy and that security efforts need only focus on networks and software is no longer valid given the globalization of IC and system design and fabrication. Until DoD develops novel techniques to secure hardware, any computer application in the field can potentially be considered untrusted.

 1 Extracted from Computer, IEEE Computer Society, July 2011

The Problem with Cloud-Computing Standards

Cloud computing vendors are pursuing proprietary approaches to promote their offerings. Firms do not readily offer portability of cloud solutions from one vendor to another. The current tendency is to offer as little interoperability as possible. Cloud computing, especially IaaS and PaaS, can be compared to a hotel where guests can check in but find it difficult to ever check out.
However, some progress is being made, as several organizations have now established cloud standardization as an objective.

The Distributed Management Task Force (DMTF) has so far made the greatest progress. It has created the Open Virtualization Format (OVF), which provides a way to move virtual machines from one hosted platform to another. ANSI recognizes OVF as a standard, and it is also under consideration by ISO. IBM, Microsoft, Citrix and VMware lead the DMTF, which has a large number of global supporters.
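
To illustrate why OVF helps portability: an OVF package is described by a plain XML descriptor that any tooling can read. The Python sketch below parses a made-up, heavily simplified descriptor and lists the virtual systems and disk files it declares; it ignores the namespaces and schema details of real OVF files.

# Sketch: read a made-up, minimal OVF-style descriptor and list what it declares.
import xml.etree.ElementTree as ET

descriptor = """\
<Envelope>
  <References>
    <File id="file1" href="disk1.vmdk"/>
  </References>
  <VirtualSystem id="web-server-vm">
    <Name>web-server-vm</Name>
  </VirtualSystem>
</Envelope>"""

root = ET.fromstring(descriptor)
disks = [f.get("href") for f in root.iter("File")]
systems = [vs.get("id") for vs in root.iter("VirtualSystem")]
print("disks:", disks)              # files the package ships with
print("virtual systems:", systems)  # machines another platform could import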

The IEEE has two working groups in place. Each has so far published only Draft Guides, and the work will not be completed for at least two years.

The Open Grid Forum is attempting to create an Open Cloud Computing Interface, which remains work in progress.

The Organization for the Advancement of Structured Information Standards (OASIS) has two technical committees working on cloud standards, with no results so far.

SUMMARY
With the exception of the DMTF's work, which has limited applicability, the progress made so far on cloud standardization has not resulted in interoperability across diverse cloud offerings.
What matters now is the “eco structure” that surrounds the various cloud vendors, which includes software firms and cloud providers. The rapidly expanding cloud provider industry (software plus service offerings) supports primarily the dominant vendors. Service providers continue to be dispersed, but the concentration in cloud software clearly identifies VMware (with 70% market share) and Microsoft (with 23% market share) as the leaders.

From the standpoint of DoD, support of VMware as the de facto standard appears to be the safest approach to pursue for the time being.

Sand Boxes for Advanced Persistent Threats

McAfee’s VP of threat research in a recent blog post noted "The targeted compromises--known as 'Advanced Persistent Threats (APTs) … we are focused on are much more insidious and occur largely without public disclosures. They present a far greater threat to companies and governments, as the adversary is tenaciously persistent in achieving their objectives. The key to these intrusions is that the adversary is motivated by a massive hunger for secrets and intellectual property; this is different from the immediate financial gratification that drives much of cybercrime, another serious but more manageable threat."

The actual attack method is familiar. The compromises follow the standard procedure of targeted intrusions: a “spear-phishing” e-mail containing an exploit is sent to an individual with the right level of access at the company. The exploit, when opened on an unpatched system, will trigger a download of the implant malware.

That malware will then execute and initiate a backdoor communication channel to the Command & Control web server, interpreting the instructions encoded in hidden comments embedded in the webpage code. This is quickly followed by live intruders jumping onto the infected machine, escalating privileges and moving laterally within the organization to establish new persistent footholds via additional compromised machines running implant malware, while targeting the key data they came for and exfiltrating it quickly.
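
Since the implant in this pattern reads its instructions out of hidden comments in an otherwise ordinary webpage, one modest detection idea is to scan fetched pages for comments that look like encoded payloads. The Python sketch below is a toy heuristic, not a description of any deployed product; the thresholds and the sample page are invented.

# Toy heuristic: flag HTML comments that look like encoded command payloads
# (long bodies made almost entirely of base64-style characters). Thresholds are invented.
import re
import string

B64_CHARS = set(string.ascii_letters + string.digits + "+/=")

def suspicious_comments(html, min_len=40, min_b64_ratio=0.95):
    flagged = []
    for comment in re.findall(r"<!--(.*?)-->", html, flags=re.S):
        body = comment.strip()
        if len(body) >= min_len:
            ratio = sum(c in B64_CHARS for c in body) / len(body)
            if ratio >= min_b64_ratio:
                flagged.append(body[:60])
    return flagged

page = "<html><!-- aGVsbG8gd29ybGQhIGRvd25sb2FkIGFuZCBydW4= --><body>news</body></html>"
print(suspicious_comments(page, min_len=10))   # the hidden comment is flagged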

A recent study by the Intrepidus Group, which is behind the PhishMe.com awareness service that lets companies attempt to phish their own employees, reported findings based on 32 phishing scenarios tested against a total of 69,000 employees around the world. Here they are:
- 23% of people worldwide are vulnerable to targeted/spear phishing attacks;
- Phishing attacks that use an authoritative tone are 40% more successful than those that attempt to lure people through reward-giving;
- On average, 60% of corporate employees found susceptible to targeted spear phishing responded to the phishing emails within three hours of receiving them;
- People are less cautious when clicking on active links in emails than when they are asked for sensitive data.

SUMMARY
Given the tendency of users to be open to targeted attacks, the only solution is to isolate all traffic originating from unauthorized locations – that is, sources not on a “white” security list – into isolated “sand boxes”.

Sandboxing protects the system by limiting what an application can do, such as accessing files on an internal disk or any other desktop over the network. Limiting an app inside the sandbox to just the operations it needs to perform keeps the rest of the system secure in case a downloaded app is corrupt or compromised.
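
On a conventional operating system the same idea can be approximated by running an untrusted program as a child process with deliberately constrained resources. The Python sketch below, for a POSIX system, caps only CPU time and output file size; a real sandbox would also restrict file system paths, system calls and network access.

# Minimal POSIX sketch: run an untrusted command with hard resource caps.
# A real sandbox would also confine file system, network and system-call access.
import resource
import subprocess

def limit_child():
    # Applied in the child just before exec: cap CPU seconds and output file size.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                     # 5 CPU-seconds
    resource.setrlimit(resource.RLIMIT_FSIZE, (1_000_000, 1_000_000))   # ~1 MB files

def run_untrusted(cmd):
    return subprocess.run(cmd, preexec_fn=limit_child,
                          capture_output=True, text=True, timeout=10)

result = run_untrusted(["echo", "hello from the constrained process"])
print(result.stdout)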

Since social computing in DoD, which now constitutes a large share of total transactions, is the primary source of targeted spear phishing, DoD should set up its desktops as PaaS-based virtual computers at central servers, where all transactions are subject to automated surveillance. As a first priority, DoD should proceed with providing completely isolated “sand boxes” on all desktops, laptops and smart phones.

Trojan Methods for Subverting Desktops

The notorious recent corruption of RSA's defenses was accomplished by using the Poison Ivy remote administration utility. It is a backdoor Trojan: it bypasses normal security mechanisms to secretly control a program, computer or network. It is available from   .

There are other similar programs commercially available. There is also an illegal market that offers backdoor Trojans that are hard to trace or are available as zero-day exploits, which makes such Trojans undetectable for all practical purposes.

The notorious Anonymous organization uses the readily available RemotelyAnywhere software. If a computer is already occupied by a bot, the installation of RemotelyAnywhere can proceed without the user knowing about it. It is available from .

Dealing with Advanced Persistent Threats

To distinguish cyber attacks that are "highly targeted, thoroughly researched, amply funded, and tailored to a particular organization -- employing multiple vectors and using 'low and slow' techniques to evade detection" from hacker exploits, the US Air Force has coined the term APT.

APT infiltrations can originate from nation-states and their hired attackers, from industrial competitors, or from organized crime.

The standard approach of fortifying an organization's perimeter, such as with network encryption, is a losing battle. Attackers are not trying to insert malware through existing encrypted channels. A successful defense has to change from “keeping attacks out” to accepting that “sometimes attackers are going to get in” regardless of protective measures.

The first line of defense is therefore the ability to detect attacks and then to minimize the damage instantly. Zero-day attacks are used with increasing frequency, and no pre-planned defense will counter them. One must assume that every organization has already been compromised and then immediately proceed with countermeasures.

An approach to cyber defense must therefore rely on highly automated network control centers that have installed triggers, often using artificial intelligence or neural networks, to detect intrusions. If an organization has more than a thousand networks and several hundred data centers (as is the case in DoD), it has neither the personnel, the resources nor the organization to stand up a rapid-response line of defense. The only way to address the organization of secure network control centers is to limit their numbers through consolidated management of networks that operate with only a limited number of Platform-as-a-Service (PaaS) clouds.
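
One simple class of automated trigger that such a control center could run is beacon detection: implant malware tends to call its Command & Control server at regular intervals, so near-constant spacing between connections from one host to one destination is a useful alarm. The Python sketch below uses invented log records and an arbitrary regularity threshold.

# Sketch: flag hosts whose outbound connections to one destination are suspiciously
# regular (possible C&C beaconing). Log records and thresholds are invented.
from statistics import pstdev
from collections import defaultdict

# (source host, destination, timestamp in seconds)
log = [("pc-17", "203.0.113.9", t) for t in range(0, 3600, 300)] + \
      [("pc-42", "intranet.example.mil", t) for t in (12, 95, 430, 441, 1800)]

def beacon_candidates(records, max_jitter=2.0, min_events=6):
    by_pair = defaultdict(list)
    for src, dst, ts in records:
        by_pair[(src, dst)].append(ts)
    flagged = []
    for (src, dst), times in by_pair.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Many events with nearly identical spacing suggest an automated beacon.
        if len(times) >= min_events and pstdev(gaps) <= max_jitter:
            flagged.append((src, dst))
    return flagged

print(beacon_candidates(log))   # expect the regular pc-17 pattern to be flagged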

The second line of defense is to tightly control access to desktops, laptops and smart phones. With millions of such devices in DoD, it is neither practical nor affordable to install firewalls, virus protection and malware detection into every device. Access to desktops is always based on personal authentication privileges, regardless of location or computer technology used. Updating virus, firewall and malware configurations across a very large number of points of access therefore becomes an unmanageable task. Security enforcement should instead be done at the server farm level, where up to a hundred thousand virtual desktops can be controlled centrally.

Rapid migration to cloud computing, in the form of a private PaaS, is the only affordable and feasible way for protecting DoD against cyber attacks.

COMMENTARY
The reaction received to my recent blog about the compromise of the RSA network warrants further information. According to IEEE Security & Privacy, July 2011, the RSA hacker exploit was based on a bug in the Adobe Flash Player. Attackers broke into the RSA network by sending e-mail messages to a number of RSA employees. Attached was an Excel spreadsheet that contained an embedded Flash file with a vulnerability previously unknown to Adobe. This vulnerability allowed the attackers to take over an RSA employee's personal computer and install a version of the Poison Ivy remote administration tool. This enabled the attackers to steal user credentials, access other RSA computers and then transfer sensitive information to themselves.

This situation could have been averted. RSA employees should have had strong access authorization that would have identified the Poison Ivy source as illegitimate, and the RSA network administrators should have been able to detect the communication anomaly and immediately intercept it.