
Status Report on DoD Architectures

The availability of an enterprise architecture is essential. It is the foundation for assuring that modernization can keep pace with progress in information technology. Systems architectures are necessary to guide the transformation of knowledge-intensive organizations towards greater efficiency and effectiveness. Without a well-defined enterprise architecture it is unlikely that DoD will transform its business, war-fighting and intelligence processes to meet its rising information needs.

The presence of a systems architecture has been recognized as the keystone for assuring the success of public and commercial organizations. The extent of completion and operational use of such architectures can be seen as an indicator of the competence of a systems organization. Working architectural blueprints can serve as one proof of the capability to manage enterprise systems.

The GAO report of September 2011 (GAO-11-902) assesses the current status of DoD enterprise architectures. These were divided into 59 “core elements,” arranged along a seven-stage progression towards actionable guidance on how to organize implementation.
The scope of the GAO report covers 2,324 systems, for which $38 billion has been requested in FY12.

DoD operates under a “federated” architectural framework. Under this construct several separate and distinct architectures are acceptable as long as each conforms to an overarching enterprise-level master design. In the DoD context this means that there is a comprehensive OSD-level architecture within which the Army, Air Force and Navy must structure their respective architectures for enterprise-wide compliance. This would include features such as network interoperability and data interchangeability.

Although DoD components declare they operate in a federated mode, there is no OSD master architecture in place. Individual component architectural plans are only partially complete. The respective plans are not comparable and therefore cannot be classified as parts of a federated DoD framework.

Using a “stages of maturity” scale, GAO found that, on average, DoD components would meet architectural planning requirements in only 10% of the cases. In 42% of the cases DoD components would be only partially compliant. In 48% of the cases there was no architecture that could be applied as guidance for systems implementation.

SUMMARY
GAO concluded that neither the Air Force, the Army, nor the Navy has the requisite directions for transforming its information management processes. They do not have the structure for modernizing the supporting infrastructure to minimize overlap and to maximize interoperability. They do not have a fully developed plan that would guide information technology spending towards major cost savings.

Wireless Data Communications in DoD

DoD wireless communications operate with fractured allocations of wireless spectrum. Consequently, utilization of the assigned spectrum can vary from extremely poor use to congestion at times of critical need.

The difficulties in using the assigned spectrum are aggravated when the military operates in countries that control their own spectrum allocations.
[Figure: Total US bandwidth, range 21.85 to 29.89]

The urgent need for reliable and secure wireless communications has been well stated in a report by the GAO to Congress: “Survivability and lethality in warfare are increasingly dependent on smaller, highly mobile, joint forces that rely on superior information and communication capabilities.  Moving this information—including bandwidth-intensive data and video—to, from, and across the battlefield requires breakthroughs in radio technology.  DOD’s existing or “legacy” radios lack the capacity and flexibility necessary to achieve and maintain this level of information superiority.  DOD has been counting on the Joint Tactical Radio System (JTRS), a development program begun in 1997, to deliver the needed capability. JTRS relies on networked communications to improve information sharing, collaboration, and situational awareness, thus enabling more rapid and effective decision making and execution on the battlefield.  It is intended to provide the bandwidth volume to handle the information traffic, emulate different legacy radios, and function as a router for tactical networks.” [1]

A waveform is the representation of a signal and the characteristics of a radio, including the frequency band (VHF, HF, UHF), modulation type (FM, AM), message format, and/or transmission system. Most of the radios used by the military services operate with a single dedicated waveform and can only interoperate with similar radios.

Therefore DOD proceeded to develop JTRS as a single, interoperable family of radios based on a common set of standards and applications. The radios are expected to not only satisfy the requirements common to the military’s three operational domains—air, sea, and ground—but also be able to communicate directly with many of DOD’s existing tactical radios.

JTRS is intended to contribute to DOD’s goal of network-centric warfare operations by introducing sufficient bandwidth for video communications. The wide band networking waveform being developed for ground vehicles is expected to provide data rates of up to 5 megabits per second, which is at least ten times faster than legacy radio systems. The waveforms would facilitate the use of maps, imagery, and video to support decision-making.

To make better use of the available spectrum DoD finally launched the Joint Tactical Radio System (JTRS) program in 2002 with a $3 billion budget, to be completed by 2007. So far actual spending is $12 billion. Most of the money ($5.7 billion) was spent on procuring legacy radios to support operations in Iraq and Afghanistan. Only $2.5 billion went into JTRS development. [2]

JTRS radios must address stringent security architecture requirements established by the NSA and must be certified through a multistage process during their design and development. Certification is a rigorous and potentially time-consuming process, and consequently must be factored into the schedule for each radio’s system development. The unique characteristics of JTRS radios have introduced new complexities into the certification process. In particular, because much of the functionality of JTRS radios is defined in software rather than in hardware, software verification presents a major source of delay.

Presently DOD does not have a strategy for meeting the needs of wireless operations. Its future wireless communications capabilities are improvised without a cogent architecture or direction. The key issues that require resolution are:
1. Overcoming technology hurdles, size and power constraints, and security architecture issues to guide future JTRS development.
2. Managing investments within rising fiscal constraints. A legacy vehicle radio costs about $20,000, while its more capable JTRS replacement is estimated to cost up to ten times more (about $200,000).
3. Phasing in JTRS without prematurely retiring a large inventory of legacy radios.

SUMMARY
Due to rising costs, a reduced number of radios requested and rapidly changing technologies, the JTRS program will not be fully executed, even after several recent revisions.

The replacement program for JTRS should leverage the currently proposed universal (and much cheaper) e-mail and collaboration program that adapts commercial client devices such as iPhones and iPads for military use. That would require re-architecting the approach to managing wireless communication by reducing the costs (and complexity) of user devices. Instead, investments should be channeled into “cloud” support servers and vastly improved communications that are uniform, controlled by standards and enhanced for security.


1. See GAO-08-877; http://www.gao.gov/new.items/d08877.pdf
2. Included here were tactical radio investments for the Army and Marine Corps. The Air Force and the Navy buy tactical radios separately.

Strassmann Biographical Information

PAUL A. STRASSMANN’s career includes service as chief corporate information systems executive (1956-1978; 1990-93 and 2002-2003), vice-president of strategic planning for office automation (1978-1985) and information systems advisor and professor (1986 to date).
Mr. Strassmann is the Distinguished Professor of Information Sciences, George Mason School of Information Technology and Engineering. In 2009 he received an honorary doctorate from George Mason University, where he teaches an on-line graduate level course in cyber operations. He is Contributing Editor of the Armed Forces Communications & Electronics Association Signal magazine and serves as the Chairman of the Board of Directors of Queralt, a company that provides Radio Frequency Tag identification services for high-value objects.
After serving as an advisor to the Deputy Secretary of Defense since 1990 he was appointed to a newly created position of Director of Defense Information and member of the U.S. Senior Executive Service. He was responsible for organizing and managing the corporate information management (CIM) program across the Department of Defense that included a major cost reduction and business re-engineering program of the defense information infrastructure. Strassmann had policy oversight for the Defense Department’s information technology expenditures. He is the 1993 recipient of the Defense Medal for Distinguished Public Service from the Secretary of Defense, the Department’s highest civilian recognition. In 2002 he was recalled to government service as the Acting Chief Information Officer of the National Aeronautics and Space Administration, with responsibility and accountability for the computing and telecommunication information infrastructure. In 2003 he retired from government service after receiving the NASA Exceptional Service Medal for improving I.T. architecture, security and services.
Strassmann joined Xerox in 1969 as director of administration and information systems with worldwide responsibility for all internal Xerox computer activities. From 1972 to 1976 he served as founder and general manager of its Information Services Division with responsibility to operate corporate computer centers, communication networks, administrative services, software development and management consulting services. He introduced major innovations in global telecommunication management. From 1976 to 1978 he was corporate director responsible for worldwide computer, telecommunications and administrative functions. He was a key contributor to shaping Xerox business strategy for office automation and developed new methods for evaluating the productivity of computer investments.
Until his retirement from Xerox he served as vice president of strategic planning for the Information Products Group, with responsibility for strategic investments, acquisitions and product plans involving the corporation's worldwide electronic businesses. Afterwards he became an author, lecturer and consultant to firms such as AT&T, Citicorp, Digital Equipment, General Electric, General Motors, IBM, ING, SAIC, Shell Oil, Sun Microsystems and Texas Instruments, as well as Adjunct Professor at the U.S. Military Academy at West Point, and Visiting Professor at the University of Connecticut and Imperial College London. His public involvement includes presentations to the Senate, the House of Representatives, the Board of Governors of the Federal Reserve, the British House of Commons and the USSR Council of Ministers. Strassmann also served on the Boards of Directors of Alinean, InSite One, McCabe Software, Meta Software and Trio Security Corporations.
Prior to joining Xerox, Strassmann served as Corporate Information Officer for the General Foods Corporation and afterwards as the Chief Information Systems executive for the Kraft Corporation, from 1960 through 1969. He started working with computers in 1954 when he designed a method for scheduling toll collection personnel on the basis of punch card toll receipts. He earned an engineering degree from the Cooper Union, New York, and a master's degree in industrial management from the Massachusetts Institute of Technology, Cambridge. He is the author of over 250 articles on information management and information worker productivity. His 1985 book Information Payoff–The Transformation of Work in the Electronic Age has attracted worldwide attention and was translated into a number of languages. His 1990 book, The Business Value of Computers, covers research on the relation between information technology and the profitability of firms. His 1993 book, The Politics of Information Management, offers guidelines on organization of the information function for greatest effectiveness. A companion volume, The Irreverent Dictionary of Information Politics, reflects on the inconsistencies in information management practices. A 1997 book, The Squandered Computer, was Amazon.com's #1 best selling book on information management. His latest books are Information Productivity - Assessing the Information Management Costs of U.S. Industrial Corporations (1999), The Economics of Corporate Information Systems (2007), Paul’s War (2008), Paul’s Odyssey (2009) and The Computers Nobody Wanted – My Years at Xerox (2009). His lectures now appear as video recordings on the Internet.
Strassmann was chairman of the committee on information workers for the White House conference on productivity and served on the Department of Defense Federal Advisory Board for Information Management and the Army Science Board. He is a Distinguished Engineer of the Association for Computing Machinery, life member of the Data Processing Management Association, Chartered Fellow of the British Computer Society, senior member of the Institute of Electrical and Electronic Engineers and member of the honorary engineering societies Tau Beta Pi and Chi Epsilon. He authored the code of conduct for data processing professionals. He received the 1992 Award for Achievement from the Association for Federal Information Resource Management and was named Government Executive of the Year; he also received the 1992 International Industry Award for advancing the adoption of Open Systems and the 1996 Excellence Award for Business Engineering. In 1997 he was named one of the twelve most influential Chief Information Officers of the last decade by CIO magazine. In 2000 he was cited by the Department of Defense for his pioneering work as one of the executives responsible for advancing the cause of U.S. information capabilities. He is the recipient of the 2006 Neal Business Journalism award for a series of articles on the Economics of Information. Strassmann is a recipient of the Gen. Stefanik Medal for his actions as a guerilla commando from September 1944 through March 1945 in Czechoslovakia.

Recent Cloud Crashes

In recent days Google Docs, Facebook, Amazon and Microsoft suffered outages. There have always been cloud crashes, of various durations, ever since the cloud approach to operations began attracting attention.

Google Docs was out of service for about an hour on Sept. 7, the result of a “memory management bug” that was exposed after Google made a change to improve real-time collaboration in Google Docs. The General Services Administration (GSA) is now running close to 20,000 desktops under Google Docs and depends on Google Docs uptime.

Facebook was down for 2.5 hours because it “changed the persistent copy of a configuration value that was interpreted as invalid.” Tens of millions of Facebook users were affected. System overload contributed to the downtime, as every single client attempted to fix the unavailability of the service.

Amazon cloud services did not function correctly for more than a day. Several large accounts malfunctioned, with a substantial loss in business. The cause was a software error misapplied while systems software was being updated, compounded by a delayed response in relocating processing.

Microsoft suffered what it called a Domain Name System failure that knocked out services for several hours worldwide for Office 365, Hotmail and SkyDrive. DISA is now in the process of migrating Army e-mail systems to Office 365 and hopes that Microsoft will not fail.

Though failures of public clouds are immediately visible and become a favorite subject for the press, one has to question whether reports of cloud downtime indicate that there is something fundamentally wrong with the concept of cloud computing.

The fact is that failures of commercial in-house systems are rarely reported. Therefore there is no way to make comparisons. As a rule, in-house computer failures are kept secret within the enterprise, except when they disrupt public operations, such as transactions on a stock exchange, and become widely known.

SUMMARY
So far I have been unable to find a single case where a cloud service failure was caused by a hardware failure, though in the case of Amazon there was a problem in switching processing to fail-over capacity. All known cloud failures so far have been software failures that occurred when updating or upgrading systems. Whether these are human failures by software personnel or the inability to fully test a new version prior to installation is debatable, because usually multiple small errors add up to the downtime.
 
Primary cloud data center operations should never be combined with test and development. Testing must always be completely separate and under tight configuration control to make sure that test versions are not mixed up with production versions. When new software is ready for release to the primary data center it should be placed for a prolonged test period in a parallel “resource” cloud that mirrors the primary data center until it is safe to switch over.
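
One way to operationalize such a parallel “resource” cloud is a release gate: the new version runs in the mirror, a battery of smoke tests is executed against it, and traffic is switched only if every check passes. A minimal sketch in Python follows; the mirror URL and check endpoints are hypothetical.

    import urllib.request

    MIRROR = "https://mirror.example.mil"                 # hypothetical parallel "resource" cloud
    SMOKE_CHECKS = ["/health", "/login", "/mail/inbox"]   # hypothetical test endpoints

    def mirror_is_healthy() -> bool:
        """Return True only if every smoke check answers HTTP 200."""
        for path in SMOKE_CHECKS:
            try:
                with urllib.request.urlopen(MIRROR + path, timeout=5) as resp:
                    if resp.status != 200:
                        return False
            except OSError:
                return False
        return True

    if mirror_is_healthy():
        print("All checks passed: safe to switch production traffic to the new version.")
    else:
        print("Mirror failed a check: keep the current version running.")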

Until software developers can fully assure that a software change has been completely tested – which is unlikely ever to happen – cloud subscribers must proceed on the assumption that software failures will happen. To protect against such cases, elaborate testing of parallel operations will have to be installed.
None of this should prevent cloud operators from proceeding with the installation of software redundancies as well as fail-over capacity.


How to Prevent Vendor Lock-in for Cloud Services?

DISA is proceeding to offer e-mail and collaboration enterprise services for DoD. The issue now is how to prevent vendor lock-in when selecting the provider of cloud services.

Almost all of the existing e-mail applications are Microsoft-based. It would be logical to continue adopting versions of Office 365 cloud services as the preferred solution. Whether this is hosted in the Microsoft Azure cloud environment or as a private cloud in one of the DISA DECC centers is a matter of cost and reliability. Cloud hosting would allow scaling the e-mail services as other parts of DoD join the enterprise cloud e-mail offering. Microsoft, or its licensee, would manage the e-mail software while components would control user access rights.

As a short-term transition strategy, migrating everyone to a standard Microsoft solution could be one option. It could enable migrating existing Microsoft operations to DoD-wide standards. Unfortunately, this approach is likely to be too lengthy, too expensive and managerially hard to execute. The existing e-mail and collaboration solutions come in too many variations and versions to allow the creation of uniform e-mail services. It would make more sense for DISA to consider offering a “core” e-mail cloud based on commercial services. DoD components could then convert to these “core” services in incremental steps.

To attract participants to an off-the-shelf “core” e-mail service DISA needs to operate a secure, low-cost, easy-to-use e-mail (and collaboration) cloud offering. Conversion costs for access to a “core” offering should be borne by the user and not by the cloud utility. Conversion costs can grow without limit as local requirements are added. A computer utility, such as DISA, should offer standard services at low prices only, leaving users to add unique features under software architecture controls. The incentive to convert to utility services should rest with the user seeking lower total cost of ownership, not with the utility, whose only objective is delivering operating efficiencies.

Meanwhile, DISA should be planning for an exit strategy from an exclusive Microsoft solution to an offering that allows multiple competing suppliers to bid for the DoD business. The prospective bidders must also be able to deliver greater flexibility to adapt to rapidly changing information technology choices. Of course, Microsoft should be in a position to offer its services, but without the exposure that its services would constitute a lock-in. Here are some of the elements that should be included in a viable exit strategy:

1. Allow for the conversion to non-proprietary protocols
Microsoft messaging relies exclusively on the Messaging Application Programming Interface (MAPI) as the messaging architecture for transactions. This is a proprietary protocol for connecting Microsoft Outlook with Microsoft Exchange. A DoD enterprise solution should not depend on a protocol that is exclusive to one vendor.

2. Adopt a protocol that retains messages
Virtually every vendor supports the Internet Message Access Protocol (IMAP), a protocol that allows an e-mail client to access e-mail on a remote server. Instead of relying on MAPI, all current user devices, regardless of manufacturer, can process IMAP. This protocol allows every client device to be used efficiently, whether it is an Android or an iPhone.
There are two widely used e-mail access protocols: IMAP and POP. When using IMAP a customer is accessing an inbox on a central mail server. Although messages appear on the user’s computer, they remain on the central mail server.
POP does the opposite. Instead of just showing what is in the inbox on the mail server, it checks the server for new messages, downloads all the new messages and then deletes them from the server.
The DoD enterprise offering should use IMAP. Records retention policies make this a requirement.
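
As an illustration, here is a minimal sketch of IMAP access using Python’s standard imaplib module. The messages are read on the central server and nothing is deleted, which is what makes IMAP compatible with records retention. The server name and credentials are placeholders.

    import imaplib

    conn = imaplib.IMAP4_SSL("mail.example.mil")     # hypothetical central mail server
    conn.login("user", "password")                   # placeholder credentials
    conn.select("INBOX", readonly=True)              # read-only: nothing is deleted or flagged
    typ, data = conn.search(None, "UNSEEN")          # list new messages held on the server
    for num in data[0].split():
        typ, msg_data = conn.fetch(num, "(RFC822)")  # download a copy; the original stays on the server
    conn.logout()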

3. Plan a conversion strategy from Exchange Server
Microsoft Exchange Server is available in numerous versions. It is a proprietary client-server product developed for enterprise use on Microsoft infrastructure products. It has limitations on its interoperability. During migration to a cloud service, Exchange Server can be used as a transition solution but eventually must be replaced by an open source application.

4. Plan a conversion strategy for Active Directory
Active Directory (AD) is a proprietary directory offered by Microsoft. It uses a number of methods to provide a variety of network services, such as:
Central location for network administration and security.
Information security and single sign-on for user access to networked resources.
Standardizing access to application data.
Synchronization of directory updates across servers.
Active Directory stores all information and settings for a deployment in a central database. Active Directory allows administrators to assign policies, deploy and update software. Active Directory networks can vary from a small installation with a few computers to tens of thousands of users, many different network domains and many geographical locations.

The Active Directory features are one of the principal means for reinforcing Microsoft’s control of enterprise e-mail.
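
One mitigating factor is that AD also speaks the standard LDAP protocol, so directory data can be read through a vendor-neutral interface, which is what makes a conversion strategy feasible. A minimal sketch using the third-party Python ldap3 package; the server, account and search base are hypothetical.

    from ldap3 import Server, Connection, ALL

    server = Server("ad.example.mil", get_info=ALL)            # hypothetical AD domain controller
    conn = Connection(server, user="EXAMPLE\\reader",
                      password="password", auto_bind=True)     # placeholder service account
    # Query users through standard LDAP rather than any proprietary interface.
    conn.search("dc=example,dc=mil", "(objectClass=user)",
                attributes=["cn", "mail"])
    for entry in conn.entries:
        print(entry.cn, entry.mail)
    conn.unbind()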

SUMMARY
To standardize and then to unify hundreds and possibly thousands of separately maintained versions of Microsoft e-mail applications involves large expenditures. At present, cleaning up what is already in place to conform to a DoD-wide e-mail cloud is difficult and costly, particularly if the cloud provider pays for all conversion and migration costs, because users will keep adding new requirements to retain legacy features.

Consequently, at the end of the current approach to consolidating all e-mail, DoD will end up locked into Microsoft solutions.

While proceeding with e-mail integration, DISA should select options that are interoperable and offer competitive choices from multiple vendors. Technical choices made during the proposed migrations should comply with the requirement that each vendor also offer an exit strategy, so that DoD enterprise solutions remain flexible.

Enterprise Core Services for DoD

E-mail, directories, calendars, collaboration and document management are the most demanding core applications of DoD. Such core services also include support of diverse desktops, laptops, smart phones and other mobile devices. These services also deliver global synchronization of files so that anyone can communicate from any place with everybody. The current fracturing of these services is not acceptable. It is incompatible, insufficiently secure and too expensive.

DISA has now taken on the task of providing DoD with a secure enterprise-wide capability that would migrate hundreds of existing core services into a cloud-based environment. The following policies should dictate the directions DISA takes to achieve the stated objectives:

1. Avoid vendor lock-in by adopting open source software that allows vendor-independent management of applications.
2. Deliver basic core services as a universal Software-as-a-Service (SaaS) that hosts open source applications.
3. Have competing firms host the SaaS. The SaaS servers should be virtualized for relocation of capacity and for fail-over.
4. Avoid payment of up-front enterprise software licenses. Avoid annual fees for application maintenance. Instead pay only for actual use.
5. Migrate to Linux. Software for managing basic core services should be identical for every user.
6. Own the application software for all core services.
7. Make core services available on-line and off-line. Off-line transactions should be synchronized after any device re-connects (see the sketch after this list).
8. Control the configuration of all core services.
9. Include automatic backup and fail-over of cloud services.
10. Make a standard core desktop available to every DoD employee.
11. Offer desktop aggregation services to allow secure communication to and from Yahoo! Mail, Gmail, Hotmail, Facebook and other POP or IMAP accounts.
12. Follow identical rules for all desktops, such as drag and drop actions as well as standard rules for composing, deleting, editing and replying.
13. Deliver core applications as a transaction service. Transactions should be priced as if they would originate from a regulated utility.
14. Make core services capable of relocation between sites and vendors. Open source application programming interfaces (API) should be used.
15. Design the core services to meet the capacity needs of military, civilian, reserve and contract personnel.
16. Allow military Services and Agencies to add features to the basic core services but only if open source APIs are used.
17. Contract with only a limited number of SaaS providers. Contracts should be of short-term duration.
18. Engage only contractors who also offer comparable commercial grade services.
19. Allow add-ons to core services but only under central configuration controls.
20. Manage access authorization and provisioning of secure signatures to core services from automated Network Control Centers (NCCs).
21. Staff the NCCs only by military and civilian employees.
22. Place no limits on retention of defragmented e-mail storage.
23. Make core services available in at least ten languages.
24. Include automatic determination of users’ physical locations for security assurance.
25. Provide migration tools to the standard core desktop from Exchange, Lotus Domino and Novell GroupWise.
26. Offer a standard core desktop to provide a uniform user interface regardless of the information technology in place.
27. Offer PC-over-IP thin client services to all locations, including smart phones.
28. Install uniform desktop icons as the standard user interface for all commands and applications.
29. Extend from a central location the provisioning and managing of applications and data.
30. Form a DoD-controlled enterprise technical community for sharing information about technical innovation, quality improvement and enhancement of core services.
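
To illustrate item 7, here is a minimal sketch of off-line synchronization: transactions queue locally while the device is disconnected and are replayed, in order, when it re-connects. All names are hypothetical.

    import json
    import queue

    outbox = queue.Queue()                 # local store for off-line transactions

    def send_to_server(transaction: dict) -> None:
        print("synced:", json.dumps(transaction))   # placeholder for the real service call

    def submit(transaction: dict, connected: bool) -> None:
        """Send immediately when on-line; otherwise queue for later."""
        if connected:
            send_to_server(transaction)
        else:
            outbox.put(transaction)        # defer until the device re-connects

    def on_reconnect() -> None:
        """Replay queued transactions in the order they were created."""
        while not outbox.empty():
            send_to_server(outbox.get())

    submit({"action": "send_mail", "to": "user@example.mil"}, connected=False)
    on_reconnect()                         # the queued transaction is synced once back on-line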

SUMMARY
A gradual migration of “commodity” applications, such as e-mail, to cloud computing offers advantages to DoD in cost, security, reliability and interoperability. The current fracturing of “commodity” applications into thousands of incompatible enclaves calls for the creation of a SaaS environment that will have the capacity to support all DoD operations.

With the task of organizing a DoD-wide enterprise core service, there is no reason why DISA should not be able to deliver a cloud environment that meets the criteria defined above.

Federal Government Needs a Federal Enterprise Architecture

Given the current status, where both policy and implementation are lacking, two fundamental questions need to be answered now: First, is it still possible to complete the originally planned FEA to guide DoD systems development henceforth?

Second, is it possible to re-align the positions of the DoD CIOs to acquire the budgetary and organizational controls that would allow the management of IT spending?
The answer to both questions is negative. The assumptions that DoD has attempted to apply for fifteen years no longer work. Fortunately, the rapid development in the delivery of information technologies as a shared service – cloud computing – has now opened new options for DoD to start redirecting its IT policies.

What is now needed is primarily a FEA2, a DoD architecture for the placement of Platform-as-a-Service (PaaS) as the core around which to reorganize the conduct of IT.
Instead of FEA1 trying to direct how thousands of separate projects would be designed and operated, FEA2 would define how to set up only a handful of PaaS cloud services.
FEA2 would not be developed by staff remote from where PaaS services are delivered. Instead, FEA2 staff would be embedded where PaaS is executed.

FEA2 would be realized as an evolutionary process, not as a pre-set blueprint. The management of projects would not be split into stand-alone engineering, development, programming and operation phases assigned sequentially to separate organizations. Instead, in the spirit of Admiral Rickover, programs would be managed as unified and tightly coupled ventures. FEA2 would concentrate on quarter-by-quarter migration to guide a rapid progression from the current legacy code to eventual arrival in the PaaS environment.

The planning horizon for FEA2 should be at least ten years. The management of programs should be in the hands of senior long-term officers, not with rapid rotation appointees. With IT now classified as a weapon rather than as an auxiliary function, all senior appointments should qualify as information systems specialists.

FEA2 would provide guidance on how applications would be gradually stripped from their underlying infrastructures. The objective would be to convert each application into a form that can be delivered as a service-on-demand on any of DoD’s standard PaaS offerings. New applications would be placed on top of a standard PaaS instead of the existing proliferation of infrastructures.

FEA2 would define PaaS as providing the following standard services to be used as shared capabilities: networking; storage management; management of servers; virtualization of the hardware; maintenance of operating systems; implementation of security assurance; middleware for managing the transition of legacy code; and design of run-time interfaces.

Security services, which will become the most critical part of all DoD systems, will be embedded primarily inside each PaaS and then updated in a relatively small number of software assets. Removing security features, such as access authentication, from individual applications will make it possible to consolidate within a small number of PaaS what are presently the most costly information assurance functions.

Excluded from PaaS would be the application code and the application-related data. Metadata directories would be managed as an enterprise asset to assure interoperability. The separation of PaaS services from the applications would be controlled by tightly defined standard interface protocols.
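
The kind of tightly defined interface this implies can be illustrated, by analogy, with Python’s WSGI standard: the application below targets only the neutral interface and knows nothing about the platform hosting it, so any compliant host can serve it. This is an illustration of the principle, not a DoD specification.

    def application(environ, start_response):
        """The application sees only the standard interface, never the platform."""
        body = b"Hello from a platform-independent application\n"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        # Any WSGI-compliant host could serve this; the stdlib reference server is shown.
        from wsgiref.simple_server import make_server
        make_server("localhost", 8000, application).serve_forever()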

FEA2 would greatly simplify how DoD manages its systems and reduce costs. Instead of thousands of contractor-designed and contractor-maintained infrastructures that now account for an excessive 57% of all costs, DoD would concentrate on maintaining only a handful of PaaS environments, probably at least one PaaS for each military service and for each Agency.

PaaS services would be available in the form of DoD private or public clouds, though Infrastructure-as-a-Service (IaaS) would always be present as a transition offering on the path towards PaaS.

FEA2 will depend on strict adherence to open standards in order to assure interoperability across PaaS clouds. The DoD memorandum of October 16, 2009 offers guidance on the preferred use of open source software, which allows developers to inspect, evaluate and modify the software based on their own needs, and avoids the risk of contractor lock-in. DoD cannot allow each contractor to define its own PaaS. DoD cannot allow the operation of a hotel where guests can check in but can never check out.

Existing PaaS solutions offered by vendors such as Amazon and Microsoft Azure operate in exactly this proprietary manner. To assure competitiveness, only operations based on open standards can be allowed. An independent test of interoperability would verify that applications can be relocated from one PaaS to another.

The roles of the CIOs and the acquisition personnel would be enlarged to oversee the rates charged by PaaS providers. To assure competitiveness and comparability of charges, the transaction pricing structure for each service would have to be uniform for every PaaS provider. The ultimate test of FEA2 will be cross-platform portability. DoD customers should be able to relocate any application from any one PaaS to another with only minimal transfer expenses.

The technical aspects of a PaaS can be best described as a method that allows a customer’s unique data and applications to be placed on top of a defined computing “stack”. The PaaS “stack” takes care of every infrastructure service. The customer can be left to worry only about applications and data.

DoD will have some diversity in PaaS offerings because various components will make progress at different speeds. This will require the placement of a software overlay on top of the existing IT “stacks”. Such overlays will act as intermediaries between what belongs to the customer, such as the Army, Navy, Air Force and Agencies, and what belongs to the PaaS during the progression from legacy systems to PaaS. The placement of an intermediation software layer between the PaaS and the applications will allow for the diversity of applications during the long migration it will take to reach the ultimate goal, as legacy systems are gradually replaced with PaaS-compatible solutions.
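
A sketch of what such an intermediation layer could look like: applications code against a neutral interface owned by the enterprise, and only a thin per-vendor adapter changes when an application relocates. Class and method names are hypothetical.

    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        """Neutral storage interface owned by the enterprise, not by any PaaS vendor."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class InMemoryBlobStore(BlobStore):
        """Stand-in implementation; a real adapter would wrap one vendor's storage API."""
        def __init__(self):
            self._data = {}
        def put(self, key: str, data: bytes) -> None:
            self._data[key] = data
        def get(self, key: str) -> bytes:
            return self._data[key]

    # Applications depend only on BlobStore; relocating to another PaaS means
    # swapping the adapter, not rewriting the application.
    store: BlobStore = InMemoryBlobStore()
    store.put("report.txt", b"quarterly migration status")
    print(store.get("report.txt"))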

The PaaS arrangement makes it necessary for applications and data to be constructed using standard development “frameworks”. Such standardization is necessary to enable applications to relocate easily from one PaaS cloud to another, whether these clouds are private or public. With such standardization, applications and data can relocate to take advantage of competitive offerings from different PaaS providers.

To prevent PaaS contractors from offering cloud solutions that capture customers by means of proprietary runtime and middleware solutions, it is necessary to control interoperability across all PaaS services as well as across any interim IaaS solution. That must be done by a DoD policy assuring that interfaces depend on open source solutions that can be verified for interoperability.

Achieving PaaS standards through DoD policy alone is insufficient. The PaaS technologies are global, whereas DoD’s reach is limited: it accounts for less than 1.5% of global IT spending. The ability of customers to migrate from one PaaS vendor to another must be preserved by an OSD that works with commercial firms to adopt standards that prevent lock-ins by large prime contractors, which could otherwise prevent smaller firms from offering PaaS services.

The insertion of a limited number of PaaS services into DoD will result in large cost reductions. Contractors will shift to proprietary PaaS services to gain larger profit margins unless DoD sees to it that competition prevails.

Transferring applications to a cloud offers enormous benefits. It can also be a trap. After an application is placed on IaaS to take advantage of server virtualization, it can become wedged into a unique software environment. For all practical purposes such applications cease to be transportable from any one IaaS to another IaaS, and certainly not to PaaS. There are hundreds of cloud services that operate in a proprietary manner, and DISA is now considering such moves. OSD policy must see to it that all migration moves fit the ultimate objective of operating as a PaaS. IaaS solutions are useful in offering raw computing power but may not be sufficiently flexible to enable redeployment when conditions change.

PaaS services must therefore offer the following:
1. The interface between customer applications and the PaaS must be in the form of open source middleware that complies with approved IEEE standards or prevailing industry best practices. Standard open source middleware must allow any application to run on any vendor’s PaaS cloud. Regardless of how an application was coded, it should remain transportable to any DoD-approved PaaS cloud, anywhere.
2. The isolation of the customer’s applications from the PaaS software and hardware is necessary to permit the retention of DoD’s intellectual property rights, including in cases where DoD may choose to host some of its applications on a public cloud.
3. Certification by the cloud provider that applications will remain portable regardless of configuration changes made within its PaaS. This includes assurances that applications will retain the capacity for fail-over hosting provided by another PaaS vendor.
4. Assurance that the customer’s application code will not be altered when hosted in the PaaS cloud, regardless of the software framework used to build it.
Any DoD plans to migrate systems into a PaaS environment will henceforth have to consider the ready availability of off-the-shelf software that can make the migration to PaaS feasible at an accelerated pace.

Commercial software already available aims to remove the cost and complexity of configuring an infrastructure and to provide a runtime environment that allows developers to focus on the application logic. This streamlines the development, delivery and operation of applications and enhances the ability of developers to deploy, run and scale applications in the PaaS environment.

The objective of FEA2 is to get an application deployed without becoming engaged in set-ups such as server provisioning, specifying database parameters, inserting middleware and then testing that it is all ready for operations after coordinating with the data center operating personnel.

SUMMARY
A new Federal Enterprise Architecture should be based on a separation of the underlying infrastructure (PaaS) from the applications. This calls for a complete reorganization of the way the Federal Government builds the systems it needs.

A Review of the Federal Enterprise Architecture

One of the mandates of the Clinger-Cohen Act of 1996 was the creation of the Information Technology Architecture. In subsequent 1999 guidance the Federal CIO Council defined the Federal Enterprise Architecture (FEA) as the process for developing, maintaining, and facilitating the implementation of integrated systems.

Chief Information Architects were then appointed at the Federal and DoD levels. However, as of June 2011 GAO reports that the enterprise architecture methodology has not been deployed. Between 2001 and 2005, GAO reported, DoD spent hundreds of millions of dollars on an enterprise architecture development that was of limited value. None of the three military departments has so far demonstrated a commitment to deploy the architecture needed to manage the development, maintenance, and implementation of systems.

Senior DoD IT executives have stated that the development of an architecture methodology has been pushed back due to budget limitations. As yet, no time frame has been established for producing a FEA or delivering the enterprise architecture. The current focus is on other resource-intensive commitments.

Nevertheless, well-defined enterprise architectures remain an attribute of managing IT in successful commercial firms. A centrally directed architecture remains the basis for systems integration and for delivering lower IT costs. In spite of the significant potential benefits, an enterprise architecture has not guided DoD systems over the past fifteen years.

While DoD was promoting departmental and agency-wide enterprise concepts, actual implementation of integration was missing except in isolated cases. Consequently DoD ended up lacking a coherent blueprint for creating a technology environment that would be interoperable, economically efficient and easily accessible for innovation. The absence of a working architecture has prevented DoD from making progress at the speed at which information technologies are now advancing in leading commercial firms.

The absence of a guiding DoD architecture also opened the floodgates to excessive project fragmentation, to technology incompatibilities and to operations that were contract-specific rather than DoD enterprise-integrating. That not only increased the costs of maintenance and of modernization upgrades, but also put a brake on the ability to innovate to meet the rapidly rising demands for information superiority. Where there was innovation, it had to take place as stand-alone new projects and not as enhancements to systems that could have been improved at lesser cost.

The Clinger-Cohen Act also assigned to the Chief Information Officers (CIOs) the responsibility for managing total IT spending. That did not take place as originally legislated.

At present CIOs cannot be held accountable for the contracting decisions that are made by acquisition officers who use regulations found on 2,017 pages printed in small font. The management of IT, as of March 2011, is split into 2,258 business systems with only spotty direct component CIO oversight.  Meanwhile, the influence of the DoD CIO to control rising costs continues to be limited.

There are also over a thousand national security systems that are not listed in the DoD Information Technology Portfolio Repository (DITPR). Such projects include intelligence systems, military command and control networks and equipment included as an integral part of weapons. Whereas in the years of the cold war national security systems could be set up as stand-alone applications, modern warfare conditions now require real-time interoperability between national security systems and selected business applications such as logistics.

As a consequence of having no architecture as well as weak CIO control over IT costs, DoD ended up with largely isolated enclaves – silos – of projects.

IT projects are now managed as separate contracts. Applications are not interoperable across DoD because they cannot share enterprise-wide data. IT projects are run in over 700 data centers as well as in an untold number of special-purpose servers that often have data center capabilities.
Such outcomes were not what was hoped for in 1996. The flawed outcomes, predicted in 1995 Senate hearings, pointed to the inadequacies of Clinger-Cohen to meet post-cold war needs. The 1996 legislation did not offer changes in the organizational structure for managing IT. It did not alter the DoD components’ entrenched budgeting practices. It did not set out to deliver a shared information technology infrastructure that would compare favorably with best commercial practices. Security was not yet on the agenda. Rising DoD requirements would then be met with rapidly rising spending for IT.

SUMMARY
The absence of a coherent Federal Enterprise Architecture represents a fundamental flaw that prevents the Federal Government from using information technologies effectively and efficiently.