
Cloud Computing is Here Now


A projection of cloud computing trends, as of May 23, 2011 (http://www.google.com/search?q=Cloud+Computing+Takes+Off+MOrgan+stanley&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a), reveals the following developments that DoD must consider in its planning:
1. Cloud workloads in the USA will increase at a 50% CAGR over the next three years, which is much faster than the average increase in total DoD IT spending, expected to be close to zero over the same period.
2. The percentage of DoD CIOs planning to move their workloads to cloud computing is expected to rise. This is evidenced by the attention devoted to meetings about cloud computing in Washington.
3. DISA has staked out a claim to offer cloud services at its 14 Defense Enterprise Computing Centers (DECCs) (http://www.disa.mil/computing/index.html). The capability to absorb a large number of servers into the DECCs using cloud software has not yet been demonstrated.
4. The growth rate of DoD enterprise-owned servers will decline as the pressure to reduce the number of data centers increases. Component-operated server hardware becomes the primary area for realizing savings from the migration to cloud computing and server consolidation. In-enterprise server spending will decline as the equipment is shifted into the DECCs.
5. The technical directions will require shifting workloads from server farms operated by the military services to facilities that OMB can no longer count as "data centers". As a result there will be an increased reliance on virtualization of all servers and on much denser workload-to-server ratios resulting from higher capacity utilization. That should generate large savings.
6. On-premise private computing by the military services will continue to dominate the processing of DoD workloads for the foreseeable future because legacy systems are tightly wedged into existing operations. Extricating them would require incremental funding that will not be easily available.
A highly optimistic projection of how workloads could be shifted is best illustrated by estimates of the overall US pattern of cloud operations.

SUMMARY
The net result of the trends listed in this text could be a major reduction in DoD IT capital investment. There could also be large cuts in operations and maintenance costs, such as administrative and support personnel payrolls.
If the moves listed above are executed swiftly, the break-even on the investments for making the shift to cloud computing could be in the 12-18 month range. However, as demonstrated by the Army's latest attempt to move e-mail into a cloud computing environment (see http://pstrassmann.blogspot.com/2011/05/armys-e-mail-move-to-disa-cloud.html), the outlook is not promising. The large amount of up-front funding for a project that would take over three years does not appear to satisfy Congress.
It will take a more aggressive initiative for DoD to catch up with the rapid pace now set by commercial firms.


Open Source Frameworks


A software framework is a reusable set of programming methods and program libraries that provides a standard structure for coding an application. Applications built with a framework run in a specified environment.
A specialized version is the Web application framework, which can be applied to the development of websites, web applications, and web services. Web application frameworks are becoming the dominant method for delivering computing services.
Programmers find it easier to create code when using a standard framework, since the framework defines how the underlying code structure is organized. Applications can inherit code from pre-existing classes in the framework, as the sketch below illustrates.
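As a minimal sketch of such inheritance: the class and method names below are hypothetical stand-ins for whatever a particular framework provides; the point is that the application supplies only its own logic and inherits the rest.

    // A hypothetical framework base class; the names are illustrative only.
    abstract class FrameworkController {
        // The framework fixes the overall structure of every application.
        public final void run() {
            initialize();  // inherited framework housekeeping
            handle();      // application-specific logic supplied by the subclass
            shutdown();    // inherited framework cleanup
        }
        protected void initialize() { System.out.println("framework setup"); }
        protected void shutdown()   { System.out.println("framework teardown"); }
        protected abstract void handle();
    }

    // The application inherits the structure and adds only its own logic.
    public class PayrollController extends FrameworkController {
        @Override protected void handle() {
            System.out.println("payroll-specific business logic");
        }
        public static void main(String[] args) {
            new PayrollController().run();
        }
    }

Because run() is final, every application built on the framework behaves consistently, which is what makes framework-based code predictable and reusable.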
The most widely adopted framework is the Microsoft .NET Framework, which has contributed to the widespread adoption of Windows applications. A variety of frameworks are also available from Apple, Oracle (the Application Development Framework), Mozilla and the Linux community. The limitation of these frameworks is their proprietary character. The proliferation of existing frameworks encourages the writing of code that locks a customer into vendor-specific solutions.
The most recent innovation is the availability of "open source" frameworks. These make it possible to deploy code, such as Java, for applications that run on a wide range of platforms (see http://pstrassmann.blogspot.com/2011/05/springsource-development-framework-for.html).
Cloud Foundry is an open platform service from VMware. It can support multiple frameworks, multiple cloud providers, and multiple application services on a single cloud platform. It is an open Platform-as-a-Service (PaaS) offering. It provides a method for building, deploying, and running cloud applications using the following open source developer frameworks: Spring for Java applications; Rails and Sinatra for Ruby applications; Node.js; and other JVM frameworks such as Grails. Cloud Foundry also offers MySQL, Redis, and MongoDB data services. This is only the initial list of open source frameworks; others are expected to offer different tools or templates.
Cloud Foundry takes an open approach to connecting with a variety of cloud offerings. Most PaaS offerings restrict a developer to proprietary choices of frameworks and infrastructure services. The open and extensible nature of Cloud Foundry means that developers will not be locked into a proprietary framework or a proprietary cloud such as Microsoft Azure or Amazon EC2.
VMware believes that in the cloud era this maps to flexibility and community participation. With this fundamental belief, VMware is open sourcing the Cloud Foundry application execution engine, application services interfaces and cloud provider interfaces.
Cloud Foundry offers an application platform that includes a self-service application library, an automation engine for application deployment and lifecycle management, a scriptable command line interface (CLI), and integration with development tools to ease development and deployment. It offers an open architecture for quick integration of development frameworks, application services interfaces and cloud provider interfaces.
Cloud Foundry is ideal for any developer interested in reducing the cost and complexity of configuring programs and runtime environments. Developers can deploy applications built with open source frameworks, without modification to their code, across different cloud environments. A minimal sketch of such a portable application appears below.
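As a sketch of what such a portable application looks like, here is a minimal web endpoint written with the standard Spring MVC annotations; nothing in the class refers to any particular cloud, which is what allows the same code to run on Cloud Foundry or on a local servlet container (the class name and URL path are illustrative only):

    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.ResponseBody;

    // A minimal Spring MVC endpoint. No cloud-specific code appears here;
    // the hosting platform supplies the container, routing and services.
    @Controller
    public class HelloController {

        @RequestMapping("/hello")
        @ResponseBody
        public String hello() {
            return "Hello from a portable cloud application";
        }
    }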
Cloud Foundry allows developers to focus on applications, and not on hardware or on middleware. Traditional application deployments require developers to configure and patch systems, maintain middleware and worry about network connections. Cloud Foundry allows developers to concentrate on the business logic of applications. Consequently applications can be tested and deployed instantly.
VMware now operates an open-source community site, CloudFoundry.org, where developers can collaborate and then contribute to individual Cloud Foundry projects.
SUMMARY
The open source Cloud Foundry is a dramatic innovation. It is based on the concept that in cloud computing there must be complete separation between the underlying hardware, the intervening software (which includes operating systems) and the application logic. Applications should be able to run on any hardware, regardless of vendor origin. Applications must function regardless of the platform, whether desktops, laptops or cell phones. Applications must be accepted by any operating system or any middleware regardless of how it has been configured.
The objective of Cloud Foundry is to deliver open source frameworks that make it possible to run universally accessible Platform-as-a-Service clouds. In the future there will be a large variety of PaaS vendors who will distinguish themselves by offering different service level agreements.
If DoD wishes to reduce its costs by adopting cloud services, it will have to re-examine how software is generated, tested and deployed. DoD will have to adopt open source frameworks for developing applications.

Army’s E-mail Move to the DISA Cloud


A congressional subcommittee allowed only $1.7 million to be spent from the Army's FY12 request of $85.4 million to move e-mail to DISA. * The Army has proposed migrating approximately one million e-mail accounts, with expected savings of $100 million per year starting in 2013.
The subcommittee has asked for a business case to justify such a move. It has also asked the Army to show why the guidance to evaluate commercial cloud services, as called for in the 2011 Defense Authorization Act, was not followed.
To make the move the Army will upgrade its current Microsoft Exchange mail. This will eliminate a workload from the Army's data centers and place it within DISA. As a result DISA will operate the Army's e-mail in an Infrastructure-as-a-Service (IaaS) mode. Microsoft Office products will remain on local computers. This reduces savings as compared with Software-as-a-Service (SaaS) offerings, where all user software is hosted in the cloud. Such services are now available from commercial sources for a flat fee.
There are good reasons for proposing the move of e-mail into a cloud environment, even though IaaS offers only partial improvements. The existing e-mail environment in the Army is broken up into 15 separate enclaves ("forests") that deliver a plethora of organization-specific e-mail systems. Some Army bases operate separate data centers just for processing e-mail. What the Army proposes will certainly meet the OMB requirement to cut the number of data centers, but the result will be only a shift from smaller computer farms into the much larger DISA megacenters.
The Army's e-mail has estimated annual operating costs in excess of $400 million, with an estimated per-person cost of at least $400. **
E-mail users who must operate across organizational boundaries are limited to compatible access devices. E-mail capabilities are reduced when interacting with other DoD personnel. Deployed forces must carry additional e-mail support equipment when relocating to expeditionary locations, or lose connectivity for extended periods of time.
Existing Army servers operate at only a fraction of capacity, and IaaS virtualization would improve server utilization. However, this will not diminish administrative and support operating costs, which far exceed server capital costs.
Soldiers would have a single mail address but would not be assured interoperability with diverse devices and allied commands.
An IaaS service will allow the expansion of available e-mail storage from 200 megabytes to 4 gigabytes, but that is insufficient for storing the video and graphic files that currently dominate social network traffic.
The Army currently operates with the 2003 version of Microsoft software and will upgrade to the 2010 version. That does not include the cost of upgrading desktops and laptops to Windows 7.
SUMMARY
The proposed relocation of e-mail from Army facilities to a cloud environment is long overdue. A properly designed, secure and redundant cloud e-mail service will reduce costs, improve up-time availability and improve response time. It will reduce the risk of security intrusions that depend on e-mail for gaining access to DoD networks.
However, the proposal to implement a partial IaaS version of e-mail is inadequate. The $85.4 million FY12 funding request for migrating e-mail to DISA is not a complete business case. Life-cycle costs of moving e-mail and related functions into a cloud environment involve operating expenses as well as development, support and administrative costs. Therefore the Army proposal should be evaluated as a program that would take several years to complete.
The total cost of ownership of e-mail in an IaaS environment must include all administrative, communications, development and maintenance costs, not only at DISA but also for whatever remains with the Army.
The current proposal should also include the total cost of ownership of secure SaaS cloud computing. The costs for such services, which include additional applications that are not currently covered in the Army proposal, are now available commercially for $50 per person.

The SpringSource Development Framework for Java Code


SpringSource is the leader in Java application infrastructure and management. It provides a complete suite of software products that accelerate the entire build, test, run and revision-management lifecycle of Java applications. SpringSource employs leading open source software developers who created, and now drive, further innovations. As a result, Spring has become the de facto standard programming model for writing enterprise Java application code.
SpringSource also employs thought leaders within the Apache Tomcat, Apache HTTP Server, Hyperic, Groovy and Grails open source communities. Nearly half of the Global 2000, including most of the world's leading retail, financial services, manufacturing, healthcare, technology and public sector firms, are SpringSource customers.
Millions of developers have chosen the open source Spring technologies to simplify Java code development and to dramatically improve productivity and application quality. These developers use Spring because it provides centralized configuration management and a consistent programming model for declarative transactions, declarative security, Web services creation and persistence integration. Unlike the complex and hard-to-use Enterprise JavaBeans (EJB) platform, Spring enables all of these capabilities to be applied in a consistent manner that simplifies and partially automates the production of reliable application code.
Spring provides the ultimate programming model for modern enterprise Java applications by insulating business objects from the complexities of platform services. It manages application components and enables Java components to be centrally configured and linked, resulting in code that is more portable, reusable, testable and maintainable. The sketch below illustrates this wiring.
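As a minimal sketch of that idea, assuming the standard Spring annotations: the business object below declares what it needs, and the Spring container constructs and links the parts, so no component builds its own dependencies (the class names are illustrative only):

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Component;

    // A dependency the business logic needs; the Spring container instantiates it.
    @Component
    class MessageStore {
        public void save(String msg) {
            System.out.println("Saved: " + msg);
        }
    }

    // The business object never constructs its MessageStore. The container
    // injects it, which keeps the class portable, reusable and testable.
    @Component
    class MailService {
        private final MessageStore store;

        @Autowired
        public MailService(MessageStore store) {
            this.store = store;
        }

        public void send(String msg) {
            store.save(msg); // business logic only; no plumbing
        }
    }

In a unit test the same MailService can be handed a stub MessageStore without any framework present at all, which is precisely what makes centrally wired code easier to test and reuse.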
SUMMARY
The traditional application development process involves linking Java components manually. Such an approach runs into difficulties as the size and complexity of the application code grow. Such code is hard to write, track and test, and difficult to reuse. Spring performs many of these previously manual coding tasks automatically. Developers can concentrate on business logic and code, without having to worry as much about the software infrastructure that supports the running of the application programs. This is critical when Java code is used to add features to web sites that are viewed by millions of customers.
As the dependency on cloud computing increases, DoD must pay attention to how applications are written and maintained. Software routines have to be interoperable across a wide range of applications. Adherence to open source frameworks is necessary to assure that applications can be relocated between data centers, whether to provide backup or to change contract relationships.
DoD must impose standards for how Java code is written. DoD must dictate how such code is deployed for operations and maintenance. The current contractual separation between application planners, application developers and operators, dictated by acquisition rules, is not viable.




User Centered Approach to Cloud Applications


User-centered design (UCD) * software will make it possible to access Software-as-a-Service (SaaS) offerings with only the aid of a major Internet browser. Such an approach will enable organizations to centrally manage the provisioning of diverse applications while applying open standards to security and access controls.
The UCD software increases the security of using SaaS applications. Users will have a single login available across multiple devices, with self-service access to a corporate repository that offers industry-standard SaaS applications. This grants access to multiple web applications such as SalesForce.com, Facebook, Google, WebEx and others. With the evolution of cloud computing, hundreds of firms will offer off-the-shelf SaaS applications.
At present, access to SaaS requires separate authorizations and software fixes for configuration alignment. That is hard to do, especially for integration with existing systems that already reside on private clouds or continue to operate as legacy applications. It is the purpose of the UCD software to manage such integration.
Users are also bringing diverse devices to the workplace. Systems managers must now manage multiple access protocols and conversion software to enable legacy devices to extract useful information from any SaaS offering. It is the purpose of the UCD software to accept a variety of protocols from all of the devices already in place.
The UCD is a hosted service that enables organizations to centrally manage the access and usage of different SaaS applications in a seamless continuum. IT management can therefore extend a user's enterprise identity from the private cloud to the public cloud while simplifying the processing of applications in real time. This is supported by strong policy management of security restrictions as well as by consistent activity reporting.
The purpose of the UCD is to offer a single display for managing user access, identity and security across multiple business applications and multiple cloud environments. It is independent of the Microsoft Active Directory. This should be seen as an evolutionary step in migrating from the proprietary Microsoft environment to an open source environment.
To assure security, the user-centric platform will have to implement the Security Assertion Markup Language (SAML) [see http://pstrassmann.blogspot.com/2011/05/secure-sign-on-for-web-based.html] and Open Authorization (OAuth) [see http://pstrassmann.blogspot.com/2011/05/applying-open-authentication-oauth.html].
The UCD will ultimately bridge the gaps between private DoD clouds and public SaaS clouds. The UCD can then be deployed in an evolutionary manner, without time-consuming integration efforts, while also reducing security risks from multiple access locations. Most importantly, the addition of SaaS services to the portfolio of DoD-hosted applications will materially reduce DoD infrastructure costs at a time when budgets are shrinking.
SUMMARY
Today's DoD workforce expects access to its data anytime, from anywhere. The workforce will therefore turn to SaaS applications to meet rising needs and to cut operating costs.
An increased dependence on SaaS will have to be met by offerings available from commercial clouds that have been modified to meet DoD's tight security requirements. With rising budget constraints this is the most effective path for migrating DoD systems to an increased reliance on cloud services.

* http://en.wikipedia.org/wiki/User-centered_design

Applying the Open Authorization (OAuth) Standard


OAuth (Open Authorization) is an open standard for authorization. It allows users to share their private resources with another site without having to hand out their credentials (username and password).
OAuth allows users to pass tokens instead of credentials. Such a token grants access to a specific site, and only to specific resources, for a defined duration (e.g. the next hour). This allows a user to grant a third-party site limited access, as the sketch below shows.
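As a minimal sketch of what passing a token instead of credentials looks like in practice, assuming an OAuth 2.0 bearer token has already been obtained from an authorization server (the endpoint URL and token value below are placeholders):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class OAuthRequest {
        public static void main(String[] args) throws Exception {
            // Placeholder resource and token; note that no username or
            // password appears anywhere in the request.
            URL resource = new URL("https://api.example.com/contacts");
            String accessToken = "2YotnFZFEjr1zCsicMWpAA"; // limited scope and duration

            HttpURLConnection conn = (HttpURLConnection) resource.openConnection();
            // The token, not the user's credentials, authorizes the request.
            conn.setRequestProperty("Authorization", "Bearer " + accessToken);

            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }

If the token has expired, or its scope does not cover the requested resource, the server simply refuses the request; the user's password is never exposed to the third-party site.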
OAuth is a standard of the Internet Engineering Task Force (IETF). The current version of the standard, OAuth 2.0, defines authorization flows for web applications, desktop applications and mobile phones.
OAuth is completely transparent to users. The end user need not know anything about OAuth, what it is or how it works. The OAuth implementation is embedded in the user experience of both the site requesting access and the site storing the resources.
SUMMARY
Giving an account password to another party on the network is like going to dinner and handing the waiter your ATM card and PIN code. When it comes to the web, users put themselves at risk by sharing private credentials.
DoD personnel are making increased use of multiple web sites for social networking. They can log on to each web site using their identity and passwords, but such disclosure reveals information that an intruder could use to implant malware. OAuth deals with such a risk by allowing users to hand out tokens that grant limited access, for specific uses only and only for a defined time period.
DoD system designs should start including OAuth in the security software that manages access to social computing. Relying on tokens instead of password-protected logins will simplify network management and increase security.

Secure Sign-on for Web-based Applications (SAML)


SAML stands for "Security Assertion Markup Language". It is an XML-based standard for communicating identity information between users and web application service providers. The primary function of SAML is to provide Internet Single Sign-On (SSO) for organizations looking to connect securely with web applications that exist both inside and outside an organization's firewall.
Internet SSO is a secure connection in which identity and trust are shared between a designated security certification Partner and a web application provider. Such an SSO streamlines access to applications by eliminating logins for individual applications; logins are avoided by providing a persistent SSO. For instance, once a SAML SSO has been approved for a user of a Google SaaS service, no further logins are required.
SAML is an OASIS (Organization for the Advancement of Structured Information Standards) standard for exchanging authentication data. It uses security tokens to pass information about an end user between an identity provider and a service provider. IBM, Oracle, RSA, VMware, HP and Google support SAML.
How SAML authorizations are executed depends on how applications are hosted. Service vendors, particularly the various Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) providers, will offer different middleware versions for accepting SAML. This will limit the extent to which applications can be relocated to other vendors.
The simplest use of SAML can be found with Google, the premier Software-as-a-Service (SaaS) offering. The following steps describe how a SAML access authorization is obtained:
1. The user attempts to reach a hosted Google application.
2. Google generates a SAML authentication request.
3. Google sends a redirect to the user's browser. The redirect URL includes the SAML authentication request that will be submitted to the Partner.
4. The Partner decodes the SAML request and extracts the URLs for both Google and the user's destination. The Partner then authenticates the user.
5. The Partner generates a SAML response that contains the authenticated user's username. This response is digitally signed with the Partner's private RSA key.
6. The Partner encodes the SAML response and returns it to the user, so that the user's browser can forward it to Google.
7. Google verifies the SAML response using the Partner's public key (see the verification sketch after this list). If the signature is confirmed, Google redirects the user to the application.
8. The user is now logged in to all Google SaaS applications.
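Step 7 is where the cryptography does its work. As a minimal sketch using the standard Java XML digital signature API (the class name SamlResponseVerifier is illustrative; a real deployment must also validate assertion timestamps, audience restrictions and replay):

    import java.security.PublicKey;
    import javax.xml.crypto.dsig.XMLSignature;
    import javax.xml.crypto.dsig.XMLSignatureFactory;
    import javax.xml.crypto.dsig.dom.DOMValidateContext;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class SamlResponseVerifier {
        // Verifies the XML digital signature on a SAML response, using the
        // Partner's public key. The Document must come from a namespace-aware
        // XML parser or the Signature element will not be found.
        public static boolean verify(Document samlResponse, PublicKey partnerKey)
                throws Exception {
            NodeList nodes = samlResponse.getElementsByTagNameNS(
                    XMLSignature.XMLNS, "Signature");
            if (nodes.getLength() == 0) {
                throw new IllegalArgumentException("SAML response is not signed");
            }
            DOMValidateContext ctx = new DOMValidateContext(partnerKey, nodes.item(0));
            XMLSignatureFactory factory = XMLSignatureFactory.getInstance("DOM");
            XMLSignature signature = factory.unmarshalXMLSignature(ctx);
            return signature.validate(ctx); // true only if digests and key match
        }
    }

Only if verify() returns true should the service provider accept the asserted username and establish the session.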
SUMMARY
The step-by-step process described above represents only the simplest case of a SAML authorization. In the case of Google, it makes the use of Google applications not only rapid but also convenient as a preferred service.
From the standpoint of DoD, the required SAML SSO authorizations can be specified for every user. Different security clearances can be attributed according to need-to-know uses.
The accreditation Partner should be organized within DoD. It should also be staffed by DoD manpower at Network Control Centers that track application access by each DoD person.
The current access authorization process in DoD is based on Common Access Card (CAC) readers. The user then logs in to applications to obtain access privileges. There may be hundreds of such separate logon procedures presently in place. Such logons were conceived when applications were developed under separate contracts. Single Sign-On (SSO) is used only within application "silos". The multiplicity of SSO processes increases costs as well as vulnerability to security compromises.
DoD should therefore consider adopting a customized version of SAML, initially for all of its web applications and subsequently for all of its SaaS applications.
  

What Will Cloud Computing Be in Ten Years?


Customers do not care much about the technical details of computing. They only wish to receive answers, every time and fast. Requested information must be available regardless of the computing device they use. Responses must be secure. There should be no restrictions on the place from which they communicate. Information must be available to the people authorized to make use of it. The sources must include information received from people, from sensors and from public web sites. Information must flow to and from ground locations, ships, submarines, airplanes and satellites. A user must be able to connect with every commercial enterprise on the Internet.
It is the objective of the cloud architecture of the future to totally separate customers' computer appliances from the technical housekeeping details that currently consume huge amounts of time for customers as well as support staffs. There is no doubt that a number of firms will operate in this mode within ten years.
The greatest challenge for cloud computing will be its ability to provide access to every application, whether legacy or new. The future of cloud computing is a hybrid environment in which a variety of services are accessible from any computing appliance.
Such an arrangement makes it possible to use any application, regardless of where or how it is deployed. Applications would not require separate procedures for gaining access or for obtaining separate security permissions. Everything that is either device-specific or location-unique remains under the control of a Cloud Operating System (COS) that is not visible to the customer.
The COS manages the selection of application sources, load balancing, back-up of services, user access privileges and the recognition of a customer's device. All this must be done without a user having to sign in to different servers, log in to different applications, identify different user devices or sign in with different passwords.
What the customer wishes to have is a "personal information assistant" (PIA). Such a device matches a person's identity. It is configured to adapt to changing levels of training. It understands the user's verbal and analytic skills. It knows where the user is at all times. Security restrictions are reconfigured to fit the user's current job. At all times every PIA is monitored from several network control centers. From a catalogue of available services the customer finds what is needed and, with a single click, obtains the desired service. All of the "housekeeping" is taken care of by the COS without any user intervention.
A user must be able to obtain services instantly from a diversity of cloud operations platforms, each hosting a broad range of applications. The COS software must be able to channel a user's request to a diversity of sources. Such flexibility is necessary to assure the portability of requests across any platform, the retrieval of data from any application and compatibility with every conceivable appliance.
The COS of the future differs from current operating systems (OS) such as Windows or Linux. The present OS manages how vendor-defined applications are integrated with dedicated servers. The future COS will manage widely different software-defined environments across the entire "stack" of services. This diversity will include diverse hardware, diverse applications and diverse computing devices.
The key to the deployment of a COS is the establishment of a user's personal "Cloud Identity." Such security also offers catalogues that show what services are available on private and public clouds. The catalogues enable customers to take advantage of services from the public cloud while maintaining the security and control necessary for access to private clouds. With single sign-on security available, the progress to cloud computing can then accelerate.
SUMMARY
The COS is not a figment. It represents a series of evolutionary software offerings that are emerging to dominate the way firms will invest in information technologies.
The integration of three completely separate but interoperable tiers of cloud computing (operations, applications and appliances) becomes the basis for planning the information architecture of the future. It is the availability of completely new software that will make such integration feasible.



How Much Data is Processed by Servers?


No data is available about the amount of data processed through DoD's $36.3 billion IT budget. However, there is information available about the amount of information currently processed by servers in general. With the DoD IT budget ten times larger than the IT budget of any single US corporation, it is possible to infer some of the information processing issues. In 2008 the USA accounted for 38% of the global installed server population. *
According to studies by the University of California, San Diego, in 2008 the world's servers processed 9.57 zettabytes of information, roughly 10 to the 22nd power bytes. That is ten million million gigabytes, or about 3 terabytes per year for each of the 3.2 billion workers in the world's labor force, which works out to roughly 12 gigabytes per worker per working day (assuming about 250 working days a year). The world's 151 million businesses process on average 63 terabytes of information per company annually.
There are about 2.5 megabytes of text in a long book. A stack of such books reaching from the Earth to Neptune and back about 20 times would equal one year's processing by existing servers. Each of the world's 3.2 billion workers would have to read through a stack of books 36 miles long each year to read 9.57 zettabytes of text.
The total number of servers in the world in 2008 was 27.3 million, with 10.3 million in the USA. 95% of the world's total zettabytes in 2008 were processed by low-end servers costing $25,000 or less; the remaining 5% were processed by more expensive servers. 87.8% of the processing on US servers in 2000 was performed by low-end servers. By 2008 that number had risen to 96.0%, which indicates that low-end servers will continue to carry a growing share of the processing workload.
The following graph shows the total number of USA servers, including an estimated equivalent number of low-end servers based on 50% performance/price improvements per year: **


Total annual world server sales in 2008 were $53.3 billion, with entry-level servers accounting for $30.8 billion. It is interesting to note that large computer complexes, such as those operated by Google, Yahoo or Facebook, depend on small-scale servers for information processing. High-end servers show slower gains in performance/price per year than low-end servers and are not preferred in billion-dollar computer complexes.
It follows that most information processing in the world is performed by low-end servers (392 billion million transactions in 2008), with only 10 billion million transactions executed by high-end servers. This pattern is expected to persist. Purchases of computers for cloud computing are not likely to shift in favor of mainframe equipment.
Transaction processing workloads amount to approximately 44% of all bytes processed. Such transactions are "commodity" applications, such as e-mail, that are not handled efficiently by low-end servers. The overhead costs of administering a large number of low-end servers are excessive unless the servers are aggregated into huge complexes that use identical hardware management and software control architectures.
SUMMARY
More than 2,000 DoD system projects, with budgets under one million dollars per year, are likely to be processed on low-end servers. As the performance/price of servers has been improving since 2000, the share of the work performed by low-end servers has been increasing. The current OMB guidelines that count as a data center only operations with more than 500 sq. ft. are becoming irrelevant. A rack-mounted server costing less than $25,000 occupies only 10 sq. ft. of space yet has greater processing power than a 1990 mainframe. Consequently the current DoD approach to reducing the number of data centers will miss the tendency of contractors to continue installing low-end servers in increasingly stand-alone configurations.
Most of the workload on low-end servers consists of Web services and commodity computing. Large economies of scale are available through virtualization of workloads and consolidation for processing on large servers. This will deliver economies of scale and reduce the number of operating personnel needed to support a large number of low-end stand-alone servers.
What is now emerging from the proliferation of servers is the "big data" challenge. Server capacities are almost doubling every other year, driving similar growth rates in stored data and network capacity. Computing is now driven by increasing data volumes and the need to integrate an ever increasing number of heterogeneous data sources. There is a need for rapid processing of data to support data-intensive decision-making.
It is necessary to re-examine the current approaches to realizing economies of scale in DoD IT. This places pressure in the direction of more centralized management of IT resources.



An Examination of DoD IT Spending


According to the DoD Comptroller, the following are DoD FY11 total payroll costs:  *
Pay and Allowances of Officers:  $33.03 billion
Pay and Allowances of Enlisted:  $74.57 billion
Pay and Allowances for Civilians: $10.6 billion
The pay includes basic pay, retired pay accrual, housing allowance, subsistence allowance, incentive pay, special pay, separation pay and the Social Security tax, for a total of $118.20 billion.
The total FY11 DoD headcount is 1,430,895 military personnel and 580,049 civilian personnel, for a total of 2,010,944 persons.
The total cost of FY11 procurement is $134.16 billion, which includes a share of embedded IT costs (such as avionics), none of which is accounted for as an IT cost. Embedded IT costs are estimated to exceed the reported IT costs as the military increasingly depends on computer technologies.
SUMMARY
The FY11 IT budget is $36.3 billion, or 30.7% of total payroll spending ($36.3 billion/$118.20 billion). This ratio exceeds the average for most commercial firms and is comparable to the ratios for large financial services firms.
The annual per-person cost of DoD IT, a key indicator used in IT benchmarking, is $18,051 per person per year ($36.3 billion/2,010,944 persons), which reflects an estimated stock of invested information capital of at least $100,000 per person. At this level DoD IT spending is comparable to the spending of U.S. retail banks.
For benchmarking comparison we have selected the most expensive Software-as-a-Service offering, a $260/user/month "unlimited license" that customizes cloud applications.** The resulting SaaS cost of $3,120 per user per year shows a substantial difference from the per-person DoD cost of $18,051. This can be seen as an indication of differences in the way DoD manages IT as compared with a SaaS approach. The principal distinction between the two approaches is the reliance of SaaS on a single integrated infrastructure, whereas DoD continues to operate thousands of different infrastructures.
The current level of DoD IT spending warrants further examination to see whether adopting a SaaS cloud approach for parts of DoD operations could offer major cost reductions.