
How to Get Ham and Eggs


If you want scrambled eggs, you can get them at McDonald’s (one choice) or at a restaurant (four choices). You can call that Eggs-as-a-Service (EaaS). It is cheap but the choices are limited.

You can also rent an apartment. You can cook the eggs any way you wish. You can call that Eggs-as-a-Platform (EaaP). It costs more, but you also get a kitchen so that you can cook for whomever you wish.

You can also do it the DoD way. Buy a plot of land. Hire an architect. Contract with a builder. Subcontract for appliances. After that you can cook eggs in any way any official can possibly ask for. You can call that Eggs-as-a-Silo System (EaaSS). It will take at least seven years to build, though some of it will never get done.

If you want ham and eggs the DoD way, you will also have to build a house for cooking hams. A road will be needed to connect the egg and ham houses. If you also need bread, vegetables and spices, you will need still more roads. To connect all sources to all destinations you may need more than ten thousand roads, as the sketch below suggests.
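How fast that road count grows is simple combinatorics. A back-of-the-envelope sketch, assuming illustrative site counts (the numbers below are not DoD figures):

```python
# Roads needed for full pairwise connectivity among n sites.
# Illustrative counts only; the quadratic growth is the point.
def roads_needed(n_sites: int) -> int:
    """Bidirectional links required to connect every pair of sites."""
    return n_sites * (n_sites - 1) // 2

for n in (50, 100, 150, 200):
    print(f"{n:>4} sites -> {roads_needed(n):>6} roads")
# 150 sites already need 11,175 roads -- past the
# "more than ten thousand" mark.
```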

DoD can also build ham and egg houses using enterprise resource planning (ERP) software. There are 18 listed ERP systems delivered as Software-as-a-Service (SaaS).* These SaaS packages are inexpensive and instantly available. EaaS will always be faster and cheaper than any EaaSS solution. There is no need to wait more than seven years to implement an EaaS. If you do not like your choice, you can always switch.

SUMMARY
An EaaS method of serving ham and eggs will not be to everyone's liking. There will always be someone who insists on preserving legacy tastes.

While it may take almost a decade to create two perfect kitchens that satisfy all decision-makers, the changing cast of architects, builders, suppliers and cooks will keep asking for new features and new visions of what the kitchen-of-the-future should look like.

There must be a better way for DoD to install EaaS solutions. You do not have to build a separate building in every place where eggs and hams are cooked. All you need are a few secure hotels where adaptable, low-cost menus can satisfy rapidly changing tourists' tastes.


* http://www.top10erp.org/erp-software-comparison-web-based-saas-platform-566
  

Optical Fiber or Wireless?

I have just returned from an Australian lecture tour, which included meetings with government officials in Canberra. I discovered that Australia is engaged in a vigorous debate about an investment of more than $36 billion in broadband optical fiber. A new, state-owned network monopoly would deliver at least 1 Gbps of connectivity to every home.

The Australian debates prompted me to look into the future of wireless bandwidth to examine the optical fiber vs. 4G wireless tradeoffs.

The International Telecommunication Union defines 4G as a downlink speed of 1 gigabit/sec for stationary or slow-moving users and 100 megabits/sec when devices are traveling at higher speeds.

A 4G network is expected to provide all-IP broadband for IP telephony, laptop connectivity, wireless modem access and smartphone support. Current wireless carriers have nothing like that, despite repeated claims of 4G. Technically, what carriers offer are pre-4G, or even 3.5G, capabilities. The ITU lets carriers advertise LTE and WiMax Advanced as 4G because these networks are significantly faster than the established 3G technology, which runs at about 14.4 megabits/sec downlink. WiMax can deliver up to 70 Mbps over a 50 km radius.

When in place, 4G technology will be able to support interactive services such as video conferencing (with more than two sites simultaneously), high-capacity wireless Internet and other communications needs. The 4G bandwidth would be 100 MHz, and data would be transferred at much higher rates. Global mobility would be possible. The networks would be all-IP, based on the IPv6 protocol. The antennas will be more capable and will offer improved access technologies such as OFDM and MC-CDMA (Multi-Carrier CDMA). Security features will be significantly improved.

The purpose of 4G technology, based on a global WWWW (world wide wireless web) standard, is to deliver “pervasive networking”, also known as “ubiquitous computing”. The user will be able to simultaneously connect to several wireless access technologies and seamlessly move between them. In 5G, this concept will be further developed into multiple concurrent data transfer paths.

In the United States, the immediate challenge is finding the wireless spectrum. Recent tests of LightSquared's ground-and-satellite LTE service found that it interfered with GPS signals. Therefore, the FCC is holding back on proceeding further.
Meanwhile, the FCC is considering an interim deployment and operation of a nationwide 4G public-safety network, which would allow first responders to communicate between agencies and across geographies, regardless of devices. The FCC released a comprehensive paper indicating that 10 MHz of dedicated spectrum, currently allocated for public safety from the available 700 MHz band, will be used to provide adequate capacity and performance for special communications as well as emergency situations.

While the US holds back on 4G, South Korea has announced plans to spend US$58 billion on 4G and even 5G technologies, with the goal of capturing the highest mobile phone market share after 2012 and the hope of setting the international standards.
Japan's NTT-DoCoMo is jointly developing 4G with HP. At the same time, Korean companies such as Samsung and LG are also pressing ahead with 4G to gain global market share in advanced smartphones. Recently, Japan, China and South Korea have begun working together on wireless technologies, and they plan to set the global standards for 4G.

A 5G family of standards would be implemented around the year 2020. A new mobile generation has so far appeared roughly every ten years since the first 1G system was introduced in 1981. The 2G (GSM) system rolled out in 1992. The 3G system, W-CDMA/FOMA, appeared in 2001. The development of the 2G and 3G standards took about ten years from the official start. Accordingly, 4G installations should begin after 2011. There is no official 5G development project; one may materialize only in time for a launch around 2020.
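The cadence is easy to see when the launch years cited above are laid out; a minimal sketch (the 5G year is an extrapolation, not an announced date):

```python
# Launch years of mobile generations as cited in the text;
# 5G is extrapolated from the roughly ten-year cadence.
launch_years = {"1G": 1981, "2G (GSM)": 1992, "3G (W-CDMA/FOMA)": 2001,
                "4G": 2011, "5G (extrapolated)": 2021}
years = list(launch_years.values())
gaps = [b - a for a, b in zip(years, years[1:])]
print(gaps)  # [11, 9, 10, 10] -- about one generation per decade
```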

The development of the peak bit rates offered by cellular systems is hard to predict, since the historical bit-rate development has shown little resemblance to a smooth exponential function of time. The data rate increased by a factor of 8 from 1G (1.2 kbps) to 2G (9.6 kbps). The peak bit rate increased by a factor of 40 from 2G to 3G for mobile users (to 384 kbps), and by a factor of about 200 from 2G to 3G for stationary users (2 Mbps). The peak bit rates are expected to increase by a factor of 260 from 3G to 4G for mobile users (100 Mbps) and by a factor of 500 from 3G to 4G for stationary users (1 Gbps).
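Those factors follow directly from the peak rates quoted; a quick check, using the figures given above:

```python
# Verifying the improvement factors between generations.
# Peak rates in bits/sec, taken from the figures in the text.
rates = {
    "1G": 1.2e3,           # 1.2 kbps
    "2G": 9.6e3,           # 9.6 kbps
    "3G mobile": 384e3,    # 384 kbps
    "3G stationary": 2e6,  # 2 Mbps
    "4G mobile": 100e6,    # 100 Mbps
    "4G stationary": 1e9,  # 1 Gbps
}
print(rates["2G"] / rates["1G"])                        # 8.0
print(rates["3G mobile"] / rates["2G"])                 # 40.0
print(rates["3G stationary"] / rates["2G"])             # ~208 (quoted as 200)
print(rates["4G mobile"] / rates["3G mobile"])          # ~260
print(rates["4G stationary"] / rates["3G stationary"])  # 500.0
```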

Affecting the launch of 4G and 5G wireless technologies is the growth in mobile data traffic at a compound annual growth rate (CAGR) of 92 percent between 2010 and 2015. Global mobile data traffic will grow three times faster than land-based IP traffic from 2010 to 2015. Global mobile data traffic was 1 percent of total IP traffic in 2010 and will be 8 percent of total IP traffic by 2015.
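A 92 percent CAGR compounds dramatically. A minimal sketch of what that rate implies over the five years in question:

```python
# Compounding a 92% annual growth rate over 2010-2015.
cagr = 0.92
multiple = (1 + cagr) ** 5   # traffic nearly doubles every year
print(round(multiple, 1))    # ~26.1 -- roughly a 26-fold traffic increase
```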

SUMMARY
Whether Australia should proceed with digging trenches to every household to secure 1 Gbps connectivity by 2015 is debatable. There are also political considerations.
First, the Australians will have to build a wireless network and erect wireless towers anyway. They will have to provide connectivity for wireless traffic that is growing much faster than land-based traffic. Wireless assets will continue to require steady new capital investment as the South Koreans, Chinese and Japanese forge ahead with 1 Gbps wireless 4G service before 2015 and with 5G after 2020.

Second, the capital costs of any technology upgrade will always be much higher for landlines based on optical fiber than for wireless towers. The operations and maintenance costs for the last-mile copper circuits that will have to remain in place exceed the capital costs of additional wireless towers.

Third, the LTE and WiMax Advanced wireless networks already in place can support downlink speeds above 14.4 megabits/sec and up to 70 megabits/sec. I presently pay a premium for a 12 megabits/sec downlink high-speed connection to the Internet. That is completely satisfactory for my extensive Internet usage as well as movie downloads. What the Australians would do with immediate availability of more than 14.4 megabits/sec is not clear.
Lastly, there is the issue of choosing technologies that lend themselves to market competition rather than monopoly operations. The capital cost of placing fiber optic cable is an investment with a technology life of over 50 years. Fiber optic cable to every home suits a monopoly pricing model, since the costs of delivering services are unrelated to the costs of the capital investment. In contrast, once wireless towers are in place, they can host a variety of wireless providers at costs that can be adjusted to reflect geographic characteristics. There is no question that proceeding with wireless connectivity would allow diverse competitive offerings to co-exist.

Given the prospects of rapid evolution in wireless connectivity, as well as the wide geographic dispersal of Internet users in Australia, it does not appear to make sense to commit immediately to expensive fiber optic circuits for the delivery of Internet services to Australians.

Software-as-a-Service as Ultimate Goal of Cloud Computing

A recent example of fracturing of systems into smaller components is found in a March 2011 U.S. Government Accountability Office report on the U.S. Navy’s Next Generation Enterprise Network program. To increase the number of bidders for a major system, the acquisition executives expanded it from the existing three contractual relationships to 21. Instead of the integration that is essential for cloud computing, one of the largest Defense Department projects will be headed in the opposite direction.

In fiscal year 2011, the department's information technology spending had rapidly expanded to $36.3 billion, not including the costs of military and civilian personnel, which would add at least another $6 billion to the total cost of information technology ownership. The per capita cost of information technology now represents more than 10 percent of payroll costs. It exceeds the expenses for most fringe benefits. It offers one of the largest operating-cost targets for achieving cost reductions.
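That per capita claim can be sanity-checked from figures in these posts; the payroll base below is an illustrative assumption, not a DoD figure:

```python
# Rough per capita check. The IT figures are from the text; the
# 2.5 million headcount is cited later in these posts; the average
# payroll cost per person is an assumption for illustration.
it_spend = 36.3e9 + 6.0e9   # IT budget plus IT personnel costs, FY2011
headcount = 2.5e6           # military, civilian and reserve personnel
per_capita_it = it_spend / headcount
print(round(per_capita_it))            # ~16,920 dollars per person

assumed_payroll = 120_000   # hypothetical fully loaded payroll cost
print(f"{per_capita_it / assumed_payroll:.0%}")  # ~14% -- over 10 percent
```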

The Defense Department’s information technology systems management culture now has arrived at a dead end, traveling down the wrong street. What then is the way out? What cultural changes will be necessary to speed up the adoption of cloud-based computing? What is the time urgency?

Two variables will influence decisions regarding what needs to be done. The first is the need to make vast improvements in the security of Defense Department operations to ensure survival during a concerted cyberattack. The second, and perhaps more important variable, is financial. New money is needed to pay for better security and to fund long-overdue innovations. That cannot come from larger budgets. It must be extracted from savings in current operations.

The fastest generator of Defense Department cash savings is to transfer the operation of “commodity computing” to the SaaS clouds. Commodity computing includes email, collaboration, text processing, messaging, spreadsheets and calendars. It includes voice over Internet protocol, or VoIP, as well as video over IP. Commodity computing also provides access to all forms of social computing such as YouTube, Facebook, MySpace, Twitter and blogs.

Commodity applications consume a large share of the Defense Department's $4 billion fiscal 2011 network costs. Most of that is concentrated in the Defense Information Systems Agency (DISA). However, it is feasible to use the Internet as a secure channel instead of depending on private circuits. There is no reason why such traffic should not be routed over the Internet instead of over the dedicated links the department operates for commodity applications. Banking transactions, airline reservations and global trade all are conducted over the Internet using a variety of techniques that have been deployed to make it more secure.

The technology for placing Defense Department commodity computing and communications into a SaaS cloud is available. SaaS services or licenses are available from competitive sources at a fraction of what it costs the department now. SaaS depends on open-source applications, which further reduces license costs for proprietary software. Operating-cost reductions are potentially in the range of 60 percent to 70 percent with only minimal up-front development expense.

One of the major cyberthreats to the Defense Department is the chance that commodity microprocessors, currently manufactured in places without adequate security inspection, may be installed in department servers. Such microprocessors would come with back-door openings already implanted. This can be overcome if the Defense Department pursues a SaaS architecture. The world's largest SaaS firm, Google, has its circuit boards custom built. The National Security Agency (NSA) has experience with oversight of microprocessor manufacturing. No reason exists to prohibit this from being done for a Defense Department SaaS, which would be only a fraction of the size of Google.

SUMMARY

Migration to SaaS must overcome many obstacles. For starters, the Defense Department will have to stand up an organization that will plan, manage and contract for commodity computing.

Also, Defense Department components will have to commit to a uniform approach to network access authorization by everyone. Information security will have to be implemented consistently, following standards set by NSA.

Every commodity computing service will have to be structured for interoperability across several SaaS platforms so that vendor lock-in cannot take place. SaaS services need to be delivered from redundant data centers. More than three data centers should handle every transaction to deliver 99.9999 percent reliability.
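The six-nines target can be sanity-checked under a simple independence assumption; the 99 percent per-center availability below is illustrative, not a measured figure:

```python
# Combined availability of n redundant data centers, assuming
# independent failures and an illustrative 99% per-center availability.
per_center = 0.99
for n in (1, 2, 3):
    combined = 1 - (1 - per_center) ** n
    print(f"{n} center(s): {combined:.6%}")
# 1 center(s): 99.000000%
# 2 center(s): 99.990000%
# 3 center(s): 99.999900%  -- six nines once three centers share the load
```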

SaaS would have to be distributed to the edge of Defense Department sites so that Google-like latency can be realized. Delivery of SaaS transactions will have to be monitored by means of automated network control centers that will conduct real-time surveillance of the status of every user device.

SaaS can operate only as a component of a hybrid cloud configuration. Legacy, IaaS and PaaS clouds must supplement Defense Department computing and telecommunications during a transition that may never be completely finished.

Legislative History: The Brooks and Clinger-Cohen Acts

The roots of the current Defense Department information technology culture can be traced to the Brooks Act of 1965. This legislation reflected a mainframe view of computing. Its purpose was to limit the growth of data centers to constrain costs. The Brooks Act was based on the assumption that if the General Services Administration would control the purchase of mainframe computers, all associated costs would be limited. The 1965 idea that closing data centers is a way to cut costs has persisted to this day. The number of Defense Department data centers has grown at an accelerated rate in recent years. It now stands at 772. However, this count has no significance as a cost-management metric.

Servers, with a capacity far exceeding what now is defined as a data center, can be purchased for a fraction of mainframe costs. A culture that pursues numerical reductions in the number of data centers without examining the associated cost-reduction, security, application and support characteristics may simplify Defense Department information processing but will not lead to the personal information assistants (PIAs) that should be the future of the department. This flaw in Brooks Act thinking was recognized in the Clinger-Cohen Act of 1996.

Clinger-Cohen hoped to address the rising costs of information technologies; at the same time, the military was shifting to new concepts such as information warfare and information dominance. While the military was driving toward increased dependency on computers, the cost management culture created by Clinger-Cohen was lagging behind the military's needs. Although Clinger-Cohen created the position of the chief information officer (CIO) to promote warfare-centric deployment of information technology, the management of information technology continued to install isolated projects as a way of containing technology risks.

The central CIO oversight was diluted when the role of acquisition personnel was enlarged. The acquisition corps, dominated by civilians with financial and not computer backgrounds, had every incentive to award contracts that followed regulations that were at least 20 years old. This locked the Defense Department culture into client-server concepts. These were alien to concepts that are now being implemented by leading firms such as Google, Amazon, Microsoft and IBM, as well as thousands of other commercial enterprises.

The unity of oversight that was advocated by Clinger-Cohen never happened. What exist now are the remainders of a culture that has been spending money primarily on buying networks on the assumption that if networks are built, integration will somehow follow. That is like building superhighways without thinking about the origins of automobile traffic.

Clinger-Cohen had several consequences. The share of information technology spending allocated to the Defense Department infrastructure, which supports an increasingly disjointed collection of systems, grew from 33 percent in 1992 to the current 57 percent. Because infrastructure reflects system overhead, less money was available for serving the warfighter’s needs.

Under Clinger-Cohen, there has been a pronounced shift of spending to contractors as well as to the legislatively mandated set-aside subcontracts. The best estimate of information technology spending now managed by contractors is 76 percent. There are more than 1,000 contracts with annual budgets of less than $1 million. The amount of documentation, testing and certification that is demanded by the stretched force of acquisition executives is now the preferred way of containing risks. The cost per function delivered by contractors is now a large multiple of commercial costs.

A consequence of shifting to contractors was the depletion of the cadre of qualified military information professionals needed to provide leadership that would assure compliance with warfare requirements. As an example, only five flag-level officers presently serve in the Navy as "information professionals." This can be contrasted with the decades-long tradition of a four-star Navy flag officer in charge of nuclear propulsion, supported by 12 flag officers.

There are good reasons to argue that in terms of complexity and scope, the migration into the cloud environment will exceed the challenges faced by the late Adm. Hyman Rickover, USN. Supporting the future of computing in the Defense Department requires a culture that views information technology as a warfare capability and not as a back-office task that is best delegated to suppliers.

One of the consequences of Clinger-Cohen has been the shift of a large share of information technology costs from the military to defense agencies such as the Defense Information Systems Agency (DISA), the Defense Logistics Agency (DLA) and the Defense Finance and Accounting Service (DFAS). Agencies now spend twice as much ($14.6 billion) on information technology as each of the services individually (averaging $7 billion each). As a result, the culture now favors the distribution of information technologies into 2,103 systems silos. Because of funding limitations, almost every system enclave must pursue separate architectures, unique applications, diverse standards, incompatible software design methods and inconsistent operating practices.

Small projects do not have sufficient money to fund rising security requirements. The military is experiencing a rising dependency on the support from the agencies, which have no direct fiscal accountability for work done for the military.

Systems managers now concentrate more on carving out the largest information technology budget they can extract during the budget cycle than on keeping up with rapidly changing technological capabilities.

SUMMARY
Clinger-Cohen left the Defense Department with a culture that is more than 20 years old just as information technologies are charging ahead with cloud concepts. Individual project managers do not have the funds to invest in innovation. They are just trying to manage ever-smaller incremental projects.

Cultural, not Technological Obstacles for DoD

PIA-based hybrid networks differ from the client-server environment and even more from the mainframe data center environment. For example, in the client-server environment, there are multiple links from desktops to the servers. An estimated 15,000 networks are in place in the Defense Department providing such connectivity. In the hybrid cloud environment, there is only a single link from any PIA to every conceivable server.

Also, in the client-server environment, each server-silo operates as a separate computing cluster. In a hybrid cloud configuration, there are automated services that discover where the requested data is located.

Finally, in the client-server environment, there is a proliferation of separate databases with duplicate data. Almost every major application keeps storing diverse amounts of incompatible data because that is dictated by the way in which organizations let contracts for isolated applications. Additional software is then required for translating content as well as formats. That makes real-time interoperability hard to achieve. In a hybrid cloud configuration, there is a universal metadata directory, which steers database accesses to the applicable servers that will provide automated responses.
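A toy sketch of the directory idea: one catalog maps a logical data set name to whichever server holds it, so applications never hard-code locations. All names below are hypothetical:

```python
# Toy illustration of a universal metadata directory. Applications
# ask for a logical data set; the directory steers the request to
# the server that holds it. All names are hypothetical.
metadata_directory = {
    "personnel/records":   "https://server-a.example.mil/api",
    "logistics/inventory": "https://server-b.example.mil/api",
    "finance/payroll":     "https://server-c.example.mil/api",
}

def resolve(dataset: str) -> str:
    """Return the server endpoint registered for a logical data set."""
    try:
        return metadata_directory[dataset]
    except KeyError:
        raise LookupError(f"no server registered for {dataset!r}")

print(resolve("logistics/inventory"))  # https://server-b.example.mil/api
```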

The key issue facing the Defense Department now is how to migrate to where it should be—a more robust hybrid cloud environment. How fast can the department transition into a hybrid cloud that delivers universal connectivity to every PIA? How can the department change its information architecture so that the technical details of information technology management will remain invisible to users?

The obstacles that block the transition from a fractured computer-centric asset acquisition environment to a universal user-centric architecture are cultural, not technological.

It is not the limitations of technology that are holding up the Defense Department's progress. The technology of cloud computing is readily available. What needs to be overcome is the prevailing acquisition-centered culture. Instead of integrating systems for universal interoperability, the department still is pursuing a policy of systems fragmentation. Contracts are awarded as dictated by acquisition regulations, not by operating needs.

The current process for acquiring information technology is broken into six phases to contain risks. This separates planning, development, vendor solicitation, contracting for services, asset acquisition, and operations and ongoing maintenance. To create a Defense Department system requires the coordination of dozens of organizations, multiple contracts and a large number of subcontracts. This results in an elongation of implementation schedules that are currently twice as long as the technology innovation cycle. Meanwhile, projects will be managed by executives who will be on an assignment only for a short while before moving on. This will guarantee that changes in scope will creep in, budgets will rise and results will fall short of what originally was proposed.

SUMMARY

The fundamental issue in the Defense Department is not the supply of technologies but the department’s capacity to organize for making the transition into the hybrid clouds. What needs changing is not technology, but rather, the fiscal culture.

The Rising Importance of the Personal Information Assistant

The majority of the 2.5 million military, civilian and reserve personnel in the U.S. Defense Department do not care much about the technical details of computing. Users only wish to receive answers reliably and quickly. Requested information needs to be available regardless of the computing device they use. Responses must be secure. No restrictions should hamper access by certified users communicating from remote locations. Information has to be available for people authorized to make use of what they receive.

Information sources must include data received from people, from sensors or from public websites. Information must be available to and from ground locations, ships, submarines, airplanes and satellites. A user must be able to connect with every government agency as well as with allies.

What the Defense Department customer wishes to have is a personal information assistant (PIA). Such a device matches a person’s identity. It is configured to adapt to changing levels of training. It understands the user’s verbal or analytic skills. It knows where the person is at all times. Any security restrictions are reconfigured to fit a user’s current job. Every PIA is monitored at all times from several network control centers that are operated by long-tenured Defense Department information professionals and not by contractors.

What a user sees on a display are either graphic applications or choices of text. The user only needs to enter authentication signatures.

Defense Department users will not know the location of the servers that drive their PIAs. The user does not care which operating system an application uses or how a graphical display on the PIA is generated. Which programming language is used is irrelevant. If a message is encrypted, the embedded security will authorize decryption. Although the department has thousands of applications, the user will be able to see only those applications that correspond to the situation for which use is authorized.

What the users then hold in their hands reflects what is known as a software-as-a-service (SaaS) cloud. SaaS is the ultimate form of cloud computing. It delivers what a user can put to use without delay. It is an inexpensive computing solution for everyone, everywhere, by any computing method.

However, SaaS is not the only model for cloud computing. There are private and public infrastructure-as-a-service (IaaS) clouds. Private and public platform-as-a-service (PaaS) clouds also are available. Some existing legacy applications ultimately will be replaced, but in the meantime, they will be placed on the Defense Department networks as virtual computers.

Virtualized legacy applications can be useful if encapsulated within an IaaS structure. During the transition from legacy to SaaS computing, every PIA will be able to access all information. The result will be the Defense Department Hybrid Cloud.

SUMMARY

A transition into hybrid computing is a concept of what could be available universally to everyone in the Defense Department. The centerpiece of the Defense Department’s future computing network will then be the ubiquitous PIA—not the servers, not the networks and not the data centers. Servers, networks and data centers are necessary, but concentrating on them represents an obsolete way of thinking. What matters now is the PIA, which is the computing appliance that delivers an all-inclusive computing experience. The user sees only the results. All the details regarding how results are calculated remain invisible.