Small Business Share of IT Spending


Multiyear contract awards for Federal IT spending have been growing steadily, from $50 billion in 2003 to close to $70 billion in 2010.(1) This would represent at least 70% of IT funding.

Contracts under $100,000 are automatically set aside for small business; contracts from $100,000 to $500,000 can be automatically set aside for small business provided there are sufficient bidders; and contracts over $500,000 must include a subcontracting plan for small business. Close to 23% of prime contracts have in the past been awarded to small businesses.
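
The thresholds above amount to a simple decision rule. The short Python sketch below restates them for illustration only; the function name, the yes/no treatment of "sufficient bidders" and the reading of the last threshold as "over $500,000" are assumptions, not language from the regulations.

def set_aside_category(contract_value_usd, sufficient_small_bidders=True):
    """Classify a contract against the set-aside thresholds described above."""
    if contract_value_usd < 100_000:
        return "automatic small-business set-aside"
    if contract_value_usd <= 500_000:
        # The middle band is simplified to a single yes/no flag.
        return ("may be set aside for small business"
                if sufficient_small_bidders else "open competition")
    return "requires a small-business subcontracting plan"

for value in (50_000, 250_000, 750_000):
    print(value, "->", set_aside_category(value))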

In addition to these allocations, the Federal acquisition regulations also require awards under the HUB Zone program, which offers contracts to small businesses located in high-unemployment, low-income areas, as well as to 8(a) firms owned by socially or economically disadvantaged individuals and to women-owned firms. In addition, contract set-asides are made for Service-Disabled Veteran-Owned, Veteran-Owned, Small Disadvantaged Business and Native American small firms.

SUMMARY
The total share of IT contracts set aside for small business is not known but could be as high as 30%. What matters is the large constituency of local firms that depend on Federal funds. That number could exceed 10,000 enterprises that are closely connected to Congressional sources.

Data center and application consolidations will have to take into consideration the obstacle of overcoming the existing multi-year contracts held by a multitude of firms.

(1) http://iq.govwin.com/corp/forms/form.cfm?promoid=3355&sourceid=21&gclid=CNTi9bm6gq8CFYGo4Aod01DO2A

The “Personal Cloud” Concept


As information technologies become more complex and widespread, consultants find it useful to coin new labels for describing novel phenomena. The latest example is the widespread use of the “personal cloud” buzzword. It describes what will displace what has heretofore been the primary focus of computing: the deployment of desktop personal computers. For the new era of “post personal computer” computing, the Gartner firm has started promoting the concept of the “personal cloud” to describe an approach to computing that is now being adopted at a rapid pace.

There is a radical difference between desktop computing on stand-alone personal computers and what can now be found in cloud-based networks that support the highly diverse computing needs of a user population. The differences are technological, economic and behavioral. Migrating from reliance on desktop personal computers to personal cloud computing represents a radical change. It cannot be achieved through small incremental improvements. It requires an overhaul in the architecture and the organization of how systems are designed and then delivered.

The idea of cloud computing is rooted in the technology of virtualization. The evolution of personal computing was based on the migration of information processing from the central mainframe computer into the hands of individual users through the desktop computer. The personal cloud is reversing this trend. As the processing power of billions of personal desktop computers expands, the utilization of their capabilities is diminishing. Only a part of the computer logic is now dedicated to applications, as computer services are increasingly accessed directly from the Internet. The overwhelming majority of desktop computing is dedicated to processing the code that manages the operating system, to the manipulation of databases and to the organization of communications. As security vulnerabilities mount, much of the computing power is dedicated to security assurance. The computer that is now cabled to sit on the desktop is increasingly inadequate as users shift to mobile computing.

Virtualization of computing makes it possible to create pools of server-based central computing power that can share computing cycles among thousands of individual desktop machines. Virtualization allows for the pooling of data storage to obtain better utilization of capacity. Virtualization combines expensive communication services to serve a cluster of virtual computers and thereby reduces the exposure to security risks. Virtualization detaches local desktop hardware from having to maintain the large overhead of operating systems. Virtualization makes it possible to take advantage of the economies of scale of the combined capacity of hundreds of thousands of central servers while enabling fail-over and automated back-up. Virtualization can deliver exceptionally high levels of service reliability. Through separation of applications from the underlying infrastructure, it is possible to create data processing utilities that deliver a standard computing environment independent of any individual user's equipment. In this way, the user gains the freedom to process applications using any computing device from any location. The central means of access to computing is no longer the dedicated personal computer but the personal cloud, which is available anywhere, any time and from any device.
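
The pooling and fail-over ideas above can be illustrated with a toy model. The Python sketch below is only a schematic, assuming a hypothetical pool that places desktop sessions on the least-loaded server and re-places them when a server is lost; the class and method names are invented and do not correspond to any real virtualization product.

class ServerPool:
    """Toy model of pooled server capacity serving many virtual desktops."""

    def __init__(self, servers_and_capacity):
        # e.g. {"server-a": 2, "server-b": 4} sessions per server
        self.capacity = dict(servers_and_capacity)
        self.sessions = {}  # session_id -> server name

    def _loads(self):
        loads = {server: 0 for server in self.capacity}
        for host in self.sessions.values():
            loads[host] += 1
        return loads

    def start_session(self, session_id):
        # Place the session on the least-loaded server with spare capacity.
        loads = self._loads()
        for server in sorted(self.capacity, key=lambda s: loads[s]):
            if loads[server] < self.capacity[server]:
                self.sessions[session_id] = server
                return server
        raise RuntimeError("pool exhausted")

    def fail_over(self, failed_server):
        # Automated fail-over: re-place every session from the lost server.
        orphans = [s for s, host in self.sessions.items() if host == failed_server]
        del self.capacity[failed_server]
        for session_id in orphans:
            del self.sessions[session_id]
        return {session_id: self.start_session(session_id) for session_id in orphans}

pool = ServerPool({"server-a": 2, "server-b": 4})
for user in ("alice", "bob", "carol"):
    print(user, "->", pool.start_session(user))
print("re-placed:", pool.fail_over("server-a"))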

The personal cloud makes it possible to obtain computing services from consumer-grade devices available at rapidly decreasing competitive prices. Such devices browser-connect to the cloud without requiring expensive systems integration efforts. They depend on the network infrastructure and need only simplified browser software for application services. This arrangement greatly reduces security risks, since the user’s smartphone or tablet does not store the operational code that is the prime source of vulnerability. All such code is stored on servers, where it can be protected with much greater competence.

The low cost of disposable and rapidly obsolescent user devices can be matched against the large capital cost of cloud computing centers, where all of the processing and communication takes place. The central facilities can then be constructed to have low depreciation rates because engineering can concentrate primarily on the delivery of commodity machine cycles. The personal cloud device can be set up for user charges based on computing usage, on a per-use basis.
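
As a rough illustration of per-use charging, the Python fragment below totals a bill from metered usage. Every rate in it is a made-up placeholder, not a figure from any cloud provider.

# Hypothetical per-use rates; all values are placeholders for illustration.
RATES = {
    "cpu_hours": 0.05,          # dollars per virtual CPU hour
    "storage_gb_months": 0.10,  # dollars per gigabyte stored per month
    "gb_transferred": 0.02,     # dollars per gigabyte moved over the network
}

def monthly_charge(usage):
    """Sum metered usage against the per-unit rates."""
    return sum(RATES[item] * quantity for item, quantity in usage.items())

print(monthly_charge({"cpu_hours": 120, "storage_gb_months": 50, "gb_transferred": 30}))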

The feature-rich desktop computer locks users into fixed-cost economics. The personal computer user must then be fitted into custom designs that are neither interoperable nor subject to transplantation into another environment. In contrast, if the cloud is constructed using open source application interfaces, the cloud customer can take advantage of competitive offerings from a variety of commercial cloud services, each offering a diversity of pricing plans and a variety of services. Such an arrangement can also be designed as a hybrid model.

The owners of personal clouds will have access to a variety of public and private cloud services. In a typical environment the private cloud will contain proprietary and certainly classified computing services. Depending on security and pricing terms, customers can then shop for applications from a vast collection of ready-to-use offerings, usually available for a fixed price, inclusive of maintenance. There will be no cost of integration, whereas the user of a personal computer will have to expend money for custom adaptation of every application into a uniquely defined computing architecture. In the personal cloud it is possible to rapidly swap applications for upgrades, provided that the application interfaces are constructed as open source protocols. In contrast, owning a personal computer will always mean incurring costs to change any functionality.

The dependency on personal computing has pushed developers to organize projects that build increasing complexity and size into any design. Constructing a project that assures total interdependency between desktop, laptop, server, database and network computers imposes rising costs of synchronization and integration on the entire effort. Consequently the size of individual projects keeps growing, implementation schedules keep elongating and the gap between the original requirements and what is ultimately delivered keeps widening, until much of what was promised is obsolete by the time it is delivered.

The personal cloud avoids the dependency on large projects. For instance, the database storage pool can be constructed as a utility service with an implementation schedule measured in decades. The server processing pool can proceed on a schedule that requires only a few years, whereas the construction of local applications can proceed at a pace of only a few days.

SUMMARY
The last decade has seen a rapid pace of evolution in computing. We are seeing orders-of-magnitude changes. Firms are moving from response times measured in hours to results in minutes and even seconds. Enterprises need to process billions of transactions instantly, which requires not only internal data but also integration with external sources.

The client-server model, which depended on billions of desktop computers to assist in the processing of local work, has outgrown its utility. The movement towards the “post personal computer” era has already begun. It places reliance on pooled cloud computing services, which are accessible through a person’s own personal cloud. The personal cloud can then be defined as access privileges to a collection of cloud services.

In its network form the personal cloud represents a shift from knowledge based on the ownership of assets to knowledge based on universal access to everything that is available. The personal cloud will endow everyone with the potential to access knowledge that has hitherto remained unreachable.

Air Force to Adopt Thin Clients?


The Air Force has just released a Request for Information (RFI) to support new client architectures for the years 2014 and beyond. The purpose is to explore a zero/thin-client solution for the Non-Secure Internet Protocol Router Network (NIPRNet) as well as the Secure Internet Protocol Router Network (SIPRNet).

The new architecture would be managed at the enterprise level rather than the base level. This solution would include all server hardware, client hardware, server/network-based storage, profile management capability, and zero/thin-client hardware.

The goal is to enable users to access AF desktop-like capabilities through any device, including commercial mobile devices. This would eliminate the need to store classified hard drives at the client location. The following is a partial list of required features:

1. Two-factor access would be accomplished through NIPRNet Common Access Card or SIPRNet access via Smart Card (SIPRNet Token, 3.5 volt). Access would be possible from any military base or any data network that uses approved DoD/NSA Smart Card Readers.
2. System will support up to 1,000,000 NIPRNet users across the globe at 100+ bases. It will support up to 220,000 users on SIPRNet across the globe.
3. System will support up to 700,000 concurrent users on NIPRNet and up to 75,000 concurrent users on SIPRNet.
4. A clean desktop will be presented to the user each time they log in.
5. Existing desktop systems (fat clients) will be supported until hardware can be refreshed through a zero client.
6. Central management of USB ports will be included.
7. All support servers will be managed at the enterprise level.
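
For planning purposes, the capacity figures in the list above can be captured in a small configuration structure. The Python sketch below simply restates the RFI numbers; the field names and the derived concurrency ratio are illustrative additions, not part of the RFI.

# Capacity targets taken from the RFI list above (field names are illustrative).
AF_THIN_CLIENT_TARGETS = {
    "NIPRNet": {"total_users": 1_000_000, "concurrent_users": 700_000, "bases": 100},  # "100+" bases
    "SIPRNet": {"total_users": 220_000, "concurrent_users": 75_000},
}

def peak_concurrency_ratio(network):
    """Fraction of the user population expected to be logged in at peak."""
    target = AF_THIN_CLIENT_TARGETS[network]
    return target["concurrent_users"] / target["total_users"]

for network in AF_THIN_CLIENT_TARGETS:
    print(network, f"{peak_concurrency_ratio(network):.0%} concurrent at peak")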

SUMMARY
The new RFI is a correct first step for starting the migration towards enterprise computing. It is a good approach to beginning the transformation of the DoD architecture. Thin client computing is a prerequisite for making large improvements in information security while realizing large savings in operating costs.



Software Defined Networks


Customers have gained significant savings as a result of compute and storage virtualization. However, additional efficiencies in cloud data centers have reached a barrier. Reaping more savings goes beyond the scope of today’s data center networks. The external physical network, while excellent at forwarding packets, is a costly restriction to realizing full capabilities. The current network routing and switching configurations have placed limits on the success of compute and storage virtualization.

Network operations are overly complicated. They are fragile systems constructed from thousands of individual devices tied together by vendor-specific interfaces but without an interface for network-wide controls. In a typical Internet-based network a transaction may be handled by anywhere from eight to twenty-five “hops” before it arrives at its destination. As a result the network requires expensive hardware synchronization while binding network management to a particular vendor.

While a transaction traverses over the multi-step Internet, it is vulnerable. For instance, the passage over multiple routers exposes it to attacks such as: promiscuous mode corruption; router table misdirection; router information mistakes; shortest path faults; border gateway miscalculations and border gateway poisoning. Passing it through a multiplicity of switches makes it open to attacks such as: flooding attacks; address resolution spoofing; “man-in-the-middle” attacks; denial of service; switch hijacking; spanning tree attacks; root claims; forcing external root election and VLAN hopping.

Just as server virtualization decouples and isolates computing from the underlying hardware, network virtualization decouples network services from the underlying physical network routers and switches. Such virtualization then enables the creation of software-defined networks. Such networks can be centrally managed across the entire connection map so that security policy can be uniformly applied.

Network Virtualization Platform (NVP) software makes it possible to take over an entire networking environment and place it into a managed virtual space. It transforms the physical network, defined by a diversity of Internet protocols, into a standard pool of network capacity. This is comparable to what happens when a server hypervisor in a data center transforms physical servers into a pool of computing capacity. Decoupling virtual networks from the physical hardware of routers and switches allows customers to scale the pool of network capacity without affecting the physical networks operating below it.
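
A toy model of this decoupling, assuming a hypothetical controller that registers physical switches as raw forwarding capacity and carves virtual networks out of that pool in software. None of the names below correspond to an actual NVP product's API.

class NetworkController:
    """Toy controller: physical hardware contributes only pooled capacity."""

    def __init__(self):
        self.capacity_gbps = 0       # pooled forwarding capacity
        self.allocated_gbps = 0
        self.virtual_networks = {}

    def register_switch(self, name, ports, gbps_per_port):
        # The physical device is registered once as raw packet-forwarding capacity.
        self.capacity_gbps += ports * gbps_per_port

    def create_virtual_network(self, name, reserved_gbps):
        # Allocation is made against the pool, not against any one device.
        if self.allocated_gbps + reserved_gbps > self.capacity_gbps:
            raise RuntimeError("pool exhausted; register more hardware")
        self.allocated_gbps += reserved_gbps
        self.virtual_networks[name] = reserved_gbps

controller = NetworkController()
controller.register_switch("rack-1", ports=48, gbps_per_port=10)
controller.create_virtual_network("tenant-a", reserved_gbps=100)
print(controller.capacity_gbps, "Gbps pooled,", controller.allocated_gbps, "Gbps allocated")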

When the physical network is relegated to delivering simple IP connectivity of packets, the demand on it is reduced because the paths of a transaction are managed to traverse preferred circuits. The requirements for many specialized hardware features are eliminated because the virtual network is now managed as a controlled environment that operates with uniform standards. Hardware capacity can be added in a virtual format without affecting the performance of the entire network, which now operates decoupled from the physical infrastructure.

Virtualization of Internet networks will be one of the most significant transformations of IT in the near future. It will deliver business efficiency while reducing the vulnerability of networks to cyber attacks. Network virtualization will remove existing barriers by enabling the creation of scalable configurations that are separate from the underlying physical network. This will make it possible to form new network services through software, rather than through upgrading hardware in the entire chain of transactions.

With about 6,000 physical locations that DoD networks must reach, assuring secure hardware interoperability of perhaps as many as 60,000 routers and switches is a task that is not manageable.

Once virtualized, the physical network is used only for packet forwarding and not for routing or switching. Virtual networks are then programmatically created on top of the physical networks. Virtual networks can operate independently from the underlying hardware, offering features that assure information security.

As a software solution, NVP creates an intelligent abstraction layer between end hosts and any existing network. Managed by a separate controller server, this transforms the network into clusters of controlled communication capacity. It enables centrally managed control software to create tens of thousands of isolated virtual networks that are endowed with uniform capabilities to execute policy-level directives quickly. Such speed greatly increases information assurance, because there will always be attacks that standard firewalls and virus protection will not be able to counteract.

The existing physical network, populated with a variety of codes and procedures, makes it possible for attackers to corrupt the operation of routers and switches, because the software already installed on thousands of network devices can be compromised. For instance, routers located on the path of statistically indeterminate “hops” can be tampered with, from where further security compromises can then spread. The number of possible misroutings is enormous.

Software-defined networking products are presently available from a number of vendors. Such software, sited at cloud hubs, then orchestrates and delivers the virtual networks and network services on top of the physical fabric. Customers can then program network service features on top of the physical network, rather than directly configuring each node, one element at a time.

A software-defined network can be deployed non-disruptively on existing networks without changing hardware, or it can be used with next generation network fabric architectures from any vendor.  This allows the programmatic creation of isolated virtual networks, each of which maintains its own address space, statistics counters, quality-of-service, security configurations, and other higher-level network services. The time it takes to deploy secure applications in the cloud goes from weeks to minutes and the process goes from manual to automatic.
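
The per-network properties listed above (address space, statistics counters, quality of service, security configuration) can be pictured as a record that the control software manages centrally. The Python sketch below is one hypothetical way of expressing such a record; it is not any vendor's schema.

from dataclasses import dataclass, field

@dataclass
class VirtualNetwork:
    """One isolated virtual network with its own address space and policies."""
    name: str
    address_space: str                  # e.g. a private block reserved for this network
    qos_class: str = "best-effort"
    security_policy: dict = field(default_factory=dict)
    packets_forwarded: int = 0          # per-network statistics counter

    def apply_policy(self, rule_name, rule):
        # Policy is recorded centrally and pushed by the controller,
        # rather than configured switch by switch.
        self.security_policy[rule_name] = rule

net = VirtualNetwork(name="logistics", address_space="10.20.0.0/16", qos_class="low-latency")
net.apply_policy("ingress", {"allow": ["443/tcp"], "deny": ["*"]})
print(net)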

SUMMARY
Networking must evolve a virtualization layer that decouples workloads from the physical network. Until this happens, the full potential of compute virtualization will remain unrealized. Traditional networking approaches are not well suited for this task.

A Distributed Virtual Network Infrastructure (DVNI) provides a network virtualization architecture that addresses the shortcomings of traditional network approaches, providing a host of virtualization benefits, such as isolation, mobility, scalability, dynamic provisioning without restriction, and hardware independence. As a result, this approach is taking hold in the world’s largest virtualized data centers.

Implementation of DVNI is not feasible with close to 700 data centers and 15,000 networks in DoD. Large-scale consolidations will have to be phased with simultaneous migration to controlled Internet connectivity.

The Defense Department Has Its Information Systems Strategy--Now What?


We now have the U.S. Defense Department information technology enterprise strategy and roadmap. The new direction calls for an overhaul of policies that guide the department’s information systems. Yet, implementation is a challenge, and several issues require the reorientation of how the Defense Department manages information technologies.

With this strategy, defense personnel will have seamless access to all information, enabling its creation and sharing. Access will be through a variety of technologies, including special-purpose mobile devices. Defense Department personnel use computing services in approximately 150 countries, at nearly 6,000 locations and in more than 600,000 buildings. This diversity calls for standardization of formats for tens of thousands of programs, which requires a complete change in the way department systems are configured.

Commanders will have access to information available from all Defense Department resources, which will enable improved command and control, increased speed of action and enhanced ability to coordinate across organizational boundaries or with mission partners. Yet, more than 15,000 uncoordinated networks do not offer the availability and latency that is essential for real-time coordination of diverse information sources. Integration of all networks under centrally controlled network management centers becomes the key requirement for further progress. This mandates a complete reconfiguration of the Global Information Grid (GIG).

Individual service members and government civilians will be offered a standard information technology user experience, providing them with the same look, feel and access to information on reassignment, mobilization or deployment. Minimum retraining will be necessary, because the output formats, vocabulary and menu options must be identical regardless of the technology used. However, Defense Department systems depend on more than seven million devices for input and for display of information. Thousands of unique and incompatible formats exist for supporting user feedback to automated systems. These format incompatibilities require replacing the existing interfaces with a standard virtual desktop that recognizes differences in training and in literacy levels.

Common identity management, authorization and authentication schemes will grant access to the networks based on a user’s credentials, as well as on physical circumstances. Achieving this goal mandates the adoption of universal network authorizations for granting access privileges. This requires a revision of how access permissions are issued for more than 70,000 servers. The workflow between the existing personnel systems and the access authorization authorities in human resources systems will require overhauling how access privileges are issued or revoked.
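
A minimal sketch of an access decision that combines a user's credential with physical circumstances, as described above. The credential names and the location check are placeholders invented for the example, not actual DoD policy.

# Placeholder credential types; not an actual DoD list.
APPROVED_CREDENTIALS = {"CAC", "SIPRNET_TOKEN"}

def grant_access(credential_type, credential_valid, location_cleared):
    """Grant network access only if the credential and the physical
    circumstances both check out."""
    return (credential_type in APPROVED_CREDENTIALS
            and credential_valid
            and location_cleared)

print(grant_access("CAC", credential_valid=True, location_cleared=True))   # True
print(grant_access("CAC", credential_valid=True, location_cleared=False))  # False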

Common defensewide services, applications and programming tools will be usable across the entire department, thereby minimizing duplicate efforts, reducing program fragmentation and lessening the need for retraining when developers are reassigned or redeployed. But this policy cannot be executed without revising the organizational and funding structures in place.

Standardization of applications and software tools necessitates discarding much of the code that already is in place—or requires temporarily storing it as virtualized legacy codes. Reducing data fragmentation requires a full implementation of the Defense Department data directory.

Streamlined information technology acquisition processes would deliver rapid fielding of capabilities, inclusive of enterprisewide certification and accreditation of new services and applications. Currently, more than 10,000 operational systems are in place, controlled by hundreds of acquisition personnel and involving thousands of contractors. A total of 79 major projects—with current spending of $12.3 billion—have been ongoing for close to a decade. These projects have proprietary technologies deeply ingrained through long-term contract commitments. Disentangling the Defense Department from billions of dollars worth of non-interoperable software will require congressional approval.

More than 50 percent of information technology spending is in the infrastructure, not in functional applications. The Defense Department chief information officer (CIO) has clear authority to direct the reshaping of the infrastructure organizations. Consequently, the strategic objectives largely can be achieved, but only with major changes in the authority for the execution of the proposed plan. It remains to be seen whether the ambitious strategies will meet the challenge of the new cyber operations.