Sunday, December 8, 2013

Containerization of Applications for the PaaS Cloud

Most 2nd generation applications were deployed on monolithic, proprietary servers that contained all of the associated services such as communications, databases and security. Fifteen years ago, virtually all applications were written using well-defined stacks of services and deployed on a single such server. For the 3rd generation, developers build and assemble applications using a multiplicity of the best available services, and must be prepared for those applications to be deployed across a multiplicity of different hardware environments, including public, private, and virtualized servers.

The assembly of applications sets up the possibility for adverse interactions between different services and difficulty in migrating and scaling across different hardware offerings. Managing a matrix of multiple different services deployed across multiple different types of hardware becomes exceedingly difficult.

There is a huge number of combinations and permutations of applications/services and hardware environments that need to be considered every time an application is written or rewritten. This creates a difficult situation both for the developers who are writing applications and for the folks in operations who are trying to create a scalable, secure, and high-performance operations environment.

A useful analogy can be drawn from the world of shipping. Before 1960, most cargo was shipped break bulk. Shippers and carriers alike needed to worry about bad interactions between different types of cargo (e.g. if a shipment of anvils fell on a sack of bananas). Similarly, transitions between different modes of transport were painful. Up to half the time to ship something could be taken up as ships were unloaded and reloaded in ports, and in waiting for the same shipment to get reloaded onto trains, trucks, etc. Along the way, losses due to damage and theft were large. And, there was an n X n matrix between a multiplicity of different goods and a multiplicity of different transport mechanisms.

Containerization of applications and the virtual computing infrastructure can be thought of as an intermodal shipping container system for code. Containerization of code and its supporting infrastructure enables any application and its dependencies to be packaged up as a lightweight, portable, self-sufficient container. Containers have standard operations, thus enabling automation. The same container that a developer builds on a laptop will run at scale, in production, on VMs, bare-metal servers, server clusters, public instances, or combinations of the above. Most importantly, consistent security can be applied to all of the components that reside in the container.
In other words, developers can build their application once, and then know that it can run consistently anywhere. Operators can configure their servers once, and then know that they can run any application.
Once applications and their dependencies are packaged into containers, the following applies (a short automation sketch follows this list):


  • Each application in the container runs in a completely separate root file system.
  • System resources like CPU and memory can be allocated differently to each process container.
  • Each process runs in its own network namespace, with a virtual interface and IP address of its own.
  • The standard streams of each process container are collected and logged for real-time or batch retrieval.
  • Changes to a filesystem can be committed into a new image and re-used to create more containers. No templating or manual configuration required.
  • Root filesystems are created using copy-on-write, which makes deployment extremely fast, memory-cheap and disk-cheap.
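Because container operations are standardized, routine lifecycle tasks can be scripted. The sketch below is a minimal illustration in Python, assuming a Docker-style command-line tool is installed; the image and tag names are hypothetical placeholders, not a prescribed configuration:

# Minimal sketch of automating standard container operations.
# Assumes a Docker-style CLI is installed; image and tag names are
# hypothetical placeholders.
import subprocess

BASE_IMAGE = "ubuntu:12.04"      # assumed base image

def run(cmd):
    """Run a CLI command and return its trimmed stdout."""
    return subprocess.check_output(cmd).decode().strip()

# Start a container from the base image and make a filesystem change.
container_id = run(["docker", "run", "-d", BASE_IMAGE,
                    "sh", "-c", "echo configured > /etc/app.conf"])

# Commit the changed filesystem into a new, reusable image;
# copy-on-write keeps this fast and cheap on disk.
run(["docker", "commit", container_id, "myorg/app:v1"])

# Any number of identical containers can now be started from that image,
# on a laptop, a VM, or a bare-metal server.
for _ in range(3):
    run(["docker", "run", "-d", "myorg/app:v1", "sleep", "60"])

The same script behaves the same way wherever the container runtime is available, which is the point of the shipping-container analogy above.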

Saturday, December 7, 2013

View of Total Cost of a Project from an IT and user standpoint

When viewing total project costs, all expenses must be considered:


From the standpoint of planning, the projected life-cycle cost of the project represents the 2009 through 2018 cash flows:


Management should therefore view the total costs of an IT-led investment as follows:
Business Executive costs for modernization project = $683.4 million
I.T. Executive costs of modernization = $22.3 million

CONCLUSION:

The merits of an I.T. investment must therefore be judged primarily by operating executives, while the I.T. executive will be evaluated only on the quality and performance of 3.3% of the expenses.
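A quick back-of-the-envelope check of the 3.3% figure, using only the two totals quoted above (in millions of dollars):

# Share of the modernization project under I.T. executive control,
# computed from the two totals quoted above (in $ millions).
business_costs = 683.4   # business executive costs
it_costs = 22.3          # I.T. executive costs

print("I.T. share of business costs: {:.1%}".format(it_costs / business_costs))                     # ~3.3%
print("I.T. share of total project costs: {:.1%}".format(it_costs / (business_costs + it_costs)))   # ~3.2%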

Monday, December 2, 2013

Getting Ready for Third Generation Defense Dept Systems

DoD is now advancing into the third generation of information technologies. This progress is characterized by migration from an emphasis on server-based computing to a concentration on the management of huge amounts of data.

This calls for technical innovation and for abandonment of primary dependence on a multiplicity of contractors, because interoperable data must now be accessible from most DoD applications. In the second generation, DoD depended on thousands of custom-designed applications, each with its own database. The time has now come to view DoD as an integrated enterprise that requires a unified approach. DoD must be ready to deal with attackers who have chosen to corrupt DoD's widely distributed applications as a platform for waging war.

When Google embarked on indexing the world’s information, something that had never been achieved before, the company had to innovate how to manage its global data platform uniformly on millions of servers in more than 30 data centers. DoD has now embarked on creating a Joint Information Environment (JIE) that will unify access to logistics, finance, personnel resources, supplies, intelligence, geography and military data. When huge amounts of sensor data are included, the JIE will face a challenge two to three orders of magnitude greater in how to organize the third generation of computing.

JIE applications will have to reach across thousands of separate databases to fulfill the diverse needs of an interoperable Joint service. Third generation systems will have to support millions of desktops, laptops and mobile devices responding to potentially billions of inquiries whose responses must be assembled rapidly and securely.

The combined JIE databases will certainly exceed thousands of petabytes. JIE will have to manage, under emergency conditions, all of its daily transactions with 99.9999% reliability. Even a very small security breach would be dangerous because a single critical event may slip by unnoticed: 0.0001% of a billion is still a potential 1,000 flaws.
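To make that arithmetic explicit, here is a short check of what 99.9999% reliability implies; the one-billion daily transaction volume is an illustrative assumption, not a DoD statistic:

# What 99.9999% reliability implies at large transaction volumes.
# The one-billion daily volume is an illustrative assumption.
reliability = 0.999999
transactions_per_day = 10**9

expected_failures = transactions_per_day * (1 - reliability)
print(round(expected_failures))   # ~1,000 potentially unnoticed flaws per day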

The principal firm that comes close to DoD for purposes of comparison is the General Electric Company. It has a staff of over 300,000. It maintains and operates complex capital equipment such as aircraft, electric generators, trains and medical equipment. GE manages a long supply chain, which none of the consumer-oriented applications that DoD has studied requires. DoD should not be compared with consumer cloud firms such as Google, Yahoo, Facebook and others, because these firms deliver only a limited set of applications. For instance, Amazon offers only a proprietary IaaS service that supplies computing capacity but not applications. DoD should therefore be compared with organizations whose IT must satisfy a large, highly diversified and global constituency.

DoD operates the world’s greatest collection of industry-sourced equipment such as tanks, helicopters, submarines and ships. General Electric is already migrating its information technologies into the third generation of computing, and therefore we can learn a great deal from its progress.

General Electric has had to learn how to do three things.  First, it had to acquire the capacity to operate with much larger data sets. That includes multiple petabytes of data, which is necessary because the capacity of the existing relational databases is limited. Second, it had to adopt a culture of rapid application development. With most of the data management, communications and security code already provided by the platform-as-a-service (PaaS) infrastructure, a new programmer should be able to produce usable results on the first day of work. Third, GE is now re-focusing on the “Internet of Things” (IoT). This includes billions of objects such as spare parts, sensor inputs, medical diagnosis, equipment identification, ammunition, telemetry and the geographic location of all devices.

For instance, a single drone flight generates over 30 TB of data about the condition of the engines, maintenance statistics, repair data and intelligence. This information must then be attached to the planning and logistics systems as well as to the command and control systems. The amount of data that will be generated in the future JIE will be several orders of magnitude greater than what is captured today. Ultimately, systems that include the IoT will deliver hundreds of billions of transactions, producing a flood of information that will have to be screened and analyzed. Therefore DoD systems will have to change not only to look at data at rest but also to examine incoming transactions dynamically in real time.

Meanwhile DoD will also have to cut back second-generation and even first-generation applications in order to find the funds needed to support third generation innovations. This can be achieved through dramatic consolidation of applications that takes advantage of the large operating cost reductions available through virtualization. The economics of cost reductions will have to be balanced against reliability and security. Such innovations will be expensive unless they are developed under a tightly enforced common systems architecture.

Second generation applications will not have to be rewritten but can be included, alongside third-generation applications, in a PaaS environment that makes it possible to exchange data to satisfy incoming random queries. Reducing costs while making applications interoperable will require a massive consolidation of hundreds of existing data centers. The recent introduction of the software-defined data environment (SDN) will make that possible. SDN allows the sharing of the costs of computing, communications and security. It can cut costs while increasing redundancy and delivering superior reliability.

In planning the transition into third generation computing the new platform will have to rely on open-source solutions because DoD must be able to move applications from private clouds to public clouds and vice-versa as the need arises. Ultimately, DoD will end up housing most of its critical applications in private clouds, while retaining options for using public clouds for lower security applications such as finance, human resources and health administration.

The more data becomes accessible to any of the billions of inquiries, the greater the utility of a shared data platform. There is no question that DoD, like GE, will have to start converting existing databases from stand-alone relational solutions to recently available “Big Data” software. Under such conditions, thousands of secure processing “sandboxes” will allow data to be stored at multiple locations for rapid restoration of operations when failures occur. This will allow access to applications from any source that has access privileges.

The adoption of third generation computing must overcome the difficulties that arise from the projected increase in the volume and complexity of DoD systems. That cannot be achieved by relying entirely on increasing the numbers and quality of existing staffs, because that is not affordable. Although new cloud software will increase the productivity of network control staffs, the workload caused by increased malware attacks will make it necessary to invest in far greater automation of all controls.

The 2014 DoD IT budget is $34.1 billion. $24.4 billion (72%) is for ongoing operations. This leaves $9.7 billion (28%) for meeting new functional requirements, with little left over for innovation. How should funds be allocated in preparation for third generation computing?

There is no question that the consolidation of over 3,000 applications should be the first priority in freeing funds from ongoing operations so that money is available for new development and innovation. There are a variety of models available to simulate the potential benefits from virtualized computing. This technology is mature. It can be applied by seasoned staffs and can be implemented rapidly. 20 to 40% cost reductions have been verified, with break-even points reached in less than two years. Therefore the major obstacle is not technological but organizational.
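An illustrative model of how such consolidation could free funds, applying the 20 to 40% savings range to the $24.4 billion operations budget quoted above; the one-time migration cost is a made-up assumption used only to show the break-even logic:

# Illustrative model of funds freed by application consolidation.
# Budget figures (in $ billions) are from the 2014 DoD IT budget cited above;
# the one-time migration cost is a hypothetical assumption.
operations_budget = 24.4       # ongoing operations ($B)
development_budget = 9.7       # new development ($B)
migration_cost = 7.0           # assumed one-time consolidation cost ($B)

for savings_rate in (0.20, 0.40):
    annual_savings = operations_budget * savings_rate
    breakeven_years = migration_cost / annual_savings
    new_dev_share = (development_budget + annual_savings) / (operations_budget + development_budget)
    print("{:.0%} savings: ${:.1f}B/yr freed, break-even in {:.1f} years, "
          "development share rises to {:.0%}".format(
              savings_rate, annual_savings, breakeven_years, new_dev_share))

Under these assumptions both cases break even in well under two years, consistent with the verified experience cited above.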

As ongoing operations reduce costs through efficiencies, the spending on new development, and particularly on third generation innovations, should rise above 28%. Although cost reductions are essential and tactical, the DoD strategic budget should be evaluated primarily on the share of money that can be deployed for third generation innovations.

In 1992 DMRD 918 set the direction for operating cost reductions through consolidation. The expected savings were never achieved in the absence of strong direction from the Office of the Secretary, and new expenses were permitted that far exceeded any cost cuts. As the third generation of computing has already arrived, the lesson learned from the past is to allow the DoD CIO to manage the deployment of the entire IT budget. Sufficient funds must become available to produce the essential innovations.

Mobile Employee Behaviors Put Enterprise Data at Risk

Fiberlink, the leader in cloud-based enterprise mobility management (EMM), announced the results of an online survey that reveals employees are unknowingly putting enterprise data at risk.

Among 2,064 U.S. adults surveyed about their mobile behavior, over half (51 percent) of employees use their personal smartphones and/or tablet devices for work purposes. However, an overwhelming majority do not use security solutions that separate corporate data from personal data, and they are engaging in risky behavior. http://www.maas360.com/news/press-releases/2013/fiberlink-survey-reveals-mobile-employee-behaviors-put-enterprise-data-at-risk/

For example, among employees who use mobile devices for work, the survey showed:
  • 25 percent have opened/saved a work attachment file into a third-party app (e.g., QuickOffice, Dropbox, Evernote).
  • 20 percent admit to having cut/pasted work-related email or attachments from company email to their personal email accounts.
  • 18 percent say they’ve accessed websites that are blocked by their company’s IT policy.
While using personal devices for work is a matter of convenience for employees, it’s a matter of security for employers. Top security issues include corporate data leakage, malicious applications, violation of corporate use policies and regulatory compliance, all of which have the potential to compromise enterprise data. In the absence of enterprise mobility management solutions, risky employee behavior on mobile devices, whether accidental or malicious, is inevitable.


Meanwhile, IBM announced that it would acquire Fiberlink Communications, which offers a cloud-based enterprise mobility management (EMM) solution under the brand MaaS360. Through the acquisition, IBM is expected to offer MaaS360 within the IBM SoftLayer cloud infrastructure. Whether this will include the necessary security safeguards needs further explanation. http://www.forbes.com/sites/maribellopez/2013/11/13/ibm-acquires-cloud-emm-vendor-fiberlink/

Wednesday, November 27, 2013

PRESIDENT’S TECHNOLOGY COUNCIL SAYS GOVERNMENT NEEDS TO IMPROVE ITS CYBER SECURITY

A just-released report to the President on Immediate Opportunities for Strengthening the Nation’s Cyber Security (November 22, 2013) is a collection of recommendations that reflect the views of academia and government executives. It misses its objective of offering the President actions that can be construed as “immediate” and directly actionable. (http://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_cybersecurity_nov-2013.pdf).

The report correctly identifies the cyber threat. It states that cyber threats range from cybercrime (estimated at up to $1 trillion annually) to potentially devastating cyber attacks against U.S. critical infrastructure, both civilian and military. Cyberspace has become a sanctuary from which criminal hackers, spammers, viruses, botnets, and other cyber threats prey daily and openly on U.S. individuals and organizations. The Nation’s capacity for innovation and commerce, including time-to-market advantages for commercial products and unique U.S. technologies for national defense, is drained by cyber industrial espionage and theft.

What are the findings from the distinguished presidential Council?

1. The Federal Government rarely follows accepted best practices. It needs to lead by example and accelerate its efforts to make routine cyber attacks more difficult by implementing best practices for its own systems.
2. Many private-sector entities come under some form of Federal regulation for reasons not directly related to national security. In many such cases there is opportunity, fully consistent with the intent of the existing enabling legislation, for promoting and achieving best practices in cyber security.
3. Industry-driven, but third-party-audited, continuous-improvement processes are more likely to create an effective cyber security culture than are Government-mandated, static lists of security measures.
4. To improve the capacity to respond in real time, cyber threat data need to be shared more extensively among private-sector entities and—in appropriate circumstances and with publicly understood interfaces—between private-sector entities and Government.
5. Internet Service Providers are well positioned to contribute to rapid improvements in cyber security through real-time action.
6. Future architectures will need to start with the premise that each part of a system must be designed to operate in a hostile environment. Research is needed to foster systems with dynamic, real-time defenses to complement hardening approaches.

COMMENTS:
1. No analysis or critique is offered for the finding that government “rarely follows accepted best practices”. Without assessing why close to $100 billion per year of IT spending does not follow best practices, the Council fails to relate the weaknesses of government IT to immediate remedial actions. The listing of some measures, such as increasing standards, has little value.
2. There is no evidence that enabling legislation for promoting best practices has ever produced results that directly counter cyber threats.
3. Industry-driven, but third-party-audited, continuous-improvement processes are acknowledged to be superior to government-mandated measures. How to implement such support is not explained. The Council completely misses the opportunity of using the transition from 2nd generation to 3rd generation computing as a direction for following leading industrial innovators.
4. The recommendation that information about cyber threats be shared among private-sector firms lacks regulatory direction and an outline of how this can be accomplished. Commercial interests are already making efforts in this field, but without institutional support from a variety of government agencies, particularly the Department of Homeland Security, which lacks staff and budget.
5. The recommendation to place greater reliance for cyber security on Internet Service Providers (ISPs) is misplaced. ISPs are commercial competitors with limited scope to be counted as the leaders in cyber defenses.
6. There is a strong academic bias when emphasizing greater funding for research on systems with dynamic, real-time defenses to complement hardening approaches. Actionable cyber defenses are now emerging from software firms, not hardware suppliers and certainly not from universities.

SUMMARY
The 2013 report to the President is neither actionable nor immediate. It illustrates a gaping hole in the national executive leadership.

Tuesday, November 26, 2013

Virtualization of Customer-Operated Computing Devices

With ever-increasing security and compliance requirements and the proliferation of client devices that vary in form factor and security requirements, desktop virtualization delivered from a data center is the most relevant technology for streamlining client device management. By separating operating systems (OS) and applications from physical client devices, virtual desktop infrastructure (VDI) streamlines management, lowers operational expenses and facilitates security and policy adherence.

Thousands and even millions of customer devices, mostly in the form of mobile clients or real-time sensors, require automated management. Such complexity will be manageable only through centrally administered software. Therefore enterprises are advised to migrate to complete virtualization of devices, so that all control software is removed from individual devices where the embedding of security cannot be achieved.

Because the management of clients must shift from 2nd generation client-server computing to 3rd generation cloud computing, this transfer of control represents a radical change in the architecture and in the organization of information technologies.

The virtualization of customer-operated computing devices removes all software from the customer and leaves only a secure browser (“thin client”) available for access to computing resources. Applications, infrastructure services, security and access to data remain under the management of the data center cloud software.

VDI workloads are highly variable in terms of the demands they place upon the capacity and security of the supporting storage infrastructure. Each time a client browser accesses the central cloud, application or user data, it generates requests for the storage infrastructure. There are periods of time when multiple client devices access large amounts of data. Typically, this occurs when a large number of virtual devices simultaneously boot, login, perform virus scans, or log-off. If the storage is not able to service these requests with acceptable latency, the client devices fail to perform.

The effects of these capacity overloads are typically handled by allocating additional disk capacity, which results in overprovisioning and decreasing cost-efficiency. This calls for migrating the storage available at the client site to the enterprise data center. Enterprise storage provides higher reliability and performance, at sites where economies of scale offer lower cost than client-attached storage. For the deployment of enterprise-class storage, management must carefully allocate storage capacity.
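To see why simultaneous boots or logins stress shared storage, here is a rough sizing sketch; the desktop count and per-desktop IOPS figures are assumed values for illustration only:

# Rough sizing sketch for a VDI "boot storm".
# Desktop count and per-desktop IOPS are illustrative assumptions.
desktops = 2000            # virtual desktops booting in the same window
steady_iops = 10           # assumed IOPS per desktop during normal work
boot_iops = 60             # assumed IOPS per desktop while booting

steady_demand = desktops * steady_iops
peak_demand = desktops * boot_iops

print("steady-state demand: {:,} IOPS".format(steady_demand))    # 20,000
print("boot-storm demand:   {:,} IOPS".format(peak_demand))      # 120,000
print("peak-to-steady ratio: {:.0f}x".format(peak_demand / steady_demand))

Sizing client-attached disks for the peak means carrying roughly that multiple of idle capacity for the rest of the day, which is the overprovisioning problem that centralized enterprise storage is meant to address.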

Overcoming these challenges requires tight integration between storage and the hypervisor in the cloud. This integration is designed to deliver optimal storage performance and lower costs while simplifying VDI deployments. This also enables improved scalability through features such as fault-tolerant load balancing and intelligent management of multiple network paths, which allows improved data availability.

The additional security offered by cloud-based desktop virtualization means that IT management retains control and is able to secure company data in the data center. Desktop virtualization makes security and compliance easier by:

  • Centralizing company data in the data center where IT can more easily secure and audit it;
  • Allowing IT to easily back up and provide disaster recovery for end user desktops;
  • Ensuring that company data is not left on a mobile device, at a remote location, or in a window office where it can easily be lost or stolen;
  • Simplifying the application of security policies and updates to end user desktops as they are all in the data center.


Another way that desktop virtualization improves security is by offering the ability to utilize host-based antivirus and anti-malware. Host-based means that antivirus/anti-malware applications run on the hypervisor host instead of on each end user desktop. This setup saves not only the time of installing and maintaining all those agents but also the processing overhead, since one agent runs per host instead of one agent per desktop. Host-based antivirus and anti-malware tools offer centralized control, efficient design, reduced resource utilization, and simplified administration.

Two-factor authentication can easily be integrated into desktop virtualization solutions to provide a higher level of security. An end user would then need not only a username and password but also a one-time code from a security token to access their virtual desktop.
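As an illustration of the token side of two-factor authentication, the sketch below generates a time-based one-time password (TOTP, RFC 6238) from a shared secret; the secret shown is a placeholder, and a real VDI deployment would rely on an established authentication product rather than hand-rolled code:

# Minimal sketch of a time-based one-time password (TOTP, RFC 6238),
# the kind of code a security token or phone app supplies as a second factor.
# The shared secret is a placeholder for illustration only.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The VDI login would require this code in addition to the username/password.
print(totp("JBSWY3DPEHPK3PXP"))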

The shift to VDI represents a major change in how an enterprise of the future will operate its information technologies. It changes the relationship between customers and the IT organization because of the reliance on public or private clouds, and it implies increased reliance on IT services that can support increasingly complex cyber defense requirements.

A number of organizations will offer VDI connectivity with easy access to transaction pricing for making competitive comparisons.
The shift to VDI connectivity will result in the adoption of vastly improved cyber security, thus dictating the migration to 3rd generation computing based on cloud operations.
The shift to VDI will not only reduce operating costs but also safeguard enterprise operations in an era of rising security threats.


Saturday, November 23, 2013

CYBERCRIMINAL CHARGES FOR STOLEN IDENTITIES AND HACKING SOFTWARE


SOURCE: http://www.darkreading.com/attacks-breaches/glut-in-stolen-identities-forces-price-c/240164089

Underground Prices for Stolen Credentials and Hacker Services

Hacker credentials and services, with details and prices:

  • Visa and MasterCard (US): $4
  • American Express (US): $7
  • Discover Card (US): $8
  • Visa and MasterCard (UK, Australia and Canada): $7-$8
  • American Express (UK, Australia and Canada): $12
  • Discover Card (Australia and Canada): $12
  • Visa and MasterCard (EU and Asia): $15
  • Discover and American Express Card (EU and Asia): $18
  • Credit card with Track 1 and 2 data (US): $12
  • Credit card with Track 1 and 2 data (UK, Australia and Canada): $19-$20
  • Credit card with Track 1 and 2 data (EU, Asia): $28
  • Fullz (US), a dossier of credentials for an individual that also includes Personally Identifiable Information (PII): $25
  • Fullz (UK, Australia, Canada, EU, Asia): $30-$40
  • VBV (US). Verified by Visa confirms an online shopper’s identity in real time by requiring an additional password or other data to help ensure that no one but the cardholder can use their Visa card online: $10
  • VBV (UK, Australia, Canada, EU, Asia): $17-$25
  • DOB (US), date of birth: $11
  • DOB (UK, Australia, Canada, EU, Asia): $15-$25
  • Bank account containing $70,000-$150,000: price depends on banking institution
  • Infected computers, 1,000: $20
  • Infected computers, 5,000: $90
  • Infected computers, 10,000: $160
  • Infected computers, 15,000: $250
  • Remote Access Trojan (RAT): $50-$250
  • Add-on services to RATs: $20-$50
  • Sweet Orange exploit kit leasing fees: $450 a week / $1,800 a month
  • Hacking a website and stealing data (price depends on the reputation of the hacker): $100-$300
  • Distributed Denial of Service (DDoS) attacks: $3-$5 per hour, $90-$100 per day, $400-$600 per week
  • Doxing, when a hacker is hired to get all the information they can about a target victim via social engineering and/or infecting them with an information-stealing trojan: $25-$100