
Mainframe technologies that have become mainstream

Virtualization first appeared back in the 1960s as a way to use the computing resources of mainframes efficiently for different categories of tasks. A typical mainframe served hundreds or thousands of terminals: input-output devices that looked much like a modern computer but were not one, each consisting of a monitor, a keyboard, and a device for communicating with the mainframe.

For a long time, virtualization remained a feature of complex and expensive systems such as mainframes. The subsequent appearance of the IBM PC/XT/AT line of personal computers, and the decision of major processor manufacturers (Intel, AMD, Cyrix) to produce compatible x86-architecture processors, led to the undisputed leadership of IBM-compatible personal computers.

Cheap, widely available PCs sparked a revolution. Servers were built on the same, by now “standard”, processor architecture. The accompanying development of software (operating systems, games, convenient file managers, capable text editors and spreadsheets, e-mail systems, etc.) led many to believe that mainframes would wither away as archaic.

The world became increasingly immersed in information technology. Electronic document management, accounting, the web: the number of services grew rapidly, and each required its own share of computing resources. But as systems grew more complex, their reliability fell: a bug in one program could negatively affect another. To minimize this mutual influence, it became customary to place systems on dedicated servers. Stability improved, but hardware costs rose as well. This inefficient use of equipment predetermined the triumphant return of virtualization technologies.

The virtual machine management systems that emerged, drawing on ideas first embodied in the mainframes of the 1960s, made it possible to run several virtual machines on one physical computer, each with its own set of software. Modern virtualization systems can also attach remote storage and compute nodes over the network, and combine servers into clusters for fault tolerance and load redistribution.
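
To make this concrete, here is a minimal Python sketch that lists the virtual machines running on one physical host. It assumes the libvirt-python bindings and a local QEMU/KVM hypervisor reachable at qemu:///system; both are illustrative assumptions, not something the article prescribes.

    # Sketch: enumerate the virtual machines on one physical host.
    # Assumes the libvirt-python package and a local QEMU/KVM
    # hypervisor at qemu:///system (illustrative assumptions).
    import libvirt

    conn = libvirt.open("qemu:///system")  # connect to the hypervisor
    try:
        for dom in conn.listAllDomains():
            state, max_mem, mem, vcpus, cpu_time = dom.info()
            status = "running" if dom.isActive() else "stopped"
            # Each domain is an isolated VM with its own OS and software.
            print(f"{dom.name()}: {status}, {vcpus} vCPU(s), {mem // 1024} MiB")
    finally:
        conn.close()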

The development and spread of the Internet and network technologies brought tens of thousands of new businesses into existence. Not all of them could afford to acquire and maintain expensive equipment. Demand arose for renting “remote” computing power, and virtualization came in very handy: one physical server could run the virtual servers of many companies in isolation. Progress has reached the point where a company no longer needs to own any equipment at all: the required computing power can be purchased from one of many specialized providers of so-called cloud technologies (cloud computing, cloud services).

Why “cloud”? The name is, of course, a metaphor. For a long time it was customary to draw the Internet on network diagrams as a cloud symbol: the collection of networks outside the company’s own network. When services providing remote computing power appeared, the term “cloud”, indicating that data storage and computation happen not on the company’s network but “somewhere out there”, proved convenient and came into common use.

Cloud computing is now so widespread that it has been standardized. The US National Institute of Standards and Technology (NIST) defines its main features, service models, and deployment models as follows:

Main features:

Self-service on demand - the consumer unilaterally specifies the required capabilities, such as server time and network storage, without requiring human interaction with the service provider.

Universal network access - cloud computing capabilities are available over the network to heterogeneous platforms (mobile phones, tablets, laptops, workstations, etc.).

Resource pooling - the provider’s computing resources are pooled to serve multiple customers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to customer demand. There is a sense of location independence: the customer generally has no control over or knowledge of the exact location of the provided resources, but may be able to specify location at a higher level of abstraction (such as city, region, or data center). Examples of such resources are the amount of data stored, the amount of memory used, and network bandwidth.

Rapid elasticity - capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward with demand. To the consumer, the available capabilities often appear unlimited.

Consumed resource metering - cloud systems automatically monitor and optimize resource usage via a metering capability at some level of abstraction appropriate to the type of service (e.g., amount of data stored, processor usage, bandwidth, active user accounts). Resource usage can be tracked, monitored, and reported, providing transparency for both the provider and the consumer of the service.
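
To make metering concrete, the following self-contained Python sketch aggregates metered usage samples into per-resource charges, the kind of calculation a provider performs behind the scenes. All resource names and rates here are hypothetical.

    # Hypothetical sketch of resource metering: aggregate usage samples
    # into per-resource charges. Names and rates are invented.
    from collections import defaultdict

    RATES = {                      # price per unit (hypothetical)
        "storage_gb_hours": 0.0001,
        "cpu_hours": 0.05,
        "egress_gb": 0.02,
    }

    def bill(samples):
        """samples: iterable of (resource, amount) metering records."""
        totals = defaultdict(float)
        for resource, amount in samples:
            totals[resource] += amount
        return {r: round(v * RATES[r], 4) for r, v in totals.items()}

    usage = [("cpu_hours", 12.0), ("storage_gb_hours", 2400.0), ("egress_gb", 5.5)]
    print(bill(usage))  # {'cpu_hours': 0.6, 'storage_gb_hours': 0.24, 'egress_gb': 0.11}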

Service models: 

Platform as a Service (PaaS). The consumer is given the ability to deploy onto the cloud infrastructure consumer-created or acquired applications built with programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure (network, servers, operating systems, or storage) but controls the deployed applications and, possibly, configuration settings for the application hosting environment. Well-known PaaS offerings include Microsoft Azure and Google App Engine; they are used mainly by software developers.

Infrastructure as a Service (IaaS). The consumer is given the ability to use processing, storage, networking, and other basic computing resources, on which the consumer can deploy and run arbitrary software, including operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure, but controls operating systems, storage, and deployed applications, and possibly has limited control over individual network components (such as firewalls). Typical examples of IaaS are Microsoft Azure, Amazon EC2, and VMware vCloud; a small provisioning sketch follows these descriptions. Typical IaaS “customers” are system administrators.

Software as a Service (SaaS). The consumer is given the ability to use the provider’s applications running on the cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (as with web e-mail) or through the software’s own interface. The consumer does not manage or control the underlying cloud infrastructure, including the network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Examples of SaaS are file storage services (iCloud, Google Drive, Mega) and office applications (Microsoft Office 365, MyOffice, Google Docs). Such services can be used by any (ordinary) user.
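
As a concrete illustration of the IaaS model, the sketch below provisions a virtual server on Amazon EC2 using the boto3 SDK. It assumes AWS credentials are already configured; the region, machine image ID, and instance type are placeholders chosen for the example.

    # Sketch: renting IaaS compute programmatically (Amazon EC2, boto3).
    # Assumes configured AWS credentials; the AMI ID is a placeholder.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="t3.micro",          # small general-purpose size
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Provisioned virtual server {instance_id}")
    # The consumer now controls the OS and software on this VM, while
    # the provider manages the underlying physical infrastructure.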

Deployment models:

Private cloud. The cloud infrastructure is provisioned for the exclusive use of a single organization comprising multiple consumers (such as business units). It can be owned, managed, and operated by the organization itself, by a third party, or by some combination of the two.

Community cloud. The cloud infrastructure is provisioned for the exclusive use of a specific community of consumers from companies that have shared concerns (holding companies; companies with similar security requirements). It can be owned, managed, and operated by one or more of the organizations in the community, by a third party, or by some combination of them.

Public cloud. The cloud infrastructure is provisioned for open use by the general public. It can be owned, managed, and operated by a business, academic, or government organization, or by some combination of them.

Hybrid cloud. A combination of two or more of the above, which remain unique entities but are linked by standardized or proprietary technologies that provide data and application portability.

Other specialized software is also used on the cloud provider’s side to keep the nodes of the computer network working together. Examples include billing (a system for metering consumed services and issuing invoices); an orchestrator (a mechanism for the automated or automatic execution of the service operations required to run services, such as deploying virtual machines and configuring the network); backup systems; a self-service portal (the so-called “administration panel”) for ordering the required service and configuration; and so on.
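
The orchestrator idea can be sketched in a few lines of Python. Every function and parameter name below is invented for illustration; a real orchestrator would call the provider’s own APIs at each step.

    # Hypothetical orchestrator sketch: run the service operations
    # needed to bring a service online, in order. An exception in any
    # step aborts the sequence, so a service is never half-provisioned.
    def deploy_vm(spec):
        print(f"deploying VM {spec['name']} ({spec['cpu']} vCPU, {spec['ram_gb']} GiB)")

    def configure_network(spec):
        print(f"configuring network for {spec['name']}")

    def register_billing(spec):
        print(f"registering {spec['name']} with the billing system")

    def orchestrate(spec, steps=(deploy_vm, configure_network, register_billing)):
        for step in steps:
            step(spec)

    orchestrate({"name": "web-01", "cpu": 2, "ram_gb": 4})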

When developing infrastructure, a consumer can decide whether to purchase its own equipment and build specialized premises with the required climate conditions and an acceptable level of power-supply reliability, or to reduce capital expenditure and opt for cloud services, or to settle on a hybrid of the two. Virtualization technologies, having become mainstream, make all of these options possible.