There are several ways to classify data storage systems (DSS), although any such classification is often rather arbitrary:
1. By level of reliability and functionality, DSS are conventionally divided into Entry Level, Midrange and Hi-End.
Entry Level systems are designed for a class of tasks that does not require a high level of reliability or functionality. The equipment is compact, consumes little power and produces little noise, which allows the systems to be placed in ordinary office premises and even at home.
Hi-End systems offer the highest level of reliability, ensured both by multiple duplication of components (power supplies, fans, processors, disks, controllers, multiple access paths to disks) and by software. The failure of any single component has virtually no effect on the operation of the system.
Hi-End is, as a rule, a large set of equipment that must be placed in a data processing center (DPC) with redundant power supply and climate control. The main customers of such storage systems are large companies that host business-critical data on them: enterprise management systems, core banking systems, accounting.
Midrange solutions occupy the niche between Entry Level and Hi-End systems. The boundary between them is hard to draw precisely, but Midrange is expected to provide fault tolerance and redundancy, while the failure of a component may cause a noticeable drop in storage performance.
Advanced storage functionality such as replication, virtualization support, compression and deduplication usually starts to appear at the Midrange level.
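To make the deduplication feature concrete, here is a minimal Python sketch of content-addressed deduplication: identical chunks are detected by their hash and stored only once. This is purely illustrative (the function name `dedupe` is invented here); real storage arrays add variable-size chunking, reference counting and garbage collection on top of the same idea.

```python
import hashlib

def dedupe(chunks):
    """Store each unique chunk once, keyed by its content hash."""
    store = {}    # hash -> chunk bytes (each written only once)
    recipe = []   # ordered hashes that reconstruct the original stream
    for chunk in chunks:
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)   # skip the write if the chunk is already known
        recipe.append(h)
    return store, recipe

# four logical chunks, only two unique -> two physical chunks stored
store, recipe = dedupe([b"AAAA", b"BBBB", b"AAAA", b"AAAA"])
```

The original stream can always be reassembled by walking the recipe, so deduplication is transparent to the client.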
2. By “binding to hardware”, DSS fall into two main groups: classic DSS and software-defined storage (SDS).
Classic DSS, delivered as a combined hardware-software product from a single manufacturer, are the most widely represented on the market (by various estimates, at least 70% of the systems on offer).
However, software-defined storage has become a modern trend. Hardware vendors enter and leave the market, and in these conditions it is a serious risk for a large company to tie its operations to a classic storage system. Many of the largest Internet companies, such as Amazon, Google, Yandex and VK, realized long ago that software-defined systems can provide the required flexibility, reliability and performance without being tied to a particular hardware vendor. The building blocks for SDS are universal, highly compatible servers of the “standard” x86-64 architecture, so-called commodity hardware, produced by many companies around the world and readily available.
3. By scaling principle, DSS are either vertically scalable (scale-up) or horizontally scalable (scale-out).
Horizontally scalable systems, unlike vertically scalable ones, have no centralized controllers. Storage capacity (and, as a rule, performance) is expanded by adding an active disk module (one with its own processors and software) and connecting it to the storage interconnect network. Clients can access data through any module.
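One common way such systems decide which module holds which data, without any central controller, is consistent hashing. The toy Python sketch below shows the key property: adding a module redistributes only part of the data, and only toward the new module (the class name `ScaleOutPlacement` is hypothetical; real scale-out DSS use their own, more elaborate placement algorithms).

```python
import hashlib
from bisect import bisect

class ScaleOutPlacement:
    """Toy consistent-hash ring mapping data blocks to storage modules."""

    def __init__(self, modules, vnodes=100):
        self.ring = []  # sorted list of (hash point, module name)
        for m in modules:
            self.add_module(m, vnodes)

    def _h(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_module(self, module, vnodes=100):
        # each module owns many points on the ring for even distribution
        for i in range(vnodes):
            self.ring.append((self._h(f"{module}:{i}"), module))
        self.ring.sort()

    def locate(self, key):
        """Return the module responsible for a given data block."""
        points = [h for h, _ in self.ring]
        idx = bisect(points, self._h(key)) % len(self.ring)
        return self.ring[idx][1]
```

After `add_module("mod4")`, blocks either stay where they were or move to `mod4`; nothing shuffles between the old modules, which is exactly what makes incremental scale-out cheap.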
4. By access method, data storage systems are divided into file, block and object systems.
File access is best suited for working with whole files and provides locking mechanisms so that concurrent multi-user access does not corrupt them.
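As a sketch of such a locking mechanism, the following Python fragment uses POSIX advisory locks (`fcntl.flock`, available on Unix-like systems): each writer takes an exclusive lock before appending, so two writers cannot interleave and corrupt the file. File-access protocols such as NFS and SMB expose analogous locking to their clients.

```python
import fcntl, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.txt")

# writer one: take an exclusive lock, write, release
with open(path, "a") as f:
    fcntl.flock(f, fcntl.LOCK_EX)
    f.write("writer one\n")
    f.flush()
    fcntl.flock(f, fcntl.LOCK_UN)

# writer two: LOCK_NB would raise BlockingIOError if the lock were still held,
# instead of silently interleaving its write with the first writer's
with open(path, "a") as f:
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    f.write("writer two\n")
```

The locks are advisory: they only protect against writers that also take the lock, which is why the locking discipline lives in the file-access layer itself.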
With block access, the client works directly with the addressing of storage blocks and itself manages the information contained in each block. Block access is well suited for hosting databases, where individual blocks within files change frequently.
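The block model can be sketched in a few lines of Python: a regular file stands in for a volume, and the client reads and writes raw bytes at block offsets. The 512-byte block size and the helper names are illustrative assumptions; a real client would address a SCSI or NVMe device the same way, just through a different transport.

```python
import os, tempfile

BLOCK = 512  # classic sector size; modern systems often use 4 KiB blocks

# a regular file stands in for a block device / LUN
vol_path = os.path.join(tempfile.mkdtemp(), "volume.img")
fd = os.open(vol_path, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, 8 * BLOCK)  # an 8-block "volume", zero-filled

def write_block(n, data):
    """Overwrite block n in place -- the storage never sees a 'file'."""
    assert len(data) <= BLOCK
    os.pwrite(fd, data.ljust(BLOCK, b"\0"), n * BLOCK)

def read_block(n):
    return os.pread(fd, BLOCK, n * BLOCK)

# a database engine updates a single page without rewriting anything else
write_block(3, b"page of a database table")
```

Because each block is updated in place, frequent small changes (as in database workloads) touch only the blocks involved.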
Object access appeared relatively recently but has already gained popularity, because it provides the highest scalability thanks to a “flat” storage structure. Objects carry metadata, and many additional attributes can be created or defined to match user needs, which makes it possible to flexibly configure object DSS for unstructured data, large-scale archives, document management systems and analytics.
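A minimal sketch of the object model, assuming an S3-like put/get/head interface (the class and method names here are illustrative, not any product's API): the namespace is a single flat key space, and every object carries user-defined metadata that can later be queried.

```python
import uuid

class ObjectStore:
    """Flat-namespace object store: one key space, no directory hierarchy."""

    def __init__(self):
        self._objects = {}  # key -> (data, metadata)

    def put(self, data, **metadata):
        """Store an object with arbitrary user-defined metadata; return its key."""
        key = uuid.uuid4().hex
        self._objects[key] = (data, metadata)
        return key

    def get(self, key):
        return self._objects[key][0]

    def head(self, key):
        """Return only the metadata, analogous to an HTTP HEAD request."""
        return self._objects[key][1]

    def find(self, **attrs):
        """Select keys by metadata -- the basis for analytics over object DSS."""
        return [k for k, (_, md) in self._objects.items()
                if all(md.get(a) == v for a, v in attrs.items())]
```

The flat key space is what enables the scalability the text mentions: with no hierarchy to keep consistent, keys can be spread across many nodes, and the metadata makes unstructured data (scans, photos, logs) searchable without imposing a schema on the data itself.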