Features and benefits of ARGO DSS

ARGO DSS is built on Software-Defined Storage technologies, which provide greater scalability and flexibility and simplify management of the storage infrastructure.
Controllers and storage nodes are industry-standard x86/x64 servers in tested configurations. A distributed network storage fabric guarantees minimal latency in data transfer.
ARGO DSS is built on a UNIX-like OS with the ZFS file system, which provides reliable data storage. The OS includes an embedded fault monitoring and diagnostics system that detects errors in system components in a timely manner, monitors performance, and helps plan optimization measures for the system.
Limitless scalability
The system has no expansion limits; scalability is determined only by the customer's needs. The customer can increase the number of storage nodes, the number and capacity of drives, and the amount of cache memory, and can extend functionality with special software. Scaling occurs without stopping the solution and without loss of performance or functionality.
Performance
ARGO DSS distributes data blocks across available media, providing a high degree of parallelism in read and write operations, minimal response time, and guaranteed performance.
ARGO DSS can efficiently serve both traditional relational OLTP workloads and modern tasks involving the processing of tens and hundreds of petabytes of unstructured data.
Flexibility
The flexibility of ARGO DSS comes from its own operating system, which has no analogue on the domestic market, and from the ZFS file system. As a result, different types of data can be stored efficiently within a single system: with different levels of data compression, different data block sizes, and with data deduplication enabled or disabled. Most parameters can be changed at any time without stopping the storage system.
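As a sketch of what such per-dataset tuning looks like on ZFS-based systems (the pool name "tank" and the dataset names are hypothetical), the standard zfs commands are:

```shell
# Hypothetical pool "tank"; each dataset carries its own storage settings.
zfs create tank/db
zfs create tank/media

# Database dataset: small records, lightweight compression.
zfs set recordsize=8K tank/db
zfs set compression=lz4 tank/db

# Media dataset: large sequential blocks, deduplication disabled.
zfs set recordsize=1M tank/media
zfs set dedup=off tank/media

# Properties can be changed at any time without taking the dataset offline;
# new settings apply to subsequently written blocks.
zfs set compression=zstd tank/db
zfs get compression,recordsize tank/db
```

Changing a property affects only newly written data, which is why such reconfiguration does not require stopping the storage system.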
Data validity
The DSS stores data in blocks. After a write of a data block is confirmed, that block is almost impossible to lose or corrupt: its integrity is checked and guaranteed by a checksum. Independence from hardware failures is ensured by integrity control at all levels, during the storage, processing, and transmission of data blocks.
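The principle of checksum-verified block storage can be sketched as follows (a minimal conceptual model, not the actual ARGO DSS implementation; the in-memory store and function names are illustrative):

```python
import hashlib

def write_block(store, block_id, data):
    """Store a data block together with its SHA-256 checksum."""
    store[block_id] = (data, hashlib.sha256(data).hexdigest())

def read_block(store, block_id):
    """Return the block's data, verifying it against the stored checksum."""
    data, checksum = store[block_id]
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError(f"block {block_id}: checksum mismatch, data corrupted")
    return data

store = {}
write_block(store, 0, b"payload")
print(read_block(store, 0))  # verified read succeeds

# Simulate silent corruption of the stored bytes: the checksum no longer
# matches, so the corruption is detected on read instead of going unnoticed.
store[0] = (b"payl0ad", store[0][1])
```

In a real system the mismatch would trigger automatic recovery of the block from a redundant copy rather than an error to the client.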
Data integrity
Thanks to the transactional data model used in ARGO DSS, the integrity of written data is ensured. When corrupted data blocks are detected, they are restored automatically without affecting the speed of the storage system. These technologies allow the integrity of the array to be restored in a reasonable time, even if several drives on a storage node fail simultaneously.
Data protection
To prevent data loss or corruption caused by user error, a virus, a hacker attack, and similar events, ARGO DSS provides a mechanism for snapshots and asynchronous data replication in a geo-distributed storage configuration. The maximum number of snapshots in ARGO DSS is 2^64, i.e. practically unlimited.
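On ZFS-based systems, the snapshot and asynchronous replication workflow can be sketched with the standard zfs commands (the dataset "tank/data", remote host "dr-site", and snapshot names are hypothetical):

```shell
# Create a point-in-time snapshot (copy-on-write, so it is near-instant
# and initially consumes almost no extra space).
zfs snapshot tank/data@snap1

# Asynchronously replicate to a geo-distributed site: a full send first...
zfs send tank/data@snap1 | ssh dr-site zfs receive backup/data

# ...then incremental sends that transfer only the blocks changed
# between the two snapshots.
zfs snapshot tank/data@snap2
zfs send -i tank/data@snap1 tank/data@snap2 | ssh dr-site zfs receive backup/data
```

Because snapshots are read-only views of past states, data deleted or encrypted by an attacker can be rolled back from an earlier snapshot.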