Disk space
Disk space is the limiting factor on how much data you can store (the number and size of files). Apart from the actual user data, the Storage Platform (SP) also stores metadata in indexes. The total size of the indexes depends on the number of stored files (not their sizes), but a 2% margin should be safe.
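For example, 10 TB of user data would call for roughly 10.2 TB of disk space once the 2% index margin is included. A minimal sizing sketch in Python, assuming only the 2% rule of thumb above (the function name and figures are illustrative, not part of the product):

    def required_disk_space_gb(user_data_gb, index_margin=0.02):
        """Estimate total disk space: user data plus index metadata.

        The index size actually depends on the file count, not file sizes,
        so the 2% margin is a safe upper bound rather than an exact figure.
        """
        return user_data_gb * (1 + index_margin)

    # 10 TB (10,000 GB) of user data -> about 10,200 GB in total.
    print(required_disk_space_gb(10_000))  # 10200.0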
Disk speed
Disk speed can be broken down into throughput (in MB/s) and transaction speed (in operations/s). The processing of large files (whether during backups, restores, or other tasks) is limited by disk throughput, while the processing of many small files is limited by transaction speed. In general, Direct Attached Storage (DAS) is faster than NAS/SAN with regard to transaction speed, but slower with regard to throughput.
Note: Both throughput and transaction speed can be negatively affected by disk fragmentation, and in general, the use of a good defragmentation application is an excellent idea (PerfectDisk comes highly recommended for servers).
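As a back-of-envelope illustration of the two limits above, the sketch below (Python, with assumed figures of 200 MB/s throughput and 500 operations/s) takes the slower of the throughput bound and the transaction bound as the dominant cost. This max-of-two-bounds model is an assumption for illustration, not a published SP formula:

    def estimate_processing_seconds(file_count, total_size_mb,
                                    throughput_mb_s, ops_per_second):
        """Rough lower bound on how long a disk takes to process a job."""
        throughput_bound = total_size_mb / throughput_mb_s    # large-file cost
        transaction_bound = file_count / ops_per_second       # small-file cost
        # Whichever bound is slower dominates the job.
        return max(throughput_bound, transaction_bound)

    # One million 10 KB files (~10,000 MB): transaction speed dominates.
    print(estimate_processing_seconds(1_000_000, 10_000, 200, 500))  # 2000.0 s
    # One hundred 1 GB files (100,000 MB): throughput dominates.
    print(estimate_processing_seconds(100, 100_000, 200, 500))       # 500.0 s

This is why a DAS volume (stronger transaction speed) tends to win on many small files, while a NAS/SAN (stronger throughput) tends to win on a few large ones.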
Memory
The amount of memory required depends on the number of files per Account, as well as the number of Accounts being processed simultaneously. On a 64-bit operating system, over and above the minimum requirements, you will need approximately 1 GB of memory per 1 million files being processed. Having more than 50 million files per Account is not recommended. Memory requirements can be reduced either by staggering the times at which Agent machines back up (to avoid several Accounts being loaded into memory simultaneously), or by spreading Accounts containing large numbers of files over different StorageServers.
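A minimal sketch of the 1 GB per 1 million files rule, assuming a placeholder base figure for the operating system and minimum requirements (base_gb is an assumption; check the product's stated minimums):

    def estimated_memory_gb(files_loaded_simultaneously, base_gb=4.0):
        """Estimate StorageServer memory on a 64-bit OS.

        Roughly 1 GB per 1 million files processed at the same time, on
        top of an assumed base (base_gb) covering the minimum requirements.
        """
        return base_gb + files_loaded_simultaneously / 1_000_000

    # Three Accounts of 20 million files each backing up at the same time:
    print(estimated_memory_gb(3 * 20_000_000))  # 64.0 GB
    # Staggered so only one of those Accounts is in memory at a time:
    print(estimated_memory_gb(20_000_000))      # 24.0 GB

Staggering the backup windows cuts the peak from 64 GB to 24 GB in this example, which is the effect described above.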