Cohesity vs. Veeam compared: storage usage

In this post I am focusing on storage usage when comparing Veeam and Cohesity. I have worked with Veeam for many years now, and I am really interested in how the new technology from Cohesity performs in terms of storage usage.

Veeam setup

So, first of all let’s look at the specification of the setup. I am using Veeam 9.5 update 2 on a Windows Server 2012 environment within VMware vSphere 6.0.

Veeam version 9.5 update 2

The environment has two proxies, both capable of 8 concurrent tasks. The proxies are Windows Server 2012 VMs in the same vSphere 6.0 cluster. The Veeam repository is another Windows Server 2012 VM, located in another datacenter. This server is also a virtual machine in a vSphere 6.0 cluster, using iSCSI storage from Dell (EqualLogic).

Cohesity repository

To get the comparison started, we need to add the Cohesity storage as a repository to Veeam. The Cohesity software version I use for this test is 4.0.1, with a build of the 31st of July 2017.

Cohesity Cluster Version 4.0.1

First, I created a View (the equivalent of a file share) in Cohesity on a test View Box (an administrative partition of the Cohesity cluster storage). The View Box uses inline deduplication and compression; encryption is not enabled.

View Box setting

The View I created within the View Box inherits all deduplication and compression settings.

View setting

Next, I added the Cohesity View to Veeam as a repository, mounting it as a CIFS share. No limits on concurrent tasks or read/write rates are applied to this share.

Repository configuration with Cohesity storage

Furthermore, I am using the "Use per-VM backup files" setting, as it is the preferred setting for Cohesity. The vPower NFS service is also enabled on this repository.

Having all pieces of infrastructure in place, we can start playing with it.

Job settings

For the storage usage test I am using an existing Veeam job. I cloned this job and changed the repository from the Veeam repository to the Cohesity one.

The workload I am backing up with the job consists of 10 VMs:

  • 4 are Windows Server VMs
  • 6 are Linux VMs
  • Disk sizes vary from 20GB to 445GB
  • Total disk size of the workload: 1.10 TB
  • All VM disks are thick provisioned in vSphere

The job runs on a daily basis and keeps 4 restore points.

Job configuration with 4 restore points

I configured the job with a forward incremental scheme.

Job configuration: incremental backups

On the storage side I enabled all settings for compression and deduplication. These settings should result in the most efficient storage usage.

Compression and deduplication enabled

Cohesity configuration

Now let's configure the Cohesity version of the same backup job. First of all, this cloned job backs up to the Cohesity repository. Secondly, I disable the storage compression and deduplication settings in Veeam, since this work is performed by the Cohesity storage.

Cohesity configuration: no compression and deduplication

With the configuration done, it is time to grab a coffee and let the jobs run for some days.

Storage usage compared

After a week of successful runs, I checked the storage usage of both jobs. First I looked at the Veeam repository: the total usage of the job is 367GB.

Storage usage with Veeam: 367GB

To be sure everything ran well, I looked at the contents of the folder. As expected, I see the files for the four restore points.

Contents of the backup folder

Let's look at the Cohesity storage usage.

The share shows a usage of 384GB. This is a little more than Veeam is using.

View storage usage: 384.14GB

Looking at the share, Windows reports the data as using 851GB. The efficiency of the compression and deduplication is clear: a reduction factor of roughly 2.2 is achieved on the files.

Storage usage on the share
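The factor of 2.2 is simply the ratio of the logical data size to the physical usage; a quick sketch using the numbers from the screenshots above:

```python
# Space-reduction factor on the Cohesity View: logical size as reported
# by Windows divided by physical storage consumed (figures from this post).
logical_gb = 851.0     # data size reported by Windows on the share
physical_gb = 384.14   # storage consumed on the Cohesity View

reduction_factor = logical_gb / physical_gb
print(f"Reduction factor: {reduction_factor:.1f}x")  # -> 2.2x
```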

Conclusions

In this specific case, we can point out a winner: Veeam was able to save an additional 17GB on the same job. But we do have to take some things into account.

The jobs did not run at the same time. There was a three-hour interval between them, which could influence the data to be backed up.

With the "Use per-VM backup files" setting, the number of backup files generated differs: instead of 5 files in the Veeam repository, we see 51 files in the Cohesity View. This might influence the disk usage needed, taking filesystem properties into account.
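One plausible accounting for the 51 files, purely my reconstruction rather than something verified against the directory listing: each of the 10 VMs carries its own chain of five backup files, plus a single job-level metadata (.vbm) file.

```python
# Hypothetical breakdown of the 51 files seen with per-VM backup files
# enabled (assumed chain shape, not verified against the repository).
vms = 10
files_per_vm_chain = 5   # assumed: 1 full (.vbk) + 4 increments (.vib)
job_metadata_files = 1   # assumed: one .vbm metadata file for the job

total_files = vms * files_per_vm_chain + job_metadata_files
print(total_files)  # -> 51
```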

So what about the two numbers? Given that the production workload is 1.1 TB in size and 4 restore points are available (a time span of 4 x 24 hours), the compression factors are both great. Veeam needs 32.5% of the production storage, resulting in a compression factor of 3.06.

Cohesity, on the other hand, needs 34.1% of the production storage, resulting in a compression factor of 2.93.
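Minor rounding aside, both percentages and factors follow directly from the numbers in this post, taking 1.10 TB as 1126.4 GB:

```python
# Backup footprint relative to production size, for both repositories.
production_gb = 1.10 * 1024  # 1126.4 GB of thick-provisioned disks

for name, used_gb in [("Veeam", 367.0), ("Cohesity", 384.14)]:
    pct = used_gb / production_gb * 100
    factor = production_gb / used_gb
    print(f"{name}: {pct:.1f}% of production, compression factor {factor:.2f}")
```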

Both numbers are great if you ask me, and for a better comparison between the products more things have to be taken into account. What about performance in backup and restore, scalability and ease of use? Please have a look at my next blog post, where I will discuss the performance of the same backup job when running in Veeam and Cohesity.

 

Update 17 October 2017

With the launch of Cohesity 4.1.2, it is now possible to change a View Box's redundancy settings. By default, the Resiliency Factor was set to 2, meaning that data was stored on two different nodes in the cluster (comparable to a RAID1 configuration). Now the View Box redundancy can be set to erasure coding with 2 data stripes to 1 coded stripe (2:1 is the default setting in a 4-node cluster).

This setting changes the redundancy to a RAID6-like configuration.

When enabling this setting on our test View Box, the storage usage on the Cohesity share drops from 384GB to 321.49GB. In this case, Cohesity has the best cards in terms of storage usage.
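As a rough check: going from Resiliency Factor 2 (every block on two nodes, 2.0x overhead) to erasure coding 2:1 (three stripes stored for every two data stripes, 1.5x overhead) should cut physical usage by a quarter. A back-of-the-envelope sketch under those assumptions:

```python
# Naive prediction of the effect of switching from RF2 to EC 2:1.
rf2_overhead = 2.0          # data mirrored to two nodes (RAID1-like)
ec_overhead = (2 + 1) / 2   # 2 data stripes + 1 coded stripe -> 1.5x

usage_rf2_gb = 384.14
predicted_gb = usage_rf2_gb * ec_overhead / rf2_overhead
print(f"Predicted usage with EC 2:1: {predicted_gb:.1f} GB")  # -> 288.1 GB
```

The measured 321.49GB is higher than this naive estimate, so the redundancy overhead is clearly not the only component of the physical usage; treat this as a rough sanity check, not a guarantee.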

 

 

Larik-Jan

Cloud Architect & CEO at Fundaments B.V.
Architecting cloud environments, preferably with new disruptive technology, is my passion. I have a background in both technical and executive roles at various service provider companies, and I now work at a cloud provider with a focus on tailor-made infrastructure-as-a-service.
