If you are ever contemplating deployment decisions, remember that simplicity and flexibility of management should trump simplicity of deployment if it’s a close call. It would have been simple enough to deploy a typical VM with a second large VMDK, but managing such an arrangement would be more difficult.
In fact, this was just the first of several large pools of storage that I needed to serve up.
I had to serve up a large amount (several terabytes) of flat file storage for our Software Development Team; I wasn’t trying out the HIT/LE because I ran out of things to do. Others might contend that because the volumes aren’t seen by vCenter, one is making things more complex, not less. I understand the reason for thinking this way, but my experience has proven quite the contrary. If you wanted to serve up 2TB of storage using a VMDK, more than likely you’d have a 2TB VMFS volume holding something like a 1.6TB VMDK file, to leave room for hypervisor snapshots. With a native, guest attached volume, you would be able to use the entire 2TB of space, and it’s easy to crank up the frequency of snapshot and replica protection on just the data, without interfering with the VM that is serving up the data. The one “gotcha” about guest attached volumes is that they aren’t visible via the vCenter API, so commercial backup applications that rely on seeing those volumes through vCenter won’t be able to back them up. If you use such applications for protection, you may want to determine whether guest attached volumes are a good fit, and if so, find alternate ways of protecting those volumes.
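The capacity trade-off above is easy to sketch in a few lines. This is just illustrative arithmetic using the numbers from the text; the 20% snapshot reserve is an assumption standing in for whatever free space your environment holds back for hypervisor snapshot growth.

```python
# Illustrative sizing comparison: VMDK on VMFS vs. a native guest attached volume.
TB = 1000 ** 4  # decimal terabyte, in bytes

vmfs_datastore = 2 * TB        # the 2TB VMFS volume from the example
snapshot_reserve = 0.20        # assumed fraction held back for snapshot growth
usable_vmdk = vmfs_datastore * (1 - snapshot_reserve)  # roughly the 1.6TB VMDK

native_volume = 2 * TB         # guest attached: the entire volume carries data

print(usable_vmdk / TB)        # roughly 1.6 TB usable as a VMDK
print(native_volume / TB)      # the full 2 TB usable guest attached
```

The exact reserve varies per shop, but the shape of the trade-off is the same: the VMDK route pays a standing space tax on every large datastore.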
Thanks to the independent volume(s), one can easily see with SANHQ what the requirements of that data volume are. My “in the trenches” experience with guest attached volumes in VMs running Microsoft OSes (and EqualLogic’s HIT/ME) has proven that recovering data off of guest attached volumes is just easier – whether you recover it from a snapshot or a replica, clone it for analysis, etc. Having the data guest attached also means that you can easily prepare a new VM presenting the same data (via NFS, Samba, etc.) and cut it over without anyone knowing – especially when you are using DNS aliasing.
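The DNS-aliasing trick works because clients only ever know the alias, never the host actually serving the volume. A minimal sketch as a BIND zone fragment – all hostnames here are hypothetical:

```
; Clients mount shares from the alias, not the real server name.
fileserver      IN  CNAME   filesrv01.example.com.

; Cutover: stand up the replacement VM, attach the same guest volume,
; then repoint the alias. Clients never notice the host changed.
; fileserver    IN  CNAME   filesrv02.example.com.
```

Keep the record’s TTL short ahead of the cutover so cached lookups expire quickly.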
Oftentimes, systems serving up large volumes of unstructured data are hard to update; with the data split off onto its own guest attached volume, the VM itself can easily fit in your standard VMFS volumes.