First impression of the XiV in “action”
The GUI is fancy. Looks like a Mac turned on its side. The GUI is also NOT web-based; it's an installed application, though I believe it's available for multiple platforms.
It really does seem to take all of the guesswork out of provisioning, since you don't really have any say on what goes where in your array.
Our first use? Backing up 6+ TB that was stored on Clariion and moving it to XiV…
Now first off, I'm glad it was decided to do it this way. While a straight copy from one array to the other was possible, utilizing both arrays at the same time, it wouldn't have provided any comparison as to performance.
The backup was done using Veritas NetBackup, over the network. The data consisted of a pair of hosts running an extensive XML-type database used for indexing and categorization of unstructured content. The backup and restore were both done to the same hosts, over the same network, and the storage was addressed over the same switches, just zoned to different arrays. The only significant difference was that while the backup was done multiplexed, the restore had to be done single-threaded (because NBU had multiplexed both backups to the same tape).
I have to get the final start/stop times out of NBU, but from the hallway conversation I had with the NBU guy, the backup took 6-8 hours (for both hosts), the restore took 21+ hours…
The most interesting part of it was that the first restore took almost the same amount of time as the backup, which is kind of what we would expect. The second host took dramatically longer to restore than to back up.
This would indicate to me that, as expected, the XiV didn’t handle the long, sequential write very well. Since the host only connects to two of the six data nodes, virtually 100% of writes have to be destaged over the Gig-E backend. My guess is we nailed the cache to the wall with the first restore, and then kept it pegged with the second one.
I like sequential write-tests on this scale because they show without a doubt whether the cache is masking a back-end issue or not. If it is, this is exactly what you'll see: an initial burst of writes followed by a sharp drop as the cache saturates. This is even more pronounced in a busy array (rather than an idle one) because a certain percentage of cache will already be consumed by host reads/writes.
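The burst-then-drop pattern can be sketched with a toy model: the host sees full speed while the write cache absorbs the incoming stream, then throughput collapses to the destage rate once the cache fills. All the numbers here (cache size, host rate, destage rate) are made-up illustrations, not XiV specs:

```python
# Toy model of cache masking a slow back end during a long
# sequential write. All parameter values are hypothetical.
def observed_throughput(cache_gb=120.0, host_mb_s=700.0,
                        destage_mb_s=250.0, seconds=1200, step=60):
    """Return (time_s, MB/s) samples for a sustained sequential write."""
    samples = []
    dirty_gb = 0.0  # data sitting in cache, waiting to be destaged
    for t in range(0, seconds, step):
        if dirty_gb < cache_gb:
            rate = host_mb_s  # cache absorbs the burst at full host speed
            dirty_gb += (host_mb_s - destage_mb_s) * step / 1000.0
        else:
            rate = destage_mb_s  # cache full: limited to back-end speed
        samples.append((t, rate))
    return samples

for t, rate in observed_throughput():
    print(f"{t:5d}s  {rate:.0f} MB/s")
```

With these numbers the host sees ~700 MB/s for the first few minutes, then drops to the 250 MB/s destage rate for the remainder of the run, which is the signature I'd expect the second restore to have hit.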
This doesn’t bode well for an application that requires occasional complete reloads of the XML database…
I can’t wait to see it in action.