As mentioned in my previous post, we’re gearing up to deploy RHEV at my workplace. The preparation for this included tacking on an additional shelf (MD1000) of storage to our pre-existing Dell MD3000i iSCSI-based storage device. Needless to say, this was not cheap. The original Dell MD3000i, with only a few 146GB drives, was close to $13,000 when we bought it two years ago. That does seem like a lot of money for not a whole lot of storage, but I will give the device its due:
- The one we bought has redundant power supplies. Those power supplies are hot-swappable.
- The one we bought has redundant RAID array controllers providing four iSCSI connections to the device. Those controllers are hot-swappable.
- It can be managed either in-band through the iSCSI connections, or out-of-band through an external switch-connected IP.
- It has Windows™ graphical and CLI management tools, as well as Linux-based (Red Hat and others, I believe) CLI management tools.
- It is (relatively) easy to configure.
- It can automatically e-mail you if/when something goes wrong, and can also help you diagnose what went wrong. I cannot stress enough how critical this is at 3:00am when you’re not necessarily at the top of your game.
- It supports SAS, SATA, and near-line SAS right out of the box, though it’s a bit of a pain that it requires different sleds for the different drive types.
- It can daisy-chain with two additional MD1000 shelves for 45 total drives (15 per shelf), and can manage all of them from its software.
When you break that down and factor in assembly cost, manufacturing cost, and so on, you do get somewhere up around the $13,000 range. That’s great for first-tier critical storage, but what if you’re targeting the second tier and don’t need all of that? Say, for longer-term archival or backup-to-disk, where 24/7 availability isn’t critical and it’s more about cheap, massive storage than raw performance? I was inspired by the folks over at Backblaze, who describe in this post how they build their own Debian-based storage devices (pretty much from scratch, with a breakdown of components and a basic wiring diagram) to power their backup business. So I’ve decided to try to build my own Linux-based iSCSI SAN device.
The target price is $1500. I would like to get a decent-performing iSCSI device that is dual-homed (separate from the on-board LAN), at least gigabit speed, with as much storage as I can muster at that price point. Expandability is a plus. I’ve spent a few months researching and comparing, and came up with this list over at Newegg:
With tax and shipping, that’s roughly $1500 — right around my target price point. Plus, the case has an additional x16 PCIe slot that can take another RAID card, and three additional 5.25″ drive bays that can hold another four drives later on for expansion.
I realize this won’t match the speed and enterprise-class features of something like an MD3000i, at least not right off the bat, but I’m interested to see what kind of device I can get for this amount and what kind of value it will have at my place of business as a second-tier device. Who knows, maybe I’ll write some scripts and simple front-ends to handle the setup and destruction of LUNs and targets.
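To give a rough idea of what that LUN/target scripting might look like, here’s a minimal sketch using tgtadm from scsi-target-utils (the standard iSCSI target tooling on CentOS/Fedora at the time). The target IDs, IQN, and backing device below are hypothetical examples, and the `TGTADM` variable can be pointed at `echo` for a dry run:

```shell
#!/bin/sh
# Sketch: create and destroy an iSCSI target with tgtadm (scsi-target-utils).
# The tid, IQN, and backing store below are made-up examples.
# Set TGTADM=echo to dry-run the commands without touching tgtd.
TGTADM="${TGTADM:-tgtadm}"

create_target() {
    # $1 = target id, $2 = target IQN, $3 = backing block device or file
    "$TGTADM" --lld iscsi --op new --mode target --tid "$1" --targetname "$2"
    "$TGTADM" --lld iscsi --op new --mode logicalunit --tid "$1" --lun 1 --backing-store "$3"
    "$TGTADM" --lld iscsi --op bind --mode target --tid "$1" --initiator-address ALL
}

destroy_target() {
    # $1 = target id
    "$TGTADM" --lld iscsi --op delete --mode logicalunit --tid "$1" --lun 1
    "$TGTADM" --lld iscsi --op delete --mode target --tid "$1"
}

# Example dry run:
#   TGTADM=echo create_target 1 iqn.2010-04.local:san.backup1 /dev/sdb
```

A real front-end would obviously want to persist these settings (tgtd forgets them on restart) and restrict `--initiator-address` to the storage VLAN rather than binding to ALL.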
Part 2 will cover the build process (the parts are due in tomorrow), installation, and software configuration. I’m planning to run either CentOS or Fedora on it; we’ll see which wins.
Part 3 will be a wrap-up and reflection on the device sometime down the road, after I’ve had it up and running for a while.
As always, I welcome your comments and questions in the section below.