When cheap and simple turns into anything but…

I’ve run a single 3TB Seagate in my stripped-down workstation server for media for a few years, and it’s been “fine”. I even made a hacked-together drive cage for it. Then, a few weeks ago, things started to slow down… a lot. Speeds dropped from 50-75MB/s to under 1MB/s, and the resulting latency issues started locking up the whole server, requiring hard shutdowns to recover.

Fine. It’s served its purpose well and I’ve gotten a fair bit of usage out of it. I’ll get some more drives before it dies completely, move my media over, use the old drive until it gives out, and then harvest the magnets. Shouldn’t be too hard. Being on a tight budget (<$200), I wanted the most bang for my buck. Luckily, Newegg had an iStarUSA 3×5.25″-to-5×3.5″ hot-swap bay for 77% off. That gives me room to grow. Next came the drives. I found some 2TB EMC SAS drives with trays for $20 apiece, offered $17 apiece, and got them. I knew I’d have to deal with reformatting them, but that seemed fine; I’d just pass through the on-board SAS adapter and be done. $152 and some change for a drive cage and 6×2TB drives (RAIDz2 plus a cold spare). Not the cheapest $/GB, but the cheapest I could find with redundancy.

While I waited for the drives to ship, I used a 500GB external USB drive for current media on my Plex server, since the old 3TB drive had degraded to the point of unusability. Everything arrived and I excitedly started putting it together. Then the problems started. First, the drive cage. It turns out the Lenovo D20 I use for my server has tabs in the drive bay. Fine if you’re using the standard single-bay drives it was meant for; not fine if you’re installing a 3×5.25″ drive cage. That meant an hour of painstakingly bending 12 steel tabs out of the way in a tight space to get the cage in. With that done, I started on the cables. It turns out the SATA power connectors are stacked on top of each other and, once installed, sit on the backside of the case. That meant finagling the cables to get them plugged in without damage. Only two of the plugs could be used, which left the middle plug on the cable essentially unusable (you could probably hang an SSD off of it, but anything heavier would fall). With that done, I installed the drives, connected the cables, and booted up. A bit of a headache, but everything works now and I’m done, right? Right?

Here come the next 3 days. FreeNAS was my first attempt. I installed it, passed the on-board SAS controller through and… nothing. It saw the controller but couldn’t see any of the drives at all. Several hours and versions later, I finally determined that while FreeNAS is ***supposed*** to work with that controller, it doesn’t really, and there was a whole mess of other issues once you got it running that I wasn’t willing to deal with. Fine; I wanted to learn ZFS on Linux anyways and not be reliant on the *easy* way in FreeNAS. I loaded up an Ubuntu VM, passed through the controller and bingo: the controller is seen, the drives are seen. Time to start the reformatting and get going. I used sg3_utils to pick up the drives and start reformatting them from a 520-byte sector size down to 512 bytes so the system could read and use them. I figured out the right command string, got it ready for each drive, opened a number of SSH sessions and started. Then I waited. And waited. And waited. The supposed 20-minute process turned out to be longer by a couple of zeros: each drive took 8-10 hours on average to finish. Those were the successful ones; the others took multiple attempts. Before I knew it, 3 days were gone and I was *just* finishing the reformatting. But it was done now, right? Just do the regular formatting, build the ZFS pool and say hello to my new storage. Except no! If I had learned anything from those few days, it was that easy had driven off a cliff.
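For anyone hitting the same 520-byte problem, the per-drive command looked roughly like this. This is a sketch: the /dev/sgX names are illustrative, and I’ve wrapped the call in a helper that only prints the commands so you can review them (or pipe them to sh) before touching real disks.

```shell
# Print the sg_format invocation (from the sg3_utils package) for each
# drive. sg_format --format issues a SCSI FORMAT UNIT, and --size=512
# rewrites the drive from 520-byte to 512-byte logical sectors.
# WARNING: actually running these destroys all data on the drives.
format_cmd() {
    printf 'sg_format --format --size=512 %s\n' "$1"
}

# One command per pool drive -- each took 8-10 hours on these 2TB disks.
for dev in /dev/sg2 /dev/sg3 /dev/sg4 /dev/sg5 /dev/sg6; do
    format_cmd "$dev"
done
```

Piping the output to `sh` runs them for real; I kept one SSH session per drive instead so I could watch each one crawl along.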

While the drives now worked and the zpool was built, something wasn’t right. These speeds were not normal. Even in the worst case I was expecting 10-20MB/s per drive and at least 40-50MB/s across the whole 5-drive array. What I was getting were spikes of ~15MB/s, with the vast majority of writes sitting under 5MB/s when it was transferring at all. Why? There were no issues with CPU, RAM or swap on the VM. In fact, the CPU was hardly working, RAM was under 50% even with other services running, and swap wasn’t being touched. Maybe I screwed up the zpool? After all, I’m still new to manually setting up and managing ZFS. I did my research, tested a few flags, checked the SMART status, did everything I could think of, and still couldn’t get a satisfactory result. Surely it must be something. The drives were refurbished, so maybe they’re bad? Nope. SMART status was good (though Power On hours were up there, as expected). After more research, strung across a couple of newsgroup listings, forum posts and random GitHub repositories, I discovered the issue. It’s the controller! Why? Beats me. Although “everyone” agreed it was a controller issue (regardless of the system the chipset was in), no one had a firm idea of why it was happening. Even now I can’t find a definitive answer, and frankly I was far too worn out after a week of no sleep and frustration to keep digging. So I broke down, ordered a Dell H200 and some cheap SFF-8087-to-SATA cables for another $43, and waited.
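For reference, these are the kinds of checks I was running while chasing the slowdown. Again just a sketch: the pool name “tank” and the /dev/sdX names are placeholders, and the helper prints the commands rather than running them, so nothing here needs live hardware.

```shell
# Emit the diagnostic commands (pool and device names are illustrative);
# pipe the output to sh to actually run them on a live system.
diag_cmds() {
    echo 'zpool status -v tank'     # pool/vdev health and any errors
    echo 'zpool iostat -v tank 5'   # per-vdev bandwidth, sampled every 5s
    for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
        echo "smartctl -H $dev"     # smartmontools overall health verdict
    done
}
diag_cmds
```

A healthy 5-drive array should show tens of MB/s in `zpool iostat`; mine was stuck in the single digits even though every drive passed its health check, which is what pointed me away from the drives and toward the controller.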

The new controller and SAS cables arrived, and I prepared for another ordeal. I followed u/techmattr’s guide on flashing the H200 to IT mode and prepared to install it in my server. I disabled passthrough for the old controller, put the server in maintenance mode and shut it down. Thirty minutes of cleaning up cables and getting the card installed (the bracket was a bit difficult to get seated), a reboot to enable passthrough on the H200, and then the moment of truth. Success? The controller popped up in ESXi, and the drives were picked up in the scan and appeared to be working. I disabled maintenance mode, passed the H200 through and started my storage VM. I SSH’d in and bingo! The controller and 4 of the 5 drives were seen. I ran sg_format on the 5th drive and got that working too. The proper ZFS commands were already in my shell history, so it was just a few key presses and an Enter to get the pool running. An hour more sorting out some other configuration (fixing SMB, disabling NFS, fixing mount folders, etc.) and it was time to see if it worked. I mounted the share I’d created on my Plex VM and started transferring a few GBs of files. I almost yelled at the top of my lungs: sustained writes of 60-75MB/s, highs of up to ~95MB/s, with spikes of >120MB/s (line-limited by my Gigabit network). Finally, after all that, we had liftoff. I started copying the data from my temporary external USB drive to the new storage and then… went back to sleep. I didn’t care that it was 4PM on a Tuesday. I earned it.
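The pool itself was nothing exotic. Reconstructed from memory with illustrative pool, dataset and device names (the real commands came straight out of shell history), it was along these lines; as before, the helper prints the commands for review rather than executing them.

```shell
# Print the pool-creation commands (names illustrative; pipe to sh to run).
pool_cmds() {
    # One raidz2 vdev over 5 drives: any two drives can fail without
    # data loss. /dev/disk/by-id paths are safer than /dev/sdX in
    # practice, since they survive device renumbering across reboots.
    echo 'zpool create -o ashift=12 tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf'
    # A dataset for the media share; lz4 compression costs almost nothing.
    echo 'zfs create -o compression=lz4 -o mountpoint=/srv/media tank/media'
}
pool_cmds
```

The sixth drive stays on the shelf as the cold spare, ready for a `zpool replace` when one of the five inevitably dies.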

Two weeks later, everything is humming along nicely. I’ve replaced most of the media I lost on the old 3TB drive from the corrupt VMFS. The array is already 36% full but should last me until I can move and have the room and budget for a full cluster. Although the stress was probably not worth it over a lazier solution, I still got a good deal at ~$195, and I should make a little of it back selling the empty EMC trays. A similarly sized and redundant setup would have cost me ~$250 at minimum, with worse speeds. I also learned quite a bit in the process about NFS, ZFS, ESXi PCI passthrough and SAS/SATA compatibility. My takeaway from all of this is that compatible doesn’t necessarily mean it “just works”, and that ultimately you get what you pay for. Cheaper meant a week of headaches in this situation. Was it worth the extra $50-100+ to avoid? Now, maybe, but at the time I may or may not have pondered whether the server would go through the window or bounce back.


  • Thanks to u/xsererityx on Reddit, whose post on flashing the H200 to IT mode helped with preparing the card, as well as to u/techmattr for the original article
  • The controller in question is identified as a Marvell Technology Group Ltd. MV64460/64461/64462 System Controller, Revision B (rev 01)
  • The hard drives are EMC 2TB 7.2K RPM SAS 3.5″ 6G drives, model #005049277