When cheap and simple turns into anything but…

I’ve run a single 3TB Seagate for a few years in my stripped-down workstation server for media and it’s been “fine”. I even made a hacked drive cage for it. Then a few weeks ago things started to slow down… a lot. Speeds dropped from 50–75MB/s to under 1MB/s, and the resulting latency issues started locking up the whole server, requiring hard shutdowns to continue.

Fine. It’s served its purpose well and I’ve gotten a fair bit of usage out of it. I’ll get some new drives before it fails completely, move my media over, run the old drive until it dies and then harvest the magnets. Shouldn’t be too hard. Being on a tight budget (<$200) I wanted to get the most bang for my buck. Luckily Newegg had an iStarUSA 3×5.25″-to-5×3.5″ hot-swap bay for 77% off. That gives me room to grow. Next come the drives. I found some 2TB EMC SAS drives with trays for $20 apiece, offered $17 apiece and got it. I knew I’d have to deal with reformatting them, but that was fine — I’d just pass through the on-board SAS adapter and be on my way. $152 and some change for a drive cage and 6×2TB drives (RAIDz2 plus a cold spare). Not the cheapest $/GB, but the cheapest I could find with redundancy.

While I waited for the drives to ship I used a 500GB external USB drive to hold current media for my Plex server, since the old 3TB drive had degraded to the point of unusability. Everything arrived and I excitedly started putting it together. Then the problems started. First with the drive cage: it turns out the Lenovo D20 I use for my server has tabs in the drive bay. Fine if you’re using the standard single-bay drives it was meant for; not fine if you’re installing a 3×5.25″ drive cage. That meant an hour of painstakingly bending 12 steel tabs out of the way in a tight space to get the cage in. Got it in and then started on connecting the cables. It turns out the SATA power connectors are stacked on top of each other and, once installed, sit on the backside of the case. This meant having to finagle the cables to get them to plug in without damage. Only two of the plugs could be used, which left the middle plug on the cable essentially unusable (you could probably hang an SSD off it, but anything heavier would fall). Got that done, installed the drives, connected the cables and booted up. A bit of a headache, but everything works now and I’m done, right? Right?

Here come the next 3 days. FreeNAS was my first attempt. I installed it, passed the on-board SAS controller through and… nothing. It saw the controller but couldn’t see any of the drives at all. Several hours and versions later I finally determined that while FreeNAS is ***supposed*** to work with that controller, it doesn’t really, and there was a whole mess of other issues once you got it running that I wasn’t willing to deal with. Fine; I wanted to learn ZFS on Linux anyway and not be reliant on the *easy* way in FreeNAS. I loaded up an Ubuntu VM, passed through the controller and bingo: the controller is seen, the drives are seen. Time to start the reformatting and get going. I used sg3_utils to pick up the drives and start reformatting them from a 520-byte sector size to 512 bytes so the system could read and use them. I figured out the right command string, got it ready for each drive, opened a number of SSH sessions and started. Then I waited. And waited. And waited. The supposed 20-minute process turned out to be longer by a couple of zeros: each drive took on average 8–10 hours to finish. Those were the successful ones; the others took multiple attempts. Before I knew it 3 days were gone and I was *just* finishing the low-level formatting. But it was done now, right? Just do the regular formatting, build the ZFS pool and say hello to my new storage. Except no! If I had learned anything from those few days, it was that easy had driven off a cliff.
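For anyone retracing this, the reformat boils down to one `sg_format` invocation per drive, each launched in its own SSH session so they run in parallel. A minimal sketch — the `/dev/sgN` device nodes are placeholders for whatever `sg_scan`/`lsscsi` shows on your box, and `sg_format --size=512` is destructive and, as I found out, very slow:

```shell
#!/bin/sh
# Sketch of the sg3_utils workflow. First confirm the 520-byte sector size
# (run these by hand on the real hardware):
#   sg_scan -i
#   sg_readcap --16 /dev/sg1

# Build the (destructive!) reformat command for one drive, so each drive's
# job can be started in a separate SSH session:
format_cmd() {
  # $1 = SCSI generic device node, e.g. /dev/sg1
  echo "sg_format --format --size=512 $1"
}

# Print the command for each of the five drives (placeholder nodes):
for dev in /dev/sg1 /dev/sg2 /dev/sg3 /dev/sg4 /dev/sg5; do
  format_cmd "$dev"
done
```

Expect this to run for hours per drive on big disks; `sg_format` prints periodic progress indications while it grinds away.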

While the drives now worked and the zpool was built, something wasn’t right. These speeds were not normal. Even in the worst case I was expecting 10–20MB/s per drive and at least 40–50MB/s across the whole array with 5 drives. What I was getting were spikes of ~15MB/s, with the vast majority of writes sitting under 5MB/s when it was transferring at all. Why? There were no issues with CPU, RAM or swap on the VM. In fact, the CPU was hardly working, RAM was under 50% even with other services running, and swap wasn’t even being used yet. Maybe I screwed up the zpool; after all, I’m still new to manually setting up and managing ZFS. I did my research, tested a few flags, checked the SMART status — did everything I could think of and still couldn’t get a satisfactory result. Surely it must be something. The drives were refurbished, so maybe they’re bad? Nope. SMART status is good (though Power On hours are up there, as expected). After doing more research, strung across a couple of newsgroup listings, forum posts and random GitHub repositories, I discovered the issue. It’s the controller! Why? Beats me. Although “everyone” agreed it was a controller issue (regardless of the system the chipset was in), no one had a firm idea of why it was happening. Even now I can’t find a definitive answer, and frankly I was far too worn out after a week of no sleep and frustration to keep digging. So I broke down, ordered an H200 and some cheap SFF-8087-to-SATA cables for another $43, and waited.
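For context on the layout: RAIDz2 spends two drives’ worth of space on parity, so the quick capacity math I ran when ordering looks like this. A sketch only — the commented `zpool create` line mirrors the shape of my pool, but the device paths are placeholders:

```shell
#!/bin/sh
# RAIDz2 usable capacity, ignoring ZFS metadata overhead:
#   usable = (number_of_drives - 2) * drive_size
raidz2_usable_tb() {
  # $1 = drives in the vdev, $2 = TB per drive
  echo $(( ($1 - 2) * $2 ))
}

raidz2_usable_tb 5 2   # five 2TB drives in the pool

# The pool itself was built along these lines (placeholder device ids;
# /dev/disk/by-id paths survive reboots better than /dev/sdX):
#   zpool create tank raidz2 \
#     /dev/disk/by-id/scsi-DRIVE1 /dev/disk/by-id/scsi-DRIVE2 ...
#   zpool iostat -v tank 5    # watch per-drive throughput during a copy
```

So five of the six 2TB drives in RAIDz2 give roughly 6TB usable, with the sixth sitting on the shelf as the cold spare.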

The new controller and SAS cables arrived and I prepared for another ordeal. I followed u/techmattr‘s guide on flashing the H200 to IT mode and prepared to install it in my server. I disabled passthrough for the old controller, put the server in Maintenance Mode and shut it down. Thirty minutes of cleaning up cables and getting the card installed (the bracket was a bit difficult to get seated), a reboot to enable passthrough on the H200, and then the moment of truth. Success? The controller popped up in ESXi, the drives were picked up in the scan and appeared to be working. I disabled Maintenance Mode, passed the H200 through and started my storage VM. I SSH’d in and bingo! The controller and 4 of the 5 drives were seen. I ran sg_format on the 5th drive and got that working too. The proper ZFS commands were already in history, so it was just a few presses and an Enter to get that running. An hour more sorting out some other configuration (fixing SMB, disabling NFS, fixing mount folders, etc.) and it was time to see if it worked. I mounted the share I created on my Plex VM and started transferring a few GBs of files. I almost yelled at the top of my lungs. Sustained writes of 60–75MB/s, highs of up to ~95MB/s, with spikes of >120MB/s (right at the limit of my Gigabit network). Finally, after all that, we had liftoff. I started copying the data from my temporary external USB drive to my new storage and then… went back to sleep. I didn’t care that it was 4PM on a Tuesday. I earned it.
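Those numbers came from watching real file copies, but a quick `dd` against the mounted share gives the same read on sustained sequential write speed. A rough sketch — the default target path here is a stand-in, so point `TARGET` at a file on the share to test the array rather than local disk; `conv=fdatasync` makes GNU dd flush to disk before reporting, so the rate is honest:

```shell
#!/bin/sh
# Write 64MiB of zeros and let dd report the sustained throughput on its
# final status line. TARGET is a placeholder path; override it to point at
# the mounted SMB/NFS share you want to benchmark.
TARGET="${TARGET:-/tmp/write-speed-test.bin}"

dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

# Clean up the test file afterwards
rm -f "$TARGET"
```

Bump `count` up for a longer run once caches stop flattering the numbers; 64MiB is just enough for a quick smoke test.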

Two weeks later, everything is humming along nicely. I’ve replaced most of the media I lost on the old 3TB drive from the corrupt VMFS. The array is already 36% full, but it should last me until I can move and have the room and budget for a full cluster. Although the stress was probably not worth it over a lazier solution, I still got a good deal at ~$195, and I should make a little of it back selling the empty EMC trays. A similarly sized and redundant setup would have cost me ~$250 at minimum, with worse speeds. I also learned quite a bit through the process about NFS, ZFS, ESXi PCI passthrough and SAS/SATA compatibility. My takeaway from all of this is that compatible doesn’t necessarily mean it “just works”, and that ultimately you get what you pay for. Cheaper meant a week of headaches in this situation. Was it worth the extra $50–100+? Now, maybe; but at the time I may or may not have pondered whether the server would go through the window or bounce back.


  • Thanks to u/xsererityx on reddit, whose post on flashing the H200 to IT mode helped with preparing the card, as well as to u/techmattr for the original article
  • The controller in question is identified as a Marvell Technology Group Ltd. MV64460/64461/64462 System Controller, Revision B (rev 01)
  • The hard drives are EMC 2TB 7.2K RPM SAS 3.5″ 6G drives, model #005049277

PowerShell: GoodSync Installer/Updater, Service Disabler and Cleanup

I use GoodSync for file synchronization a lot. While an awesome program, there are two major issues I have with it: the lack of an auto-update mechanism, and the GoodSync Server service, which is re-enabled every time you update and which I personally don’t use. So I created a PowerShell script to take care of all of that: downloading the MSI, initiating the install/update, stopping and disabling the GoodSync Server service, and cleaning up the temporary files and desktop icons. Enjoy!



Site Broken, currently fixing.

The site is pretty broken right now after changing hosts. Currently in the process of fixing. Thank you for your patience.

Things should be fixed with exception of the Disqus comment system which I’ll take a look at later. If you find any issues just leave a comment with the standard WordPress commenting and I’ll get it fixed.

Hopefully some new articles in the future.

HTTPS brought to you by Let’s Encrypt!

The site is now fully (well, mostly) running on HTTPS. Thanks to my new host, MDD Hosting, the entire site is now secured with a certificate from Let’s Encrypt. 1&1 couldn’t do that without a big bundle of cash and being locked into their service!

There are still some bugs to be worked out, but the lock is here to stay. Hopefully, now that I’m done fighting with 1&1 after the last 5 months, I can get back to making some content.

PowerShell : Windows 10 Modern Application Removal Script

If you have Windows 10 you’ve no doubt seen the new modern apps and either love them, hate them or just don’t want to deal with them. Windows 10 does not provide an easy way to remove these applications or keep them from running, which can be a major issue for low-RAM systems. The good news is they can quickly be removed via a few PowerShell commands or a script. The script below is one I’ve been using to manage them on my systems. You can also find a link to GitHub, where I maintain this and several other PowerShell scripts I routinely use.

GitHub PowerShell Scripts | GitHub W10RemoveCoreApps Script

Patching the vCenter 6.x Appliance

With the latest version of the vCenter Appliance (vCSA) there is a new process for patching the appliance. Gone is the old Web UI of the 5.x era. The new process isn’t anything to be scared of, though, and should be familiar to most admins and techs. Instead of the old web interface, the upgrade process is basically just attaching the patch ISO and running a few commands via the console or SSH.

While researching the process to update my own server, however, I came across a few unclear instructions. I worked out what I needed to do after a few searches and a couple of KB articles. Most of this guide will be a rehash of others and the official instructions, but I will be including a few clarifications as well as some visual representations of the process to help those who got a little confused by the regular instructions.
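For orientation, the appliance-side half of the procedure is short. The sequence below is printed rather than executed, since `software-packages` only exists inside the vCSA appliance shell; attach the patch ISO to the appliance VM in vSphere first, then run the steps one by one over SSH or the console:

```shell
#!/bin/sh
# The vCSA 6.x patch sequence as VMware documents it, collected into a
# variable so the steps can be listed in order. On a real appliance these
# are run interactively in the appliance shell, not as a local script.
VCSA_PATCH_STEPS='software-packages stage --iso --acceptEulas
software-packages list --staged
software-packages install --staged'

printf '%s\n' "$VCSA_PATCH_STEPS"
```

Stage pulls the packages off the attached ISO, list lets you sanity-check what is about to be applied, and install does the actual update; the appliance then wants a reboot to finish up.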


Site should be Back Up

1&1 messed up royally and completely trashed the site. Luckily I do nightly backups to Dropbox, so I didn’t lose much, but it’s taken 2 days to get everything restored (no thanks to 1&1 Support). There may still be some issues and missing stuff that I’ll have to fix. Please let me know if you spot anything.

Windows/Office Digital River and Azure Blob links removed

I’ve removed the Windows/Office download links, as Microsoft has retired their Digital River/Azure Blob service (or at least removed Windows and Office from it). Since all the links now 302 and no longer work, there’s no point in keeping them up. I’ll keep looking for a new, safe source of ISOs and images for Windows and Office and update the page at that time. Until then, it’s back to the old ways of getting them.

Moved Hosts

Changed hosts so things are a bit broken at the moment.

ESXi / vSphere 6.0 General Availability Offline Depot

VMware released vSphere 6.0 to GA (General Availability) on 3/12/15. If you want to upgrade, you can pretty easily go to the VMware site and download the appropriate files. What’s missing, however, is an Offline Depot for those who want to perform an in-place upgrade. It seems VMware has chosen to make the Depot available only to those who have purchased a license for vSphere 6.0. I’m sure in time they will change this, but until then I’ll be making the Offline Depot available for download here.

Offline Depot via Mega [Resumable]

Otherwise, head to My VMware and get upgraded. You can see my previous post on upgrading with an Offline Depot here.