Xpenology Experiment – Norco RPC-3216 Case

What do you do when you have surgery and weeks of recovery with one arm out of commission? Start a tech project! This was not going to be easy and I could only spend short periods of time working on the project.

I had sixteen WD Red drives in 3TB, 4TB, and 6TB capacities that had been phased out for larger drives. I had been saving data to these drives and storing them in a closet as cold storage. I kept a log of what data was on which drive, but this was not efficient, and I ended up with different versions of files and duplicate data. I needed a better solution. I had an old computer with decent components that was not being used, so I thought I would build a storage server, but its case could only hold eight hard drives. I wanted to use all of my hard drives in a single system, so an eight bay case would not work for this project.

I shopped around, but didn’t want to spend a lot since this was just a project to fight boredom and make use of old drives with a lot of hours on them. I ended up deciding on the Norco RPC-3216, which is a 3U, rack mount, 16-bay SATA/SAS server chassis. I bought it on Amazon because I had a lot of reward points; if not for that, I would have likely purchased a Supermicro from eBay. Many of those come with everything except the hard drives, though the components are often energy hogs, and electricity is expensive where I live, so that would be an issue.

I knew this Norco case would have shortfalls because it cost far less than any other sixteen bay hot swap case and the reviews averaged out to okay. Because of the price, I was willing to accept some issues. Here’s a quick summary of my first impressions.

Pros:

  • Low price. Cheaper than almost anything new and with comparable features.
  • Decent amount of room. I installed an ATX motherboard and a non-modular PSU. Though the case accepts an ATX board, the PSU does cover two of the PCI slots – more on that in the cons. The photo shows the test installation; I had not yet bundled wires or installed the SAS cards. I will likely be removing everything because of the PCI slot covers on the case, which is also covered in the cons. [Photo: test-fitting components]
  • At close to forty pounds, the case is not as heavy as I expected, but then again, it’s made of cheap materials. It will be heavy once I load sixteen WD Red drives, so I bought a rack shelf that can hold 175 pounds instead of rails.
  • 3U! I almost went with 4U for the extra room, but I really prefer 3U. It’s the sweet spot for me.
  • The description and photos on Amazon and the Norco website are wrong. This case comes with three 120mm fans, not four 80mm fans (but they are garbage – more on that in the cons). The manufacturer changed the fans but kept the same model number.
  • The 120mm fans slide in and out in the orange trays, which in turn slide into a mounting bracket that supplies power. Handy, since the fans that were included are garbage (it needed to be said again). [Photo: Fan bracket power]
  • The 60mm fans are not nearly as loud as I expected (and they actually work). I hope they last because I didn’t order replacements in advance. I hate pulling things out of a rack for something like a dead fan! Maybe I should be preemptive.
  • There’s room for a couple SSDs on top of the hot swap bays, but I am not using them for my build since Xpenology uses a flash drive to boot the system. The OS is installed on the actual drives.
  • A slim optical drive can be installed, if anyone will ever need such a thing.
  • I think it’s a decent looking case. I almost painted the body of the case black since I have an open rack. I don’t care that much and it’s staying gray.
  • I didn’t cut myself! I expected sharp edges since this case was so inexpensive, but it was not bad.

Cons:

  • The 120mm fans are garbage. One of the three fans didn’t work, and this seems to be a common problem according to other reviews. Another fan was very weak and barely moved any air. I ordered three inexpensive Noctua fans and will replace all three now, because I don’t want to do this again in the near future when the one good fan fails. Please note that these fans have 4-pin connectors, and replacement fans should be the same so they fit in the trays that hold the fans (see photo). There’s a clip that holds the 4-pin connector, and a 3-pin connector would need a shim or something to hold it in place. You could go with any 120mm fan and skip the fan enclosures that allow them to be easily removed and reinserted, but if you do that, you will want some type of grill to keep wires out of the blades – and you will need it, because space in that area is cramped! [Photo: 4-pin fans]
  • Space is tight between the backplane and the fans, so routing the SAS cables is going to be a challenge. I waited to order those so I could see what I would be dealing with. I was going to order angled cables due to the lack of room, but I went with regular flat cables that should bend easily. I estimated I would need cables about 2′ 7″ long, but went with 1m; there is a little room behind the PSU’s rat’s nest of cables for the extra length. One reviewer mentioned removing a fan to route the cables. I considered removing the fan in front of the PSU, but I like a case to run as cool as possible. [Photo: Cramped fans and backplane]
  • The bracket that holds the fans should have been adjustable forward and back. There is a decent amount of room between the fans and the motherboard, and it would have only required the manufacturer to drill extra holes. This would have also made it easier to plug in the Molex power cables. You can remove the fan bracket to plug in cables, but I didn’t want to deal with that since the screws are on the sides and bottom of the case.
  • Back to the space between the fans and the backplane. The backplane requires 4-pin Molex power, and it can be difficult to route power to the connectors. There is a notch between the fan bracket and the side of the case, but it is very narrow. My PSU cables barely made it through that notch and were not long enough to reach all four Molex connectors on the backplane. I ended up using long Molex extension cables [like this] to run to the backplane. This made things much easier, but it was still difficult to plug them in. I wish the backplane used SATA power, which would have been much easier to connect! [Photo: Fan bracket, little room]
  • You will also want a fifth Molex extension cable, because the fans connect to a bracket that provides their power, and this bracket uses a Molex connector that is crammed into a really tight spot. Plug in this Molex connector first, because the other cables will get in the way. [Photo: Fan power]
  • Not really a fault of the case since it is 3U, but you should get a modular PSU (if buying one for the build). I had an old 850 watt unit that was not modular, so there are a lot of cables I don’t need bundled up and taking valuable room, which also blocks some airflow. Please note that this case uses a standard computer power supply and not a redundant server PSU. [Photo: Room for cables]
  • The PSU mounts partially over an ATX motherboard and covers two PCI slots. This blocks my third PCIe x16 slot, which I may need if I want to add another card (I’m considering 10GbE). It looks like a PCI riser card with a ribbon cable could fit. [Photo: Covered PCI slots]
  • There is one USB 3 port and one USB 2 port on the front of the case, not two USB 2 ports as in the description. It’s nice having USB 3, but the case cable has the USB 2 plug attached to the USB 3 plug and they cannot be split apart. That means the USB 2 and USB 3 headers on the motherboard must be close together, or you will need some type of extension cable – unless this cable can provide either USB 2 or USB 3 to both ports depending on which header you connect, and I don’t know if that is even possible. I’m going to connect USB 3 since I will probably never plug in USB devices. [Photo: USB 2 or 3]
  • The PCI slot covers on the case don’t screw in. They are spot-welded in a couple of places, and you have to wiggle or pry them until the welds break. I did not know this before mounting my motherboard, so I now need to either remove the motherboard or very carefully try to remove the PCI covers. [Photo: No wiggle room]
  • Cheap materials, but decent build. Nothing was obviously broken or bent. What do you expect since this costs a heck of a lot less than anything else with comparable features?
  • The HDD trays do not slide smoothly. I pulled out a couple and they are stiff. I don’t mind, since all of them at least move in and out, and I will not be removing hard drives unless they fail. I wouldn’t want to remove these trays too often because I could see them breaking.
  • There was no manual of any kind included. I wanted to look up something (I forgot what) and I couldn’t even find a manual online (I didn’t look too hard). This was not too important since everything is fairly straightforward.

There were a lot of cons compared to the pros, but the cost made up for them (so far). If I were building something for a more important purpose and buying new components, I would not have purchased this case; in that scenario, I would have looked at Supermicro. However, this is not going to be for a mission critical purpose. I am building something to use for on-site backup, and I already have off-site “cloud” backup on G Suite.

As for my components, you can see from the photos that I am using an older Gigabyte motherboard. We had a few of these at work as legacy machines and they are great boards! I have a Corsair 850 watt PSU, which should be adequate as long as the drive spin-up does not fry it. I topped it off with 32GB of RAM and an Intel i7-3770S. This CPU is more than I need for this build and has only a 65 watt TDP. I will be using two LSI 9207-8i SAS cards, which will take up my two usable PCIe x16 slots and not leave room for a 10GbE card.

Here are some more photographs, since I wanted to see more when researching this case.

I am awaiting parts to be delivered. Once I receive these items, I will be able to complete the build and add to this post.

UPDATE – later the same day on 8/14/2019

I received the one meter SAS cables, and it was a tight fit between the backplane and the fans. I had to temporarily remove the center fan in order to plug in the SAS cables and then run them around the fan bracket. The flat cables were easy to run between the fan bracket and the side of the case, which is quite narrow, but round cables would have been fine and likely a better choice. If I were to do it over again, I would have chosen right-angle, round cables that were a bit shorter. This was mainly because the edges of the flat cables would bend and kink a bit when routing them, and I was worried that could be an issue if I was not careful. I also had a few inches of slack, and round cables would be easier to loop. However, they are staying if they work.

SAS Cables 3

SAS Cables 1

SAS Cables 2

I also installed the Noctua fans, which have longer cables than the fans that came with the case. That meant there was slack cable protruding from the fan trays, but I just pushed it to the side. These fans push a good amount of air.

I haven’t yet done any cable organizing and it’s getting crowded. Two of the SAS cables are off to the side outside the case, so imagine two more SAS cables, and another SAS card. A modular PSU would have helped to reduce the cable clutter, but I didn’t want to add $140 to this build.

I finally got around to installing the sixteen hard drives, and as others mentioned in reviews of this chassis, the hard drive trays feel cheap and some did not insert smoothly; a few took some work to seat properly. Since there was no information about the backplane, I guessed it was ordered from left to right, top to bottom. However, Xpenology mixes up the drives in the OS, so be sure to document the serial number of each drive and which bay it is in.
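
If you would rather pull the serial numbers from the OS than read them off the drive labels, a quick loop in an SSH session works. This is just a sketch: it assumes the drives show up as /dev/sda through /dev/sdp and that smartctl is available (run it as root):

for d in /dev/sd[a-p]; do echo -n "$d: "; smartctl -i "$d" | grep -i 'serial'; done

You still have to match each serial number to a physical bay by hand, but at least the list is in one place.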

UPDATE 8/18/2019 – Getting Xpenology Running

I finally got around to installing an OS. Xpenology was initially troublesome, so I tried Windows 10 and Storage Spaces. I built a parity volume with the sixteen hard drives that ended up being about 37TB with single-drive fault tolerance; there was no option for two-drive fault tolerance in Windows 10. I was not happy with Storage Spaces since I had about 56TB of raw space. I didn’t do any speed tests, but everything I read said Storage Spaces was terribly slow. I decided to give Xpenology another look, and if it didn’t work, I was going to try another Linux NAS OS.

I had installed Xpenology multiple times, but I was having issues getting all the drives to show up in the OS (DSM). What I finally discovered was that the DSM image for the DS3615xs worked with my hardware. I also went with DSM version 6.1.x instead of shooting for DSM 6.2; I figured I would play it safe (for now). I did the following to change the maximum number of hard drives DSM would recognize:

  • I disabled all SATA ports on the motherboard since I was using two LSI SAS cards and no SATA from the motherboard. I don’t know if this was necessary.
  • The DS3615xs is a twelve bay NAS, so I needed to raise the maximum number of hard drives since I had sixteen; I decided to double it to 24. I’m going to be brief in explaining how I did this, so you may need to do some additional research (a consolidated sketch of the edited values appears after this list):
    • Enable SSH in DSM and use your preferred SSH client to connect.
    • Navigate to the following: cd /etc.defaults/
    • Edit synoinfo.conf using vi (sudo vi synoinfo.conf).
    • Go to line 138 and change maxdisks="12" to whatever you need. I recommend going beyond what you currently need, and you may need to add one disk to account for the USB boot drive. I had 16 drives, so I made it 24. DSM will then report the number of drives you selected as supported, but it may not yet show the drives you have installed. That’s what was happening to me until I found the next step…
    • Find internalportcfg="0xfff" and change the value to internalportcfg="0xffffff" for 24 drives. This value is a bitmask with one bit per drive slot, so you will need to work out the hex value that matches your number of hard drives. Sorry, but I didn’t log what line this was on in the config file.
    • To stop empty eSATA drives from showing up in DSM, go to (or around) line 274 and change esataportcfg="0x1000" to esataportcfg="0x0000".
    • Comment out line 293 with a # so it reads #supportraidgroup="yes".
    • Next, add this line directly below the one you just commented out: support_syno_hybrid_raid="yes"
    • Reboot your NAS and create your volume.
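
To pull those edits together, here is roughly what the relevant lines of /etc.defaults/synoinfo.conf looked like after my changes for 24 drive slots. Treat this as a sketch and search for the keys rather than counting line numbers, since they differ between DSM versions:

maxdisks="24"
internalportcfg="0xffffff"
esataportcfg="0x0000"
#supportraidgroup="yes"
support_syno_hybrid_raid="yes"

If you need a different drive count, internalportcfg is just a bitmask with one bit set per drive slot, so you can calculate the hex value in the same SSH session:

printf '0x%x\n' $(( (1 << 24) - 1 ))    # prints 0xffffff; swap 24 for your drive count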

I have sixteen older WD Red drives that were phased out of my primary storage, so failure is expected sooner rather than later (though all drives currently show 100% healthy). I chose SHR-2, which can tolerate two hard drives failing at once without data loss. I ended up with 42TB of usable space, compared to 37TB using Microsoft Storage Spaces with only single-disk fault tolerance. Come on Microsoft – put some effort into Storage Spaces.

UPDATE 8/24/2019

The parity check of the sixteen hard drives completed, and the system has been up and running with no issues. I was also miserable and not eager to work on this project for several days! I copied a lot of data to the Xpenology NAS to see how it performed. Things went smoothly, and it performed just as well as my other Synology NAS devices.

I couldn’t take seeing the “DSM update available” red badge, so I used the command below. This is only a temporary fix, and the red dot of shame will reappear the next time you log in. I need to try scheduling this to run at login. If you use this command, be sure to replace YourAdminAccount with your admin account name.

synoappnotify -c SYNO.SDS.AdminCenter.Application YourAdminAccount -f SYNO.SDS.AdminCenter.Update_Reset.Main -u 0
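
I haven’t set up the scheduling yet, but one option would be to wrap the command in a small script and run it from DSM’s Task Scheduler as a boot-up or recurring user-defined script task. This is only a sketch – the script path and the account name are placeholders, not something DSM creates for you:

#!/bin/sh
# Hypothetical script, e.g. saved as /usr/local/bin/clear-update-badge.sh
# Resets the "update available" badge for the named admin account.
ADMIN="YourAdminAccount"
synoappnotify -c SYNO.SDS.AdminCenter.Application "$ADMIN" -f SYNO.SDS.AdminCenter.Update_Reset.Main -u 0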

UPDATE 9/6/2019

The Xpenology NAS has been running with zero issues. I rebooted it with fingers crossed and everything started just fine. I tried to install Resilio Sync from the Synology Package Center, but it would not run; I had to download a package from Resilio instead, which worked fine. I also installed Cloud Station Server from the Package Center, which ran fine, but I stopped using it because it is a CPU hog, sitting at around 50% CPU. That was normal on my other Synology devices, but I assumed it was just their much weaker Atom CPUs. It turns out the program is a CPU hog even on a more powerful CPU.

I am going to hot swap a 3TB Red drive with a 6TB and see what happens. I need to know this NAS can handle swapping since hard drive failure is always expected. Will it work? Check back soon to see what happened!

9/7/2019 – Drive Hot Swap

I wanted to get this done yesterday, and though there was not much to it, I was in a lot of pain from surgery. This afternoon, before the pain crept in, I swapped out a 3TB Red drive for a 6TB Red drive. Xpenology alerted me that the volume was degraded, detected the 6TB drive, and let me select the option to repair the volume – just like a regular Synology NAS. I used an SSH session to speed up the rebuild process with the following command, which may not be appropriate for a Synology NAS with an Atom CPU. I am seeing write speeds to the newly added hard drive anywhere from 55 MB/s to 85 MB/s (megabytes per second):

echo 100000 > /proc/sys/dev/raid/speed_limit_min
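
For reference, that value is the Linux md driver’s minimum rebuild speed in KB/s; there is a matching maximum, and you can watch the rebuild progress from the same SSH session. A quick sketch (run as root; the md device names shown in /proc/mdstat will vary):

cat /proc/sys/dev/raid/speed_limit_max    # the rebuild speed ceiling, also in KB/s
cat /proc/mdstat                          # shows rebuild progress and an estimated time to finish
watch -n 10 cat /proc/mdstat              # refreshes the progress every 10 seconds, if watch is available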

9/8/2019 – Still Rebuilding RAID?

My DS1813+ finished the complete process of rebuilding its volume today around 2pm. When I checked the Xpenology NAS around 6pm, it showed “Checking parity consistency 0.00%” and there was no change after several minutes. I decided to reboot the system, and DSM warned that data scrubbing was in progress. After rebooting, DSM reported the parity consistency check at 32%, so it appears there is a bug that stopped the percentage from updating. I was a little concerned that the rebuild was still going when the DS1813+ had finished hours earlier, but then I remembered the Xpenology box has sixteen drives that were 81% full. I’m glad to report the rebuild process completed on 9/9/2019 and everything is running well.
