New Proxmox Cluster - Part 1
As mentioned a couple of posts ago, I have a new Proxmox cluster that is intended to take over from the HP Microservers.
Unfortunately this project has taken far longer than intended, and since its inception things have changed in my homelab, meaning that the Microservers will need to hang on a little longer for some specific tasks.
I ran into various issues along the way, but as the project is now mostly complete I think it's time to go ahead with the writeup.
Design and Hardware choices
I spent a long time looking into the various hardware options available to me.
At the time I designed this cluster, disk space was not really an issue in my lab and what I needed most was RAM and some decent CPU performance.
The HP Microservers, with their i3-2120s and 16GB of RAM, were at their limits, and while adding SSD storage had kept them going a little longer, performance of any kind was extremely lacking.
I decided early on that I wanted a cluster rather than a single node as I would like to explore shared storage and HA options in the future.
My only other requirements, apart from increased performance, were noise and power consumption. I briefly explored running a Dell R410 in my lab and quickly realised that noise was going to be a significant factor until I can either colocate or find somewhere else to put a rack.
After some reading and searching I settled on the Dell R210ii as my server of choice for this build.
Specifications
The Dell R210ii is the second revision of the R210 and is significantly quieter than the R200 or R210. It is a very short-depth 1U rackmount server and while 1U servers are generally very loud, this one is even quieter than the 2U options of the same era.
The R210ii has a single socket supporting LGA1155 CPUs, including Celeron, Pentium and E3-1200 (inc. v2) models. These CPUs are limited to a maximum of 32GB of DDR3 memory, and the R210ii, being a server, requires ECC memory, in this case unbuffered (UDIMM) like the Microservers.
The server initially came with two storage options:
- Up to two 3.5” SAS or SATA drives
- Up to four 2.5” SATA SSD or SAS drives
Ideally I wanted to run 4 SSDs, but as the 4 x 2.5” option appeared to be less common I settled for two R210iis with brackets to convert each 3.5” bay to 2.5”.
I was initially concerned that everything might not fit, or that the addition of these brackets along with the required SATA cables and splitter would hinder airflow, but I have had no issues in either area.
The servers and required components were bought from eBay, and while they initially came with an E3-1220 and 4GB of RAM, they were quickly upgraded.
I ended up with two of the following spec:
- Dell R210ii
- E3-1230V2 (4 Cores, 8 Threads @3.30 GHz)
- 32GB DDR3 ECC UDIMM (4 x 8GB)
- 4 x 250GB SSD (Mix of Kingston & Crucial)
- 2 x 3.5” bay convertor
- 1-to-4 crimped SATA power splitter (moulded SATA splitters can be a fire hazard)
- iDrac Express Module
- iDrac Enterprise Module
iDrac issues
With my NAS and Microservers I have become very used to having out of band management.
Out of the box, my new Dell R210iis came without any kind of out of band management, which did concern me as I would be unable to do anything remotely in the event of an issue.
Dell do offer an out of band option in the majority of their servers, known as iDrac. Unlike the iLO in my Microservers, the version of iDrac in the R210ii (iDrac 6) was an optional extra and comes as a physical hardware module rather than a license key.
The iDrac Express adds the base iDrac features to the motherboard and is required for a number of features. However, for out-of-band management with a dedicated NIC the iDrac Enterprise module is required.
I quickly purchased two iDrac Express and two iDrac Enterprise modules from eBay, and this is where I ran into issues.
The R210ii can be very fussy about the iDrac Express module used. My understanding is that any of the iDrac Enterprise modules should work; however, for the Express module I only had success with the part number “0PPH2J”.
Other part numbers gave various results, but none of them fully worked, and after some reading I have determined that this is a common issue.
Firmware
The R210ii is now end of life and so unless you can locate a valid update ISO, firmware updates need to be downloaded via the Dell EMC Repository Manager.
I would warn that others have reported bricking their motherboards using the Dell update ISOs, although I did not run into issues here.
The main problem I ran into was updating the iDrac firmware. The firmware that came on the iDrac modules was very old and did not allow the Java-based remote console to work due to new security requirements in later versions of Java.
Unfortunately the iDrac firmware cannot simply be upgraded directly to the latest version.
My recommendation is to configure iDrac as-is, sign into the web interface and use the web method to update the iDrac through the following firmware versions:
- 1.92
- 1.95
- 1.97
- 1.99
- 2.8
- 2.9
- 2.9.1
The iDrac updates can be downloaded from the Dell iDrac6 homepage and Dell have a video on the process here.
In my experience each of these versions needs to be installed in order for the next upgrade to succeed.
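The stepwise constraint above can be expressed as a small helper. This is a hypothetical sketch, not anything Dell provides: it simply encodes the upgrade chain from this post and, given the version currently on a module, lists the updates still to be applied in order.

```python
# Hypothetical helper encoding the stepwise iDrac upgrade chain described
# above. Version strings are taken as written in the post.
UPGRADE_CHAIN = ["1.92", "1.95", "1.97", "1.99", "2.8", "2.9", "2.9.1"]

def remaining_updates(current, chain=UPGRADE_CHAIN):
    """Return the firmware versions still to be applied, in order.

    A version not in the chain (e.g. the very old factory firmware the
    modules shipped with) must walk the entire chain; a version already
    at the end needs nothing further.
    """
    if current not in chain:
        return list(chain)
    return chain[chain.index(current) + 1:]

# A module already on 1.97 still needs four sequential updates.
print(remaining_updates("1.97"))   # ['1.99', '2.8', '2.9', '2.9.1']
print(remaining_updates("2.9.1"))  # []
```

Nothing here talks to the server; each listed version still has to be installed by hand through the iDrac web interface as described above.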
At this stage my servers were fully up to date and ready for the OS, details on which can be read in the next blog post, here.