The last few days I’ve been visiting the datacenter in Bend, Oregon to upgrade some of my personal servers. I’ve got some new servers to replace the aging systems that have been running for over five years. The oldest, an IBM X3550 manufactured in 2010, was my first system to have in a datacenter.
I recently acquired two HP DL360 G7 1U servers, each with 96GB of RAM, eight 300GB SAS drives, and two Intel Xeon X5660 CPUs. While they are not the latest and greatest systems, they are most definitely a huge improvement. The bottlenecks I was running into with the older systems were CPU performance and the low amount of memory.
Most of the work the servers do in the datacenter is running personal projects. I also run a few video game servers to play with co-workers and friends instead of renting servers from different companies. This lets us run whatever we want, whenever we want, without worrying about cost or any hardware/software limits.
During my visit to the datacenter to put in the new systems, I also wanted to update some of the software, as a lot of internal systems were running on very outdated versions.
The first day at the datacenter involved getting the new systems rack mounted and installed. I got new ethernet and power cables from fs.com so that all of the ethernet cables would be the same length, and picked power cables in bright orange/red to make them stand out from the other power cables in the shared cabinet.
Getting the systems installed and wired went really smoothly. It took about an hour to mount the rails, find enough rail screws, and get the cables unpacked and wired up.
After powering them on, I had to configure Proxmox with the correct timezone, IP addresses, network interface bonding, and hostnames, apply updates, and get Zabbix configured. The final step was joining the systems into a cluster. All of this took about three hours for the two systems. Once they were running and configured, I just had to migrate all of the virtual machines over. The migration took the longest, as some of the VMs have up to 100GB of storage and a few systems are using most of that space.
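For anyone curious what the bonding part looks like: Proxmox uses a plain Debian-style `/etc/network/interfaces` file, with the bond sitting underneath the default bridge. This is just a rough sketch — the interface names, bond mode, and addresses here are illustrative, not my actual config:

```
# /etc/network/interfaces — two NICs bonded under the Proxmox bridge
# (interface names, bond mode, and addresses are made up for this example)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad            # LACP; needs a matching config on the switch
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.11/24
    gateway 10.0.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

The clustering step itself is just `pvecm create <clustername>` on the first node and `pvecm add <first-node-ip>` on the second.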
The migration to the new systems went really smoothly, and I was able to get everything moved and running on the new hardware within the same day. The next day I planned to do the software updates and finalize any changes.
Thursday morning came and I was quite happy to see that everything was still up, running, and stable. The first software update I wanted to get out of the way was my pfSense router, which runs in a virtual machine for the datacenter. Public IP addresses are not cheap, so I have a small public block, use internal network addressing, and just port forward anything that is needed. The update went really smoothly, and the router came back up with everything working correctly afterward.
My next system to update was my FreeNAS box, which holds all of the virtual machine backups and the media for the Plex server that I host. This turned out to be the most difficult update. The system was about four years behind on updates, but that was not too concerning since it was never public facing. I started the update from the web GUI, the update reported success, and the system restarted without issue.

This is where the trouble started: the system would boot, but then just drop to a FreeBSD shell and would not load anything related to FreeNAS. The network interfaces would not come up, the hard drives were not mounting, and so on. After restarting a few times and trying the built-in recovery from the boot manager, the system would not start up at all. I ended up downloading the .ISO image from the website, burning it to a flash drive, and booting the system from that. During the install, the installer detected the existing configuration, saved it, and reinstalled just the base OS. Once the OS was installed and the system rebooted, it finished the upgrade process and came back up with all of the correct configuration.
At this point I had all of the software updates done and all of the new hardware was working, so I started to pack up from the little corner I had taken over at the datacenter.
While I was packing up, I noticed Zabbix on my right screen suddenly show some alerts, and right after that I lost connection to everything. This was slightly concerning, since everything had been working and I hadn’t touched anything in the server rack in the meantime. After some investigation, I found that my network switch was acting up. It would not respond on the web interface, over SSH, or even through the serial console cable. The only way to get it to respond was to unplug it from power, wait a few seconds, and plug it back in. It would start up and work, but after about three minutes it would stop responding again. After unplugging cables and restarting the unit several times, I decided it was time to just replace it, since I don’t live near the datacenter and don’t want to drive or fly back out. The old switch also showed signs of physical damage; I was unable to determine how or when it got damaged, but I did notice it when I first arrived. Luckily, the owner of the datacenter had some extra network switches lying around, and I was able to buy a Juniper Networks EX4200 series switch, since I use some of the managed functions such as bonding and VLANs.
I had never used a Juniper switch before, but I found the online documentation for the device to be really good. I could easily search for and find the correct CLI commands to get the unit configured. After about an hour, I had the device running and configured to meet my needs. I swapped the unit out and got all of the equipment back up and running.
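For reference, the EX4200 side of a bonded (LACP) trunk with a couple of VLANs boils down to a handful of `set` commands in the Junos CLI. This is a sketch pieced together from the documentation rather than my exact config — the interface names, VLAN names, and VLAN IDs are made up:

```
# Junos sketch: aggregated (bonded) uplink carrying two VLANs on an EX4200
# (interface names, VLAN names, and IDs are illustrative)
set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/0 ether-options 802.3ad ae0
set interfaces ge-0/0/1 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members [ servers mgmt ]
set vlans servers vlan-id 10
set vlans mgmt vlan-id 20
commit
```

The `lacp active` line is what pairs with an 802.3ad bond on the server side, so both ends negotiate the aggregate instead of relying on a static bundle.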
At this point I stayed around the datacenter for a few hours to make sure everything was fine, doing some cable management, putting labels on everything, and recording hard drive serial numbers. It has now been over 24 hours since I was last in the datacenter and everything has been running smoothly. I am hoping it stays this way for another five years before I upgrade the hardware again.