Camp Kiwi Data Center

__NOTITLE__
== Introduction ==


I started my "[https://www.reddit.com/search/?q=r%2Fhomelab home lab]" experience at our home on Wandering Way by pulling a couple of Ethernet cables from one side of the house, where the cable came into the building, to my office.  I also pulled one from there to a WiFi access point on the other side, and to a computer in the kitchen island.  The data center started up around January 2019.<br/><br/>
 
From there we moved to Top Flite, where I converted a pretty good space (maybe 10'x8'?) into a mini data center.  My brother-in-law is in the surplus business and ended up with some old Great Lakes 45U racks he didn't need, so I picked one up.  I actually picked up 3, but I only used one over there.  It's not ideal; being a "Security and Sound" [https://www.telcom-data.com/racks-and-cabinets/gl840s2-2436 GL840S2-2436] rack, it doesn't have any cable management features, but I got them basically free (thanks Steve!), so who's arguing?<br/><br/>
That Top Flite house was built in the early 2000's and was wired with Cat5e to most of its rooms, alongside dual RG6 coax, I think for the phone lines.  These were home-run back to the utility space that I'd taken over for the data center, so I started getting rack-mount switches, routers, and shelves, and mounted my cable modem and Mac mini server in there.  I upgraded to a USG Pro and USW-16, and eventually a USW-48-750 to power my WiFi access points and some cameras.  I became an amateur wire-puller as I added ceiling-mounted UAP-AC-Pros in 4 places around the house, started to monitor the property with Foscam PTZ cameras, and added dual fiber drops to a couple of computer locations.<br/><br/>
 
Eventually I added a portable 1-ton AC unit and QNAP NAS for a Plex server, and I was really a Home Labber!  The heat from the AC was ducted to the attached garage, which as luck would have it was on the adjoining wall.  Somewhere along the way, Cincinnati Bell started offering 1 Gbps FTTH service (250 Mbps upload) and I snapped that up too.  That upload speed was particularly important for my Plex server.<br/><br/>
Then it came time to move again.  I got my wife to agree that we would only look at houses that had fiber service, and that I would get a budget to install good Ethernet cabling.  And so it began...<br/>


== Stats ==
The data center features Ubiquiti UniFi "pro-sumer" networking gear, with:
* 96 Cat6a copper network runs, and 2 miles of cable
* 12 OM4 fiber-optic network runs (capable of 40 or 100 gigabits per second)
* Eight 20A power circuits
* 1.5 tons of AC cooling (typically set to 65F, which keeps top-of-rack exhaust temps below 75F), all-weather capable
* Core network is dual 10 gigabit per second fiber optic
* Primary internet service to the house is 1 gigabit per second fiber optic (Cincinnati Bell [https://www.cincinnatibell.com/discover#/whyfioptics Fioptics])
* Backup internet service to the house is 200 megabit per second cable ([https://www.spectrum.com/internet Spectrum])
* File server is 120 terabytes of disk space (running Plex and Nextcloud personal cloud service) on a QNAP TVS-1271U
* Servers
** 1U Redundant Virtualization Servers are Dell R620 (local services, DNS, NTP, Ad Blocking, Home Automation)
** 2U Redundant Virtualization Servers are Dell R720 (Cloud Data Server, Movie Streaming, eMail, Web, Wiki)
** 2U Storage Area Network Server is Dell R720XD (6x6TB VM storage, 6x12TB Backup)
** Each server was originally kitted out with 192GB of RAM
** As of January 2023 the 5 ProxMox servers and FreeNAS #3 (which manages the VM images) were upgraded to 512GB, and the Plex data server (FreeNAS #1) and backup server (FreeNAS #5) were upgraded to 384GB
* 2 Rijer 8-port VGA KVM switches with remote (USB) switching on my desk
* 7 Sonos network audio amps for basement level audio
* 4 Sonos network audio amps for main floor audio
* 3 Tivo Bolt DVRs (rack mounted) for entertainment
* 6 Tivo Mini Vox (Gen 3) units streaming from the Bolts above (with Netflix/Hulu/Plex/YouTube)
* 16 UVC-camera security system with on-site video backup
* 5 Ring Floodlight cameras
* 3 Ring Doorbell cameras
* 4 Ring Doorbell extender chimes
* 21 Network switches (8 in-rack)
* 7 UniFi HD WiFi Access Points
* Typically 150-175 associated network clients


== Origin of the Name ==
There's this guy named ''Dave''.  I used to work with him, and he is a bit... eccentric.  Anyway, he managed a data center at the office and had some folks supporting it.  Some of these folks were not native English speakers, and on a Monday morning telephone call things got confused between eating a can of peaches and visiting "Camp Peaches", presumably something like a summer camp for Scouts.  In a fit of brilliance he decided to name his data center "Camp Peaches".  He even acquired signage to proclaim same.  When he took on a new role, I inherited this sign (our IT organization did not see the brilliance in the naming scheme - however, it is still universally known as '''''Camp Peaches''''').




<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image: Peaches.jpg|Camp Peaches, origin of the 'Camp Kiwi' moniker
</hovergallery>
== Network Diagrams ==
For those interested.  (And please don't use these to try to hack me; that wouldn't be cool.)<br/>
In December 2022 the USG-XG-8 was replaced with a UXG, and 2 U6-Enterprise APs (2.5G uplinks) were installed in the basement to get ready for a Fioptics 2G/1G ISP upgrade.  The new generation multi-gig transceivers from Ubiquiti (and others) will show linked at 10G on the various UniFi switches, and link at 10/5/2.5/1 gigabit on the other end.  I also picked up a couple of 2.5G USB-C Ethernet adapters to test with.  The AltaFiber/Cincinnati Bell fiber upgrade happened in mid January 2023.<br/>
[[:File:933_Congress_Network 10G V2.4.vsd|Visio version]]<br/>
[[:File:933_Congress_Network 10G V2.4.pdf|PDF version]]<br/>
[[Image:933_Congress_Network 10G V2.4.jpg|350px|Network Diagram]]
<hovergallery maxhoverwidth=1600 maxhoverheight=1200>
Image:933_Congress_Network 10G V2.4.jpg
</hovergallery>
== Unifi Updates ==
Preparing for our new 2G Fioptics service (Cincinnati Bell - now AltaFiber - is offering 2G download / 1G upload speeds in Cincinnati as of late 2022), I found that Ubiquiti has multigig transceiver modules ([https://store.ui.com/collections/unifi-accessories/products/sfp-accessory UACC-CM-RJ45-MG] - SFP+ to 10GbE RJ45 Transceiver Module).  This shows as linked at 10G on the UniFi side but can link at 1/2.5/5/10G on the downstream side.  I have confirmed this module works with the gear below.  It is important to note that the non-UniFi transceivers worked better for me in the UXG than the native UniFi gear (on the 10G WAN port the UniFi module would not link above 1G, but the others would).<br/>
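
On the downstream (Linux) end, a quick way to see what a module actually negotiated; a minimal sketch, with a hypothetical interface name and example output:

<pre>
# Query the NIC's negotiated link state (interface name will differ per machine)
$ ethtool enp3s0 | grep -E 'Speed|Duplex'
        Speed: 2500Mb/s
        Duplex: Full
</pre>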
<br/><u>'''Unifi/Ubiquiti Gear'''</u>
* 48 Port Aggregation Switch (USW-Pro-Aggregation)
* 48 Port PoE Switch Pro (USW-Pro-48-PoE)
* US-XG-16 10G Switch
* [https://store.ui.com/collections/unifi-accessories/products/sfp-accessory UACC-CM-RJ45-MG] - SFP+ to 10GbE RJ45 Transceiver Module (pick 10G 100m)
<br/><u>'''Mating Gear'''</u>
* [https://www.amazon.com/dp/B097N5WJY9?psc=1&ref=ppx_yo2ov_dt_b_product_details Anker PowerExpand USB-C to 2.5 Gbps Ethernet Adapter] (on my M1 MacBook Pro)
* [https://www.amazon.com/dp/B085RJ4ZBB?psc=1&ref=ppx_yo2ov_dt_b_product_details Cable Matters 10G SFP+ Copper 30m] (SFP+ 10GBase-T - Model 104068) Transceiver
* iPolex ASF-10G-T 10GBase-T RJ-45 30m
* 10GTek ASF-10G-T
<br/><u>'''Confirmed NOT to work'''</u>
* Unifi UF-RJ45-10G SFP+ Copper RJ45 30m (will only link at 1G or 10G, and will not autonegotiate)


== Finished Pictures ==


=== Workstation ===
The basic data center is now online.  As you see in the pictures below, I have a small workstation inside the data center.  This is '''''not''''' my office - I have a full office elsewhere - and it's too cold and noisy (not as bad as you would imagine, but noisy) in the data center to spend a lot of time there.  But I have a set of KVM switches connected to my QNAP NAS, Mac Mini webserver, the Dell servers, and my UniFi Application Server that lets me interface with these computers.  The workstation allows me to not only administer the servers, but also monitor the security cameras from around the house.  I also have a monitor connected to an HDMI switcher that lets me view each of my Tivo Bolt DVRs on a separate display and validate they are functioning properly.  Note that in the most recent pictures I now have a "stand-up desk", since I don't have much room for a chair and really don't want to sit in here for long periods.<br/>


<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:workstation_final.jpg|Workstation - Monitors for Tivo and server control
Image:worstation_standing.jpg|Monitors for Tivo and server control, standing desk
Image:Cameras_status.jpg|Security Camera Configuration
Image:Cameras_all.jpg|Cameras-All View
Image:Cameras_exterior.jpg|Cameras Exterior
Image:data center design.png|The Design
Image:dual_8port_KVM.jpg|Dual 8 port KVM switches for server monitoring
</hovergallery>
=== Logo Panel ===
Thanks to Redditor "[https://www.reddit.com/user/98MarkVIII 98MarkVIII]" for posting in [https://www.reddit.com/r/PleX/comments/hxdfa6/made_a_plex_logo_2u_illuminated_rack_insert/?sort=new this thread] about his 2U "Plex Logo" lighted rack insert (I also posted these on [https://imgur.com/a/nylPIgm ImGur]).  I thought that was a great idea... so great an idea that I took it even further and had inserts made for my server "BrettFlix", as well as our core software stack (ProxMox PVE and FreeNAS).  Sadly, I didn't have that kind of space in the rack, so I got a short wall-mount 8U rack to make into strictly a display unit to line up in front of the data center door.  I think it looks great.<br/>
<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:IMG_7691.jpeg|Through the looking... door, I mean door...
Image:IMG_7693.jpeg|Door open, lights off
Image:IMG_7694.jpeg|...closer...
Image:IMG_7695.jpeg|... and lights on...
</hovergallery>


=== Climate Control ===
Data center temperature is managed with a 1.5-ton Carrier/Trane (40MAQB18B--331/38MAQB18R--301) mini split AC unit.  This unit is capable of cooling the data center ''all winter'' with outside air temperatures well below freezing (the servers will always need some cool air) - down to minus 20F.  The wall-mounted thermostat is in the hot aisle to check the server exhaust air temperature, and is set to keep that temperature in the 62F range.  The temperature is also monitored continually with a SensorPush temperature probe on a shelf in the hot aisle, as well as CyberPower UPS environment monitors (and more SensorPush units for iOS reporting) on the top of each rack and at the mini split's return (at the top of the unit).  Both of these device types can SMS/push notify me if the temperature or humidity gets out of range.  The server rack has the most extreme delta-T, with exhaust temps at the top of the rack at about 72F, whereas the network rack is about 66F, the entertainment rack 68F, and the hot aisle at the 65F set point (the cold aisle is typically in the upper 50s F).  I do have to be careful opening the window in winter, though, as the AC has a fail-safe that shuts it down if its incoming air is below 60F.<br/><br/>
 
UPDATE: As of early 2021 I've made some minor modifications which have really helped.  I was never really happy with how the temperature control was working, as the hot aisle temperature always seemed too high, as did (occasionally) the server outlet temperature (top of rack).  But it got really weird this last winter.  I figured, it's pretty cold outside, and I have this handy window.  I know, I know, heresy.  Dust.  Humidity.  Whatever.  I get it.  Sure, it would be better to have filtered air and strict humidity control, but these are surplus servers, and even with solar, electricity isn't free (we are not completely self-sufficient on solar)... and I do monitor the humidity.  Anyway, what happened was that this spring, when it started to warm up, the AC wouldn't keep up.  Odd.  Well, our friendly HVAC guy came by, and it turns out it tripped because the incoming air was too cold.  Yep.  It protects itself, and the window was so close to the unit that it was sucking in air from the window.  That got me thinking: if that's true, I bet it's not circulating the hot air like I expected it to.  The theory is that the cold air should sink and be sucked into the cold side of the servers, and the hot air should rise, be trapped by the soffit, and be directed back to the AC unit.  And it kind of was.  But not enough.  So I built a light frame along the soffit, across the top of the rack, and to the wall by the AC, and dropped a curtain of painter's plastic down the side of the rack to isolate the racks from the window, trap the cold air in front of the racks, and isolate the AC return to the hot air.<br/>


<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:Thermostat.jpg|Thermostat
Image:Thermostat2.jpg|Thermostat final
Image:HVAC Screen 1.jpg|"Roof" isolates cold air supply from return
Image:HVAC Screen 2.jpg|Roof from the other end
Image:HVAC Screen 3.jpg|Sealing top of rack, cold side
Image:HVAC Screen 4.jpg|Return side of AC, over rack, with Bluetooth thermostat
Image:HVAC Screen 5.jpg|Hot side of roof, with rack exhaust fans - each with a Bluetooth thermostat
Image:HVAC Screen 6.jpg|curtain isolates cold air from window
</hovergallery>


=== Power ===
A new subpanel was mounted adjacent to the data center to provide power to the server space and the new AC unit.  This panel provides eight 20A circuits to the data center below the raised flooring: 2 circuits under the networking rack providing primary and backup power, and 3 circuits to each server cabinet.  Each rack is provided with a CyberPower 1500VA UPS (OR1500LCDRM1U), with the networking and server racks having both primary and secondary/backup units to feed the dual-power-supply gear in those racks.<br/><br/>
 
Additional power management capability is coming (DONE - July 2019).  I started by installing Eve Energy HomeKit power monitors on the primary and backup legs of each machine so I can track total power usage, and I noted that the 1500VA CyberPower UPSes were over-taxed in their original configurations, providing only a few minutes of run time.  I therefore planned to get CyberPower monitored PDUs for each circuit in the rack and for each UPS.  Further, I planned to install one UPS for each Dell server at a minimum, connect the USB output to that server, and run Network UPS Tools (NUT) to signal the host to shut down the VMs and power off when power gets critical.  UPDATE: This is now done; in addition to the 900W CyberPower units, I have added four 1,350W CyberPower (2U 1500VA model PR1500LCDRT2U) UPS units in the server rack.  Each server has a primary and secondary leg on different UPS units, and each UPS is fed from a different circuit breaker.  I also added CyberPower 1U rack-mount monitored PDUs so I can easily see the amp loading for each circuit.  I have not yet purchased the network expansions for these units, but I plan to.  With this additional power I now have 30 minutes minimum runtime on power failure.  I replaced all of the black power cables with color-coded ones to make it easier to quickly see which servers are powered by which UPS (and consequently which circuit).  I've also started to play with Proxmox's ''high-availability'' settings to migrate running VMs to other servers when a system goes down.  Right now each UPS is connected via USB to one server, and NUT is running in standalone mode to signal the host to shut down when the battery goes critical.  I will set them up to run in a master/slave configuration later so that the host will only shut down when both legs of power go critical; I just haven't had the time yet.
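
For reference, here's roughly what that standalone NUT setup looks like on one host; a minimal sketch, with a hypothetical UPS name and password:

<pre>
# /etc/nut/nut.conf - this host watches its own USB-attached UPS
MODE=standalone

# /etc/nut/ups.conf - one CyberPower unit on USB (the name is made up)
[cyberpower-a]
    driver = usbhid-ups
    port = auto
    desc = "Server rack UPS, primary leg"

# /etc/nut/upsmon.conf - shut this host down when the battery goes critical
MONITOR cyberpower-a@localhost 1 upsmon examplepass master
SHUTDOWNCMD "/sbin/shutdown -h +0"
</pre>

(The matching <code>upsmon</code> user and password also have to be declared in <code>upsd.users</code>; the later master/slave setup would add MONITOR entries on the other hosts.)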


<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:floor_power.jpg|Power Outlets in Floor under racks
Image:Subpanel.jpg|Subpanel
Image:IMG_0469.jpg|CyberPower PR1500LCDRT2U
Image:new_UPS_back.jpg
Image:power_color_code.jpg|Color-Coded power cables
Image:power_monitors.jpg|Power Monitoring
Image:new_UPS_1350W.jpg|Older UPS picture
</hovergallery>


=== Flooring ===


<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:IMG_6821.jpeg|Carpet after plastic was removed
Image:IMG_6820.jpeg|Carpet after plastic was removed
Image:floor_power.jpg|Power Outlets in Floor under racks
Image:flor_squares.jpg|Carpet Design
</hovergallery>


=== Servers and Racks ===
I have 3 racks for my equipment.  The first rack houses all of the primary networking gear, the middle rack my servers, and the last rack the entertainment hardware, like the Sonos sound systems and Tivo networked DVRs (digital video recorders for cable and over-the-air TV).  Each rack has backup battery power in the event of an electrical outage, as well as lighting: the fronts feature RGB LED "mood" lighting, and the backs have white LED task illumination, which is usually off but great for maintenance.<br/><br/>


I have a few servers.  The most impressive of which is a QNAP 12-bay NAS, or 'network attached storage' ('''UPDATE''': this is now my weakest server! although it is still the primary storage unit).  This has a PC-class CPU and twelve 10 terabyte hard drives, two of which serve for redundancy in case of failure, so I net out at around 100 TB of total storage.  This computer hosts my Plex server ('''UPDATE''': it only serves the files now; the Plex server is a VM in my ProxMox cluster) and library, as well as a virtualization environment that runs a private cloud ([https://nextcloud.com/ NextCloud]) instance.  This cloud keeps the family's files and pictures, replicating them across our devices, as well as a shared family calendar and contacts/address book ('''UPDATE''': the NextCloud private cloud is also now in my ProxMox cluster).  The Plex content is available on our Tivos as well as iOS, PlayStation, and other devices at home and away.<br/><br/>


The other main server that I host is my mail, web, and Wiki server, [http://ferrellmac.com ferrellmac.com], running on a Mac Mini.  Apple is abandoning the Server product, so these will soon move to an Open Source Linux system.<br/><br/>


My final current server is a Ubiquiti UniFi Application Server.  This is a purpose-built system in the UniFi line that hosts both the UniFi "software defined network" and the UniFi video security camera NVR (network video recorder).  This recorder supports our 16 security cameras and all of the network configuration via a "single pane of glass".<br/><br/>
 
'''UPDATE''': With the coming sunset of Apple's server product line, I've moved all of my services to Linux virtual machines (mostly Ubuntu 16.04, although I'm sure I'll need to gradually move to 18.04 soon) running in a cluster of 4 ProxMox host machines.  ProxMox is a Type 1 hypervisor with high availability, so I can live-migrate a server from one physical host to another without the guest OS being aware that it was moved.  This is great, as it allows me to keep my Plex server up even while upgrading and rebooting the host machines.  4 host machines means that I always have a quorum of live hosts even if a system has to go down.  All of the hosts are 12th-generation Dell rack-mount Xeon server hardware with 192 GB of RAM, 10G networking cards, and dual redundant power supplies.  I have another Dell (R720XD) running FreeNAS as the SAN storage for VM images, ISOs, and backups.<br/><br/>
 
I plan to get another R720XD and fit it out with twelve 12TB drives so I can eventually move off of my QNAP server.  It's just shown itself to be too limited for what I wanted to do.  Its virtualization was slow/weak and problematic - I had several instances where VMs were corrupted and I struggled to get the backups/snapshots back - and the transfer and Plex encoding speeds were just not up to my needs.  The FreeNAS server has so far shown itself to be superior in every way.<br/><br/>


<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:IMG_7353.jpeg|Etsy magnetic decals for racks
Image:IMG_7354.jpeg|"Cold Aisle" rack with logos
Image:cold_aisle.jpg|"Cold Aisle" or front of rack near mini-split HVAC unit
Image:rack_back.jpg|"Hot Aisle" - Back of racks (white LED lighting provided for service)
Image:rack_front.jpg|Front of racks (multi-colored LED for mood lighting)
Image:cold_red.jpg|Code Red!
Image:Entertain_top.jpg|Entertainment Rack - Tivo DVRs, Sonos connect amps for house in-ceiling audio
Image:Entertain_bottom.jpg|Bottom of Entertainment Rack
Image:IMG_6502.jpg|Server rack - Running virtual servers for private cloud, web hosting, email, security camera monitoring and recording, etc.
 
Image:IMG_7355.jpeg|Servers with logos/branding added
Image:IMG_7361.jpeg|Servers with logos, from opposite side
Image:IMG_7362.jpeg|Servers with logos, lower down
 
Image:IMG_6503.jpg|The business end of the Data Center
Image:IMG_0469.jpg|UPS backup power for servers
 
Image:new_servers_back.jpg
Image:power_color_code.jpg|Back of servers with color-coded power cables
Image:Server_bottom.jpg|Bottom of Server rack, before additional servers installed
Image:Network_top.jpg|Networking rack
Image:Network_bottom.jpg|Bottom of Networking rack
Image:Pretty-good-wifi.jpg|iPhone WiFi performance test 377Mbps down / 239 Mbps up
Image:2019-07-04_DSL_Reports_speed_test.jpg|Wired speed test - 940Mbps down, 240Mbps up
Image:transfer_to_QNAP_2019-07-04.png|File copy speed to QNAP NAS, 237.1MB/s (1,900Mbps)
Image:transfer_to_FreeNAS_2019-07-04.png|File copy speed to FreeNAS, 458.3MB/s (3,365Mbps!!)
Image:transfer_iPerf_2019-07-04.jpg|iPerf from my iMac to the ProxMox cluster, 10GBe both ends, just under 8Gbps
 
Image:new_servers.jpg|previous rack stack
Image:Server_top.jpg|previous rack stack
</hovergallery>
 
=== Reverse Proxy ===
So, there's this thing called a [https://en.wikipedia.org/wiki/Reverse_proxy reverse proxy].  For the longest time I didn't really know much about them, and didn't think I needed one.  I had a bunch of port forwarding rules, and it kind-of-sort-of worked, but all of my addresses were https://bdfserver.com:SomeWeirdPortNumber, which was ugly and not easy to simply forward.<br/>
 
Then someone turned me onto [https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/ nginx reverse proxying].  It was a little awkward at first, and getting [https://letsencrypt.org/ Let's Encrypt] to handle the SSL certificates automatically took me a bit of tinkering, but now it's awesome.  I can give all of my services a subdomain of bdfserver.com, they can happily ride on ports 80/443 for HTTP(S) or any other port I need, my router just sends everything to the proxy, and the proxy handles the rest.  Very cool.
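
For illustration, here's the shape of one such nginx server block; a minimal sketch with a hypothetical subdomain and backend address (the certificate paths are the ones certbot typically writes):

<pre>
# One subdomain per service; nginx terminates TLS and forwards to the backend
server {
    listen 443 ssl;
    server_name plex.bdfserver.com;          # hypothetical subdomain

    ssl_certificate     /etc/letsencrypt/live/plex.bdfserver.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/plex.bdfserver.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.50:32400;   # backend host:port, hypothetical
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
</pre>

With one block like this per service, the router only ever has to forward 80/443 to the proxy box.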
 
=== Speeds ===
I have fiber-to-the-home from Cincinnati Bell (Fioptics service) and I get most of the advertised speeds, as I'm close to their main offices.  Here's a typical set of speed tests.  My 5K iMac pulls 936Mbps download and 238Mbps upload.  The second image is from my iPhone - 418Mbps download and 230Mbps upload.  The third image is an iperf3 performance test from my Plex virtual server to my iMac: 7.28Gbps on the 10Gbps link.  I'm not sure why it's only 7+, and maybe jumbo frames would help, but that's very consistent.  ('''UPDATE''': I now know why - my USG-XG 10G router is a bit of a bottleneck.  I have since added a third 10G networking interface to each of my ProxMox virtual hosts and they can all now get 9+Gbps to each other and my storage SAN.)  The fourth image is file transfers to my NAS, the QNAP with its 12 x 10TB spinning disks.  325-350MB/s is pretty typical, and for spinning disks I think that's pretty good.  The last image in this set is the iperf from the QNAP to the iMac, at a little over 8Gbps.  With a 350 mega''byte'' per second file copy speed I can copy a 10GB movie file to the NAS in about 30 seconds.  That's nice.<br/>
 
Update:  After some tweaking, I'm getting over 400MB/s copies to my VM NAS and 540MB/s to my file NAS, and better than 8Gbps iperf network connection tests between my virtual hosts and my FreeNAS SAN.
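
The throughput numbers above come from ordinary iperf3 runs between two 10G hosts, roughly like this (hostname is hypothetical):

<pre>
# On the machine being tested (e.g. the FreeNAS box):
iperf3 -s

# From the iMac: a 30-second test, reported in Gbits/sec
iperf3 -c freenas.local -t 30 -f g

# Parallel streams can help saturate a 10G link when one TCP flow falls short
iperf3 -c freenas.local -P 4
</pre>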
 
<hovergallery>
Image:dsl_reports.jpg|Wired internet to DSL Reports
Image:IMG_6071.PNG|iPhone 7 WiFi Speedtest to MetroNet in Lexington on my Unifi UAP-AC-HD unit upstairs
Image:freeNAS-speeds.jpg|UPDATED: 9+Gbps from all VM hosts to FreeNAS share
Image:SVR-02_to_FreeNAS_8G.jpg|8Gbps VM Host svr-02 to FreeNAS share
Image:iMac_to_QNAP_539MBps.jpg|539 eye-popping megabytes/second from iMac to QNAP file share (movie data)
Image:iMac_to_FreeNAS_450Mbps.jpg|450 MB / second from iMac to FreeNAS VM share (Install ISO, VM Images, and Backups)
Image:iperf_to_plex.jpg|OLD: 7.28Gbps iperf Plex VM to iMac
Image:file_copy_imac_to_qnap.jpg|OLD: 354MBps file copy iMac to QNAP
Image:iperf_to_qnap_imac.jpg|OLD: 8.1Gbps iPerf from QNAP to iMac
</hovergallery>


=== Software ===
==== ProxMox ====
[[Image:Proxmox_logo_400px.png|thumb|left|200px|ProxMox]]
I chose [https://www.proxmox.com/en/ ProxMox] as my virtualization hypervisor, at least initially, because it's free and open source and has most of the features of VMware, which I had originally intended to use (with a low-cost VMware User Group license).  It has high-availability clustering and live migration of VMs from node to node, and although it's not a true Type 1 hypervisor, it more or less runs on the bare metal, with a basic install of Debian at the core and the ability to run LXC containers as well as full KVM VMs (for OSes like macOS and Windows) on the same host.
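
Live migration is the feature I lean on most; from a ProxMox node's shell it's a one-liner - a sketch, with hypothetical VM ID and node name:

<pre>
# Move running VM 101 to node pve2 without shutting it down.
# With the VM's disks on shared storage, only the RAM state has to move.
qm migrate 101 pve2 --online
</pre>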


==== FreeNAS ====
[[Image:freenas.png|thumb|left|200px|FreeNAS]]
I had planned to use a Drobo B810i on iSCSI as the main Storage Area Network device for my ProxMox cluster, but it was not able to run an SSH server to allow the cluster to log in, so I had to go another route.  I chose to use [https://www.freenas.org FreeNAS] on another Dell server.  This is more or less the recommendation of the ProxMox folks.  It runs on FreeBSD (which has the ZFS file system natively), and ZFS lets me pool all of my storage types on a network host that appears to the VMs as local storage.  Having "remote" storage allows me to live-migrate a VM - move it from one physical server to another while running, without the VM knowing that it is being moved.
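
FreeNAS drives all of this from its web UI, but under the hood the layout amounts to something like this (pool, dataset, and device names are hypothetical):

<pre>
# A RAIDZ2 pool: any two drives can fail, much like the RAID6 arrays elsewhere here
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# A dataset for VM images, exported to the ProxMox cluster over NFS
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
</pre>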
==== Ansible ====
[[Image:ansible2.png|thumb|left|150px|Ansible]]
So, I've started to have enough unique virtual hosts that managing them is becoming a chore... just logging into each one to update the OS on a weekly basis.  I've been meaning to play with [https://www.saltstack.com/ Salt Stack], because it sounded cool... but everyone says that [https://www.ansible.com/ Ansible] is the thing... so I watched a couple of [https://www.youtube.com/watch?v=icR-df2Olm8&list=PLCvubLN4VhpB7BKUXdnz-taybmrqZ1NOy&index=1 YouTube videos] and it seemed straightforward, so I'm trying that.<br/>
So far, it's working great.  A couple of quick text files, enter my password, and apt update, apt upgrade, BOOM!  Playbook done.  Roles done.  Tasks done.  Update?  Oh yeah, just did that for all 30 servers!<br/><br/>
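
The weekly update job really is about that small; a sketch of the playbook (file and inventory names are hypothetical):

<pre>
# update.yml - apt update/upgrade every Ubuntu VM in one pass
- hosts: all
  become: yes
  tasks:
    - name: Refresh the apt cache and upgrade all packages
      apt:
        update_cache: yes
        upgrade: dist
</pre>

Run it with something like <code>ansible-playbook -i hosts update.yml --ask-become-pass</code> and all 30 servers get patched in one shot.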


=== Door ===
The door is an 8' single-lite exterior door (because the room is a different climate zone, and loud) with custom signage and an August smart deadbolt lock, so I can control and monitor who has access.<br/>


<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:August2.jpg|August
Image:August.jpg|August
Image:Door3.jpg
Image:Door5.jpg
</hovergallery>


=== Patch Panel ===
Here's the final patch panel, with the fiber lines connected as well.  We have 96 copper (Cat6a) drops and 12 fiber (OM4) runs (2 to each of the offices and to the old utility demarcation).  The patch panel separates the house wiring from the rack wiring so that each can be logical for its own purposes.  The racks are connected via color-coded patch cables that run behind a panel inside the wall, through a channel under the floor, and up underneath each rack, where they land on in-line Ethernet keystones in another set of patching panels and then go on to the appropriate networking switch.<br/>


<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:final_patch.jpg|Final Patch Panel
Image:Patch_covered1.png|Patch
Image:Patch_covered2.png|Patch
Image:Patch_fiber.jpg|Patch panel with fiber
Image:Patch_fiber2.jpg|Fiber
Image:in_line_coupler.jpg|Keystone patch
Image:Cat6a Patch.jpg|Keystone patching panel
Image:Wire patch2.jpg|Patch raceway
</hovergallery>


=== Servers ===
{| cellspacing="0" cellpadding="10"
|style="width: 20%"|<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:tvs-1271u-RP.png|TVS-1271U
</hovergallery>
|style="width: 80%"|
|style="width: 80%"|
* Primary file server is QNAP TVS-1271U-RP-i7-32G 12-Bay - (running Plex and Nextcloud personal cloud service)
* Primary file server is QNAP TVS-1271U-RP-i7-32G 12-Bay - (was running Plex and Nextcloud personal cloud service, now just a file server)
** Core i7 Intel Processor
** 32GB RAM
** mSATA Flash Module FLASH-256GB-MSATA (2 x 128GB)
** 10GbE Networking (LAN-10G2SF-MLX Dual-Port PCI-Express SFP+)
** 12-3.5" HDD (Seagate IronWolf Pro ST10000NE0004 3.5" SATA 6Gb/s 10TB 7200rpm) (120 TB storage, 100 TB usable in dual redundant RAID6 array)
|-
|style="width: 20%"|<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:R620_bezel.jpg|R620
Image:r620-8port-nobezel.png|R620
</hovergallery>
|style="width: 80%"|
|style="width: 80%"|
* 1U Virtualization Server is Dell R620 - (Running DNS/Domain Name System, NTP/Network Time, PiHole DNS Black Hole, Home Assistant home automation)
* 1U Virtualization Servers are redundant Dell R620 - (Running DNS/Domain Name System, NTP/Network Time, PiHole DNS Black Hole, Home Assistant home automation)
** Dual Intel Xeon E5-2660 Eight Core 2.2GHz 20MB 8.0GT/s 95W processors
** 192GB (12 x 8GB) + (12 x 8GB) PC3L-10600R ECC RAM
|-
|style="width: 20%"|<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:R720_bezel.jpg|R720
Image:R720-nobezel.jpg|R720
</hovergallery>
|style="width: 80%"|
|style="width: 80%"|
* 2U Virtualization Server is Dell R720 SFF - (Running Plex, NextCloud, etc.)
* 2U Virtualization Servers are redundant Dell R720 SFF - (Running Plex, NextCloud, etc. SFF is small form factor, for 2.5" drives)
** Dual Intel Xeon E5-2667 v2 Eight Core 3.3GHz 25MB 8.0GT/s 130W processors
** 192GB (12 x 8GB) + (12 x 8GB) PC3L-10600R ECC RAM
|-
|style="width: 20%"|<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:R720XD_bezel.jpg|R720XD
Image:R720XD-nobezel.jpg|R720XD
</hovergallery>
|style="width: 80%"|
|style="width: 80%"|
* 2U Virtualization Server is Dell R720xd LFF - (Running FreeNAS)
* 2U Virtualization Server is Dell R720xd LFF - (Running FreeNAS, LFF is large form factor or 3.5" drive capable)
** Dual Intel Xeon E5-2660 Eight Core 2.2GHz 20MB 8.0GT/s 95W processors
** 192GB (12 x 8GB) + (12 x 8GB) PC3L-10600R ECC RAM
|}

<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:2018_Cat6a_cable.jpg|Spools of cable await installation
Image:amored fiber.jpg|I bought pre-terminated runs of OM4 40G capable armored-jacket fiber optic.  I wanted to be able to pull it without worrying too much about damaging the fiber
Image:933_Congress_Network 10G V2.4.jpg|Network Design
</hovergallery>


For offices I pulled 2 Ethernet and 2 fiber runs to the logical computer location.  I pulled dual Ethernet to each WiFi location so that I could power a WAP as well as (potentially) have a 10G connection, as I understand the current generation of access points can't be powered off the same port that passes 10G data.  I went with [https://www.anixter.com/en_us/products/10GXS12-0061000/BELDEN/Voice-and-Data-Cable/p/CMR-00423BNT-6A-06 Belden 10GXS12] Cat6a cable, as I wanted to make sure the cable could do the speed, and I could always work with the terminations if I had difficulty with speeds.  This cable is huge and stiff - nearly 1/3" per cable, with a white plastic spline down the middle.<br/><br/>


I wanted to minimize the number of switches required in my infrastructure, but the Cat6a cable is pretty massive and fairly expensive, so I decided 2 runs was the most I could realistically justify at each location.  At a minimum, I can have copper going to each computer, and a copper 10G run still available for a switch for the balance of the users at that location.  Note that my computers (Macs) are the only 10G-capable devices I have today, aside from my 2U 12-bay QNAP NAS, which has dual 10G fiber connections, and my Ubiquiti 10G USG-XG router and UAS Application Server with dual copper ports, but those are rack mounted ('''UPDATE''': All of my rack-mount Dells now have dual 10G copper ports).  I have 3 Ubiquiti 10G USW-XG-16 switches (1 in each rack) and 2 48-port switches (1 USW-48-750W and 1 USW-L2-48 with dual PSU) as the backbone of the network, but all of the remote switches have at best SFP 1G fiber ports.  Hopefully Ubiquiti will bring 10G SFP+ to the 8-port switches soon, but the backbone of the network will be dual 10G fiber, with 10G to each of the computers.  If the price of the 10G switches comes down, I'll put a 10G USW-XG-16 in each office with dual fiber uplink and 10G copper to each computer.  That's how I ran at my old house, but it's limiting, as these switches only have 4 copper ports ('''UPDATE''': UBNT now has reasonably priced copper transceivers for these switches).<br/><br/>


For fiber I went with OM4 with LC connectors on each end.  I found wall plates and a patch panel for LC connectors, and all of my switches have LC connectors.  I went with these [https://fibercablesdirect.com/armored-duplex-fiber-optic-patch-cables/353-om4-lc-lc-armored-duplex-fiber-patch-cable-100g-multimode.html?search_query=FCDUS353v10230&results=1 10/40/100G cables] with multimode 50/125 micron cable in an armored jacket.  This might not be perfect, but it seemed to give me decent confidence that we could install it safely, and even the potential to move up from 10G later as switches get cheaper.  Altogether we pulled about 96 Ethernet and 12 fiber lines.<br/><br/>


I also included dual fiber, dual Cat6a, and dual coax from the current "utility" space, where the cable and fiber services enter the house, to the new data center.  I hope to move those utilities as we finish remodeling the house, but that has yet to be determined ('''UPDATE''': This move was made for the fiber, but the TV still comes into what will be my master closet).  Until then I can bring the incoming cable, fiber, and attic antenna to the data center - over fiber or Ethernet.  I plan to rack mount my Tivo base units ('''UPDATE''': Done, check out the Tivo rack-mount kits under Parts and Bits), with a cable as well as an over-the-air box feeding 4K Tivo Minis at each TV.  My wife and I had cut the cord, but my in-laws aren't ready to do that yet.  I also plan to rack mount my Sonos Connect Amps for all of my basement spaces with these handy [https://www.parts-express.com/penn-elcom-r1498-3uk-sonoszp120-3u-custom-rack-shelf-for-2-x-sonos-connect-amp--262-2967 3U shelves], part #262-2967 ('''UPDATE''': Done - each room in the basement got a local volume control and dual ceiling speakers, and each TV location has a digital audio run back to the rack so I can place the TV sound onto the speakers with an Apple AirPort Express).<br/><br/>


In terms of users of the network, part of getting this house was moving my in-laws in so they could have some support as they age, and have single-floor living.  My wife and I are remodeling the basement for our master suite.  The existing house has wired sound, with Sonos users, and it's a large rambling house with a pool and cabana, so I have dual Ubiquiti HD wireless APs on each end of the main house, 1 in the garage and pool cabana, and 1 each upstairs and in the basement.  There will be 1 as well in the new garage we will be adding.<br/><br/>


I have installed a bunch of HomeKit automation gear: a lot of Leviton Decora Smart Switches to control the house lighting and the fountain, Eve motion detectors and door/window modules, Eve Degree temperature sensors, iDevices outdoor switches for outdoor lighting, 2 Nest thermostats, 3 August locks with WiFi extenders, a Rain Bird WiFi sprinkler module, 5 Sonos Connect Amps (and I plan to add several more for the basement remodel - '''UPDATE''': Done), 2 laser printers, 2 multi-function printer/copier/scanners, 5 Ring floodlight cameras and 3 Ring doorbell cameras, and 16 Ubiquiti cameras (some PoE and some micros on WiFi).  I have 2 work laptops in the house, 2 personal laptops, 2 desktops, 1 Windows server, the UAS, the QNAP NAS, and I plan to add some servers ('''UPDATE''': Done, see above), maybe Dell 720s.  I have about 10 Tivo devices and nearly as many Apple TVs (we do a lot of AirPlaying), as well as a HomePod, 4 iPhones, 5 Echos, 4 Kindles, and about 6 "Smart" TVs.  Also, the security system will be upgraded to allow WiFi connectivity.  At any moment I have nearly 150 WiFi devices.<br/><br/>


Nearly as important as having all of this connectivity is keeping it straight, well understood, and maintainable.  To that end, I have clear Brother labels on each wall plate with the patch panel number, and each wire has a heat-shrink label with a clear heat-shrink protective covering layer to keep it legible.  Each cable has a four-digit number: the first digit is the patch strip (0/1/2/etc.) and the rest is the port number, so "0001" is the first port in the first patch panel.  The house patch will be connected, by routing back inside the wall, to an inline patch in the respective rack it's served by.  The house patch panels are these [https://www.amazon.com/gp/product/B0735WSGV1/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 24 port Cat6a] units - giving me 96 ports for Ethernet, and the wall mount rack is the [https://www.amazon.com/gp/product/B000VDPBXM/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 StarTech 6U "WallMount6"].  The fiber patch panel is this [https://www.amazon.com/gp/product/B074F3G2Q2/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 12 port unit].  For the in-rack connections I'm dropping these [https://www.amazon.com/gp/product/B00UM328NC/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 SF in-line couplers] from the wall patch to feed the switches.  I plan to have a patch above and below each 48-port switch to keep the connections nice and tidy.  Each power circuit is similarly labelled, and the power cables in the data center are color-coded and labelled.  Each UPS is labelled with the circuit that feeds it, and each server is labelled with the UPS that serves it.<br/><br/>
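To make the numbering scheme concrete, here's a minimal sketch in Python (the function name is purely illustrative):

<syntaxhighlight lang="python">
def cable_label(panel: int, port: int) -> str:
    """Four-digit cable ID: patch panel digit, then zero-padded port number."""
    if not (0 <= panel <= 9 and 1 <= port <= 999):
        raise ValueError("panel must be 0-9 and port 1-999")
    return f"{panel}{port:03d}"

# "0001" is the first port on patch panel 0; port 24 on panel 1 is "1024".
assert cable_label(0, 1) == "0001"
assert cable_label(1, 24) == "1024"
</syntaxhighlight>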


I'll post my network diagram soon ('''UPDATE''': Done), but here are some pictures midway through construction.  As of June 25, 2018 the wire is pulled, tested, and in the jacks.  Since there's so much construction, I have a temporary network setup inside the closet, so the patch is kind of nasty looking, with wires poking through a small hole in the rear drywall.  Long term I'll clean up this wire, and it will get an OSB or melamine cover that will be painted to match the wall color ('''UPDATE''': Done), and all of the patch cables to the rack will likewise come off the patch and route inside the wall, under the flooring, and up into the rack.  The floor will be carpeted with low pile commercial carpet squares from FLOR called [https://www.flor.com/lasting-grateness1111 Lasting Grateness].  By the way, all of my 10G NICs for my Macs are [https://www.amazon.com/Sonnet-Technologies-TWIN10G-TB2-Ethernet-Thunderbolt/dp/B00NEHGZHI/ref=sr_1_1?s=electronics&ie=UTF8&qid=1529954281&sr=1-1&keywords=sonnet+twin+10g+thunderbolt Sonnet Twin 10G Thunderbolt 2 units].<br/><br/>


As I think I mentioned, I went with a Trane 1.5 ton (18,000 Btu) low-ambient (can cool below freezing outside air temp) mini-split AC unit.  This unit can heat, but I shouldn't need that.  I will have them mount a hard-wired thermostat on the hot-side wall to verify cooling rather than depend on the wireless handheld unit, which might not be able to see over the 7' tall racks.  As you can see, we're framing in for an 8' exterior door to hold the cold in, and provide perimeter security.  With the whole-house monitored security and video system this should be adequate.<br/><br/>


For the wire pulling I hired Totten Wiring Services to pull my cable.  He's a magician at finding ways to get wire where it needs to go.<br/><br/>


Awhile back I bought a [https://www.triplett.com/product/real-world-certifier-rwc1000k2/ Triplett "Real World Certifier"] for testing cables.  It does a pretty good job: it will show you if you have any breaks or shorts, verify the wire order, and give basic speed capability and an overall "Cat" score on the scale of Cat3-Cat6.  It's not a real cable tester, and Jeff will be bringing the real deal over to certify my cables, but it was a good check before putting some of the lines into temporary service.<br/><br/>


<hovergallery maxhoverwidth=1200 maxhoverheight=1024>
Image:workstation_final.jpg|Workstation - monitors for Tivo and server control
Image:worstation_standing.jpg
Image:in_line_coupler.jpg|In-Line Ethernet Coupler
Image:fiber_patch.jpg|Fiber patch panel
Image:in progress data center.jpg|New lighting can be seen; the beam soffit will trap hot air on the rack side of the room
Image:framed 8ft door.jpg|This 8' opening is for rolling racks in and out of the data center - the inset behind the door is for the ramp, which will get shoe molding to keep you from walking off the edge
Image:Framed_Door_w_Electric.jpg|Door framed and electric at workstation/desk location
Image:Door_insulation2.jpg
Image:mini split compressor 1.jpg|Mini split AC compressor
Image:mini split compressor 2.jpg|Mini split outside unit mounted above the snow line so it can cool the data center all winter long
Image:racks.jpg|The racks await the room
Image:raised floor with power.jpg|Circuits mounted in floor await the racks
Image:split AC dataplate 2.jpg|AC dataplate
Image:split AC dataplate.jpg|AC dataplate
Image:wall plate labelled.jpg|Labels everywhere to keep things straight
Image:wall plate with fiber.jpg|Fiber wall plates
Image:Drywall_rough.jpg|Fixing the drywall and installing the lighting
Image:Drywall_rough2.jpg
Image:Drywall_rough3.jpg
Image:Drywall_roug4.jpg
Image:Flooring_power.jpg|Power outlets in floor
Image:Flooring_rough.jpg|Floor sheathed
Image:Flooring_rough2.jpg|The floor's removable panels are evident
Image:IMG_5185.jpg|Carpeting installed
Image:Speaker_volume.jpg|Speaker volume control
Image:Speaker_wire_ceiling.jpg|Speaker wire
Image:Rack_install.jpg|Installing the racks
Image:Rack_install2.jpg
Image:Rack_install3.jpg
Image:Rack_install4.jpg
Image:Rack_install5.jpg
Image:Start_rack_1.jpg|Getting the racks ready
Image:Start_rack_2.jpg
Image:Start_rack_3.jpg
Image:Wire_patch3.jpg|Patching the racks up
Image:Wire_patch4.jpg|Under-server rack grommets and running wire inside the rack side panel, as it doesn't have vertical cable management
Image:Wire_patch6.jpg|Patching the rack and testing coax connections
Image:Wire_patch14.jpg|In-wall patch routing
Image:Wire_patch7.jpg|Rear of patch panel/in-line Ethernet coupler
Image:Patch_panel3.jpg|Starting to patch up to the house wiring
Image:Paint.jpg|Painting the room
Image:Paint2.jpg
Image:Shelves.jpg|Shelves installed - note that the bin fits between racks and shelves
Image:Old_ONT.jpg|Optical Network Terminal (ONT) before relocation
Image:Finished.jpg|The data center is ready to go - the painted wood panel at far right is where my test stand will be for new networking gear, and the top has my UPS for my ONT and cable modem
Image:Closet_network_temp.jpg|Temporary network and server in closet of "future" data center during construction
</hovergallery>


I have various other UniFi switches, including a [https://www.ubnt.com/unifi-switching/unifi-switch-16-150w/ USW US-16-150W] in the rack, various [https://www.ubnt.com/unifi-switching/unifi-switch-8-150w/ USW-8-150W] at the end points (mostly offices) - each of which can accept 1G fiber via SFP - and a bunch of [https://www.ubnt.com/unifi-switching/unifi-switch-8/ USW-US-8] units, mostly at TVs and gaming consoles.  For WiFi I recently swapped out my [https://www.ubnt.com/unifi/unifi-ap-ac-pro/ UAP-AC-Pro] units for [https://unifi-hd.ubnt.com/ UAP-AC-HD] ones.  The HD is slightly bigger than the Pro, but has a lot better throughput: I can get nearly 400 Mbps in both directions on my iPhone on the HD, which is about double what I got on the Pros.  Also, the beauty of the access points and US-8 switches is that, like the cameras, they're all POE.  That means that when my power goes out, these can stay up, as each rack is protected with 1 hour of UPS backup.  So, as long as I have light on the incoming fiber, I have networking!  I currently have 6 of the UAP-AC-HD mounted: 1 on each wing of the first floor, 1 in the garage at ground level, 1 in the basement, 1 on the second floor in the center of the house, and finally 1 in the pool cabana to cover the backyard entertaining space.<br/><br/>


My backup power, and power line conditioning, is by [https://www.amazon.com/CyberPower-OR1500LCDRM1U-System-Outlets-Rackmount/dp/B0016P7HJA/ref=sr_1_1?s=electronics&ie=UTF8&qid=1530025798&sr=1-1&keywords=OR1500LCDRM1U&dpID=213SaqOw-7L&preST=_SX300_QL70_&dpSrc=srch CyberPower OR1500LCDRM1U] 1U tall 1,500 VA units in each rack, with dual units in the primary rack as many of those devices have dual PSUs.  Since these only have 4 protected outlets, I have each powering a [https://www.amazon.com/gp/product/B00077IS32/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 Cyber Power CPS1220RMS] 1U power strip which supports the bulk of the power users.  Another handy feature of these UPS units: they have an environmental module you can connect to them to monitor temperature in the rack.  I've recently started playing with the CyberPower PDU20M2F10R monitored PDU.  For starters I'll use these to monitor the total power on the 3 circuits powering my servers, and I might add one for each of the UPS units.  I'm also considering more UPS units for a couple of reasons: first, I'm not getting the run time I would like, and second, I want to set up the Dell virtualization servers with a USB connection each to a UPS so they can get the shutdown message to trigger powering down the guest VMs.<br/><br/>


My QNAP network attached storage/NAS, a [https://www.qnap.com/en/product/tvs-1271u-rp TVS-1271U-RP-i7-32] that I purchased from [https://span.com SPAN.com], will connect directly to the switch with dual 10G fiber.  This 2U rackmount NAS has an Intel Core i7 processor, 32GB of RAM, and 12 drive bays, and I've opted for the dual 128GB ([https://www.span.com/product/Qnap-mSATA-Flash-Module-FLASH-256GB-MSATA-2x-128GB-mSATA-Flash-Modules-Pair-256GB~45992 IBQ-XRF256]) M.2 SSD cache and dual SFP+ expansion ([https://www.span.com/product/Qnap-10GbE-PCI-Express-Card-LAN-10G2SF-MLX-Dual-Port-SFP+-PCI-Express~52717 LAN-10G2SF-MLX]) cards.  I paired this with 12 Seagate IronWolf Pro enterprise/NAS-ready 10TB SATA ([https://www.amazon.com/Seagate-IronWolf-7200RPM-3-5-Inch-ST10000NE0004/dp/B01M4FU8Y3/ref=sr_1_1?ie=UTF8&qid=1530023096&sr=8-1&keywords=ST10000NE0004&dpID=51x8JUSJF-L&preST=_SY300_QL70_&dpSrc=srch ST10000NE0004]) spinning hard drives in a RAID6 array (2 drives for redundancy, so 100TB of usable storage).  On this NAS I run a [https://plex.tv Plex] server with my entire TV show and movie library as well as my music and home videos.  I also run a VM with a [https://nextcloud.com/ NextCloud] instance for all of my personal cloud needs, and use it as an [https://www.qnap.com/en-us/how-to/tutorial/article/time-machine-support Apple Time Machine] target.<br/><br/>
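The usable-capacity math is simple enough to sanity-check with the drive counts above:

<syntaxhighlight lang="python">
# RAID6 reserves two drives' worth of space for parity, so the array
# survives two simultaneous drive failures.
drives, size_tb, parity = 12, 10, 2

raw_tb = drives * size_tb
usable_tb = (drives - parity) * size_tb
print(f"raw {raw_tb} TB, usable {usable_tb} TB")  # raw 120 TB, usable 100 TB
</syntaxhighlight>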
I have a bunch of HomeKit gear as well, and I love telling Siri on my Apple Watch to do things.  I tried the Hue bulbs, but didn't like that mode of interaction (having to leave the switch on all of the time).  Instead I've moved on to the [https://www.leviton.com/en/products/residential/automation-smart-home/decora-smart-with-homekit-technology Leviton Decora Smart Switch, HomeKit edition].  These are basically just regular switches that have had WiFi added to them, so if data and/or automation fails you can still turn on your lights.  I'm considering [https://www.serenashades.com/ Lutron Serena] smart blinds as well.  For automation purposes I've paired the light switches with Elgato Eve Degree temperature and [https://www.evehome.com/en/eve-motion Motion] sensors, and the Eve button.  In the winter my mother-in-law's office gets chilly.  She can use the Eve [https://www.evehome.com/en/eve-button Button] to trigger an iDevices smart wall switch to turn on her space heater, or the Eve [https://www.evehome.com/en/eve-degree Degree] can do it automatically to maintain the right local temperature.  I have iDevices [https://store.idevicesinc.com/idevices-switch/ Indoor] and outdoor switches.  The outdoor switches control the pump to my fountain, the lights illuminating my fountain, the string lights over my pool, and the bug zapper next to it.<br/><br/>


UPDATE: I have started upgrading my on-site servers with Dell rack mount units.  Partly this is driven by my desire to get better performance and move to virtual servers, and partly by Apple deprecating many of the OSX (now MacOS) Server features like web and email servers.  Previously I'd run Plex on my QNAP NAS, and got occasional complaints about its ability to transcode video.  Also, I run a NextCloud private cloud instance on the QNAP in a VM, and although it has an i7 and 32GB of RAM, the performance has not been great.  So, this move will allow me to dedicate significant compute resources affordably to easy-to-manage, separate, virtual machines - mostly running Linux.  I have a pair of [https://www.amazon.com/gp/product/B01DKME5OQ/ref=ppx_yo_dt_b_asin_title_o02_s02?ie=UTF8&psc=1 Riser 8 port KVM switches] to bring the consoles to my local desktop.  I've actually also got a 2-to-1 KVM in the rack so I can switch between the two 8-ports, since I only have 1 keyboard, video, and mouse on my desk.  I didn't find a 16 port that had VGA, which is what I'm currently using, and I already had one 8 port.<br/><br/>


My current plan is to split the workload into 3 pieces.
Image:R720XD-nobezel.jpg|2U Rack Server Dell R720xd without its bezel
Image:unifi-server-XG-feature-server.jpg|UAS - running UniFi Controller and UniFi Video Server
Image:unifi-server-XG-feature-versatile2.jpg|UAS
Image:USG-XG-8-1.png|Router - USG-XG-8, 10G capable
Image:USG-XG-8-2.png|XG-8
Image:unifi-switch-16xg-features-ports.jpg|USW-XG-16 core switches, 1 per rack
Image:unifi-switch-16xg-features-diagram4.jpg|Setting up a 20G backbone
Image:CyberPower RMCARD205.jpg|CyberPower RMCARD205 networking card
Image:ENVIROSENSOR.png|CyberPower ENVIROSENSOR temperature sensor
Image:new_UPS_1350W.jpg|CyberPower PR1500LCDRT2U
Image:new_UPS_back.jpg
Image:power_color_code.jpg|Color-coded power cables
Image:power_monitors.jpg|Power monitoring
Image:PDU20M2F10R.jpg|CyberPower PDU20M2F10R
Image:PDU20M2F10R_back.png|CyberPower PDU20M2F10R rear
Image:RWC1000K_w_Report_96.jpg|Triplett Real World Certifier RWC1000
Image:2012 Mac Mini Server.jpg|2012 Mac Mini running OSX Server
Image:5k iMac.jpg|5k iMac 32G
Image:QNAP TVS-1271U 1.jpg|QNAP NAS TVS-1271U-RP-i7-32-US rack mount NAS
Image:QNAP TVS-1271U 2.jpg|TVS-1271U
Image:QNAP TVS-1271U 3.jpg|IBQ-XRF256 QNAP dual M.2 256GB SSD cache card
Image:ibq-x10gm.jpg|IBQ-X10GM QNAP dual SFP+ NIC
Image:Tripp Lite 12 port Keystone N062-0120KJ.jpg|Tripp Lite 12 port Keystone N062-0120KJ for audio terminations
Image:Keystone Binding Posts.jpg|Keystone binding posts for audio
Image:Patch_panel2.jpg|Patch panel nearly complete
Image:airport_express.jpg|AirPort Express for line-in audio to Sonos Connect
Image:aTV 4k gen5.png|Generation 5 Apple TV 4k
Image:Cameras_interior.jpg|Interior cameras
Image:Cameras_exterior.jpg|Exterior cameras
Image:Entertain_top.jpg|Entertainment rack - Tivo DVRs, Sonos Connect amps for house in-ceiling audio
Image:Entertain_bottom.jpg|Bottom of entertainment rack
Image:Server_top.jpg|Server rack - running virtual servers for private cloud, web hosting, email, security camera monitoring and recording, etc.
Image:new_servers.jpg|Servers nearly complete - at least for my current plans: 2 R620, 2 R720, 1 R720XD, 1 QNAP TVS-1271U
Image:new_servers_back.jpg
Image:Server_bottom.jpg|Bottom of server rack, in progress before getting additional servers
Image:Network_top.jpg|Networking rack
Image:Network_bottom.jpg|Bottom of networking rack
Image:rack_back.jpg|Back of racks
Image:rack_front.jpg|Front of racks
Image:cold_red.jpg|Code Red!
Image:cold_aisle.jpg|"Cold aisle" or front of rack near HVAC unit
Image:entertain.jpg|Entertainment rack
Image:servers.jpg|Server rack before buying the Dell rack mounts
Image:servers2.jpg|Server rack with the new Dell rack mount servers, in progress before getting additional servers
Image:networking.jpg|Networking rack
Image:Riser_8_port_KVM_1.jpg|KVM switch
Image:Riser_8_port_KVM_2.jpg|KVM switch
Image:dual_8port_KVM.jpg|Dual 8 port KVM switches for server monitoring
Image:monitored_PDU_CP-PDU20M2F10R.jpg|Monitored PDU
Image:Door3.jpg
Image:Door5.jpg
Image:IMG_7691.jpeg|Through the looking... door, I mean door...
Image:IMG_7693.jpeg|Door open, lights off
Image:IMG_7694.jpeg|...closer...
Image:IMG_7695.jpeg|... and lights on...
</hovergallery>


Then it came time to move again. I got my wife to agree that we would only look at houses that had fiber service, and that I would get a budget to install good Ethernet cabling. And so it began...

Stats

The data center features Ubiquiti Unifi "pro-sumer" networking gear, with:

  • 96 Cat6a copper network runs, and 2 miles of cable
  • 12 OM4 fiber-optic network runs (capable of 40 or 100 gigabit per second)
  • 8 dedicated 20A power circuits
  • 1.5 tons of AC cooling (typically set to 65F, which keeps rack outlet temps -top of rack- below 75F), all-weather capable
  • Core network is dual 10 gigabit per second fiber optic
  • Primary internet service to house 1 gigabit per second fiber optic (Cincinnati Bell Fioptics)
  • Backup internet service to house 200 megabit per second cable (Spectrum)
  • File server is 120 terabytes of disk space (running Plex and Nextcloud personal cloud service) on QNAP TVS-1271u
  • Servers
    • 1U Redundant Virtualization Servers are Dell R620 (local services, DNS, NTP, Ad Blocking, Home Automation)
    • 2U Redundant Virtualization Servers are Dell R720 (Cloud Data Server, Movie Streaming, eMail, Web, Wiki)
    • 2U Storage Area Network Server is Dell R720XD (6x6TB VM storage, 6x12TB Backup)
    • Each server was originally kitted out with 192GB of RAM
    • As of January 2023 the 5 ProxMox servers and FreeNAS #3 (which manages the VM images) were upgraded to 512GB, and the Plex data server (FreeNAS #1) and backup server (FreeNAS #5) were upgraded to 384GB
  • 2 Riser 8 port VGA KVM Switches with remote (USB) switching on my desk
  • 7 Sonos network audio amps for basement level audio
  • 4 Sonos network audio amps for main floor audio
  • 3 Tivo Bolt DVRs (rack mounted) for entertainment
  • 6 Tivo Mini Vox (Gen3) streaming from the Bolts above (with Netflix/Hulu/Plex/YouTube)
  • 16-camera UVC security system with on-site video backup
  • 5 Ring Floodlight cameras
  • 3 Ring Doorbell cameras
  • 4 Ring Doorbell extender chimes
  • 21 Network switches (8 in-rack)
  • 7 UniFi HD WiFi Access Points
  • 150-175 network clients typically associated

Origin of the Name

There's this guy named Dave. I used to work with this guy, and he is a bit... eccentric. Anyway, he managed a data center at the office, and had some folks supporting it. Some of these folks were not native English speakers, and during a Monday morning telephone conversation things got confused as to whether they were talking about eating a can of peaches or visiting "Camp Peaches", presumably something like a summer camp for Scouts. In a fit of brilliance he decided to name his data center "Camp Peaches". He even acquired signage to proclaim same. When he took on a new role, I inherited this sign (our IT organization did not see the brilliance in the naming scheme - however, it is still universally known as Camp Peaches).


Network Diagrams

For those interested - and don't use these to try to hack me, that wouldn't be cool.

In December 2022 the USG-XG-8 was replaced with a UXG, and 2 U6-Enterprise APs (2.5G uplinks) were installed in the basement to get ready for a Fioptics 2G/1G ISP upgrade. The new generation multi-gig transceivers from Ubiquiti (and others) will show linked at 10G on the various Unifi switches, and link at 10/5/2.5/1 gigabit on the other end. I also picked up a couple of 2.5G USB-C Ethernet adapters to test with. The AltaFiber/Cincinnati Bell fiber upgrade happened in mid January 2023.

Visio version
PDF version
Network Diagram

Unifi Updates

Preparing for our new 2G Fioptics service (Cincinnati Bell - now AltaFiber - is offering 2G download, 1G upload speeds in Cincinnati as of late 2022), I found that Ubiquiti has multigig transceiver modules (UACC-CM-RJ45-MG - SFP+ to 10GbE RJ45 Transceiver Module). This shows as linked at 10G on the Unifi side but can link at 1/2.5/5/or 10G on the downstream side. I have confirmed this module works with the below gear. It is important to note that the non-Unifi transceivers worked better for me in the UXG than the native Unifi gear (on the 10G WAN port the Unifi transceiver would not link above 1G, but the others would).


Unifi/Ubiquiti Gear

  • 48 Port Aggregation Switch (USW-Pro-Aggregation)
  • 48 Port POE Switch Pro (USW-Pro-48-PoE)
  • US-XG-16 10G Switch
  • UACC-CM-RJ45-MG - SFP+ to 10GbE RJ45 Transceiver Module (pick 10G 100m)


Mating Gear


Confirmed NOT to work

  • Unifi UF-RJ45-10G SFP+ Copper RJ45 30m (will only link at 1G or 10G, and will not autonegotiate)

Finished Pictures

Here are some snaps of the finished product. I'll be adding these as each section comes online. This section is an overview, the details of everything are below in the 'parts and bits' section.

Workstation

The basic Data Center is now online. As you see in the below pictures, I have a small workstation inside the data center. This is not my office, I have a full office elsewhere - and it's too cold and noisy (not as bad as you would imagine, but noisy) in the data center to spend a lot of time there. But, I have a set of KVM switches connected to my QNAP NAS, Mac Mini webserver, the Dell servers, and my UniFi Application server to allow me to interface with these computers. The workstation allows me to not only administer the servers, but also monitor the security cameras from around the house. I also have a monitor connected to a HDMI switcher that allows me to view each of my Tivo Bolt DVRs and validate they are functioning properly on a separate display. Note in the most recent pictures I now have a "stand-up desk" since I don't have much room for a chair, and really don't want to sit in here for long periods.

Logo Panel

Thanks to Redditor "98MarkVIII" for posting in this thread about his 2U "Plex Logo" lighted rack insert (I also posted these on Imgur). I thought that was a great idea... so great an idea I took it even further and had inserts made for my server "BrettFlix", as well as our core software stack (ProxMox PVE and FreeNAS). Sadly, I didn't have that kind of space in the rack, so I got a short wall-mount 8U rack to make into strictly a display unit to line up in front of the data center door. I think it looks great.

Climate Control

Data center temperature is managed with a 1.5 ton Carrier/Trane (40MAQB18B--331/38MAQB18R--301) mini split AC unit. This unit is capable of cooling the data center all winter with outside air temperatures well below freezing (the servers will always need some cool air) - down to minus 20F. The wall-mounted thermostat is on the hot aisle to check the server exhaust air temperature, and is set to keep that temperature in the 62 degree Fahrenheit range. Also, the temperature is monitored continually with a SensorPush temperature probe on a shelf in the hot aisle, as well as CyberPower UPS environment monitors (and more SensorPush units for iOS reporting) on the top of each rack and at the mini split AC return (at the top of the unit). Both of these device types can SMS/push notify me if the temperature or humidity gets out of range. The server rack has the most extreme delta-T, with exhaust temps at the top of the rack at about 72F, whereas the network rack is about 66F, the entertainment rack 68F, and the hot aisle at the 65F set point (the cold aisle is typically in the upper 50s F). I do have to be careful opening the window in winter, though, as the unit has a fail-safe that shuts it down if the incoming air is below 60F.

UPDATE: As of early 2021 I've made some minor modifications which have really helped. I was never really happy with how the temperature control was working, as the hot aisle temperature always seemed too high, as did (occasionally) the server outlet temperature (top of rack). But it got really weird this last winter. I figured, it's pretty cold outside, and I have this handy window. I know, I know, heresy. Dust. Humidity. Whatever. I get it. Sure, it would be better to have filtered air and strict humidity control, but these are surplus servers, and even with solar electric isn't free (we are not completely self-sufficient on solar)... and I do monitor the humidity. Anyway, what happened was this spring when it started to warm up the AC wouldn't keep up. Odd. Well, our friendly HVAC guy came by, and it turns out it tripped because the incoming air was too cold. Yep. It protects itself, and the window was so close to the unit it was sucking in air from the window. That got me thinking: if that's true, I bet it's not circulating the hot air like I expected it to. The theory is the cold air should sink and be sucked in the cold side of the servers, and the hot air should rise, be trapped by the soffit, and be directed back to the AC unit. And it kind of was. But not enough. So, I built a light frame along the soffit, across the top of the rack, and to the wall by the AC, and dropped a curtain of painter's plastic down the side of the rack to isolate the racks from the window, trapping the cold air in front of the racks and isolating the AC return to the hot air.
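The vendors' apps do the real alerting, but the logic boils down to a threshold check. A minimal sketch, with the probe read and the mail addresses as placeholders:

<syntaxhighlight lang="python">
import smtplib
import time
from email.message import EmailMessage

HOT_AISLE_MAX_F = 75.0  # top-of-rack exhaust should stay below this

def read_hot_aisle_f() -> float:
    """Placeholder for a real probe read (SensorPush, ENVIROSENSOR, etc.)."""
    return 68.0

def alert(temp_f: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Data center hot aisle at {temp_f:.1f} F"
    msg["From"] = "camp-kiwi@example.com"  # placeholder addresses
    msg["To"] = "admin@example.com"
    msg.set_content("Check the mini split and the rack exhaust temps.")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

while True:
    temp = read_hot_aisle_f()
    if temp > HOT_AISLE_MAX_F:
        alert(temp)
    time.sleep(300)  # poll every five minutes
</syntaxhighlight>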

Power

A new subpanel was mounted adjacent to the data center to provide power to the server space and the new AC unit. This panel provides eight 20A circuits to the data center below the raised flooring. 2 circuits are under the networking rack providing primary and backup power, and 3 circuits to each server cabinet. Each rack is provided with a CyberPower 1500VA UPS (OR1500LCDRM1U), with the networking and server racks having both primary and secondary/backup units to feed the dual-power-supply gear in those racks.

Additional power management capability is coming (DONE-July 2019). I've started by installing Eve Energy HomeKit power monitors on the primary and backup legs of each machine so I can track total power usage, and I've noted that the 1500VA CyberPower UPS units are over-taxed in their current configurations and provide only a few minutes of run time. I therefore plan to get CyberPower monitored PDUs for each circuit in the rack, and for each UPS. Further, I plan to install one UPS for each Dell server at a minimum, and connect the USB output to that server and run Network UPS Tools to signal the host to shut down the VMs and power off when power gets critical. UPDATE: This is now done. In addition to the 900W CyberPower units, I have added four 1,350W CyberPower (2U 1500VA model PR1500LCDRT2U) UPS units in the server rack. Each server has a primary and secondary leg on different UPS units, and each UPS is fed from a different circuit breaker. I also added CyberPower 1U rack mount monitored PDUs so I can easily see the amp loading for each circuit. I have not yet purchased the network expansions for these units, but I plan to. However, with this additional power I now have 30 minutes minimum runtime on power failure. I replaced all of the black power cables with color-coded ones to make it easier to quickly see which servers are powered by which UPS (and consequently which circuit). I've also started to play with Proxmox's high-availability settings to migrate running VMs to other servers when a system goes down. Right now each UPS is connected via USB to one server and Network UPS Tools is running in standalone mode to signal the host to shut down when the battery goes critical. I will set them up to run in a master/slave configuration later so that the host will only shut down when both legs of power go critical, I just haven't had the time yet.
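That master/slave rule amounts to "shut down only when every feed is on battery and low." A sketch of the check using NUT's stock upsc client - the UPS names are whatever is defined in ups.conf, and mine here are made up:

<syntaxhighlight lang="python">
import subprocess
import time

UPSES = ["ups-a@localhost", "ups-b@localhost"]  # placeholder NUT device names

def ups_status(ups: str) -> str:
    """Ask NUT's upsc client for ups.status, e.g. 'OL', 'OB', or 'OB LB'."""
    result = subprocess.run(["upsc", ups, "ups.status"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def both_legs_critical() -> bool:
    """True only when every feed is on battery (OB) and low battery (LB)."""
    return all("OB" in s and "LB" in s for s in map(ups_status, UPSES))

while True:
    if both_legs_critical():
        subprocess.run(["shutdown", "-h", "now"])  # upsmon would normally do this
        break
    time.sleep(60)
</syntaxhighlight>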

Flooring

First, the floor was raised 4" with pressure treated 2"x4" studs on edge, placed on 12" centers for strength, and covered with 3/4" plywood. Within this flooring, in addition to power, are trenches with removable covers for routing data cabling from the racks to the patch panel as well as KVM and HDMI wiring to the workstation. The floor is covered with Flor carpet squares, which are thin, durable, easy to roll a rack across, and attractive. The tiles are just under 20" square (50cm x 50cm), and the style we picked was "Lasting Grateness" in indigo and bone, simulating an iron grate.

Servers and Racks

I have 3 racks for my equipment. The first rack houses all of the primary networking gear, the middle rack my servers, and the last rack the entertainment hardware like Sonos sound systems and Tivo networked DVRs (digital video recorders for cable and over-the-air TV). Each rack has backup battery power in the event of electrical outage and lightning. The fronts feature RGB LED "mood" lighting, and the backs have white LED lighting for task illumination, which is usually off but great for maintenance.

I have a few servers. The most impressive of which is a QNAP 12 bay NAS, or 'network attached storage' (UPDATE-this is now my weakest server! although it is still the primary storage unit). This has a PC-class CPU and twelve 10 Terabyte hard drives, two of which serve for redundancy in case of failure, so I net out at around 100 TB of total storage. This computer hosts my Plex server (UPDATE-it only serves the files now, the Plex server is a VM in my ProxMox cluster) and library as well as a virtualization environment that runs a private cloud (NextCloud) instance. This cloud keeps the family's files and pictures, replicating them across our devices, as well as a shared family calendar and contacts/address book (UPDATE-the NextCloud private cloud is also now in my ProxMox cluster). The Plex content is available on our Tivos as well as iOS, Playstation, and other devices in home and away.

The other main server that I host is my mail, web, and Wiki server, ferrellmac.com running on a Mac Mini. Apple is abandoning the Server product, so these will soon move to an Open Source Linux system.

My final current server is a Ubiquiti UniFi Application Server. This is a purpose-built system in the UniFi line that hosts both the UniFi "software defined network" and the UniFi video security camera NVR (network video recorder). This recorder supports our 16 security cameras and all of the network configurations via a "single pane of glass".

UPDATE: With the coming sunset of Apple's server product line, I've moved all of my services to Linux (mostly Ubuntu 16.04, although I'm sure I'll need to gradually move to 18.04 soon) virtual machines running in a cluster of 4 ProxMox host machines. ProxMox is a hypervisor with high-availability clustering, so I can live-migrate a server from one physical host to another without the guest OS being aware that it was moved. This is great as it allows me to keep my Plex server up even while upgrading and rebooting the host machines. 4 host machines means that I always have a quorum of live hosts even if a system has to go down. All of the hosts are 12th-generation Dell rack mount Xeon server hardware with 192 GB of RAM, 10G networking cards, and dual redundant power supplies. I have another Dell (R720XD) running FreeNAS as the SAN storage for VM images, ISOs, and backups.

I plan to get another R720XD and fit it out with twelve 12TB drives so I can eventually move off of my QNAP server. It's just shown itself to be too limited for what I wanted to do. Its virtualization was slow/weak and problematic - I had several instances where VMs were corrupted and I struggled to get the backups/snapshots back, and the transfer and Plex encoding speeds were just not up to my needs. The FreeNAS server so far has shown itself to be superior in every way.

Reverse Proxy

So, there's this thing called a reverse proxy. For the longest time I didn't really know much about them, and didn't think I needed one. I had a bunch of port forwarding rules, and it kind-of-sort-of worked, but all of my addresses were https://bdfserver.com:SomeWeirdPortNumber, which was ugly, and not easy to simply forward.

Then someone turned me onto nginx reverse proxying. It was a little awkward at first, and getting Let's Encrypt to handle the SSL certificates automatically took me a bit of tinkering, but now it's awesome. I can give all of my services a subdomain of bdfserver.com and they can happily be served on the standard HTTP ports 80/443, or any other port I need; my router just sends everything to the proxy, and the proxy handles the rest. Very cool.
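Each service gets a server block along these lines - a minimal sketch with a hypothetical subdomain and backend address (Plex's default port is 32400, and the certificate paths follow certbot's default layout):

<syntaxhighlight lang="nginx">
server {
    listen 443 ssl;
    server_name plex.bdfserver.com;   # hypothetical subdomain

    # certbot-managed Let's Encrypt certificates
    ssl_certificate     /etc/letsencrypt/live/plex.bdfserver.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/plex.bdfserver.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.50:32400;   # hypothetical internal Plex host
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
</syntaxhighlight>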

Speeds

I have fiber-to-the-home from Cincinnati Bell (Fioptics Service) and I get most of the advertised speeds as I'm close to their main offices. Here's a typical set of speed tests. My 5k iMac pulls 936Mbps download and 238Mbps upload. The second image is from my iPhone - 418Mbps download and 230Mbps upload. The third image is an iperf3 performance test from my Plex virtual server to my iMac: 7.28Gbps on the 10Gbps link. I'm not sure why it's only 7+, and maybe jumbo frames would help, but that's very consistent. The fourth image is file transfers to my NAS, the QNAP with 12 x 10TB spinning disks. {UPDATE: I now know why - my USG-XG 10G router is a bit of a bottleneck. I have since added a third 10G networking interface to each of my ProxMox virtual hosts and they all can now get 9+Gbps to each other and my storage SAN.} 325-350MB/s is pretty typical, and for spinning disks I think is pretty good. The last image in this set is the iperf from the QNAP to the iMac, at a little over 8Gbps. With a 350 mega BYTE per second file copy speed I can copy a 10GB movie file to the NAS in about 30 seconds. That's nice.

Update: After some tweaking, I'm getting over 400MB/s copies to my VM NAS and 540 MB/s to my file NAS, and better than 8Gbps iperf network speed connection tests between my virtual hosts and my FreeNAS SAN.
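For anyone sanity-checking those numbers, the bits-versus-bytes arithmetic:

<syntaxhighlight lang="python">
def gbps_to_MBps(gbps: float) -> float:
    """Convert a link speed in gigabits/s to megabytes/s (8 bits per byte)."""
    return gbps * 1000 / 8

print(gbps_to_MBps(10))  # 1250.0 - the ceiling of a 10 Gbps link
print(10_000 / 350)      # ~28.6 s to copy a 10 GB (10,000 MB) movie at 350 MB/s
</syntaxhighlight>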

Software

ProxMox


I chose ProxMox as my virtualization hypervisor, at least initially, because it's free and open source, and has most of the features of VMware, which I had originally intended to use (with a low cost VM User Group license). It has high-availability clustering and live migration of VMs from node to node, and although it's not a true Type 1 hypervisor, it sort of runs on the bare metal, with a basic install of Debian at the core, and the ability to run LXC containers as well as full KVM virtual machines (for OSes like MacOS and Windows) on the same host.
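Live migration can be driven from the web UI, from qm on the command line, or through the API. A sketch using the third-party proxmoxer Python client, with host, credentials, and VM ID all placeholders:

<syntaxhighlight lang="python">
from proxmoxer import ProxmoxAPI  # third-party client: pip install proxmoxer

# Placeholder host and credentials; a real setup should use an API token.
pve = ProxmoxAPI("pve1.example.com", user="root@pam",
                 password="secret", verify_ssl=False)

# Live-migrate VM 101 from node pve1 to pve2 while it keeps running.
task_id = pve.nodes("pve1").qemu(101).migrate.post(target="pve2", online=1)
print("migration task:", task_id)
</syntaxhighlight>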

FreeNAS


I had planned to use a Drobo B810i on iSCSI as my main Storage Area Network device for my ProxMox cluster, but it was not able to run an SSH server to allow the cluster to log in, so I had to go another route. I chose to use FreeNAS on another Dell server. This is kind of the recommendation of the ProxMox folks; it runs on FreeBSD, which has the ZFS file system natively, and ZFS lets me pool all storage types on a network host that appears to the VMs as local storage. Having "remote" storage allows me to live migrate a VM - move it from one physical server to another while running, without the VM knowing that it is being moved.

Ansible


So, I've started to have enough unique virtual hosts that managing them is becoming a chore... just logging into each to update the OS on a weekly basis. I've been meaning to play with Salt Stack, because it sounded cool... but everyone says that Ansible is the thing... so I watched a couple of YouTube videos and it seemed straightforward, so I'm trying that.

So far, it's working great. A couple of quick text files, enter my password, and apt update, apt upgrade, BOOM! Playbook done. Roles done. Task done. Update? Oh yeah, just did that for all 30 servers!
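The playbook really is that small. A minimal sketch for Ubuntu/Debian guests, with the inventory and file names purely illustrative:

<syntaxhighlight lang="yaml">
# update.yml - run with: ansible-playbook -i inventory update.yml -K
- name: Patch every VM
  hosts: all
  become: true
  tasks:
    - name: Refresh the apt cache and upgrade all packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist
</syntaxhighlight>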

Door

The door is an 8' single-lite exterior door (because the data center is a different climate zone, and the room is loud), with custom signage and an August smart deadbolt lock so I can control and monitor who has access.

Patch Panel

Here's the final patch panel, with the fiber lines connected as well. We have nearly 96 copper (Cat6a) drops and 10 fiber (OM4) runs (2 to each of the offices and to the old utility demarcation). The patch panel separates the house wiring from the rack wiring so that each can be logical for their own purposes. The racks are connected via color-coded patch cables behind a panel inside the wall, through a channel under the floor, to underneath each rack, where they run to in-line Ethernet keystones in another set of patch panels and then on to the appropriate networking switch.

Servers

  • Primary file server is QNAP TVS-1271U-RP-i7-32G 12-Bay - (was running Plex and Nextcloud personal cloud service, now just a file server)
    • Core i7 Intel Processor
    • 32GB RAM
    • mSATA Flash Module FLASH-256GB-MSATA (2 x 128GB)
    • 10GbE Networking (LAN-10G2SF-MLX Dual-Port PCI-Express SFP+)
    • 12-3.5" HDD (Seagate IronWolf Pro ST10000NE0004 3.5" SATA 6Gb/s 10TB 7200rpm) (120 TB storage, 100 TB usable in dual redundant RAID6 array)

  • 1U Virtualization Servers are redundant Dell R620 - (Running DNS/Domain Name System, NTP/Network Time, PiHole DNS Black Hole, Home Assistant home automation)
    • Dual Intel Xeon E5-2660 Eight Core 2.2GHz 20MB 8.0GT/s 95W processors
    • 192GB (12 x 8GB) + (12 x 8GB) PC3L-10600R ECC RAM
    • PERC H710 512MB NVWC 6G raid drive controller
    • 8 x 2.5in Drive Bay Cage & Backplane
    • Dell Ultra-Slim 9.5mm SATA DVD-ROM
    • Dell Internal Dual SD Card Reader (No Cards)
    • Dell 4-Port Gigabit Ethernet NDC
    • Broadcom 57810 Dual Port 10GbE PCI Express Adapter (RJ-45) (A)
    • Dual 1100W Platinum Redundant PSU
    • Metal Silver Locking Bezel
    • 4x Dell 300GB 10K 6G 2.5in SAS in Hot Plug Tray
    • 1x Intel S4500 Series 480GB SSD 6G SATA in Hot Plug Tray

  • 2U Virtualization Servers are redundant Dell R720 SFF - (Running Plex, NextCloud, etc. SFF is small form factor, for 2.5" drives)
    • Dual Intel Xeon E5-2667 v2 Eight Core 3.3GHz 25MB 8.0GT/s 130W processors
    • 192GB (12 x 8GB) + (12 x 8GB) PC3L-10600R ECC RAM
    • PERC H710p 1GB NVWC 6G raid drive controller
    • Second 8-Bay Drive Cage with 16 x 2.5in Bay Backplane
    • Dell Internal Dual SD Card Reader (No Cards)
    • Dell 4-Port Gigabit Ethernet Adapter
    • Dell Broadcom Dual Port 10Gb RJ-45 Ethernet Adapter (W1GCR 57810S)
    • Dual 1100W Platinum Redundant Power Supply Units
    • Metal Silver Locking Bezel
    • 15x Dell 1TB 7.2K 6G 2.5in SAS in Hot Plug Tray
    • 1x Intel S4510 Data Center 960GB Solid State 6G SATA

  • 2U Virtualization Server is Dell R720xd LFF - (Running FreeNAS, LFF is large form factor or 3.5" drive capable)
    • Dual Intel Xeon E5-2660 Eight Core 2.2GHz 20MB 8.0GT/s 95W processors
    • 192GB (12 x 8GB) + (12 x 8GB) PC3L-10600R ECC RAM
    • PERC H310 6G raid drive controller
    • Flex Bay Kit
    • 2 Dell 146GB 15K 6G 2.5in SAS hard drives
    • Dell 4-Port Gigabit Ethernet NDC
    • Intel X520-T2 Dual Port 10GbE PCI Express Adapter (RJ-45)
    • iDRAC 7 Express
    • Dual 1100W Platinum Redundant PSU
    • Metal Silver Locking Bezel
    • 12x Dell 3.5in Drive Tray with Screws

Before Pictures

I lucked out on the size of the data center, there was this awkward room already that we really didn't know what we were going to do with. It happened to have a window and an exterior wall to cut through for air conditioning, which was nice. It was tall enough, and happened to also connect to a furnace/utility space where we'll mount the electrical panel which will support all of the needs of this room.

You can see that this room has a steel beam running across the short direction, nearly in the middle. My plan, as you can see on the drawing, is to use this to capture the hot air and get it back to the mini split air conditioner. Since the AC will be on the outside wall, this is the cold aisle of the server room, and the door side of the room will be the hot side. I did some basic calculations that said 1 ton of AC (12,000 Btu) is plenty for now, but that I might need 1.5 later, so we're going with that. I got a low ambient Trane model because that's what our HVAC crew mostly installs (brand-wise, I had looked at Gree and Mitsubishi units as well). It can cool my room even if the outside air temp is below freezing, which is good because the servers will always require cooling.

You can also see from the plan that I mean to have shelving units around the outside walls of the room. That isn't really ideal, but I'm used to having all of the "computer stuff" near by, and with this much room it only made sense. I have complete access to both sides of the racks, and enough room to get closable plastic bins off of the shelves as well. I'll have a small desk on the one wall with a workstation consisting of monitor, keyboard, and mouse hooked up to a USB KVM switch in the rack so I can control the servers. I had a rack-mount monitor before, but it got in the way of doing maintenance, and made the monitor smaller than I would like. All connections to the rack are via under-floor cabling to keep the room tidy. There will also be an HDMI monitor, connected by multi-switcher, to my Tivo DVR units so I can configure/monitor them (or just watch TV!).

Also note that I've designed the room with an elevated floor, with a ramp to the door. I wanted to be able to route power under the racks, with 3 independent circuits under each, with each rack to share 1 circuit with the rack next to it. The idea being that each rack will have a primary and backup UPS for all of the gear that I've since bought that has dual PSU (right now that's a 48 port switch, the UBNT USW-L2-48 and USG-XG-8, and my QNAP NAS, a TVS-1271U-RP-i7-32G). Also all of the Ethernet cabling will run from the patch panel on the wall back inside the wall, under the floor, and to patch panels in each rack. That way the house wiring is completely independent from the rack and a nice clean install.

Construction Pictures

With all of that as preliminary, the first thing to do was raise the closet door, and build the floor up. The floor is just 2"x4" pressure treated studs (this is a basement and could get moist) on their sides, on 12" centers since the racks could be quite heavy, with 3/4" plywood on top. There is a built-in trough for the Ethernet cabling to pass through, with a removable cover so I can get in there and remove or add a single cable at a time if need be. This also allows for a standard size wall-box for the power receptacles under the floor.

I found a local company to pull the Ethernet for me. Anywhere there was going to be a TV I placed a jack at receptacle height as well as at TV height. I also allowed for a few stand-alone monitors - I hope to create a family calendar and information monitor system that will display at a few locations around the house, and display the security camera output. I included literally any place I could imagine a TV or monitor going so I wouldn't regret ignoring it later, including a drop in the garage at bench height for these handy "web relays" that I have used in the past for operating my older style garage door openers and about 10 drops for Ubiquiti POE security cameras.

For offices I pulled 2 Ethernet and 2 fiber to the logical computer location. I pulled dual Ethernet to each WiFi location so that I could power a WAP as well as (potentially) have a 10G connection, as I understand the current generation of access points can't be powered off of the same port that passes 10G data. I went with Belden 10GXS12 Cat 6a cable as I wanted to make sure the cable could do the speed, and I could always work with the terminations if I had difficulty with speeds. This cable is huge and stiff, nearly 1/3" in diameter, with a white plastic spline down the middle.

I wanted to minimize the number of switches required in my infrastructure, but the Cat6a cable is pretty massive, and fairly expensive so I decided 2 runs was the most I could realistically justify at each location. At a minimum, I can have copper going to each computer, and a copper 10G still available for a switch for the balance of the users at that location. Note that my computers (Macs) are the only 10G capable devices I have today, aside from my 2U 12-Bay QNAP NAS which has dual 10G fiber connections and my Ubiquiti 10G USG-XG router and UAS Application Server with dual copper ports, but they are rack mounted (UPDATE: All of my rack-mount Dells now have dual 10G copper ports). I have 3 Ubiquiti 10G USW-XG-16 switches (1 in each rack), and 2 48 port switches (1 USW-48-750W and 1 USW-L2-48 with dual PSU) as the backbone of the network, but all of the remote switches have at best SFP 1G fiber ports. Hopefully Ubiquiti will bring 10G SFP+ to the 8 port switches soon, but the backbone of the network will be dual 10G fiber, with 10G to each of the computers. If the price of the 10G switches comes down, I'll put a 10G USW-XG-16 in each office with dual fiber uplink and a 10G copper to each computer. That's how I ran at my old house, but it's limiting as these switches only have 4 copper ports (UPDATE: UBNT now has reasonably priced copper transceivers for these switches).

For fiber I went with OM4 with LC connectors on each end. I found wall plates and a patch panel for LC connectors, and all of my switches have LC connectors. I went with 10/40/100G-rated multimode 50/125 micron cable in an armored jacket. This might not be perfect, but it gave me decent confidence that we could install it safely, and it even has the potential to move up from 10G later as switches get cheaper. Altogether we pulled about 96 Ethernet and 12 fiber lines.

I also included dual fiber, dual Cat6a, and dual coax from the current "utility" space, where the cable and fiber services enter the house, to the new data center. I hope to relocate those utilities as we finish remodeling the house, but that has yet to be determined (UPDATE: This move was made for the fiber, but the TV still comes into what will be my master closet). Until then I can bring the incoming cable, fiber, and attic antenna to the data center over fiber or Ethernet. I plan to rack mount my Tivo base units (UPDATE: Done, check out the Tivo rack mount kits under Parts and Bits), with a cable box as well as an over-the-air box feeding 4k Tivo Minis at each TV. My wife and I had cut the cord, but my in-laws aren't ready to do that yet. I also plan to rack mount my Sonos Connect Amps for all of my basement spaces with these handy 3U shelves, part #262-2967 (UPDATE: Done - each room in the basement got a local volume control and dual ceiling speakers, and each TV location has a digital audio run back to the rack so I can put the TV sound onto the speakers with an Apple AirPort Express).

In terms of users of the network, part of getting this house was moving my in-laws in so they could have some support as they age, and have single-floor living. My wife and I are remodeling the basement for our master suite. The existing house has wired sound, with Sonos users, and it's a large rambling house with a pool and cabana, so I have Ubiquiti HD wireless APs on each end of the main house, 1 each in the garage and pool cabana, and 1 each upstairs and in the basement. There will also be 1 in the new garage we will be adding.

I have installed a bunch of HomeKit automation gear: a lot of Leviton Decora Smart Switches to control the house lighting and the fountain, Eve motion detectors and door/window modules, Eve Degree temperature sensors, iDevices outdoor switches for outdoor lighting, 2 Nest thermostats, 3 August locks with WiFi extenders, and a Rain Bird WiFi sprinkler module. There are also 5 Sonos Connect Amps (and I plan to add several more for the basement remodel - UPDATE: Done), 2 laser printers, 2 multi-function printer/copier/scanners, 5 Ring flood light cameras, 3 Ring doorbell cameras, and 16 Ubiquiti cameras (some POE and some Micros on WiFi). I have 2 work laptops in the house, 2 personal laptops, 2 desktops, 1 Windows server, the UAS, the QNAP NAS, and I plan to add some servers (UPDATE: Done, see below), maybe Dell R720s. I have about 10 Tivo devices and nearly as many Apple TVs (we do a lot of AirPlaying), as well as a HomePod, 4 iPhones, 5 Echos, 4 Kindles, and about 6 "smart" TVs. Also, the security system will be upgraded to allow WiFi connectivity. At any moment I have nearly 150 WiFi devices.

Nearly as important as having all of this connectivity is keeping it straight, well understood, and maintainable. To that end, I have clear Brother labels on each wall plate with the patch panel number, and each wire has a heat-shrink label with a clear heat-shrink protective covering layer to keep it legible. Each cable has a four digit number: the first digit is the patch strip (0/1/2/etc.) and the rest is the port number, so "0001" is the first port in the first patch panel. The house patch will be connected, by routing back inside the wall, to an inline patch in the respective rack it's served by. The house patch panels are these 24 port Cat6a units - giving me 96 ports for Ethernet - and the wall mount rack is the StarTech 6U "WallMount6". The fiber patch panel is this 12 port unit. For the in-rack connections I'm dropping these SF in-line couplers from the wall patch to feed the switches. I plan to have a patch above and below each 48-port switch to keep the connections nice and tidy. Each power circuit is similarly labelled, and the power cables in the data center are color-coded and labelled. Each UPS is labelled with the circuit that feeds it, and each server is labelled with the UPS that serves it.
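
If it helps to see that numbering scheme concretely, here's a minimal sketch in Python (the function name and ranges are mine, not part of the install):

<syntaxhighlight lang="python">
# Four-digit cable labels: first digit = patch panel, last three = port number.
def cable_label(panel: int, port: int) -> str:
    if not (0 <= panel <= 9 and 1 <= port <= 999):
        raise ValueError("panel must be 0-9 and port 1-999")
    return f"{panel}{port:03d}"

print(cable_label(0, 1))   # "0001" - first port on the first patch panel
print(cable_label(2, 13))  # "2013" - port 13 on the third panel
</syntaxhighlight>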

I'll post my network diagram soon (UPDATE: Done), but here are some pictures midway through construction. As of June 25, 2018 the wire is pulled, tested, and in the jacks. Since there's so much construction, I have a temporary network setup inside the closet, so the patch is kind of nasty looking, with wires poking through a small hole in the rear drywall. Long term I'll clean up this wire, and it will get an OSB or melamine cover painted to match the wall color (UPDATE: Done), and all of the patch cables to the rack will likewise come off the patch and route inside the wall, under the flooring, and up into the rack. The floor will be carpeted with low pile commercial carpet squares from FLOR called Lasting Grateness. By the way, all of my 10G NICs for my Macs are Sonnet Twin 10G Thunderbolt 2 units.

As I think I mentioned, I went with a Trane 1.5 ton (18,000 Btu) low-ambient (can cool below freezing outside air temp) mini-split AC unit. This unit can heat, but I shouldn't need that. I will have them mount a hard-wired thermostat on the hot-side wall to verify cooling rather than depend on the wireless handheld unit, which might not be able to see over the 7' tall racks. As you can see, we're framing in for an 8' exterior door to hold the cold in, and provide perimeter security. With the whole-house monitored security and video system this should be adequate.

For the wire pulling I hired Totten Wiring Services to pull my cable. He's a magician at finding ways to get wire where it needs to go.

A while back I bought a Triplett "Real World Certifier" for testing cables. It does a pretty good job: it will show you if you have any breaks or shorts, verify the wire order, and give a basic speed capability and an overall "Cat" score on the scale of Cat3-Cat6. It's not a true cable certifier, and Jeff will be bringing the real deal over to certify my cables, but it was a good check before putting some of the lines into temporary service.

== Parts and Bits ==

Here are the bits and bytes in my Data Center. I apologize that it's a bit rambling; I'll try to make it more coherent later, but this is meant to list the makes, models, etc. of the hardware devices in my installation.

Most of the networking gear is Ubiquiti/UBNT.com equipment running UniFi on a UAS - the UniFi Application Server, or what is now called the UAS-XG Server. It's a Xeon class server with 32GB of RAM, and I've replaced the stock video storage with dual 10TB Seagate IronWolf Pro drives. On the network side, it has dual 10G copper ports in a 1U rack mount chassis.

The router is the new UniFi Security Gateway USG-XG-8, with 8 SFP+ ports, 1 10G copper port, and 80Gbps routing capability while doing full IPS/IDS at 1G speeds. It has 16 cores and 16GB of RAM in a 1U dual-PSU rack mount chassis. I really only got this so that I could do IPS/IDS at full 10G-like speed between my VLANs. The network is fed by Fiber-to-the-Home/FTTH from Cincinnati Bell Fioptics at 1 Gbps download/250 Mbps upload through an Alcatel-Lucent 7342 ONT, or Optical Network Terminal. I'd love to hook the fiber directly to my router, but they only support GPON, not SFP, and don't support customer equipment for the ONT, though they do allow you to bring your own router.

The core switches are all USW-XG-16s, with 12 SFP+ ports and 4 10G copper Ethernet ports. Each rack will have one of these switches, connected to the primary by dual 10G multimode fiber patch cables. Each of these switches will support 10G copper links to my servers and 1G or 10G links to my UniFi switches. Currently only the 48 port UniFi switches have 10G SFP+ ports.

My primary distribution switch will be the as-yet-unreleased UniFi 48 port L2 POE switch with dual PSU, the USW-L2-48-POE, paired with a USW-48-750W POE switch, giving me 96 ports of POE goodness.

I have various other UniFi switches, including a US-16-150W in the rack and various US-8-150Ws at the end points (mostly offices) - each of which can accept 1G fiber via SFP - and a bunch of US-8 units, mostly at TVs and gaming consoles. For WiFi I recently swapped out my UAP-AC-Pro units for UAP-AC-HD ones. The HD is slightly bigger than the Pro, but has a lot better throughput: I can get nearly 400 Mbps in both directions on my iPhone on the HD, which is about double what I got on the Pros. Also, the beauty of the access points and US-8 switches is that, like the cameras, they're all POE. That means that when my power goes out, these can stay up, as each rack is protected with 1 hour of UPS backup. So, as long as I have light on the incoming fiber, I have networking! I currently have 6 of the UAP-AC-HDs mounted: 1 on each wing of the first floor, 1 in the garage at ground level, 1 in the basement, 1 on the second floor in the center of the house, and finally 1 in the pool cabana to cover the backyard entertaining space.

My backup power, and power line conditioning, is by CyberPower OR1500LCDRM1U 1U 1,500 VA units in each rack, with dual units in the primary rack since many of those devices have dual PSUs. Since these only have 4 protected outlets, I have each powering a CyberPower CPS1220RMS 1U power strip, which supports the bulk of the power users. Another handy feature of these UPS units: they have an environmental module you can connect to monitor the temperature in the rack. I've recently started playing with the CyberPower PDU20M2F10R monitored PDU. For starters I'll use these to monitor the total power on the 3 circuits powering my servers, and I might add one for each of the UPS units. I'm also considering more UPS units for a couple of reasons: first, I'm not getting the run time I would like, and second, I want to set up the Dell virtualization servers with a USB connection each to a UPS so they can get the shutdown message and trigger powering down the guest VMs.
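
I haven't settled on the software for that shutdown piece; one common way to do it on Linux is Network UPS Tools (NUT). Here's a minimal sketch - the UPS name, description, and password are placeholders, not my actual config:

<syntaxhighlight lang="ini">
# /etc/nut/ups.conf - define the USB-attached CyberPower unit
[rack1-ups]
    driver = usbhid-ups
    port = auto
    desc = "Rack 1 primary UPS"

# /etc/nut/upsmon.conf - shut this host down on low battery
MONITOR rack1-ups@localhost 1 upsmon mypassword master
SHUTDOWNCMD "/sbin/shutdown -h +0"
</syntaxhighlight>

The host's normal shutdown sequence can then stop the guests cleanly - Proxmox, for instance, shuts down its VMs as part of a host shutdown - so one monitor per server should get the VMs down safely.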

My QNAP network attached storage/NAS, a TVS-1271U-RP-i7-32 that I purchased from SPAN.com, will connect directly to the switch with dual 10G fiber. This 2U rackmount NAS has an Intel Core i7 processor, 32GB of RAM, and 12 drive bays, and I've opted for the dual 128GB (IBQ-XRF256) M.2 SSD cache and dual SFP+ expansion (LAN-10G2SF-MLX) cards. I paired this with 12 Seagate IronWolf Pro enterprise/NAS-ready 10TB SATA (ST10000NE0004) spinning hard drives in a RAID6 array (2 drives for redundancy, so 100TB of storage). On this NAS I run a Plex server with my entire TV show and movie library as well as my music and home videos. I also run a VM with a NextCloud instance for all of my personal cloud needs, and use it as an Apple Time Machine target.
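
As a quick sanity check on that capacity figure, RAID6 spends two drives' worth of space on parity, so usable capacity is (drive count - 2) x drive size:

<syntaxhighlight lang="python">
# RAID6 usable capacity: two drives' worth of space goes to parity
drives, size_tb = 12, 10
usable_tb = (drives - 2) * size_tb
print(usable_tb)  # 100 (TB), matching the array above
</syntaxhighlight>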

I also have a Mac Mini, a 2012 model with a quad core i7 and 16GB of RAM, running the OSX Server edition as my web and email server. It has dual 1TB SSDs inside the case, and it's mounted in a Sonnet 1U "Rack Mini" chassis. It is connected to the network via a Sonnet Twin10G Thunderbolt NIC to a copper port on the USW-XG-16. My main workstation is a 5k iMac, also with a 10G Sonnet NIC wired to the USW-XG-16.

The Plex server is mostly used as a host for all of my Tivo gear - mostly the new Tivo Mini 4k VOX units at the TVs, with Tivo Bolt units in the rack. The nice thing about the newer Tivo units is they all have 1G Ethernet ports. The bad thing about the Bolt is the silly up-bend makes it too tall for a 1U rack shelf! I have a cable card version with 6 tuners as well as an over-the-air antenna version. Each TV in the house also has an Apple TV for AirPlay capability and other uses. The primary TVs have either the 4th or 5th generation units, with the balance being generation 3 models. The 2 newer generations support Plex and other applications, which can be handy.

The entertainment gear, all in rack 3, finishes out with Sonos Connect Amps in these cool Penn-Elcom rack mounts. Each room will be wired with in-ceiling speakers and a local volume control, wired back to a Sonos Connect Amp zone player. These are cool because they're network-attached amps that can drive the speakers and be controlled by pretty much any device on the network. Even better, you can easily group and ungroup multiple zones to play the same content (local, Apple Music, Plex, AirPort Express AirPlay content via line-in) or different content. Finally, I have a PS4 and a PS3 at different locations for gaming.

For security I have the advertised "Ring of protection", with a Ring doorbell camera and 5 Ring flood light cameras - all on WiFi. Inside of that I have a monitored Honeywell (Stanley Security) wired home security system with door and window sensors, motion and glass breakage detection, and smoke and CO detection. I'm looking to upgrade this to a full LCD touchpad version with cellular and internet connectivity and iOS app support. I also have wired/POE UBNT cameras monitoring all of the indoor and outdoor spaces and recording to the UAS. I have a mix of the new UVC-G3-Pro and older UVC-G3 cameras on the outside, and UVC-Micros covering the indoor spaces on WiFi. The Pro is waterproof and has optical zoom, so it's the workhorse, with covered areas and doorways getting the regular G3 with IR extender.

For other home automation tasks, I have a HomePod and several Alexas (although I may get rid of these soon). I have the Rain Bird WiFi module for my sprinklers, 5 Nest smart thermostats (although we're considering going to geothermal, so these might have to change to Ecobee units), and 3 August Smart Locks. One thing I like about most of the home automation gear I buy is that it can function in dumb mode as well. For instance, the Nest thermostat is intuitive for my in-laws to use as a local thermostat. The August lock can function from either side as just a keyed deadbolt if power or connectivity is lost. My Leviton light switches connect to standard wiring and function as normal switches without connectivity.

I have a bunch of HomeKit gear as well, and I love telling Siri on my Apple Watch to do things. I tried the Hue bulbs, but didn't like that mode of interaction (having to leave the switch on all of the time). Instead I've moved on to the Leviton Decora Smart Switch, HomeKit edition. These are basically just regular switches that have had WiFi added to them, so if data and/or automation fails you can still turn on your lights. I'm considering Lutron Serena smart blinds as well. For automation purposes I've paired the light switches with Elgato Eve Degree temperature sensors, Eve Motion sensors, and the Eve Button. In the winter my mother-in-law's office gets chilly; she can use the Eve Button to trigger an iDevices smart wall switch that turns on her space heater, or the Eve Degree can do it automatically to maintain the right local temperature. I have iDevices indoor and outdoor switches; the outdoor switches control the pump to my fountain, the lights illuminating my fountain, the string lights over my pool, and the bug zapper next to it.

UPDATE: I have started upgrading my on-site servers with Dell rack mount units. Partly this is driven by my desire to get better performance and move to virtual servers, and partly by Apple deprecating many of the OSX (now MacOS) Server features like the web and email servers. Previously I'd run Plex on my QNAP NAS, and got occasional complaints about its ability to transcode video. Also, I run a NextCloud private cloud instance on the QNAP in a VM, and although it has an i7 and 32GB of RAM, the performance has not been great. So, this move will allow me to dedicate significant compute resources affordably to easy-to-manage, separate virtual machines - mostly running Linux. I have a pair of Riser 8 port KVM switches to bring the consoles to my local desktop. I've also got a 2-to-1 KVM in the rack so I can switch between the two 8 ports, since I only have 1 keyboard, video, and mouse on my desk. I didn't find a 16 port that had VGA, which is what I'm currently using, and I already had one 8 port.

My current plan is to split the workload into 3 pieces.

* 1U Dell R620 (Proxmox)
** A thin server with fewer disk slots for less data-intensive workloads
** Domain Name Server (Debian)
** Network Time Server (Debian)
** PiHole anti-adware server (Raspbian)
** Dual Home Assistant instances (Hass.io), production and development
* 2U Dell R720 (Proxmox)
** Thicker "main" server
** Email server (SMTP/IMAP, Dovecot/Roundcube)
** Web/Apache server
** MediaWiki server
** WordPress server
** TeamViewer target/jump box
* 2U Dell R720xd (FreeNAS)
** Thicker "SAN" (Storage Area Network) server
** Remote storage allows Proxmox "live" migration of virtual machines between Proxmox nodes (see the sketch after this list)
** 1 storage array of 6 drives (36TB) for live VM data
** 1 storage array of 6 drives (72TB) for VM backups
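
For that live-migration point, here's a minimal sketch of how it works in Proxmox, assuming the FreeNAS box exports an NFS share; the storage name, IP address, export path, VM ID, and node name are all placeholders:

<syntaxhighlight lang="bash">
# Register the FreeNAS NFS export as shared storage visible to all Proxmox nodes
pvesm add nfs freenas-vmstore --server 192.168.1.50 \
    --export /mnt/tank/vmstore --content images

# With its disk on shared storage, a running VM can move between nodes
# without powering off - only RAM state crosses the network
qm migrate 100 pve-node2 --online
</syntaxhighlight>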