Tower of Pibylon: how high can we pi?

My Raspberry Pi cluster has come a long way since the first iteration I showed, a Docker cluster running OpenFaaS. I added some of the new Pi 4s with 8 GB of RAM, along with an SSD that I network-shared to the entire cluster. I'm just about to tear it down again for improvements, so it's time for a snapshot.

When rebuilding the cluster I wrote an Ansible playbook to bootstrap it so that I didn't have to repeat the same 15 steps 10 times. There are a few major repositories doing similar things, but none of them matched my vision, so I wrote my own. The repo is already deprecated by version changes to Ansible, but I'll update it when I'm finished with the hardware improvements. The repository is here:

Besides the upcoming rework: if you're interested, note that the inventory file needs to be changed to match your Pis' IPs, and those IPs should be static. The playbook doesn't set static IPs because I have a managed switch and run the Pis on their own VLAN. Also, unless you're my neighbor, the timezone task in the configuration (or universal) tasks may need a change too.
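To illustrate what the inventory change looks like, here's a minimal sketch of an Ansible INI inventory; the group names and IP addresses are hypothetical and should be replaced with your own static addresses and whatever groups the playbook expects.

```ini
# Hypothetical inventory layout; replace the IPs with your Pis'
# static addresses and adjust group names to match the playbook.
[master]
192.168.1.100

[workers]
192.168.1.101
192.168.1.102
192.168.1.103
```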

Ansible: beautiful and simple

From the repository:

To highlight a few tasks: the playbook sets a new password, transfers SSH keys, hardens the system, and brings up the Kubernetes cluster with one master, then copies the kubeconfig to the local machine so kubectl can control the cluster, among other things.
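That last step, pulling the master's kubeconfig down so kubectl works locally, might look something like this sketch using Ansible's built-in `fetch` module (the paths are assumptions based on a kubeadm-style install, not taken from the actual playbook):

```yaml
# Hypothetical task: copy the master's kubeconfig back to the
# control machine so kubectl can drive the cluster remotely.
- name: Fetch kubeconfig from the master
  ansible.builtin.fetch:
    src: /etc/kubernetes/admin.conf
    dest: ~/.kube/config
    flat: true
```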

I adapted a project of Alex Ellis's to light an LED on the Blinkt! module for each pod running on a node, giving an indicator of load, balance, or any failures.

Bigger, faster, taller, stronger, and not just the hardware. The real gems have been automating the bootstrap with Ansible, managing with Kubernetes (especially Lens), and deploying with Helm.

An SSD attached to one of the Pis for some larger persistent storage

I’ve been running a few services on the cluster. I added a SATA adapter and an SSD to one of the Pis to host my own cloud storage. I installed Falco (cloud-native runtime security) through Helm, along with MetalLB for the load balancer and an nginx ingress with a Let’s Encrypt deployment for certificates. I also use Pi-hole running on the cluster as my network’s DNS, providing local caching plus secure DNS and DNSSEC for everything else.
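For anyone setting up something similar: MetalLB in layer-2 mode needs to be told which addresses it may hand out. A minimal sketch of the ConfigMap-style configuration (used by MetalLB versions before 0.13; newer releases use CRDs instead), with a made-up address range you'd swap for a block outside your DHCP pool:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250  # hypothetical range; keep it outside your DHCP pool
```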

The numbers aren’t too drastic here, but all of this is being caught despite my browsers running uBlock Origin.

I forwarded a domain name that I own to the services I was exposing to the internet so I could reach them whenever, wherever. Exposing your own services on the net comes with a long list of safety items to go through, but to touch on the basics: a long, high-entropy password that cannot be guessed within the lifetime of the universe is a good start.
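One quick way to generate such a password on any machine with OpenSSL installed, using the system's cryptographic random source:

```shell
# Generate 32 random bytes (256 bits of entropy) and base64-encode
# them into a ~44-character password.
openssl rand -base64 32
```

Paste the output straight into your password manager rather than trying to memorize it.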

Following that up with a two-factor hardware key isn’t a bad idea either.

This doesn’t address any of the network and firewall changes needed, but hopefully I can go into a bit more detail when I have time to write about hosting my own cloud security and surveillance system.

Much more to come, but here are some pictures showing a bit of how nice it is to run your own Nextcloud server with the universal support their community has created. I have it running on Android, iOS, macOS, and Ubuntu.

Then, to give a look at Lens for anyone who hasn’t seen it: it’s amazing, and it easily lets you move from controlling a few things through the CLI to managing a massive, scaling design.

not this Lens

this Lens

Going to try and catch up on this backlog of things I’ve been working on! I think some Android projects and apps I’ve done, plus basic AI simulations, may be next. Since I mentioned it, here’s part of the hardware used to transition to fully managing my security systems with cloud access: no monthly fees, control of my own data, and full throughput with deep packet inspection, etc.
