Turing Pi is a private mobile cloud in a mini ITX form factor. It is a scale model of the bare-metal clusters you see in data centers. The Turing Pi cluster board can scale up to 7 compute nodes.
The Turing Pi cluster board can be an excellent fit for the following use cases:
The Turing Pi top/master node can act as a NAT router for the rest of the nodes. The main advantage is that if you move the cluster from one location to another, all the node IPs stay identical. When you set up the operating system, you quickly realize how impractical it is to plug each module into the master slot, connect a display via HDMI, and attach a USB mouse and keyboard just to perform the first initialization. However, HypriotOS and some other operating systems preconfigure nodes with SSH enabled and Docker preinstalled, which makes it very easy to set up each node over SSH.
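As an illustration, first contact with a freshly flashed HypriotOS node might look like this (the IP address is an assumption for this example; check your DHCP leases for the real one):

```shell
# SSH into a freshly flashed HypriotOS node
# (HypriotOS default credentials: user "pirate", password "hypriot")
ssh pirate@192.168.0.101   # example IP; substitute your node's address

# once logged in, Docker is already installed and running
docker info
```

No display, mouse, or keyboard required; the whole first initialization happens over the network.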
This is the Turing Pi Basics page
This is the Kubernetes tutorial page
This is the Helm & Maintenance tutorial page
1. Which Raspberry Pi models are compatible? Turing Pi supports the following models, with and without eMMC: Raspberry Pi Compute Module 1, Raspberry Pi Compute Module 3, and Raspberry Pi Compute Module 3+.
2. Does the board support the new Raspberry Pi 4? There is no Compute Module for the Raspberry Pi 4th generation yet.
3. How do the compute modules communicate with each other? The nodes are interconnected via the onboard 1 Gbps switch.
Turing Pi specifications
Building Kubernetes on top of Turing Pi brings another dimension to edge computing and learning: setting up the OS, partitioning the OS, DHCP, NAT, and cross-compiling for ARM32v7.
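For example, cross-compiling for the Compute Module's ARM32v7 CPU from an x86 workstation is a one-liner in Go, or a matter of using an ARM cross toolchain for C (output names here are illustrative):

```shell
# cross-compile a Go program for ARMv7 (Raspberry Pi Compute Module 3/3+)
GOOS=linux GOARCH=arm GOARM=7 go build -o main .

# or, for C code, with the GNU ARM hard-float cross toolchain installed:
arm-linux-gnueabihf-gcc -o main main.c
```

The resulting binary can then be copied to a node with `scp` and run directly.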
Turing Pi cluster management bus configuration, security and internal devices
In order to install Prometheus, NATS, and Cassandra using Kubernetes, we first need to create Persistent Volumes.
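A minimal sketch of what such a Persistent Volume could look like, assuming local hostPath storage on a node (the name, capacity, and path are illustrative):

```shell
# create a simple hostPath PersistentVolume (name, size, and path are examples)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/prometheus
EOF
```

Each stateful workload (Prometheus, NATS, Cassandra) would get its own volume, matched by a corresponding PersistentVolumeClaim.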
Use Helm & Tiller on the Turing Pi Cluster
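Since Tiller is mentioned, this targets Helm v2, where installing Tiller into the cluster typically looks like this (the service-account setup follows the standard Helm v2 RBAC pattern):

```shell
# Helm v2: create a service account for Tiller and deploy it
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller

# verify the Tiller pod is running
kubectl -n kube-system get pods -l app=helm
```

Note that Helm 3 removed Tiller entirely, so these steps only apply to Helm v2 clusters.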
In order for the nodes and pods to interface with each other across the cluster, a pod network is needed. This post describes how I deployed Flannel, accounting for the fact that some of the nodes have multiple interfaces (wlan0 and eth0).
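On multi-homed nodes, flanneld can be pinned to a specific interface with its `--iface` flag. A sketch of the relevant change to the standard kube-flannel DaemonSet (the manifest URL points at the upstream Flannel repository; verify it against the version you deploy):

```shell
# download the standard Flannel manifest
curl -LO https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# edit the flanneld container args in the DaemonSet to force eth0, e.g.:
#   args:
#   - --ip-masq
#   - --kube-subnet-mgr
#   - --iface=eth0
kubectl apply -f kube-flannel.yml
```

Without `--iface`, flanneld picks the interface of the default route, which on a node with both wlan0 and eth0 may not be the one the cluster traffic should use.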
During some of my manipulations of the SD card's partition table, I ended up corrupting both my SD card and my Win32DiskImager backup. Moreover, if your SD card is 32 GB, restoring from a backup takes around 30 minutes. Hence the idea of building more resiliency into the cluster: recreating a node from scratch should not take more than 10 minutes. The proposed procedure is still rather long because I have not yet pushed the HypriotOS approach to its limit, i.e. building a default SD image where cloud-init does 100% of the initialization work.
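As a sketch of that end goal, HypriotOS reads a cloud-init `user-data` file from the SD card's boot partition on first boot. Something like the following, written after flashing, could carry most of the per-node initialization (hostname, SSH key, and mount point are placeholders):

```shell
# write a cloud-init user-data file onto the flashed SD card's boot partition
# (hostname, SSH key, and mount point below are placeholders)
cat > /media/boot/user-data <<'EOF'
#cloud-config
hostname: node1
ssh_authorized_keys:
  - ssh-rsa AAAA... you@example.com
package_update: true
EOF
```

With all node-specific setup captured this way, rebuilding a node becomes flash, drop in the `user-data` file, and boot.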