Kubezilla is an ambitious project that aims to provide community members with an immersive learning experience in Kubernetes and related technologies. By building the largest Kubernetes Community Cluster, we aim to demonstrate the scalability and power of Kubernetes while fostering collaboration and knowledge sharing within the community. We plan to showcase the cluster during the Docker Bangalore Meetup event.
Why Kubezilla?
It’s a great opportunity for community members to learn, collaborate, and contribute around Kubernetes and related technologies. As a team, we will learn how a Kubernetes cluster is set up, how apps get deployed over the cloud, and what kinds of microservices can run on such a huge pool of cluster nodes. Community members will also learn how monitoring tools like Prometheus and Grafana can be deployed to fetch time-series metrics from this huge cluster of nodes. In a nutshell, it’s a 2-3 hour effort that will let every individual learn Kubernetes and understand its scalability.
Our GitHub Repo: https://github.com/collabnix/kubezilla500/tree/main
How does it work?
We invite contributors to help us build the largest Kubernetes Community Cluster by providing their FREE cloud instances or Raspberry Pi devices.
How do I join my worker node?
It’s simple. Follow the steps below:
Installing Docker
curl -sSL https://get.docker.com/ | sh
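Once the script finishes, it’s worth confirming the docker binary actually landed on your PATH before joining the cluster. A minimal sketch (the check_cmd helper is ours, not part of any tooling):

```shell
# Hypothetical helper: report whether a command is available on PATH.
check_cmd() {
  command -v "$1" >/dev/null 2>&1 && echo present || echo missing
}

check_cmd docker   # should print "present" after the install script succeeds
```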
Run the container below
Make sure to update the label below before running.
Very Important
Node can be node=cloud or node=rpi or node=jetson
name=<your_name>
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.4.5 \
  --server https://success.kubezilla.com \
  --token xg8sm7fgxgls7p52rcjbkjsszp2cb8l4gmfqpvxt48s65dcqhjvqww \
  --worker --label node=cloud --label name=kubezilla
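Since every contributor must change the labels, a small sketch that parameterizes the join command may help (the NODE_TYPE and NODE_NAME variable names are ours; the command itself is the one above):

```shell
# Set these two before running (node type: cloud, rpi, or jetson).
NODE_TYPE=${NODE_TYPE:-cloud}
NODE_NAME=${NODE_NAME:-kubezilla}

JOIN_CMD="docker run -d --privileged --restart=unless-stopped --net=host \
-v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
rancher/rancher-agent:v2.4.5 \
--server https://success.kubezilla.com \
--token xg8sm7fgxgls7p52rcjbkjsszp2cb8l4gmfqpvxt48s65dcqhjvqww \
--worker --label node=$NODE_TYPE --label name=$NODE_NAME"

echo "$JOIN_CMD"            # review the final command first
# sudo sh -c "$JOIN_CMD"    # uncomment to actually join the cluster
```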
That’s it. Open up https://success.kubezilla.com and log in with kubezilla as the username and kubezilla as the password.
Contribution Proposal
- Create a pull request if you’re interested in contributing your FREE cloud instance (AWS/Azure/GCP/DO).
- You can also contribute your Raspberry Pi if you know how to connect it to this cluster.
- Please include your full name, Twitter handle, and company name.
- Please note that the node specification for this run is XGB of RAM with X vCore. We’re sorry, but 512 MB will not be enough for our testing.
What are the minimum requirements for a node?
- 2 GB or more of RAM per machine (any less will leave little room for your apps)
- 2 CPUs or more
- Full network connectivity between all machines in the cluster (public or private network is fine)
- Unique hostname, MAC address, and product_uuid for every node. See here for more details.
- Certain ports are open on your machines. See here for more details.
- Swap disabled. You MUST disable swap in order for the kubelet to work properly.
- TCP Inbound 10250 open for Kubelet API
- TCP Inbound 30000-32767 open for NodePort Services
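The hardware checks above can be sketched as a tiny pre-flight helper (check_node is hypothetical; it only encodes the RAM, CPU, and swap rules listed here):

```shell
# Hypothetical pre-flight check: CPU count, RAM in MB, swap in KB.
check_node() {
  cpus=$1; mem_mb=$2; swap_kb=$3
  if [ "$cpus" -ge 2 ] && [ "$mem_mb" -ge 2048 ] && [ "$swap_kb" -eq 0 ]; then
    echo OK
  else
    echo FAIL
  fi
}

# On a real node, feed it live values:
#   check_node "$(nproc)" "$(free -m | awk '/^Mem:/{print $2}')" \
#              "$(awk '/SwapTotal/{print $2}' /proc/meminfo)"
check_node 2 4096 0   # a 2 vCPU / 4 GB node with swap off -> OK
check_node 1 512 0    # a 512 MB single-core node -> FAIL
```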
Size of Master & Master Components
On GCE/Google Kubernetes Engine, and AWS, kube-up automatically configures the proper VM size for your master depending on the number of nodes in your cluster. On other providers, you will need to configure it manually.
For reference, the sizes we use on GCE are:
- 1-5 nodes: n1-standard-1
- 6-10 nodes: n1-standard-2
- 11-100 nodes: n1-standard-4
- 101-250 nodes: n1-standard-8
- 251-500 nodes: n1-standard-16
- more than 500 nodes: n1-standard-32
And the sizes we use on AWS are:
- 1-5 nodes: m3.medium
- 6-10 nodes: m3.large
- 11-100 nodes: m3.xlarge
- 101-250 nodes: m3.2xlarge
- 251-500 nodes: c4.4xlarge
- more than 500 nodes: c4.8xlarge
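The GCE sizing table above can be expressed as a small lookup helper (gce_master_size is our sketch, not a kube-up feature):

```shell
# Map a node count to the GCE master size from the table above.
gce_master_size() {
  nodes=$1
  if   [ "$nodes" -le 5 ];   then echo n1-standard-1
  elif [ "$nodes" -le 10 ];  then echo n1-standard-2
  elif [ "$nodes" -le 100 ]; then echo n1-standard-4
  elif [ "$nodes" -le 250 ]; then echo n1-standard-8
  elif [ "$nodes" -le 500 ]; then echo n1-standard-16
  else                            echo n1-standard-32
  fi
}

gce_master_size 100   # -> n1-standard-4
gce_master_size 501   # -> n1-standard-32
```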
Ports required to be open on Worker Nodes
| Protocol | Port | Purpose |
|---|---|---|
| TCP (inbound) | 10250 | Kubelet API |
| TCP (inbound) | 10255 | Read-only Kubelet API |
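One way to open these ports (plus the NodePort range from the requirements section above) is with ufw. This sketch just prints the rules for review rather than applying them:

```shell
# Generate (but don't apply) ufw rules for the worker-node ports.
RULES=$(for port in 10250 10255 30000:32767; do
  echo "sudo ufw allow ${port}/tcp"
done)
echo "$RULES"   # run each printed line manually once you've reviewed it
```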
Contributors
| Name | Company | Number of Nodes Expected to Contribute | Machine Type |
|---|---|---|---|
| @ajeetsraina | Collabnix | 10 | |
| @anmolnagpal (github) (linkedin) | CloudDrove Inc. | 45 | t3.medium/t3a.medium (2 vCPUs, 4 GB) |
| @cube8021 | Rancher Labs | 5 | |
| @dostiharise (github) (linkedin) | Alvyl Consulting | 32 | t3.medium/t3a.medium (2 vCPUs, 4 GB) |
| @dostiharise (github) (linkedin) | Alvyl Consulting | 3 | Raspberry Pis 3B+/4B |
| @vinayaggarwal | Dellemc | 2 | |
| @josanabr | Personal | 2 | Raspberry Pis 3B+/4B |
| @kvenu | Personal | 2 | |
| @MeenachiSundaram (github) (linkedin) | Personal | 2 | Raspberry Pis 4B |
| @stefscherer | Docker Inc. | 2 | Raspberry Pis 3B+ |
| @stefscherer | Docker Inc. | 10 | Azure (5x D8s, 5x D32s) |
| @ginigangadharan | |||
| @omkarjoshi |
Beginner’s Guide
If you’re an individual and it’s your first time joining KubeZilla, we encourage you not to contribute more than 50 nodes.
How to setup Master Node for Kubezilla
Configuring the network
By default, if you run the ifconfig command on your cloud instance, it shows the private IP address (not the public one). You must add the public IP to the interface for kubeadm init to work. Caution: if you don’t configure the public IP on the interface, kubeadm init --apiserver-advertise-address=<publicIP> will not succeed.
Follow the below steps to configure public IP:
Tested Environment:
- Ubuntu 16.04/18.04/20.04
Step #1: Access your Cloud Instance via SSH
Step #2: Create the configuration file and open an editor
touch /etc/netplan/60-floating-ip.yaml
nano /etc/netplan/60-floating-ip.yaml
Step #3: Paste the following configuration into the editor, replacing <YOUR PUBLIC IP> with your actual public IP.
IPv4:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - <YOUR PUBLIC IP>/32
Step #4: Restart your network. Caution: this will reset your network connection.
sudo netplan apply
Step #5: Verify that the private IP on the interface has been replaced by the public IP
$ ifconfig ens4
ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1460
inet 34.75.235.117 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::4001:aff:fe8e:3 prefixlen 64 scopeid 0x20<link>
ether 42:01:0a:8e:00:03 txqueuelen 1000 (Ethernet)
RX packets 487336 bytes 175945336 (175.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 409437 bytes 278796504 (278.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Step #6: Initialize the cluster with kubeadm
sudo kubeadm init --apiserver-advertise-address=34.75.235.117 --control-plane-endpoint=34.75.235.117 --upload-certs --pod-network-cidr 10.5.0.0/16
Step #7: Kubeconfig Configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Step #8: Configuring Kube Router
We tested Calico and Weave, but they kept crashing. Kube Router looks to be the perfect solution.
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
Step #9: Verifying Kube components
$ sudo kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
Step #10: Ensuring kube router is up and running
$ sudo kubectl get po -A | grep kube
kube-system coredns-66bff467f8-m2k4d 1/1 Running 0 90m
kube-system coredns-66bff467f8-s55wl 1/1 Running 0 90m
kube-system etcd-worker1 1/1 Running 0 90m
kube-system kube-apiserver-worker1 1/1 Running 0 90m
kube-system kube-controller-manager-worker1 1/1 Running 0 90m
kube-system kube-proxy-bmz5t 1/1 Running 0 81m
kube-system kube-proxy-czxns 1/1 Running 0 87m
kube-system kube-proxy-f2lx9 1/1 Running 0 90m
kube-system kube-proxy-fmtv5 1/1 Running 0 78m
kube-system kube-proxy-gh9jf 1/1 Running 0 44m
kube-system kube-proxy-kdbv6 1/1 Running 0 52m
kube-system kube-proxy-sswqx 1/1 Running 0 47m
kube-system kube-router-lnrkq 1/1 Running 0 90m
kube-system kube-router-mmf95 1/1 Running 0 47m
kube-system kube-router-nmfhc 1/1 Running 0 78m
kube-system kube-router-pxkvt 1/1 Running 0 52m
kube-system kube-router-q7lq6 1/1 Running 0 81m
kube-system kube-router-rx4bm 1/1 Running 0 44m
kube-system kube-router-xkdpd 1/1 Running 0 87m
kube-system kube-scheduler-worker1 1/1 Running 0 90m
Step #11: Verify the nodes
$ sudo kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   96m   v1.18.4
Goals
- This is the first collaborative project powered by the Collabnix Slack community to form a 100+ node cluster
- This is being planned on the latest Kubernetes version
- Networking: we will create a large /20 subnet and try to assign as many IP addresses as possible to containers distributed across the nodes. We expect around ~1k IP addresses to be assigned, with the workload application still working fine.
- Once the 100+ node K8s cluster is set up, we will run an application to observe its performance
- We will be using Prometheus as well as Grafana for visualisation
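As a sanity check on the /20 address budget mentioned in the goals above, a line of shell arithmetic confirms it comfortably covers ~1k addresses:

```shell
# A /20 subnet carries 2^(32-20) addresses in total.
TOTAL=$(( 1 << (32 - 20) ))
echo "$TOTAL"   # 4096, comfortably above the ~1k addresses we expect to assign
```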
Results
All experimental results will be provided publicly for all of you to analyze, write blogs about, or even use as input for further development of your own commercial projects. Please feel free to use them.
Who’s behind Kubezilla?
Kubezilla is a non-profit project organized by the Collabnix community. If you want to be part of the organization team or support through sponsorship, please send us a DM in Gitter.
Having difficulty raising a PR to contribute here?
Add your nodes here.