Recently, my company abandoned its local data centre and migrated all services to a k8s cluster at a public cloud provider. Some problems emerged afterwards, and I was the one dealing with them. One of them: we also migrated the development environment to the k8s cluster, but the developers on the R&D team still need to access that environment directly. Let me explain why.
We designed our app architecture on a microservice model, and my colleagues need access to the other modules and middleware, such as Zookeeper and Kafka, while debugging in the IDE. You may ask: why don't they just access the services in k8s via an ingress? Usually, that would solve the problem. But in this case, our services use Nacos as the service discovery/registry centre and Apache Dubbo as the RPC framework, so services invoke each other directly by IP address and port, which makes it hard to route module-to-module traffic through a k8s Service or ingress.
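To make this concrete, here is roughly what the registration side looks like in a Dubbo application (a minimal sketch; the Nacos address and port values are hypothetical, and the property names come from Dubbo's Spring Boot integration):

```properties
# application.properties (hypothetical values)
# The provider registers its own IP and port in Nacos:
dubbo.registry.address=nacos://192.168.0.10:8848
dubbo.protocol.name=dubbo
dubbo.protocol.port=20880
```

A consumer then looks up the provider's IP and port in Nacos and calls that address directly, bypassing any k8s Service or ingress in between.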
Given that, I first needed to connect the on-premises network and the cloud VPC, which I did with the IPsec VPN protocol; that's written up in another post. Now you know the background of my work. Still, I wasn't going to implement this solution in the environment actually in use: the entire solution was deployed in an experimental environment, because some work processes had to finish first. Anyway, the result is not my point; the whole process is what's worthwhile.
0x0001 The Experimental Environment
As I said above, I deployed all of this in an experimental environment, which I set up with some virtual machines in VirtualBox. So let me introduce the network structure and operating system used in my environment.
Here’s my IP assignment:
- k8s node CIDR: 192.168.56.0/24
- k8s master0 IP: 192.168.56.4
- k8s slave0 IP: 192.168.56.5
- k8s cluster-IP CIDR:
- k8s pod-IP CIDR:
- On-Premises CIDR:
Here’s my OS Configuration:
- System Version:
Ubuntu 22.04.1 LTS
- Linux Kernel Version:
Besides what I mentioned above, I also installed the eBPF tools on each Ubuntu VM. Here's the command I used for the installation:
```shell
sudo apt-get install -y make clang llvm libelf-dev libbpf-dev bpfcc-tools libbpfcc-dev linux-tools-$(uname -r) linux-headers-$(uname -r)
```
I also attached two NICs to each VM: one on a NAT network, and the other on a Host-Only network with an IP configured within the k8s node CIDR range.
Here’s a network configuration example from one of the VMs.
```
root@k8smaster0:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp0s3:
      dhcp4: true
    enp0s8:
      dhcp4: false
      addresses: [192.168.56.4/24]
      nameservers:
        addresses: [184.108.40.206,220.127.116.11]
  version: 2
```
After installing Ubuntu and configuring the network on the virtual machines, it's time to install Kubernetes.
First, I mapped each node's hostname to its IP address in `/etc/hosts` on every node. This is not strictly necessary, but I strongly recommend it, because by default the kubelet uses the IP resolved from the hostname as the node IP when registering. Otherwise, you need to pass the `--node-ip=x.x.x.x` parameter when starting the kubelet. For example, suppose I want to register a worker node with the hostname k8sslave0 that has two NICs: the first with IP 10.0.2.15 and the default route, the second with IP 192.168.56.5. To use 192.168.56.5 as the registered node IP, I configure `/etc/hosts` as follows:
```
127.0.0.1 localhost
127.0.1.1 ubuntu
192.168.56.5 k8sslave0
```
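To sanity-check which IP the kubelet will pick, you can resolve the hostname the same way it does. A minimal sketch (written against a temporary sample copy of the hosts entries above so it runs anywhere; on a real node you would simply run `getent hosts k8sslave0`):

```shell
# Write a sample copy of the /etc/hosts entries shown above.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
127.0.1.1 ubuntu
192.168.56.5 k8sslave0
EOF

# The first field of the line whose hostname matches is the IP the kubelet
# will register by default.
awk '$2 == "k8sslave0" { print $1 }' /tmp/hosts.sample
# → 192.168.56.5
```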
After configuring the hosts file on each node, we need to install a container runtime on each node so pods can run there. As you may know, many choices exist, and many people prefer to use Docker Engine directly as the runtime. I used containerd instead, because Kubernetes no longer supports Docker Engine directly after v1.23. You can check Kubernetes's official documentation for more details.
Here are the commands to install containerd on Ubuntu. You can find all of them on Docker's official site.
```shell
# Uninstall old versions
sudo apt-get remove docker docker-engine docker.io containerd runc

# Set up the repository
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# Add Docker's official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Use the following command to set up the repository:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install containerd.io
```
Then we need to configure systemd as the cgroup driver. This is optional, but systemd is recommended if you use cgroup v2. Here are the commands and settings to enable it:
```shell
# If we use package management tools like apt or yum,
# we need to generate the default configuration first.
containerd config default > /etc/containerd/config.toml

# Then, modify config.toml to enable systemd:
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#   ...
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true

# We will use kubeadm 1.25.4 to install k8s later,
# so we need to change the sandbox image configured in config.toml:
# [plugins."io.containerd.grpc.v1.cri"]
#   sandbox_image = "registry.k8s.io/pause:3.8"

sudo systemctl restart containerd
```
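Before restarting containerd, it's worth double-checking that both edits actually landed in the file. A minimal sketch (run here against a small sample copy so it's self-contained; on a real node, point grep at /etc/containerd/config.toml instead):

```shell
# Sample copy containing just the two settings we changed (hypothetical file).
cat > /tmp/config.toml.sample <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Both lines should show up; if either is missing, containerd will fall back
# to its defaults (cgroupfs driver, older pause image).
grep -E 'SystemdCgroup|sandbox_image' /tmp/config.toml.sample
```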
We also need to check that the CNI plugins are installed. Usually, they are installed along with the container runtime. By default, the CNI binaries are located in `/opt/cni/bin`, and the configuration directory is `/etc/cni/net.d`. Of course, both directories are configurable, and the settings can be found in the containerd configuration file.
```shell
cat /etc/containerd/config.toml
## Here's the configuration related to the CNI
# ...
# [plugins."io.containerd.grpc.v1.cri".cni]
#   bin_dir = "/opt/cni/bin"
#   conf_dir = "/etc/cni/net.d"
#   conf_template = ""
#   ip_pref = ""
#   max_conf_num = 1
```
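For reference, a file in /etc/cni/net.d is a JSON network configuration list. Here's a minimal sketch of what one can look like (the name, bridge, and subnet below are hypothetical; when we install Cilium later, it will write its own file into this directory):

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
  ]
}
```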
At this point, all of our preparation work for installing Kubernetes is done. In my next post, I will continue with how to install Kubernetes and configure Cilium.