
Kubernetes and Calico development environment as easy as a flick

I became an active member of the Calico community, so I had to build my own development environment from zero. It wasn't trivial for several reasons, mainly because my machine runs macOS and not all Calico features are available on that operating system. The setup also makes sense on Linux hosts, because a node controller running locally can make changes to the system, which always carries some risk. The other big challenge was that I wanted to be able to start any version of Kubernetes and make changes in it alongside Calico; in other words, I had to prepare two tightly coupled environments. My idea was to create a Linux virtual machine, configure development environments for both projects inside the VM, and use VSCode's nice remote development feature for code editing.
This way the projects are hosted on the target operating system, I don't risk my own system, I don't have to deal with poor file system sync between host and guest, and I can still use my favorite desktop environment for editing. What could go wrong :) Let's start the journey.
First, I created the VM. I chose Canonical's Multipass because it creates lightweight VMs with Xhyve as the backend engine. Xhyve doesn't reserve CPU and RAM for the machine up front, so a VM only consumes the resources it actually needs. Multipass also supports cloud-init as a built-in provisioning tool, so we can describe the system in a nice YAML file and reproduce the environment any time.
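I'm not reproducing my full calico-cloud-init.yml here, but a minimal sketch of its shape could look like the following, assuming two users (kube and calico) in the sudo and docker groups with their SSH keys imported from GitHub; the package list and user details are illustrative, so adjust them to your own needs:
#cloud-config
package_update: true
packages:
  - build-essential
  - docker.io
  # plus a recent Go toolchain, installed however you prefer
groups:
  - docker
users:
  - name: kube
    shell: /bin/bash
    groups: [sudo, docker]
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_import_id: [gh:mhmxs]
  - name: calico
    shell: /bin/bash
    groups: [sudo, docker]
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_import_id: [gh:mhmxs]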
Please replace gh:mhmxs with your own ssh key import method!
Now we are ready to launch the virtual machine:
multipass launch -c 6 -d 100G -m 12G -n calico --cloud-init ./calico-cloud-init.yml
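Provisioning takes a few minutes; you can watch the VM's state and wait for cloud-init to finish before moving on (both commands below are optional checks):
multipass info calico
# Block until cloud-init has finished provisioning
multipass exec calico -- cloud-init status --wait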
The next step is to add the kube and calico projects to the SSH config. This step is optional, but it will come in very handy later during the VSCode configuration:
export CALICO_IP=$(multipass list | grep calico | awk '{print $3}')
cat >> ~/.ssh/config << EOF
Host kube
    HostName $CALICO_IP
    User kube
    
Host calico
    HostName $CALICO_IP
    User calico
EOF
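A quick check that both entries work before moving on:
ssh kube hostname
ssh calico hostname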
Each project has its own separate user, so environment variables can easily be customized per project in that user's ~/.bash_profile. At this point we can log in to both projects via SSH from the host machine; let's start with Kubernetes first:
ssh kube
mkdir -p src/k8s.io
cd src/k8s.io
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
hack/install-etcd.sh
# If you want a single ETCD server for Calico
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir $(mktemp -d) --listen-client-urls http://127.0.0.1:2379 --debug > "/tmp/etcd.log"
# If you need a full Kubernetes
ALLOW_PRIVILEGED=1 NET_PLUGIN=cni API_HOST=0.0.0.0 hack/local-up-cluster.sh
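local-up-cluster.sh prints instructions for talking to the freshly started cluster; from what I've seen they boil down to something like the following, but copy the exact paths from the script's output:
# In another terminal on the kube user, from the kubernetes checkout:
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
cluster/kubectl.sh get nodes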
That's easy. Now we have a running Kubernetes that uses a local CNI plugin as its network provider. The only problem is that we don't have one on this machine yet. So open another terminal and SSH into the Calico project:
ssh calico
mkdir -p src/github.com/projectcalico
cd src/github.com/projectcalico
git clone https://github.com/projectcalico/bird.git
git clone https://github.com/projectcalico/calico.git
git clone https://github.com/projectcalico/calicoctl.git
git clone https://github.com/projectcalico/cni-plugin.git
git clone https://github.com/projectcalico/confd.git
git clone https://github.com/projectcalico/felix.git
git clone https://github.com/projectcalico/kube-controllers.git
git clone https://github.com/projectcalico/libcalico-go.git
git clone https://github.com/projectcalico/node.git
git clone https://github.com/projectcalico/pod2daemon.git
git clone https://github.com/projectcalico/typha.git
(cd calicoctl; make all)
(cd cni-plugin; make all)
(cd node; make image)
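Before wiring everything together, it's worth verifying that the builds produced the artifacts the next steps rely on (the paths match the symlinks below):
ls calicoctl/bin/calicoctl-linux-amd64
ls cni-plugin/bin/amd64/
docker images | grep calico/node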
I created some symlinks to make all the scattered binaries reachable:
sudo ln -s /home/kube/src/k8s.io/kubernetes/_output/local/go/bin/kubectl /opt/cni/bin/kubectl
sudo ln -s /home/calico/src/github.com/projectcalico/calicoctl/bin/calicoctl-linux-amd64 /opt/cni/bin/calicoctl
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/bandwidth /opt/cni/bin/bandwidth
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/calico /opt/cni/bin/calico
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/calico-ipam /opt/cni/bin/calico-ipam
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/flannel /opt/cni/bin/flannel
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/host-local /opt/cni/bin/host-local
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/loopback /opt/cni/bin/loopback
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/portmap /opt/cni/bin/portmap
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/tuning /opt/cni/bin/tuning
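A final listing shows whether every link resolves:
ls -l /opt/cni/bin/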
In the last step we apply the RBAC rules and spin up the Calico node controller itself:
kubectl apply -f https://docs.projectcalico.org/v3.15/manifests/rbac/rbac-etcd-calico.yaml 
sudo ETCD_ENDPOINTS=http://127.0.0.1:2379 $(which calicoctl) node run --node-image=calico/node:latest-amd64
After a while the Kubernetes pods should reach the "Running" state, and we can put any workload on the cluster:
kubectl get po -A
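Beyond the pod list, Calico has its own view of the node's health, and a throwaway Deployment is a simple way to see the CNI plugin wiring up pod networking (the nginx Deployment is just an illustration):
# BGP status of the local Calico node
sudo /opt/cni/bin/calicoctl node status
# Schedule something through the Calico CNI plugin
kubectl create deployment nginx --image=nginx
kubectl get po -o wide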
To edit the projects, open VSCode on the host machine and install the Remote Development extension. The extension parses the SSH config file, so our projects will show up in the remote host browser. It also detects the remote project type and helps sync your local extensions to the remote.
If you plan to edit the Kubernetes source too, I suggest walking through Paulo Gomes's nice tutorial on how to keep VSCode from choking on the Kubernetes source.
If you have any further questions, please feel free to reach out to me; I'm glad to help.
