
Kubernetes and Calico development environment as easy as a flick

I became an active member of the Calico community, so I had to build my own development environment from scratch. It wasn't trivial for several reasons, mainly because my machine runs macOS and not all of Calico's features are available on my main operating system. The setup also makes sense on Linux hosts, because if the node controller runs locally it can make changes to the host system, which always carries some risk. The other big challenge was that I wanted to be able to start any version of Kubernetes and make changes to it alongside Calico; in other words, I had to prepare two tightly coupled development environments. My idea was to create a virtual machine with Linux on it, configure development environments for both projects inside the VM, and use VSCode's nice remote development feature for code editing.
This way the projects are hosted on the target operating system, I don't risk my own system, I don't have to deal with poor file system sync between host and guest, and I can still use my favorite desktop environment for my editing tasks. What could go wrong :) Let's start the journey.
First, I created the VM. I preferred Canonical's Multipass because it can create lightweight VMs with xhyve as the backend hypervisor. Xhyve doesn't allocate CPU and RAM resources to the machine up front, so a VM only uses as much RAM as it actually needs. Multipass also supports cloud-init as a built-in provisioning tool, so we can describe our system in a nice YAML file and reproduce the environment at any time.
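The full calico-cloud-init.yml is not reproduced here; the snippet below is only a minimal, assumed sketch of what such a file could contain (two users with passwordless sudo, an SSH key imported from GitHub, and Docker plus a Go toolchain for the builds later in this post), so adapt it to your own needs:

#cloud-config
# Hypothetical calico-cloud-init.yml sketch -- not the exact file, just an illustration.
users:
  - name: kube
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_import_id:
      - gh:mhmxs          # SSH key imported from GitHub
  - name: calico
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_import_id:
      - gh:mhmxs
packages:
  - build-essential
  - docker.io
  - golang-go             # or install a newer Go toolchain manually if the build requires it
runcmd:
  # docker group exists only after the package install, so add the users here
  - usermod -aG docker kube
  - usermod -aG docker calico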
Please replace gh:mhmxs with your own ssh key import method!
Now we are ready to launch the virtual machine:
multipass launch -c 6 -d 100G -m 12G -n calico --cloud-init ./calico-cloud-init.yml
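The first launch takes a while because the Ubuntu image has to be downloaded. Once it finishes, we can check the state and the IP address of the machine:

multipass info calico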
The next step is to add the kube and calico projects to the SSH config. This step is optional, but it will come in handy later during the VSCode configuration:
export CALICO_IP=$(multipass list | grep calico | awk '{print $3}')
cat >> ~/.ssh/config << EOF
Host kube
    HostName $CALICO_IP
    User kube
    
Host calico
    HostName $CALICO_IP
    User calico
EOF
Each project has its own user, so environment variables can easily be customized in each user's ~/.bash_profile. At this point we can log in to both projects via SSH from the host machine; let's start with Kubernetes first:
ssh kube
mkdir -p src/k8s.io
cd src/k8s.io
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
hack/install-etcd.sh
# If you only want a standalone etcd server for Calico
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir $(mktemp -d) --listen-client-urls http://127.0.0.1:2379 --debug > "/tmp/etcd.log"
# If you need a full Kubernetes cluster
ALLOW_PRIVILEGED=1 NET_PLUGIN=cni API_HOST=0.0.0.0 hack/local-up-cluster.sh
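If you chose the full cluster, local-up-cluster.sh stays in the foreground and prints how to reach the API server once it's ready. In a second terminal (still as the kube user), something like the following should work; the kubeconfig path below is the script's usual default, so adjust it if your version prints a different one:

# in a second terminal, from ~/src/k8s.io/kubernetes
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig   # default path printed by local-up-cluster.sh
./cluster/kubectl.sh get nodes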
That was easy. Now we have a running Kubernetes, and it uses the local CNI plugins as its network provider. The only problem is that we don't have any CNI plugins on this machine yet. So open another terminal and SSH into the Calico project:
ssh calico
mkdir -p src/github.com/projectcalico
cd src/github.com/projectcalico
git clone https://github.com/projectcalico/bird.git
git clone https://github.com/projectcalico/calico.git
git clone https://github.com/projectcalico/calicoctl.git
git clone https://github.com/projectcalico/cni-plugin.git
git clone https://github.com/projectcalico/confd.git
git clone https://github.com/projectcalico/felix.git
git clone https://github.com/projectcalico/kube-controllers.git
git clone https://github.com/projectcalico/libcalico-go.git
git clone https://github.com/projectcalico/node.git
git clone https://github.com/projectcalico/pod2daemon.git
git clone https://github.com/projectcalico/typha.git
(cd calicoctl; make all)
(cd cni-plugin; make all)
(cd node; make image)
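The Calico makefiles typically build inside Docker containers, so the first run can take quite some time. When they finish, the artifacts that the next step links to should be in place:

ls calicoctl/bin/ cni-plugin/bin/amd64/
docker images | grep calico/node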
I created some symlinks to make all the scattered binaries reachable from one place:
sudo ln -s /home/kube/src/k8s.io/kubernetes/_output/local/go/bin/kubectl /opt/cni/bin/kubectl
sudo ln -s /home/calico/src/github.com/projectcalico/calicoctl/bin/calicoctl-linux-amd64 /opt/cni/bin/calicoctl
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/bandwidth /opt/cni/bin/bandwidth
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/calico /opt/cni/bin/calico
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/calico-ipam /opt/cni/bin/calico-ipam
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/flannel /opt/cni/bin/flannel
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/host-local /opt/cni/bin/host-local
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/loopback /opt/cni/bin/loopback
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/portmap /opt/cni/bin/portmap
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/tuning /opt/cni/bin/tuning
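A quick listing shows whether every link points to an existing binary:

ls -l /opt/cni/bin/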
In the last step we apply the RBAC rules and spin up the Calico node controller itself:
kubectl apply -f https://docs.projectcalico.org/v3.15/manifests/rbac/rbac-etcd-calico.yaml 
sudo ETCD_ENDPOINTS=http://127.0.0.1:2379 $(which calicoctl) node run --node-image=calico/node:latest-amd64
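The Calico node itself can also be checked directly; on a single-node setup it should report that the Calico process is running and that no BGP peers were found, which is expected here:

sudo $(which calicoctl) node status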
After a while the Kubernetes pods should reach the "Running" state, and then we can put any workload on the cluster:
kubectl get po -A
To edit the projects, open VSCode on the host machine and install the Remote Development extension pack. The extension parses the SSH config file, so our projects will show up in the remote host browser. It also detects the remote project type and helps sync the local extensions to the remote machine.
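If you prefer the command line, a remote folder can also be opened directly from the host; the URI below assumes the calico SSH host from the config above and uses the node repository as an example:

code --folder-uri "vscode-remote://ssh-remote+calico/home/calico/src/github.com/projectcalico/node"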
If you plan to edit the Kubernetes source too, I suggest walking through Paulo Gomes's nice tutorial about how not to kill VSCode with the Kubernetes source.
If you have any further questions, please feel free to reach out to me; I'm glad to help.
