I became an active member of the Calico community, so I had to build my own development environment from zero. It wasn't trivial, for many reasons, but mainly because I run macOS on my machine and not all of Calico's features are available on my operating system. The setup also makes sense on Linux hosts, because if the node controller runs locally it can make changes to the system, which always carries some risk. The other big challenge was that I wanted to be able to run any version of Kubernetes and make changes to it alongside Calico; in other words, I had to prepare two tightly coupled development environments. My idea was to create a virtual machine with Linux on it, configure development environments for both projects in the VM, and use VSCode's nice remote development feature for code editing.
This way the projects are hosted on the target operating system, I don't put my own system at risk, and I don't have to deal with poor file system sync between host and guest, while I can still use my favorite desktop environment for my editing tasks. What could go wrong :) Let's start the journey.
First, I created the VM. I chose Canonical's Multipass because it can create lightweight VMs with Xhyve as the backend engine. Xhyve doesn't allocate CPU and RAM resources to the machine up front, so VMs use only as much RAM as they actually need. It also supports cloud-init as a built-in provisioning tool, so we can describe our system in a nice YAML file and reproduce the environment at any time.
Please replace gh:mhmxs in the cloud-init file below with your own SSH key import method!
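Here is a minimal sketch of what calico-cloud-init.yml could look like; the package list, the user layout, and every other value in it are my assumptions, so tailor it to your needs:

```
cat > calico-cloud-init.yml << EOF
# Assumed layout: one dedicated user per project, SSH keys imported from
# GitHub, plus Docker and the Go toolchain for building both projects.
package_update: true
packages:
  - build-essential
  - git
  - docker.io
  - golang
users:
  - default
  - name: kube
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: docker
    ssh_import_id: [gh:mhmxs]
  - name: calico
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: docker
    ssh_import_id: [gh:mhmxs]
EOF
```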
Now we are ready to launch the virtual machine:
```
multipass launch -c 6 -d 100G -m 12G -n calico --cloud-init ./calico-cloud-init.yml
```

The next step is to add the kube and calico projects to the SSH config. This step is optional, but it will come in very handy later during the VSCode configuration:
```
export CALICO_IP=$(multipass list | grep calico | awk '{print $3}')
cat >> ~/.ssh/config << EOF
Host kube
  HostName $CALICO_IP
  User kube

Host calico
  HostName $CALICO_IP
  User calico
EOF
```

Each project has its own separate user, so environment variable customization can be done easily in the ~/.bash_profile file.
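For example, a Go workspace setup in each user's ~/.bash_profile could look like the sketch below; the exact variables are my assumption based on where the sources get cloned, so adjust them to your toolchain:

```
# Assumed ~/.bash_profile content for the kube and calico users: a classic
# GOPATH layout, since the sources live under $HOME/src in this setup.
export GOPATH=$HOME
export PATH=$GOPATH/bin:/usr/local/go/bin:$PATH
```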
At this point we can log in to both projects via SSH from the base machine. Let's start with Kubernetes first:

```
ssh kube
mkdir -p src/k8s.io
cd src/k8s.io
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
hack/install-etcd.sh
# install-etcd.sh places the etcd binary under third_party/etcd
export PATH=$PWD/third_party/etcd:$PATH

# If you want a single etcd server for Calico
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir $(mktemp -d) --listen-client-urls http://127.0.0.1:2379 --debug > "/tmp/etcd.log"

# If you need a full Kubernetes
ALLOW_PRIVILEGED=1 NET_PLUGIN=cni API_HOST=0.0.0.0 hack/local-up-cluster.sh
```

That's easy. Now we have a running Kubernetes, and it uses the local CNI plugin as its network provider.
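A quick sanity check before moving on: local-up-cluster.sh prints the kubeconfig it generated, which as far as I know lands at /var/run/kubernetes/admin.kubeconfig, but verify it against the script's actual output:

```
# Point kubectl at the freshly started local cluster; the path below is
# the one local-up-cluster.sh printed for me, double-check yours.
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
kubectl get nodes
```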
The only problem is that we don't have any CNI plugin on this machine yet, so open another terminal and SSH into the Calico project:

```
ssh calico
mkdir -p src/github.com/projectcalico
cd src/github.com/projectcalico
git clone https://github.com/projectcalico/bird.git
git clone https://github.com/projectcalico/calico.git
git clone https://github.com/projectcalico/calicoctl.git
git clone https://github.com/projectcalico/cni-plugin.git
git clone https://github.com/projectcalico/confd.git
git clone https://github.com/projectcalico/felix.git
git clone https://github.com/projectcalico/kube-controllers.git
git clone https://github.com/projectcalico/libcalico-go.git
git clone https://github.com/projectcalico/node.git
git clone https://github.com/projectcalico/pod2daemon.git
git clone https://github.com/projectcalico/typha.git
(cd calicoctl; make all)
(cd cni-plugin; make all)
(cd node; make image)
```

I created some symlinks to reach all of the scattered binaries:
```
sudo ln -s /home/kube/src/k8s.io/kubernetes/_output/local/go/bin/kubectl /opt/cni/bin/kubectl
sudo ln -s /home/calico/src/github.com/projectcalico/calicoctl/bin/calicoctl-linux-amd64 /opt/cni/bin/calicoctl
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/bandwidth /opt/cni/bin/bandwidth
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/calico /opt/cni/bin/calico
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/calico-ipam /opt/cni/bin/calico-ipam
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/flannel /opt/cni/bin/flannel
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/host-local /opt/cni/bin/host-local
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/loopback /opt/cni/bin/loopback
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/portmap /opt/cni/bin/portmap
sudo ln -s /home/calico/src/github.com/projectcalico/cni-plugin/bin/amd64/tuning /opt/cni/bin/tuning
```
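One thing worth calling out: kubelet also needs a CNI network configuration to pick Calico up. The file below is only a hypothetical minimal sketch, assuming the etcd datastore on 127.0.0.1:2379; the file name and every value in it are my assumptions, not something the projects generate for you:

```
# Hypothetical minimal CNI config so kubelet finds the Calico plugin;
# the datastore settings mirror the etcd instance started earlier.
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/10-calico.conflist << EOF
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "etcd_endpoints": "http://127.0.0.1:2379",
      "ipam": { "type": "calico-ipam" },
      "kubernetes": { "kubeconfig": "/var/run/kubernetes/admin.kubeconfig" }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF
```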
In the last step we apply the RBAC rules and spin up the Calico node controller itself:

```
kubectl apply -f https://docs.projectcalico.org/v3.15/manifests/rbac/rbac-etcd-calico.yaml
sudo ETCD_ENDPOINTS=http://127.0.0.1:2379 $(which calicoctl) node run --node-image=calico/node:latest-amd64
```
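Before scheduling anything, it's worth verifying that the node came up and registered itself; something like:

```
# 'node status' needs root because it talks to the local BIRD socket.
sudo calicoctl node status
# List the nodes known to the etcd datastore.
sudo ETCD_ENDPOINTS=http://127.0.0.1:2379 calicoctl get nodes
```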
After a while the Kubernetes pods should reach the Running state, and then we can put any workload on the cluster:

```
kubectl get po -A
```

To edit the projects, open your VSCode editor on the host machine and install the Remote Development plugin. The plugin parses the SSH config file, so our projects will be available in the remote host browser. It also detects the remote project type and helps sync the local extensions to the remote machine.
If you plan to edit the Kubernetes source too, I suggest walking through Paulo Gomes's nice tutorial about how not to kill VSCode with the Kubernetes source.
If you have any further questions, please feel free to reach out to me. I'm glad to help.