
Advanced testing of Golang applications

Golang has a nice built-in framework for testing production code, and you can find many articles on how to use it. In this blog post, I don't want to talk too much about the basics: table-driven testing, how to generate code coverage, or how to detect race conditions. Instead, I would like to share my personal experiences with a real-world scenario.

Go is a relatively young and modern programming language on one side, and an old-fashioned procedural language on the other. You have to keep that fact in mind from the beginning when you are writing production code; otherwise, your program can easily become an untestable mess. In a procedural program, execution proceeds line by line and functions call other functions without any control over the dependencies. That makes it hard to unit test, because you end up testing the underlying functions too, which are side effects from the perspective of testing. If you are coming from the object-oriented world, it looks like everything is static. There are no dependency injection frameworks and no inversion of control (of course you can implement them if you want to abuse the language, but that's a different story). The $1000 question is how to write testable Go code, and the Dependency Injection Design Pattern (DIDP) is the answer. What does it mean? You have to pass all of your side effects as input parameters of your functions. It sounds trivial, but believe me, it's tricky to solve in a nice way, and so tiring to refactor into an existing project later on.

Without applying DIDP, the only way to test complex Go code properly is monkey patching. I found one library, called monkey, that does this. I don't suggest using it at all: changing running code on the fly is bad practice from a testing perspective. On the other hand, I can imagine situations where there is no other option.

Let's see how DIDP works. In my first example, I would like to open a file and then read its first byte. It looks something like this (without any proper error handling, to keep it simple):

We have many options for reorganizing the code to make it testable, and I would like to cover most of them later, but first let's see the easiest one. Functions are first-class citizens in Go, and our side effect above is a function call, so it is straightforward to pass the function in as an input parameter:
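A sketch of the refactored version, assuming we inject the whole read operation as a function (`firstByte` and `readFile` are illustrative names):

```go
package main

import (
	"fmt"
	"os"
)

// firstByte no longer knows how to read a file: the read operation is
// injected, so the side effect is fully controlled by the caller.
func firstByte(read func(path string) []byte, path string) byte {
	return read(path)[0]
}

// readFile is the production implementation that carries the real
// side effect.
func readFile(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err) // no proper error handling, to keep it simple
	}
	return data
}

func main() {
	// throwaway file so the example runs end to end
	if err := os.WriteFile("example.txt", []byte("hello"), 0o644); err != nil {
		panic(err)
	}
	defer os.Remove("example.txt")

	fmt.Printf("%c\n", firstByte(readFile, "example.txt")) // prints "h"
}
```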

Our new function doesn't know how to read a file; it receives a read function, and the implementation is out of scope, which makes testing very easy:
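The test can then inject a fake read function and never touch the file system. A sketch, repeating the function under test so the snippet compiles on its own:

```go
package main

import "testing"

// firstByte is the function under test, repeated here so the test
// file stands alone.
func firstByte(read func(path string) []byte, path string) byte {
	return read(path)[0]
}

// fakeRead is the injected test double: deterministic, no disk access.
func fakeRead(path string) []byte {
	return []byte("test content")
}

func TestFirstByte(t *testing.T) {
	if got := firstByte(fakeRead, "any/path"); got != 't' {
		t.Errorf("expected 't', got %c", got)
	}
}
```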

It's clear now what DIDP is (sorry for boring you), so it's time to talk about the options we have in Go.

Mocks and fakes. Go has a few existing mocking frameworks, and they have good integration with the built-in testing framework. The reason I don't like them is that they all depend on code generation, which requires an extra step in the development cycle. Here is a list of the most famous frameworks:
Functions and interfaces. As I mentioned, functions are first-class citizens, and fortunately Go has a really awesome dynamic interface implementation system. Why not use them? They don't require code generation, they are built in, and we have full control over the behavior without replacing production code at runtime. Function parameters are nice for simple features, and interface parameters for the rest.

My example is part of a CLI tool. It retrieves a token, fetches a credential by name, and prints the ID of the credential.

There are three side effects in the code:
  • Authentication process -> we simply move it out of the function
  • CLI context argument finder (c.String) -> becomes a function parameter
  • REST call to fetch the credential -> becomes an interface parameter
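A sketch of the refactored function built along those three points. Everything here is an illustrative reconstruction: `Credential`, `CredentialClient`, and `printCredentialID` are my own names, and `argFinder` stands in for the CLI context lookup (c.String):

```go
package main

import "fmt"

// Credential is a minimal stand-in for the API response type.
type Credential struct {
	ID   string
	Name string
}

// CredentialClient hides the REST call behind an interface.
type CredentialClient interface {
	GetCredential(token, name string) (*Credential, error)
}

// printCredentialID has no hidden side effects left: the token is
// produced by the (moved-out) authentication process, argFinder
// replaces the CLI context lookup, and client abstracts the REST call.
func printCredentialID(token string, argFinder func(string) string, client CredentialClient) error {
	cred, err := client.GetCredential(token, argFinder("name"))
	if err != nil {
		return err
	}
	fmt.Println(cred.ID)
	return nil
}

// restClient would be the production implementation issuing HTTP
// requests; here it returns a canned value so the example runs.
type restClient struct{}

func (restClient) GetCredential(token, name string) (*Credential, error) {
	return &Credential{ID: "42", Name: name}, nil
}

func main() {
	args := map[string]string{"name": "db-password"}
	finder := func(key string) string { return args[key] }

	if err := printCredentialID("token", finder, restClient{}); err != nil {
		panic(err)
	}
}
```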

Let's write the test code:
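A sketch of how the test might look, with a fake client and a fixed argument finder (the production types are repeated so the snippet stands alone):

```go
package main

import (
	"fmt"
	"testing"
)

// Definitions repeated from the production sketch so this compiles alone.
type Credential struct {
	ID   string
	Name string
}

type CredentialClient interface {
	GetCredential(token, name string) (*Credential, error)
}

func printCredentialID(token string, argFinder func(string) string, client CredentialClient) error {
	cred, err := client.GetCredential(token, argFinder("name"))
	if err != nil {
		return err
	}
	fmt.Println(cred.ID)
	return nil
}

// fakeClient implements CredentialClient without any network traffic.
type fakeClient struct{}

func (fakeClient) GetCredential(token, name string) (*Credential, error) {
	return &Credential{ID: "fake-id", Name: name}, nil
}

func TestPrintCredentialID(t *testing.T) {
	finder := func(key string) string { return "my-cred" }
	if err := printCredentialID("fake-token", finder, fakeClient{}); err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}
```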

In the previous solution, we only tested the output of the function. What can we do if we are interested in, for example, the input parameters of the GetCredential() call? Things get a bit more complicated at this point, but we can solve it with channels:
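One way to do it is a spy client that reports the arguments it received on a buffered channel, so the test can assert on the input of the side effect and not just on the output. Again a sketch with the production types repeated:

```go
package main

import (
	"fmt"
	"testing"
)

// Definitions repeated from the production sketch so this compiles alone.
type Credential struct {
	ID   string
	Name string
}

type CredentialClient interface {
	GetCredential(token, name string) (*Credential, error)
}

func printCredentialID(token string, argFinder func(string) string, client CredentialClient) error {
	cred, err := client.GetCredential(token, argFinder("name"))
	if err != nil {
		return err
	}
	fmt.Println(cred.ID)
	return nil
}

// spyClient pushes the name it was called with onto a channel, making
// the input of the side effect observable from the test.
type spyClient struct {
	calls chan string
}

func (s spyClient) GetCredential(token, name string) (*Credential, error) {
	s.calls <- name
	return &Credential{ID: "fake-id", Name: name}, nil
}

func TestGetCredentialInput(t *testing.T) {
	// buffered channel: the spy can send without a receiving goroutine
	spy := spyClient{calls: make(chan string, 1)}
	finder := func(key string) string { return "my-cred" }

	if err := printCredentialID("fake-token", finder, spy); err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if got := <-spy.calls; got != "my-cred" {
		t.Errorf("GetCredential called with %q, expected %q", got, "my-cred")
	}
}
```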

As you can see, the built-in features of the language are enough to solve most cases, so there is no need to introduce an external framework into the project or burden the development process with extra steps.
