IThing and ThingImpl

(crosspost from my apprenticeship blog)

While refactoring my HTTP server last week, I introduced abstractions for the sockets it uses, in order to invert the dependencies on them.

I created an interface called ClientSocket and a class named DefaultClientSocket for the production environment. For testing, I used a StubClientSocket that could be prepared with data for the test.

As DefaultClientSocket isn’t a particularly good name, I had a hard time coming up with a better one. In the end it really is just the ClientSocket. But that name was already taken by the interface I had extracted.

I definitely do not want to use IClientSocket for the interface or ClientSocketImpl for the implementation. I’m not writing software for Windows, so this naming scheme is clearly off the table. Apart from that, I don’t like having to prefix or suffix class names in general.

So what to do?

One thing my mentor brought up in today’s IPM was that when you cannot find good names for an interface and the corresponding class, the class is the interface.

What does that mean?

There is only a class called ClientSocket that has a few public methods. For testing, these methods are redefined in a subclass. Having to subclass is a tradeoff worth making for the better naming.
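A minimal sketch of the idea (the class names are from my project, but the method bodies here are simplified guesses, not my actual implementation):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

// The production class *is* the interface: no IClientSocket, no ClientSocketImpl.
class ClientSocket {
    private final Socket socket;

    ClientSocket(Socket socket) {
        this.socket = socket;
    }

    // Non-final, so tests can redefine it in a subclass.
    public String readLine() throws IOException {
        return new BufferedReader(new InputStreamReader(socket.getInputStream())).readLine();
    }
}

// The test double subclasses the production class instead of implementing an interface.
class StubClientSocket extends ClientSocket {
    private final String preparedData;

    StubClientSocket(String preparedData) {
        super(null); // the real socket is never touched in tests
        this.preparedData = preparedData;
    }

    @Override
    public String readLine() {
        return preparedData;
    }
}
```

The tradeoff is visible right in the stub: it has to pass a dummy value to the superclass constructor, but in exchange there is exactly one name, ClientSocket, for the one concept.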


(crosspost from my apprenticeship blog)

Apart from getting into Clojure this week, I had a code review with my mentor for my HTTP server.

I struggled with testing the part where the blocking server socket accepts new connections and spawns a new thread.

The solution to that, however, was as simple as it was brilliant.

The code I had looked similar to the following:

final Socket clientSocket = socket.accept(); // blocks until a connection arrives
WorkerThread thread = new WorkerThread(clientSocket);
thread.start();

Line one is a blocking call, as it waits until the server socket gets a new connection. When a new connection has been accepted, it is passed to a worker thread to do further work with it.

So, how to test that?

One approach could be to use Mockito – but I will not go into that here. Mostly because you shouldn’t mock what you don’t own.

Another approach is to invert the dependencies on the sockets (and maybe on the thread pool). By using custom ServerSocket and ClientSocket classes, calls to methods like accept() can easily be stubbed out for testing. The implementation of ServerSocket for the production environment would simply delegate to the real object.
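A sketch of what that looks like. To keep it self-contained the abstraction hands out plain java.net.Sockets here; in the real server it would return the custom ClientSocket, and the class names are assumptions on my part:

```java
import java.io.IOException;
import java.net.Socket;

// Our own abstraction in front of java.net.ServerSocket.
interface ServerSocket {
    Socket accept() throws IOException;
}

// Production implementation: simple delegation to the real object.
class SystemServerSocket implements ServerSocket {
    private final java.net.ServerSocket delegate;

    SystemServerSocket(java.net.ServerSocket delegate) {
        this.delegate = delegate;
    }

    public Socket accept() throws IOException {
        return delegate.accept(); // the only place that actually blocks
    }
}

// Test implementation: accept() returns a prepared connection immediately.
class StubServerSocket implements ServerSocket {
    private final Socket prepared;

    StubServerSocket(Socket prepared) {
        this.prepared = prepared;
    }

    public Socket accept() {
        return prepared;
    }
}
```

With the stub in place, the accept-and-spawn loop can be driven through a test without ever opening a port.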

After my mentor proposed this approach, it felt like I knew nothing, or had forgotten everything I knew so far. Of course you should invert the dependency on “hard” dependencies like sockets!

As I said in the beginning, this is brilliant yet simple.

I do know why it is important to invert dependencies, but not having thought of it myself makes this another eye-opening experience for me.

Which is what I really enjoy about the apprenticeship here at 8th Light. Even though I have a few years of experience in software development, I keep having Aha! moments regularly.


It's all about Practice

(crosspost from my apprenticeship blog)

Yesterday I had the chance to give a lightning talk at Codebar.

The topic was “Practice”. How we, as developers, practice and what we can do to practice more. Spoiler: look for pet projects as a “breakable toy” and just build something.

But anyway, I’m always super nervous when I have to speak in front of an audience I don’t really know. So I’m trying to get out of my comfort zone and practice that more (pun intended).

Codebar is a very good environment to do that. Everyone is there to learn, the students just as much as the coaches. So I think the idea of having a lightning talk before the coaching begins fits Codebar pretty well (they started with the talks four or five weeks ago).

It was a good experience for me. Many students liked the talk, even though I stumbled a bit in places.

When I have an idea for a new talk, I definitely want to do it again.


First Week with Clojure

(crosspost from my apprenticeship blog)

This was my first week of Clojure. I was pretty excited to finally get the chance to actually do something with it. I had done the Clojure Koans and parts of 4Clojure before, but I prefer doing something more than just filling in blanks.

Don’t get me wrong, the Koans and 4Clojure are an excellent entry point for getting a feeling of the language itself, and I recommend them to everyone wanting to start with Clojure.

It’s just that I like having a small “project” or goal to work towards. With a goal in mind I can start improving my knowledge of a new language and its environment. Just by doing the Koans you still don’t know how to separate your source files, how to test your code, or how to build your project into a deliverable component.

(To learn it a bit better I also used Clojure for last Friday’s Waza time to create speclj-unicornleap, a plugin for Speclj that integrates unicornleap into a test run.)

This week of Clojure was definitely entertaining, as the functional approach can be mind-bending at times. Having done OO for several years has left its mark: I constantly try to get rid of all the primitive type handling. But with Clojure, it seems, I should embrace it again more often.

That said, my code is far from idiomatic Clojure. But we all start somewhere, don’t we?


Unicorns for Speclj

(crosspost from my apprenticeship blog)

Since I started with Clojure this week, there is a whole new environment to learn. New language, new build tool, new testing framework(s), new everything…

To get myself more familiar with Clojure and Speclj, I decided to invest my Waza Friday afternoon in creating a Speclj plugin for unicornleap.

The final result is available on GitHub (source) and published on Clojars.

It was interesting, and frustrating at times. Mostly because I still know so little about Clojure that my main blockers weren’t “how do I do that?” but “how do I do that in Clojure?”.

I learned quite a lot while implementing it. Granted, the code needed for the plugin itself is really small, but I now know more about some Speclj internals and about how dependency management is done with Leiningen.

To be fair, it isn’t the most useful plugin, but as a learning exercise it really was useful for me. And that’s what counts, isn’t it?


HTTP Server Challenge

(crosspost from my apprenticeship blog)

For this iteration I got the HTTP Server challenge. We have to implement an HTTP Server that adheres to a specific set of requirements. These requirements are documented and verified in a FitNesse test suite.

It is a tough timeframe to get all tests passing in one week. But that’s exactly the point: deliver a big task in a tight timeframe.

To succeed with the challenge, I often had to remind myself not to “waste” too much time on features that aren’t actually needed.

The challenge is not to implement an HTTP server that is fully compliant with the HTTP/1.1 specification, but one that is “just” compliant with the requirements documented in the acceptance tests.

“Just enough” also means taking shortcuts. Shortcuts are okay, as long as you don’t leave a complete mess. One example from my implementation is the Content-Range header. The specification defines all the different formats this header can have. Supporting every one of them may be needed in the future, but the current requirement is only to support the very basic form of bytes 0-4. And that’s what I focussed on.
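As a sketch of what “the very basic form” amounts to (class and method names are mine, and the exact header syntax in the acceptance tests may differ from the bytes=0-4 request form I assume here):

```java
// Parses only the basic single-range form, e.g. "bytes=0-4" from a Range
// request header. Every other format the specification allows is out of scope.
class ByteRange {
    final int start;
    final int end;

    ByteRange(int start, int end) {
        this.start = start;
        this.end = end;
    }

    static ByteRange parse(String headerValue) {
        String spec = headerValue.substring("bytes=".length()); // e.g. "0-4"
        String[] bounds = spec.split("-");
        return new ByteRange(Integer.parseInt(bounds[0]), Integer.parseInt(bounds[1]));
    }
}
```

Supporting suffix ranges, open-ended ranges, or multiple ranges would each add branches here, which is exactly the time I chose to spend elsewhere.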

I’m not saying that it isn’t possible to support every Content-Range format in one week (IIRC Daniel, my fellow apprentice, does support all the different possibilities in his implementation – good work by the way). But is it really needed, or could the time spent on that be used for another feature?

That’s definitely the most valuable take-away from this challenge for me. I often catch myself drifting into details that aren’t really needed at the time.

So I’ll try to remind myself more often: “What is the simplest thing that could possibly work?”

“Simplicity”, one of XP’s core values, is defined as:

Simplicity: We will do what is needed and asked for, but no more. This will maximize the value created for the investment made to date. We will take small simple steps to our goal and mitigate failures as they happen. We will create something we are proud of and maintain it long term for reasonable costs.



Spring MVC Tic Tac Toe

(crosspost from my apprenticeship blog)

After completing the tic tac toe implementation in Java with a terminal UI, the next task was to create a web interface for it with Spring MVC.

Spring itself is … how can I put that? Complex. Yes, I think that’s a good way to put it.

There’s a lot available in Spring (MVC) and most of it isn’t really needed or useful for a simple tic tac toe implementation.

Interestingly enough, there is a “Rails-y” approach to it, called Spring Boot. The website states: “Takes an opinionated view of building production-ready Spring applications. Spring Boot favors convention over configuration and is designed to get you up and running as quickly as possible.”

And it really does favour convention over configuration.

The first Spring tic tac toe I did was with Spring Boot, and there’s a lot of automagic going on. I remember my fellow apprentices discussing their approaches when they did their Spring versions, and I remember hearing about web.xml configurations and such.

Surprisingly, with Spring Boot, that kind of configuration isn’t needed at all. All you have to do is annotate your controllers accordingly, and Spring Boot will do the rest for you.
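As a hedged sketch of what that looks like (the controller and route are made up, and the exact annotations vary between Spring Boot versions), a controller plus an entry point can be as small as this:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

// Hypothetical controller: no web.xml, no dispatcher servlet configuration.
@Controller
class GameController {

    @RequestMapping("/")
    @ResponseBody
    public String board() {
        return "tic tac toe"; // a real controller would render the board view
    }
}

// One annotated class with a main method boots the whole application.
@SpringBootApplication
class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```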

Unfortunately there’s a downside to it: you really can get an application up and running in a very short amount of time. For example, on the Monday my iteration started, I read up on Spring Boot at around 09:00am. My first fully working spike of Spring tic tac toe was done by 11:40am. I certainly did not expect that. I had never worked with Spring before and had only very little knowledge about web development with Java in general.

After having a working spike I git reset --hard and started from scratch.

Until 17:00 that day I did not manage to get a single controller into a JUnit test harness. Maybe it was because of my limited understanding of Spring, or maybe Spring Boot is really hard to test in isolation. I still don’t really know.

So I tried to keep the web/controller layer as thin as possible and to get everything else tested 100%. That worked out fine, since it’s just POJOs behind the controllers.

In the IPM I walked my mentor through the whole process and explained everything. At least everything I knew about Spring. Which was very little, since Spring Boot had done all the hard work for me.

So the next task was to create a Spring MVC application in a more classical Spring way.

That was definitely helpful. Using Spring Boot felt a bit like cheating. You really can get up and running fast, but I think it’s not the best way to learn Spring.

So my final Spring tic tac toe is using classical Spring MVC with a web.xml configuration.

Creating it a second time without Spring Boot was helpful, as I got to understand the wiring and interconnections of the different Spring components much better.


Rails and Timezones

(crosspost from my apprenticeship blog)

This week I worked on an internal Rails project that included some date and time crunching (among other things).

Some specs for the various controllers and classes included stubbed instances of Time and Date. Surprisingly, when I first cloned the project and ran the test suite, I got failures.

The test code contained a line like the following:

today =, 10, 1)

Not overly complicated. But the test failed, and the failure message showed that the actual date being compared was this: "2014-09-30 23:00:00".

That was not the date that the test set up before. At least that’s not the date the developer had in mind when writing the test.

And the problem is timezones. Or rather the seemingly infinite possibilities for creating Time instances, which can all use different timezone settings. I don’t really know why Ruby/Rails behaves the way it does, but take the following commands as an example of what I mean (executed in the Rails console):

irb(main):001:0> => "GMT"
irb(main):002:0> => "GMT"
irb(main):003:0> => "UTC"
irb(main):004:0>, 10, 1).zone => "BST"

So whenever or is called, it returns a GMT time. This was mostly used in the production code. The test code, though, used, 10, 1), which is “British Summer Time” on my local machine. So when Rails (or rather ActiveRecord) stores timestamps, it converts the given object to GMT internally. My guess at the moment (I haven’t dug through it completely) is that the test setup created the test data in the in-memory database, where it was converted to GMT. When the expectation ran, it compared the converted timestamp to the expected one, which by then was one hour off ("2014-09-30 23:00:00").

There is a pretty good blog post about what to do in a situation like that. One of the quintessences is that instead of calling one should use which uses the help of some neat ActiveSupport helpers.

I think it’s generally a good idea to know what is going on when it comes to timestamps and timezones in your applications, and I don’t think every piece of advice should be followed blindly. If this application had been developed in one location only, this behaviour would never have been uncovered. Which is not necessarily the worst thing, because the application would still be valid and the tests right. It’s just that when development teams spread across different timezones work on the same codebase, those timezones can turn on you.


Shallow Tests with Spying

(crosspost from my apprenticeship blog)

This iteration was about improving the tests of my Java Tic Tac Toe.

After adding more substance to the tests, I can say that, looking back at their previous state, it feels like I tested nothing.

Sure, I verified that every part of the game interacts with one another as expected. But verifying real behaviour between the collaborators now definitely gives me more confidence that everything works.

Instead of just testing with a spy that a method has been called, verify the actual behaviour of the method you want to test. Take a look at the old revision of the tests for Game and you’ll see that the highlighted test verifies that the method showNextPlayer() has been called. So far so good, but I found a slightly better approach.

After a refactoring, this test still uses a test double, but it verifies that showNextPlayer() is called correctly from within Game.

Another thing I noticed quite often was that I could replace two tests that verified spies with one test that verifies behaviour.

This is visible in the tests for HumanPlayer. There were two tests that made sure it called the correct methods on its Input object as well as on the Board that is passed in. These two tests were really just verifying that the player made a move on the board. They have been refactored so that the outcome of the interaction between the HumanPlayer and its collaborators is verified.
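A sketch of the idea with made-up method bodies (the real HumanPlayer, Board and Input differ, but the shape is the same): instead of asserting that the right methods were called, assert what the move did to the board.

```java
// Simplified collaborators, just enough to show outcome-based verification.
class Board {
    private final String[] cells = new String[9];

    void mark(int cell, String symbol) {
        cells[cell] = symbol;
    }

    String at(int cell) {
        return cells[cell];
    }
}

// Stub input prepared with the move the "user" will make.
class StubInput {
    private final int move;

    StubInput(int move) {
        this.move = move;
    }

    int nextMove() {
        return move;
    }
}

class HumanPlayer {
    private final StubInput input; // the real code would depend on an Input abstraction
    private final String symbol;

    HumanPlayer(StubInput input, String symbol) {
        this.input = input;
        this.symbol = symbol;
    }

    void makeMove(Board board) {
        board.mark(input.nextMove(), symbol);
    }
}
```

One test now checks the outcome instead of two tests checking calls: after makeMove(board), the cell the stub input selected contains the player’s symbol.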

The next time I need to use spies to verify something, I will definitely think twice.


Outside-In TDD

(crosspost from my apprenticeship blog)

For my Java version of Tic Tac Toe I wanted to test drive it completely outside-in.
And I did. The end result was a bit surprising, though.

I didn’t run into any serious trouble while implementing the game. All the pieces worked together just fine when I was done and wired everything up in a main executable. To be fair, though, the fact that I knew exactly where I wanted my code and design to go definitely played into it (all the Ruby Tic Tac Toe from the last weeks surely left its mark…).

But still, seeing everything working perfectly together after just TDD’ing several small classes never gets old.

So there I was, with a tic tac toe implementation and 100% test coverage. But how valuable were my tests? Not very, it turned out.

Don’t get me wrong, having these kinds of tests is still better than no tests at all. But verifying all behaviour with test doubles only (stubs and spies) led to tests that were tightly coupled to a concrete implementation.

Take my implementation of the Game class, for example. It’s 60 lines of code without much complex logic. But a look at its tests reveals that nearly every test method verifies the correct wiring with a collaborating class. There’s no verification of real behaviour.

Instead of checking that a method on a Player object has been called, I could (and should) have verified that the player made an actual move on the board, i.e. that the board after the move is different from the board before it.

And that is because, due to the outside-in approach I took, I didn’t have any real Board class to use in the tests at that time. While writing the tests for Game, I only had the Board interface, implemented by a test double.

It feels like such an outside-in approach isn’t particularly well suited to a focussed problem domain like this one. It seems more suitable for testing boundaries, as explained thoroughly in “Growing Object-Oriented Software, Guided by Tests” by Steve Freeman and Nat Pryce.

