I was fortunate enough to attend the O’Reilly Software Architecture Conference 2016 held in NYC. As if attending the conference wasn’t good enough, I also had the fortune of attending as a platinum pass member. This gave me access to a two-day workshop followed by two days of keynotes on microservices, reactive systems and other topics that enterprises are beginning to adopt at wide scale. I chose to attend the two-day workshop on ‘Designing for Volatility’ by Allen Holub. Now before you read on, I’d like to mention that what follows is MY interpretation of what Allen said in the workshop. It is quite possible that what Allen intended for us to take away and what I ended up interpreting are completely different. But I’d like to think my interpretation is close to accurate :).
Now that the disclaimer is out of the way, I want to highlight some of the key takeaways that I, well, took away from this workshop. In it, Allen covered how to design software systems that are amenable to change in agile environments. I think this is a topic that is undersold. When people talk about agile, they almost always end up talking about sprints, releases, burn-downs, velocity, etc. All of this is good, but it is of no real value unless your software is designed in a way that can handle frequent change. Isn’t agile all about accepting that business requirements are volatile? If we can model our software delivery around this acceptance, shouldn’t we be designing our software this way too?
Before I get into the key takeaways, I’d like to share my thoughts on the group exercise I was part of during the workshop. The two days were a mix of Allen talking to us about designing for volatility and a group exercise. We were a class of 16 people (most of us wannabe architects – some had already made the cut) and we organized ourselves into four groups of four. We were given the task of coming up with a problem statement and translating it into a set of user stories. A problem statement wasn’t necessarily a single statement but a group of statements that succinctly defined a business problem. The focus was on clearly defining the problem. Once we had the problem defined, we had to come up with user stories that reflected the problem statement with a high level of fidelity. We were (re)introduced to modeling tools like Activity Diagrams and Collaboration Diagrams that helped us better represent the problem statement and gave us invaluable insights into how user stories can be broken down. If there is one thing I took away from this group exercise, it is that having a technical member (architect, team lead, developer, etc.) involved in the development of user stories is extremely valuable. I understand that this might not be possible in all types of organizations, but if it were up to me, I’d do it this way for sure.
On to the takeaways now –
Agile is all about Trust & Culture
Allen takes the example of Spotify to drive home the point that being agile is all about trust in your employees and teams, and the culture you drive. Apparently Spotify has about 700 developers (this number might not be accurate) divided into small teams – each with its own process. Each team is empowered to do the right thing. The leadership follows the ‘servant leadership’ model, where leaders strive to empower their teams. Teams are given a high degree of autonomy. Allen mentioned how Spotify doesn’t have the notion of budget approvals. Teams are trusted to do the right thing with the budget – this was Spotify’s way of reducing institutional friction.
You can take a look at the slides here to further explore how Spotify does agile. While I’d love to work in a team that provides such autonomy, I’m afraid this would not fly with large enterprises where scientific management (as opposed to lean management) is the norm.
No Estimates

This was a very interesting topic that Allen covered in the workshop. The line of thinking was something like – “Software is like physics – how much time does it take to build a warp drive?”. This cracked me up because it is so relevant. Most of the software we develop can be considered a bit of R&D in the sense that we operate with a lot of unknown variables. Software development is not a linear process, which is why we are almost never able to provide accurate estimates (or as they are called in the industry, ‘guesstimates’). So much research has gone into providing visibility into the software development process and into how we can produce better estimates. Instead of trying to achieve something we know we can’t, why not skip providing estimates altogether?
“No estimates” might not fly well with most management. Estimates provide some sort of visibility into the software development process (for management, this means knowing when the software will be shippable, how often new features can be released, etc.). If there are no estimates, how can we provide that visibility? Enter “Backlog Cumulative Flow Diagrams”. While going in depth about these diagrams is definitely out of the bounds of this blog post, I will refer you to these links – Holub No-Estimates and No-Estimates Book.
To briefly speak about Cumulative Flow Diagrams: they help you project your work instead of estimating it. You spend a few sprints picking up user stories and arrive at an average number of user stories you are able to complete in a sprint. Using that throughput, you look at your backlog and work out the number of sprints it will take to finish it. Cumulative Flow Diagrams help you with these projections.
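The diagrams themselves carry more information than this, but the projection arithmetic at their core fits in a few lines. Here is a minimal sketch of that arithmetic; the class and method names are my own invention, not anything Allen showed:

```java
// Sketch of no-estimates projection: derive throughput from observed
// history, then project how long the remaining backlog will take.
import java.util.List;

public class BacklogProjection {

    // Average number of stories completed per sprint, from historical data.
    static double averageThroughput(List<Integer> storiesDonePerSprint) {
        return storiesDonePerSprint.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(0.0);
    }

    // Project the number of sprints needed to finish the backlog.
    static int sprintsToFinish(int backlogSize, double throughput) {
        return (int) Math.ceil(backlogSize / throughput);
    }

    public static void main(String[] args) {
        // Say the team finished 4, 6, and 5 stories in the last three sprints.
        double throughput = averageThroughput(List.of(4, 6, 5)); // 5.0
        // A 40-story backlog then projects to 8 more sprints.
        System.out.println(sprintsToFinish(40, throughput));
    }
}
```

The point is that nothing here is an estimate: the only inputs are the backlog size and what the team has actually done in past sprints.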
Statically Typed Languages
Allen mentioned how, in his experience, all the successful large projects he had worked on with massive code bases were written in statically typed languages (specifically C++, C# and Java). Not to set off a flame war, but I think large code bases are better off in statically typed languages. One reason is that statically typed languages force you to think in terms of contracts. They train you to think in terms of interfaces and how components will interact – and this interaction is usually at the crux of all complexity. Well-defined interfaces result in manageable complexity. Dynamically typed languages, on the other hand, offer abundant scope for the lazy programmer to not define proper interfaces and contracts. I’ve noticed how folks coming from dynamically typed languages have an unhealthy obsession with ‘Maps’. Everything is just dumped into a map data structure and passed around the entire application. While this results in quicker development time, it usually comes back and bites you in the bottom.
Having said that, I’d also like to mention that for a “disciplined” team of programmers, the choice of programming language doesn’t matter. Likewise, even if you have chosen a statically typed language, nothing can save you from poor design choices. It’s just that I feel statically typed languages provide less scope to f**k it up badly.
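To make the map-obsession point concrete, here is a small sketch of the two styles side by side in Java. The names (Invoice, totalFromMap) are illustrative only, not from the workshop:

```java
// Contrasting "dump everything in a Map" with an explicit typed contract.
import java.util.Map;

public class ContractsDemo {

    // Map-based style: the compiler can't tell you which keys exist, what
    // types the values have, or when a caller misspells "taxRate" – you
    // find out at runtime, possibly far from where the map was built.
    static double totalFromMap(Map<String, Object> invoice) {
        return (double) invoice.get("amount") * (1 + (double) invoice.get("taxRate"));
    }

    // Typed style: the contract is explicit, and mistakes (missing field,
    // wrong type) fail at compile time instead of in production.
    record Invoice(double amount, double taxRate) {
        double total() {
            return amount * (1 + taxRate);
        }
    }

    public static void main(String[] args) {
        System.out.println(totalFromMap(Map.of("amount", 100.0, "taxRate", 0.1)));
        System.out.println(new Invoice(100.0, 0.1).total());
    }
}
```

Both compute the same number, but only the record version documents its own contract – which is exactly the kind of thinking a statically typed language nudges you toward.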
How Microservices Help With Agility
Allen would have liked to cover more of how microservices help with agility, but with all the great content and conversations we had as part of the workshop, he didn’t have time to spend more than a few minutes on this topic. Allen mentioned that a big part of the agile development story is designing loosely coupled components that can have their own release trains, i.e. can be evolved without fear of impacting existing components that depend on them. Microservices help you achieve this level of agility. Thinking in microservices helps you think in terms of services and their interactions, and thinking ahead about interactions helps reduce complexity in design. The other advantage of microservices is polyglot programming and persistence – you get to choose the right tool for the job. How many times have you tried to shoehorn things just because of shortcomings in your application’s language or storage mechanism? Microservices really shine here.
Now finally, some advice on the actual implementation part. We (re)visited the SOLID principles of software design. I’m sure all of us have gone through the SOLID principles multiple times before, so nothing new or interesting here. What was interesting was Allen’s emphasis on separating the Domain from everything else in the application. More than once he mentioned the Domain layer sitting on top of the other layers (DDD, anybody?), how the developers and the business should use the same language, how your code itself should be the documentation (Code As Documentation), how behavior-driven development helps new developers (or even existing ones) understand how a component behaves and how it maps to the domain, and how we should encapsulate state and only expose behavior.
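The “encapsulate state, expose behavior” idea is easy to state and easy to violate, so here is a minimal sketch of what it looks like in code. The Account domain object is my own example, not one from the workshop:

```java
// Encapsulate state, expose behavior: callers never read or write the
// balance directly; they can only ask for operations named in the
// language of the domain.
public class Account {
    private long balanceCents; // state is private – no getters/setters

    public Account(long openingBalanceCents) {
        this.balanceCents = openingBalanceCents;
    }

    // Behavior: a domain operation with its invariants enforced inside.
    public void deposit(long cents) {
        if (cents <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balanceCents += cents;
    }

    // Behavior: answer a domain question instead of leaking the raw value.
    public boolean canCover(long cents) {
        return balanceCents >= cents;
    }
}
```

Contrast this with a bag of getters and setters, where every caller manipulates the balance itself and the invariants live nowhere in particular.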
Most of the stuff we covered was nothing radical – in fact, these are the basics of software development, reiterated a gazillion times all over the ever-expanding source of wisdom known as the internet. It’s only that very few of us take the time to step back and think hard about the design decisions we are making. Most of us are so caught up in the dreaded eternal loop of delivery that we don’t appreciate how nuanced and enjoyable software development can really be. My number one takeaway from all of this: step back more often and think about the design decisions we have made and will make.