Over this series, I’ve spoken in detail about the organisational impact of IT dependencies in your system. I’d like to take a diversion and discuss how they can impact the individual.
I posit that even without the work benefits you could derive from eliminating dependencies, the quality of life changes you could bring about would more than pay for the effort.
This isn’t an article about workplace stress; we’re all pretty familiar with its cost to our economy. Nor is this an article about the cost of employee churn. It’s not even about the cost to productivity through disengagement. Let’s move forward taking the impact of unhappy people as a given.
After some of my recent articles on building a dependency map, a few people got in touch asking for tips on actually creating one. Here’s a quick way to get started.
You might have noticed the following example in my previous posts.
I created the graphic above with an amazing bit of kit called Neo4j. It’s actually an incredibly sophisticated graph database technology, so it almost feels a little sacrilegious to be using it for this.
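If you want to reproduce something similar, here’s a minimal sketch using the official Neo4j Python driver. The connection details, the `System` label, the `DEPENDS_ON` relationship type, and the team names are all placeholders of mine, not a prescribed schema:

```python
# A minimal sketch, assuming a Neo4j instance running locally.
# Labels, relationship types, and names are illustrative only.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

dependencies = [
    ("Checkout", "Payments"),
    ("Checkout", "Platform"),
    ("Reporting", "Platform"),
]

with driver.session() as session:
    for dependent, dependency in dependencies:
        # MERGE only creates nodes/relationships that don't already exist,
        # so the script can be re-run safely as the map grows.
        session.run(
            "MERGE (a:System {name: $dependent}) "
            "MERGE (b:System {name: $dependency}) "
            "MERGE (a)-[:DEPENDS_ON]->(b)",
            dependent=dependent,
            dependency=dependency,
        )

driver.close()
```

Once the data is in, Neo4j Browser will render the graph for you with a query as simple as `MATCH (a)-[r:DEPENDS_ON]->(b) RETURN a, r, b`.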
I’ve written a few heavy articles recently, so I thought I’d take a step back with something a little lighter. I absolutely love retrospectives, and I’m going to take this time to highlight some common retrospective pitfalls I see people falling into.
If you’re not up to speed on the concept of dependency mapping, then I’d suggest taking a look at my previous post where I talked through how to go about building a dependency map.
So what happens now? You’ve gone through the workshop and now have a bunch of data, but what is it actually telling you about your system? I’m going to run through some of the actions I take when attempting to understand a dependency map.
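As a concrete example of one such action: the nodes that the most other things depend on are usually where I look first. Here’s a minimal sketch, using networkx purely for illustration, which assumes the map has been exported as (dependent, dependency) pairs:

```python
# A minimal sketch: find the most depended-upon nodes in a dependency map.
# The pairs below are illustrative; networkx is my choice, not a requirement.
import networkx as nx

edges = [
    ("Checkout", "Payments"),
    ("Checkout", "Platform"),
    ("Reporting", "Platform"),
    ("Payments", "Platform"),
]

graph = nx.DiGraph(edges)

# In-degree counts how many things depend on each node; high scores
# suggest bottlenecks worth investigating first.
for node, degree in sorted(graph.in_degree, key=lambda pair: -pair[1]):
    print(f"{node}: depended on by {degree}")
```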
Recently, I published an article where I discussed the concept of dependency as a proxy for complexity. I also tried to show that complexity is the biggest aspect to consider when attempting to improve the flow of value. This is all part of a concept I call Dependency Mapping.
Following publication of the article, a few people who read it on LinkedIn reached out for advice on actually doing this in their place of work. In this article, I’ll dig deeper into the process by which we can start to map out dependencies. With the map completed, we can then work to remove them.
Dependency maps are going to be messy, as they try to convey a huge amount of information. We’ll aim small, then offer some suggestions for improvement from there.
If you ask 10 people why Digital Transformations fail, you’ll get 10 different answers, often with phrases like “buy-in” and “culture” thrown around. Although there isn’t a simple answer to this question, I’d like to talk about one that often gets ignored. Dependencies.
You have a dependency when one thing is contingent on another. Here are some examples (a sketch of how you might record them follows the list):
“Getting this work complete is contingent on the tests passing.”
“We need the platform team to spin up a test environment before we can run the tests.”
“We need to have sign-off before we can allocate time to create the instance.”
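To make that concrete, here’s a minimal sketch of how those three statements might be recorded as data; the wording is lifted from the examples above, but the structure is just one of many you could choose:

```python
# A minimal sketch: each entry maps an outcome to the things it is
# contingent on. The structure is illustrative, not prescriptive.
dependencies = {
    "complete the work": ["tests passing"],
    "run the tests": ["a test environment from the platform team"],
    "create the test environment": ["sign-off to allocate the time"],
}

for outcome, contingencies in dependencies.items():
    for contingency in contingencies:
        print(f"'{outcome}' is contingent on '{contingency}'")
```

Notice how the three examples chain together: the work needs the tests, the tests need the environment, and the environment needs sign-off. Chains like this are exactly what a dependency map makes visible.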
This Agile game is based on The Penny Game. I’ve seen it in various incarnations, but this version follows the original concept. Instead of simply showing how batch size affects throughput, it has been heavily modified to deliver several additional lessons. Attendees can expect to learn about Agile’s roots in Lean manufacturing, batch-size theory, single-piece flow, adapting to change, quick feedback, and communication.
Each round after the first is essentially optional: you can choose exactly which lessons you wish to deliver. This may be all of them for more experienced teams, or fewer for those new to Agile.
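If you want a feel for the batch-size lesson before running the game, here’s a minimal sketch of the arithmetic under a deliberately simplified model (each worker processes a whole batch before passing it on; all numbers are illustrative):

```python
# A minimal sketch of batch size vs time-to-first-delivery, under the
# simplifying assumption that work is passed on only in whole batches.

def time_to_first_delivery(batch_size, num_workers, seconds_per_coin=2):
    # The first batch reaches the customer only after every worker in the
    # line has processed all of it.
    return batch_size * seconds_per_coin * num_workers

for batch_size in (20, 5, 1):
    seconds = time_to_first_delivery(batch_size, num_workers=4)
    print(f"batch of {batch_size:>2}: first delivery after {seconds}s")
```

Shrinking the batch from 20 coins to 1 cuts the time to first feedback from 160 seconds to 8 in this toy model, which is the intuition the game builds physically.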
Please note that this no longer represents my view on this subject, but is maintained for posterity.
Risk Management is a part of everyday life: crossing the street or even walking downstairs in the morning carries some risk. A project is no different. When projects were managed by default using high-overhead techniques such as PRINCE2, risk was an integral part of running a project; risks were catalogued, monitored, and actively mitigated through the use of logs and registers. Many of us have since abandoned these heavy methods in favour of lighter approaches, such as Scrum or Kanban. However, amid all of the confusion in the revolution, we appear to have thrown the baby out with the bathwater. Even now, risk is rarely included as an active element in our practice, and where it is, it’s rarely done well. I propose a new method of managing risk: something lightweight that fits with our de facto practices, but with the rigour of the old guard.
What is Risk?
First, let’s define risk. The ISO 31000 standard defines risk as an uncertain event which, should it occur, will have an effect on the project meeting its objectives. Notice the lack of connotation here: risk isn’t some inherently bad thing, it’s simply a degree of uncertainty. The message is clear: we should be looking for, and actively managing, all forms of uncertainty.
Risk sits between cause and effect: some cause may trigger an effect, but we don’t know whether it will. For example, a key supplier being in financial difficulty (the cause) may, or may not, lead to a slipped release date (the effect).
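One way to keep that cause–risk–effect chain explicit is to record it in the risk entry itself. Here’s a minimal sketch of such a shape; the fields and the example entry are mine, not part of ISO 31000:

```python
# An illustrative shape for a lightweight risk entry, keeping the cause,
# the uncertain event, and its effect on objectives separate.
from dataclasses import dataclass

@dataclass
class Risk:
    cause: str   # what could trigger the event
    event: str   # the uncertain event itself
    effect: str  # the impact on objectives, should it occur

risk = Risk(
    cause="A key supplier is in financial difficulty",
    event="The supplier ceases trading mid-project",
    effect="Integration work slips past the release date",
)
```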
Tracking Agile Projects is about monitoring project health, and checking adherence to the schedule is a relatively simple exercise: the amount of work actually completed in an iteration is compared with the amount of work expected to be completed. Differentials are fed back into the amount of work expected in future iterations, which has a self-correcting effect on the project schedule.
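As a minimal sketch of that feedback loop (all figures illustrative): compare actual velocity with planned velocity, then re-forecast the remaining iterations from the actuals.

```python
# A minimal sketch of the self-correcting forecast described above.
# All figures are illustrative.
planned_velocity = 20            # story points expected per iteration
completed = [14, 18, 16]         # points actually finished so far
remaining_points = 120           # points left in the release

actual_velocity = sum(completed) / len(completed)

print(f"Planned: {remaining_points / planned_velocity:.1f} more iterations")
print(f"Forecast from actuals: {remaining_points / actual_velocity:.1f} more iterations")
```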
Note that this no longer represents my thinking, but is maintained for posterity and ridicule.
Agile Planning and Estimating covers the activities involved in planning how the software will actually be built: splitting the development into manageable pieces and deciding on a schedule for the release. This part of my Agile development guide is based on Mike Cohn’s excellent book, Agile Estimating and Planning.