Call your shot and record it

Ken Favaro and Manish Jhunjhunwala, "Why Teams Should Record Individual Expectations," MIT Sloan Management Review (2018-11-30):


However, when individual expectations are recorded along with the key assumptions behind them, important differences become visible. One person might see 2+2 as the problem to solve, another might see 1+3, and another might think it’s 5-1. Even if you all arrive at the same answer, recording and then discussing the variety of paths that different stakeholders expect forces everyone to think in new ways. And often the team ends up concluding that 1+5 is the right starting place — and thus arriving at a different, unanticipated, and better decision altogether.

I agree. Record everything—practically speaking, of course. Set up a system to record your predictions and assumptions and how the thing you predicted turned out. Thought you'd finish some aspect of a project in 4 weeks while spending 50% of your time on it? Mark it down. See how it actually turned out. Compare. Get better at predicting.

I've tried versions of this. Say I have a task—finish a section of a requirements spec. How long will it take, the bossman asks. I can get it done in four weeks, I say. Mark it down. Capture the actual result. Compare. How far were you off? Note the difference. Debrief and figure out why. Do it again. And again. And again.

It's not a comfortable exercise.

Want to find out how bad you are at predicting the future? Write down your predictions. Write down the actual result. Compare the two. You'll stop hating on the weatherman. At least he stands up in public, on TV, and gives his prediction.

I've never done this exercise extensively, or with a team of people. I bet you would find a lot of interesting patterns—who chronically underestimates the time projects take, who has a hard time breaking big predictions into smaller ones (which should be the same as breaking big tasks into smaller tasks), who gets touchy about having their guesses recorded and compared. If you could figure out the patterns—the good and the bad—you could cover for your blind spots and play to your strengths. It would have to be the right team—the kind of people who can take straightforward feedback. That's not easy. I love to say that I can take it, but in practice it's hit-or-miss. Negative feedback, however well-meant, still feels like a slap in the face. The real trick is your reaction to that feeling.

Every month, every week, every day: I try to lay out what work I have to do and when I'm going to work on it. But I don't often aggregate the total estimate of how much time it will take (because I get a pile of tasks to work on, and there is some wrangling about how to work on that pile so that the pilers think you're not ignoring them), and I never tally the total amount of time it actually takes (even though I record it with Toggl). The truth is: doing the work itself takes a lot of time and effort, and the extra step of recording and comparing feels like extra work. It is extra work. But if the extra work makes the future work go faster, then it's a useful feedback loop. And if you can automate the input back into the feedback loop, then you can really improve.
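Automating the loop could be as simple as this sketch: feed in past (estimate, actual) pairs, derive a personal correction factor, and apply it to the next gut estimate. The tasks and numbers here are hypothetical, and averaging the ratios is just one naive choice of correction.

```python
# Past records: (task, estimated weeks, actual weeks). All values invented.
history = [
    ("spec section", 4.0, 6.0),
    ("review cycle", 1.0, 1.5),
    ("test plan",    2.0, 2.0),
]

# Average how far off past estimates ran (ratio of actual to estimate).
factor = sum(actual / est for _, est, actual in history) / len(history)

def corrected(estimate: float) -> float:
    """Adjust a raw gut estimate by the historical correction factor."""
    return estimate * factor

print(f"correction factor: {factor:.2f}")           # → 1.33
print(f"a '3 week' task likely takes ~{corrected(3.0):.1f} weeks")  # → ~4.0
```

If the records already live in a time tracker, the input side of the loop is basically free; the comparing is the part you have to choose to do.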

So, that's my mission for April, I think: record my assumptions and predictions on the projects I'm working on, then record the actuals, then compare them, then do the hard work of improving.
