Instructions can be useful or infuriating. Frances Buontempo wonders how to give and follow directions.
As the winter drags on, I have spent too much time watching television so haven’t written an editorial. In particular, Junior Taskmaster [IMDB] has been on recently. Watching ‘live’ TV probably proves I’m getting old, as well as wasting my life. Nonetheless, if you’re not aware of it, let me explain. The original Taskmaster [Wikipedia] is hosted by Alex Horne and Greg Davies. The contestants, all celebrities and usually comedians, are set tasks. They are awarded points and the contestant with the most points at the end wins. The tasks are very silly, and often lateral thinking wins out. Frequently, the contestants query the tasks, and are told, “All the information is on the task.” Which almost never helps. Junior Taskmaster is hosted by Rose Matafeo and Mike Wozniak and has children rather than celebrities as contestants. The children’s insistence on fair play gives the new series a different edge, but their imagination is amazing. One task involved moving a sand castle from a podium labelled ‘A’ to a podium labelled ‘B’. I wondered if moving the podiums side by side might help, and a child tried this. Two children were even more sensible, just peeling the labels off and switching those. Lateral thinking often provides new and sometimes simpler ways of solving a problem.
You have probably been tasked with something which seems almost impossible or immensely tedious before. I started out as a maths teacher after university and set a pupil lines once. Rather than writing out the lines by hand they got a computer to generate a printout. Fine by me; they had done as requested, and showed some initiative. Automating can sometimes deal with the tedious, but the impossible is a different challenge. I recall a couple of interviews where I needed to stall slightly for thinking time. One involved live coding, which makes a change from using a whiteboard to reverse a linked list. However, I wasn’t 100% sure how to approach the question, which involved spotting palindromes. Not a difficult problem, but in an interview situation my brain tends to freeze up and I wasn’t sure what I was allowed to use. I started, as I often do, by writing a test: an assert for the empty string, against a function that only returned false. The interviewer was deeply unimpressed, and pointed out my code didn’t work, and all the information was in the question. Explaining I often started like that when using TDD didn’t seem to help. The interviewer simply looked bemused. I managed the required function in the end. Starting with a very simple case helped me start thinking straight, though someone not getting the idea of writing a failing test first was off-putting. Another interview question involved a brain teaser. I don’t recall the precise details, but it involved putting pennies on a table and the person who put the last coin down either won or lost. Coins weren’t allowed to overlap, and I think you had to say if you would go first or second. I had no idea how to start thinking it through, so asked probing questions about the size of the coins and the table. If a coin is as big as the table, you can only put one down. I suspect the interviewer wasn’t impressed by me starting with edge cases, trying to flush out the specific details. But you need to start thinking somewhere.
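For what it’s worth, the opening move I mean looks something like this. It’s a sketch in Python (my choice here, not necessarily what the interview used): a deliberately failing first implementation, then the real one.

```python
# Step 1: the simplest possible failing test. In TDD you want to see
# the test go red before you make it pass.
def is_palindrome(text: str) -> bool:
    return False

# assert is_palindrome("")   # fails on purpose: the 'unimpressive' first step

# Step 2: make the empty-string case, and everything else, pass.
def is_palindrome(text: str) -> bool:
    # A string is a palindrome if it reads the same reversed.
    return text == text[::-1]

assert is_palindrome("")
assert is_palindrome("level")
assert not is_palindrome("levels")
```

The failing first step looks silly on its own, which is presumably what bemused the interviewer, but it checks the test harness works before any real logic appears.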
Have you ever picked up a task from a tracking system, like Jira, and got stuck immediately? In theory, if you have backlog grooming/refinement sessions, everyone on the team should be able to understand what a task requires. And yet, it is still possible to get to some work and find things have changed, or assumptions no longer hold. Seb Rose wrote about this in his ‘User Stories and BDD’ series. In ‘Part 2, Discovery’ [Rose23], he said:
As professionals, we are paid to have answers. We feel deeply uncomfortable with uncertainty and will do almost anything to avoid having to admit to any level of ignorance.
Finding the uncertainty can be useful though. He goes on to talk about deliberate discovery and how to spot questions and unclear parts as well as splitting stories into manageable chunks. If a task or Jira has some example cases, or even if you have actual BDD automation tests to start coding against, you are much less likely to find yourself staring at the task wondering where to begin. In this case, all, or at least enough, information will be on the task. An example is often clearer than a Jira.
I heard a talk recently by a business person about how they wrote Jiras. Their team had a template with several sections, like acceptance criteria and so on, but they frequently forgot sections. Their solution was to use GenAI to write the tickets. The thought of this instantly horrified me. If the team subsequently talked through the Jiras I could see it working, but again, having a list of what’s required doesn’t always mean the tasks make sense. Have you ever given someone instructions and they somehow miss the point completely? No matter how clear and precise you try to be, there is always room for misunderstanding. I recall a tale of a child making his Mum a cup of tea. Said child knew he had to boil the kettle, but thought it would be more efficient to put a teabag in the kettle while it boiled. A cup of ‘tea’ was made, but probably wasn’t very tea flavoured. Spelling out the precise steps, in order, might avoid such creative thinking, but is very hard to do. There’s usually a balance point. If a recipe says “Make a pastry case” but you don’t know how to make pastry, that won’t be much help. Whereas, if the recipe spells out what a gram or a millilitre is, that will distract from the baking instructions. An imperative set of instructions makes assumptions about a common understanding of words and actions. “Boil a kettle” does not mean heating the kettle itself until it reaches boiling point; it means boiling the water inside. “Run the tests” should mean checking they pass, and taking appropriate action for any failures. Trying to communicate how to achieve something is hard, and often requires some back and forth.
The back and forth conversation necessitates people being able to communicate. Sometimes that is not possible. For example, if you write documentation, the chances are you will never meet many of the people who read your instructions. You can get a friend or colleague to read through your first drafts. You might also read through yourself, deliberately trying to misunderstand everything you have written, hunting for ambiguity or confusion. You might find you can write a script or automate some of the steps. Sometimes explaining to a computer is easier than explaining to a human.
Documentation crops up in various places. Maybe for a new machine or perhaps a game. Lots of machines no longer come with documentation, in particular mobile phones or laptops. Last time I bought a laptop, I had to search the internet to find out where the on button was. Nonetheless, you do still get written instructions, for example for games. And sometimes they are incomprehensible, so you need to attempt to play and decide amongst yourselves what to do under various circumstances. Some games don’t come with full instructions. You might find a settings menu telling you key bindings like ‘W’, ‘A’, ‘S’, ‘D’ for up, left, down, right respectively. Figuring out what the rules are and how to score after that is another matter. I’m currently trying to prepare a talk for the ACCU conference [Buontempo25] about reinforcement learning (RL). RL is a type of machine learning where agents take actions in an environment, using trial and error to ‘learn’. Rewards or penalties reinforce actions, and agents try to maximize rewards over time. For example, playing an arcade game and trying to get a high score. You can tell the agent the possible moves, WASD, and track the environment, letting the agent learn how to play the game. DeepMind produced a paper showing how to train an agent using the pixels on screen to describe the environment [Mnih13]. Plug the agent and environment into an RL framework, Gymnasium [Gymnasium] for example, and watch your machine learn to play Pac-Man or similar over time. Or wait for me to find a simple way to explain how to code the reinforcement learning up from scratch. DeepMind’s reinforcement learning, called Deep Q-Learning, did not need all the information upfront. The algorithm discovered how to play to get a good score by experimentation.
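To make the trial and error concrete, here is a toy sketch of tabular Q-learning, the table-based ancestor of Deep Q-Learning, in Python. Everything in it is invented for illustration: a five-state corridor with a reward at the far end, and parameter values picked out of thin air. It is nothing like DeepMind’s pixel-based agent, but the same update rule sits at the heart of both.

```python
import random

# Tabular Q-learning on an invented five-state corridor: start at state 0,
# a reward of 1 waits at state 4. Action 0 moves left, action 1 moves right.
N_STATES = 5
ACTIONS = [0, 1]
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # one row of action values per state
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration

random.seed(42)
for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Mostly act greedily, but explore occasionally (epsilon-greedy).
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        step = 1 if action == 1 else -1
        next_state = max(0, min(N_STATES - 1, state + step))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # The Q-learning update: nudge the estimate towards the reward
        # plus the discounted best value of the next state.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# The learned greedy policy for the non-terminal states: 1 means 'go right'.
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

The agent starts knowing nothing and blunders about; the update rule gradually propagates the end-of-corridor reward backwards until ‘go right’ dominates everywhere. No instructions beyond the reward signal, yet it figures the game out.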
Writing code is often an iterative process, at least in terms of discovering the requirements. The code itself may be more declarative than iterative, or might even be recursive. I dip into functional languages from time to time, and can feel my brain starting to hurt/expand/change viewpoints while I get re-familiarised with recursive approaches. For example, you may see code for a sort along the lines of
merge_sort(A, start, end):
    if start < end
        mid = (start + end) / 2
        merge_sort(A, start, mid)
        merge_sort(A, mid + 1, end)
        merge(A, start, mid, end)
The merge function is left as an exercise for the reader. If you are familiar with merge sort, you will recognize this pseudocode. However, do you remember the first time you encountered code like this? How do you even start thinking this through? We’ve probably all seen jokes like the dictionary definition of recursion saying “see recursion”. How do you start? All the information may be in the pseudocode, but you might need to rewire your brain slightly to understand. All the information is in the code, but that doesn’t always help. And sometimes, some of the information is in a config file. Or more than one config file. Or replaced upfront by a setting in a database. So, we have two extremes: first a short piece of code in one place (apart from the merge function, sorry!) and another codebase with parts scattered in various places. Both can be hard to understand, but for very different reasons. Figuring out how to understand a new codebase is a topic in itself. If you want some pro tips, watch Jonathan Boccara’s ACCU 2019 conference talk, ‘10 Techniques to Understand Code You Don’t Know’ [Boccara19]. He talks about exploring, reading and understanding code. The exploring ideas start by finding where and how to experiment with inputs and outputs, whether via a UI framework, log files or unit tests. We tend to learn by experimenting and discovering. Just staring at the merge sort might not be enough to figure out what’s going on. Finding a way to play with the code is more helpful. Even trying to sort some playing cards by following the instructions in the code can be useful.
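Since I apologised for leaving merge out, here is one way to fill in the exercise: a Python sketch following the pseudocode, keeping its inclusive start and end indices. Treat it as something to play with rather than the definitive version.

```python
def merge(a, start, mid, end):
    # Merge the two sorted runs a[start..mid] and a[mid+1..end] in place.
    left = a[start:mid + 1]
    right = a[mid + 1:end + 1]
    i = j = 0
    k = start
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1
        k += 1
    # Copy over whichever run has elements left.
    while i < len(left):
        a[k] = left[i]
        i += 1
        k += 1
    while j < len(right):
        a[k] = right[j]
        j += 1
        k += 1

def merge_sort(a, start, end):
    # Indices are inclusive, matching the pseudocode.
    if start < end:
        mid = (start + end) // 2
        merge_sort(a, start, mid)
        merge_sort(a, mid + 1, end)
        merge(a, start, mid, end)

cards = [7, 2, 9, 4, 3, 8]
merge_sort(cards, 0, len(cards) - 1)
print(cards)  # [2, 3, 4, 7, 8, 9]
```

Trying it on a handful of playing cards, and stepping through the recursion by hand, is exactly the sort of experiment that makes the pseudocode click.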
Now, following instructions without thinking might prove that a set of instructions fulfils the requirements. That doesn’t mean you have understood why the recipe works. I spent some time last year trying to solve the Rubik’s cube. A friend set up a discussion group, sending videos and instructions to help. I did finally manage to solve the cube, but I would have to follow instructions to do this a second time. I know full well I don’t fully understand why certain sequences of moves work, and I often have the orientation incorrect and end up moving the wrong pieces. Hopefully, I will eventually form a mental model, allowing me to think through what I need to do. Next time someone tells you “All the information is on the task” or tells you to “Read the question” in an exam, feel free to experiment and find out what happens. That’s how we learn. You might discover something, or come out with a clever solution, you never know. In fact, here’s a challenge. An Overload editorial requires two pages of writing for the front of the magazine. An editorial should be an opinion piece, or relevant to something topical, which as you know I never manage. If you want to try your hand, please get in touch. Task: 2,000 words or so, on a topic of your choice. Send it to me, and we’ll see what the review team thinks. Over to you.
References
[Boccara19] Jonathan Boccara, ‘10 Techniques to Understand Code You Don’t Know’, ACCU 2019, available at https://www.youtube.com/watch?v=tOOK-VsWU-I
[Buontempo25] Frances Buontempo ‘An introduction to reinforcement learning: Snake your way out of a paper bag’, talk to be delivered at ACCU 2025, abstract available at: https://accuconference.org/2025/session/an-introduction-to-reinforcement-learning-snake-your-way-out-of-a-paper-bag
[Gymnasium] Gymnasium: https://gymnasium.farama.org/
[IMDB] Junior Taskmaster: https://www.imdb.com/title/tt34234603/
[Mnih13] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller, ‘Playing Atari with Deep Reinforcement Learning’, NIPS Deep Learning Workshop 2013
[Rose23] Seb Rose, ‘Part 2, Discovery’ in Overload 31(178):4-5, December 2023 https://accu.org/journals/overload/31/178/overload178.pdf#page=6 and https://accu.org/journals/overload/31/178/rose/
[Wikipedia] Taskmaster: https://en.wikipedia.org/wiki/Taskmaster_(TV_series)
Frances Buontempo has a BA in Maths + Philosophy, an MSc in Pure Maths and a PhD using AI and data mining. She’s written a book about machine learning: Genetic Algorithms and Machine Learning for Programmers. She has been a programmer since the 90s, and learnt to program by reading the manual for her Dad’s BBC Model B machine.