Some things are – or seem to be – impossible. Frances Buontempo explores how to distinguish between the two.
I haven’t managed to think of an editorial topic, so yet again, sorry. There are so many things I could write about, but I don’t want to cover old ground and don’t have the bandwidth to spend ages learning new topics at the moment. I am currently trying to rein in my commitments. I say “Yes” far too often, and am now starting to realise I can’t do “all the things”. Trying to limit the choice of what to do is difficult. I tend to postpone some things, and they eventually fall off a TODO list. Not a great strategy, but a strategy nonetheless.
Trying to eliminate things is difficult. The “You ain’t gonna need it” (YAGNI) mantra from Extreme Programming encourages us to avoid creating things we don’t need now. Martin Fowler wrote about YAGNI [Fowler15], comparing the cost of building now versus building later. Sometimes delay has a cost, but doing things now costs, too. He says YAGNI:
doesn’t mean to forego all abstractions, but it does mean any abstraction that makes it harder to understand the code for current requirements is presumed guilty.
For example, it’s OK to build an abstraction, if that makes code easier to change. He points out:
Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify.
Maybe the phrase “Never say never” is relevant? Trying to eliminate unneeded code, or anything unneeded, is sensible, as is avoiding wasting time on planning for something that won’t happen. However, predicting the future is difficult. I prepared a workshop last year for a conference, but the conference got cancelled. That was frustrating, but I can use the materials for a different conference.
Now, consider the Sherlock Holmes quote, “Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.” This presupposes you have an exhaustive list that includes the truth. Believing one explanation because the others you have thought of are impossible is called a Holmesian fallacy [RationalWiki]. The RationalWiki article (op. cit.) gives an example from Thales of Miletus: “The lodestone has a soul because it moves iron. This proves that all things are full of gods.” That might not be the best example, since I suspect a non-corporeal substance like a soul cannot move something physical. A better example might be C++ programmers arguing over undefined behaviour (UB). You often see people asking questions about strange behaviour in code, for example getting the right numbers from code compiled with one compiler, but not from another. The fact that the code sometimes seems to work leads to the claim that it can’t have UB – otherwise, why is it OK in some circumstances? Of course, that’s not how UB works.
Furthermore, the history of science and mathematics is littered with examples of impossible things becoming possible. What’s the square root of a negative number? Initially regarded as impossible, allowing the possibility opens up new mathematics. I have written about complex numbers before [Buontempo24]. Pythagoras believed all numbers were rational. A story goes that Hippasus of Metapontum, a member of Pythagoras’ group, demonstrated that the length of the diagonal of a square of side length 1 is the square root of 2, which is not rational (the length, not the proof) [Cambridge]. He was kicked out for heresy. Pythagoras thought everything in nature must be based on whole numbers, so did not approve. Mind you, Pythagoras also held that 1 is not a number, because it represents a singularity rather than a plurality [Britannica], and believed you shouldn’t eat beans because they have a soul. (You’ve heard of jumping beans, I presume? They move, so like the lodestone, must have a soul.)
Many things are now possible on computers that would have been unthinkable years ago. The rise of deep learning needed much faster processors and much more memory. The precise requirements vary, but for example consider a 50-layer network with about 26 million weight parameters and about 16 million activations in the forward pass. Using a 32-bit floating-point value to store each weight and activation gives a total storage requirement of 168 MB [Hanlon17]. Lots of research is focused on speeding up the calculations, or running algorithms on GPUs, or even building specialized hardware, but maybe we need to step back and find a completely different algorithm? The power requirements and excessive use of water for cooling in data centres worry me as well. Perhaps we should eliminate resource-hungry methods? Doing so might also reduce costs. I realise I am in danger of expressing opinions now, which would take me dangerously near to an editorial! Which would, of course, be impossible. Let’s eliminate that immediately.
Stepping back and thinking through why you believe something is impossible can be useful. You might not invent a new branch of mathematics, or find a new computing algorithm, but you might discover a different approach. Alternatively, you might find you can manage something you thought you couldn’t do. This can happen when you try to learn something. We all have blind spots, or certain things we find difficult to get our heads round. Some people panic at the sight of numbers, but discovering how to deal with a small part of a big scary topic helps. A thousand-mile journey begins with the first step, as they say. You might discover you can manage something, even if you are neither very good at it nor enjoy it. GUI work is my mental block. I can write a front-end program, but I’d rather not. I’m also trying to learn German on Duolingo. I didn’t do very well at foreign languages at school, and struggle to spell English words. In fact I just typed ‘sturrgl eot’. I suspect I have dyslexia, which doesn’t help. I used to think I would never be able to learn a different language or spell properly. I now realise I can try different ways to phrase something if I get stuck. In school exercises, you often aren’t allowed to do that. Finding a Plan B offers an alternative if Plan A is impossible. Eliminate the impossible, and what’s left might work, you never know.
I gave a talk at C++Online called ‘Don’t be negative’ [Buontempo25]. Why might you want to eliminate negative elements from a container or range? Well, maybe a negative price is implausible. Go give it a try in your favourite language. I used C++. The std::remove_if algorithm used to be a common interview question. As you probably know, this doesn’t remove elements – the container stays the same size, with the elements you want to keep shuffled to the front and the returned iterator marking the new logical end. There are newer, better ways, like std::erase_if. You can also try a recursive approach and more besides. You had to be there. It looks like you can get exclusive access to content if you can’t wait for YouTube [C++Online]. I believe this is a great example, with a simple problem statement, but many valid approaches, as well as somewhat silly methods. Being silly often gets your imagination going, and can provide great learning opportunities.
People often use silly analogies to make points. Sometimes these are intended to ridicule others’ points of view. For example, Bertrand Russell discussed the idea of a celestial teapot, too small to be seen, orbiting the sun between Earth and Mars. Hard to argue with, right? Because whatever you say to suggest there is no such teapot can be countered by pointing out there can’t be any evidence, because it is unobservable. Russell’s point was that “the philosophic burden of proof lies upon a person making empirically unfalsifiable claims, as opposed to shifting the burden of disproof to others” [Wikipedia-1]. Russell was talking about religion, but the logic applies more generally. When you eliminate the impossible, if what’s left is unfalsifiable, Russell would say the person making the suggestion still has to prove it’s true. Sherlock Holmes was wrong. Not everyone agrees with Russell’s thought experiment. For example, the philosopher Paul Chamberlain countered that “every truth claim, whether positive or negative, has a burden of proof.” Again, this would mean Sherlock Holmes is wrong.
Now, Sherlock Holmes is a fictional character, so shouldn’t be taken as a source of authority. To be honest, many non-fictional characters shouldn’t be taken as a source of authority either. Fiction can be useful, though. Russell’s teapot is one of many thought experiments. The dining philosophers problem [Wikipedia-2] is a good story for thinking through concurrency and deadlock problems. Five philosophers sit at a table, with a plate each. There is a fork between each plate, but eating from a pile of spaghetti requires two forks. The problem is to allow the philosophers to eat or think, while ensuring none starve. It’s easy to end up with a deadlock, whereby philosophers starve. Setting the problem as a story makes it easier to visualize and discuss. I’m sure you can think of many other stories or thought experiments. Schrödinger’s cat comes to mind too [Wikipedia-3]. Even if you don’t understand the physics, you have probably heard of the story. Is the cat both dead and alive until you look? Is that impossible? I’ll leave that thought with you.
Stories can be a useful way of thinking about things. They can illustrate an abstract idea or help to compress a chain of thought. By ‘compress’ I mean pick out salient parts, rather than conveying everything. Maybe your CV is a work of fiction, to some extent? Not that you have made up roles, but have you tried to give it a narrative, emphasising relevant skills and experience for a specific role? You eliminate the irrelevant, if you are as old as me. Fitting everything on two pages is difficult. If you don’t have much experience, filling two pages is a different problem. Don’t forget, if you write for Overload you can include that on your CV.
Some stories worry me, though. It’s easy to come to unfounded conclusions if you follow Sherlock Holmes’ statement. I notice myself thinking, ‘Oh, perhaps they are annoyed because…’ or ‘That bug must be due to…’ or similar. I suspect you do as well. If you think of something that’s not impossible, that does not mean it is correct. I spent a long while working in finance, where I saw reports called ‘PnL Explain’, which ‘explained’ the profit or loss on a balance sheet. Sometimes ‘attribution’ is used instead of ‘explain’. There is more than one way to calculate this, and you often end up with an ‘unexplained’ portion of profit or loss [Wikipedia-4]. These reports are useful for risk analysis, but the idea that an explanation might come with an unexplained part is of note. Another finance example involves validating financial models. You often value a complicated instrument based on something simple that you can find prices for in the markets. Your model should be able to reconstruct the values you get from the markets, but often doesn’t do this precisely. On more than one occasion I have seen ‘stories’ told explaining why there are differences in the numbers, floating point inaccuracy being a common excuse. More than once, the team later found a bug in the code which more accurately explained the difference.
We all come to wrong conclusions from time to time. That’s OK. Being humble enough to admit your mistakes and say sorry matters. Maybe, going forward, we can try to notice when we have picked what’s left after eliminating the impossible, without having thought of everything possible. Or catch ourselves leaping at a possible explanation: the first thing you think of to make sense of the world might not be correct. Being wrong is OK, but that’s why we all need to bounce our ideas off people, get a code review, or sanity check with a review team.
References
[Britannica] ‘Pythagoreanism’, published by Britannica, available at: https://www.britannica.com/topic/number-symbolism/Pythagoreanism
[Buontempo24] Frances Buontempo ‘Counting Quals’ in Overload 184, published December 2024, available at: https://accu.org/journals/overload/32/184/buontempo/
[Buontempo25] Frances Buontempo ‘Don’t be negative’, slides from a talk given at C++Online on 27 February 2025, available from: https://cpponline.uk/session/2025/dont-be-negative/
[Cambridge] ‘Death by number’, published on Underground Mathematics by University of Cambridge, last updated 18 Jan 2016, available at: https://undergroundmathematics.org/thinking-about-numbers/death-by-number
[C++Online] Access to talks available from: https://cpponline.uk/on-demand-early-access-pass-now-available/
[Fowler15] Martin Fowler, ‘Yagni’, posted 26 May 2015 at https://martinfowler.com/bliki/Yagni.html
[Hanlon17] Jamie Hanlon ‘Why is so much memory needed for deep neural networks?’, published 31 January 2017 on Graphcore, available at: https://www.graphcore.ai/posts/why-is-so-much-memory-needed-for-deep-neural-networks
[RationalWiki] ‘Holmesian fallacy’ at http://rationalwiki.org/wiki/Holmesian_fallacy
[Wikipedia-1] ‘Russell’s teapot’, available at: https://en.wikipedia.org/wiki/Russell%27s_teapot
[Wikipedia-2] ‘Dining philosophers problem’, available at: https://en.wikipedia.org/wiki/Dining_philosophers_problem
[Wikipedia-3] ‘Schrödinger’s cat’, available at: https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat
[Wikipedia-4] ‘PnL explained’, available at: https://en.wikipedia.org/wiki/PnL_explained
Frances Buontempo has a BA in Maths + Philosophy, an MSc in Pure Maths and a PhD using AI and data mining. She’s written a book about machine learning: Genetic Algorithms and Machine Learning for Programmers. She has been a programmer since the 90s, and learnt to program by reading the manual for her Dad’s BBC Model B machine.