Semi-automatic Weapons

By Frances Buontempo

Overload, 23(128):2-3, August 2015

Automating work can save time and avoid mistakes. But Frances Buontempo doesn’t think you should script everything.

Having been distracted by an esteemed member’s [@ChrisOldwood] tirade of gags and puns on Twitter, I thought I might attempt to tell a joke myself.

“Doctor, doctor, it hurts when I jab myself in the eye with a pencil.”

“Have you tried automating this?”

As you can see, I am not very good at jokes. I did spend some time trying to think up others, and as ever this has put paid to any hope of writing an editorial for this issue. In all seriousness, I have been musing on various kinds of automation recently. My dream of automatically generating an editorial remains just that – a dream. Instead I have been observing other people trying to automate various activities, with varying degrees of success. Having also just finished reading The Thrilling Adventures of Lovelace and Babbage: The (mostly) true story of the first computer [Padua], I was struck this time by the motivation behind the first computer – to replace the human computers who produced logarithmic and other mathematical tables with an automatic calculating machine. Padua suggests that Babbage owned a mechanical ballerina – an automaton – of which he was extremely proud. Perhaps he was inspired by the famous silver swan built by Merlin, the roller-skating inventor [Swan]. The steam-punk style mechanical swan gracefully appears to move in a stream and catch a fish. A young Charles Babbage is reported to have seen it and been mesmerised. Merlin also showed him a mechanical dancer, which Babbage managed to purchase at auction many years later [Dancer]; he proudly restored it and displayed it on a glass pedestal in his office. It seems his youthful enchantment with automata fed his later attempts to build the first computer. These tales also make a delightful steam-punk comic [Padua].

Though certain automata were purely for entertainment, Babbage’s difference and analytical engines were designed to speed up laborious manual calculations, even though they may not have progressed much beyond the design stage. Other machines fall somewhere in between. In a century when people wondered if man could create life – when Mary Shelley’s vision of Frankenstein creating a monster seemed a real possibility – one may discover the beginnings of attempts at artificial intelligence. Many years before the Deep Blue chess computer [DeepBlue], one finds the Mechanical Turk [Turk] – an apparent automaton which was really rather good at playing chess. Despite the appearance of being a machine that could play chess automatically, it was in fact being driven manually by, well, a man concealed inside, not the automatic marvel first promised. Obviously, it needed to be driven by a very good chess player, since a machine playing chess poorly would have been somewhat less remarkable. We will return to this theme of something ‘automatic’ actually being semi-automatic or manual shortly.

While many of the mechanical marvels entertained the bourgeoisie or aristocracy, automation gradually crept into almost all realms of life. In England, we are taught about the rise of the machines and the clog-throwing saboteurs and Luddites. To be fair, the saboteurs were possibly French [Saboteur]. The Luddites, sometimes described as machine-breakers, were attempting to bring about change, putting the workers in a better negotiating position with their employers. The word is frequently misused nowadays to mean someone who is afraid of technology. Technophobe would be a better term for such a person, though that could be argued to literally mean one who is afraid of skill – those of us who work with ‘jobbing programmers’ had better beware! One of the saboteurs’ methods was the withdrawal of efficient working, specifically where workers ‘avoid any actions that would hurt their own job prospects’ [Method]. You may suspect that failing to do your job efficiently would harm your job prospects, but until we are able to measure efficiency this may be a moot point. Some measures, such as lines of code written, have been used previously, but they tend to encourage people to work to the metrics rather than produce the software our customers desire, and furthermore discourage deleting code.

Getting back on track, it is important to notice that the machines the saboteurs were punished for breaking were not fully automated. They still required workers to load up the threads for the looms and so on. So many of our automatic tools are likewise only semi-automatic. An automatic car does not drive itself, yet. Furthermore, many attempts at machine learning or artificial intelligence require a lot of human input. Cordelia Schmid remarked at the BCS’ annual Karen Spärck Jones lecture this year that models using hand-tuned parameters are not examples of machines learning [Eigenfaces]. Babbage’s analytical engine required human-produced punched cards for its instructions. A continuous integration server will do nothing until a human commits some code, apart from time-based runs, which still require code in the first place and a human to set up the job. Each automatic set-up requires a human in the loop [HITL].

Having observed that many automated processes are not fully automated, it might be worth stopping to think why we tried to automate the process in the first place. If you have written a library to automatically generate boiler-plate code to save you hand-crafting it, it might be worth pausing to consider why you need so much boiler-plate code at all. And besides, who hand-writes code these days anyway? Perhaps you are solving the wrong problem. Having to do something over and over again might indicate a deeper underlying problem that should be removed. If your code needs a special linker tool to find all the dependencies, it might be better to re-architect your solution so it doesn’t need turtles, I mean dependencies, all the way down. If you use a dependency injection framework to automate the composition of code, and end up constantly hand-tuning miles and miles of XML, you should probably start wondering if you are going down the wrong path, rather than writing a GUI to generate the required configuration automatically once someone has clicked all the right buttons. Semi-automatic weapons might not be the best solution to a computer problem, though we all feel the computer deserves destruction from time to time. Use the right tool for the job, or consider if the job’s worth doing.

This brings me neatly on to the ‘F’ word… I say the ‘F’ word, but there is more than one:

  • Framework
  • Factory, notice the Luddites at the ready again
  • Farm, another aspect involved in the industrial revolution
  • Fudge, or perhaps kludge, certainly not very tasty
  • FAQs – often written before anyone tries to ‘Read the Flipping Manual’ so they are often not frequently or indeed ever asked
  • Future-proofed – almost never involving any kind of proof
  • Functional, just barely, and only on one machine
  • Failover, again and again and again
  • And others
    • Fortran, function, Fail...
    • Fuzz, futz, fiddle,
    • File, FAT, file-system, factor, foobar, frame, FIFO…

The list is rather long, so let us focus on just the first few. Though the word factory clearly has its origins in the Latin facere – to make or do – it also carries the sense of an office for agents or workers in a possibly foreign place, or perhaps derives from the idea of an oil press or mill [Etymonline]. Perhaps you should start using IMill for your abstract factory builder patterns from now on. Factory is now used to mean a place where items are manufactured, and the irony of automatically manufacturing something is not lost on me. Notice that the word manufacture literally means making something by hand. Is anything truly automated?

The ubiquitous frameworks are a specific example of semi-automatic weapons. These APIs and libraries are designed to take some of the monotony out of writing code. Various third-party frameworks exist, ranging from Ajax libraries to various middleware and so on. Many large companies end up writing their own as well, to suit their special needs, which tends to add to the time required to learn the framework, whereas with a third-party solution you have a chance to hire people who already know how to drive the code. Clearly each approach has pros and cons. Part of the drive behind many frameworks is software reuse, to save people the time spent reinventing the wheel. If your framework is slowing you down, “You’re doing it wrong”. It is possible to take reuse too far: if you extracted every for loop and moved it into a framework, you would end up tying together lots of different modules that had no reason to be bound together. It is important to step back and consider what problem you are trying to solve before speeding ahead and writing scripts or frameworks for every commonality you spot or imagine. If you do conclude that it is worth making a library, or framework, of reusable components, keep Gödel’s incompleteness theorems in mind:

  • in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F [‘F’ words again]
  • such a formal system cannot prove that the system itself is consistent (assuming it is indeed consistent) [ Gödel ]

It would be worrying to end up with an inconsistent framework, though perhaps as programmers we are less concerned by its incompleteness – just keep churning out more code. If Gödel’s theorems seem a little abstract, we could simply consider the entropy of our code. The lower the entropy, the more likely we are to be able to compress the reusable parts down into a library, if not a framework [Veldhuizen]. Of course, it is hard to know in advance how much entropy our code will contain, and yet many people will be tempted to automate something before they have tried it. This flies in the face of advice like ‘YAGNI’ – you ain’t going to need it. It is hard to find the balance between experimentally writing a few scripts that might come in handy and waiting until you know what you really need. Veldhuizen [op cit] draws out an interesting principle:

Principle 1 (Entropy maximization). Programmers develop domain-specific libraries that minimize the amount of frequently rewritten code for the problem domain. This tends to maximize the entropy of compiled programs that use libraries.

In other words, as we pull out commonality, what we are left with has higher entropy, or chaos. By trying to make our lives easier, we are causing chaos, which may be no bad thing, but on the face of it this seems remarkable. After some mathematics the author proves that “the process of discovering new and useful library components is not a process that can be fully automated”. It is not possible to automate everything. Humble human programmers will always be required. Our tools can only ever be semi-automatic, and are frequently hand-crafted. One blog on this paper draws the conclusion: “The takeaway is that when trying to create reuse you can probably do it forever so one needs to temper this desire with practicality.” [ElegantCoding]

There is nothing wrong with a framework per se, but it is very difficult to write one up front. Martin Fowler talks about ‘harvested frameworks’ [Fowler], whereby you grow, or rather harvest, a framework from a working application. That allows you to spot commonality after you have written it, instead of guessing up front. In general, I try to capture repetitive tasks at least as a script once I have done them three times or so, allowing me to get a feel for what’s common and what needs to be configurable. There is nothing wrong with a semi-automatic process. Even though our continuous integration server will run the tests after compiling the code, this won’t stop me running them before I commit my changes. In our geek-driven search to automate everything, we need space for humans in the loop. If jabbing yourself in the eye hurts, stop doing it rather than automating it. As Bill Gates is reported to have said:

The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency. [ Gates ]




[HITL] (a) Cranor, ‘A Framework for Reasoning about the Human in the Loop’, Usability, Psychology and Security, 2008; (b) Rothrock and Narayanan, Human-in-the-Loop Simulations: Methods and Practice, Springer, 2011.

[Padua] Sydney Padua, The Thrilling Adventures of Lovelace and Babbage: The (mostly) true story of the first computer, Pantheon, 2015.

[Veldhuizen] Veldhuizen, ‘Libraries and their Reuse: Entropy, Kolmogorov complexity, and Zipf’s Law’, OOPSLA 2005.
