A recent discussion started on accu-general about editors, specifically asking for helpful top-tips for gvim or emacs, rather than starting yet another editors’ war. This halted all noble thoughts of finally writing an Overload editorial, since I couldn’t remember the first editor I used. The internet came to my rescue, and I whiled away minutes speed reading the BBC user manual. The keyboard was an integral part of the Beeb. It had a break key: “This key stops the computer no matter what it is doing. The computer forgets almost everything that it has been set to do” [BBCUserGuide]. Commands typed at a prompt were executed immediately, whereas a series of numbered instructions formed a program. The user guide is littered with instructions on how to type, such as “If you want to get the + or * then you will have to press the SHIFT key as well as the key you want. It’s rather like a typewriter: while holding the SHIFT key down, press the + sign once.” [op cit]
Being able to type well seemed to be important back then, if slightly esoteric. After all, computers do often come with a keyboard, though not always. Can you imagine programming on your mobile phone using predictive text? Many years ago, people’s first encounter with programming might have been via a programmable calculator, which posed similar editing problems. I’m sure most of us have seen rants from Jeff Atwood and Steve Yegge on typing, or rather many programmers’ inability to type [Atwood]. Though typing is very important, I think my first interaction with a computing device was sliding round beads on an abacus, later followed by ‘needle cards’ [NeedleCards]. Allow me to explain – I dimly recall punching holes in a neat line along the top of cue cards, writing something on the cards, quite possibly names of polygons and polyhedra (ooh – I have just discovered polytopes, but I digress), and using each hole to indicate whether a property held true or false for the item on the card. If the property were true, you cut the hole out to form a notch, and if false, you left the hole intact (or possibly vice versa). To discover which shapes involved, say, triangles, you stuck a knitting needle through the pile of cards at the hole representing the property ‘Triangle?’, and gave the stack of cards a good shake. The relevant ones then fell in front of you (or stayed on the needle – it is all a tad hazy). My introduction to knitting was thereby derailed and I still can’t knit.
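For the curious, the needle-and-notch trick amounts to a query by property over a stack of records. Here is a minimal sketch in Python of how it might be modelled – the shape names, property names and the `needle` function are all illustrative inventions of mine, not part of any real needle-card system:

```python
# A toy model of 'needle card' (edge-notched card) selection: each card
# records which properties hold true for its shape.  A notch at a
# property's hole position lets the card drop off the needle when the
# stack is shaken; un-notched cards stay on the needle.

CARDS = {
    "triangle":    {"Triangle?", "Polygon?"},
    "square":      {"Polygon?"},
    "tetrahedron": {"Triangle?", "Polyhedron?"},
    "cube":        {"Polyhedron?"},
}

def needle(cards, prop):
    """Push the needle through the hole for `prop` and shake the stack.

    Cards where the property holds are notched, so they fall off;
    the rest stay on the needle.  Returns (fell_off, stayed_on).
    """
    fell_off = sorted(name for name, props in cards.items() if prop in props)
    stayed_on = sorted(name for name, props in cards.items() if prop not in props)
    return fell_off, stayed_on

fell, stayed = needle(CARDS, "Triangle?")
print(fell)    # the shapes involving triangles drop in front of you
```

Repeating the needling on the fallen cards with a second property gives a logical AND of properties – a surprisingly capable mechanical database.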
Historically, inputs to a variety of machines involved holes in card or paper. Aside from the pianola, or autopiano, which used perforated paper to mechanically move piano keys, it is often claimed that computers trace back to the Jacquard loom. According to one Wikipedia page, “The Jacquard loom was the first machine to use punched cards to control a sequence of operations” [Jacquard]. Eventually, computers used punched cards, for example Hollerith cards, prepared using key punch machines. On a trip to Bletchley Park a couple of years ago, I was surprised to learn that the paper tape input for Colossus (more holes in paper) was read by light, rather than mechanically, and could therefore achieve a comparatively high reading speed. Generating input in this forward-only format must have been quite time consuming. Without a backspace key, if you make a mistake, “You have to throw the card out and start the line all over again.” [R-inferno] Aside from getting the holes in the punched cards correct and the cards themselves arranged in the correct order, they needed carrying around: “One full box was pretty heavy; more than one became a load” [Fisk]. Not all machine inputs were holes in card or paper. Consider the ‘Electronic Numerical Integrator and Computer’, ENIAC. Though it would accept punched cards, its programming interface originally “required physical stamina, mental creativity and patience. The machine was enormous, with an estimated 18,000 vacuum tubes and 40 black 8-foot cables. The programmers used 3,000 switches and dozens of cables and digit trays to physically route the data and program pulses.” [ENIAC]
Eventually random-access editing became possible, with punched cards being replaced by keyboards. On a typewriter, it was possible to wind the paper back and use Tipp-Ex to edit previous mistakes, though it helped if you wanted to replace the mistake with the same number of characters, or fewer. Tipp-Ex cannot change the topology of the paper you are typing on. “In 1962 IBM announced the Selectric typewriter” [Reilly]. This allowed a proportional font and had a spherical ‘golf-ball’ typing head that could be swapped to change font. The ball would rotate and pivot in order to produce the correct character. These electric typewriters eventually morphed into machines with memory, thereby allowing word-processing. They were also used as computer terminals after the addition of solenoids and switches, though other purpose-built devices were available [Selectric]. A computer interface that allows editing changes the game. Emacs came on the scene in 1976, while Vim, released in 1991, was based on Vi, which Wikipedia claims was also written in 1976 [vi]. Many editors allow syntax highlighting now, adding an extra dimension to moving backwards and forwards in the text. This requires the ability to parse the language being input, which is leaps and bounds beyond making holes in card. Parsing the language on input also allows intellisense or auto-completion, though I tend to find combo-boxes popping up in front of what I am trying to type very off-putting. After a contract using C# with Resharper for a year, my typing speed has taken a nose-dive, and my spelling is now even worse. I tend to look at the screen rather than the keyboard since I can type, but when the screen is littered with pop-ups I stop watching the screen and look out of the window instead. If only that particular IDE came with a pop-up blocker.
In order to use these text editors, a keyboard obviously is required. QWERTY keyboards have become the de facto standard. Why? “It makes no sense. It is awkward, inefficient and confusing. We’ve been saying that for 124 years. But there it remains. Those keys made their first appearance on a rickety, clumsy device marketed as the ‘Type-Writer’ in 1872.” [QWERTY] This article debunks the myth that its inventor Sholes deliberately arranged his keyboard to slow down fast typists who would otherwise jam up the keys. The arrangement seems rather to have been designed to avoid key jams directly, rather than slowing down typists and hoping this avoided key jams. Surprisingly, the keyboard in the patent has neither a zero nor a one. I am reliably told that the letters ‘l’ and ‘O’ would be used instead. Not only have typewriters left us with the key layout, but also archaic terms like carriage return, line feed and shift. There are other keyboard layouts, such as Dvorak, which you can switch to in order to confuse your colleagues or family. I am personally tempted to get a Das Keyboard. From their marketing spiel, “Efficient typists don’t look at their keyboards. So why do others insist on labelling the keys? Turns out you’ll type faster than you ever dreamed on one of these blank babies. And that’s not to mention its powers of intimidation.” [DASKeyboard]
Research into computer interfaces has a surprisingly long history. “The principles for applying human factors to machine interfaces became the topic of intense applied research during the 1940s” [HCI]. Human-computer interaction is still an active area of research today, covering many inter-disciplinary areas from psychology to engineering. Input methods have moved away from knitting needles and even keyboards. Starting with text editing in the 1950s, through the mouse (1968) and gesture recognition (1963) – notice both in the 60s – to the first WYSIWYG editor-formatter, Xerox PARC’s Bravo (1974) [Meyers], new ways of telling a computer what to do are constantly being created. Perhaps we are moving closer to the realisation of a futuristic cyberpunk dream or virtual reality. “By coupling a motion of the user’s head to changes in the images presented on a head-mounted display, the illusion of being surrounded by a world of computer-generated images or a virtual environment is created. Hand-mounted sensors allow the user to interact with these images as if they were real objects located in space surrounding him or her” [HCI]. Mind you, Meyers tells us virtual reality was first worked on in 1965–1968, using head-mounted displays and ‘data gloves’ [op cit], so perhaps it is more a dystopian cyberpunk dream that is constantly re-iterated. Let’s keep our eyes on the latest virtual reality, Google Glass [Glass].
A variety of programming languages have sprung up now that we can type words into machines, or click on words in intellisense. Some programming languages do seem to be easier to edit than others. APL springs to mind. APL had its own character set, and required a special typeball. “Some APL symbols, even with the APL characters on the typeball, still had to be typed in by over-striking two existing typeball characters. An example would be the ‘grade up’ character, which had to be made from a ‘delta’ (shift-H) and a ‘Sheffer stroke’ (shift-M). This was necessary because the APL character set was larger than the 88 characters allowed on the Selectric typeball.” [APL] It seems likely that making input to a computer easier than punching cards or swapping cables and flipping switches gave rise to high-level programming languages. I wonder if any new ways of interacting with computers will further change the languages we use. Perhaps the growth of test-driven development, TDD, will one day be taken to its logical conclusion: humans will write all the tests, and machines will generate the code needed to pass them. Genetic programming was introduced to perform exactly this task, possibly before TDD was dreamt of [GP]. If this became the norm, another form of human-computer interaction we have not considered would become obsolete: compilers. They exist purely to turn high-level languages, understood by humans, into machine language. If programs are eventually all written by machines, there will be no need for a human to ever read another line of code again. Electronic wizards can automatically generate all the code we need; we are only required to get the tests correct.
We have considered a variety of ways of editing inputs for computers, so should step back and consider editors in the more usual sense of the word. An editor is “in charge of and determines the final content of a text”, according to Google. It is striking that this is remarkably like the definition of a computer editor. Now, an editor needs something to edit, which ultimately required the invention of printing. Originally, scribes would hand-copy texts, self-editing as they went, but eventually books and pamphlets could be mass-produced. This allowed arguments about spelling and an insistence on accuracy, with characters such as Frederic Madden coming to the fore [Matthews]. Perhaps with the prevalence of blogs and other means of self-publishing, things have come full circle, with people once again tending to self-edit.
We have seen the historical forms and roles of editors, and glimpsed a fifty-year-old dream of the future. I feel more prepared to attempt a proper editorial next time, but suspect I might need to learn to type properly first. Hoorah for spell checkers and the Overload review team.
[R-inferno] Patrick Burns, 2011 (http://www.burns-stat.com/pages/Tutor/R_inferno.pdf)
Overload Journal #114 – April 2013: Journal Editorial