It Began with Babbage

Subrata Dasgupta

But there were some to whom the very austerity, economy, and aesthetics of mathematical notation beckoned, and to none more so than the Canadian Kenneth Iverson (1920–2004). A graduate in mathematics and physics from Queen's University, Kingston, Ontario, Iverson did his graduate work in applied mathematics at Harvard, obtaining his PhD in 1954 with a dissertation on the automatic solution of linear differential equations. He was a student of Howard Aiken, and years later he would acknowledge Aiken's influence on his thinking. One of the precepts he learned from Aiken was the desire for simplicity in scientific thinking.126

After receiving his doctorate, and as a freshly appointed assistant professor of mathematics at Harvard, Iverson was inducted into a new graduate program on "automatic data processing" initiated by Aiken.127 As part of his new teaching duties, he embarked on the invention of a new notation for teaching and writing about computation—a notation, in other words, for strictly human consumption; a notation for pedagogic use,128 but just as much a tool of thought.129

We revert to the question: When does notation become a language? The dividing line is fuzzy. The Swiss-American mathematician and historian of mathematics Florian Cajori (1859–1930) titled his most famous book A History of Mathematical Notations (1928), not A History of Mathematical Language. On the other hand, we commonly speak of mathematics as "the language of science"—meaning not just that it is a notation, but that the calculus of mathematics serves as a means of scientific discourse.

At any rate, in 1962, Iverson published his first book, devoted to the notation he had invented and, with breathtaking aplomb, titled his book A Programming Language.130 Although the language as yet had no name, this very title would, in a few years, inspire the acronym APL. Rather as John Backus's name is inextricably entwined with FORTRAN, so also is Iverson's with APL.

XXIII

If brevity is the soul of wit, as Polonius claimed in Hamlet, then both writer and reader of APL programs must be enormously quick-witted, for, in designing APL, Iverson took his mentor Aiken's ideal of simplicity to heart to mean extreme brevity. His goal seemed to be: how little do I need to write to say something meaningful? The outcome was a notation that was uncompromisingly terse—hence the cult of the "one-liner" for expressing meaningful computational actions, much beloved of APL devotees in later years.131

This brevity of expression was the result of a number of factors, the most noticeable being the presence of an implicit parallelism in the meanings (semantics) of APL statements. We illustrate this with two small examples.

First, suppose there are two programs (or routines) named A and B, and suppose that inside B there is an APL statement

→A, n

Then the execution of this statement results in the immediate execution of the statement numbered n in A and the continuation of the execution of the statements in A following n, while B also continues on to its next statement. In other words, this statement effects a branch to another program that is then executed concurrently with the continuing execution of the originating program.
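Purely as an analogy (this is Python, not APL, and the program names and statement texts are invented for the illustration), the following sketch mimics the semantics just described: B "branches" into A at statement n, A runs from that statement onward in its own thread of control, and B immediately carries on with its next statement.

import threading

def program_A(n):
    # A toy "program A": numbered statements, executed from statement n onward,
    # mimicking a branch into A at statement n.
    statements = ["A: statement 1", "A: statement 2", "A: statement 3"]
    for text in statements[n - 1:]:
        print(text)

def program_B():
    # A toy "program B" containing the analogue of the APL statement ->A, n
    # (here with n = 2).
    print("B: before the branch")
    thread = threading.Thread(target=program_A, args=(2,))
    thread.start()                   # A begins executing from statement 2 ...
    print("B: next statement")       # ... while B continues concurrently.
    thread.join()

program_B()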

A second source of implicit parallelism in APL statements lay in the fact that APL is an array-based language. In addition to ordinary numbers (scalars), its fundamental data types are vectors and matrices—two standard mathematical entities. APL's brevity of expression is exemplified by notations that express operations on entire vectors or matrices. To take an example, suppose the elements of a vector x = [x1, x2, …, xn] are to be added to form a single scalar z = x1 + x2 + … + xn. In an Algol-like language, this would be expressed as something like

z := 0;
i := 1;
while i ≤ n do
begin
  z := z + x[i];
  i := i + 1
end

That is, after "initializing" the variables z and i, the value of x[i] is added to z iteratively, with i incremented by one each time. The iteration (the while loop) ends when the value of i exceeds n.

In APL, this task would be expressed by the statement

z ← +/x
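The contrast can be made concrete in Python with NumPy (this is not part of the original text, and the array values are merely illustrative): an explicit loop in the style of the Algol-like fragment, set against a single array-level reduction in the spirit of +/.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# Element-by-element summation, in the style of the Algol-like fragment above.
z_loop = 0.0
i = 0
while i < len(x):
    z_loop = z_loop + x[i]
    i = i + 1

# Array-level reduction in the spirit of APL's  z <- +/x :
# one expression that folds addition across the entire vector.
z_reduce = np.add.reduce(x)    # equivalently, x.sum()

assert z_loop == z_reduce

The second form says nothing about the order in which the individual additions happen, which is precisely the kind of implicit parallelism described above.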

XXIV

As we have noted, Iverson's original intention in inventing APL (even before it came to be known by this acronym) was to create a notation for teaching computing. It was meant to be a pedagogic tool.132 Indeed, when he and Frederick P. Brooks (1931–) wrote their textbook Automatic Data Processing (1965),133 some of the algorithms presented therein were in APL. But well before then, in 1960, Iverson left Harvard for IBM's Thomas J. Watson Research Center, the company's headquarters for fundamental research. And, much more interestingly and creatively, in 1964, Iverson's colleague and (as it turned out) long-time collaborator, Adin Falkoff (1921–2010), set out to use APL as a machine description language. One of IBM's major computer systems of the mid-1960s, the System/360, was described formally and precisely in APL (although the language in this exposition was still presented as a notation without a name).134

This work was especially seminal, for it was undoubtedly the first formal description of a physical computer (more precisely, its architecture, a liminal artifact) in a quasi-programming notation. When coupled with the description of algorithms in Automatic Data Processing, we find a blurring of the distinction between the computer as a material artifact, algorithms as abstract artifacts, and computer architectures as liminal artifacts. They can all be spoken about using a common language.

The description of the System/360 in APL also marks the emergence of a new language genus that came to be called computer hardware description language (or computer design language).135
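The Falkoff description itself is far too large to reproduce, but a toy sketch may suggest what it means to describe a machine in an executable notation. The following Python fragment is entirely hypothetical (it reflects neither the System/360 nor APL); it describes a miniature accumulator machine by writing its instruction-execution cycle down as a program.

def run(program, memory):
    # A toy machine description: one accumulator, a program counter, a small
    # memory, and a fetch-decode-execute cycle written out as ordinary code.
    acc = 0     # accumulator register
    pc = 0      # program counter
    while pc < len(program):
        op, operand = program[pc]
        pc = pc + 1                      # fetch: advance the program counter
        if op == "LOAD":                 # acc <- memory[operand]
            acc = memory[operand]
        elif op == "ADD":                # acc <- acc + memory[operand]
            acc = acc + memory[operand]
        elif op == "STORE":              # memory[operand] <- acc
            memory[operand] = acc
        elif op == "HALT":
            break
    return memory

# memory[2] <- memory[0] + memory[1]
print(run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)], [3, 4, 0]))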

By the time the description of the System/360 was published, the notation had matured sufficiently that thoughts turned to its implementation—the transformation of a mathematical notation for thinking about computation into a programming language.136 In two fundamental ways, however, the language's implementation deviated markedly from the implementation of such languages as FORTRAN and Algol 60. First, the symbols constituting the APL notation demanded the development of a distinct character set for printers to print APL programs. Indeed, specialized APL input/output devices (called APL terminals) were developed.137 Second, the execution strategy for APL programs eschewed the compile-and-execute strategy developed for FORTRAN and other contemporary programming languages (that is, first translate the whole program into "native" machine code and then execute the translated program). Instead, APL statements were interpreted (by another program, the "interpreter") and executed one statement at a time. APL was implemented as an interpretive language.138 A new language, implemented for the IBM System/360 and named APL\360, was thus born.139
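To make the contrast concrete, here is a minimal Python sketch (not the APL\360 interpreter; the two-statement toy language is invented for the illustration) of the interpret-and-execute-one-statement-at-a-time strategy: each source line is analyzed and carried out immediately, with nothing translated into machine code beforehand.

def interpret(source):
    # A statement-at-a-time interpreter for a tiny invented language with two
    # statement forms: "name = expression" and "print expression".
    env = {}                                    # current variable bindings
    for line in source.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("print "):
            # evaluate the expression against the current bindings, then print
            print(eval(line[len("print "):], {}, env))
        else:
            name, expr = line.split("=", 1)
            env[name.strip()] = eval(expr, {}, env)
    return env

interpret("""
x = 2 + 3
y = x * 10
print y + 1
""")

A compiler, by contrast, would translate the whole of such a program into the target machine's code before any of it ran.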

