on the inter-linked origins of symbolic computing
and the self-modifying, stored-program computer.
*** Only detailed descriptions of Turing's ACE, EDVAC design.
*** Summaries of all other machines.
*** Only 1st gen machines (pre-1950 designs)
*** Maybe find 1st machine with logical instructions?
NEED INFO ON LOVELACE!!
NOTES:
(1) EDVAC vs. Turing, ENIGMA p324 (critical)
(2) IAS instruction set, 20TH, p343
(3) Manchester instruction set, 20TH p436 (only vestiges of ACE, but some vestiges)
SOURCES:
COMPUTABLE... [summary in ENIGMA suffices]
ACE Report
EDVAC report [locate]
"First draft of a report on the EDVAC", June 1945
"Proposed electronic calculator" [from notes; what is this?!]
"Computing machinery and intelligence"
TURING
WEAK ITEMS TO COVER EXPLICITLY:
Why is there a lack of symbolic (eg. character) output in ACE-derived machines?
(viz turing letter to gandy)
Turing's abandonment of higher-level work
goal
----
To point out that what makes computers interesting is not "speed or complexity of
mathematical computation" at all, but the ability to do symbolic logic;
and further, that numerical computation is a sub-set of that.
During the period in question (late 1930's through 1950 or so) only Alan Turing
understood this explicitly [LOVELACE?].
Finally, to debunk the self-aggrandizement in GOLDSTINE regarding timelines and milestones.
intro
-----
Symbolic computing is the manipulation of symbols via machinery,
and the machinery that does this we generally call "computers".
My concern here is with the dawn of the modern computer age,
approximately 1936 through 1955,
when electronic computers decimated (sic) all other forms
of non-living symbol manipulation.
Simply put, symbolic computing is what computers are for,
though it's interesting to note that nearly all of the early interest
in electronic computers was for calculating,
largely because long, complex, error prone mathematical work
was quickly becoming the limiting factor throughout much of science
(and it makes a lovely concrete goal to impress less-imaginative
bean counters and bureaucrats).
Few in the earliest days understood the true importance of symbolic
computing (and I'd dare say few really understand it today).
[LOVELACE!] In fact, until well after electronic digital computers
were established, only Alan Turing understood it in its full
glory, with all its implications.
Turing was the first to tie theoretical concerns to then-current technology
to make a self-modifying, stored-program computer in the modern sense.
This was implicit in his 1936 paper "ON COMPUTABLE NUMBERS..."[NOTE],
developed in his World War II work on cryptological machinery, and
made explicit in his 1946 "ACE Report". Consider, for example, the observation
that mathematical computing is a subset of symbolic computing
(eg. "floating point arithmetic" isn't arithmetic per se; it is a simulation
of arithmetic's rules, with its own set of behaviours and effects).
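The point can be made concrete with a toy sketch (invented here; not any historical machine's format): represent numbers as (mantissa, exponent) symbol-pairs and define "addition" as a rule-following procedure on those symbols. The rules have their own behaviours -- here, an order-dependence that true addition lacks.

```python
PRECISION = 3  # keep only 3 significant decimal digits of mantissa

def fp(mantissa, exponent):
    """Normalize to at most PRECISION digits: rounding is a rule we chose,
    not an arithmetic truth."""
    while abs(mantissa) >= 10 ** PRECISION:
        mantissa = round(mantissa / 10)
        exponent += 1
    return (mantissa, exponent)

def fp_add(a, b):
    """'Addition' as a procedure on symbol-pairs (mantissa, exponent)."""
    (ma, ea), (mb, eb) = a, b
    # align exponents, discarding digits that fall off the small end
    while ea > eb:
        mb = round(mb / 10); eb += 1
    while eb > ea:
        ma = round(ma / 10); ea += 1
    return fp(ma + mb, ea)

big   = fp(100, 2)   # represents 10000
small = fp(30, 0)    # represents 30
left  = fp_add(fp_add(big, small), small)   # (10000 + 30) + 30
right = fp_add(big, fp_add(small, small))   # 10000 + (30 + 30)
print(left, right)   # (100, 2) vs (101, 2): order matters, unlike true addition
```

The same operands give different results depending on grouping, exactly the kind of behaviour-with-effects that makes floating point a simulation of arithmetic rather than arithmetic itself.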
The only other person who may have grasped these ideas was JvN,
who was well aware of Turing's early work[NOTE]; but JvN
was entirely occupied with other projects, notably hydrodynamic
problems for the atomic effort at Los Alamos.
My goal is to document the above, and correct some errors
in the historical record, mainly claims made by H. H. Goldstine
regarding the sequence of events.
Instead of researching historical conversations and relying on my data being
bigger than yours, I will use the architecture and instruction sets, the "order code",
of the historical machines as physical (sic) evidence,
my assertion being that the order code, well documented and widely
agreed to be correct, reflects what a particular machine's
designers actually thought, and intended, at the time,
regardless of any claims made then, or much later.
JvN apparently [RESEARCH!] saw stored-program as a means to the
end of fast mathematical machines for other projects, eg. Los Alamos.
He may have been aware of symbolic computing; discussions with Turing.
The British B.P. experience set them as the inheritors of Turing's legacy.
There is no question that British researchers had the first TWO
working computers in the world.
background
----------
Today we can merely assume we know what is meant by the word "computer", but in this
historical period its meaning was anything but stable.
Until the waters were muddied with all this complicated machinery,
"a computer" meant only "a person that computes", that is, a person
with pencil, paper and formulae or instructions, who, well, computed;
added columns of numbers, solved equations, etc.
When reasonably complex electro-mechanical mathematical machinery appeared in the 1920's,
ones that could perform large, repetitive tasks such as solving a simple equation for
many values of an input N, they were reasonably called "automatic computers"; while
just plain "computer" retained its old meaning.
During World War II many new machines put into use were called computers;
for example, special-purpose analog machines used to aim guns, bombsights, etc,
which calculated gun-aim angles with a fixed formula, from inputs such as speed, mass, air
temperature etc, and a table of constants, provided as a metal cam, drum, etc.
During most of the 1940's, "computing machinery" or "automatic computer"
meant machinery that computed, or
calculated, or effected some change in the world, with some sort of internal,
"black box" operations. This could be in the form of continuous calculations, such
as servo systems (eg. positioning a tool in response to other stimulus) or discrete
(eg. calculating firing tables for all velocities 100 to 10,000); in other words,
"analog" and "digital", it was all computing.
In the late 1940's (early post-war period), when machinery approaching the
capabilities of what we today call "computers" appeared, terminology got very messy.
"Automatic computer" or "automatic electronic computer" (if indeed it was electronic)
were the most accurate, and used within technical worlds (they were popularized as
"brains" or "electronic brains" by an enthusiastic press). At various periods, when
the design of some particularly modern machine was being discussed, such as EDVAC,
people would describe others as "machines of the EDVAC type". (I remember as a kid
in the early 1960's the word "UNIVAC" was synonymous with "computer" (modern usage);
it's all but forgotten now.)
By the early 1950's it was quickly becoming obvious that discrete-symbolic machinery
-- aka digital -- was the technology of choice; for very specific tasks continuous
modelling ("analog computing") was cheapest, simplest and extremely fast, but it didn't scale
well (eg. doubling accuracy in analog means complete circuitry redesign, tolerances
skyrocket; in digital you simply add another column of identical circuitry,
tolerances remain the same) and was extremely inflexible.
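The "add another column of identical circuitry" claim can be sketched in a few lines (a hypothetical illustration, not any real machine's adder): an N-digit adder is N copies of one identical digit cell, so doubling precision is just doubling the replication, with per-cell tolerances unchanged.

```python
def digit_cell(a, b, carry_in):
    """One decimal column; identical at every position in the machine."""
    total = a + b + carry_in
    return total % 10, total // 10    # (digit out, carry out)

def add(a_digits, b_digits):
    """Add two equal-length digit lists, least-significant digit first."""
    carry, out = 0, []
    for a, b in zip(a_digits, b_digits):
        d, carry = digit_cell(a, b, carry)
        out.append(d)
    return out + [carry]

# A 4-digit adder and an 8-digit adder are the same cell, replicated:
print(add([9, 9, 9, 9], [1, 0, 0, 0]))    # 9999 + 1 -> [0, 0, 0, 0, 1]
print(add([9] * 8, [1] + [0] * 7))        # 99999999 + 1
```

In analog circuitry there is no equivalent move; each doubling of accuracy forces a redesign of the whole signal path.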
"Automatic electronic digital computer" quickly shortened into "computer", through
the attrition of its competitors.
We are of course considering only "automatic, electronic, digital computers" here.
types of digital computers
--------------------------
There are three basic internal organizations for electronic computing machinery;
all are in use today. The idea isn't to create hard categories that all devices
must fit into, but a short-hand for describing existing machines. There are of
course easily-conjured exceptions to every category below.
STORE, STORAGE, MEMORY:
CALCULATOR: Defined here as a semi-automatic machine that can manipulate numbers or
symbols in fixed ways, and specifically cannot modify its functioning in any way
(though it may perform conditional calculations, eg. "if number is less than zero then...").
In simple calculators, of the modern $4.00 drug-store variety, or a Burroughs desk
calculator of the 1920's, the manipulations
are limited to the functions built-in; add, subtract, multiply, divide. "In fixed
ways", intentionally vague, means essentially that it performs functions one after the
other as directed externally with fingers or punched cards.
An example of a complex calculator might be an IBM calculating card punch of the 1940's;
given a stack of dozens or thousands of operands stored
on punched cards, a calculating punch can perform a complex operation on the number
contained on each card, and punch out another card with the results. This kind of
machine was used to do most of the monstrously-large calculations needed for the first
few atomic (nuclear) bomb hydrodynamic calculations.
The internal organization of calculators can be quite arbitrary, and often is.
Functional unit(s) (adder, multiplier, etc) are connected directly to the
memory registers, and to the I/O devices (keyboard/display, etc).
STORED-PROGRAM COMPUTER: defined as a machine organized as an array of addressable storage
locations ("memory") containing machine instructions, or data, or both, and a
control unit that executes instructions from memory.
The essential idea is that of the linear memory and the separate control unit.
Instructions and data can be in separate storage, or in the same; there can be
no data storage at all, but there must be stored instructions.
Machines of this type today are called "Harvard architecture"; examples would be
the Signetics 8X300 (instruction store of 256, 11-bit instructions, data store
of 256, 8-bit words), or a Microchip Inc PIC microcontroller (program store of
1024, 14-bit instructions; data store of 64, 8-bit words).
Modern "calculators" such as the Hewlett Packard 48GX are really stored-program computers;
at this level of functionality the borders become very gray.
SELF-MODIFYING STORED-PROGRAM COMPUTER: Defined as a STORED-PROGRAM COMPUTER that explicitly
does not distinguish instructions and data; the control unit can modify or
execute either or both. This is the modern computer. It specifically can
modify its instructions, intentionally and sometimes otherwise. (It was the latter
that was feared in the 1940's.)
All general-purpose, desktop, laptop, and work-station computers
today are of the SELF-MODIFYING STORED-PROGRAM COMPUTER type.
A modern computer has a large memory store, and a collection of programs stored
elsewhere, usually a rotating disk. All of your programs, documents, images,
are stored in the same memory, there is no special "data" memory separate from
"program" memory. They are separated only by the intent of the running programs,
a vastly complex task (that fails all too often).
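The definition above can be made concrete with a minimal sketch, using an order code invented here for illustration (no historical machine used it): instructions and data share one linear store, and the program rewrites the address field of one of its own instructions as it runs.

```python
# Invented one-address order code: each memory word holds op*100 + address.
LOAD, ADD, STORE, INC, JZ, JUMP, HALT = 1, 2, 3, 4, 5, 6, 7

def run(mem):
    """Fetch-execute loop: the control unit cannot tell orders from data."""
    acc, pc = 0, 0
    while True:
        op, addr = divmod(mem[pc], 100)
        pc += 1
        if op == LOAD:    acc = mem[addr]
        elif op == ADD:   acc += mem[addr]
        elif op == STORE: mem[addr] = acc
        elif op == INC:   mem[addr] += 1   # works on orders and data alike
        elif op == JZ:    pc = addr if acc == 0 else pc
        elif op == JUMP:  pc = addr
        elif op == HALT:  return mem

mem = [0] * 16
# Sum the words at 10..12 by bumping the address field of the ADD order
# at cell 1 -- the program rewrites itself as it runs.
mem[0] = LOAD * 100 + 14    # acc = running sum
mem[1] = ADD  * 100 + 10    # acc += mem[addr]; this word is modified below
mem[2] = STORE * 100 + 14
mem[3] = INC  * 100 + 1     # self-modification: mem[1] += 1
mem[4] = LOAD * 100 + 13    # acc = loop counter
mem[5] = ADD  * 100 + 15    # acc += -1
mem[6] = STORE * 100 + 13
mem[7] = JZ   * 100 + 9     # counter exhausted -> halt
mem[8] = JUMP * 100 + 0
mem[9] = HALT * 100
mem[10], mem[11], mem[12] = 7, 8, 9   # data, in the same store
mem[13], mem[15] = 3, -1              # loop counter, constant -1
print(run(mem)[14])   # 24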
For brevity, I will call all SELF-MODIFYING STORED-PROGRAM COMPUTERS simply "computers",
and all other machinery "calculators". If I need to be more explicit I'll use the defined
name, capitalized.
need to describe the pressures that created stored-program and put data/inst together.
timeline
--------
On Computable numbers..., Alan Turing, 1937
ENIAC design begun, spring 1943
COLOSSUS, 1943, very programmable, tiny electronic data storage; proto-computer. (secret 'til late 70's)
"First draft of a report on the EDVAC", JvN, June 1945
WHIRLWIND (digital) design begun, "late 1945" (20TH p 366)
ENIAC, 15 February 1946 demonstration (footnote re: demo vs. first date)
ACE Report [REAL NAME], Alan Turing, early 1946
"Preliminary discussion", JvN, 1946
"Planning and coding", JvN, 1947
ATLAS design begun, August 1947 20TH p490
ENIAC made switch-programmable, November 1947 20TH p459 GOOD DETAIL!
MANIAC design /edvac/ begun, "1948" 20TH p460
Manchester "baby" ("baby MARK I") computer, runs, 21 June 1948 ENIGMA p413, 20TH p433
SWAC design begun /edvac/, May 1948
Pilot ACE design begun, "early 1949" 20TH p108
ORDVAC, ILLIAC design /edvac/ begun, 1949
EDSAC runs, 6 May 1949, first computer in daily service
BINAC, aircraft design, ran Aug 1950, first American computer to run
SWAC dedicated, August 1950
"Computing machinery and intelligence", Turing, October 1950
Pilot ACE, ran on Nov 1950,
ATLAS ran, December 1950, second American machine 20TH p490
EDVAC/Moore School ?
Ferranti Mark 1
IBM 701 design begun, early 1951
MANIAC runs, March 1952
IAS, 1952
ORDVAC delivered, /edvac/ March 1952
ILLIAC delivered, /edvac/ September 1952
IBM 701 delivered, December 1952
DEUCE?
ACE?
describe major contemporary projects
chronology
describe camps
neuronal model
JvNs? Turing seems to have adopted.
outcome of cybernetics
support
--------
Except for the innate ability to manipulate arbitrary symbols, computers today would simply
be really fast calculators; in fact, nearly all of the earliest computers were built for
daunting numerical problems of the time.
Very few people at the dawn of modern computing (1935 - 1955) understood the implications
of symbolic computing,
and it took over half a century for it to be truly understood.
stored-program
--------------
Most histories concentrate on the development of the "stored-program" concept; indeed it is
important, but only part of the story.
To some degree the stored-program idea was rendered "obvious" in the early 1940's;
like many other scientific and technological developments, it occurred to many
people more or less at the same time, because they were all subjected to the same
pressures to solve the same problems.
Computers didn't spring forth fully formed, they incremented out of the background, though
the increments came extraordinarily fast due to many factors,
the largest being the open and sharing nature of mathematical and physical-science societies,
and the pressures, and enormous resources
made available by governments during the war.
By far, the biggest single problem in the way of building computing/calculating machinery
was storage; regardless of architecture, real-world problems contain lots of data,
and real-life solutions require lots of mathematical and logical steps.
Not to make light of the enormous problems of designing and building electronic devices
to perform mathematical functions and control the logical operation of the machine,
the problem of memory determined the capabilities of automatic computers for decades, and
severely limited the ability to even make one at all (the world's first running
stored-program computer[BABY] had 1024 bits of storage).
(Though in general, storage capacity separates the calculators from the
computers; calculating card punches such as the IBM [model] performed
operations on a set of input cards, but had very little "storage", usually
tens of numbers or less, though in later years some grew to [need number].)
A long, linear array of memory or storage "cells", each of fixed size, is the simplest
possible organization for electronic components;
you make one cell work, and replicate as many as you can afford.
You know you want the largest possible store for data and intermediate variables;
and you know that you want your complex and expensive machine to be "programmable",
eg. the operations performed on the data must vary with the particular problem to be solved.
It was abundantly obvious by the mid-1940's that fixed-function machines
(relay calculators or ENIAC) were too inflexible; the machine's operational orders
needed to be as flexible as the data.
Building one memory store was daunting enough; to have to build a second for
machine orders was simply too much!
But with clever electronics you could put orders, and data, into the same
store, saving thousands of tubes, at the cost of increased control complexity
(a few hundred tubes), and avoiding the problem of how large to make separate
order/data stores, etc.
As F.C. Williams said, "fast memory is not cheap, and cheap memory is not fast!".
So stored-program became "obvious" to those actually working on
calculating/computing machinery; no deep theory was needed, it was like falling
off a chair to anyone with a budget and an engineering team.
Pulling it off in actual hardware was another story;
the early machines were at least 100 times as large and complex as
any electronic device ever built.
Only the tiniest portions of the machines had been built and tested,
yet the goal was so tantalizing and the value so high, and
otherwise-conservative funding sources so revved up
post-war, that the seemingly-impossible was actually done.
The stored-program concept was first written down by JvN in [EDVAC REPORT] in 1945,
[HOW ABOUT COMPUTABLE NUMBERS? LOVELACE?]
so he and/or his team frequently get credit for it;
but the idea was well-known at the time, though JvN certainly was a major
contributor to this and many other ideas, such as the
one-address machine with central accumulator, a design feature of EDVAC
and its American offshoots.
[more]
argument
--------
Symbolic computing, as mentioned earlier, is really what makes computers interesting.
This, however, did not become "immediately obvious", and it appears that very, very few ever got it
at all.
It's clear today that the innate ability to fiddle about with arbitrary symbols
is what computers are all about. (I mean at the machine-language layer; of course
highly abstract stuff like graphical human interfaces is entirely symbolic, but it
all relies on that innate ability to twiddle bits symbolically, at machine-
instruction layer.)
I assert not only that this was not obvious and that very few understood the implications,
but further, that many didn't get it even after it was well-established. And it is no
coincidence that the people who made all of the earliest advances, and in fact the first
operating computers, were the same people who built, out of dire, life-saving need,
machinery that did nothing else but symbolic manipulations, for code-breaking
purposes during the war.
Rather than a my-data-is-bigger-than-your-data argument, I will make my case by
examining the instruction sets ("orders") of machines designed and sometimes
built in the early years, with the assumption that regardless of claims
made at that time or years later, the instruction sets are the embodied
intent of the original machine.
I will examine the instruction sets of the following machines:
[why these machines]
COLOSSUS, built 1943
ENIAC, built 1946; programmable Nov 1947
EDVAC, designed 1946 (no machine named "EDVAC" built)
Turing's ACE, designed 1946
Manchester Baby computer, built 1948
EDSAC, built 1949
Manchester Mark 1 [DATE]
BINAC, built 1950
Pilot ACE, built 1950 (built by Womersley's crowd)
IAS, MANIAC, built 1952 (EDVAC design)
COLOSSUS, 1943
--------------
Type: special-purpose programmable calculator
Instruction set: "large number" of pluggable, random logic gates
Facilities: 5x5 memory (shift register)
501-bit stream generator
Storage: 20 decade counters
http://www.cranfield.ac.uk/ccc/bpark/colossus.htm
A highly-specialized programmable calculator, COLOSSUS had some rudiments of modern
computing, but it was by no means a computer.
It was designed and built under wartime conditions to perform only cryptological
functions (at which it excelled).
Because of the rapidly-changing and often initially-unknown crypto algorithms,
COLOSSUS had to be highly programmable.
It was once "plugged up" to perform multiplications, and other tasks far beyond
its original design requirements.
[ref to performance vs. Pentium-based algorithm; see URL above?]
While it's not a computer in any direct sense, it is certain that those
who worked on it or with it gained a massive head-start in understanding the
requirements of a true stored-program computer.
The very existence of the COLOSSUS machines, and therefore their
capabilities, remained secret until the late 1970's [REFERENCE],
and so until relatively recently it wasn't possible to acknowledge their
place in the pre-history of computing.
Furthermore, the British didn't share COLOSSUS-era work with the
Americans, so it remained on their side of the Atlantic.
[COLOSSUS REBUILD]
ENIAC, 1946
-----------
Type: plug-programmable calculator
Instruction set:
Facilities:
Storage: [n] 10-digit decimal numbers
ENIAC was designed and built to be a patch-cord programmable
calculator, but ended up a true programmable calculator a year after its debut.
It was built to compute ballistic tables for the military,
as its full name, "Electronic Numerical Integrator and Computer", indicates.
ENIAC was plug-programmable, and had a human-settable function table that
stored constants for calculations. Data in and out was via punched cards.
Like card-calculators before it, it could perform conditional computation,
based upon the sign of an accumulator (eg. stop when minus, etc).
After a brilliant modification suggested by JvN in November 1947, ENIAC became
a true programmable calculator.
JvN's idea consisted of adding a device called simply the "converter",
that translated decimal numbers entered into a large function table
(banks of hundreds of switches arranged in rows, into which numerical
constant tables were entered for calculations) into pulses that
simulated the plugging of patch cords (thereby generating "instructions").
The converter created an added level of abstraction for the programmer/user:
60 order codes, ENIAC's new instruction set, which performed the electrical
equivalent of plugging in patch cords, including a conditional
instruction.
While the added level of abstraction involved in the function table to
plug-cord modifications greatly slowed the operation of the machine,
the order-of-magnitude ease and speed of setup far outweighed any loss.
It also allowed programs to be checked for correctness: the same program
(decimal numbers) could be read in from a stack of punched cards
and verified against the function-table contents.
(Though it specifically could not load a program from any
source into its instruction store; hence ENIAC is not a
"stored program" computer, though it was now fully programmable.)
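The converter's trick can be sketched as a tiny interpreter (the codes below are invented for illustration; ENIAC's actual 60-order repertoire differed): each decimal code stands in for one fixed pattern of patch-cord connections.

```python
def run(orders, table):
    """Interpret decimal order codes; each code selects one fixed
    'wiring pattern' the cords used to provide."""
    acc, step = 0, 0
    while step < len(orders):
        code, arg = orders[step]
        if code == 1:                 # clear accumulator
            acc = 0
        elif code == 2:               # add entry arg of the function table
            acc += table[arg]
        elif code == 3:               # conditional: skip next order if acc < 0
            if acc < 0:
                step += 1
        elif code == 9:               # halt
            break
        step += 1
    return acc

# The "program" is just decimal numbers -- it can be punched on cards,
# read back in, and checked, unlike a tangle of patch cords.
program = [(1, 0), (2, 0), (2, 1), (9, 0)]
print(run(program, table=[40, 2]))   # 42
```

The slowdown the text mentions is visible here too: every order passes through an extra layer of decoding, the price of being able to treat the program as data.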
This is almost certainly the first machine with "software" as we know
it today.
JvN's function table::patch cord "hack" is an excellent example of
software abstraction/hardware speed tradeoff (even though the hack is
in fact hardware, it performs a symbolic translation for the users).
Turing's ACE, 1946 (never built)
------------------
Type: Stored-program binary digital computer
Organization: serial
Storage: 6400 32-bit words
register machine [get type name from CPU book]
N address?
32 regs TS1 - 12 special
15 bit addr space
Instruction set: 11 instructions, some sub-modes
Instruction set: ~11 inst., 9 types ACE REPORT (p13, 54)
B branch bit3, lvs PC+1 in reg allows recursion. else type A
K, L, M, N register/data moves
O output to card
P read card
Q CI to TS6 ("acc.")
R logical p61, & | ~ op? =0 rot(n)
S arithmetical (microcoded)
T "microcode"
Facilities:
Storage:
(4) ACE instruction set in Programmers Handbook
Though this machine was never actually built, like von Neumann's
EDVAC design the ACE had a large impact on the soon-to-be-built computers.
It was documented in [ACE REPORT], published the same year as [EDVAC], to which
document the ACE Report refers. However, ACE is not derivative of EDVAC;
further, there is reason to believe [need ref] that JvN and Turing talked about
computer design, not uncommon in the late 1940's. JvN was certainly aware of
Turing's earlier work [ref] [ENIGMA? JvN states he knew of Turing's early work],
in spite of [GOLDSTINE].
Architecturally, ACE is a reasonably "modern" machine, with no major architectural oddities
(if you consider a serial machine not-odd), though there is some juggling of
lower address bits to accommodate memory latency. However, unlike all contemporary
machines, it was a very spare design; the ACE was to be a "software" machine.
At that time, it was considered most
reasonable to arrange that the encoded letter "A" (for instance) on a teleprinter
or punched card be made to correspond, directly, in hardware, to the "ADD" instruction
of the computer. Turing determined, and rightly so, that this was wasteful, and such
things were better done in software.
Rather than a centralized "accumulator" design, Turing designed the arithmetical
elements to be more or less autonomous units, whose functioning ran in parallel with
central control, which made it an extremely fast design.
Turing went much further than this; the ACE Report specifies initial library subroutines
(ACE was capable of reentry and recursion), including a free-form "assembly language".
He makes explicit the conceptual differences between machine language (1's and 0's
contained in memory), relocatable symbolic code (eg. an opcode and a memory reference to
be resolved later) and written, human-readable opcodes, as well as named
variable references and other modern constructs. These ideas were considered
wild, and were not implemented in any first-generation systems.
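The distinction can be sketched as a toy two-pass assembler (the mnemonics and encoding below are hypothetical, not the ACE Report's actual notation): pass one records where each named label lands, pass two resolves the symbolic references into the final machine words.

```python
OPCODES = {"LOAD": 1, "SUB": 2, "JZ": 3, "JUMP": 4, "HALT": 5, "WORD": 0}

# Human-readable source: (label, opcode, symbolic operand)
source = [
    ("loop",  "LOAD", "count"),   # named variable reference
    (None,    "JZ",   "done"),    # symbolic address, resolved later
    (None,    "SUB",  "one"),
    (None,    "JUMP", "loop"),
    ("done",  "HALT", None),
    ("count", "WORD", 3),         # data, assembled into the same store
    ("one",   "WORD", 1),
]

# Pass 1: assign addresses and record where each label lands.
symbols = {label: addr for addr, (label, _, _) in enumerate(source) if label}

# Pass 2: resolve references, pack opcode and address fields into words.
def encode(op, operand):
    if op == "WORD":
        return operand                     # plain data word
    addr = symbols[operand] if operand else 0
    return OPCODES[op] * 256 + addr

machine_code = [encode(op, operand) for _, op, operand in source]
print(machine_code)   # the words actually placed in memory
```

All three layers Turing distinguished appear: the written mnemonics, the symbolic references pending resolution, and the final machine words in store.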
The ACE design was the only first-generation computer to include logical and bit-manipulation
instructions; in all other machines the emphasis was on mathematics, and nearly all
included hardware multiply and divide. The inclusion of logical instructions
(AND, OR, INVERT, etc) was at once far-sighted and entirely pragmatic -- Turing
had realized a decade earlier that anything a computer could do could be simulated, or
emulated, in a general-purpose symbol-manipulating computer;
and the wartime work at Bletchley Park proved the importance and usefulness of character/symbol
manipulation by machine.
Alas, Turing was his own worst enemy; also in the ACE Report he delved into hardware
design (pedestrian at best, along the way insulting the engineers that might build it) and
scolded his own bureaucratic bosses for cowardice and lack of insight --
not a good way to finesse a difficult project.
Another liability, not his fault, was that the direct experience he
based much of his design on, his wartime cryptological machinery
work, was under extreme secrecy; few could even know what he had worked
on. This made many of his best ideas, based upon years of direct
experience in computing, completely opaque until the work was largely
declassified in the 1970's.
Turing went through a number of versions of the ACE design. The Pilot ACE
was based upon one of his earlier designs, version V. There were further versions
of the ACE design, after the published report.
* Refers to JvN "report on EDVAC"
Logical instructions (end sect. 4, p.27)
Photo Pilot ACE, p128, 132+
* cf. Edvac, "risc" order set, do in SW even branch. (prob. from exp w/modding BP machines)
*
EDVAC design, 1946
------------------
Manchester Baby computer, 1948
------------------------------
Type: Stored-program binary digital computer
Instruction set:
Facilities:
Storage: 1024 bits arranged as 32 words of 32 bits each
http://www.computer50.org/index.html
Programmer's reference manual http://www.cs.man.ac.uk/prog98/ssemref.html
EDSAC, 1949
-----------
Type: Stored-program binary digital computer
Instruction set:
Facilities:
Storage:
BINAC, 1950
-----------
Type: Stored-program binary digital computer
Instruction set:
Facilities:
Storage:
Pilot ACE, 1950 (built by Womersley's crowd)
---------------
Type: Stored-program binary digital computer
Instruction set:
Facilities:
Storage:
EDVAC, 1952 (encompasses IAS, MANIAC machines)
-----------
Type: Stored-program binary digital computer
Instruction set:
Facilities:
Storage:
could not modify instructions until 1947, and for tagged instructions only,
eg. address fields. (1) [Need orig. documents]
Pilot ACE
---------
Storage: Mercury delay line; 300 32-bit words
Version V
See 20TH p111
Though Pilot ACE was a faint shadow of Turing's ACE design, its basic design
features shone through. Turing's floating point routine code, with its optimum coding
(eg. programming around serial memory latencies), made it nearly as fast as
fixed-point arithmetic; on more conventional machines such as EDSAC, floating
point was considered too slow to be useful.
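Optimum coding can be sketched numerically (line size and timings invented for illustration, not Pilot ACE's actual figures): on a serial store a word is readable only as it circulates past the head, so the coder places each successor instruction at the address that will be passing just as the machine finishes the current order.

```python
LINE = 32                        # words circulating in one delay line (hypothetical)
EXEC = {"ADD": 2, "MULT": 6}     # invented execution times, in word-times

def wait(cur, nxt, exec_words):
    """Idle word-times before address nxt next passes the read head."""
    ready = (cur + exec_words) % LINE
    return (nxt - ready) % LINE

# Naive layout (successor at cur+1): after a 6-word multiply the machine
# idles almost a full circulation of the line.
print(wait(0, 1, EXEC["MULT"]))   # 27
# Optimum layout: put the successor exactly where the machine will be ready.
print(wait(0, 6, EXEC["MULT"]))   # 0
```

The program's layout in memory thus becomes part of its performance, which is why optimum-coded routines like the floating point library ran so close to fixed-point speed.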
The autonomous-unit design was kept; for example, though the multiplier unit
didn't handle signs, these could be calculated in software while the multiplier
was working.
The Pilot ACE machine however had its logical instructions gutted by Womersley,
who originally criticised Turing's design for being "outside the mainstream of
computer design" [find ref in ENIGMA].
SOURCES