Understanding Computation

Tom Stuart

[37] Tape, like Stack, is purely functional: writing to the tape and moving the head are nondestructive operations that return a new Tape object rather than updating the existing one.

[38] For a Turing machine, “nondeterminism” means allowing more than one rule per combination of state and character, so that multiple execution paths are possible from a single starting configuration.

[39] Strictly speaking, this is only true for enhancements that we actually know how to implement. A Turing machine would become more powerful if we gave it the magical ability to instantly deduce the answers to questions that no conventional Turing machine can answer (see Chapter 8), but in practice, there’s no way of doing that.

[40] Although the grid itself is infinite, it can only ever have a finite number of characters written onto it, so we only need to store the rectangular area containing all the nonblank characters.

[41] Binary is base two, unary is base one.

[42] We’ve glossed over exactly how a tape should be represented, but that’s not difficult either, and we always have the option of storing it on a simulated second tape by using the technique from Multiple Tapes.

Part II. Computation and Computability

Throughout the first part of this book we’ve played around with familiar examples of
computation: imperative programming languages, state machines, and general-purpose computers.
Those examples have shown us that computation is—more or less—the process of using a system
to manipulate information and answer questions.

Now, in this second part, we’re going to be a bit more adventurous. We’ll start by looking for computation in unfamiliar places, and finish by exploring the fundamental limits of what computing machines can do.

As programmers we work with languages and machines that are designed to fit our mental models of the world, and we expect them to come equipped with features that make it easy to translate our ideas into implementations. These human-centered designs are motivated by convenience rather than necessity; even the simple design of a Turing machine is meant to remind us of a mathematician working with pencil and paper.

But friendly, familiar machines aren’t the only places where computation can happen. More unusual systems can be just as computationally powerful, even if their inner workings aren’t as easy for humans to control or to understand. We’ll investigate this idea in Chapter 6 by trying to write programs in an extremely minimal language that doesn’t seem to have any useful features at all, and follow the thread further in Chapter 7, where we’ll survey a variety of simple systems and see how they’re able to perform the same computations as more complex machines.

Once we’ve convinced ourselves that full-powered computation can happen in many different kinds of system, we’ll spend Chapter 8 examining what computation itself is actually capable of. It’s natural to assume that computers can solve essentially any problem as long as enough time and effort is spent on writing a suitable program, but there turn out to be hard theoretical constraints: certain problems just can’t be solved by any computer, no matter how fast and efficient it is.

Unfortunately, some of these insoluble problems are concerned with predicting the behavior of programs, which is exactly the kind of thing that programmers would like computers to help them with. We’ll look at some strategies for coping with these hard limits of the computational universe, and conclude in Chapter 9 by exploring how to use abstraction to squeeze approximate answers out of unanswerable questions.

Chapter 6. Programming with Nothing

If you wish to make an apple pie from scratch, you must first invent
the universe.


Carl Sagan

In this book, we’ve been
trying to understand computation by building models of it. So far, we’ve modelled
computation by designing simple imaginary machines with various constraints, and seen that
different constraints produce systems with different amounts of computational power.

The Turing machines from Chapter 5 are interesting because they’re able to implement complex behavior without
relying on complex features. Equipped with just a tape, a read/write head,
and a fixed set of rules, Turing machines have enough flexibility to
simulate the behavior of machines with better storage capabilities, or
nondeterministic execution, or any other fancy feature we might want. This
tells us that full-blown computation doesn’t require a machine with a lot of
underlying complexity, just the ability to store values, retrieve them, and
use them to make simple decisions.

Models of computation don’t have to look like machines; they can look like programming languages instead. The Simple programming language from Chapter 2 can certainly perform computation, but it’s not elegant in the way that a Turing machine is.
It already has plenty of syntax—numbers, Booleans, binary expressions, variables, assignments,
sequences, conditionals, loops—and we haven’t even started to add the features that would make
it suitable for writing real programs: strings, data structures, procedure calls, and so
on.

To turn Simple into a genuinely useful programming language would be hard work, and the resulting design
would contain a lot of incidental detail and not reveal very much about the
basic nature of computation. It would be more interesting to start from
scratch and create something minimal, a Turing machine of the programming
language world, so that we can see which features are essential for
computation and which are just incidental noise.

In this chapter, we’re going to investigate an extremely minimal programming language called the untyped lambda calculus. First, we’ll experiment
with writing programs in a dialect of Ruby that approximates the lambda calculus by using as few
language features as possible; this will still just be Ruby programming, but imposing imaginary
constraints gives us an easy way to explore a restricted semantics without having to learn a
whole new language. Then, once we’ve seen what this very limited feature set is capable of,
we’ll take those features and implement them as a standalone language—with its own parser,
abstract syntax, and operational semantics—using the techniques we’ve learned in earlier
chapters.

Impersonating the Lambda Calculus

To see how
programming in a minimal language can work, let’s try solving a problem in Ruby
without taking advantage of its many helpful features. Naturally, that means no gems, no
standard library, and no modules, methods, classes, or objects, but since we’re trying to be
as minimal as possible, we’ll also avoid the use of control structures, assignment, arrays,
strings, numbers, and Booleans.

Of course, there won’t be a language left to program in if we avoid absolutely every feature of Ruby, so here are the ones we’re going to keep:

  • Referring to variables

  • Creating procs

  • Calling procs

That means we can only write Ruby code that looks like this:

-> x { -> y { x.call(y) } }
Note

This is roughly how untyped lambda calculus programs look, and that’s a good enough approximation for our purposes. We’ll look at the lambda calculus itself in more detail in Implementing the Lambda Calculus.

To make our code shorter and easier to read, we’re also going to allow ourselves to use
constants as abbreviations: if we create a complex expression, we can assign it to a
constant to give it a short name that we can reuse later. Referring to the name is
no different from retyping the original expression again—the name just makes the code less
verbose—so we won’t be making ourselves dependent upon Ruby’s assignment features. At any
time, we can decide to be more strict and undo the abbreviations by replacing each constant
with the proc it refers to, at the expense of making our programs much longer.
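As a small illustration of our own (the constant name SELF_APPLY is ours, not the book’s), referring to a constant behaves exactly like retyping the expression it abbreviates:

```ruby
# Give a complex expression a short name…
SELF_APPLY = -> f { -> y { f.call(f.call(y)) } }

# …then using the name and retyping the expression give identical results.
abbreviated = SELF_APPLY.call(-> n { n + 1 }).call(0)
expanded    = (-> f { -> y { f.call(f.call(y)) } }).call(-> n { n + 1 }).call(0)

abbreviated  # => 2
expanded     # => 2
```

Undoing the abbreviation just makes the program longer; it never changes what it does.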

Working with Procs

Since we’re going to be
building entire programs out of procs, let’s spend a
minute looking at their properties before we dive into using
them.

Note

For the moment, we’re still using full-featured Ruby to illustrate the general behavior of procs. We won’t impose the restrictions until we start writing code to tackle The Problem.

Plumbing

Procs are plumbing for moving values around programs. Consider
what happens when we call a proc:

-> x { x + 2 }.call(1)

The value that’s provided as an argument to the call, in this case 1, flows into the parameter of the block, in this case x, and then flows out of the parameter to all the places where it’s used, so Ruby ends up evaluating 1 + 2. It’s the rest of the language that does the actual work; procs just connect parts of the program together and make values flow to where they’re needed.

This already doesn’t bode well for our experiment in minimal
Ruby. If procs can only move values between the pieces of Ruby that
actually do something with them, how are we ever going to be able to
build useful programs out of procs alone? We’ll get to that once we’ve
explored some other properties of procs.

Arguments

Procs can take multiple arguments, but this isn’t an essential feature. If we’ve got a proc that takes multiple arguments…

-> x, y { x + y }.call(3, 4)

…we can always rewrite it as nested single-argument
procs:

-> x { -> y { x + y } }.call(3).call(4)

Here, the outer proc takes one argument, x, and returns the inner proc, which also takes one argument, y. We can call the outer proc with a value for x and then call the inner proc with a value for y, and we get the same result as in the multiargument case.[43]

Since we’re trying to remove as many features of Ruby as possible, let’s restrict ourselves to creating and calling single-argument procs; it won’t make things much worse.
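In fact, full-featured Ruby can perform this rewrite for us: Proc#curry converts a multiple-argument proc into the nested single-argument form. We won’t rely on it under our restrictions, but it’s a quick way to check that the two styles agree (our own example):

```ruby
add = -> x, y { x + y }

# #curry produces the nested single-argument version of `add`,
# equivalent to -> x { -> y { x + y } }.
curried = add.curry

curried.call(3).call(4)  # => 7
add.call(3, 4)           # => 7
```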

Equality

The only way to find out about the code inside a proc is to call it, so two procs are interchangeable if they produce identical results when called with the same arguments, even if their internal code is different. This idea of treating two things as equal based on their externally visible behavior is called extensional equality.

For example, say we have a proc p:

>> p = -> n { n * 2 }
=> #

We can make another proc, q, which takes an argument and simply calls p with it:

>> q = -> x { p.call(x) }
=> #

p and q are obviously two different procs, but they’re extensionally equal, because they do exactly the same thing for any argument:

>> p.call(5)
=> 10
>> q.call(5)
=> 10

Knowing that p is equivalent to -> x { p.call(x) } opens up new opportunities for refactoring. If we see the general pattern -> x { p.call(x) } in our program, we may choose to eliminate it by replacing the whole expression with just p, and under certain circumstances (which we’ll see later), we might decide to go in the other direction too.
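As a tiny demonstration of our own, the wrapped pattern and the bare proc are interchangeable anywhere a proc is expected:

```ruby
increment = -> n { n + 1 }

# The general pattern -> x { p.call(x) }…
wrapped = -> x { increment.call(x) }

# …and the bare proc produce the same results everywhere they’re used:
[1, 2, 3].map { |n| wrapped.call(n) }    # => [2, 3, 4]
[1, 2, 3].map { |n| increment.call(n) }  # => [2, 3, 4]
```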

Syntax

Ruby provides a choice of syntax for creating and calling procs. From this point onward, we’ll use -> arguments { body } to create a proc and square brackets to call it:

>> -> x { x + 5 }[6]
=> 11

This makes it easy to see the body and argument of the proc
without too much extra syntax getting in the
way.

The Problem

Our goal is
to write the well-known FizzBuzz program:

Write a program that prints the numbers from 1 to 100. But for multiples of three,
print “Fizz” instead of the number, and for the multiples of five, print “Buzz.” For
numbers that are multiples of both three and five, print “FizzBuzz.”

This is an
intentionally simple problem, designed to test whether an
interview candidate has any programming experience at all. Anybody who
knows how to program should be able to solve it without much
difficulty.

Here’s an implementation of FizzBuzz in full-featured Ruby:

(1..100).each do |n|
  if (n % 15).zero?
    puts 'FizzBuzz'
  elsif (n % 3).zero?
    puts 'Fizz'
  elsif (n % 5).zero?
    puts 'Buzz'
  else
    puts n.to_s
  end
end

This isn’t the cleverest implementation of FizzBuzz—there are plenty of clever ones out there—but it’s a straightforward one that anyone could write without thinking about it.

However, this program contains some puts statements, and we have no way to print text to the console using only procs,[44] so we’re going to replace it with a roughly equivalent program that returns an array of strings rather than printing them:

(1..100).map do |n|
  if (n % 15).zero?
    'FizzBuzz'
  elsif (n % 3).zero?
    'Fizz'
  elsif (n % 5).zero?
    'Buzz'
  else
    n.to_s
  end
end

This is still a meaningful solution to the FizzBuzz problem, but
now it’s one that we have a chance of implementing using only
procs.

Despite its simplicity, this is quite an ambitious program if we don’t have any of the features of a programming language: it creates a range, maps over it, evaluates a big conditional, does some arithmetic with the modulo operator, uses the Fixnum#zero? predicate, uses some string literals, and turns numbers into strings with Fixnum#to_s. That’s a fair amount of built-in Ruby functionality, and we’re going to have to strip it all out and reimplement it with procs.

Numbers

We’re going to start by focusing on the numbers that appear in FizzBuzz. How can we possibly represent numbers without using Fixnums or any of the other datatypes that Ruby provides?

If we’re going to try to implement numbers[45] from scratch, we’d better have a solid understanding of what we’re implementing. What is a number, anyway? It’s hard to come up with a concrete definition that doesn’t accidentally assume some aspect of what we’re trying to define; for example, “something that tells us how many…” is not very useful, because “how many” is really just another way of saying “number.”

Here’s one way of characterizing numbers: imagine we have a bag of apples and a bag of
oranges. We take an apple out of one bag, an orange out of the other, and put them aside;
then we keep taking out an apple and an orange together until at least one of the bags is
empty.

If both bags become empty at the same time, we’ve learned something interesting: in spite of containing different things, those bags had some shared property that meant they became empty at the same time; at every point during the procedure of repeatedly removing an item from each bag, either both bags were nonempty or both bags were empty. This abstract property shared by the bags is what we can call a number (although we don’t know which number!), and we can compare these bags with any other bag in the world to see if it has the same “number” as them.

So one way to characterize numbers is by repetition (or iteration) of some action, in this case, taking an item from a bag. Each number corresponds to a unique way of repeating an action: the number one corresponds to just performing the action; the number two corresponds to performing it and then performing it again; and so on. The number zero, unsurprisingly, corresponds to not performing the action at all.
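Here’s a sketch of our own in full-featured Ruby that captures this characterization: the “number” three is nothing more than performing an action, then performing it again, and again:

```ruby
# “Three” characterized purely as repetition of an action.
perform_three_times = -> action { action.call; action.call; action.call }

# The action here is removing an apple from a bag; the count we
# observe afterward is the number the repetition encodes.
apples_removed = 0
perform_three_times.call(-> { apples_removed += 1 })

apples_removed  # => 3
```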

Since making and calling procs are the only “actions” our program can perform, we can try implementing a number n with code that repeats the action of calling a proc n times.

For example, if we were allowed to define methods—which we’re not, but play along—then we could define #one as a method that takes a proc and some arbitrary second argument, and then calls the proc with that argument once:

def one(proc, x)
  proc[x]
end

We could also define #two, which calls the proc once and then calls it again with whatever the result of calling it the first time was:[46]

def two(proc, x)
  proc[proc[x]]
end

And so on:

def three(proc, x)
  proc[proc[proc[x]]]
end

Following this pattern, it’s natural to define #zero as a method that takes a proc and some other argument, ignores the proc entirely (i.e., calls it zero times), and returns the second argument untouched:

def zero(proc, x)
  x
end

All of these implementations can be translated into methodless representations; for example, we can replace the method #one with a proc that takes two arguments[47] and then calls the first argument with the second one. They look like this:

ZERO  = -> p { -> x { x } }
ONE   = -> p { -> x { p[x] } }
TWO   = -> p { -> x { p[p[x]] } }
THREE = -> p { -> x { p[p[p[x]]] } }

This avoids functionality that we’re not allowed to use, and
instead gives names to procs by assigning
them to constants.

Note

This technique of representing data as pure code is named Church encoding after Alonzo Church, the inventor of the lambda calculus. The encoded numbers are Church numerals, and we’ll shortly be seeing examples of Church Booleans and Church pairs.

Now, although we’re eschewing Ruby’s features inside our FizzBuzz solution, it would be useful to translate these foreign representations of numbers into native Ruby values once they’re outside our code—so that they can be usefully inspected on the console or asserted against in tests, or at least so that we can convince ourselves that they really do represent numbers in the first place.

Fortunately, we can write a #to_integer method that performs this conversion:

def to_integer(proc)
  proc[-> n { n + 1 }][0]
end

This method takes a proc that represents a number and calls it with another proc (which just increments its argument) and the native Ruby number 0. If we call #to_integer with ZERO then, because of ZERO’s definition, the incrementing proc doesn’t get called at all and we get an untouched Ruby 0 back:

>> to_integer(ZERO)
=> 0

And if we call #to_integer with THREE, the incrementing proc gets called three times and we get a Ruby 3 back:

>> to_integer(THREE)
=> 3

So these proc-based representations really do encode numbers, and
we can convert them into a more practical representation whenever we
want to.
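Going in the opposite direction is straightforward too. As a convenience of our own (the book doesn’t define this; the name #to_church is hypothetical), we can build the proc-based representation of any nonnegative Ruby integer:

```ruby
# Hypothetical helper (ours, not from the book): encode a Ruby
# integer by nesting n calls to p around x.
def to_church(n)
  -> p { -> x { n.times.inject(x) { |result, _| p[result] } } }
end

# The book’s #to_integer converts back again:
def to_integer(proc)
  proc[-> n { n + 1 }][0]
end

to_integer(to_church(5))  # => 5
```

Round-tripping through both conversions is a handy sanity check that an encoding really does represent the number we intended.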

For FizzBuzz, we need the numbers five, fifteen, and one hundred, which can all be
implemented with the same technique:

FIVE    = -> p { -> x { p[p[p[p[p[x]]]]] } }
FIFTEEN = -> p { -> x { p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[x]]]]]]]]]]]]]]] } }
HUNDRED = -> p { -> x {
  p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[
  p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[
  x
  ]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
} }

These aren’t very compact definitions, but they work, as we can confirm with #to_integer:

>> to_integer(FIVE)
=> 5
>> to_integer(FIFTEEN)
=> 15
>> to_integer(HUNDRED)
=> 100

So, going back to the FizzBuzz program, all of the Ruby numbers
can be replaced with their proc-based implementations:

(ONE..HUNDRED).map do |n|
  if (n % FIFTEEN).zero?
    'FizzBuzz'
  elsif (n % THREE).zero?
    'Fizz'
  elsif (n % FIVE).zero?
    'Buzz'
  else
    n.to_s
  end
end