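For context, the tree type BinTree and the function build analysed below are assumed to be roughly as follows. This is a sketch reconstructed from the recursive case quoted later in this section; the definitions given earlier in the book may differ in detail.

data BinTree a = Leaf a | Fork (BinTree a) (BinTree a)

-- build a balanced tree from a non-empty list by repeated halving
build :: [a] -> BinTree a
build [x] = Leaf x
build xs  = Fork (build (take m xs)) (build (drop m xs))
            where m = length xs `div` 2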

Let’s time build:

T(build)(1) = Θ(1),
T(build)(n) = T(build)(m) + T(build)(n − m) + Θ(n), where m = n div 2

It takes Θ(n) steps to halve a list of length n, and then we recursively build two subtrees from lists of length m and n − m, respectively. The solution is T(build)(n) = Θ(n log n).

In words, building a tree by the above method takes longer than the length of the list by a logarithmic factor.

Having established this fact, let us define build2 by

build2 :: Int -> [a] -> (BinTree a,[a])

build2 n xs = (build (take n xs),drop n xs)

This builds a tree from the first n elements, but also returns the list that is left. We have

build xs = fst (build2 (length xs) xs)

so our original function can be determined from the tupled version.

Our aim now is to construct a direct recursive definition of build2. First of all, it is clear that
build2 1 xs = (Leaf (head xs),tail xs)

For the recursive case we start with

build2 n xs = (Fork (build (take m (take n xs)))
                    (build (drop m (take n xs))),
               drop n xs) where m = n `div` 2

This equation is obtained by substituting in the recursive case of build. It suggests that the next step is to use some properties of take and drop. Here they are: if m <= n then

take m . take n = take m

drop m . take n = take (n-m) . drop m

That leads to

build2 n xs = (Fork (build (take m xs))
                    (build (take (n-m) (drop m xs))),
               drop n xs) where m = n `div` 2

Using the definition of build2 we can rewrite the above as follows:

build2 n xs = (Fork u v, drop n xs)
  where (u,xs')  = build2 m xs
        (v,xs'') = build2 (n-m) xs'
        m        = n `div` 2

But as a final step, observe that

xs'' = drop (n-m) xs'
     = drop (n-m) (drop m xs)
     = drop n xs

Hence we can rewrite build2 once again to read

build2 1 xs = (Leaf (head xs),tail xs)
build2 n xs = (Fork u v, xs'')
  where (u,xs')  = build2 m xs
        (v,xs'') = build2 (n-m) xs'
        m        = n `div` 2
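As a quick check (using the constructors from the sketch above), build2 2 [1,2,3,4] yields (Fork (Leaf 1) (Leaf 2),[3,4]): the tree built from the first two elements, paired with the list that is left over.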

Timing this program yields

T(build2)(1) = Θ(1),
T(build2)(n) = T(build2)(m) + T(build2)(n − m) + Θ(1),

with solution T(build2)(n) = Θ(n). Using build2 as a subsidiary function has therefore improved the running time of build by a logarithmic factor.

7.7 Sorting

Sorting is a big topic and one can spend many happy hours tinkering with different algorithms. Knuth devotes about 400 pages to the subject in Volume 3 of his series The Art of Computer Programming. Even then some of his conclusions have to be reformulated when sorting is considered in a purely functional setting. Here we briefly consider two sorting algorithms, keeping an eye out for possible optimisations.

Mergesort

The sorting method called Mergesort made an appearance in Section 4.8:

sort :: (Ord a) => [a] -> [a]
sort []  = []
sort [x] = [x]
sort xs  = merge (sort ys) (sort zs)
           where (ys,zs) = halve xs

halve xs = (take m xs,drop m xs)
           where m = length xs `div` 2
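Here merge combines two sorted lists into one sorted list; it was defined back in Section 4.8. For reference, a standard definition along those lines (the version in Section 4.8 may differ in detail) is:

merge :: (Ord a) => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)   -- take the smaller head first
  | otherwise = y : merge (x:xs) ys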

In fact there are a number of variants of sorting by merging, and the library function sort in Data.List uses a different variant from the one above.

As we said above, the definition of halve looks fairly inefficient in that it involves multiple traversals of its argument. One way to improve matters is to make use of the standard prelude function splitAt, whose specification is

splitAt :: Int -> [a] -> ([a],[a])

splitAt n xs = (take n xs,drop n xs)

The prelude version of this function is the result of a tupling transformation:

splitAt 0 xs     = ([],xs)
splitAt n []     = ([],[])
splitAt n (x:xs) = (x:ys,zs)
                   where (ys,zs) = splitAt (n-1) xs

It is easy enough to calculate this definition using the two facts that

take n (x:xs) = x:take (n-1) xs

drop n (x:xs) = drop (n-1) xs

provided 0 < n. Now we have

halve xs = splitAt (length xs `div` 2) xs

There are still two traversals here of course.

Another way to improve sort is to define

sort2 n xs = (sort (take n xs),drop n xs)

We have sort xs = fst (sort2 (length xs) xs), so our original sorting function can be retrieved from the general one. A calculation almost exactly like the one in the previous section leads to

sort2 0 xs = ([],xs)
sort2 1 xs = ([head xs],tail xs)
sort2 n xs = (merge ys zs, xs'')
  where (ys,xs')  = sort2 m xs
        (zs,xs'') = sort2 (n-m) xs'
        m         = n `div` 2
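For example, sort2 3 [3,1,2,5] = ([1,2,3],[5]): the first three elements in sorted order, together with whatever is left of the list.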

With this definition there are, apart from the initial call of length in sort itself, no length calculations and no multiple traversals of xs.

Another way to optimise halve is to realise that no human would split up a list in this way if forced to do so by hand. If asked to divide a list into two, you and I would surely just deal out the elements into two piles:

halve []       = ([],[])
halve [x]      = ([x],[])
halve (x:y:xs) = (x:ys,y:zs)
                 where (ys,zs) = halve xs
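For example, halve [1,2,3,4,5] = ([1,3,5],[2,4]).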

Of course, this definition returns a different result than the previous one, but the order of the elements in the two lists does not matter if the result is to be sorted; what is important is that the elements are all there.

That is a total of three ways to improve the performance of sort. However, it turns out that none of them makes that much difference to the total running time. A few per cent perhaps, but nothing substantial. Furthermore, if we are using GHCi as our functional evaluator, none of the versions compares in performance to the library function sort because that function is given to us in a compiled form, and compiled versions of functions are usually about ten times faster. We can always compile our functions using GHC of course.

Quicksort

Our second sorting algorithm is a famous one called Quicksort. It can be expressed in just two lines of Haskell:

sort :: (Ord a) => [a] -> [a]
sort []     = []
sort (x:xs) = sort [y | y <- xs, y < x] ++ [x] ++
              sort [y | y <- xs, x <= y]

That’s very pretty and a testament to the expressive power of Haskell. But the prettiness comes at a cost: the program can be very inefficient in its use of space. The situation is the same as with the program for mean seen earlier.

Before plunging into ways the code can be optimised, let’s compute T(sort). Suppose we want to sort a list of length n+1. The first list comprehension can return a list of any length k from 0 to n. The length of the result of the second list comprehension is therefore n − k. Since our timing function is an estimate of the worst-case running time, we have to take the maximum of these possibilities:

T(sort)(n+1) = max [T(sort)(k) + T(sort)(n − k) | k ← [0..n]] + Θ(n).

The Θ(n) term accounts for both the time to evaluate the two list comprehensions and the time to perform the concatenations. Note, by the way, the use of a list comprehension in a mathematical expression rather than a Haskell one. If list comprehensions are useful notations in programming, they are useful in mathematics too.

Although not immediately obvious, the worst case occurs when k = 0 or k = n. Hence

T(sort)(0) = Θ(1),
T(sort)(n+1) = T(sort)(n) + Θ(n),

with solution T(sort)(n) = Θ(n²). Thus Quicksort is a quadratic algorithm in the worst case. This fact is intrinsic to the algorithm and has nothing to do with the Haskell expression of it. Quicksort achieved its fame for two other reasons, neither of which holds in a purely functional setting. Firstly, when Quicksort is implemented in terms of arrays rather than lists, the partitioning phase can be performed in place without using any additional space. Secondly, the average-case performance of Quicksort, under reasonable assumptions about the input, is Θ(n log n) with a smallish constant of proportionality. In a functional setting this constant is not so small and there are better ways to sort than Quicksort.

With this warning, let us now see what we can do to optimise the algorithm without changing it in any essential way (that is, without turning it into a completely different sorting algorithm). To avoid the two traversals of the list in the partitioning process, define
partition p xs = (filter p xs, filter (not . p) xs)
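For example, partition even [1..6] = ([2,4,6],[1,3,5]). (Data.List exports a function of the same name with this behaviour.)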

This is another example of tupling two definitions to save on a traversal. Since filter p can be expressed as an instance of foldr we can appeal to the tupling law of foldr to arrive at

partition p = foldr op ([],[])
  where op x (ys,zs) | p x       = (x:ys,zs)
                     | otherwise = (ys,x:zs)
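For reference, one standard way to express filter p as a foldr is

filter p = foldr add []
           where add x ys = if p x then x:ys else ys   -- keep x only if p x holds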

Now we can write

sort []     = []
sort (x:xs) = sort ys ++ [x] ++ sort zs
              where (ys,zs) = partition (<x) xs

But this program still contains a space leak. To see why, let us write the recursive case in the equivalent form

sort (x:xs) = sort (fst p) ++ [x] ++ sort (snd p)
              where p = partition (<x) xs

Suppose x:xs has length n+1 and is in strictly decreasing order, so x is the largest element in the list and p is a pair of lists of length n and 0, respectively. Evaluation of p is triggered by displaying the results of the first recursive call, but the n units of space occupied by the first component of p cannot be reclaimed because there is another reference to p in the second recursive call. Between these two calls further pairs of lists are generated and retained. All in all, the total space required to evaluate sort on a strictly decreasing list of length n+1 is Θ(n²) units. In practice this means that evaluation of sort on some large inputs can abort owing to lack of sufficient space.

The solution is to force evaluation of partition and, equally importantly, to bind ys and zs to the components of the pair, not to p itself.

One way of bringing about a happy outcome is to introduce two accumulating parameters. Define sortp by

sortp x xs us vs = sort (us ++ ys) ++ [x] ++
                   sort (vs ++ zs)
                   where (ys,zs) = partition (<x) xs

Then we have

sort (x:xs) = sortp x xs [] []

We now synthesise a direct recursive definition of sortp. The base case is
sortp x [] us vs = sort us ++ [x] ++ sort vs

For the recursive case y:xs let us assume that y < x. Then

sortp x (y:xs) us vs

=   {definition of sortp, with (ys,zs) = partition (<x) xs}

    sort (us ++ y:ys) ++ [x] ++ sort (vs ++ zs)

=   {claim (see below)}

    sort (y:us ++ ys) ++ [x] ++ sort (vs ++ zs)

=   {definition of sortp}

    sortp x xs (y:us) vs
