Beating the Averages
http://www.paulgraham.com/avg.html
I must admit that when I first started reading essays by Paul Graham,
I thought that he was pretty obnoxious and that what he had to say was
of little merit. For example, in Java's Cover,
he writes:
So, just in case it does any good, let me clarify that I'm not writing here about Java
(which I have never used) but about hacker's radar (which I have thought about a lot).
And, after reading more of his essays, I still think that he is pretty obnoxious,
but I do believe that there is quite a bit of merit to what he has to say.
As you will soon discover, Paul is pretty hooked on Lisp and isn't too keen on Java.
Apparently, he and Robert Morris made a lot of money with their startup (Viaweb)
and Paul attributes much of their success to deciding to implement everything in Lisp.
Now, Paul and Robert are pretty smart guys, so I often wonder how much of their success can actually
be attributed to Lisp (would they not have been successful had
they written everything in Java?).
As I alluded to above, it has taken me some time to warm up to Paul Graham and listen
to what he has to say. For example, he seems to consider a programming language without
macros some form of blasphemy. I was never quite sure why he felt so strongly
about this point, but then I realized how helpful macros would have been on Problem Set 2.
On that problem set, you had to do the following every time you wanted to test whether an exception
was thrown:
boolean threwEx = false;
try {
    doBadThing();
} catch (MyException ex) {
    threwEx = true;
}
assertTrue(threwEx);
Due to the nature of try/catch blocks, you had to copy and paste this template
all over your code. Lisp macros would eliminate that duplication: you could write a macro
that takes a sequence of statements and an exception to catch, and have it expand into
the equivalent of the Java code above (a sketch of such a macro appears below).
As I think everyone has seen the pitfalls
associated with copy-and-paste code, I feel no need to belabor this point.
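To make this concrete, here is a rough sketch of what such a macro could look like. I've written it in R7RS-style Scheme using syntax-rules and guard (not the exception machinery you saw in 6.001), and assert-throws, expected?, and do-bad-thing are names I invented for illustration; in Scheme it is natural to identify the exception with a predicate rather than a class:
(define-syntax assert-throws
  (syntax-rules ()
    ((_ expected? body ...)
     (let ((threw?
            (guard (ex ((expected? ex) #t))  ; the expected exception => #t
              body ...                       ; run the statements under test
              #f)))                          ; fell off the end => no exception
       (if (not threw?)
           (error "expected exception was not thrown"))))))

;; Each test site shrinks from the seven-line try/catch template to one form:
;; (assert-throws my-exception? (do-bad-thing))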
Another interesting thing that I discovered about Java recently is that it introduces some
problems that do not exist in Lisp. For example, consider computing Fibonacci numbers
in both languages.
In Java:
public static int fib(int n) {
    return fibIter(1, 0, n);
}

public static int fibIter(int a, int b, int count) {
    if (count == 0) return b;
    return fibIter(a + b, a, count - 1);
}
In Lisp:
(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
Now, I'm not going to get into any of the stupid character- or line-counting debates that you
see on Slashdot. The item of interest is that the
Java code has two snafus that do not exist in the Lisp code:
- Integer Overflow - once an int exceeds 2^31 - 1 in Java,
it wraps around and becomes a negative number. A value of n as small as 47 is enough
to make fib overflow, so this is a real problem.
(Though be aware that the BigInteger
class exists in Java to help you deal with problems such as these.)
- Stack Overflow - even if you sidestep the integer overflow (say, by using BigInteger),
the Java version of fib can still run out of stack space, because each recursive call to
fibIter consumes a new stack frame. In Scheme (the variant of Lisp that you learned in 6.001),
this would not be a problem because Scheme implementations must be properly tail-recursive,
so the stack space used stays constant no matter how many tail-recursive
calls fib-iter makes. (See footnote 31 on page 36 of the second edition of the
6.001 book for more information.) A quick check of the Scheme version appears after this list.
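For contrast, here is roughly what the Scheme fib from above does when pushed past those limits. This is a sketch assuming a typical implementation such as MIT Scheme, where integers are exact and arbitrarily large:
(fib 50)    ; => 12586269025, already far beyond Java's int range
(fib 1000)  ; => an exact integer more than 200 digits long, computed in
            ;    constant stack space because fib-iter only makes tail calls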
In this case, Java seems to create more problems than it solves!
Of course, a Real Programmer would just implement this as:
fib(n) = (((1 + sqrt(5)) / 2) ^ n - ((1 - sqrt(5)) / 2) ^ n) / sqrt(5)
thereby ignoring the whole recursion problem altogether :)
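For what it's worth, the closed form is only a couple of lines of Scheme. This is just a sketch (fib-closed is a name I made up), and since sqrt and expt produce inexact floating-point results here, the rounding is only trustworthy for modest values of n:
(define (fib-closed n)
  (let ((phi (/ (+ 1 (sqrt 5)) 2))   ; the golden ratio
        (psi (/ (- 1 (sqrt 5)) 2)))
    (round (/ (- (expt phi n) (expt psi n)) (sqrt 5)))))

;; (fib-closed 10) => 55. (inexact, because sqrt returns a flonum)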