iWoz
by Steve Wozniak and Gina Smith

The next thing I heard was that it was going to be sold over the counter at MOS Technologies' booth at WESCON. The fact that this chip was so easy to get is how it ended up being the microprocessor for the Apple I.
And the best part: at $20 each, they cost half of what the Motorola chip would have cost me through the HP deal.
WESCON, on June 16-18, 1975, was being held in San Francisco's famous Cow Palace. A bunch of us drove up there and I waited in line in front of MOS Technologies' table, where a guy named Chuck Peddle was peddling the chips.
Right on the spot I bought a few for $20 each, plus a $5 manual.
Now I had all the parts I needed to start constructing the computer.

• o •

A couple of days later, at a regular meeting of the Homebrew Computer Club, a number of us excitedly showed the 6502 microprocessors we'd bought. More people in our club now had microprocessors than ever before.
I had no idea what the others were going to do with their 6502s, but I knew what I was going to do with mine.
To actually construct the computer, I gathered my parts together. I did this construction work in my cubicle at HP. On a typical day, I'd go home after work and eat a TV dinner or make spaghetti and then drive the five minutes back to work where I would sign in again and work late into the night. I liked to work on this project at HP, I guess because it was an engineering kind of environment. And when it came time to test or solder, all the equipment was there.
First I looked at my design on draft paper and decided exactly where I would put which chips on a flat board so that the wires between chips would be short and neat-looking. In other words, I organized and grouped the parts as they would sit on the board.
The majority of my chips were from my video terminal—the terminal I'd already built to access the ARPANET. In addition, I had the microprocessor, a socket to put another board with random-access memory (RAM) chips on it, and two peripheral interface adapter chips for connecting the 6502 to my terminal.
I used sockets for all my chips because I was nuts about sockets. This traced back to my job at Electroglas, where the soldered chips that went bad weren't easily replaced. I wanted to be able to easily remove bad chips and replace them.
I also had two more sockets that could hold a couple of PROM chips. These programmable read-only memory chips could hold data like a small program and not lose the data when the power was off.
Two of these PROM chips, the kind available to me in the lab, could hold 256 bytes of data—enough for a very tiny program. (Today, many programs are a million times larger than that.) To give you an idea of how small that amount of memory is, a word processor today needs that much for a single sentence.
I decided that these chips would hold my monitor program, the little program I came up with so that my computer could use a keyboard instead of a front panel.
What Was the ARPANET?
Short for the Advanced Research Projects Agency Network, and developed by the U.S. Department of Defense, the ARPANET was the first operational packet-switching network that could link computers all over the world. It later evolved into what everyone now knows as the global Internet. The ARPANET and the Internet are based on a type of data communication called "packet switching." A computer can break a piece of information down into packets, which can be sent over different wires independently and then reassembled at the other end. Previously, circuit switching was the dominant method—think of the old telephone systems of the early twentieth century. Every call was assigned a real circuit, and that same circuit was tied up for the length of the call.
The fact that the ARPANET used packet switching instead of circuit switching was a phenomenal advance that made the Internet possible.
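
To see the idea in miniature, here is a small Python sketch (a modern illustration, obviously not anything from the ARPANET itself): a message is cut into numbered packets, the packets arrive in any order, and the receiver reassembles them by sequence number.

    # Packet switching in miniature: split a message into numbered
    # packets, let them arrive in any order, reassemble by sequence.
    # Real networks add headers, routing, and checksums on top of this.
    import random

    def to_packets(message, size=4):
        # Each packet carries a sequence number so order can be restored.
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    def reassemble(packets):
        # Sort by sequence number, then join the payloads back together.
        return "".join(payload for _, payload in sorted(packets))

    packets = to_packets("packets can travel independently")
    random.shuffle(packets)  # simulate independent, out-of-order arrival
    assert reassemble(packets) == "packets can travel independently"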

• o •

Wiring this computer—actually soldering everything together—took one night. Over the next few nights, I had to write the little 256-byte monitor program with pen and paper. I was good at making programs small, but this was a challenge even for me.
This was the first program I ever wrote for the 6502 microprocessor. I wrote it out on paper, which wasn't the normal way even then. The normal way to write a program at the time was to pay for computer usage: you rented time on a time-share terminal, and that terminal was connected to a big, expensive computer somewhere else. That computer would print out a version of your program in 1s and 0s that your microprocessor could understand.
This 1 and 0 program could be entered into RAM or a PROM and run as a program. The hitch was that I couldn't afford to pay for computer time. Luckily, the 6502 manual I had described what 1s and 0s were generated for each instruction, each step of a program. MOS Technologies even provided a pocket-size card you could carry that included all the 1s and 0s for each of the many instructions you needed.
So I wrote my program on the left side of the page in assembly language. As an example, I might write down "LDA #44," which means to load data corresponding to 44 (in hexadecimal) into the microprocessor's A register.
On the right side of the page, I would write that instruction in hexadecimal using my card. For example, that instruction would translate into A9 44. The instruction A9 44 stood for 2 bytes of data, which equated to 1s and 0s the computer could understand: 10101001 01000100.
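
To make that hand-assembly step concrete, here is a Python sketch of the translation (my illustration; the opcode table holds just three example entries, though the $A9 encoding for immediate LDA matches the real 6502):

    # Hand-assembly sketch: translate a 6502 mnemonic into the hex and
    # binary bytes one would copy from MOS's pocket reference card.
    OPCODES = {
        "LDA #": 0xA9,  # load accumulator, immediate operand
        "STA":   0x8D,  # store accumulator, absolute address
        "JMP":   0x4C,  # jump, absolute address
    }

    def assemble_lda_immediate(operand_hex):
        # "LDA #44" becomes the two bytes A9 44.
        return [OPCODES["LDA #"], int(operand_hex, 16)]

    code = assemble_lda_immediate("44")
    print(" ".join(f"{b:02X}" for b in code))  # A9 44
    print(" ".join(f"{b:08b}" for b in code))  # 10101001 01000100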
Writing the program this way took about two or three pieces of paper, using every single line.
I was barely able to squeeze what I needed into that tiny 256-byte space, but I did it. I wrote two versions of it: one that let the press of a key interrupt whatever program was running, and the other that only let a program check whether the key was being struck. The second method is called "polling."
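
The polling approach can be sketched in a few lines of Python (a stand-in, of course; the real monitor was 6502 code, and key_pressed and read_key here are hypothetical stubs for the peripheral interface adapter's status and data registers):

    # Polling sketch: the monitor repeatedly asks whether a key has
    # been struck, rather than letting the hardware interrupt it.
    def key_pressed():
        return False  # stub for reading the PIA's keyboard-ready flag

    def read_key():
        return "?"    # stub for reading the PIA's data register

    def monitor_loop(steps=1000):
        for _ in range(steps):
            if key_pressed():  # poll: the program asks; nothing interrupts it
                print("got key:", read_key())
            # ...the rest of the monitor's work happens between checks...

    monitor_loop()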
During the day, I took my two monitor programs and some PROM chips over to another HP building where they had the equipment to permanently burn the 1s and 0s of both programs into the chips.
But I still couldn't complete—or even test—these chips without memory. I mean computer memory, of course. Computers can't run without memory, the place where they do all their calculations and record-keeping.
The most common type of computer memory at the time was called "static RAM" (SRAM). My Cream Soda Computer, the Altair, and every other computer then used that kind of memory. I borrowed thirty-two SRAM chips—each one could hold 1,024 bits—from Myron Tuttle. Altogether that was 4K bytes, which was 16 times more than the 256 bytes the Altair came with.
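
The arithmetic checks out (a trivial verification, spelled out in Python):

    # 32 SRAM chips x 1,024 bits each = 32,768 bits = 4,096 bytes (4K),
    # which is 16 times the Altair's stock 256 bytes.
    chips, bits_per_chip = 32, 1024
    total_bytes = chips * bits_per_chip // 8
    print(total_bytes, total_bytes // 256)  # 4096 16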
I wired up a separate SRAM board with these chips inside their sockets and plugged it into the connector in my board.
With all the chips in place, I was ready to see if my computer worked.

• o •

The first step was to apply power. Using the power supplies near my cubicle, I hooked up the power and analyzed signals with an oscilloscope. For about an hour I identified problems that were obviously keeping the microprocessor from working. At one point I had two pins of the microprocessor accidentally shorting each other, rendering both signals useless. At another point one pin bent while I was placing it in its socket.
But I kept going. You see, whenever I solve a problem on an electronic device I'm building, it's like the biggest high ever. And that's what drives me to keep doing it, even though you get frustrated, angry, depressed, and tired doing the same things over and over. Because at some point comes the Eureka moment. You solve it.
And finally I got it, that Eureka moment. My microprocessor was running, and I was well on my way.
But there were still other things to fix. I was able to debug—that is, find errors and correct them—the terminal portion of the computer quickly because I'd already had a lot of experience with my terminal design. I could tell the terminal was working when it put a single cursor on the little 9-inch black-and-white TV I had at HP.
The next step was to debug the 256-byte monitor program on the PROMs. I spent a couple of hours trying to get the interrupt version of it working, but I kept failing. I couldn't just write a new program into the PROMs; to do that, I'd have to go to that other building again to burn the program into the chips. I studied the chip's data sheets to see what I did wrong, but to this day I never found it. As any engineer out there reading this knows, interrupts are like that. They're great when they work, but hard to get to work.
Finally I gave up and just popped in the other two PROMs, the ones with the "polling" version of the monitor program. I typed a few keys on the keyboard and I was shocked! The letters were displayed on the screen!
It is so hard to describe this feeling—when you get something working on the first try. It's like getting a putt from forty feet away.
It was still only around 10 p.m.—I checked my watch. For the next couple of hours I practiced typing data into memory, displaying data on-screen to make sure it was really there, even typing in some very short programs in hexadecimal and running them, things like printing random characters on the screen. Simple programs.
I didn't realize it at the time, but that day, Sunday, June 29, 1975, was pivotal. It was the first time in history anyone had typed a character on a keyboard and seen it show up on their own computer's screen right in front of them.

Chapter 10
The Apple I

I was never the kind of person who had the courage to raise his hand during the Homebrew main meeting and say, "Hey, look at this great computer advance I've made." No, I could never have said that in front of a whole garageful of people.
But after the main meeting every other Wednesday, I would set up my stuff on a table and answer questions people asked. Anyone who wanted to was welcome to do this.
I showed the computer that later became known as the Apple I at every meeting after I got it working. I never planned out what I would say beforehand. I just started the demo and let people ask the questions I knew they would, the questions I wanted to answer.
I was so proud of my design—and I so believed in the club's mission to further computing—that I Xeroxed maybe a hundred copies of my complete design (including the monitor program) and gave it to anyone who wanted it. I hoped they'd be able to build their own computers from my design.
I wanted people to see this great design of mine in person. Here was a computer with thirty chips on it. That was shocking to people, having so few chips. It had about the same number of chips as an Altair, except the Altair couldn't do anything unless you bought a lot of other expensive equipment for it. My computer was inexpensive from the get-go. And the fact that you could use your home TV with it, instead of paying thousands for an expensive teletype, put it in a world of its own.
And I wasn't going to be satisfied just typing Is and Os into it. My goal since high school was to have my own computer that I could program on, although I always assumed the language on the computer would be FORTRAN.
The computer I built didn't have a language yet. Back then, in 1975, a young guy named Bill Gates was starting to get a little bit of fame in our circles for writing a BASIC interpreter for the Altair. Our club had a copy of it on paper tape, which could be read in with a teletype, taking about thirty minutes to complete. Also, at around the same time, a book called 101 BASIC Computer Games came out. I could sniff the air.
That's why I decided BASIC would be the right language to write for the Apple I and its 6502 microprocessor. And I found out none existed for the 6502. That meant that if I wrote a BASIC for it, mine could be the first. And I might even get famous for it. People would say: Oh, Steve Wozniak, he did the BASIC for the 6502.
Anyway, people who saw my computer could take one look at it and see the future. And it was a one-way door. Once you went through it, you could never go back.

• o •

The first time I showed my design, it was with static RAM (SRAM)—the kind of memory that was in my Cream Soda Computer. But the electronics magazines I was reading were talking about a new memory chip, called "dynamic RAM" (DRAM), which would have 4K bits per chip.
The magazines were heralding this as the first time silicon chip memory would be less expensive than magnetic core memory. Up to this point, all the major computers, like the systems from IBM and Data General, still used core memory.
I realized that 4K bytes of DRAM—what I needed as a minimum—would only take eight chips, instead of the thirty-two SRAM chips I had to borrow from Myron. My goal since high school had always been to use as few chips as possible, so this was the way to go.
The biggest difference between SRAM and DRAM is that DRAM has to be refreshed continually or it loses its contents. That means the microprocessor has to electrically refresh roughly 128 different addresses of the DRAM every one two-thousandth of a second to keep it from forgetting its data.
I added DRAM by writing data to the screen—I held the microprocessor clock signal steady, holding transitions off, during a period called the "horizontal refresh."
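
The timing requirement can be modeled in a few lines of Python (a sketch using the figures above, not the actual circuit, which as described did its refreshing during the horizontal refresh period):

    # DRAM refresh model: all 128 row addresses must be strobed within
    # each one-two-thousandth-of-a-second window, or the cells leak
    # away their bits.
    ROWS = 128
    WINDOW_S = 1 / 2000  # refresh window from the text

    def refresh_all_rows(strobe_row):
        for row in range(ROWS):
            strobe_row(row)  # a row strobe cycle on real hardware

    print(f"{WINDOW_S / ROWS * 1e6:.1f} microseconds per row, on average")  # ~3.9
    refresh_all_rows(lambda row: None)  # stand-in for the hardware strobe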
