Stack Computers: the new wave © Copyright 1989, Philip Koopman, All Rights Reserved.


Foreword

During the past 8 years of chairing the Rochester Forth Conference, I have had a chance to focus on a variety of computer applications and implementations. Although the 1985 Conference covered Software Productivity, the greater interest was in real-time AI and the Novix Forth chip. The following conference was on Real Time Artificial Intelligence, which seemed to offer up almost intractable computing problems in everything from speech recognition to Strategic Defense. My invited speakers covered this ground well, though I noted again that an underlying theme of the Conference was Forth Machines. In particular, I listened to papers by Glen Haydon and Phil Koopman on their WISC CPU/16 (then called the MVP CPU/16 processor).

These Forth Machines offered performance gains of an order of magnitude or more over conventional computers, as well as important tradeoffs in reduced resource usage; resources including the commonly observed, but rarely conserved, transistor counts. But the CPU/16 processor also offered another gain: an integrated learning environment from computer architecture through programming languages, and operating systems through real applications. In two or three semesters, a student could reasonably expect to build a computer, use microcode, develop a high level language, add an operating system, and do something with it. Immediately, I saw an answer to the continuing fragmentation of computer science and electrical engineering curricula, and a practical way of rejoining hardware and software.

The following year I asked Phil to be an invited speaker at my Conference on Comparative Computer Architectures. By then his ideas on writable instruction set stack computers were full-fledged, not just in theory, but in fact. During successive nights of the Conference, his 32-bit MSI-based processor amazed a growing group of followers as it cranked out fractal landscapes and performed Conway's Game of Life via an expert system.

After the Conference I knew Phil was beginning intensive studies at Carnegie Mellon University, and starting what would become this book. What I didn't know was that he was also reducing his processor to a chip set. He began hinting at great things to come during the Spring of 1988, and he presented the operational Harris RTX 32P that June. The speed with which the WISC CPU/32 was reduced to the RTX 32P speaks well of Phil's capabilities, the soundness of his architecture, and the support Harris Semiconductor has put behind this technology. Now I can't wait to hear in detail what he's been hinting at doing with the processor, and to get my hands on one too!

As for this book, it presents another view of the RISC versus CISC controversy, and if only for its commentary on that debate it would be worthwhile. Yet, it does considerably more. It provides key insights into how stack machines work, and what their strengths and weaknesses are. It presents a taxonomy of existing serial processors and shows that for over 25 years the stack architecture has been subtly influencing both hardware and software, but that major computational gains have begun in only the past few years. Although stack processors are unlikely to dominate the much-publicized engineering workstation market, they may very well fill enormously larger niches in everything from consumer electronics to high-performance military avionics. After you read this book, find yourself a stack machine and take it for a spin.

Lawrence P. Forsley
Publisher, The Journal of Forth Application and Research
Rochester, New York, 1989


