Stack Computers: the new wave © Copyright 1989, Philip Koopman, All Rights Reserved.
In the preceding chapters, we have covered both an abstract description of a stack machine, and several examples of real stack machines that have been built. What we shall examine now is why they are designed the way they are, and why stack machines have certain inherent advantages over more conventional designs.
Three different approaches to computer design are used as reference points for this chapter. The first reference point is that of the Complex Instruction Set Computer (CISC), which is typified by Digital Equipment Corporation's VAX series and any of the microprocessors used in personal computers (e.g. 680x0, 80x86). The second reference point is the Reduced Instruction Set Computer (RISC) (Patterson 1985) as typified by the Berkeley RISC project (Sequin & Patterson 1982) and the Stanford MIPS project (Hennessy 1984). The third reference point is that of stack machines as described in the preceding chapters.
Section 6.1 discusses some of the history of the debates that have taken place over the years among advocates of register-based machines, stack-based machines, and storage-to-storage based machines. A related topic is the more recent debates between proponents of high level language CISC architectures and RISC architectures.
Section 6.2 discusses the advantages of stack machines. Stack machines have smaller program sizes, lower hardware complexity, higher system performance, and better execution consistency than other processors in many application areas.
Section 6.3 presents the results of a study of instruction frequencies in Forth programs. Not surprisingly, subroutine calls and returns constitute a significant percentage of the instruction mix for Forth programs.
Section 6.4 examines the issue of stack management by using the results of a stack access simulation. The results indicate that fewer than 32 stack elements are needed for many application programs. This section also discusses four different methods of handling stack overflows: very large stacks, a demand-fed stack manager, a paging stack manager, and an associative cache memory.
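To make the overflow-handling alternatives concrete, the following is a minimal sketch in C of one of them: a demand-fed stack manager, which spills a single element to memory when the on-chip buffer overflows and refills a single element when it underflows. The structure names, buffer size, and helper functions are illustrative assumptions, not taken from the designs discussed in the book.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical demand-fed stack manager: a small on-chip buffer
 * backed by memory.  BUF_SIZE is kept tiny here to force spills;
 * a real design would use a buffer of 16-32 elements (see the
 * simulation results in Section 6.4). */
#define BUF_SIZE 4
#define MEM_SIZE 1024

typedef struct {
    int    buf[BUF_SIZE];   /* fast on-chip stack buffer          */
    int    mem[MEM_SIZE];   /* slower backing store in memory     */
    size_t buf_top;         /* elements currently in the buffer   */
    size_t mem_top;         /* elements spilled to memory         */
} stack_t;

static void push(stack_t *s, int v)
{
    if (s->buf_top == BUF_SIZE) {          /* overflow: spill the  */
        s->mem[s->mem_top++] = s->buf[0];  /* oldest element, then */
        for (size_t i = 1; i < BUF_SIZE; i++)  /* slide the rest   */
            s->buf[i - 1] = s->buf[i];         /* down one slot    */
        s->buf_top--;
    }
    s->buf[s->buf_top++] = v;
}

static int pop(stack_t *s)
{
    if (s->buf_top == 0)                   /* underflow: refill one */
        s->buf[s->buf_top++] = s->mem[--s->mem_top];
    return s->buf[--s->buf_top];
}
```

The demand-fed scheme moves exactly one element per overflow or underflow, trading worst-case memory traffic for simple control logic; the paging approach described in the same section would instead move a block of elements at once to amortize the cost.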
Section 6.5 examines the cost of interrupts and multitasking on a stack-based machine. A simulation shows that context switching of the stack buffers is a minor cost in most environments. Furthermore, the cost of context switching with stack buffers may be further reduced by appropriately programmed interrupts, using lightweight tasks, and by partitioning the stack buffer into multiple small buffer areas.