Virtual Machine Design
and Implementation
in C/C++

Bill Blunden

Wordware Publishing, Inc.


Library of Congress Cataloging-in-Publication Data

Blunden, Bill
Virtual machine design and implementation in C/C++ / by Bill Blunden.
p. cm.
Includes bibliographical references and index.
ISBN 1-55622-903-8 (pbk.)
1. Virtual computer systems. 2. C++ (Computer program language). I. Title.

QA76.9.V5 B59 2002


005.4’3--dc21 2002016755
CIP

© 2002, Wordware Publishing, Inc.
All Rights Reserved

2320 Los Rios Boulevard
Plano, Texas 75074

No part of this book may be reproduced in any form or by any means
without permission in writing from Wordware Publishing, Inc.

Printed in the United States of America

ISBN 1-55622-903-8

10 9 8 7 6 5 4 3 2 1
0202

Product names mentioned are used for identification purposes only and may be trademarks of their respective companies.

All inquiries for volume purchases of this book should be addressed to Wordware Publishing, Inc., at the above
address. Telephone inquiries may be made by calling:
(972) 423-0090
To my parents, who bought me a computer
in the sixth grade and encouraged me to study hard

To my nephew Theo, the life of the party

To Danny Solow and Art Obrock, who
told me the truth about life as a mathematician

To Art Bell, whose talk radio show helped
keep me awake while I wrote this book’s final manuscript
About the Author
Bill Blunden has been obsessed with systems software since his first exposure to the DOS
debug utility in 1983. His single-minded pursuit to discover what actually goes on under
the hood has led him to program the 8259 interrupt controller and become an honorable
member of the triple-fault club. After obtaining a BA in mathematical physics and an MS
in operations research, Bill was unleashed upon the workplace. It was at an insurance com-
pany in the beautiful city of Cleveland, plying his skills as an actuary, that Bill first got into
a fistfight with a cranky IBM mainframe. The scuffle was over a misguided COBOL pro-
gram. Bloody but not beaten, Bill decided that grokking software beat crunching numbers.
This led Bill to a major ERP player in the Midwest, where he developed CASE tools in
Java, performed technology research, and was assailed by various Control Data veterans.
Having a quad-processor machine with 2 GB of RAM at his disposal, Bill was hard
pressed to find any sort of reason to abandon his ivory tower. There were days when Bill
used to disable paging and run on pure SDRAM. Nevertheless, the birth of his nephew
forced him to make a pilgrimage out West to Silicon Valley. Currently on the peninsula,
Bill survives rolling power blackouts and earthquakes, and is slowly recovering from his
initial bout with COBOL.

Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Part I — Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 1 History and Goals . . . . . . . . . . . . . . . . . . . . . 3
Setting the Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
The Need for a Virtual Machine . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Depreciating Assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
A Delicate Balance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Arguments Against a Virtual Machine. . . . . . . . . . . . . . . . . . . . . . . 15
Looking Ahead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Lessons Learned. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Chapter 2 Basic Execution Environment . . . . . . . . . . . . . . 21
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Notation Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Run-time Systems and Virtual Machines . . . . . . . . . . . . . . . . . . . . . 22
Memory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Machine-Level Management . . . . . . . . . . . . . . . . . . . . . . . . 25
Operating System Level . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Application Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Dynamic Memory Management . . . . . . . . . . . . . . . . . . . . . . . . . 35
HEC Memory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Machine Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
HEC Machine Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Task Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
HEC Task Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Input/Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
HEC I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Part II — The HEC Virtual Machine . . . . . . . . . . . . . . . . . . . . 61
Chapter 3 Virtual Machine Implementation . . . . . . . . . . . . 63
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Global Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
common.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
win32.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
iset.c. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75


exenv.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
error.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Command-Line Invocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Debugging Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Handling Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . 91
Setting Up the Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Verification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Instruction Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
load.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
store.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
pushpop.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
move.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
jump.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
bitwise.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
shift.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
intmath.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
fltmath.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
dblmath.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
interupt.c. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
intwin32.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Chapter 4 The HEC Debugger . . . . . . . . . . . . . . . . . . . 149
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Debugging Techniques. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Breakpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Single-Step Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Debugging Techniques on Intel . . . . . . . . . . . . . . . . . . . . . . . . . 151
Intel Interrupts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Real-Mode Addressing . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Real-Mode Registers . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Real-Mode Interrupt Handling . . . . . . . . . . . . . . . . . . . . . . 154
Dosdbg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Monkey Business . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Stack Smashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Multithreaded Mayhem . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Self-Modifying Programs . . . . . . . . . . . . . . . . . . . . . . . . . 167
Mixed Memory Models . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Assorted Fun and Games . . . . . . . . . . . . . . . . . . . . . . . . . 170
HEC File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Header Section. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Symbol Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
String Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Bytecode Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Modes of Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Implementation Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Command-Line Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Storing Debug Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176


The Problem with Structures . . . . . . . . . . . . . . . . . . . . . . . 186


Processing Debug Commands . . . . . . . . . . . . . . . . . . . . . . . . . . 188
? - Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Q - Quit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
F - Executable Information . . . . . . . . . . . . . . . . . . . . . . . . 190
D Start Stop - Dump Memory . . . . . . . . . . . . . . . . . . . . . . . 191
S Start Stop String - Search for String . . . . . . . . . . . . . . . . . . . 193
L String - Symbol Lookup . . . . . . . . . . . . . . . . . . . . . . . . . 195
P - Procedure Display . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
RX - Register Display (Ri, Rf, Rd) . . . . . . . . . . . . . . . . . . . . . 200
T - Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Future Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Faster Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
O(n) Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Dynamic Patching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Dynamic Breakpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Session Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Program Frequency Counts . . . . . . . . . . . . . . . . . . . . . . . . 216
Symbolic Debugger . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Taking HEC for a Test Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Part III — HEC Assembly . . . . . . . . . . . . . . . . . . . . . . . . . 225
Chapter 5 Assembler Implementation. . . . . . . . . . . . . . . 227
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Data Structure Briefing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
HASM Assembler Algorithms . . . . . . . . . . . . . . . . . . . . . . 229
Abstract Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Vector ADT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Extendable Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Tree ADT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Binary Search Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Dictionary ADT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Hash Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Command-Line Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Global Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Handling Configuration Options. . . . . . . . . . . . . . . . . . . . . . . . . 262
Pass 1 – Populate the Symbol Table . . . . . . . . . . . . . . . . . . . . . . . 269
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
LineScanner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
LineTokenizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Pass1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
StringTable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
SymbolTable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
HashTable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Pass 2 – Generate Bytecode and Listings . . . . . . . . . . . . . . . . . . . . . 334


Building the Compilation Unit. . . . . . . . . . . . . . . . . . . . . . . . . . 376


Reading a Listing File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
Taking HASM for a Test Drive . . . . . . . . . . . . . . . . . . . . . . . . . . 384
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Chapter 6 Virtual Machine Interrupts . . . . . . . . . . . . . . . 397
Overview and Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
INT 0 – File Input/Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
INT 1 – File Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
INT 2 – Process Management . . . . . . . . . . . . . . . . . . . . . . . . . . 433
INT 3 – Breakpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
INT 4 – Time and Date Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
INT 5 – Handling Command-Line Arguments . . . . . . . . . . . . . . . . . . 450
INT 6 – Memory Diagnostics. . . . . . . . . . . . . . . . . . . . . . . . . . . 453
INT 7 – Dynamic Memory Allocation . . . . . . . . . . . . . . . . . . . . . . 456
INT 8 – Mathematical Functions . . . . . . . . . . . . . . . . . . . . . . . . . 467
INT 9 – Interfacing with Native Code . . . . . . . . . . . . . . . . . . . . . . 473
INT 10 – Interprocess Communication (IPC) . . . . . . . . . . . . . . . . . . 484
IPC Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
TCP/IP Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
TCP/IP Addressing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Chapter 7 HEC Assembly Language . . . . . . . . . . . . . . . . 513
Constituents of an Assembly Language Program. . . . . . . . . . . . . . . . . 513
Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
Directives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
Defining Procedures and Labels . . . . . . . . . . . . . . . . . . . . . . . . . 517
Loading and Moving Immediate Data . . . . . . . . . . . . . . . . . . . . . . 519
Direct Memory Addressing Mode . . . . . . . . . . . . . . . . . . . . . . . . 521
Loading and Storing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
Bitwise Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
Data Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
Program Flow Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Jumping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
Manipulating the Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
Indirect Memory Addressing Mode . . . . . . . . . . . . . . . . . . . . . . . 548
Defining Global Variable Storage . . . . . . . . . . . . . . . . . . . . . . . . 550
Constructing Activation Records . . . . . . . . . . . . . . . . . . . . . . . . 553
Data Type Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
Instruction and Directive Summary . . . . . . . . . . . . . . . . . . . . . . . 574
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577


Chapter 8 Advanced Topics . . . . . . . . . . . . . . . . . . . . 579


Targeting HEC: Compiler Design . . . . . . . . . . . . . . . . . . . . . . . . 579
Managing Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Supporting Object-Oriented Features . . . . . . . . . . . . . . . . . . . . . . 586
The Basic Tenets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
Encapsulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
Inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
Polymorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
Exceptions in Java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
Implementing Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . 615
Example Implementation . . . . . . . . . . . . . . . . . . . . . . . . . 617
Abusing Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
Porting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
Observations on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . 626
linux.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
intlinux.c. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
Building HEC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
Rolling Your Own . . . Run-Time System . . . . . . . . . . . . . . . . . . . . . 649
Creating and Following Trends . . . . . . . . . . . . . . . . . . . . . . 649
Project Management — Critical Paths . . . . . . . . . . . . . . . . . . . 650
Run-Time System Critical Path . . . . . . . . . . . . . . . . . . . . . . 651
Operating System Critical Path . . . . . . . . . . . . . . . . . . . . . . 652
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Compiler Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Crypto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
Java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
numfmt Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
filedmp Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663

Acknowledgments
Several people have contributed either directly or indirectly to the writing of this book. I
would like to begin by thanking Jim Hill for giving me the opportunity to write this book. I
have had several bad experiences with acquisition editors in the past who I felt were less
than honest. Jim has been straightforward and direct with me throughout the entire process
and I truly appreciate it.
I would also like to thank Barry Brey, who agreed to be my technical reviewer, and as
such has read through every single line of my book. I started e-mailing Barry several years
ago with all sorts of obscure questions, and he has always done his best to answer them.
Barry has written a multitude of books on the Intel 80x86 platform. He has been docu-
menting and explaining the x86 processor since it arrived on the hardware scene. Because
of his depth of knowledge, I knew that Barry’s advice would be invaluable.
Finally, I would like to thank Paula Price and Wes Beckwith for doing a lot of grunt
work and helping to coordinate everything so that the whole process ran smoothly. Thanks
a lot, guys.

Introduction
“Eat flaming death, minicomputer mongrels!”
—IPM Thug

As a former physicist, I am painfully aware that doing any sort of ground-breaking experi-
ment in an established field like particle physics typically requires the backing of a federal
budget. Fortunately, this is not the case with computer science. For a couple hundred dol-
lars, you can purchase a refurbished PC. For $50 more, you can buy a set of development
tools and put yourself in business. You can actually do serious work on a cheap machine
using readily available tools. This is exactly how I made the switch from mathematical
physics to computer science in late 1994, when I salvaged an orphaned 80386 from a
dumpster outside my apartment building. I had been in the process of filling out applica-
tions for various graduate programs in physics. After replacing its hard drive, I was able to
revive the worn machine. Right then and there I decided to tear up my applications and
study computer science. I consider the $200 that I spent for a hard drive to be one of the
best investments I ever made.
In essence, computer science is still a very accessible field of study. It’s extremely
easy to sit down and play with different ideas and approaches. The personal computer is a
superb laboratory. Naturally, I could not resist the urge to do a little empirical tinkering
myself, particularly after devoting seven years of my life to physics, where state-of-the-art
experimental equipment costs millions of dollars. This book is basically the published ver-
sion of the journal that I have maintained for the past two years. The material included
within these pages covers the design and implementation of the HEC run-time system.
The name HEC is borrowed from the “CPU Wars” comic strip, which depicts the
struggle of employees at the Human Engineered Computer company (HEC) against a hos-
tile takeover by the totalitarian thugs from Impossible to Purchase Machines (IPM).
During the Reagan years of the 1980s, I was in high school. I completely missed two very
pivotal decades of software history. I have never programmed with punch cards or entered
commands at a teletype machine. I don’t know any software engineers in my generation
who have ever worked on a minicomputer. In fact, back in 1984 the only real business
computer that I was able to get my hands on was an 8088 IBM PC. Christening my
run-time system with the name HEC is my attempt to remind younger engineers of who
preceded them and also to pay homage to the deep geeks who programmed in hex codes
and paved the way for the Internet boom of the 1990s.


Anyone who developed software in the 1960s and 1970s probably understands that
the fictitious corporate names HEC and IPM are thinly veiled references to Digital Equip-
ment Corporation (DEC) and International Business Machines (IBM). In 1961 DEC
introduced the PDP-1, and ushered in the age of minicomputers. Up until that point, com-
puters were monolithic, expensive structures that were cloistered away in their own
buildings and usually protected by a division of armed palace guards. Submitting a job to a
mainframe was like visiting the Vatican to see the pope. At least a couple of older engi-
neers have told me all sorts of horror stories about having to sign up for computer access at
three o’clock in the morning.
The minicomputer changed all this. The minicomputer was smaller, much cheaper,
and offered a convenient time-sharing environment. Rather than submit your punch cards
to the exalted mainframe operators, you could log on to a minicomputer and avoid all the
hassle of groveling and waiting. In contrast to the mainframe, minicomputers were
friendly and accessible to the average developer. The popularity the minicomputer
enjoyed probably gave IBM salespeople the willies.

NOTE DEC was not the only company that stole IBM’s thunder. In 1964, Control
Data Corporation presented the CDC 6600 to the world. It was, hands down, the
fastest and most powerful computer available at the time. It made IBM’s top of the line
look like a lightweight. This is not surprising, considering that the man who led the
development was Seymour Cray. The CDC 6600 initially sold for $7 million, and
Control Data would end up selling about 50 of them. I have been told that high-level
execs at IBM were upset that Seymour had done such an effective job of outshining
them. Supposedly, IBM put up a paper tiger and told the business world to wait for its
supercomputer. Six months later, it became obvious that IBM had been bluffing in an
attempt to rain on the 6600’s parade.

One might even speculate that DEC’s introduction of the minicomputer was partially
responsible for the birth of the Unix operating system. In 1968, Ken Thompson, a
researcher at Bell Labs, stumbled across a little-used DEC PDP-7 and decided it would be
a neat platform for developing a game called Space Travel. Ken was a veteran of the
MULTICS project. MULTICS (which stands for MULTIplexed Information and Com-
puting Service) was a computer project which involved Bell Labs, General Electric, and
MIT. The problem with MULTICS was that it attempted to provide operating system fea-
tures which the hardware at the time was not really capable of supporting. Bell Labs
dropped out of the project, leaving poor Ken with a lot of spare time on his hands.
Although he had initially wanted to use the PDP-7 to implement Space Travel, Ken had
been bitten by the operating system bug. The urge to build an operating system is not the
kind of compulsion that ever goes away. Ken decided that he would scale back on the
requirements of MULTICS and write a smaller version he could call his own. Shortly
afterwards, Unix was born. The name Unix is a slightly modified version of the original,
which was UNICS (UNIplexed Information and Computing Service). Windows people
have been known to mockingly call it “eunuchs,” probably because of the hobbled user
interface.


It is now 2002 and DEC is, sadly, nothing more than a memory. After being swallowed
up by Compaq in the late 1990s, the DEC brand name was gradually snuffed out.

Approach
The HEC run-time system, as described in this book, consists of an execution engine, a
machine-level debugger, an assembler, and an assorted variety of other development tools.
I built the HEC run-time system from scratch. During my journey, I was confronted with
several architectural issues. The manner in which I addressed these issues ended up defin-
ing the programs I constructed. Rather than merely present you with a sequential
blow-by-blow account of my implementation and then dump some source code in your
lap, I thought it would be instructive to use a different methodology.
Professional mathematicians follow the standard operating procedure (SOP) of stat-
ing a proposition, providing a proof, and then offering an example. Most mathematics
graduate students don’t even have to think about it; they’ve seen it so many times that the
proposition-proof-example approach is automatic. I decided to adopt a similar approach
that allows me to explain architectural issues with equal rigor and consistency. The SOP
which I formulated involves three fundamental steps:
1. Present a design problem and the necessary background.
2. Provide a solution.
3. Discuss alternatives and their relative merits.
I will spend the majority of Chapter 2 applying this methodology. I start each subject by
providing an overview of fundamental concepts and theory. This sets the stage for an
explanation of my decisions and enables the reader to understand the context in which I
dealt with particular problems. In fact, I’m sure that some readers will scoff at some of my
decisions unless they are familiar with the underlying constraints that resulted from my
design goals.
Finally, every decision involves tradeoffs. If you implement a list data structure with
an array, you sacrifice flexibility for access speed. If you implement a list data structure
with a linked list, you sacrifice speed for the ability to increase the size of the list. There are
rarely solutions that are optimal under every circumstance. This is a recurring theme in
computer science which will rear its head several times in this book. As a result, I follow
up each design decision with an analysis of how I benefited from a particular decision and
what I sacrificed.
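To make the list example above concrete, here is a minimal C sketch of the two representations. It is a hypothetical illustration; the ArrayList and Node types shown here do not appear in the HEC source code.

#include <stdio.h>
#include <stdlib.h>

/* Array-backed list: indexing is one O(1) address computation, but the
   capacity is fixed unless the whole buffer is reallocated and copied.   */
struct ArrayList { int *data; int size; };

/* Linked list: growing is just another allocation, but reaching the
   i-th element costs O(i) pointer traversals.                            */
struct Node { int value; struct Node *next; };

int main()
{
    struct ArrayList list;
    struct Node a, b, c;
    struct Node *p;
    int i;

    list.size = 3;
    list.data = (int *)malloc(list.size * sizeof(int));
    list.data[0] = 10; list.data[1] = 20; list.data[2] = 30;
    printf("array element 2: %d\n", list.data[2]);   /* direct access     */

    /* The same values as a linked list, built by hand.                   */
    a.value = 10; a.next = &b;
    b.value = 20; b.next = &c;
    c.value = 30; c.next = NULL;

    for (p = &a, i = 0; i < 2; i++)                  /* walk to index 2   */
        p = p->next;
    printf("list element 2: %d\n", p->value);

    free(list.data);
    return 0;
}

Neither representation is wrong; each simply pays for its strength somewhere else, which is exactly the kind of tradeoff analysis that follows each design decision in this book.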

NOTE This theme is not limited to the field of computer science. Any scenario
where constraints and demands are placed on limited resources results in a collection
of reciprocal tradeoffs.

Because a run-time system, by necessity, includes functionality that is normally assumed
by the native operating system and hardware platform, the background material I cover
has been gathered from a variety of sources. I have tried to make the book as complete and
self-contained as possible. Readers who desire to investigate topics further may look into
the references supplied at the end of each chapter.

Intended Use
This book is directed towards two groups of people:
- Systems engineers
- Students of computer science
Systems engineers will find this book useful because it offers them an alternative to the
tyranny of computer hardware vendors. As hardware architecture becomes more compli-
cated, engineers will be confronted with greater challenges in an effort to keep pace with
Moore’s Law. For example, current processors based on the Explicitly Parallel Instruction
Computing (EPIC) scheme pose a much more daunting challenge to the compiler writer
than the processors of the 1980s. To support features like instruction-level parallelism,
much of the responsibility for managing efficient execution has been shifted from the pro-
cessor to the shoulders of the systems engineer (whom I do not envy). Implementing
predication to avoid mispredict penalties and generating code that uses speculation is an
awful lot of work. In fact, it’s enough to give any engineer a nosebleed.
To make matters worse, computer processors and their native instruction sets are tem-
poral by nature. By the time a stable, efficient compiler has been released into production,
and has gained acceptance as a standard tool, the engineers who designed it need to start
reworking the back end to accommodate the latest advances in hardware. System software
people, like me, are constantly in a race to play keep-up with the hardware folks, and it’s a
major pain.
There are alternatives to the gloomy prospect of continuously rewriting the back end
of your development tools. Specifically, you have the option of targeting a virtual
machine. Unlike a physical computer, a virtual machine is really just a specification. It is a
collection of rules that can be implemented in any way that the software engineer deems
fit. This effectively makes a virtual machine platform independent. A virtual machine can
exist on any platform and be written in any computer language, as long as it obeys the rules
of the specification. I constructed the HEC execution engine with this in mind. My pri-
mary goal was to create a stationary target that would save systems engineers from
rewriting their development tools every two years. I also wanted to present a run-time sys-
tem that would be straightforward and accessible to the average software engineer, much
in the same way that DEC’s PDP-11 was accessible to programmers in the 1970s.
Students who want to get a better understanding of how a computer functions without
delving into the gory details of direct memory access or interval timers will also find this
book useful. Regardless of which hardware platform a system is based on, the fundamen-
tal mechanism for executing programs is the same: Instructions are loaded from secondary
storage into memory and then executed by the processor. This book invests a lot of effort
into explaining this mechanism. The result is that the student is able to take this basic
understanding and use it as a frame of reference when faced with a new system.
In addition, this book provides a solid explanation of assembly language program-
ming. While developing software strictly in assembly language is a poor use of resources,
an in-depth understanding of assembly-level programming offers certain insights into top-
ics that cannot really be obtained in any other way. For example, sometimes the only way
to discern the finer details of a compiler’s optimizer is to examine what goes on in the
basement, where the processor executes machine-encoded instructions.
“Pay no attention to that man behind the curtain....”
—Wizard of Oz
My own initial understanding of what Borland’s Turbo C compiler did underneath the
hood was very poor. I usually just wrote my code, invoked the compiler, closed my eyes,
and crossed my fingers. When I felt it was safe, I would open my eyes and peruse the
results. It was only after I started taking a look at Turbo C’s assembly code listings that I
was rewarded with a better grasp of what happened behind the scenes. This, in turn,
allowed me to beat the compiler’s optimizer at its own game on several occasions.

Prerequisites
This book assumes that you are fluent in the C and C++ programming languages. If you
are not familiar with C and C++, and you have any sort of latent interest in systems engi-
neering, I would encourage you to learn these languages as soon as you can. C is the
language of choice for implementing system software. It is both a lightweight and versatile
language which provides access to a number of low-level operations, but also abstracts the
computer’s operation enough to make porting easy.
C++ is an extension of C that allows more complicated problems to be addressed
using what is known as the object-oriented paradigm. C++ is one of the three big
object-oriented languages (Smalltalk, C++, and Java). I mention a couple of books at the
end of this introduction which may be of use to those of you who do not speak C or C++.
Learning C, in particular, is a necessary rite of passage for anyone who wants to do
systems engineering. The primary reason for this is that the Unix operating system has tra-
ditionally been implemented in C. The first version of Unix was implemented by Ken
Thompson on a PDP-7 in assembler. After several thousand lines, any assembly program
can become a real challenge to maintain. The fact that Ken was able to pull this off at all is
a testimony to his fortitude as a programmer. Realizing that porting an operating system
written in assembler was no fun, Ken joined heads with Dennis Ritchie and Brian
Kernighan to create C. In 1973, the Unix kernel was rewritten in C for DEC’s renowned
PDP-11. If you think C is an anachronism that has been supplanted by more contemporary
languages, think again. Take a look at the source code for the Linux operating system ker-
nel; it’s free, readily available all over the Internet, and almost entirely written in C.


Organization
This book examines both the philosophical motivation behind HEC’s architecture and the
actual implementation. In doing so, the design issues that presented themselves will be
dissected and analyzed. I truly believe that a picture is worth a thousand words, so I
included a diagram or illustration whenever I thought it was appropriate. Sections of
source code are also present throughout the text.
This book is divided into three parts.
Part I — Overview The first two chapters lay the foundation for the rest of the book.
Chapter 1 traces the historical development of computing technology and the requirements
that this evolution has produced. In Chapter 1, I also present a set of design objectives that
define the nature of HEC. In Chapter 2, I sketch out the basic facilities available to the HEC
run-time system and the constraints that directed me towards certain solutions.
Part II — The HEC Virtual Machine In Chapters 3 and 4, I explain the operation of the
HEC virtual machine and debugger. The HEC virtual machine is actually much less com-
plicated than the HEC assembler, so these chapters are a good warm-up for later material.
Chapter 3 covers the operation of the HEC virtual machine. Chapter 4 entails an exhaus-
tive analysis of the debugger. The debugger is embedded within the virtual machine, so
these two chapters are closely related.
Part III — HEC Assembly In the final four chapters, I introduce and discuss topics asso-
ciated with the HEC assembler. I begin in Chapter 5 by investigating the HEC assembler
itself. HEC’s interface to the native operating system is provided by a set of interrupts. Chap-
ter 6 is devoted to enumerating and describing these interrupts. The proper use of HEC’s
assembly language is explained in Chapter 7. Chapter 8 provides some thoughts on how
object-oriented constructs can be implemented in terms of the HEC assembly language.

Companion CD-ROM
Software engineering is not a spectator sport. Eventually you will have to get your hands
dirty. The extent to which you do so is up to you. For those of you who are content to target
and use HEC, I have included a set of binaries on the companion CD-ROM. For those of
you who want to muck about in the source code, I have included the source code to all the
binaries.
I live in California and subsist on a limited, private research budget (i.e., my job). Thus,
I did my initial implementation on Windows. I expect to hear gasps of dismay from the audi-
ence, and I can sympathize with them. However, I chose Windows primarily because I think
it is easier to use than KDE or GNOME. The alternative would have been to purchase Sun
hardware, which I can’t afford. This does not mean that HEC is stuck on Windows. Porting
the run-time system is fairly straightforward and discussed in Chapter 8.


Feedback
Nobody is perfect. However, that does not mean that one should not aspire to perfection.
For the most part, learning through direct experience is the best way to obtain intimate
knowledge of a subject. Hindsight is always 20/20, so the goal should be to implement
enough code so that you gain hindsight.
There’s an ancient Oriental game named “Go,” which is so fraught with complexity
that it takes years of careful study to become proficient. The primary advice to beginners is
to “hurry up and lose.” This is the same advice I would give to software engineers. Make
plenty of mistakes while you’re young. Most managers expect young software engineers
to screw up anyway.
This is the advice that I tried to follow while constructing HEC. A couple of years ago,
I dove into the implementation and corrected flaws after I recognized them. If asked to do
it all over again, I know that there are a number of changes that I would make. Alas, even-
tually you have to pull the trigger and release your code.
If you discover an error in this book, please drop me a line and let me know. If I had
money, I could offer an award like Don Knuth (pronounced Ka-Nooth). He places a
bounty of $2.56 on each new error that is found in his books (32 cents for useful sugges-
tions). He even goes to the extreme of suggesting that you fund your book purchase by
ferreting out errors. Unfortunately, the high cost of living in California keeps me in a con-
stant state of poverty (I don’t know how Don does it). The best I can do is offer my thanks
and perhaps mention your name in the next edition. You may send corrections, sugges-
tions, and invective diatribes to me at:
Bill Blunden
c/o Wordware Publishing, Inc.
2320 Los Rios Blvd., Suite 200
Plano, Texas 75074

References
Andres, Charles. “CPU Wars.” 1980: http://e-pix.com/CPUWARS/cpuwars.html.
This comic strip is an interesting dose of 1960s software culture. Anyone born after
1969 should read this strip, just to see what they missed while they were infants.
Intel. IA-64 Architecture Software Developer’s Manual, Volume 1: IA-64 Application
Architecture. Order Number 245317-001, January 2000. http://www.intel.com.
This is the first volume of a four-volume set on Intel’s upcoming 64-bit processor.
This volume, in particular, discusses some of the issues with regard to compiler
design. After skimming through this volume, you’ll understand why compiler
design for IA-64 processors is such a complicated task.


Maxwell, Scott. Linux Core Kernel Commentary. The Coriolis Group, 1999. ISBN:
1576104699.
An in-depth look at the basic process management scheme implemented by Linux.
It is also a very graphic example of how C is used to construct a production-quality
operating system. This is definitely not something you can read in one sitting.
Ritchie, Dennis M. “The Development of the C Language.” Association for Computing
Machinery, Second History of Programming Languages conference, April 1993.
__________. “The Evolution of the Unix Time-sharing System.” AT&T Bell Labora-
tories Technical Journal 63 No. 6 Part 2, October 1984. pp. 1577-93.
Schildt, Herbert. C: The Complete Reference. Osborne McGraw-Hill, 2000. ISBN:
0072121246.
This book is for people who have little or no programming experience and want to
learn C. It’s a fairly gentle introduction by an author who has a gift for explaining
difficult concepts.
__________. C++ from the Ground Up. Osborne McGraw-Hill, 1994. ISBN:
0078819695.
This is the book you should read after reading Herbert’s book on C.
van der Linden, Peter. Expert C Programming: Deep C Secrets. Prentice Hall, 1994.
ISBN: 0131774298.
This is a truly great book. A lot of the things Peter discusses are subtle issues that
separate the masters from the pedestrians.

Part I
Overview

Chapter 1 — History and Goals


Chapter 2 — Basic Execution Environment

Chapter 1
History and Goals

Setting the Stage


In 1965, Gordon Moore predicted that every 18 to 24 months, the number of transistors
that could be put on a chip would double. This observation evolved into a heuristic known
as Moore’s Law. Gordon’s rule of thumb proved relatively accurate and serves as a basis
for predictions about where transistor dimensions are headed.

ASIDE If the number of transistors in a given processor doubles every 18 months,
this means that the linear dimensions of a transistor are cut in half every three years.
In 1989, Intel came out with the 80486 chip, which had transistors that were on the
scale of 1 micron (a human hair is about 100 microns wide). By crunching through
the math (take a look at Figure 1-1), it becomes obvious that Moore’s Law will hit a
wall in under 40 years. This is because an electron needs a path at least three atoms
wide. In a path less than three atoms wide, the laws of quantum mechanics take over.
This means that electrons stop acting like particles and start acting like escaped con-
victs. Electrons do not like being confined. If you clamp down too hard on them, they
rebel and tunnel through things, like Clint Eastwood in the movie Escape from Alcatraz. Given that
IBM has recently discovered a way to use carbon nanotubes as semiconductors, we
may be well on our way to manufacturing transistors which are several atoms wide.
So, if I’m lucky, maybe I will get to see Moore’s Law reach its limit before I’m senile.

Figure 1-1
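The back-of-the-envelope arithmetic behind that claim is easy to reproduce. The following C program is a hypothetical sketch using rough figures (it is not code from this book): it starts from the 80486's roughly 1 micron feature size in 1989 and halves the linear dimension every three years until a feature is narrower than about three silicon atoms (around 0.7 nanometers).

#include <stdio.h>

int main(void)
{
    /* Rough Moore's Law projection with illustrative numbers.            */
    double size_nm = 1000.0;          /* 1 micron expressed in nanometers */
    const double limit_nm = 0.7;      /* ~3 silicon atoms across          */
    int year = 1989;

    while (size_nm > limit_nm)
    {
        size_nm /= 2.0;               /* linear size halves every 3 years */
        year += 3;
        printf("%d: %.3f nm\n", year, size_nm);
    }
    printf("Atomic limit reached around %d, %d years after the 80486.\n",
           year, year - 1989);
    return 0;
}

Running this puts the wall a little over 30 years out from 1989, which is where the "under 40 years" estimate comes from.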


Hardware has not only gotten smaller, it has also gotten cheaper. With the emergence of
cheap, sufficiently powerful computers, there has been a gradual move away from the cen-
tralized models provided by traditional mainframes towards distributed, network-based
architectures.
In the 1950s such things were unheard of. The market for computers was assumed to
be around . . . oh, I don’t know. . . about six. Computers were huge, lumbering giants like
something out of the movie The Forbidden Planet. Back then, when they performed a
smoke test, it was literally a smoke test. The engineers turned the machine on and looked
to see where the smoke was coming from.
The trend towards distributing processor workload actually began in 1961 when Digi-
tal Equipment Corporation (DEC) introduced the PDP-1. This event marks the start of the
minicomputer revolution. Medium-sized companies that could not afford a mainframe
bought minicomputers, which offered comparable performance on a smaller scale. Or, if a
company that already owned a mainframe wanted to increase throughput without purchas-
ing a larger mainframe, they would offload some of the mainframe’s work to a
departmental minicomputer. Regardless of how they were put to use, they were a small
fraction of the cost of a room-sized computer and sold like crazy. The recurring theme that
keeps rearing its head is one of accessibility.
This shift in processing, produced by offloading work to cheaper hardware, became
even more pronounced after the IBM personal computer made its debut in 1981. The pro-
liferation of personal computers (also known as microcomputers, or PCs), and resulting
decrease in popularity of the minicomputer, is often referred to as the “attack of the killer
microcomputers.” The new battle cry was: “No one will survive the attack of the killer
micros!”
I know an engineer who worked at Unisys back when mainframes still ruled the earth.
He used to despise having to sign up for development time on his department’s mainframe.
Instead, he moved his source code onto an 8088 IBM PC and did as much work there as he
could. The reason behind this was simple: Working on his own PC gave him a degree of
control he did not have otherwise. The microcomputer empowered people. It gave them a
small plot of RAM and a few kilobytes of disk storage they could call their own. This was a
breath of fresh air to engineers who were used to surrendering all their control to a surly
mainframe operator.
Using the PC as a server-side workhorse really didn’t take off until 1996 when
Microsoft came out with Windows NT 4.0. Before 1996, Intel’s hardware didn’t have
enough muscle to handle server-side loads and Windows NT 3.51 was not mature as a
product. Eventually, however, Intel and Microsoft were able to work together to come up
with a primitive enterprise server. There were more sophisticated Unix variants that ran on
the PC, like FreeBSD and Linux. The Berkeley people, unfortunately, did not have a mar-
keting machine like Microsoft, and Linux had not yet gained much of a following. By
early 1997, most of the Enterprise Resource Planning (ERP) vendors had either ported or
had started porting their application suites to Windows NT. This was an irrefutable sign
that NT was gaining attention as an alternative at the enterprise level.

ASIDE DEC once again played an indirect part in the rise of the desktop computer.
When Bill Gates wanted to begin development on Windows NT, he hired Dave Cutler,
who had done extensive work on operating system design at...you guessed it, DEC.

The latest incarnation of the Windows operating system, Windows XP, was released in
October of 2001. There is a version that targets Intel’s 64-bit Itanium processor. Itanium
was supposed to be Intel’s next big thing. Traditionally, the high-end server market has
been dominated by the likes of Hewlett Packard, Sun Microsystems, and IBM. With its
32-bit processors, Intel was forced to eat with the kiddies. Itanium was touted as the vehi-
cle that would allow Intel to move into the high-end server market and compete with the
other 64-bit architectures. However, with Itanium’s clock speed of 733 to 800 MHz and
high price (starting at $1,177), I think that Itanium was doomed before it hit the market.
The close collaboration between Intel and Microsoft has come to be known as the
Wintel conspiracy. The strongest motivator behind the adoption of Wintel-based servers
by CIOs is the desire to minimize total cost of ownership. The trick is to take a large group
of inexpensive Wintel servers (Intel servers running Windows) and allow them to cooper-
ate so that the responsibility for a workload can be divided among the machines. This is
called clustering. A collection of such servers is also known as a cluster or server farm.
Clustering is a relatively unsophisticated way to provide scalability and reliability. If a
machine fails, for whatever reason, the other machines in the cluster can compensate while
repairs are made. If data throughput starts to degrade, the problem can be addressed by
adding more servers to the cluster. Because the Wintel machines used to build a server
farm are comparatively cheaper than their mainframe counterparts, adding new nodes to
the cluster is not seen as an impediment.
Figure 1-2 displays the server farm arrangement at E*trade, a well-known online
bank. HTTP-based Internet traffic (i.e., a browser client somewhere on the Net) is filtered
through a firewall and then hits a load balancer. Using a round-robin algorithm, the load
balancer picks a web server to initiate a session with the client. Transactions initiated from
the client execute their business logic on a set of application servers. Client transactions
are completed when data is committed to an array of database servers. E*trade performs
most of the clustering on hardware from Sun Microsystems, even though its actual portal
to the stock market is provided by a mainframe.

Figure 1-2
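Round-robin balancing, mentioned above, is about as simple as load distribution gets. The following C sketch is purely illustrative (the server names and function are hypothetical, not a description of E*trade's actual configuration): each incoming session is simply handed to the next server in the ring.

#include <stdio.h>

#define NUM_SERVERS 4

static const char *servers[NUM_SERVERS] =
{
    "web-01", "web-02", "web-03", "web-04"
};

/* Each new session goes to the next server in the ring.                  */
const char *pick_server(void)
{
    static int next = 0;
    const char *chosen = servers[next];
    next = (next + 1) % NUM_SERVERS;
    return chosen;
}

int main()
{
    int i;
    for (i = 0; i < 6; i++)            /* six incoming sessions            */
        printf("session %d -> %s\n", i, pick_server());
    return 0;
}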

Mainframe pundits will argue that relying on mainframe technology is still a better
solution. A single mainframe can do the work of several hundred commodity servers. Not
only does this save on real estate, but it also saves on electricity. Over the course of a year,
this kind of cost discrepancy can become conspicuous, especially in California, which has
suffered from a rash of power shortages and rolling blackouts.
Mainframes also tend to have a higher level of reliability and security at both the hard-
ware and software level. This is both a function of the mindset of the mainframe architects
and the technology they employ. When a mainframe crashes, it is treated like a major
catastrophe by the manufacturer. Typically, a mainframe vendor will maintain an elec-
tronic link to every mainframe they sell, in an effort to provide monitoring services.
Responding to a problem costs money and time. Hence, a lot more effort is spent on detect-
ing software problems and recovering from them gracefully.
Traditionally, mainframes have also been the exclusive domain of an advanced tech-
nology known as dynamic partitioning, which allows system resources to be reallocated at
run time to accommodate changing application workloads. This allows mainframes to sus-
tain a higher level of performance for transaction and I/O-heavy operations. Dynamic
partitioning is the type of intricate technology that puts mainframe operating systems on
the same level as rocket science.
In general, mainframes are run in a highly managed and controlled environment,
which is to say that the operator, not the program, decides what kind of resources a pro-
gram will use (memory, processor allowance, etc.). A runaway program on a mainframe
would never, ever, bring the entire machine down. Memory protection is strictly enforced
and usually built into the hardware. In fact, some IBM mainframes are reported to have
had uptimes in excess of 20 years! My Windows 2000 box, on the other hand, crashed last
night while I was backing up files.
The idea of replacing a farm of servers with a mainframe has been labeled as server
consolidation. It is also interesting to note that the term “mainframe” is no longer used.
Companies that manufacture mainframes have been calling their mainframes enterprise
servers.
The punchline is that mainframes are not just big, fast machines. Clustering a bunch of
low-budget servers together to get the equivalent processing power will still not get you
mainframe performance. This is because mainframes are all about reliability and tightly
managed execution, 24 hours a day, seven days a week, for years at a time. Microsoft has
decided to appeal to the average consumer by investing a lot of effort in building an
impressive user interface. They have paid for this decision by shipping an operating sys-
tem that crashes far more frequently than a mainframe system.
Nevertheless, purchasing a mainframe is a huge investment. In a situation where a
CIO has to get things done on a limited budget, being inexpensive is the great equalizer for
the Wintel machine. You probably won’t find many new organizations that can afford to
make the down payment on a mainframe. It’s far easier, cheaper, and faster to start off with
a small cluster of inexpensive servers and increase the cluster's size as capacity demand
grows.
As Wintel machines are cheap and readily available, the phenomenon of Wintel in the
enterprise is very much a grass roots movement. By dominating the low end, Wintel has
been able to slowly bootstrap itself into the enterprise scene. A Wintel box is a computer
that almost anyone can afford. As a result, there is a whole generation of system admins
who have been born and raised on Wintel. A college undergraduate who owns a couple of
beat-up, secondhand Wintel boxes can hit the ground running in a business that is Win-
dows-centric. This makes it easier for companies to fill positions because less training is
required.
It’s something that has happened many times before. Breakthroughs in manufacturing
and technology lower the barriers to entry. Companies that could not afford the previous
solutions flock to the new, cheaper alternatives. It doesn’t matter if the technology is
slightly inferior to more expensive options. What matters is that the performance provided
by the cheaper technology is “good enough,” and that it is accessible. See Figure 1-3 for a
better look at this trend.

Figure 1-3

ASIDE The relationship between hardware and software has basically inverted
itself from what it was back in the 1950s and 1960s. It used to be that a vendor sold
you hardware and the software came along with the hardware at very little cost. In
other words, the income from selling hardware subsidized the development of soft-
ware. Today, just the opposite is true. Now, a vendor will deploy a business solution
and then add hardware to the deal at little relative cost. Software is now subsidizing
hardware. If you don’t believe me, take a look at the cost of an MSDN subscription
and compare it to the cost of a development machine.

There’s another thing the mainframe people don’t like to talk about. Using an off-the-shelf
parts approach also offers protection against becoming dependent upon a single vendor.
This is a notorious problem with commercial Unix vendors. Each vendor offers their own
flavor of Unix, specially tailored for their own proprietary hardware. The system they
offer strays from the standards just enough to lock the customer into a costly series of
upgrades and investments. Deploying commodity hardware is a way to ensure that this
doesn’t happen. If one OEM fails to live up to standards, there are probably at least three
other suppliers to take their place.
In the 1980s and early 1990s, the majority of new IT systems installed were Unix
based. From the late 1990s on, Windows staked a claim on the corporate landscape. Again,
this is primarily a matter of dollars and cents. Young companies and cash-strapped system
admins often have to squeeze as much as they can out of their budgets. Windows appeals
to this mindset. An incredible example of this is the Chicago Stock Exchange. It is based
entirely on Windows. Not only did they implement a very contemporary clustering
scheme, but they also deployed an object database and CORBA. It is truly a bleeding-edge
architecture. I have to tip my hat to the CIO, Steve Randich, for successfully pulling off
such an unorthodox stunt. It is vivid proof of concept for the clustering school of thought.

NOTE In all fairness I think I should also add that the Chicago Stock Exchange
reboots their servers at the end of each day, a luxury that mainframes are not
afforded and do not require. I suppose they do this to stifle memory leaks. Also, the
system really only has to service about 150 clients, whereas a typical mainframe must
be able to handle thousands. Finally, another fact that speaks volumes is that the lead
software developer at the Chicago Stock Exchange decided not to use DCOM or SQL
Server. One might interpret this decision as a statement on the stability of Microsoft’s
enterprise suite.

The explosion of mid-range Unix servers and Intel machines does not mean that main-
frames are extinct. In fact, most businesses that do transaction-intensive computing still
rely on mainframes. Banks, insurance companies, utilities, and government agencies all
still execute core business operations with mainframes. This is one of the reasons why
there will always be COBOL programmers. Take a look at Table 1-1 to see a comparison
of the mainframe and cluster-based computing models.

Table 1-1
             Mainframe - Server Consolidation          Commodity Hardware - Clustering
Reliability  By design, fault prevention is the        Through explicit redundancy; one machine
             principal focus and is ingrained in       blue-screens and the others compensate.
             hardware and system software.
Management   Centralized                               Distributed
Scalability  Through dynamic partitioning and          Can scale with finer granularity, one
             tightly coupled processors. Processor     machine at a time, but with less
             time and memory can be allocated          management and control.
             dynamically at run time.
Dependence   Single vendor                             Multiple vendors
Maturity     Established and tested                    Relatively new

RANT If I were a CIO faced with a monster-sized load of mission-critical business
transactions, my first inclination would be to think about deploying a mainframe.
From an engineering standpoint, I think that high-end products like IBM’s z900
machines are designed with an eye towards stability and raw throughput (something
which, in the enterprise, I value more highly than a slick GUI).
As far as the enterprise is concerned, computers are used to do just that: compute.
Pretty multimedia boxes may impress the average Joe walking through a retail
store, but they are of little use when it comes to handling a million transactions every
hour. This is because the effort spent in developing an attractive GUI is usually
invested at the expense of stability. The only thing that might dissuade me from follow-
ing this path would be budgetary constraints. If I had very limited financial resources, I
might decide to bite the bullet and go with Wintel.

The result of all this history is that corporate information systems have become an amal-
gam of different hardware platforms and operating systems. This is particularly true in
large companies, which may have acquired other companies and have had to assimilate a
number of disparate systems. I’ve worked in places where a business transaction may have
to traverse through as many as five different platforms before completing.

NOTE The question of whether the mainframe or cluster-based computing model
is better is irrelevant. What's important is that both models have become accepted
and that this development has produced corporate information systems that are var-
ied in nature.
The whole idea of a multiplatform IT system might run contrary to intuition. The
promise of enterprise integration has, so far, turned out to be a myth sold by evange-
lists and salespeople out to make a buck. The CIOs that I’ve spoken with realize this
and prefer to adopt a pet platform and stick with it. With smaller companies also opt-
ing for the low-cost alternative (i.e., Windows), it is no surprise that Microsoft currently
owns about 40 percent of the server market. In spite of this trend, the inertia of legacy
applications, outside influences, and availability constraints can foil a CIO’s attempts
to keep a company’s computing landscape uniform.

The Need for a Virtual Machine


Faced with large, heterogeneous information systems and the accelerated rate of techno-
logical innovation, software engineers are beginning to rediscover the benefits of targeting
virtual machines.
The name of the game in software development is return on investment. Companies
want to invest money in software that will be useful long enough for the resources spent to
be justified. Porting a software package from one platform to another is costly and,
depending on the number of platforms supported, can prove to be a Sisyphean nightmare.
The worst-case scenario occurs when business logic is stranded on an aging system.
Due to historical forces, a large base of code may end up on an obsolete platform that no
one wants to deal with. The people who wrote the original code have either quit or been
promoted and don’t remember anything (sometimes intentionally). The source code has
been transformed by time into an encrypted historical artifact that will require years of
work by researchers to decipher. If the language and tools used to develop the initial code
are linked to the original platform, porting may be out of the question. In pathological
cases, the code may be so obscure and illegible that maintenance engineers are too scared
to touch anything. I’m sure Y2K programmers are familiar with this type of imbroglio.
The only recourse may be a complete rewrite and this is an extremely expensive alterna-
tive, fraught with all sorts of dangers and hidden costs. This is the kind of situation CIOs
lose sleep over.
Using a virtual machine offers a degree of insurance against this kind of thing happen-
ing. When confronted with a new hardware platform or operating system, the only
application that needs to be ported is the virtual machine itself. If a virtual machine is, say,
100,000 lines of code, you might think that actually porting the code is a bad investment.
100,000 lines of code is not a trivial porting job. However, let’s say that the virtual
machine runs an application suite that consists of 11 million lines of code. Suddenly, port-
ing the virtual machine is not such a bad deal because the alternative would be to port the
11 million lines of application code. Don’t think that this isn’t realistic. I once visited an
automobile insurance company that had around 30 million lines of application code.
For developers using a commercial run-time system that has already been ported to
several platforms the savings are immediate and tangible. Sometimes the process of port-
ing code that targets a virtual machine is as easy as copying the binaries.

Depreciating Assets
Like hardware, operating systems come and go. To add insult to injury, they tend to get
bigger, slower, and more complicated.
The reason behind this is pretty simple. In order for software companies to make
money on a consistent basis, they have to create a continuous stream of revenue. In a mar-
ket that is already saturated, one way to do this is to release new versions of a product
which have features that previously did not exist. It is not that you necessarily need these
new features; some people are happy with what they bought back in 1995. Rather, these
new features are an excuse that big software companies use to entice you to keep giving
them your money. It’s a racket, plain and simple. My advice would be to hang on to what
you have as long as you can and try to ignore the marketing slogans.
Studying a subject like mathematics is generally a good investment. Not only will you
meet lots of women and probably become a millionaire (I’m joking), but the foundations
of the subject will never change. In math, you learn topics like geometry and algebra, and
the resulting skills you gain are permanently applicable. Mathematics is the native tongue
of the world around us. Quantitative patterns and relationships are everywhere. Becoming
a skilled mathematician is like building a house on a solid, immovable foundation. The
knowledge base that you build is good forever.
The opposite is true for commercial operating systems. Knowledge of a particular
commercial operating system has a distinct half-life. After a few years, the skill set corre-
sponding to a given platform decays until it is no longer useful or relevant. Try to find an
employer who is interested in hiring an admin who knows how to operate a CDC 6600. If
you find one such employer, it will probably be for a position at the Smithsonian
Institution. Spending vast amounts of effort to master one platform is like investing in a
rapidly depreciating asset. By the time you have attained any sort of proficiency, the next
version has come out. If you identify with a particular platform, you may find yourself try-
ing to resist change. It’s very tempting to stay in familiar surroundings instead of having to
start all over from scratch.
I can remember in 1990 when I felt like I had finally achieved guru status in DOS 3.3
and real-mode programming. It had taken years of relentlessly wading through technical
manuals and reading every magazine article I could get my hands on. I was revered by my
coworkers as a batch file master. In fact, I often mumbled DOS shell commands in my
sleep. I was also very handy with Borland’s Turbo C compiler and Tasm macro assembler.
These tools were great, not only because they were simple, inexpensive, and straightfor-
ward, but also because they didn’t try to push their religion on you. They were not out to
sell you a component architecture or a design methodology. It was straight up ANSI C and
I liked it that way.
At work I had written a serial port driver that allowed our 80386 boxes to talk to the
company’s mainframe. I was feeling like I could finally wrap my hands around everything
and see the big picture. The PC domain was my kingdom and I was intimately familiar
with every TSR and driver from the interrupt table to high memory. Two days after I pro-
claimed myself an expert to my supervisor, I found out about Windows 3.0. In fact, there
was one IT guy who had gotten his hands on the early release and he took great pleasure in
asking me questions I could not answer. To make matters worse, he liked to ask me these
questions in front of my coworkers. My reputation had been put in jeopardy, and people
began to whisper that I was behind the curve.
The gauntlet had been thrown down so I rushed into Windows 3.0. I spent a lot of my
spare time on weekends learning about protected mode addressing and DOS-extenders. I
rolled up my sleeves and bought Visual C++ 1.52 so that I could delve into the black art of
writing VxDs. When integrated networking arrived via Windows for Workgroups in 1992,
I was on it. After sufficient toiling and lost sleep, I regained my title as the departmental
wizard. If someone had a problem setting up file sharing, they called on me — Mr.
NetBEUI. For a couple of years I was riding high again. I thought I had the game licked.
However, while I was basking in my own glory, that same IT guy had secretly
obtained a beta copy of Microsoft's next big thing. Its code name was Chicago and it would
soon be known to the rest of the world as Windows 95. Again, he started bringing up
questions I couldn't answer.
Rather than face this onslaught of perpetual platform turnover, I wanted to be able to
rely on a stationary target that would give me the ability to design software that would
have a decent half-life. Computer technology is a raging ocean of change. I was looking
for an island so I could avoid being tossed around every time something new came out.
One thing I observed was that there actually was a single, solitary constant that I could
rely on: Every contemporary platform has an ANSI C or C++ compiler.
Rather than anchor a set of development tools to a particular hardware platform, why
not base them on a specification? And why not construct a specification that can be easily
implemented using ANSI C, which is universally available? Instead of relying on a native
platform to execute my applications, I would use a virtual machine that could be easily
ported when hardware transitions were forced on me.
Eureka! I felt like I had made a great discovery, until a few days later when it dawned
on me that other people might have beaten me to it. Obviously, the idea of a virtual
machine is nothing new. IBM has been doing it successfully for decades, and new compa-
nies like TransMeta have incorporated a virtual machine approach into their hardware.
What makes my implementation different is a set of design goals, which ended up defining
the fundamental characteristics of my virtual machine.

A Delicate Balance
I strongly believe that it’s important to sit down before you begin a software project and
create a list of fundamental design goals and requirements. Then, the trick is to faithfully
stick to them. This is harder than it sounds. The urge to sacrifice your overall guidelines to
address an immediate concern is very seductive.
My objective was to build a virtual machine that satisfied three criteria. In order of
priority, these are:
1. Portability
2. Simplicity
3. Performance
Portability is the most important feature because being able to work with a uniform soft-
ware interface, across multiple platforms, is the primary benefit of using a virtual machine.
If you can’t port a virtual machine, then you’re stuck with a given platform, in which case
you might as well use tools that compile to the native machine encoding.
Portability is also the most important priority in the sense that I had to make sacrifices
in terms of simplicity and performance in order to achieve it. In software engineering, there
is rarely a solution that is optimal under all conditions.

If you’ve ever read any of the well-known journal articles on dynamic memory stor-
age or process scheduling, you’ll notice that the authors always qualify their conclusions
with specific conditions. In other words, the results that an approach yields depend on the
nature of the problem. Because of this, most decisions involve an implicit tradeoff of some
sort.
The HEC run-time execution engine is about 10,000 lines of code. About 1,500 lines
are platform specific. In order to maintain a modicum of portability, I was forced to pepper
certain areas of code with preprocessor #ifdef directives. Naturally, this hurts readability
and makes maintenance a little more complicated. I succeeded in isolating most of the
platform-specific code in two source files. Nevertheless, there are portions of the code
where I sacrificed simplicity for portability.
One way of boosting performance would be to make heavy use of assembly code.
However, using an assembler does not necessarily guarantee faster program execution. In
order to be able to justify using an assembler, the programmer has to have an intimate
knowledge of both the assembly code a compiler generates and the kind of optimizations it
performs. The goal is to be able to write faster assembly code than the compiler can. This
requires a certain amount of vigilance and skill because, in most cases, the optimizer in a C
compiler can do a better job than the average programmer.
Another problem with using assembly language is that it ties your code to the hard-
ware you’re working on. To port the code to new hardware, you’ll need to rewrite every
single line of assembly code. This can turn out to be much more work than you think.
Again, in the effort to maintain a certain degree of portability, I opted out of low-level,
high-performance assembly coding and wrote everything in ANSI C.
While I was a graduate student, I learned a dirty little secret about how to get research
published. If a professor makes a discovery that can be explained in simple terms, he will
go to great lengths and use any number of obfuscation techniques to make the discovery
look more complicated than it really is. This is more of a sociological phenomenon than
anything else. The train of thought behind this is that if an idea is simple, then it must not
be very revolutionary or clever. People who referee journal submissions are more likely to
be impressed by an explanation that wraps around itself into a vast Gordian knot.
This may work for professors who want tenure, but it sure doesn’t work for software
engineers. In his Turing Award lecture, “On Building Systems that will Fail,” one of the
conclusions Fernando Corbató makes is that sticking to a simple design is necessary for
avoiding project failure. You should listen to this man. Corbató is a voice of experience
and one of the founding fathers of modern system design. He led the project to build
MULTICS back in the 1960s.
MULTICS was one of those ambitious Tower of Babel projects that attempted to
reach for the heavens. MULTICS was designed to be a multiuser system that implemented
paging, shared memory multiprocessing, and dynamic reconfiguration. This is especially
impressive when one considers that the hardware available at the time wasn’t really up to
the task. Corbató was forced to stare the dragon of complexity straight in the eyes and he
learned some very expensive lessons.
To me, simplicity has priority over performance by virtue of my desire to make the
code maintainable. A year from now, I would like to be able to modify my code without
having to call in a team of archaeologists. Code that is optimized for performance can
become very brittle and resistant to change. Developers writing high-performance C code
will often use all sorts of misleading and confusing conventions, like bitwise shifting to
divide by 2. What is produced is usually illegible by anyone but the original developer, and
given enough time even the original developer may forget what he had done and why he
had done it. In an extreme case, a programmer who has created a mess and is too scared to
clean it up may opt to quit and go work somewhere else. I’ve heard people refer to this as
“the final solution” or “calling in for air support and pulling out.”
I knew that if I tried to be everything to everyone, I probably wouldn’t do anything
very efficiently. The desktop has already pretty much been conquered by Microsoft and
Internet Explorer. Also, attempting to create a GUI-intensive run-time system seemed like
a poor investment of a tremendous amount of effort. Not to mention that including GUI
code would hurt portability. I’d much rather follow the approach of a mainframe engineer
and concern myself with creating a solution that focused on fault tolerance and stability.
Because of this, I decided that the HEC run-time system would eschew a GUI API and be
aimed at executing application logic on the server side.
Finally, another reason for keeping a simple design has to do with individual creativ-
ity. In order for programmers to exercise the creative side of their brain, they have to feel
like they have a complete understanding of what they’re dealing with. In other words, they
should be able to keep the overall design completely in main memory. Being creative
requires the ability to experiment with different ideas, and this necessitates an intimate
understanding of the ideas. Without a solid understanding, the chance to do any sort of
innovation is stifled.
I worked at a company that has a code base of 16 million lines. The code base is old
enough and big enough that there is no one who really understands how everything works.
There are a couple old veterans in each department who know a certain portion, but that’s
it. If an engineer is lucky, maybe one of the old hands will pass on some knowledge ver-
bally. Being a software engineer in this company is more like belonging to a guild during
the Renaissance. It’s a classic big ball of mud scenario. No one understands anyone else’s
area of specialty and source code has evolved into a large amorphous mess.
Back in the 1980s an executive VP at this company concluded that in-source com-
ments were slowing down compile time (from a day to two days) and had them all
removed. The company now owns 16 million lines of K&R C source code which have zero
documentation. This may have made a very small group of people happy, seeing as how
they now had a monopoly on useful information. New hires, however, faced a 90 degree
learning curve.
In a situation like this, creativity does not exist. In fact, new engineers consider it a
victory if they can understand what a single 5,000-line program does. This is dangerous
because in order to survive, a company has to be able to listen and respond to the demands
of their customers. What are the odds that this company’s system will be able to innovate
enough to compete in the future? Very slim in my opinion. Unless they perform some seri-
ous research, I’d give them a couple more years of survival.

With all that said, I tried to squeeze every ounce of performance out of the run-time
system that I could under the constraints that I did not adversely impact portability or
design simplicity.

Arguments Against a Virtual Machine


There are several arguments against using a virtual machine, the primary one being perfor-
mance. One might argue that because a compiled language, like C, is executed using the
native machine encoding, it will run faster. This is not necessarily true.
In 1996 I was discussing the merits of Java’s portability with a coworker. The com-
pany had decided to rewrite its user interface and the entire business had split into two
camps: C++ evangelists and Java fanatics.
Given that Java was still hot off the press, it didn’t yet have mature GUI components
and its enterprise facilities were nonexistent. To make matters worse, the C++ sect was
showing off an elaborate IDE and other sophisticated development tools. The Java devel-
opers, on the other hand, only had the command-line tools of the JDK. This kind of
distinction doesn’t make much difference to experienced developers. It did, unfortunately,
make a big impression on the executive officers.
One Achilles heel of C++ was its lack of portability across the platforms the company
deployed on. When I mentioned the fact that Java did not have this problem, there was this
one developer from the C++ crowd who would stand up and yell out: “Oh, yeah, Java
...write once, runs like crap everywhere!”
I knew he was wrong, but, hey, this was a religious war and he was merely spreading
disinformation and spouting off propaganda. In the end, upper management gave in to the
slick sales pitch of the C++ proponents. The Java people, however, did get the last laugh:
The C++ project failed miserably, for a variety of reasons, and the vice presidents who ini-
tially shunned us came running back to us and begged us to rescue their jobs after wasting
29 man-years and millions of dollars.
Native C++ code is not necessarily faster than bytecode executed by a virtual
machine. The truth is that the majority of execution time is spent in run-time libraries and
kernel-mode interrupt handlers. The only time native code would actually execute faster is
when you are dealing with an isolated unit of code that stands completely by itself and
does not invoke a user library or system call.
An example of this would be a program that encrypts data. Encryption is a
computationally intensive process that involves a large number of fundamental arithmetic
and bitwise operations. Calls made to the operating system constitute a minority of the
work done. Most of the time is spent twiddling bits and exercising the processor’s most
primitive facilities. In this case, C++ would probably be faster.
If you want to push the envelope for virtual machine performance, most virtual
machines have facilities to convert the virtual machine’s bytecode to native. There are any
number of mechanisms. A program’s bytecode can be converted to the native machine
encoding at compile time using a bytecode-to-native compiler. Or, bytecode instructions
can be converted to native at run time using a just-in-time (JIT) compiler. An even more
sophisticated approach is to have the run-time system monitor which bytecode is executed
the most frequently, and then have the most frequently executed bytecode instructions
converted to native at run time. This is what is known as an adaptive form of JIT compil-
ing. Some people would argue that this is all a moot point because the portability achieved
by targeting a virtual machine can often more than compensate for any perceived loss in
performance.
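
Just to give you a feel for what the adaptive approach involves, here is a toy sketch of the
bookkeeping. The opcodes, the hot-spot threshold, and the markHot() stub are all invented
for the sake of illustration; this is not how HEC, or any particular commercial virtual
machine, actually does it.

#include <stdio.h>

#define OP_HALT        0
#define OP_NOP         1
#define HOT_THRESHOLD  3      /* purely arbitrary for this example */

static unsigned long hits[16];               /* one counter per bytecode address */

static void markHot(int addr)                /* stand-in for a real native-code back end */
{
    printf("address %d is hot; translate it to native code\n", addr);
}

int main(void)
{
    unsigned char code[] = { OP_NOP, OP_NOP, OP_HALT };
    int pass;
    int ip;

    for (pass = 0; pass < 5; pass++)         /* pretend this code path runs over and over */
    {
        for (ip = 0; code[ip] != OP_HALT; ip++)
        {
            if (++hits[ip] == HOT_THRESHOLD)
            {
                markHot(ip);
            }
            /* ... normal bytecode dispatch would happen here ... */
        }
    }
    return 0;
}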

Looking Ahead
When Moore’s Law succumbs to the laws of physics, the hardware engineers will no lon-
ger be able to have their cake and eat it too. They will not be able to make transistors any
smaller. As a result, the only way to increase the number of transistors on a chip will be to
increase the size of the chip itself. That’s right, computers will have to start getting larger!
In an effort to keep processors at a manageable size, the software algorithms that use
the hardware will have to become more efficient. Thus, part of the responsibility for con-
structing fast programs will be passed back to the theoreticians and their whiteboards.
There is a tendency among commercial software engineers to solve a performance
problem by throwing more hardware at it. It is a common solution because the results are
immediate and it expedites time to market. Within the next 50 years, it may actually
become more cost effective to invest in developing better software instead of relying on
faster machines. Suddenly it will be worth an engineer’s time to sit down and think about
efficient solutions instead of hacking out a kludge.
The end of Moore’s Law may provide enough impetus for a major paradigm shift in
computer science as researchers are forced to look for better algorithms.
By the time Moore’s Law has exhausted itself, computer processors may also be as
sophisticated and powerful as the human brain. This will allow computers to address and
solve problems which have, in the past, only been contemplated by humans. However, this
kind of computing power also provides ample opportunity for misuse. Lest you forget, the
development of other technological innovations, like atomic energy and radar, were first
put to use, on a large scale, as instruments of war.
Computers in the future could be designed to accurately model and forecast human
behavior. If recursive optimization algorithms allowed a computer to beat a chess grand
master, imagine what several generations of advancement will produce. Human behavior
might very well be distilled into an elaborate decision matrix. Given a couple hundred
thousand personality input parameters, a computer could take a particular person and
determine how he or she will behave in a given situation. If you can predict how someone
is going to behave under certain circumstances, then you can effectively control their
behavior, or at least preempt it. Imagine how a government could use this kind of tool to
repress its citizens by instituting behavior modification on a national scale. As computing
technology becomes more powerful and more ubiquitous, several ethical questions will no
doubt rise to the surface and demand attention.

Lessons Learned
There are a few things I’ve learned during the implementation phase of my project that I’d
like to share with you before we move to the next chapter.
First, I’d like to point out that there is no panacea for sloppy code that has been poorly
written in an effort to get a product out the door. Rushing to meet a deadline and relying on
the quality assurance (QA) people to do the rest is a recipe for disaster.
The only way to produce bulletproof software is to do it the slow, gradual, painful
way. Anyone who tells you otherwise has an agenda or is trying to sell you snake oil. This
means that after constructing each individual module of code, you need to barrage it with
test cases. Throw everything at that code but the kitchen sink. The goal is to try and find
that one obscure combination of application data and program state you didn’t prepare for.
One trick I like to use is to imagine that I’m a lawyer preparing a binding contract. Assume
that at every point in this contract, some other opposing lawyer is going to try and find a
loophole so that the agreement can be invalidated. I write every line of source code with
the intention of foiling some imagined adversary who’s going to try to bamboozle me. It’s
a cumulative process; each time you add more functionality to your application you need
to test. This is not the QA’s responsibility either, it’s the developer’s.
There are a couple of engineering techniques I use that can make the process easier.
First, build your own test modules and use them to verify that your code works after you
have made any significant changes. If you’re using an object-oriented language, this is as
easy as adding another method to the class that you’re working on. I understand that creat-
ing a test module can be difficult, especially when the component you’re working on
interacts with several other complicated modules. This means that you’re going to have to
invest the time to recreate the environment in which your component functions. This may
seem like a lot of work, but it’s time well spent because it allows you to isolate your code
from the rest of the system and tinker with it.
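
To give you a concrete (if tiny) picture of what such a throwaway test module might look
like in C, consider the following sketch. The routine under test, addChecked(), is a made-up
stand-in for whatever module you happen to be working on.

#include <assert.h>
#include <stdio.h>

/* The routine being tested -- a placeholder for a real module's function. */
static int addChecked(int a, int b)
{
    return a + b;
}

/* A throwaway test module: barrage the routine with cases and fail loudly. */
static void testAddChecked(void)
{
    assert(addChecked(2, 2) == 4);
    assert(addChecked(-1, 1) == 0);
    assert(addChecked(0, 0) == 0);
    printf("addChecked: all cases passed\n");
}

int main(void)
{
    testAddChecked();
    return 0;
}
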
Another thing you can do is insert debugging statements into your code that print
run-time information to standard output. One problem with this technique is that debug-
ging statements can take a toll on performance. A program that pipes several lines of data
to the screen before committing a transaction wastes valuable processor cycles. One way
to eliminate the overhead is to simply comment out the debugging statements before send-
ing the program into production. This is a questionable solution because when the
developers discover a bug and need to turn debugging back on, they’ll have to go through
and manually activate each individual debugging statement. If the application in question
consists of 150,000 lines of code, manually un-commenting debug statements is not an
elegant solution.
If you’re using a language like C, which has a preprocessor, you’re in luck. I find that
using macro-based debug statements is a nice compromise. You can activate them on a
file-wide or program-wide basis simply by defining a macro. Likewise, you can strip them
out completely by leaving the macro undefined.

For example:
#define DEBUG 1

#ifdef DEBUG
#define DEBUG_PRINT(arg) printf(arg)
#else
#define DEBUG_PRINT(arg)
#endif
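
A debugging statement sprinkled into application code might then look like the fragment
below (the function name is hypothetical). When DEBUG is left undefined, the preprocessor
strips the call out entirely, so no run-time overhead remains.

void commitTransaction(void)
{
    DEBUG_PRINT("entering commitTransaction\n");   /* vanishes when DEBUG is undefined */

    /* ... the real work of the function ... */
}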

NOTE These techniques may sound stupidly simple. However, to borrow a line
from Murphy, if it’s simple and it works, then it’s not stupid. Another thing I should
point out is that these techniques are intended to be applied during development. QA
engineers have their own universe of testing methodologies and strategies, which I
won’t even try to delve into.

The process of testing during development may also require that a certain amount of tem-
porary scaffolding be written around the partially constructed application. Think of this
like you would think of scaffolding set up around a building under construction. Scaf-
folding provides a temporary testing environment. It allows the developer to move around
and add features without having to rely on the paths of execution that will be there when
the application goes into production. When the application is done, the scaffolding natu-
rally falls away and you can pass things on to the QA people.
Some engineers might think that writing all this throwaway code is a waste of effort,
particularly if they are under a tight schedule. The investment in time, however, is well
spent. I’d rather spend a little extra time during development and write a moderate amount
of throwaway code than get called at 3 A.M. by an angry system admin whose server has
crashed because of my software, and then be forced to dissect a core dump.
Once you’ve integrated all the modules of your program, and you feel like you have a
finished product, you should stop and take a moment to remind yourself of something.
Don’t think about adding new features. Don’t even think about optimizing the code to
speed things up. Go back to your source code and fix it. I guarantee you, there’s something
that’s broken that you haven’t taken into account. Pretend you’re Sherlock Holmes and go
find it. Only when you’re spending several days searching without finding a problem
should you feel secure enough to allow the QA people to take a look.
This mindset runs in direct opposition to that of the marketing employee. This is par-
ticularly true in a startup company where the primary objective of every manager is to get
product out the door. Marketing people don’t want to hear: “slow, gradual, painful.” Mar-
keting people want to hear: “Next week.” I have a special dark corner of my heart reserved
for people in sales, primarily because they spend a good portion of their days trying to
manipulate potential customers by telling them half-truths. Not only that, but most of them
are unaware of what is involved in developing software. I don’t know how many times
some salesperson has put me in a difficult position because he made an unrealistic promise
to a customer about when the next release of a product was due.
As a software developer, you can take steps to protect yourself. One thing you should
realize is that all the software projects that you work on may end up being “code red”
emergencies. In these cases, you should realize that management is merely crying wolf.
Their proclamation that the sky is falling is just a technique they use to get you to work
faster.
Another evil that may be attributed to marketing people is feature creep. In order to
distinguish their product from the competitors, a salesperson will often take liberties with
the product description. A few days later, said salesperson will appear in your cube to
break the news about the features you have only three days to implement. Again, you can
protect yourself. When a project begins, get the requirements signed by the head of mar-
keting. When the salesperson shows up in your cube and attempts to coerce you, shove the
requirements in his face and tell him to take a hike.
The only alternative to pedantic testing is to ship software that is “good enough.” I
hate this idea. I hate it because it attempts to legitimize the practice of shipping software
that contains bugs that the developers know about and could fix. Instead of fixing the bugs,
they decide to ship it, and that is unacceptable. Once shipping significantly faulty software
becomes the status quo, where does it stop? It effectively lowers the bar for the entire
industry.
Fortunately, free enterprise, to an extent, takes care of this problem. If company A
slips up and sells a lemon, then company B can step up to the plate and offer something
better. This normally causes company A to institute sufficient quality controls so that com-
pany B does not have the opportunity to offer its services. The only case where shipping
“good enough” software allows a company to stay in business is when that company has a
captive audience.
The bad news is that it’s easy to become a captive audience. Consulting firms have
been known to burn clients this way. Once a client has hired a consulting firm and started
shelling out cash, it is basically at the mercy of the consulting firm. To make matters
worse, the name of the game for consultants is billable hours. If things head south, the con-
sultants may decide to keep their mouths shut and their heads low while the meter runs. In
this scenario they may very well freeze their code, declare victory, and then get the heck
out of town. By the time the client realizes that the system he was promised doesn’t func-
tion properly, the consultants are already on the airplane to their next victim. Naturally the
client has the option of litigation, but most of the damage has already been done: Time and
money have been invested that will never show a return.
The lesson I have learned from this is that people cannot discover the truth about
developing robust software because the truth is not always something people want to
accept. In a way, it’s similar to physical fitness. There is no easy way to become physically
fit other than to sacrifice, be rigidly disciplined, and be consistent. There is a whole indus-
try of magic pills and diets which has been built up around appealing to naive people who
think that they can sidestep the necessary hard work. I think software is the same way.
Robust software is best constructed through a lot of careful planning, gradual software
development, constant vigilance with regard to isolating errors, and frequent testing. The
only way is the painful way.

References
Brooks, Frederick. The Mythical Man-Month. Addison-Wesley: 1995. ISBN:
0201835959.
This reprint contains extra material, like Brooks’ famous “No Silver Bullet” essay.
Brooks touches on some interesting points with respect to creativity being a neces-
sary ingredient in great designs. This book is nothing short of the canon of modern
software engineering.
Buyya, Rajkumar. High Performance Cluster Computing: Volume 1 Architectures and
Systems. Prentice Hall: 1999. ISBN: 0130137847.
A solid, though somewhat theoretical, look at design issues for clustered systems
and contemporary implementations.
Corbató, Fernando. “On Building Systems that will Fail.” Communications of the ACM,
34(9): 72-81, September 1991.
This is Corbató’s Turing Award lecture on building robust computer systems.
Halfhill, Tom. “Crash-Proof Computing.” BYTE, April 1998.
This is a very enlightening article on what, besides size and processing power,
exactly distinguishes a mainframe from a PC.
Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelli-
gence. Penguin Paperback, 2000. ISBN: 0140282025.
Microsoft, http://www.howstevedidit.com.
This link will take you to a web site that includes an in-depth look at how the Chi-
cago Stock Exchange runs its entire operation on commodity hardware and
Windows NT. This case study is a graphic example of cluster computing.
Stross, Randall. The Microsoft Way. Addison-Wesley, 1996. ISBN: 0201409496.
Recounts the development of Microsoft before the Internet explosion. This book
focuses on internal issues like how Microsoft hires new employees and develops
new products. Stross seems to emphasize that Microsoft’s success is not just a mat-
ter of luck.
Chapter 2
Basic Execution Environment
Overview
This chapter discusses the basic resources the HEC run-time environment has at its dis-
posal and how it manages them. Determining how HEC would handle its resources
entailed making far-reaching architectural decisions. Because of this, I made it a point to
do some research on the issues of memory management, process management, and proces-
sor operation. In this chapter, I share a few of my findings with you and then explain how
they influenced HEC’s design.

ASIDE It may seem like I am wading through an awful lot of irrelevant theory. This
is, however, a necessary evil. Just like learning algebra, trigonometry, and calculus
before you jump into differential equations, covering the fundamentals is a prerequi-
site to understanding certain decisions that I made about HEC’s makeup.
In fact, some of the choices that I make will seem arbitrary unless you can understand
the context in which I make them. The somewhat dry topics I present in this chapter
provide an underlying foundation for the rest of this book. So don’t doze off!

Notation Conventions
Numeric literals in this text are represented using the standard conventions of the C pro-
gramming language. Decimal literals always begin with a decimal digit in the range from 1
to 9. Hexadecimal literals begin with the prefix 0x (or 0X). Octal literals begin with a leading 0.
314159 Decimal numeric literal
0xA5 Hexadecimal numeric literal
0644 Octal numeric literal

NOTE Some people may view octal literals as a thing of the past. It really depends
mostly on your own experience. Engineers who muck about in Windows probably
think that octal is some type of historical oddity. Octal constants are, however, used all
over the place in Unix to define file access rights, application permissions, and other
system-related attributes. As one of my co-workers, Gene Dagostino, once said:
“Sometimes, it just depends on where you are coming from.”


A bit is defined as a single binary digit that is either 0 or 1. A byte is a series of eight bits. A
long time ago in a galaxy far, far away the size of a byte varied from one hardware platform
to the next. But almost everyone, with the exception of maybe Donald Knuth (who is stuck
in the 1960s with his hypothetical MIX machine), treats a byte as a collection of eight bits.
A whole hierarchy of units can be defined in terms of the byte.
1 byte               8 bits
1 word               2 bytes (bytes, not bits)
1 double word        4 bytes
1 quad word          8 bytes
1 paragraph          16 bytes
1 kilobyte (1 KB)    1,024 bytes
1 megabyte (1 MB)    1,024 KB = 1,048,576 bytes
1 gigabyte (1 GB)    1,024 MB = 1,073,741,824 bytes
1 terabyte (1 TB)    1,024 GB = 1,099,511,627,776 bytes
Bits within a byte will be displayed in this book from right to left, starting with the least
significant bit on the right side. For an example, see Figure 2-1.

Figure 2-1

Veterans of the hardware wars that took place in the turbulent 1960s tend to get a nos-
talgic, faraway look in their eyes when they recall the 16 KB core memory that their fabled
supercomputers used to have (like the CDC 6600). Back when memory was implemented
using circular magnets, 16 KB of memory required the same space as a deluxe model
refrigerator. Engineers who have worked on such machines have been known to say things
like: “It was quite an impressive machine in its day. . . sigh.”

NOTE Let this be a lesson to you. Don’t get too attached to hardware; there’s
always something faster and more powerful around the corner. Today’s supercom-
puter is tomorrow’s workstation. It is wiser to attach yourself to an idea. Take the
quicksort algorithm, for example. It was discovered by C.A.R. Hoare in 1962 and is
still used today as an in-place sorting mechanism. Good ideas live on; hardware is
temporal and fleeting. C.A.R. Hoare is like Dijkstra; he tends to pop up everywhere in
computer science.

Run-time Systems and Virtual Machines


A run-time system is an environment in which computer programs execute. A run-time
system provides everything a program needs in order to run. For example, a run-time sys-
tem is responsible for allocating memory for an application, loading the application into
the allocated memory, and facilitating the execution of the program’s instructions. If the
program requests services from the underlying operating system through the invocation of
system calls, the run-time system is in charge of handling those service requests. For
instance, if an application wants to perform file I/O, the run-time system must offer a
mechanism to communicate with the disk controller and provide read/write access.
There are different kinds of run-time systems. One way to classify run-time systems is
to categorize them based on how they execute a program’s instructions. For programs
whose instructions use the processor’s native machine encoding, the run-time system con-
sists of a tightly knit collaboration between the computer’s processor and the operating
system. The processor provides a mechanism for executing program instructions. The
CPU does nothing more than fetch instructions from memory, which are encoded as
numeric values, and perform the actions corresponding to those instructions. The operat-
ing system implements the policy side of a computer’s native run-time system. The CPU
may execute instructions, but the operating system decides how, when, and where things
happen. Think of the operating system as a fixed set of rules the CPU has to obey during
the course of instruction execution.
Thus, for programs written in native machine instructions, the computer itself is the
run-time system. Program instructions are executed at the machine level, by the physical
CPU, and the operating system manages how the execution occurs. This type of run-time
system involves a mixture of hardware and software.
Programs whose instructions are not directly executed by the physical processor
require a run-time system that consists entirely of software. In such a case, the program’s
instructions are executed by a virtual machine. A virtual machine is a software program
that acts like a computer. It fetches and executes instructions just like a normal processor.
The difference is that the processing of those instructions happens at the software level
instead of the hardware level. A virtual machine also usually contains facilities to manage
the path of execution and to offer an interface to services normally provided by the native
operating system.
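
To make this concrete, the fragment below sketches the fetch-decode-execute cycle that sits
at the heart of every virtual machine. Keep in mind that the three-instruction stack machine
shown here is a toy I invented purely for illustration; it is not the HEC instruction set, which
is covered later in this book.

#include <stdio.h>

#define OP_HALT  0x00
#define OP_PUSH  0x01     /* push the next byte in the stream onto the stack */
#define OP_ADD   0x02     /* pop two values and push their sum               */

int run(unsigned char *code)
{
    int ip = 0;           /* instruction pointer              */
    int sp = -1;          /* stack pointer (top of the stack) */
    int stack[64];

    for (;;)
    {
        unsigned char opcode = code[ip++];       /* fetch  */
        switch (opcode)                          /* decode */
        {
            case OP_PUSH:                        /* execute */
                stack[++sp] = code[ip++];
                break;
            case OP_ADD:
                sp--;
                stack[sp] = stack[sp] + stack[sp + 1];
                break;
            case OP_HALT:
                return stack[sp];
            default:
                printf("bad opcode %X\n", opcode);
                return -1;
        }
    }
}

int main(void)
{
    unsigned char program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };

    printf("%d\n", run(program));    /* prints 5 */
    return 0;
}
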
A virtual machine is defined by a specification. A virtual machine is not a particular
software implementation, but rather a set of rules. These rules form a contract that the
engineer, who builds an instantiation of the virtual machine, must honor. A virtual
machine can be implemented in any programming language on any hardware platform, as
long as it obeys the specification. You could create a version of the HEC virtual machine
on an OS/390 using APL if you really wanted to. This is what makes the idea of a virtual
machine so powerful. You can run HEC executables, without recompilation, anywhere
there is a run-time system that obeys the specification.

NOTE Virtual machines are run-time systems, but not all run-time systems are vir-
tual machines.

Memory Management
Memory provides a way to store information. A computer’s storage facilities may be clas-
sified according to how quickly the processor can access stored data. Specifically, memory
can be broken up into five broad categories. Ranked from fastest to slowest access speed,
they are:
- Processor registers
- Processor cache
- Random access memory
- Local disk-based storage
- Data stored over a network connection
A processor’s registers are small storage slots (usually 16 to 128 bits in length) located
within the processor itself. Some processors also have a larger built-in area, called a cache,
where up to several hundred kilobytes of data can be stored. Because both the registers and
the cache are located within the processor, the processor can most quickly access data in
the registers and cache. The amount of storage provided by the registers and cache, how-
ever, is strictly limited due to size constraints. As a result, most of the data a processor
works with during program execution is stored in random access memory.

NOTE As our ability to miniaturize transistors nears the three-atom limit, more
memory and peripheral functionality will be placed within the processor itself. This
could potentially lead to an entire computer being placed within a single chip — a
system on a chip.

Random access memory holds the middle ground in the memory hierarchy. It is usually
provided by a set of chips that share real estate with the processor on the motherboard.
Random access memory is slower but more plentiful than the resources on the processor. It
is, however, much faster than disk-based or network-based storage. Because disk- and net-
work-based storage involve much longer access times than random access memory, they
are primarily used only when random access memory has been exhausted. Disk and net-
work storage are also used to persist data so that it remains even after the computer is
powered down.

NOTE In the remainder of the book, when the term “memory” is used, I will be
referring to random access memory.

The question of what memory is has been answered. Now we will look at how memory is
used. There are three levels at which memory can be managed by a computer:
- Machine level
- Operating system level
- Application level

Machine-Level Management
At the machine level, memory consists of a collection of cells that are read and written to
by the processor. A memory cell is a transistor-based electrical component that exists in
two possible states. By mapping the digits 1 and 0 to these two states, we can represent the
state of a memory cell using a bit.
Memory cells can be grouped together to form bytes such that memory can be viewed
as a contiguous series of bytes. Each byte in the series is assigned a nonnegative sequence
number, starting with 0. Figure 2-2 illustrates this approach. The sequence number
assigned to a byte is referred to as the address of the byte. This is analogous to the way
houses on a block are assigned addresses.

Figure 2-2
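
If you want to see this addressing scheme from the comfort of your own compiler, the
snippet below prints the address of each byte in a small array. Consecutive bytes receive
consecutive addresses, just like houses on a block (the actual numbers you see will, of
course, vary from one machine and one run to the next).

#include <stdio.h>

int main(void)
{
    unsigned char block[4] = { 10, 20, 30, 40 };
    int i;

    /* Consecutive bytes are assigned consecutive addresses. */
    for (i = 0; i < 4; i++)
    {
        printf("address %p holds the value %u\n", (void *)&block[i], block[i]);
    }
    return 0;
}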

The processor accesses and manipulates memory using buses. A bus is nothing more
than a collection of related wires that connect the processor to the subsystems of the com-
puter (see Figure 2-3). To interact with memory, the processor uses three buses: the control
bus, the address bus, and the data bus. The control bus is used to indicate whether the pro-
cessor wants to read from memory or write to memory. The address bus is used to specify
the address of the byte in memory to manipulate. The data bus is used to ferry data back
and forth between the processor and memory.

Figure 2-3

The number of wires in an address bus determines the maximum number of bytes that
a processor can address. This is known as the memory address space (MAS) of the proces-
sor. Each address bus wire corresponds to a bit in the address value. For example, if a
processor’s address bus has 32 lines, an address is specified by 32 bits such that the proces-
sor can address 232 bytes, or 4 GB.
To read a byte, the following steps are performed:
1. The processor places the address of the byte to read on the address bus.
2. The processor sends the read signal to memory using the control bus.
3. Memory sends the byte at the specified address on the data bus.
To write a byte, the following steps are performed:
1. The processor places the address of the byte to be written on the address bus.
2. The processor sends the write signal to memory using the control bus.
3. The processor sends the byte to be written to memory on the data bus.
The above steps are somewhat simplified versions of what really occurs. Specifically, the
processor usually reads and writes several bytes at a time. For example, most Intel chips
currently read and write data in four-byte clumps (which is why they are often referred to
as 32-bit chips). The processor will refer to a 32-bit packet of data using the address of the
byte that has the smallest address.
In general, a processor executes instructions in memory by starting at a given address
and sequentially moving upwards towards a higher address, such that execution flows
from a lower address to a higher address (see Figure 2-4).

Figure 2-4

Memory is not just used to store program instructions. It is also used to store data.
When a data value only requires a single byte, we can refer to the data value using its
address. If a data value consists of multiple bytes, the address of the lowest byte of the
value is used to reference the value as a whole. Figure 2-5 illustrates this point.

Figure 2-5

There are two different ways to store multibyte data values in memory: big-endian
and little-endian. The big-endian convention dictates that the most significant byte of a
value has the lowest address in memory. The little-endian convention is just the opposite
— the least significant byte of a value must have the lowest address in memory (Figure 2-5
uses the little-endian format).
Here’s an example. Let’s say you have the multibyte value 0xABCDEF12 sitting
somewhere in memory (for example, starting at address 24). The big- and little-endian
representations of this value are displayed in Figure 2-6.

Figure 2-6

The storage method used will vary according to the hardware platform you’re on. For
example, the Intel family of 32-bit processors is a little-endian platform. If you own a PC
that uses an Intel processor, you can prove this to yourself with the following program:
#include <stdio.h>

int main(int argc, char *argv[])
{
    unsigned long value = 0xABCDEF12;
    unsigned char *arr;

    /* View the same bytes of storage as an array of unsigned chars. */
    arr = (unsigned char *)&value;
    printf("%X %X %X %X\n", arr[0], arr[1], arr[2], arr[3]);
    return 0;
}

If you are on an Intel-based platform, this program should print out: 12 EF CD AB



NOTE Arrays in C are always indexed from low address to high address. Thus, the
first element of the array (i.e., arr[0]) also has the lowest address. The reason behind
this is that arr[3] is the same as arr+3. This means that the index is really an offset
from the first element of the array.

ASIDE Endianness is a major issue in terms of porting code from one platform to
the next. For the sake of keeping HEC applications uniform, I arbitrarily decided that
all multibyte values will be stored in a HEC executable file using the big-endian for-
mat. It is up to the HEC run-time system to convert these values to the native format at
run time. If the native format is big-endian, the run-time doesn’t have to do anything.
If the native format, however, is little-endian, the run-time system will have to do a
quick conversion. The specifics of this conversion are covered in Chapter 3.
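
In case it helps to see the idea ahead of time, the following is a minimal sketch of such a byte swap for a 32-bit value. It is only an illustration of the concept, not the routine HEC actually uses; the name byteSwap32 is mine, and the sketch assumes unsigned long is 32 bits wide:

/* reorder the four bytes of a 32-bit value (big-endian <-> little-endian) */
unsigned long byteSwap32(unsigned long value)
{
    return ((value & 0x000000FFUL) << 24) |
           ((value & 0x0000FF00UL) << 8)  |
           ((value & 0x00FF0000UL) >> 8)  |
           ((value & 0xFF000000UL) >> 24);
}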

NOTE Data represented using the big-endian method is said to be in network
order. This is because network protocols like TCP/IP require certain pieces of
information to be sent across a network in big-endian format.
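
On POSIX systems this convention is exposed directly through the htonl() and ntohl() library calls ("host to network long" and back). The short program below is only a sketch of their use: on a big-endian machine htonl() is a no-op, while on a little-endian PC it performs the byte swap described above.

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl(); on Windows the same call lives in winsock2.h */

int main(void)
{
    uint32_t host = 0xABCDEF12;
    uint32_t net  = htonl(host);   /* value re-expressed in network (big-endian) order */

    /* on a little-endian Intel box this prints ABCDEF12 and 12EFCDAB */
    printf("host order: %08X  network order: %08X\n",
           (unsigned)host, (unsigned)net);
    return 0;
}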

Operating System Level


Instead of viewing memory in terms of component subsystems and bus interfaces, operat-
ing systems have the luxury of an abstracted view of memory, where memory consists of a
series of contiguous bytes. Each byte has its own unique integer address starting with 0.
The byte whose address is 0 is said to be at the bottom of memory.
An operating system is really just a complicated program that governs the execution
of other programs and handles requests that those other programs might make for machine
resources. Operating systems have a lot of responsibility when it comes to administrating
the internal operation of a computer. Think of an operating system like a den mother who
is presiding over a whole troop of Cub Scouts. If a certain amount of discipline is not occa-
sionally applied, chaos will ensue.
Rather than treat memory as one big amorphous blob of bytes, and face almost certain
application failures and memory shortages, there are two mechanisms an operating system
can apply to manage memory effectively: segmentation and paging.
Computers usually have less memory than their memory address space allows. For
example, the typical 32-bit Intel processor can address 4 GB of memory, but unless you’ve
got a lot of cash to burn on RAM chips you’ll probably have only a fraction of this. Most
people are satisfied with 128 MB of memory. Disk storage is much cheaper and much
more plentiful than memory. Because of this, disk storage is often used to simulate mem-
ory and increase the effective addressable memory. This is what paging is all about.
When paging is used by an operating system, the addressable memory space artifi-
cially increases such that it becomes a virtual address space. The address of a byte in the
new, larger virtual memory space no longer matches what the processor places on the
address bus. Translations must be made so that physical memory and virtual memory
remain consistently mapped to each other.
Paging is implemented by dividing virtual memory (memory that resides in the virtual
address space) into fixed-size “pages” of storage space (on the Intel platform, the page size
is usually 4,096 bytes). If an operating system detects that it is running out of physical
memory, it has the option of moving page size chunks of memory to disk (see Figure 2-7).

Figure 2-7
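
To make the bookkeeping concrete, here is a minimal sketch of how a 32-bit address breaks down into a page number and an offset within that page, assuming 4,096-byte pages. The real translation involves page tables and other hardware-specific structures; this only shows the arithmetic:

#include <stdio.h>

#define PAGE_SIZE 4096UL   /* assume 4 KB pages, the common Intel default */

int main(void)
{
    unsigned long address = 0x0040A123UL;          /* an arbitrary virtual address */
    unsigned long page    = address / PAGE_SIZE;   /* which page the byte lives in */
    unsigned long offset  = address % PAGE_SIZE;   /* position of the byte in that page */

    printf("address=%08lX page=%lu offset=%lu\n", address, page, offset);
    return 0;
}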

Modern implementation of paging requires the integration of processor and operating
system facilities. Neither the operating system nor the processor can do it all alone; they
both have to work together closely. The operating system is responsible for setting up the
necessary data structures in memory to allow the chip to perform bookkeeping operations.
The processor expects these data structures to be there, so the operating system has to cre-
ate them according to a set specification. The specifics of paging are hardware dependent.
Most vendors will provide descriptions in their product documentation.

ASIDE The current family of 32-bit Intel processors can page memory to disk in
three different sizes: 4 KB (4,096 bytes), 2 MB, and 4 MB. There is a system register named
CR4 (control register 4). The fourth and fifth bits of this register are flags (PSE and PAE)
that can be set in certain combinations to facilitate different page sizes, although, to
be honest, I have no idea why anyone would want to use 4 MB memory pages. I pre-
fer to buy a lot of RAM and then turn paging off entirely.

The hardware/software cooperation involved in paging is all done in the name of perfor-
mance. Paging to disk storage is such an I/O-intensive operation that the only way to
maintain an acceptable degree of efficiency is to push the work down as close to the pro-
cessor as possible. Even then, that might not be enough. I’ve been told that
hardware-based paging can incur a 10 percent performance overhead on Microsoft Win-
dows. This is probably one reason why Windows allows you to disable paging.
Segmentation is a way to institute protection. The goal of segmentation is to isolate
programs in memory so that they cannot interfere with one another or with the operating
system. Operating systems like DOS do not provide any protection at all. A malicious pro-
gram can very easily take over a computer running DOS and reformat your hard drive.
Segmentation is established by dividing a program up into regions of memory called
segments. A program will always consist of at least one segment and larger programs will
have several (see Figure 2-8). Each of an application’s segments can be classified so that
the operating system can define specific security policies with regard to certain segments.
For example, segments of memory containing program instructions can be classified as
read only, so that other segments cannot overwrite a program’s instructions and alter its
behavior.

Figure 2-8

ASIDE Some paging schemes, like the one Intel processors use, allow pages of
memory to be classified in a manner similar to the way in which memory segments
are classified. A page of memory can be specified as belonging to a particular pro-
cess, having a certain privilege level, etc. This means that paging can be used to
augment the segmentation mechanism, or even replace it completely.

Because memory is referenced by programs so frequently, segmentation is normally
implemented at the hardware level so that run-time checks can be performed quickly.
There are, however, a couple of ways that protection may be instituted without relying on
hardware. There are two purely software-based mechanisms I stumbled across in my
research: sandboxing and proof-carrying code.
Initial research on sandboxing was done in an effort to speed up interprocess commu-
nication. Researchers (Wahbe et. al.) took the address space allocated to a program and
artificially divided it into two subspaces and a small common space. The small common
space was used to allow the two subprograms to communicate. Traditionally, interprocess
communication involves using operating system services. Using operating system ser-
vices, however, results in a certain amount of overhead because the program has to stop
and wait for the operating system to respond to its requests. The initial goal of this research
was to find a way to speed up IPC, but in the process they also discovered a software-based
protection mechanism.
The researchers found a way to divide a single memory segment into sections whose
barriers were entirely enforced by software checks, which can be performed at run time.
Using a DEC Alpha 400, the researchers determined that the additional run-time checks
incurred an additional overhead of 4 percent when compared to a traditional IPC approach.
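
The flavor of such a check is roughly as follows. This is only an illustrative sketch of the address-masking idea behind sandboxing, not the researchers' actual code; SEGMENT_MASK and SEGMENT_TAG are hypothetical values describing a 16 MB sandbox region on a 32-bit machine:

/* force a pointer into the sandboxed region before it is dereferenced:
   keep the low-order offset bits and overwrite the high-order bits with the
   tag that identifies the region the code is allowed to touch */
#define SEGMENT_MASK  0x00FFFFFFUL   /* low 24 bits select a byte within the region */
#define SEGMENT_TAG   0x45000000UL   /* high bits identify the permitted region */

unsigned char *sandboxAddress(unsigned char *ptr)
{
    unsigned long raw = (unsigned long)ptr;
    return (unsigned char *)((raw & SEGMENT_MASK) | SEGMENT_TAG);
}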

The idea of proof-carrying code (PCC) is to use specially modified development tools
that will compile applications to a special format. The resulting executable is encoded in
such a way that the run-time system can verify that the application obeys a certain security
model. This is an interesting alternative to sandboxing because the proof verification
occurs once when the application is loaded into memory, as opposed to a continuous series
of run-time checks.
The bytecode verification facilities provided by Java are an example of this approach.
Safety constructs are embedded in the language itself to facilitate protection without the
need to rely on hardware. This is particularly important for Java because the language
requires a degree of platform independence.
There are problems with proof-carrying code. The first one is performance. If a large
application is loaded into memory, there is the potential for a considerable time lag before
execution begins. I worked for a bank that had this problem. The application server they
used was written completely in Java and consisted of thousands of individual class files.
When the server restarted, usually after a crash of some sort, there would often be notice-
able time lags as objects were dynamically loaded into memory. To address this issue, they
merely forced all the objects to load into memory when the Java virtual machine was
invoked. This, in turn, forced verification to be performed before the system actually went
back into production.
Another problem with proof-carrying code is that there is no mechanism to verify that
the verifier itself is functioning correctly. This is like a police department that has no inter-
nal affairs division. The verifier might be corrupt and allow malicious code to execute.

Application Level
As stated earlier, an operating system allocates memory on behalf of an application and
then divides this memory into one or more segments. There are several different types of
segments an application makes use of (see Figure 2-9). These types are:
- Text segment
- Data segment
- Stack segment
- Heap
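
A quick way to get a feel for these regions is to print the addresses of items that live in them. The program below is only a sketch along those lines: it prints the address of a global variable (data segment), a local variable (stack segment), and a dynamically allocated buffer (heap); the machine instructions themselves occupy the text segment. The exact addresses printed will vary from platform to platform.

#include <stdio.h>
#include <stdlib.h>

int globalValue = 5;     /* resides in the data segment */

int main(void)
{
    int localValue = 7;                   /* resides in the stack segment */
    char *buffer = (char *)malloc(64);    /* resides on the heap */

    printf("data:  %p\n", (void *)&globalValue);
    printf("stack: %p\n", (void *)&localValue);
    printf("heap:  %p\n", (void *)buffer);

    free(buffer);
    return 0;
}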