Topics in Dependable Embedded Systems
Carnegie Mellon University
Electrical and Computer Engineering Department
Spring 1999
Editor:
Philip Koopman
Authors:
Michael Carchia
Michael Collins
John DeVale
Adrian Drury
Chris Inacio
Kanaka Juvva
Philip Koopman
Jiantao Pan
Leo Rollins
Mike Scheinholtz
Charles Shelton
Ying Shi
Robert Slater
Eushiuan Tran
This is a collection of student-written reports discussing various aspects
of dependable embedded systems. The papers are the result of a graduate course
that involved intense effort preparing presentations, writing papers, and
collectively exchanging reviews and ideas. Due to time constraints the scope of
the course and the papers is necessarily limited, but the collection does a
good job of outlining just how great a breadth of knowledge an engineer must
have to truly understand this important, growing area. Editing of these papers
is in progress, so they should not be considered definitive on any technical
point.
Reports on 44 areas are available on-line.
Informal conclusions from the experience of putting together this material:
- CONCEPT/DESIGN
- Complexity is bad
- Emergent properties are where the action is
- Safety/dependability is an art (as are most emergent properties)
- Abstraction is a key tool
- Abstract/emergent concepts are hard to quantify
- Life cycle tradeoffs/optimization are required
- IMPLEMENTATION/TOOLS
- Diversity and redundancy are the historically key tools to improve
dependability
- Software != hardware (design != implementation)
- Creating intentionally diverse designs is difficult; worse yet, the design
of even one version is often a bottleneck
- Process != product; improving the process is largely the only way we know to
ensure improved designs, but that doesn't guarantee a better product
- Verification/validation help, but there are few metrics and few ways to
evaluate partial designs
- Learn from historical mistakes
- Future tools: graceful degradation(?); formal methods(?); process
improvement (ISO 9000/TQM/CMM/...)
- UNPLEASANT REALITY
- We're better at building systems that are not novel, but the market often
demands novelty
- No system is perfect (both hard to do, and fundamental tradeoffs)
- Even if we could do optimal designs (which we can't), tradeoffs are
ultimately required, both technical and economic, driven by the profit motive
at the highest level
- As if that weren't bad enough, systems must deal with unpredictability in
the real world (so specifications are never perfect/complete)
- People are part of the system (and are unpredictable both as users and
designers)
- Social/legal issues are a part of reality, and add additional non-technical
constraints
- People will use systems in unexpected/inappropriate ways (for critical
applications when they aren't built for that, and past end of life)
- SOME THINGS WE DO KNOW HOW TO DO
- Error coding
- Mechanical over-design
- Worst case design in absence of failures (e.g., hard reservations)
- Checkpoint/rollback (?)
- Learning from the past, if we pay attention (often attention is commanded
by a high mortality rate)
- SOME THINGS WE'RE BAD AT
- Predicting the future: safety / exceptions / environment
- Highly interdisciplinary cost accounting (life-cycle/business profit
maximization -- it is an emergent property)
- As we learn how to deal with complexity, systems will become more complex
-- so maybe our reach will ever exceed our grasp
© Copyright 1999, Philip Koopman, All Rights Reserved
koopman@cmu.edu