18-649 Simulator Debugging Tips
Last modified 2010-05-09
This page describes some approaches to debugging the elevator simulation and some debugging tools that are available in the simulation framework. These are certainly not the only ways (or even necessarily the best ways) to debug the system. You can use any approach you prefer that allows you to identify the source of bugs in the elevator. Just remember that the code you submit with your portfolio must be compatible with the officially released version of the simulator from the course webpage.
Use the -seed option to repeatably exercise bugs
One of the features of the elevator simulator is that events that occur simultaneously (at the same simulation time) are executed in a pseudorandom order. This simulates jitter in real systems. The -seed option allows you to specify the random seed used to generate the simulation. If you do not specify a random seed, the current timestamp is used. The seed used in each test (the one you specified or the default value) is printed in the .stats file for every test run. If you run the simulator with the same inputs and the same random seed, you should observe the same pseudorandom event ordering, and thus be able to exactly repeat any tests that result in an error. Being able to repeatably run (and debug) the exact test that exercised the bug can be very helpful.
It is very common for subtle bugs (such as race conditions) not to be exercised every time a given test is run. So a best practice is to omit the -seed argument during normal testing. That way, you will run the test with a variety of random seeds and are more likely to exercise subtle bugs. Then use the -seed argument to repeat a failed test for debugging, or if you are required to do so by a project handin requirement.
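The repeatability that -seed relies on is the standard property of seeded pseudorandom generators. The following self-contained sketch does not touch the simulator at all (SeedDemo and its numbers are purely illustrative); it just shows the principle with java.util.Random: same seed, same sequence, so any tie-breaking driven by the generator is identical across runs.

```java
import java.util.Arrays;
import java.util.Random;

public class SeedDemo {
    // Draw a short sequence of pseudorandom values from a fixed seed.
    // This stands in for the simulator breaking ties among simultaneous events.
    static int[] draw(long seed) {
        Random r = new Random(seed);   // fixed seed, like the -seed option
        int[] order = new int[5];
        for (int i = 0; i < order.length; i++) {
            order[i] = r.nextInt(100);
        }
        return order;
    }

    public static void main(String[] args) {
        // Two runs with the same seed produce the same ordering...
        System.out.println(Arrays.equals(draw(649L), draw(649L))); // prints true
        // ...while a different seed (e.g. a timestamp default) generally does not.
        System.out.println(Arrays.equals(draw(649L), draw(650L)));
    }
}
```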
Use breakpoints for bugs in long simulation runs
One of the most difficult debugging tasks is a bug that occurs near the end of a long simulation. The simulator has a breakpoint feature built in. Using the -break option (see the commandline reference for full details), you can specify a list of simulation times where the simulator will automatically pause execution at the time or times specified. Then you can start the simulation at a slow speed to observe the GUI output and any messages printed to stdout. If you are using an interactive debugger (like NetBeans), you can also activate breakpoints in the code before unpausing the simulator. Note that you must use the -gui option in conjunction with the -break option because the GUI has the buttons to pause the simulation and control the rate. If you are setting breakpoints, you can also use the -rate INFINITE option to automatically make the simulation run at top speed (otherwise the default rate with the GUI enabled is 1.0 x realtime). Don't use the -gui flag when the cluster machines are congested with many users at the same time and you want to complete the simulation quickly.
Verbose options and simulator performance considerations
For unit and integration tests, you should always enable verbose output on your controllers. This is the "best practice" and is necessary in some cases where the testing framework cannot capture rapid changes in system state. See the discussion document in the soda machine portfolio example for more details on using the debug output in unit and integration tests.
For acceptance tests, the simulation framework provides a set of options controlling which system modules print their status to the command line (look for the various "verbose" flags in the commandline reference, like -pv, -mv, -cv, etc.). If you run an acceptance test with no verbose flags, the only output you will see is a progress message printed every 60 seconds of simulation time. Once you start invoking the various verbose outputs, you will see the performance of the simulator decrease. This is because the performance bottleneck is how fast the system can handle the I/O operations to stdout. There are a couple of things you can do to improve the performance of the simulator:
- Run locally - if you are debugging remotely on one of the campus cluster machines, all terminal I/O has to travel across the network, so a slow or congested network connection can reduce performance. It is also likely that you will not be the only one running jobs on the cluster machines. If you execute your code locally, you can access the full computing power of your machine and avoid the network overhead for terminal I/O. Setting up your code in NetBeans or another Java IDE is an easy way to run locally and have access to interactive debugging features. This is especially true on Windows machines, where command line execution of the simulator is possible but iffy.
- Redirect stdout to a file - file I/O is significantly faster than terminal I/O, so redirecting output to a file can increase the performance of the simulator significantly. You can analyze the output file using commandline tools like grep and awk. However, this limits your ability to observe the debug output while watching the simulator GUI, so it may not be useful in all cases.
- Limit debug output - the 'verbose' options do not give you fine-grained control to activate verbose output in specific controllers. If you have traced the problem to one module or the interaction between just a few modules, you can directly modify the source code to override the verbose argument with a 'true' value. You should only do this on a temporary basis; otherwise there will be extraneous debug output when we run your tests for grading, which is prohibited by the portfolio requirements.
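The "override the verbose argument" trick in the last bullet can be sketched as follows. This is a self-contained stand-in, not the real simulator.framework.Controller (whose constructor takes more arguments); the only point is the hard-coded true passed where the constructor's verbose parameter used to go.

```java
public class ForceVerboseDemo {

    // Minimal stand-in for the Controller parent class, for illustration only.
    static class Controller {
        private final String name;
        private final boolean verbose;

        protected Controller(String name, boolean verbose) {
            this.name = name;
            this.verbose = verbose;
        }

        protected void log(Object... msg) {
            if (!verbose) return;                       // suppressed unless verbose
            StringBuilder sb = new StringBuilder(name).append(": ");
            for (Object part : msg) sb.append(part);
            System.out.println(sb);
        }
    }

    static class DoorControl extends Controller {
        DoorControl(boolean verbose) {
            // TEMPORARY debugging override: force verbose on for this one
            // controller, ignoring the flag the test harness passed in.
            // Revert to `verbose` before handing in the portfolio!
            super("DoorControl", true /* was: verbose */);
        }
    }

    public static void main(String[] args) {
        // The harness asks for quiet operation, but this controller still logs.
        new DoorControl(false).log("door state = OPENING");
    }
}
```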
Logging framework and simulator performance considerations
A common approach to debugging is to print information about the system state to the command line. For example, you might write an entry every time the controller runs to indicate its current state variable. You could also print the current values of inputs and outputs, and print a message every time the controller makes a transition. You can see examples of this in the Testlight example code and in the code given in the soda machine example portfolio. The rest of this section discusses best practices for getting good performance out of the simulator while printing the necessary debug messages.
Most of the system objects already have log messages built into them, but feel free to add your own for debugging purposes. However, the place you will likely do most of your debug logging is in your controller implementations. The simulator.framework.Controller parent class provides a log() method. Messages created using the log() method will be printed whenever the verbose constructor parameter is true. You should use this method for logging instead of directly writing to System.err or System.out. Not only does this automatically tag the log entry with the name of the module and the current system time, it also gives you an easy way to enable and disable log messages without having to comment and uncomment lines of code.
The proper way to use the log() method is:

log("Current controller state", stateVariable, "; current output: ", mHallCall.getValue()); //THIS IS THE BEST WAY

The improper way to use the log() method is:

log("Current controller state" + stateVariable + "; current output: " + mHallCall); //THIS IS NOT THE BEST WAY -- BAD
The difference between these two lines is that the proper way sends each variable as an argument, while the improper way uses the '+' operator to concatenate the parts of the message into a single string argument. Although it may not seem like an important difference, the first way will allow the simulator to run faster, which means less of your valuable time spent waiting for simulations to complete! If you are interested in the technical reasons why this provides a performance boost in the simulator, see note 1 below. Otherwise, just take our word for it!
Note 1:
If you are used to C/C++ programming, a standard practice is to enclose debug print statements in a conditional compilation block. If the conditional compilation variable is not set, then the code inside the conditional block is not even compiled into the executable. However, Java does not support conditional compilation. This is an important difference because it means that the log() calls are always executed, even if the verbose flag results in them not being written to stdout.
If you look at the definition of the log() method, you will notice that its signature is protected void log(Object... msg). If the verbose flag is true, then whatever argument list you provide is passed to Harness.log(String name, Object... msg) for logging to the command line. Inside Harness.log(), each Object in the msg array is converted to a String (using an implicit call to the object's toString() method) and written to System.out.
Calling the log() method with the message parts as an argument list (the 'proper' method described above) allows the objects in the message to be evaluated lazily. That is, if the verbose flag is false, then the toString() and concatenation operations never take place. If you use the improper method, then the concatenation and toString() operations take place before the call to log(), so the toString() operations are performed regardless of whether or not the message is ultimately written. Some of the more complex system objects have very complex toString() methods, so avoiding unnecessary toString() calls provides a significant performance improvement.
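Note 1's argument can be demonstrated with a short, self-contained sketch. LazyLogDemo below is not the simulator's code; it merely reproduces the log(Object... msg) shape so you can count toString() calls under both calling styles.

```java
public class LazyLogDemo {
    static int toStringCalls = 0;
    static boolean verbose = false;

    // Stand-in for a complex system object with an expensive toString().
    static class ComplexPayload {
        @Override public String toString() {
            toStringCalls++;
            return "payload";
        }
    }

    // Same shape as the log(Object... msg) described above.
    static void log(Object... msg) {
        if (!verbose) return;                      // varargs untouched: no toString() yet
        StringBuilder sb = new StringBuilder();
        for (Object part : msg) sb.append(part);   // toString() happens here, lazily
        System.out.println(sb);
    }

    public static void main(String[] args) {
        ComplexPayload p = new ComplexPayload();

        log("state: ", p);                    // verbose off: toString() never runs
        System.out.println(toStringCalls);    // prints 0

        log("state: " + p);                   // '+' concatenation forces toString()
        System.out.println(toStringCalls);    // prints 1 -- even though nothing was logged
    }
}
```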
*Please submit all project-related questions to -- thanks!