This metric (see Section 7.5) gives you e, the "experimentally
determined serial fraction" for a parallel application, where
e is defined as:
        serial time + parallel overhead
    e = -------------------------------
             time on one processor
The metric is helpful in two ways:
1. It gives an easy way to measure e, requiring only that you
measure the speedup for several values of p. This is often easier
than determining exactly what f is in Amdahl's law.
2. Since e includes both serial time and parallel overhead, one can
sometimes see which of the two is to blame for poor parallel
performance. Basically, if e grows with p, the culprit is overhead
rather than inherently sequential work, since the inherently serial
portion of the computation does not change as p increases.
See Examples 1 and 2 on page 169.
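The computation of e from measured speedups can be sketched as follows. This is a minimal illustration, not code from the text; it assumes the standard Karp-Flatt formula e = (1/speedup - 1/p) / (1 - 1/p), and the measured speedup values are hypothetical.

```python
def serial_fraction(speedup, p):
    """Experimentally determined serial fraction e, computed from the
    measured speedup on p processors (Karp-Flatt formula, p > 1)."""
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Hypothetical measured speedups for several values of p.
measured = {2: 1.9, 4: 3.4, 8: 5.6, 16: 8.3}
for p, s in sorted(measured.items()):
    # If e rises with p, suspect parallel overhead; if it stays
    # roughly flat, suspect inherently sequential work.
    print(p, round(serial_fraction(s, p), 4))
```

Note that only speedups are needed, not separate timings of the serial and parallel portions.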
Isoefficiency
Recall that increasing the problem size often improves
parallel speedup (for a given number of processors p).
The isoefficiency metric asks this question: "as I increase the
number of processors, by how much do I have to increase the total
problem size in order to maintain a given parallel efficiency?"
If problem size has to grow at a rate no faster than
the number of processors, then the application is perfectly
scalable according to this metric. In this situation, the problem
size per processor would remain bounded.
But if the problem size has to grow faster than the number of
processors, then the application is not scalable. In this situation,
the problem size per processor grows without bound, and each
processor's share of the work eventually becomes too large.
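An isoefficiency calculation can be sketched for a hypothetical cost model (the model is an assumption for illustration, not from the text): suppose T1(n) = n and Tp(n, p) = n/p + log2(p), so that efficiency E = T1/(p * Tp) = n/(n + p*log2(p)). Solving E = E0 for n gives n = E0/(1 - E0) * p * log2(p), i.e. the problem size must grow faster than p.

```python
import math

def problem_size_for_efficiency(p, e0):
    """Smallest n keeping efficiency e0 on p processors, under the
    assumed model T1 = n, Tp = n/p + log2(p)."""
    return e0 / (1.0 - e0) * p * math.log2(p)

for p in (2, 4, 8, 16, 32):
    n = problem_size_for_efficiency(p, 0.80)
    # n grows like p*log2(p), so the per-processor share n/p keeps
    # growing -- this model is not perfectly scalable by the
    # isoefficiency metric.
    print(p, round(n), round(n / p, 1))
```

By contrast, a model whose isoefficiency function is linear in p would keep n/p bounded, which is the perfectly scalable case described above.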