We pay for our "cutting edge" labs to tell us something we already know?
"October 28, 2010
Is Underutilizing Processors Such an Awful Idea?
As we move from multicore to manycore processors, memory bandwidth is going to become an increasingly annoying problem. For some HPC applications it already is. As pointed out in a recent HPCwire blog, a Sandia study found that certain classes of data-intensive applications actually run slower once you try to spread the computations beyond eight cores. The problem turned out to be insufficient memory bandwidth and the contention between processors for memory access."
Sandia "discovers" that insufficient memory bandwidth and memory processor contention can slow down computations.
So what's the big deal? Anyone who worked with the first IBM MP systems in 1975 could have told you that.
To put it more simply, it has long been known that a computer can go no faster than the bandwidth of its slowest component allows. That's why, among other things, we have level 1 and level 2 memory caches and special instructions to lock and unlock shared memory locations. (IBM even had a "spin lock," where one CPU would literally loop on a flag until it was freed!)
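For anyone who hasn't seen one, here is a minimal spin lock sketched with C11 atomics. This is my own illustration, not IBM's original implementation: the "lock" is just a flag in shared memory, and a waiting CPU loops until the holder clears it.

/* Minimal spin lock using C11 atomics (illustration only). */
#include <stdatomic.h>

typedef struct {
    atomic_flag flag;            /* the shared "lock word" */
} spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(spinlock_t *l)
{
    /* Keep trying to set the flag; whoever finds it clear gets the lock. */
    while (atomic_flag_test_and_set_explicit(&l->flag, memory_order_acquire))
        ;   /* busy-wait: burns cycles and memory traffic while spinning */
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->flag, memory_order_release);
}

Notice that the busy-wait itself hammers the memory system, which is exactly the kind of contention being discussed.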
In data-intensive applications there is more data movement, and hence more contention. So we needed Sandia to tell us that?
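If you want to see the effect on your own machine, here is a toy bandwidth-bound kernel, a STREAM-style reduction of my own devising, not anything from the Sandia study, with an arbitrary array size. Each iteration does one load and one add, so once the memory bus is saturated, adding cores adds contention rather than speed.

/* Toy memory-bound benchmark: compile with gcc -O2 -fopenmp */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (128L * 1024 * 1024)   /* 128M doubles (~1 GB); arbitrary choice */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    if (!a) return 1;
    for (long i = 0; i < N; i++) a[i] = 1.0;   /* touch every page */

    double t0 = omp_get_wtime();
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += a[i];             /* one load per add: bandwidth-bound */
    double t1 = omp_get_wtime();

    printf("threads=%d  sum=%.0f  time=%.3f s\n",
           omp_get_max_threads(), sum, t1 - t0);
    free(a);
    return 0;
}

Run it with OMP_NUM_THREADS set to 1, 2, 4, 8 and watch the time stop improving long before you run out of cores.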
TTFN
(Peace, Skepticism, Bright, Humanism, Green, TED)