Wait for Haswell refresh?

Started by May 02, 2014 01:36 AM
12 comments, last by ChaosEngine 10 years, 4 months ago

Also remember, it is a tool. Nothing more.

The tool is constantly depreciating and there will be newer, faster, and cheaper versions before yours is even unpacked.

Buy it because it meets your needs and has a good cost/benefit ratio. If you can wait you can find a better deal. Wait a month, wait six months, wait ten years, there will always be a better deal. Buy when you need the better tool, not because it is an investment.


but more than 4 cores is of little benefit to programming and compiling, unless your job involves writing very parallelizable code.

You seem to be implying there is a link between how parallelizable a piece of code is to execute and how parallelizable it is to compile; there is none that I know of. AFAIK, MSVS makes great use of additional cores during compilation (although you may have to enable it in the options for C++). Compilation involves many steps that are trivially parallelized.

I wouldn't go all the way to trivially parallelizable. Yes, many build steps and modules can run in parallel, but the way you structure your code has a great deal to do with it, because you, as a programmer, can create serial dependencies between translation units. Well-written code avoids this (not just for build times, but for general modularity as well), and I concede fully that more cores will build that kind of code more quickly, until you reach other system bottlenecks.

But what I contest is this: either your code is *not* well structured in this way, in which case you will gain more by restructuring it, or it is, and then it probably already builds acceptably fast on a machine with 4 cores. Faster by any margin is always "better", of course, but unless you're completely rebuilding very large code bases with some frequency, it's unlikely that having or scaling to 8+ cores will be of any great benefit to you -- unless, perhaps, you're the type who runs some flavor of *nix and builds all your software from source. Otherwise, most of us simply don't build in sufficient volume.
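
To make that concrete, here is a minimal sketch (file and class names are hypothetical) of the kind of serial dependency I mean: a header that #includes another heavy header forces every consumer to be recompiled whenever that dependency changes, whereas a forward declaration keeps the rebuild localized to the one .cpp that actually needs the full definition.

    // renderer.h -- including "mesh.h" here would force every .cpp that
    // includes renderer.h to recompile whenever mesh.h changes.
    // A forward declaration is enough for pointers and references, so
    // only renderer.cpp (which includes mesh.h for the definition)
    // rebuilds on a change.
    class Mesh;

    class Renderer {
    public:
        void submit(const Mesh& mesh);   // reference: no definition needed here
    private:
        Mesh* active = nullptr;          // pointer: no definition needed here
    };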

Keep in mind that going from a 4-core Intel CPU to an 8-core model running at the same clock speed will cost you at least 3x as much for the CPU alone. A supporting motherboard will likewise come at a premium. If you go Xeon, you'll also need ECC RAM, another premium, and a still more expensive motherboard.

For compiling code, hyperthreading does quite well on its own -- a 20-30 percent throughput increase in benchmarks of popular open-source projects -- which is why the roughly 50% price premium of going from a 4-core i5 to a 4-core/8-thread i7 of equal speed is well worthwhile. You can see some analysis here: http://blog.stuffedcow.net/2011/08/hyperthreading-performance/

But the structure of C++ itself is a problem, at least for now. This is part of the reason we have idioms like pImpl (the compilation firewall), and the entire reason those idioms speed up build times. One needs to look no further than Go, D, or even C# to see those languages compile in seconds what might take C++ minutes, with no special effort on the part of the programmer -- a speed that even fast C++ compilers like Clang can't match, even with source that does make a special effort to improve build throughput. If Modules ever make it into the standard, the picture might change, but for now it's a trait of the language that can only be mitigated.
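
For reference, a minimal sketch of pImpl (all names here are hypothetical): clients of widget.h see only an opaque pointer, so whatever heavy headers the implementation needs stay confined to widget.cpp, and changes to the implementation don't force a rebuild of every consumer.

    // widget.h -- no implementation details, no heavy includes.
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();                    // defined in widget.cpp, where Impl is complete
        void draw();
    private:
        struct Impl;                  // opaque implementation type
        std::unique_ptr<Impl> impl;
    };

    // widget.cpp -- the only file that pays for the heavy dependencies.
    #include "widget.h"
    #include "expensive_library.h"    // hypothetical heavy header

    struct Widget::Impl {
        ExpensiveState state;         // hypothetical member from that library
    };

    Widget::Widget() : impl(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;
    void Widget::draw() { /* render using impl->state */ }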

throw table_exception("(╯°□°)╯︵ ┻━┻");


I agree; some code takes longer to compile than others. But if it takes really long (as per the OP), more cores help, I'd say. That said, a single source file full of template magic can already be a massive pain in the ass, and there is probably no parallelism at the level of a single source file (maybe there is, I've never tested it).
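
As a toy illustration of the single-file problem (the numbers are arbitrary, and real offenders are usually heavy generic libraries rather than anything this deliberate): every level of a recursive instantiation like this has to be ground through by one compiler process, and extra cores don't help with that one translation unit.

    // fib.cpp -- the compiler must instantiate Fib<40>, Fib<39>, ... Fib<0>
    // before it can emit anything; all of that work happens on a single core.
    #include <cstdio>

    template <unsigned N>
    struct Fib {
        static constexpr unsigned long long value = Fib<N - 1>::value + Fib<N - 2>::value;
    };
    template <> struct Fib<1> { static constexpr unsigned long long value = 1; };
    template <> struct Fib<0> { static constexpr unsigned long long value = 0; };

    int main() {
        std::printf("%llu\n", Fib<40>::value);   // compile-time work, trivial run-time work
        return 0;
    }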

My rules for keeping C++ programming fun are:

  • if your code contains a main function, you are abusing C++
  • if your activities can be described as software design rather than algorithm implementation, you are abusing C++

Anything else takes too long to compile to meet my productivity standards. REPL FTW. But what about jobs that require the maintenance of large legacy C code bases, you may ask? Your mileage may vary, but my solution is to turn them down categorically.


You seem to be implying there is a link between how parallelizable a piece of code is to execute and how parallelizable it is to compile; there is none that I know of.

I missed that part earlier, and just to clear my good name: no, I'm not so naive as that :) I merely meant that if your job as a programmer is to write programs that scale across cores, it's generally a good idea to have many cores with which to test them -- either because the test data sets are sufficiently large, or because you need to keep an eye on how the app scales beyond two and four cores.
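
For what it's worth, a minimal sketch of the kind of code I have in mind: if the program sizes its worker pool from the machine it runs on, a 2- or 4-core dev box simply never exercises the 8-plus-thread behaviour you are trying to verify.

    #include <algorithm>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        // One worker per hardware thread; hardware_concurrency() may report 0,
        // so fall back to a single worker in that case.
        unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        std::printf("spawning %u workers\n", workers);

        std::vector<std::thread> pool;
        for (unsigned i = 0; i < workers; ++i)
            pool.emplace_back([] { /* per-worker job would go here */ });
        for (auto& t : pool)
            t.join();
        return 0;
    }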

throw table_exception("(╯°□°)╯︵ ┻━┻");


But what about jobs that require the maintenance of large legacy C code bases, you may ask? Your mileage may vary, but my solution is to turn them down categorically.

The idea is to gradually surround and chip away at the legacy code. I've already made significant inroads here, but there's over 10 years of development to deal with.

The majority of my work involves C# and I am the decision maker on everything there, so most of the time I'm pretty happy with it.

if you think programming is like sex, you probably haven't done much of either. -- capn_midnight

This topic is closed to new replies.
