We’re talking about the same projects, different IDE here. A colleague told me: “nah, you have to understand, all that RAM is because VS is now rendered with WPF”. I very much know that 3D acceleration needs shadow copies of GPU data in CPU RAM due to how the hardware works, and that some rendering paths need more copies of the same data; but I still replied that I highly doubt that’s the reason. Visual Studio looks like it is doing everything very inefficiently: for example, when I switch between Release & Debug mode, not only does it take forever, disk activity also fires like crazy; they’re obviously reparsing stuff they shouldn’t, and the same must be happening with RAM consumption. Was that a wild guess? Not really, it was dead obvious without looking at a single line of VS’ source code.

The fact that previous versions of the same IDE (and other competing IDEs) can switch instantly should’ve been a strong hint, but if you had any doubts, Sysinternals’ Process Explorer is enough to confirm it.

The good news from the Ogre team: we support unity builds, which bring every compiler down to around one minute (VC 2008 being the fastest at 49 seconds, VS 2012 the slowest at 1 min 29 s, while GCC and Clang are nearly tied in the middle at 1 min 12 s and 1 min 20 s respectively). But unity builds are a sub-optimal solution, since they suck when one is working directly on the code, because recompiling one cpp file means recompiling many.
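If you haven’t seen one, here is a minimal sketch of what a unity build translation unit looks like; the wrapper file name and the particular sources included are hypothetical, not Ogre’s actual build setup:

```cpp
// UnityBuild_OgreMain_0.cpp -- hypothetical wrapper file, for illustration only.
// A unity build concatenates many .cpp files into a single translation unit,
// so shared headers are parsed (and templates instantiated) only once.
#include "OgreRoot.cpp"
#include "OgreSceneManager.cpp"
#include "OgreEntity.cpp"
// ...many more source files...

// The drawback mentioned above: editing any one of the included .cpp files
// invalidates this whole translation unit, so everything in it gets rebuilt
// even though only one file changed.
```

Newer build systems can generate these wrapper files for you; CMake, for instance, has had a UNITY_BUILD target property since version 3.16.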

[Screenshots: Visual Studio stuck updating IntelliSense]

I couldn’t care less about their jQuery integration, JavaScript debugging, or even their C# IDE. Now that I’ve cleared that up, here’s a brief list of the problems in VS, which I’ll go into in detail, one by one. For medium-sized to large projects, this is a real PITA. So: more than double the compile time between 2008 & 2012, and exactly double between 2008 & 2013.

From the look of it, you’re doing a wonderful job, since the comments seem to be positive. The following timings are for compiling Ogre 2.0 (OgreMain only); I forced the MSVC 2008 IDE (yes, it can be done) on both to maximize available RAM and thus avoid HDD bottlenecks, since MSVC 2012 and its new build tools consume a ridiculous amount of memory. That is a major productivity hit, not to mention it gets on my nerves every time I hit the F7 button.

Worse performance and fewer features than a competitor. Because running multiple instances of Visual C++ is actually quite common (normally 2, but sometimes up to 4; why so many?

Sometimes because it’s required, sometimes because the projects are not entirely related, sometimes due to modularity, and sometimes due to the lack of 64-bit versions; see the next problem). Whereas it runs ultra-smooth and responsive with VC 2008 on just 4 GB, VC 2012/2013 requires at least 12 GB (16 GB to get a good experience).

Those unfortunately involve a lot of subjectivity when one wants to argue.

But let’s keep our talk focused on the objective failures of Visual Studio.

Also, let’s clear something up: I will only focus on C++.

Also, VS 2013 is still very new, so I will talk mostly about my experience with VS 2012.

I’ve upgraded to 8 GB while writing this post, and the problem persists. That’s the whole point of this article: if the VS team doesn’t improve these serious pitfalls, in the long run more and more developers will walk away.