Can managed code perform computations as fast as unmanaged code?

Yes, of course. Even without unsafe/unverifiable code, well-optimized .NET code can outperform native code.

Case in point: Dr. Jon Harrop's answer on this thread: F# performance in scientific computing.

Will it? Usually no, unless you go way out of your way to avoid allocations.

All I'm saying is that the guy took an implementation from 1977 and compared it against his own code, which uses compiler and parallelism techniques from 2010. That's hardly a genuine comparison with what a native language from 2010, using a parallelism technique from 2010, could pull off. He also compared a full LAPACK implementation to his implementation of a single algorithm (one which is used as the basis for others) when discussing the quantity of code involved. – DeadMG Jun 1 at 23:31

@DeadMG: Read it again -- he's comparing the number of lines of code to the Fortran implementation, but he's comparing the performance to Intel MKL. – ildjarn Jun 1 at 23:32

Allocation is the one area where a managed program outperforms native code all the time; the coding battle between Raymond Chen and Rico Mariani is famous. Floating-point math tends to be another. Code that requires a smart optimizer that takes its merry time is where managed code loses. – Hans Passant Jun 2 at 0:27

@Hans: Yet in my experience when profiling managed code, GC churn can be a huge performance killer. I.e., allocation itself may be fast, but if the GC is working too hard it can still be a net loss by a wide margin. – ildjarn Jun 2 at 0:33

@Hans Passant: Garbage collection is just a different algorithm from the native heap. If I wanted garbage collection in my C++ program (and I had a lot of money and developers), I could have it. That's like saying that a C++ insertion sort on a mostly sorted array is faster than C's qsort. Of course it would be, but that's not fundamentally a language difference; it's just that .NET provides garbage collection out of the box and C++ doesn't, and GC suits some applications. A native language that implemented the same GC algorithms would likely run them faster. – DeadMG Jun 2 at 16:34

The simple answer is no. Computationally intensive applications where every millisecond counts will still be written in unmanaged languages such as C and C++. The GC causes significant slowdowns when collecting.

For example, nobody writes top-tier 3D engines in C# or XNA. There are some, but nothing comes close to CryEngine or Unreal.

The more complicated answer is, "There's more to the equation than CPU cycles." Most of the interesting new development I've been seeing in data mining (which can be pretty intensive) has been happening in Java. The reason is that the practically attainable performance advantage of low-level code isn't large enough to justify the cost, not when computers are so very, very inexpensive nowadays. When lower-level languages are used these days, the motivation has to do with historical considerations, or with which languages are well-supported on the target platform, at least as often as anything else. – Sean U Jun 1 at 23:14

@Sean: I definitely agree regarding C#/Java vs. C++ when comparing time/speed of development. But at the end of the day, .NET will always be slower than C++ 99% of the time. There is no question about that. – Tomas Voracek Jun 1 at 23:32

.NET is not super-super slow, but nor is it in the same realm as a native language. The speed differential is something you could easily suck up for a business app that prefers safety and shorter development cycles.

If you're not using every cycle on the CPU, then it doesn't matter how many you use, and the fact is that many or even most apps simply don't need that kind of performance. However, when you do need that kind of performance, .NET won't offer it.

More importantly, it's not controllable enough. In C++ you destroy every resource and manage every allocation. This is a big burden when you really don't want to have to do it, but when you need the added performance of fine-tuning every allocation, it's impossible to beat.

Another thing to consider is the compiler. The JIT has access to more information about both the program and the target CPU. However, it has to re-compile from scratch every time, and do so under far, far greater time constraints than the C++ compiler, which inherently limits what it's capable of.

The CLR's semantics, like heap allocation for every object every time, also fundamentally limit its performance. Managed GC allocation is plenty fast, but it's no stack allocation, and more importantly, no stack de-allocation.

Edit: Of course, the fact that .NET ships with a different memory-control paradigm from (most) native languages means that, for an application to which garbage collection is particularly suited, .NET code may run faster than native code. This isn't, however, anything to do with managed code versus native code; it's just picking the right algorithm for the right job, and it doesn't mean that an equivalent GC algorithm used from native code wouldn't be faster.
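To make the heap-allocation point concrete, here is a minimal sketch contrasting a reference type (always heap-allocated and GC-tracked) with a value type (stack-allocated as a local). The types are hypothetical, purely for illustration:

    using System;

    class PointClass  { public double X, Y; } // reference type: always heap-allocated
    struct PointStruct { public double X, Y; } // value type: lives on the stack as a local

    static class AllocationDemo
    {
        static void Main()
        {
            double sink = 0;

            // Reference type: every iteration puts a new object on the managed
            // heap, and all of them eventually have to be collected.
            for (int i = 0; i < 1000000; i++)
            {
                var p = new PointClass { X = i, Y = i };
                sink += p.X + p.Y;
            }

            // Value type: the same stack slot is reused each iteration,
            // so there is no heap traffic and nothing for the GC to do.
            for (int i = 0; i < 1000000; i++)
            {
                var p = new PointStruct { X = i, Y = i };
                sink += p.X + p.Y;
            }

            Console.WriteLine(sink); // keep the JIT from eliding the loops
        }
    }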

Short answer: yes. Long answer: with enough work. There are high-frequency trading applications written in managed C#/.NET. Very few other applications ever approach the time-criticality that a trading engine requires.

The overall concept is that you develop software so efficient that your application never needs the garbage collector to invoke itself for non-generation-0 objects. If at any point the garbage collector kicks in, you have a massive (in computing terms of time) lag lasting dozens or hundreds of milliseconds, which would be unacceptable.
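One common discipline for keeping the collector quiet, in the spirit of the above, is to pre-allocate everything at startup and recycle instances instead of allocating per event. A minimal object-pool sketch (the Order type and pool are illustrative, not taken from any real trading system):

    using System.Collections.Generic;

    // Illustrative pooled message type; a real system would pre-size its fields too.
    class Order { public int Id; public double Price; }

    // Pre-allocates every instance at startup, so the steady state performs
    // no heap allocation at all and the GC has no new objects to promote.
    class OrderPool
    {
        private readonly Stack<Order> _free = new Stack<Order>();

        public OrderPool(int capacity)
        {
            for (int i = 0; i < capacity; i++)
                _free.Push(new Order());
        }

        public Order Rent()         { return _free.Pop(); }  // no allocation
        public void Return(Order o) { _free.Push(o); }       // no deallocation
    }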

There is no conceivable way in which you won't have to garbage collect ever (unless you find a way to completely do away with even System.String, just to name an example). What you probably mean is to limit garbage collections to generation 0 (which is pretty quick, hardly noticeable). – sehe Jun 1 at 23:08

@sehe: You're right, I missed that originally. Thank you. – Chris Marisic Jun 1 at 23:11

You can use unsafe code and pointers to get "raw" memory access, which can give you a significant speed boost at the cost of extra responsibility (remember to pin your objects). At that point, you're just shifting bytes. Garbage-collection pressure can also be a concern, but there are tactics around that as well (object pooling).
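As a minimal sketch of what that looks like in practice (this must be compiled with the /unsafe switch; the byte-filling loop is just a placeholder workload):

    // Compile with: csc /unsafe UnsafeDemo.cs
    class UnsafeDemo
    {
        static unsafe void Main()
        {
            var buffer = new byte[1024];

            // 'fixed' pins the array so the GC cannot move it while we hold
            // a raw pointer into it -- the "remember to pin your objects" part.
            fixed (byte* p = buffer)
            {
                for (int i = 0; i < buffer.Length; i++)
                    p[i] = (byte)i; // raw pointer write, no bounds check
            }
            // Pin released here; the GC may move 'buffer' again.
        }
    }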

This seems like an overly broad question. There are some gratuitous hints to throw around: yes, you can use unsafe, unchecked, arrays of structs, and most importantly C++/CLI. There is never going to be a match for C++'s inlining, compile-time template expansion (and the corresponding optimizations), etc. But the bottom line is: it depends on the problem. What are "computations" anyway? Mono has nifty extensions to use SIMD instructions; on Win32 you'd have to go native to get those. Interop is cheating.
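For the SIMD point, a minimal sketch using Mono's Mono.Simd extension (this only accelerates on Mono with Mono.Simd.dll referenced; on other runtimes it falls back to a slower managed implementation, and the exact API should be double-checked):

    using Mono.Simd; // Mono-specific assembly; not part of the Microsoft CLR

    class SimdDemo
    {
        static void Main()
        {
            // On Mono's JIT these map to packed SSE instructions;
            // one vector add replaces four scalar additions.
            Vector4f a = new Vector4f(1f, 2f, 3f, 4f);
            Vector4f b = new Vector4f(5f, 6f, 7f, 8f);
            Vector4f sum = a + b;

            System.Console.WriteLine(sum.X); // 6
        }
    }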

In my experience, though, porting toy projects (such as parsers and a chess engine) is going to result in at least an order-of-magnitude speed difference, no matter how much you optimize the .NET side of things. I reckon this has to do mainly with heap management and the service routines (System.String, System.IO).

There can also be big pitfalls in .NET (overusing LINQ, lambdas, accidentally relying on Enum.HasFlag to perform like a bitwise operation...). YMMV; choose your weapons carefully.
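The Enum.HasFlag pitfall in particular is worth spelling out: HasFlag boxes its argument on each call (at least on the runtimes of this era), so in a hot loop the plain bitwise test is far cheaper. A quick sketch:

    using System;

    [Flags]
    enum Options { None = 0, Compressed = 1, Encrypted = 2 }

    class HasFlagDemo
    {
        static void Main()
        {
            Options opts = Options.Compressed | Options.Encrypted;

            // Convenient, but boxes the argument (a heap allocation)
            // and makes a virtual call on every invocation.
            bool slow = opts.HasFlag(Options.Encrypted);

            // Bitwise equivalent: no allocation, a couple of instructions.
            bool fast = (opts & Options.Encrypted) != 0;

            Console.WriteLine(slow && fast); // True
        }
    }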

"This seems like an overly broad question." Yes it is, sorry about that. Still, I'm very interested in the answers, and I have already gotten a few very interesting ideas out of them. Chris Marisic reminded me about GC, which I know is very real. Thomas Minor suggests that it is not a stretch to think that, given the right setup, managed code can outperform unmanaged. I'm not sure about that one, but this is why I'm asking. Your own comment on gen 0 is also helpful. – zespri Jun 1 at 23:18

In general, managed code will have at least some speed loss compared to compiled code, proportional to the size of your code. This loss comes when the VM first JIT-compiles your code. Assuming the JIT compiler is just as good as a normal compiler, after that the code will perform the same.

However, depending on how it's written, it's even possible that the JIT compiler will perform better than a normal compiler. The JIT compiler knows many more things about the target platform than an ahead-of-time compiler would: it knows what code is "hot", it can cache results for provably pure functions, and it knows which instruction-set extensions the target platform supports. Depending on the compiler, and how specialized you (and your intended application) allow it to be, an ahead-of-time compiler may not be able to optimize nearly as well. Honestly, it completely depends on your application.

Especially with something as algorithmic as a chess engine, it's possible that code in a JIT-compiled language, following expected semantics and regular patterns, may run faster than the equivalent in C/C++. Write something and test it out!
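"Write something and test it out" deserves a starting point. A minimal timing harness using System.Diagnostics.Stopwatch, with a placeholder workload to be replaced by the computation you actually care about:

    using System;
    using System.Diagnostics;

    class Benchmark
    {
        static void Main()
        {
            Workload(); // warm-up run so JIT compilation cost is excluded

            const int runs = 100;
            double sink = 0;

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < runs; i++)
                sink += Workload(); // accumulate so the work can't be optimized away
            sw.Stop();

            Console.WriteLine("Mean: {0} ms (sink={1})",
                sw.ElapsedMilliseconds / (double)runs, sink);
        }

        // Placeholder workload, purely illustrative.
        static double Workload()
        {
            double sum = 0;
            for (int i = 1; i < 1000000; i++)
                sum += 1.0 / i;
            return sum;
        }
    }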

If you write a high-performance chess engine, you'd also better know your target platform. :-) Ordinary compilers can use profiled runs to measure the things the JIT would see, and re-optimize accordingly. They also have a lot more time to spend on it. – Bo Persson Jun 2 at 12:14
