Marc Hoffman kindly took the time to respond to my previous post and prompted me to re-formulate and expand my observations on Jim McKeeth’s post (which never made it to the comments thread on the RO blog, for reasons best known to The Cloud).
The biggest issue I had with Jim’s post in particular was that he conflated JIT technology with managed runtimes.
Whilst it may be true that JIT is currently found only together with managed runtimes, there is no technical reason why JIT could not be incorporated into un-managed frameworks if there were genuine benefits to be gained. It would, however, complicate the distribution of applications produced with unmanaged code.
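To illustrate that there is nothing inherently “managed” about runtime code generation, here is a minimal sketch of JIT-in-miniature: a Win32 Delphi console program (purely illustrative, error handling omitted) that emits a few bytes of machine code into executable memory at runtime and then calls them. No managed runtime anywhere in sight:

```delphi
program TinyJit;

{$APPTYPE CONSOLE}

uses
  Winapi.Windows;

type
  // Delphi's default register convention: A arrives in EAX, B in EDX,
  // and the result is returned in EAX
  TAddFunc = function(A, B: Integer): Integer;

const
  // add eax, edx ($01 $D0) followed by ret ($C3)
  Code: array[0..2] of Byte = ($01, $D0, $C3);

var
  Buf: Pointer;
  Add: TAddFunc;
begin
  // The "JIT" step: emit machine code into executable memory at runtime
  Buf := VirtualAlloc(nil, SizeOf(Code), MEM_COMMIT or MEM_RESERVE,
    PAGE_EXECUTE_READWRITE);
  Move(Code, Buf^, SizeOf(Code));
  Add := TAddFunc(Buf);
  Writeln(Add(2, 3)); // prints 5
  VirtualFree(Buf, 0, MEM_RELEASE);
end.
```

A real JIT compiler is vastly more sophisticated, of course, but the mechanism itself owes nothing to a managed framework.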
Managed frameworks simplify the distribution of code that incorporates JIT in the bootstrap because the requirement to have the managed runtime environment pre-installed (or distributed with the app) comes as a necessary and unavoidable overhead (some might say “complication” – a significant, recurring support issue for a .NET application that I am familiar with comes from variations in the .NET framework environment on which it relies at the various sites on which it is used).
With an unmanaged application it would be an additional runtime/distribution complication, regarded by and large as a negative quality.
I’ve said it before, and I’ll say it again now: .NET is basically the 21st Century VBRUNx00.DLL (on steroids and huge amounts of growth hormone). 😉
But if a managed runtime is not a pre-requisite for JIT, an obvious question is why only managed runtimes incorporate it (apart from the runtime/distribution wrinkle mentioned above)?
It’s almost as if managed runtimes are trying to compensate for something… 😉
Also, the benefits of JIT are largely – afaik – theoretical. Yes, Microsoft (to take .NET as an example) could invest the time and effort to create the JIT compilers necessary to target every new evolution of the latest and greatest Intel and AMD silicon, but in practice I don’t believe they do.
And why should they ?
As Jim himself points out in his response to some comments, if you need performance, managed frameworks would not be your first choice (an observation somewhat at odds with the position taken in his post itself). He (or perhaps the comment he responded to – I don’t recall) isolates games as an example of where managed frameworks are not the best fit, but “game” is simply a very specific example of “demanding application”, not the ONLY example.
The real point is that not all applications need the best possible performance, so there is a trade-off to be struck with productivity. That trade-off is the true advantage of frameworks in general but, again, it is not an exclusive advantage of managed frameworks.
Delphi’s biggest selling point is the VCL and the way that it makes developing native code Windows applications massively productive compared to other native code development tools.
With FireMonkey you still get the native code but you lose the competitive edge against the other native code tool sets for the platform(s) it targets, because you are no longer comparing like with like. Those other native code tools produce truly native applications.
Delphi with FireMonkey delivers native code, but not native applications.
I should also stress that this distinction is not confined to the UI toolkits but extends to the wider native capabilities of the platforms, much of which – in the case of OS X / iOS – are difficult if not impossible to access.
Which is why I am currently teaching myself Xcode, Objective-C and Cocoa, for it is increasingly obvious to me that that is the easiest and most reliable path to creating native applications for OS X and iOS.
The best reason to learn C# is the number of Idiots willing to pay you Giant Gobs of Cash to be a Senior C# Developer. There’s 20 such jobs for C# for every one out there for Delphi.
C# 4.0 is a great language, and the managed .NET platform does provide some of its power, especially in reflection, and in framework services like WCF, which are (for business application development) light-years ahead of what is possible in Delphi.
Nevertheless, I am a confirmed Delphi snob, and shall remain so. Like you, I’m learning Objective-C and Xcode because it’s the sane way to write iPhone apps. I know it takes more effort to get started, but I have a feeling that FireMonkey for Mac OS X and iOS may be, in the end, a 90% solution – the first 90% of the effort goes faster, but the second 90% of your effort will be solving issues that wouldn’t even have happened had you just done it in Objective-C in the first place.
W
“.NET is basically the 21st Century VBRUNx00.DLL”
Anyone that built their apps in VB will remember the shock when Microsoft declared VB6 the end of the line, and that to move to VB.NET would require rewriting.
How could they do this? Simply because Microsoft had no significant products based on VB6.
I repeat here what I said in Jim’s blog: if your goal is a long product life, write to the platform Microsoft is basing its main products on.
And is Microsoft building any of its main products in .NET?
I think you missed his point. I think he meant not to write it in .NET 🙂
Arguably all AAA games (i.e. all the unmanaged ones) are using JITting these days.
At the GPU level, shaders (even asm ones) are all JITted to GPU instructions by the DirectX/OpenGL drivers.
At the CPU level, as more and more computations tend to be deferred to the GPU or extra CPU cores, techs like CUDA or OpenCL also leverage JITting, and there are various more or less proprietary techs that are used in game engines that rely on JITting to SIMD CPU instructions for the computation-heavy aspects.
However, AFAIK, none of those JITters rely on .Net or Java JITters for the high-performance portions.
I think what many people fail to understand is that FireMonkey is about writing programs that run on MULTIPLE platforms. It’s not about writing OS X programs, it’s about programs that run on Windows AND Mac (and hopefully Linux in the future). It’s not about writing iOS apps, it’s about writing apps that work with iOS, Android and Windows 8 devices (it doesn’t do that yet of course). It’s even about apps that work on desktops AND mobile devices.
Giel, many of us understand that perfectly, and that is what we find so annoying. It is the exact same philosophy that lay beneath Kylix, and is why many people were so dead against going down that same blind alley once again. And my recollection is that we were assured that lessons had been learned and that the same mistakes would not be made.
And to an extent, that has been the case. Instead we have a whole NEW set of mistakes but – crucially – all still being made in pursuit of the same flawed philosophy as Kylix: the Pipe Dream of write once, compile for anything.
When writing code that runs on multiple platforms it is unavoidable that you will have to write code specific to each platform in some cases. As a tools/language vendor you can choose to ignore that and just hope that you can eventually abstract away all the differences, or you can embrace the differences and extend and enrich your tools and language to enable your developers to fully exploit the platforms you support.
Embarcadero have chosen to stick their head in the sand and hope the differences will go away. Look at the different approach taken just over the fence in the neighbouring Pascal garden – FreePascal. It is staggering how easily and naturally the Pascal language is able to be extended to fully express the concepts in the OS runtime, in some cases even leveraging (reusing and repurposing) syntax extensions introduced by Delphi.
That is what we needed – a compiler that could produce CROSS platform code when appropriate but PLATFORM SPECIFIC code when needed.
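Something along these lines, a hedged sketch only (the unit names assume an XE2-era RTL, and OpenURL is just an illustrative name):

```delphi
uses
{$IFDEF MSWINDOWS}
  Winapi.Windows, Winapi.ShellAPI;
{$ENDIF}
{$IFDEF MACOS}
  Posix.Stdlib;
{$ENDIF}

// Cross-platform intent, platform-specific implementation where needed
procedure OpenURL(const URL: string);
begin
{$IFDEF MSWINDOWS}
  // Hand the URL to the Windows shell
  ShellExecute(0, 'open', PChar(URL), nil, nil, SW_SHOWNORMAL);
{$ENDIF}
{$IFDEF MACOS}
  // Shell out to the OS X 'open' command
  _system(PAnsiChar(AnsiString('open ' + URL)));
{$ENDIF}
end;
```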
And in this day and age, when I thought all the grown-up developers had learned that THIN GUI was the path to enlightenment, creating a STRICTLY cross-platform-only and heavily GUI-centric framework was an incredibly dumb thing to do and a criminal waste of resources and talent.
Jolyon,
i think the biggest question is: if we remove the JIT part from the managed vs unmanaged discussion, then what’s left that makes a “managed” platform that people can complain about?
just about all arguments i have heard *against* managed platforms have been centered around the JIT part, i.e. the fact that when you hit Build, the .exe you get does not contain the Delphi developer’s holy grail “native” x86 code, and the assumptions that either (a) Delphi, by definition, supposedly can produce more efficient x86 code than the JITter can (false), (b) JITting is terribly slow and makes your app start slow every single time (false) or that (c) IL is interpreted at runtime (blatantly false, of course).
Of course, all of this is hogwash. No matter how you get there, when your .NET application is running, what the CPU sees is 100% x86 instructions (or x64, or ARM, or whatever, of course) that’s for practical purposes indistinguishable from the kind of instructions it would see when running a Delphi or C++ app (ignoring that of course any compiler has patterns that make it distinguishable — you can even tell Oxygene apart from C# on the IL level, if you know what to look for — but that distinction is not the point of course).
So, whether you use a “cpu native” compiler like Delphi or C++, or a JIT-based compiler like any .NET or Java one, the actual code that ends up running is THE SAME, for all practical purposes. Obviously, there will be performance differences in either direction. Just like GNU C++ might generate more efficient code for one pattern, and Delphi might emit better code for another, there will be differences between Delphi and the .NET JITter; differences between Java and .NET, and differences between .NET and C++.
These are really just that: small compiler differences, that are largely out of your control, in either case. some Delphi code will run faster than .NET; some .NET code will run faster than C++, and some C++ code will run faster than Delphi.
But that doesn’t really constitute a fundamental difference in how that code runs; it just means the individual compiler engineers were better than their competition, in a certain area.
And none of this really affects what makes a “managed” platform. A managed platform needs the JIT, of course, because it relies (in part) on validations it can only do with the extra metadata it gets from JITting. But that doesn’t mean that the JIT is what *makes* the managed platform.
On an unrelated note, as i mentioned elsewhere, i have to admit that Microsoft did themselves a HUGE disservice with how they implemented WinForms. WinForms UI is incredibly slow — not because it uses managed code, but because it is built on the (CPU-native, btw) GDI+ APIs, which do not have hardware acceleration. Of course this gives everyone the excuse to say “look. managed code. see how SLOW it is?”, but that’s completely misleading, as it’s not the actual managed code that runs slow, it’s the “native” graphics drawing code *invoked* by that managed code that’s slow.
If the VCL used GDI+, it’d be just as slow as WinForms. VCL.NET, as many flaws as it had, did use GDI, and its UI was just as fast as an unmanaged Delphi VCL app’s. Similarly, WPF apps don’t use GDI+, but DirectX, and their UI is just as fast as — well, we can’t compare, because there’s no unmanaged WPF access. Metro XAML apps can be written in both C++ and .NET, and you’ll find their UI will be indistinguishable, as well.
That’s because .NET code is not inherently slower than code generated by, say, Delphi or C++. If you think about it and are scientific about it, there’s really no inherent reason why it SHOULD be, either — it’s all down to the same JMP and MOV and ADD instructions. This is all in people’s heads because somehow, subconsciously, they cannot get over the fact that this is just compiled to IL, so it’s gotta be interpreted at runtime somehow. which it is not.
Well, even with the supposed advantage (not just benefit) of JIT, there are numerous cases – REAL cases – where the performance of really quite ordinary applications is simply inadequate to the point of being unacceptable. I have to say that real cases of unacceptable performance seem to make a stronger case than postulated theoretical advantages.
Even Microsoft don’t claim that JIT will deliver code even as efficient as conventional ahead-of-time compilers. They themselves say that a JIT compiler is a “watch the clock” overhead (that is, a user of an app doesn’t notice the time it takes a compiler to produce the code they run, but they will notice a JIT compiler if it takes too long), so this places a limit on the amount of time the JIT compiler can take to do its work, and that in turn limits the extent of the work it is able to do.
As for what you say about it all being JMP and MOV and ADD instructions at the end of the day, that is the case when you get down to simple things like arithmetic statements, but the managed runtimes also come with huge managed frameworks abstracted much further away from the “metal” than even a framework such as the VCL, with characteristics that make common operations very easy to implement, but also very easy to implement inefficiently.
That is common to all frameworks of course – even the VCL (example: the huge re-alloc hit of growing large lists/stringlists without pre-allocing the capacity).
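To illustrate (a trivial sketch; the figures are arbitrary):

```delphi
uses
  System.Classes, System.SysUtils;

procedure FillList(PreAlloc: Boolean);
var
  SL: TStringList;
  I: Integer;
begin
  SL := TStringList.Create;
  try
    if PreAlloc then
      SL.Capacity := 1000000;         // one allocation up front...
    for I := 1 to 1000000 do
      SL.Add('item ' + IntToStr(I));  // ...vs repeated grow/re-alloc hits
  finally
    SL.Free;
  end;
end;
```

Those willing to put in the effort set Capacity; those who simply lean on the framework take the re-alloc hit without ever knowing why.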
The difference is that the managed frameworks go so far in alleviating the burden on the developer that the developer tends to lean on the framework more and more, and to use their brain and put in the effort to be efficient less and less, and this flows through into general problem solving, in my observation anyway.
“there are numerous cases” — right. and i gave an explanation that probably covers 95% of slow .NET apps: WinForms.
“but the managed runtimes also come with huge managed frameworks abstracted much further away from the metal”. that’s really a strawman, and one you contradict yourself on in the next paragraph. *If* the .NET base class library is bad (and that’s a big if — i’d consider it vastly superior to the VCL), that’s not a flaw with managed platforms, it just means it’s a badly implemented class library.
i don’t see how the .NET BCL makes/forces developers to write worse code than, say, the VCL, given the level of its abstraction. IMHO they are fairly comparable with regard to the *level* of API they aim to provide; the .NET BCL is just a lot vaster and covers many more things that the VCL doesn’t do (and doesn’t aim to) out of the box, and it does so more consistently.
but again, all that compares the libraries, and says little about the benefits or downsides of the managed infrastructure under the hood.
I was hampered by typing my response on my tablet and so forced to be more brief than I needed to be.
No, WinForms was not the problem in the real cases where .NET apps were unacceptably slow. These were not GUI apps at all, primarily. The problems lay primarily at the door of the garbage collector and at the overhead that the managed runtime itself incurs.
I don’t contradict myself by pointing out an area in the VCL that can suffer from poor implementation decisions by a developer. I was pointing out an aspect of the VCL that can catch out the unwary but which can then be accommodated very easily by those willing to put in the effort. The difference with .NET and similar managed runtimes is that they position themselves as specifically not requiring that effort and the result is developers who are (to generalise) less inclined to put in that effort and certainly less inclined to accept that they even should (“Oh it will be better in the next version of the runtime”).
But fortunately (being still on my tablet) Zenon explained what I meant better than I did myself. 🙂
“No, WinForms was not the problem in the real cases where .NET apps were unacceptably slow”
ok, it’s obvious that our experiences differ vastly, then. we have huge codebases of .NET, in internal tools, in our products, and none of them are slow. The Oxygene compiler itself is very fast. Just about our entire CI build system is written in Oxygene and runs on .NET and Mono (parts of it are open source with Train and Script), and it’s all blazingly fast.
the only place i have ever seen performance issues with .NET apps has been the WinForms UI.
So both my experience *and* common sense (see my explanation above for how managed code actually *runs*, which you conveniently just skipped over) back up that there’s really no reason to assume that — all other things being equal — code written in a managed language should be any slower than code written in Delphi or C++.
If you don’t mind my saying so, I think your experience is very narrow.
Tools that are “fire, forget and terminate” won’t run into the cumulative problems that were the underlying cause of the issues in the cases that I repeatedly come across, which as I said before are primarily due to the garbage collector. Not worrying about memory management and letting the managed runtime do its management is fine when the entire fabric of the thing being managed is torn down after only a short period of time.
It’s the difference between the infrastructure and organisational effort required to smoothly run a School Sports Day vs The Olympics. What works very well for one won’t even begin to be adequate for the other.
Also, you are a high technology company with roots in the unmanaged era. I very much doubt that your developers are in that group that are easily tempted down the path of carelessness that plagues the great proportion of the “business app” developer community, where speed of delivery often takes higher precedence than quality of build and there is an active disinterest/inverse snobbery when it comes to learning the ins/outs and nuts and bolts of the tools (“we’re here to solve business problems, we’re not compiler engineers!”).
I didn’t think I skipped over your comments about how all code eventually boils down to JMP, ADD and MOV instructions. My observation that Zenon had made my point was intended to cover that. But if you would like me to respond directly, here you go: 🙂
When comparing simple arithmetic expressions and looping/control constructs, what you say of course is true. But what that – and you – conveniently ignore is what I and others have pointed out: In a managed runtime there is by definition *something* going on other than the application code. If it is a managed runtime then something is doing the managing, and that something is by necessity in addition to any code you write. i.e. an overhead. Yes, it’s an overhead that delivers some benefits to developer productivity, but it certainly isn’t free. TANSTAAFL applies, as always.
So yes, it is perfectly possible to devise benchmarks that “prove” that the IL code that eventually executes is just as efficient as the equivalent binaries spewed out by a compiler, but artificial benchmarks aren’t worth a hill of beans. Real world results are what matters and beyond the narrow field of highly technical software written by highly technical people, the overwhelming experience that I – and it seems many/most others – seem to encounter is that managed code is SLOW. Sometimes merely noticeably so, sometimes unacceptably so.
But perhaps we should just leave it to the people that created this stuff.
Not even they claim that the JIT delivers an advantage, only that it brings managed code performance that is “comparable to traditional native code—at least in the same ballpark”.
Whatever your common sense tells you to expect, the authors of this technical miracle appear to have far less ambitious expectations.
Jolyon,
you’re clearly picking and choosing what to read into my text to suit the message you want to see.
the .NET-written tools i depend on every day are far from “fire, forget and terminate” (although we do have many of those too, of course, and the fact that they are fast also disproves the supposed repeated startup cost, as that is what “fire, forget and terminate” tools would suffer from, most). our CI system, for example, runs 24/7 and across 5 different servers, and it runs rock solid and fast. the middle tier to our bug tracking system runs 24/7 on a tiny Linux box (an EC2 Micro instance) and it runs solid. our website is completely done in custom ASP.NET with Data Abstract, and it runs 24/7 and solid.
None of these are “fire, forget and terminate” tools. but of course that won’t keep you from finding another reason why these examples, too, don’t matter at all, for whatever point you are trying to make.
everything you say about the garbage collector is the usual Chicken Little “the sky is falling” i’ve been hearing from die-hard Delphi-Lovers/.NET-haters for over ten years, and it’s as bogus now as it was then — if not maybe more so.
as i said elsewhere in this thread: bad code leads to bad apps, good code leads to good apps. i don’t for a second doubt that you’ve maybe seen some pretty crappy apps written in .NET. but that doesn’t mean .NET is bad. or slow. it just means the apps (or tools, or long-running systems) you saw were bad.
re overhead: there’s overhead in every application. you think the Delphi heap magically manages itself, for example? essentially, your argument boils down to the same old reductio-ad-absurdum: writing low-level assembler would be best, because that way you control every single bit that flows thru the system.
that last part is true, and i won’t argue that increasing levels of abstraction remove increasing levels of control from the developer. but that’s not, in itself, a BAD thing. a managed system can actually be BETTER at managing memory for you than you are, just like a high-level language compiler can be better at constructing that optimized FOR loop for you than you would be, in assembler.
your problem (and in all fairness it’s a problem that most people get stuck in, and i don’t necessarily exclude myself from that) is that you are so comfortable in your current environment that you decided (probably subconsciously) that the CURRENT level of abstraction you have is just perfect and cannot be improved on. anything less would suck (you’d never wanna go back to writing machine code), but anything more would take away too much precious control from you.
people said the same when they were using Borland Pascal 7, and Delphi came out (“pah, VCL! that’ll just be so much worse and slower than if i can fine-tune my Window Message function.”), when Turbo Pascal with Objects came out (“pah, objects. all those virtual method calls will just slow things down!”), or when Pascal and non-assembly languages themselves came out (“how could a fancy-schmancy ‘compiler’ ever create as efficient assembly code as my brain can?”).
what makes you think that your *current* sweet spot is somehow different, beyond reproach, and can *not* be vastly improved upon by advanced technologies that, yes, take even MORE control away from you, in order to make things easier AND better?
ps: re: “Not even they claim that the JIT delivers an advantage” i thought we were beyond the JIT at this stage of the discussion.
et tu, Brute. You saw the part where I point out that your company and your developers are not representative of the vast majority of the .NET developer community, yet chose to ignore it. That’s fine. Just don’t go taking the moral high ground when shooting from the moat. 😉
You must have missed my recent posts on Google+. If you did then you might be interested to learn that I am so thoroughly comfortable and entrenched in my comfort zone that I am currently teaching myself Objective-C and Xcode, and furthermore actually enjoying it and appreciating it.
For some reason, .NET was never as interesting, exciting or enjoyable in the same way. Perhaps because coming at it from a Delphi perspective the benefit just wasn’t as great in comparison to what I was used to or to my skill level, or because the differences weren’t that great (as in significant) either, just largely unnecessary (in the sense that getting to grips with the differences would take a great deal of time and effort, with the net result that I could do what I already can do, just differently).
oh, and with the VCL you can fine tune the message proc. That’s what’s so great about that level of abstraction in the VCL … it gives a great leg up most of the time but is happy to step aside and let you climb the walls yourself if you really want.
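For example, a trivial sketch (TFlickerFreePanel is just an illustrative name):

```delphi
uses
  Winapi.Messages, Vcl.Controls, Vcl.ExtCtrls;

type
  TFlickerFreePanel = class(TPanel)
  protected
    procedure WndProc(var Message: TMessage); override;
  end;

procedure TFlickerFreePanel.WndProc(var Message: TMessage);
begin
  if Message.Msg = WM_ERASEBKGND then
    Message.Result := 1   // step in: suppress the default background erase
  else
    inherited;            // step aside: let the VCL do its thing
end;
```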
Maybe that’s why I’m finding Objective-C and Cocoa so welcoming – I haven’t got that far with it yet but I get the same impression – that it’s there to help when you want and happy to get out of the way when needed. Maybe that’s a false impression. I shall find out I am sure.
But why should the customer’s machine need to jit on every program run? The whole just in time compilation idea seems like a huge waste of computing power to me. Why not compile once for each processor architecture (there are only 3 relevant out there right now) and be done with it?
Generally I think “the proof is in the pudding”. Show me one managed app that doesn’t suck from a performance standpoint. VS2011, Expression Blend, MonoDevelop, Eclipse, NetBeans, JetBrains’ various IDEs – they are all slow as hell.
I agree that in the end the processor is executing simply a bunch of machine code regardless of the type of language used to write the program; how else can it be? The problem however is what is that code doing? Is it executing your application’s logging or maybe doing something else?
Managed applications tend to be bulky, and with their bigger size come speed penalties. Note that smaller code, which is more likely to fit into cache memory, will run faster than bigger code.
Almost any C++ compiler will produce code that is no worse than that generated by a JIT, while the leading C++ compilers generate much more optimized code. Intel compilers are known for producing good results, but it looks like Microsoft is also working hard on improving theirs:
http://channel9.msdn.com/Shows/C9-GoingNative/GoingNative-7-VC11-Auto-Vectorizer-C-NOW-LangNEXT
Another performance bottleneck in .NET is the garbage collector. It simply stops all threads in your application for a fraction of time, and you can do nothing about it.
Herb Sutter also pointed out the differences between running managed code vs native code from the energy efficiency point of view. It looks like this is not only important for mobile devices but, amazingly, also for huge data centers.
In the end, if you find the time, please take a look at:
http://channel9.msdn.com/Events/GoingNative/GoingNative-2012
I especially recommend Bjarne Stroustrup’s Day 1 Keynote and also the Day 2 Keynote by Herb Sutter
Regards,
Zenon
I meant
… application logic…
not
… application logging…
🙂
Both these articles are great but I have never seen Delphi being capable of dealing with all OSs when it comes to building business apps.
So when it comes to Mac I would prefer to use other tools rather than go for Xcode and the like. I personally prefer to use REALbasic as it makes it a piece of cake to produce applications that look and feel native on OS X.
The advantage of using RB is that I don’t have to fumble with nibs, Objective-C and other such stuff. RB gives me the facility to develop like I am used to in Delphi. Place a control on the form, select the event that you want to code for and just code! Finally, I can develop the software in Windows and deploy on OS X.
I like to program in Delphi, but only when I am developing a solution that is targeted at Windows. When it comes to other OSs I would always prefer to use RB.
“But why should the customer’s machine need to jit on every program run?” — it doesn’t.
“VS2011, Expression Blend, MonoDevelop, Eclipse, NetBeans, JetBrains’ various IDEs” — sure. personally, i find the Delphi IDE bloated and slow as hell, too (and that’s not because it uses managed code for a handful of obscure tasks, so don’t give me that).
Visual Studio 2005, to give an example, was lean, mean and fast. heck, it launched almost as fast as Notepad. Since then, Microsoft has been busy bloating it up again in versions 2008, 2010 and now 11, sure. But that’s not .NET’s fault. That’s because it contains 387 things that get loaded up on start and each throw 53 caught exceptions internally for just about any task you perform.
bad code or bad architecture leads to bad apps. good code and architecture leads to good apps. *regardless* of managed or unmanaged code.
As an aside, i played with JetBrains’ IDEs (IntelliJ *and* their Mac IDE), and while i really really really don’t like how they work, i didn’t find them slow, at all. Similarly, i’m not a huge fan of MonoDevelop, mainly because it completely fails to behave like a good Mac app when run on Mac (and i have no use for it on Windows, as i have VS for that), but again, i wouldn’t say it’s been slow, in my use. Bad UI, yes. but not slow.
But: just because no-one has written a 100% managed development IDE in managed code that is any good, that is not proof that managed code is bad. it’s just proof of one single thing: no-one has written a 100% managed development IDE in managed code that is any good, yet.
absence is not proof.
Well, absence is not proof, that is true. But .NET has been around for so long now, there should be at least a couple of great client applications out there by now. Maybe I’ve been looking in the wrong places, but I haven’t found any.
Anyway, the point of this thread is that some people, who by coincidence have put many of their eggs into the managed world basket, suddenly start calling managed apps native (reminds me a bit of the free as in beer vs free as in freedom discussion).
The reality is two factors:
1. Eat your own dog food
2. If you create and use your own tool and framework to create an application and it is slow, that is an indication that the fish is rotten
In MS’s case, it is obviously slow, a snail
For Embarcadero, until they use FireMonkey in their own IDE GUI, don’t expect too much. Some day they may abandon it (same as .NET from MS)
Cheers
My C# experience was not with VS but rather with Mono and MonoTouch. I hated it every step of the way, but there were some aspects i enjoyed. For instance, that you can tell the compiler to generate native code, one .exe file right there and then.
The benefit is that you have platform-independent libraries that you can copy between Mac and PC, which are really just symbolic CIL libs (think Delphi packages), and then simply compile your app for iOS or OS X.
What you end up with is a very fast, native executable.
But then you might argue, what is the point (or difference) between that and what we already have.
To which i must answer: nada. I am a Delphi programmer who was forced to learn C# to survive for a while, but Mono is actually not half bad once you get the hang of it. They have added the missing bits that Microsoft left out. When Microsoft writes “platform independent” they are really talking about different versions of Windows. Mono blew all that apart and also added “real” compilation. To me that ended up as the best solution to an otherwise awful situation.