
First impressions of C#

I've long been a believer in tuning for speed and efficiency. Part of this comes from having used underpowered machines for a long time, and part comes from being very impatient. Some of it also comes from my early days of coding on various Microsoft-derived BASIC interpreters such as Applesoft and AmigaBASIC. (These aren't nearly as bad as, say, TI-99/4A BASIC, but you haven't seen a real interpreted BASIC until you've used AMOS Professional.) I did a little 6502 assembly after that, some 68000, and then a bit of 16-bit x86, painfully. And of course, after that, 32-bit x86. My language of choice nowadays, however, is very clearly C++, because of its expressiveness, power, and leanness.

Recently I've had the opportunity at work to write a bit of C# code. My first impressions of C# were somewhat tainted by my bad memories of Java 1.1, which seemed to combine a deliberately painful subset of C++ with a slow VM and an abysmally bad library. (Not to mention the IDE used by my school, Symantec Café, which we affectionately called Crapfé and which I fondly remember for its use of a listbox as an output window.) C# definitely stays a lot closer to C++ than Java did, and I think it is a better language for having done so. I would not even consider rewriting VirtualDub in it at this point, but it's a much more amiable language than most others in its class that I've seen.

I should note at this point that I've only tried the barest amount of C# 2.0; most of my experience so far is with C# 1.1 (VS2003), so bear with me if I complain about something that has already been fixed.


The C# designers seem to have genuinely tried to fix some of the warts in the C++ language. It has override, so you can't screw over a subclass accidentally when modifying a method prototype. It has foreach. Reference parameters now have to be tagged as ref or out, so you know which way the data flows and the compiler can catch missing assignments. You have your choice between checked and unchecked arithmetic, so you can detect overflows in some code and still abuse integer math in others. It complains about variable names being reused in inner scopes, which, while it drove me crazy when I was coding, is actually a practice I need to wean myself off of.

C# also includes a rudimentary form of the preprocessor, which I have mixed feelings about. I hated not having a preprocessor in Java, as it meant you couldn't easily guarantee that debugging code didn't ship in a release build. However, the C# preprocessor doesn't allow you to do much more than this, which makes it unusable for debug trace macros (very important). The preprocessor is also extremely useful for building data structures and tables, and for cleaning up repetitive comparison tests in a readable manner. I will admit that some of these uses would be better served by direct language support, however; I'd like not to have to write the same ugly debug trace macros every time I start a new C++ project, and System.Diagnostics.Debug.WriteLine() is too long.

Support for pointers and unsafe code in general is also a nice plus, and is something that I consider essential for a real-world Win32 application. The stackalloc keyword, which is a more formal version of alloca(), is really cool. Contrary to popular opinion, virtual machines that execute code in a safe manner do still impose a performance penalty, and in particular anything that deals with large arrays, such as image and audio data, takes a noticeable performance hit from bounds checking. Being able to just mark a routine as unsafe and do C++-like pointer arithmetic to close the gap is great.

One thing I don't like, though, is the removal of separate declarations and definitions for classes. I hated this when I was using Java and I still find it a bad idea in C#. The main problem is that it makes it very difficult to present the direct API of a class that doesn't implement a separate interface, such as a value type. While a declaration in a C++ header is more work to maintain, it's very easy to scan. It's true that modern IDEs can collapse the class and only present the prototypes, but this usually ends up being badly formatted and doesn't allow you to order the methods differently in code and in header, which I often do for organizational reasons.

Coding in C# also seemed more verbose than C++ due to the longer, namespaced names of the standard classes in the .NET Framework. While it takes time to learn the names and headers, I find std::vector<> much less intrusive than System.Collections.ArrayList, and a blanket using directive defeats many of the benefits of namespaces. I also found that attributes in particular tended to be repetitive and long, especially when doing interop, and I found myself wanting a shorthand way to apply the same attribute to multiple items, or preprocessor support for doing so myself. Frankly, I appreciate that in C++ I only have to type std::max<> instead of something like Standard::Math::Max(), and I'd suggest that you don't need whole words for a name to be readable, particularly for something as common as the System namespace.


I didn't attempt to actually write a whole application in C#, but one of the things I ended up doing in it was to write a routine to weld vertices in a 200,000+ vertex 3D mesh. This was in the middle of an interactive tool for display purposes, so it was important that this execute as quickly as possible. The performance was abysmal compared to the C++ routine. I won't quote numbers, since I'm not supposed to and I don't have hard data anyway, but it was definitely noticeable, as in the C# routine was slow and the C++ one wasn't. It's important to note that most of this penalty centered around my use of a ulong-to-ulong Hashtable, which performed very badly compared to STL due to its need to box and unbox both key and value (leading to intense heap activity), call generic comparison routines through interfaces, and worst of all, the requirement to do a redundant second lookup for a read-modify-write operation. Most of these issues theoretically go away with the introduction of generics and generic containers in STL.NET, but I wonder if the last issue can be fixed, as the C++ solution requires references to a value type. In the end the routine was fast enough to use, but I still had a Zawinski-like urge to yet again write my own hash table.

I will note in passing that I really, really, really, really enjoyed having a foreach control structure and can't emphasize strongly enough to the C++ standards committee and Microsoft Visual C++ leads that we need typeof or decltype NOW.

In general, C# felt fast enough to use for routines that didn't have to process a 2MB vertex buffer, and definitely didn't feel as sluggish as Java did when it was introduced. However, as is usually the case, my views of performance at the low-level innards didn't generalize to the whole-application level, as the application itself still felt a little sluggish: perceptibly so, not just measurably slower than native code. In general, I've seen very few applications written in JITted, garbage-collected languages that didn't feel slow, one of the notable exceptions being Azureus on Windows. I think this is primarily due to UI toolkits, which in the .NET case is Windows Forms, and with Java 1.1 was the AWT (Awful Windowing Toolkit). Using the unaccelerated GDI+ for drawing probably doesn't help, either.

I've tried to come up with excuses for writing some little utilities in C# just to get more familiar with it, but it seems that every time I end up with better reasons to do it in C++ instead, probably because I'm so much more familiar with it and because STL is just too handy. That property grid control, however, makes it very tempting. I've also toyed with the idea of rewriting Filter Factory so that it compiles to .NET bytecode instead, but I don't know enough about .NET bytecode or the Framework facilities for dynamic code generation to know whether this is feasible.


This blog was open for comments when this entry was first posted, but comments were later closed due to spam and then removed entirely after a migration away from the original blog software. Unfortunately, it would have been a lot of work to reformat the comments for republishing. The author thanks everyone who posted comments and added to the discussion.