Nulls, exceptions, and no discriminated unions. I use C# professionally and those traits completely ruin the language. And, no, the nullable stuff doesn't cut it. It has some pretty awkward quirks.
Also, almost nobody writes code that is fast. Sure, aspnetcore might be optimized to the teeth, but drop EF in and that performance is wiped out, and not just some of it, a ton of it.
Idiomatic C# has also become what I call abstraction infatuation. It's impossible to tell what actually happens in that perfect 5 line method that uses 30 different interfaces and 10 different factories. Better have a good debugger handy (you won't: they all barf on async code). Aspnetcore and EF take abstraction to such academic levels that the most hardcore of Java EE devs would blush.
I started learning it months after it launched; I am a truly seasoned C# veteran. I thought it was the best thing since sliced bread. But hear me out: I have had one too many exceptions in my time, and C# is massively overrated.
I've never used a language without null, besides C# nullables (which I also find perfectly fine, though I agree they're somewhat awkward). What is the alternative, and what is its benefit? Null fits so many situations perfectly naturally, like a database entry without a value. And I find the C# '?' syntax to be amazing. For those unfamiliar with it:
string foo = someDb.GetRow(id)?.Value;
foo is null when the row is null or the value is null, with zero null pointer exceptions possible. If you wanted to allow someDb to possibly be null, instead of intentionally crashing out, you could also add another ? after that reference. Really wish C++ had this. But in general I find null pointer exceptions to be desirable, because more often than not they indicate a failure of assumptions.
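For comparison, here's roughly the same chain in a language without null, where Option::and_then plays the role of ?. — the Db/Row types here are invented for illustration:

```rust
// Hypothetical Db/Row types standing in for someDb and GetRow.
struct Row { value: Option<String> }
struct Db { rows: Vec<Option<Row>> }

impl Db {
    // Returns None when the row itself is absent.
    fn get_row(&self, id: usize) -> Option<&Row> {
        self.rows.get(id).and_then(|slot| slot.as_ref())
    }
}

// `and_then` plays the role of `?.`: the chain yields None if either
// the row or the value is missing, and no crash is possible.
fn lookup(db: &Db, id: usize) -> Option<String> {
    db.get_row(id).and_then(|row| row.value.clone())
}

fn main() {
    let db = Db {
        rows: vec![Some(Row { value: Some("hello".to_string()) }), None],
    };
    assert_eq!(lookup(&db, 0), Some("hello".to_string()));
    assert_eq!(lookup(&db, 1), None); // missing row, still no exception
}
```

The difference is that the caller of lookup gets an Option back and must deal with it, rather than a reference that may or may not secretly be null.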
F# doesn’t have nulls by default and can interoperate fully with C# by using Option types, with the value set to None in case of null and Some<T> when it actually has a value.
This is so much more explicit and composes so much better than C#’s null.
If I declare a variable with a certain type in C#, its type is implicitly that type or null. I am allowed to call every operation supported by that type on the variable, and yet that's actually a lie, because if the variable is set to null the purportedly supported operation won't run and will throw an exception instead.
In F#, however, if I declare something of a certain type, it’s guaranteed to be of that type. Any operations on that variable supported by that type are guaranteed to run.
And if something might indeed not have a value (say a database result) I have to explicitly declare it as an option type so there is a None option for the value. This way the F# compiler will ensure I’m handling the None scenario properly and don’t end up calling unsupported operations on Null.
This is a much better design. I in fact struggle to understand why anyone thought it was a good idea to let someone declare a variable of a certain type but assign it a value which is not actually of that type, because the type is implicitly Type | null.
It makes sense in an unmanaged language like C, but makes no sense whatsoever in advanced languages like Java or C#.
I much prefer languages without Null (Rust being a personal favorite). I'll speak to that because I have a bit of experience with both C# w/ Nullable and Rust.
In Rust, you express a possibly-missing value with the Option type. It's basically saying "This value could be None". It behaves similarly to nulls, but instead of Option being implicit (like C# nulls) it's explicit. You can't do `let foo: String = None` because it's an invalid type assertion. Whereas you can do `let foo: Option<String> = None`. In C#, you can't express "This is a string that cannot be null" (even with nullables, if I recall correctly).
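A minimal sketch of that distinction (greeting_len is a made-up helper):

```rust
// A plain String can never be absent; absence must be spelled out
// in the type as Option<String>.
fn greeting_len(maybe: Option<String>) -> usize {
    // The compiler forces us to handle the None case before we can
    // call any String method.
    match maybe {
        Some(s) => s.len(),
        None => 0,
    }
}

fn main() {
    // let s: String = None; // ✗ does not compile: None is not a String
    let s: Option<String> = None; // ✓ the type admits absence explicitly
    assert_eq!(greeting_len(s), 0);
    assert_eq!(greeting_len(Some("hi".to_string())), 2);
}
```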
Sibling comments mentioned Option types, which are generally assumed to be the best current approach. I agree, but would like to add Python as a different example.
Null handling in Python is actually sane, despite there being no Option type.
Its Null is called None, which is of type NoneType, a type of its own that inherits directly from object. That’s it. Everything else is separate from it. You perform a check for None, and after passing it, you are guaranteed no null reference. Dereferencing cannot blow up.
A variable of type “Union of None and SomeClass” is safe to use after a None check. Type checkers will flag missing checks as errors. This is in contrast to C# or Java, where any reference type can end up in a rug pull and cause a null dereference.
In Python, assuming a type checker, checks are mandatory and cannot be forgotten (quite similar to Option types actually). In the other languages they’re not mandatory and can be forgotten (by default).
> Sibling comments mentioned Option types, which are generally assumed to be the best current approach
Option types are actually just the tip of the iceberg though. The real power is in languages that support algebraic data types. Option is just one common example, used to distinguish between just Some and None, but it's reasonable to have an enum with other states, each optionally with its own associated data.
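A sketch of what that looks like beyond Option, with an invented PaymentResult enum; the point is that each state carries its own data, and the compiler rejects the match if any variant is unhandled:

```rust
// A payment outcome with more states than just Some/None, each
// carrying its own associated data. Names are invented for illustration.
enum PaymentResult {
    Approved { confirmation: u64 },
    Declined { reason: String },
    Pending,
}

fn describe(r: &PaymentResult) -> String {
    // Exhaustiveness is checked: forget a variant and this won't compile.
    match r {
        PaymentResult::Approved { confirmation } => format!("approved #{confirmation}"),
        PaymentResult::Declined { reason } => format!("declined: {reason}"),
        PaymentResult::Pending => "pending".to_string(),
    }
}

fn main() {
    let r = PaymentResult::Declined { reason: "expired card".to_string() };
    assert_eq!(describe(&r), "declined: expired card");
}
```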
Imagine if Nullable<T> allowed ref (class) types, and then disallowed ref types from actually ever being null, and you have an approximation of the alternative: option/maybe types. https://en.wikipedia.org/wiki/Option_type
I absolutely did not get it until I really used it. It's 10x the sensation of fearlessness that C# non-nullable ref types give you. For example, you can't do something like this:
var customers = new Customer[10];
(If I'm remembering correctly, C# in nullable mode allows this - and it's incorrect). You actually have to give it 10 instantiated Customer, or do:
var customers = new Option<Customer>[10];
Then, once you've filled it:
var filled_customers = customers.Select(x => x.Unwrap());
Now what's especially cool is that these are often treated like Lists with a maximum count of 1. What does that mean?
var customer = new Nullable(new Customer());
customer.Select(x => Console.WriteLine(x));
customer = Nullable.Empty;
customer.Select(x => Console.WriteLine(x));
That only does something in the first Select, the second is like an empty list. When you start thinking this way, you can do stuff like:
customers.SelectMany(x => x);
So, keeping in mind what SelectMany does (it essentially flattens a list of lists into a single list), you'd filter out all the unallocated customers. Think about all the stuff you do with Linq (including the Linq syntax), and how nice your code would get if you could just treat null as a list of length 0. It's super-neat, ? on steroids (? does win in the brevity department though) and once you learn the mindset [generally good] code just flows out of you. It's like the Matrix, you have to see it for yourself to understand it. It teaches you a new way of thinking.
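The same mindset is directly expressible in Rust, where Option is literally iterable; flatten() is the SelectMany move (the customer names here are invented):

```rust
// A Vec of Options is morally a list of lists of length 0 or 1;
// flatten() drops every None, no null checks in sight.
fn filled(customers: Vec<Option<&str>>) -> Vec<&str> {
    customers.into_iter().flatten().collect()
}

fn main() {
    let customers = vec![Some("ada"), None, Some("grace"), None];
    assert_eq!(filled(customers), vec!["ada", "grace"]);
}
```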
Please go take a look at Rust and/or Swift to see how nice a language without null feels like.
It’s not that there’s no way to model nullability/optionality. It’s just that there’s a much better way to do it. Simple, obvious, composable, no quirks.
I remember C# being great a few years back when I used it. I am surprised that it doesn't have better discriminated unions, since languages like TypeScript and F# have them.
Is the interop between C# and F# not good? I've never used it myself, but I remember (and this is validated by a quick web search) that it's possible to just directly call F# code.
TypeScript only has "discriminated unions" in an extremely busted form. While they are encoded in its type system, there is little support in control flow. Exhaustiveness checking is essentially optional: it's enabled in switch statements by either `assertNever` in the `default` case, or by limiting your function return from within switch cases with the `strictNullChecks` option enabled.
Without a match expression and first-class exhaustiveness checking, discriminated unions are far less useful. Of course, this is because they don't want the TypeScript compiler to actually generate code, so they can't add runtime features. While TypeScript is better than no TypeScript, it seems like a massive waste to build a compiler and then shy away from making people's lives easier when possible.
> it's enabled in switch statements by either `assertNever` in the `default` case, or by limiting your function return from within switch cases with the `strictNullChecks` option enabled.
What's wrong with using if statements?
```
if (shape.key === 'rectangle') {
  // ...
} else if (shape.key === 'circle') {
  // ...
} else {
  const x: never = shape; // todo: use exhaustiveCheck(shape)
}
```
F# discriminated unions are always heap-allocated - the .NET runtime doesn't currently support that kind of type punning (having a single memory location hold either an object reference or a plain value is undefined behavior). That may (always profile) put you at odds with writing fast code - and probably will: remember that significant portions of aspnetcore's recently skyrocketing performance have come down to eliminating allocations.
F# does have [<Struct>] DUs now but I believe because of the runtime limitations you mention, it ends up as sizeof(fields of case 1 + fields of case 2 + ...). They might reuse fields of the same type so if your DU cases can carry an A of string or B of string*int, the underlying struct is sizeof(string+int) not sizeof(string+string+int), but I'm not sure.
Regardless F#'s biggest downfall is that to use it you basically have to be a C# expert already and learn the F# syntax and additional features on top of that. Otherwise it's a baffling standalone language because you're left wondering why there are two ways to do almost everything -- Task vs Async, struct ("tup", "les") vs ("regular", "tuples"), two kinds of extension methods for classes, Nullable<T> vs Option<T>, it goes on and on.
Exceptions are invisible control flow. You have no guarantees about what piece of code will execute anywhere in your codebase. I agreed with you maybe 2-3 years ago but have since done a hard 180 in that opinion after seriously using a language with no exceptions. I'm too stupid to reason about exception control flow, just like I am too stupid to do manual memory management.
There are also huge differences between exception alternatives, and I definitely think more progress is possible. I also dislike how Go approaches exceptionless, but Rust does a far better job. Hare seems like it could have one of the best approaches (so far) but I haven't actually used it.
Erlang/Elixir/Gleam are also something I want to try out. Allowing things to just crash and restart is definitely a different perspective worth learning about.
Y'know what, I just took a look and I think I would like Rust-style error handling.
It reminded me of one thing that I dislike about exception handling in C#: that methods don't have to declare the exceptions they throw, unlike Java.
And the thing I disliked about the Java way was declaring exceptions that would be panics in Rust.
I also like the Rust ? operator for propagating errors.
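For the unfamiliar, a minimal sketch of what ? buys you (double_it is a made-up example):

```rust
use std::num::ParseIntError;

// `?` propagates the error to the caller; only the success path continues.
fn double_it(input: &str) -> Result<i64, ParseIntError> {
    let n: i64 = input.parse()?; // early-returns Err on bad input
    Ok(n * 2)
}

fn main() {
    assert_eq!(double_it("21"), Ok(42));
    assert!(double_it("nope").is_err());
}
```

The error is a value in the return type, so the caller can't forget it exists, yet the happy path stays one line.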
I've done some Erlang and I definitely like Erlang monitors.
Maybe the Rust way combined with monitors would be the ultimate in error handling :-)
What if there were syntax built into a language that required annotating functions that throw exceptions? That way at least you would know that an exception could be thrown in a function and could handle it appropriately.
Could you share the stuff you're working with that makes you feel the issues you mentioned are completely ruining the language? It sounds quite intriguing. I have mostly used it in the last decade for basic JSON APIs and desktop applications, where I haven't really encountered those issues.
Have a look at the codebase for OHM (Open Hardware Monitor). It's fairly clean C#, and I found it refreshingly easy to grasp. I know what you mean about all the abstraction but you can use the language without all of that.
Good C# is certainly possible, XNA was a thing of beauty. There are still fundamental issues with the language. Another example (beyond nulls etc.): why do C# switch statements require break? Fall-through isn't supported, so it's just dangling there as an artifact of a previous language and not, oh I don't know, doing something useful like breaking out of the loop that contains the switch. When was the last time you took advantage of the variable scoping in switch, vs. when was the last time you had to figure out what to call a variable because of the scoping rules?
Ignoring the big issues, it's paper cut after paper cut.
It's infuriating after you've used a language with thoughtful ergonomics.
Blame C/C++ for the switch complaint. C#'s design was to avoid a lot of those nasty pitfalls by making you be explicit about your intent. You don't have to use a break, but you do have to explicitly exit out of the switch, so a return, or throw, or (gasp) goto works also. This means forgetting a break is a compiler error, not the silent execution of subsequent cases, which is probably a trick employed purposefully in C++ like 5% of the time and a nasty bug for every other instance.
Switch in C# is pretty much Rust's 'match' (aside from Rust having a unit type and being expression-oriented, which gives it an advantage). It supports list pattern matching, object type and member pattern matching, inline slice patterns, etc. You don't need to use switch statements for that either; switch expressions are the recommended syntax today, and the IDE is likely to even suggest auto-fixing some of the idioms it recognizes.
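For reference, a small Rust sketch of the kinds of patterns being compared here, with invented classify/first_two helpers:

```rust
struct Point { x: i32, y: i32 }

fn classify(p: &Point) -> &'static str {
    // Struct/member patterns with guards, akin to C# switch expressions.
    match p {
        Point { x: 0, y: 0 } => "origin",
        Point { x, y: 0 } if *x > 0 => "positive x axis",
        Point { x: 0, .. } => "y axis",
        _ => "elsewhere",
    }
}

fn first_two(xs: &[i32]) -> String {
    // Slice patterns, the counterpart of C#'s list patterns.
    match xs {
        [] => "empty".to_string(),
        [a] => format!("one: {a}"),
        [a, b, ..] => format!("starts {a}, {b}"),
    }
}

fn main() {
    assert_eq!(classify(&Point { x: 0, y: 0 }), "origin");
    assert_eq!(first_two(&[3, 1, 4]), "starts 3, 1");
}
```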
You might have missed the bit where I said the tools barf on async code - something a printf debugger wouldn't care about. It logically follows that I am indeed a user of a debugging tool, and not a "peasant". Though primitive tools like printf and memory dumps do have their advantages. Notably, printf is resilient in the face of async code.
You were also practically forced to use VS for C# 1.0, and by consequence a debugger.
VS loses track of where the async stack is (the parallel stacks view sometimes saves the day), the Rider debugger outright crashes and takes the process with it - and if it doesn't, completely fails to step into async methods.
I haven't found the async call stack in VS to be bad unless I'm doing stuff like firing off "async void" calls or firing off "Task.Run"s. It sure does make the stack trace on Exception.ToString() ugly but the call stack in the debugger is ordinarily quite navigable.
Now what can be annoying is when your problem is in some ASP.NET core middleware and you can't find it debugging your controller methods.
Also I don't love the "fluent API" configuration style they've adopted. I always have to Google what my chain of services.AddMvc(options => ...) and app.UseMvc(...) is supposed to look like. These APIs seem like they should be very discoverable through intellisense but since you can't "see" extension methods until you open their namespace, sometimes you have to already know exactly what you're looking for to find it. But then again, the old way of configuring a lot of those same things was via web.config and even less discoverable, so that's kind of a wash.
Edit: one other thing about the modern style of C# library design is the total rejection of statics in favor of dependency injection. Which I think is done with good intentions but somewhat overzealous. We no longer use HttpContext.Current, we have an IHttpContextAccessor that needs to be injected. We no longer use ConfigurationManager.AppSettings["foo"] we have an IConfiguration which again, needs to be injected. I get why this was done but I feel it trades convenience in the 99%-of-the-time case where those things really are quite reasonable to treat as static, for elegance in the 1% of cases where they are not.