
Obsessed? I don't think so.

It's just that C has been around for 40 years, and it'll stay around for at least another 40. You can still compile source code from 20-30 years ago with little or no modification.

C gets the job done fast and efficiently.

That said, I'm waiting for a good excuse to learn Rust. Prior to that there have been very few alternatives to C.



I've seen C used for safety-critical software. I've used it for that. I've seen the insane amount of tooling and support and processes and rules needed to make it suitable for the task it is utterly not suitable for.

It does not get the job done fast and efficiently if you consider the development costs. The fast code the compiler generates could be generated from other source languages just as well; it's not especially a merit of C itself.


I write security-critical software in C and I know exactly what you're talking about.

It's written in C because the tooling to analyze and certify it for security in embedded/automotive/aerospace is targeting C (or C++). For some industries, Ada might be an alternative.

It may be paradoxical, but you're not going to be able to write safety-critical software in Rust, because the tooling to certify it for security doesn't exist, and it'll still take years to get there.

Do I like this situation? No, I do not. Do I think it's a big problem? No, not big enough that it can't be solved by pouring money and engineering resources into it.


I also write safety-critical software, in C++. I disagree that the necessary tooling for certification doesn't exist for Rust. You don't need that much tooling. Measuring test coverage is the hardest part (and Rust doesn't do this very well currently, but it is possible). Most of the things coding standards for C (e.g. MISRA) require are irrelevant for safer languages, so you don't need complex checkers. For static analysis the story is similar.
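
To give the flavor (an illustrative sketch, not a quote of any specific MISRA rule): much of what the C checkers hunt for is legal C that a stricter language simply rejects at compile time.

    #include <stdint.h>

    /* Both functions are legal C of the kind MISRA-style checkers
       exist to flag; a language like Rust rejects the equivalents
       outright. */

    uint8_t scale(uint32_t x)
    {
        return x * 2;   /* implicit narrowing: silently truncates to 8 bits */
    }

    int check(int32_t a)
    {
        if (a = 0)      /* assignment where a comparison was intended */
            return 1;
        return 0;
    }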


In safety stuff, not only does the source have to be vetted; the compiler building the target binary must also be blessed and, hopefully, well understood.


I have seen a certified compiler (GreenHill) happily compiling invalid code (not even code relying on undefined behavior) despite its certifications; code that even an ancient gcc 3.x complained about (rightfully!).
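
As a hypothetical illustration (not the actual code): think of a plain constraint violation, no undefined behavior required, that an old gcc diagnoses but a lenient compiler swallows silently.

    /* Hypothetical example of the kind of invalid code I mean: */
    int f(void);

    int f(int x)    /* conflicts with the earlier (void) declaration;
                       gcc rejects this, yet a certified compiler may
                       accept it without a word */
    {
        return x;
    }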

Unfortunately, these certifications are often on the level of the golden shields that CAs provide to customers to put on their sites as a sign of trustworthiness.

So yes, blessed is a nice word for it.


I hate the GreenHill C compiler as much as the next guy [insert huge rant], but the Blessed Compiler of Choice in its frozen buggy state at least usually means it's a KNOWN ENTITY. We code around the bugs, we compile in -O0 mode, etc., we analyze the assembler output to death, and so on.
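
A sketch of what "code around the bugs" looks like in practice (illustrative only, not a real Green Hills defect):

    #include <stdint.h>

    int32_t scaled_sum(int32_t a, int32_t b)
    {
        /* Suppose the frozen compiler is known to miscompile the folded
           expression (a + b) * 4. Routing the intermediate value through
           a volatile pins down the code generation. */
        volatile int32_t tmp = a + b;
        return tmp * 4;
    }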

I see these "certified" stamps less as "secure" and more as a huge slowing down of progress and change, which strangely can translate to more security, since you know what you are dealing with.

If I can dream, I'd take a "certified" GNU Ada, or Rust, or something over C. But that is bound to take many years to happen, if ever. I think we'll sooner see Rust compiled to C, so security consultants can pore over the intermediate C output and then put it through the abominable Greenhill.


For safety stuff I've worked on, the source had to be vetted, and then someone went through it line by line to confirm that the assembly generated for each line of code matched the source. For C or Ada with conservative optimizations this is fairly mechanical (and extremely boring), but it means that trusting the compiler is not a requirement.
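
With optimizations off, the mapping really is close to mechanical. A sketch (GCC-style x86-64 output at -O0; the actual listing obviously depends on compiler and target):

    int add(int a, int b)   /* source line under review */
    {
        return a + b;
    }

    /* Typical unoptimized output, roughly:
           push  rbp
           mov   rbp, rsp
           mov   DWORD PTR [rbp-4], edi   ; spill a
           mov   DWORD PTR [rbp-8], esi   ; spill b
           mov   eax, DWORD PTR [rbp-4]   ; load a
           add   eax, DWORD PTR [rbp-8]   ; a + b
           pop   rbp
           ret
       Each source construct maps to an obvious chunk of assembly, which
       is what makes the line-by-line review tractable. */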


Then you must have worked on tiny software.


Blessing a compiler consists of running it on a trivial test suite and writing down the results, however bad they may be.


Actually, it is a merit of the language. C (and FORTRAN, for that matter) is fast because its restrictions force you to program a certain way: a way that just so happens to be efficient (if not particularly safe).

For example, people generally use a lot of static arrays in C. Arrays are efficient. The way you program with fixed-size structures like arrays tends to be different from how you program with dynamically sized structures, and that style of programming tends to be more efficient (see the sketch at the end of this comment).

In the vast majority of cases, a C or FORTRAN program will be more efficient than other languages not because they are C or FORTRAN, but because they force you to use an efficient paradigm.

Even other compiled languages like C++ or D are, more often than not, slower than the C counterpart UNLESS they are programmed in a "C style".

edit: To be clear, the fact that they are compiled low-level languages obviously does help, but I'm trying to make the point that it's not the only reason.
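
A sketch of the contrast (the Buffer type is made up, just to show the style):

    #include <stddef.h>

    #define MAX_SAMPLES 256

    /* The "C style": a fixed-size buffer, no allocator traffic,
       contiguous cache-friendly access. */
    typedef struct {
        double samples[MAX_SAMPLES];   /* size known at compile time */
        size_t count;
    } Buffer;

    double sum(const Buffer *b)
    {
        double total = 0.0;
        for (size_t i = 0; i < b->count; i++)
            total += b->samples[i];    /* tight loop, no indirection */
        return total;
    }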


C is fast because decades of work have been spent on making good compilers and excellent libraries. C's lack of e.g. templates makes it more difficult to write efficient code in certain cases, cf. sorting. I disagree that a typical desktop program that hasn't been specially optimized for speed will always turn out more responsive in C than, say, in Java.
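
The sorting case in one sketch: qsort takes an opaque function pointer, so the comparison usually can't be inlined, whereas a templated sort specializes and inlines it.

    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    void sort_ints(int *v, size_t n)
    {
        /* One indirect call per comparison; C++'s std::sort with an
           inlined comparator avoids exactly this overhead. */
        qsort(v, n, sizeof v[0], cmp_int);
    }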


> Prior to that there have been very few alternatives to C.

There were, but the market chose otherwise.


> There were, but the market chose otherwise.

That depends on what you mean by alternative to C. If you are just looking for a "portable assembly" or systems programming language, there were many alternatives.

If you are looking for a systems programming language that has a chance of building and running the same source code on multiple different systems, then you have to wait until Posix, which is tied to C (see the sketch at the end of this comment).

Of course you can well argue that once you have Posix, C is not actually required. You just need a language that can call C style functions (this is a trivial exercise for many languages) with compilers for the platforms you are interested in (not trivial, but it is straightforward work that has been done often enough that it is well understood).

Most people mean Posix+C when they say there were no alternatives to C, and in this form they are correct. The other choices may be better for most definitions of better, but there are few alternatives that let you write code and have a chance of it running on something else.
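
A minimal sketch of what the Posix+C contract buys you: this compiles and runs unchanged on any POSIX system, which is the portability being claimed.

    #include <unistd.h>

    int main(void)
    {
        static const char msg[] = "hello, portable world\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);   /* POSIX, not ISO C */
        return 0;
    }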


You don't need POSIX at all if the language has a rich set of libraries.

In a way, I always thought of POSIX as the C batteries that ANSI didn't want to make part of ANSI C, to make it easier to create compliant compilers.

Which was kind of a wasted effort, because to make it easier to port code, many C compilers outside UNIX always bundled a subset of POSIX with them.


>You don't need POSIX at all if the language has a rich set of libraries.

Agreed, but that increased the cost of porting a language to a new platform. The larger (and richer) the library, the more expensive it is. Unless your large library is built on a smaller internal library (like Posix).

Note that I'm arguing that C was the first to really achieve this. I'm not arguing C is the best choice, nor am I arguing that the other languages couldn't have reached that. There are other languages (some better than C) that could have done just as well, but for some reason didn't.


Out of interest, what were the alternatives? As much as I'd love a world in which we were on Lisp machines or running Dylan, these weren't exactly alternatives to C at the time to my knowledge.


Before C was brought into the world, OSes had been written in Algol and PL/I dialects since 1961.

At Xerox PARC they moved from BCPL to Mesa, which was used to write the Xerox Star and Pilot OSes, as well as one of the first IDEs, the Xerox Development Environment (XDE). The year was 1976.

Mesa eventually got automatic memory management support (RC with a local tracing GC for collecting cycles) and became known as Mesa/Cedar.

Niklaus Wirth created Modula-2 in 1976 after his first sabbatical at Xerox, drawing on his experience with Mesa, and used it to create the Lilith workstation at ETHZ. This was followed a few years later by Oberon for the Ceres workstation, inspired by Mesa/Cedar after his second sabbatical at Xerox.

The OOP extensions that Borland added to Turbo Pascal actually come from Apple's Object Pascal, used to create Lisa's OS and the initial versions of Mac OS, before Apple decided to make its development tools appealing to the growing UNIX workstation market and introduced Macintosh Programmer's Workshop.

On MS-DOS compatible systems, which were written in Assembly, there was a plethora of Basic, Pascal, Modula-2, C and C++ compilers to choose from. Plus business languages like Cobol and xBase.

It was only with the success of Watcom C++ among game developers, thanks to its DOS extender, and with the move to OS/2 and Windows 3.1, that C and C++ started to grow in adoption.

However, most developers on OS/2 and Windows 3.1 were actually adopting C++ frameworks like CSet++, OWL and MFC, or alternative environments like TPW, Delphi or VB. Mac guys had PowerPlant.

On Windows 3.1, C++ patterns like RAII were already commonplace, and even though each compiler had its own library, all of them provided support for safe strings, vectors and some form of smart pointers.

Writing pure C on Windows, besides Microsoft themselves, has always mostly been done by those porting UNIX stuff to Windows.

Even Microsoft, by the time they released the Windows 3.1 SDK, had introduced a new set of macros to try to make it safer to code in plain C (sketch below the link).

https://support.microsoft.com/en-us/help/83456/introduction-...
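
If memory serves (my assumption about which macros are meant), STRICT is a good example: defined before windows.h, it makes the various handle types distinct instead of interchangeable, so mixing them up fails at compile time.

    /* A sketch, assuming the parent refers to the STRICT macros from the
       Windows 3.1 SDK era. */
    #define STRICT
    #include <windows.h>

    void release(HWND hwnd, HDC hdc)
    {
        /* ReleaseDC(hdc, hwnd);  wrong argument order: a compile error
           under STRICT, silently accepted without it on 16-bit Windows */
        ReleaseDC(hwnd, hdc);
    }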



