Glib is OK except for the fact that it has a memory cache on top of malloc. This prevents tools like asan or valgrind from detecting memory-related bugs. It caused my team a lot of grief, to the point that we regretted the choice of using glib in the first place.
I am not sure why there is a memory cache in the first place. Malloc may have been slow in the 90s, but these days there is no reason to cache and reuse allocations.
It’s also a major security risk, since it nullifies hardening measures from the standard library, as we have seen with openssl/heartbleed recently.
This bug should actually be caught during compilation if we store the list in a GSList* typed variable instead of void*. You’d think: who does that?
Except that this is what you actually end up doing when using any kind of nested glib containers. Elements are always void*, so you have to cast them correctly. So for non-trivial applications, it’s very easy to make mistakes, since you lose the type checking support of the compiler.
C is already a tricky language. Removing the type checker makes it even worse.
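Here is a minimal sketch of the kind of bug I mean (the names and the hash table are made up for illustration, not taken from a real code base). Everything goes through gpointer, so the compiler happily accepts the wrong type in both directions:

    /* compile with: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>

    int main(void)
    {
        GHashTable *groups = g_hash_table_new(g_str_hash, g_str_equal);

        GSList *members = NULL;
        members = g_slist_append(members, "alice");

        /* the value parameter is gpointer (void *), so any pointer is accepted */
        g_hash_table_insert(groups, "admins", members);

        /* mistake: reading the value back as a GString* instead of a GSList*.
         * In C the implicit void* conversion compiles without a single warning;
         * the error only surfaces at run time. */
        GString *oops = g_hash_table_lookup(groups, "admins");
        (void)oops;

        g_slist_free(members);
        g_hash_table_destroy(groups);
        return 0;
    }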
You seem to know a lot about Glib. What did you do with it?
I looked into porting Gtk to an MCU, and then gave up. Too much C trickery, and heavy dependencies that are hard to replace.
For somebody who has known Qt and Gtk since 2003, it now looks like a miracle how Qt ended up smaller and faster than Gtk, and can even run on an MCU, and quite well!
A very big contribution to that was the Qt team's willingness to undo the wheel reinvention and to throw out their hacky attempts at replicating newer C++ and standard library functionality.
The Glib+Gtk world, unlike Qt, still lives in the ANSI C and C99 era, and refuses to stop reinventing the functionality of modern standard libraries, language features, and compilers.
I worked on some Linux apps in another life. I switched to Qt at some point. It was a breath of fresh air! Qt is very reliable and well documented. One can be very productive with it.
I think a lot of people have a bad opinion about it as they confuse it with KDE. While KDE does use Qt, the two projects are otherwise independent.
I’ve also looked inside the Qt code base a few times. It’s very tidy and quite easy to read. You can tell that their team is very experienced. I used to read their blog as well, they had a lot of good articles.
I really hope that they can continue to survive financially. Selling a mostly open source library is not very profitable.
The problem with Qt is The Company. They have a tendency to cater to their own exclusive needs without bothering much with Linux. A bit like Mozilla. Yet they claim to be fully cross-platform. They pushed Qt 6 out with regressions.
Qt is undoubtedly interesting but really has steering issues.
The Glib+Gtk world chose to limit itself to a C ABI, which makes it far easier to interop with other languages than C++ as used by Qt.
This is the case both for statically compiled as well as dynamically interpreted language implementations; the latter can use automatically generated bindings via gobject-introspection, which has no equivalent in the Qt world, where all language bindings are hand-crafted at great effort.
It also means that the implementation behind the ABI can be replaced with a different language such as Rust, as has already been done with librsvg. On the other hand, Qt will forever be stuck with legacy C++ language, which appears designed to be nigh impossible to interop with.
And if you have a requirement to use C++, there is the gtkmm binding too, which doesn't require a separate language extension such as Qt's MOC to use.
Qt can trivially offer a C ABI if they want. Glib can't be type safe no matter how hard they try. People don't seem to have any trouble making Qt bindings for languages like Go, Python, Java, all of which you could say strongly prefer interop with C. Part of the reason is that Qt is written in a way that avoids use of more exotic features and templates for most things. MOC could be replaced by template magic nowadays, but I don't see any value in doing so - you will just make compilation slower and bindings with other languages harder.
I don't know anything about this cache you're referring to, but, is it so pervasive within the library that you can't easily do a patch to remove it? Would such a patch be accepted by upstream? If you're opposed to using the environment variable, those would be my next thoughts if you wanted to get back the ability to use valgrind et al.
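For anyone else following along: if the cache being referred to is GSlice (my assumption from context), the environment variable is G_SLICE, and running under it makes the slice allocator fall back to plain malloc so valgrind and friends see every allocation again. Roughly:

    /* run as: G_SLICE=always-malloc valgrind ./slice_demo
     * so that each g_slice_new() below becomes an ordinary malloc/free pair
     * valgrind can track, instead of going back into GSlice's free lists. */
    #include <glib.h>

    typedef struct { int x; int y; } Point;

    int main(void)
    {
        Point *p = g_slice_new(Point);
        p->x = 1;
        p->y = 2;
        g_slice_free(Point, p);
        return 0;
    }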
Another problem is that there is no type checking in the containers. So you have to cast everything to/from void pointers, even for basic structures like lists and arrays. This makes it easy to introduce tricky bugs.
The BSDs basically solved this problem decades ago: <sys/queue.h> and <sys/tree.h> provide type-safe core data structures sufficient for most C application needs. The amount of wheel reinvention and dependency complexity outside the BSD universe blows my mind. (Though, FreeBSD projects do have a greater tendency to complicate things, perhaps owing to the stronger corporate-induced feature chasing.)
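A rough example of what I mean: with <sys/queue.h> the head and the links carry the element type, so handing the macros the wrong struct is a compile error instead of a silent trip through void* (the struct here is invented for illustration):

    #include <sys/queue.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct job {
        int id;
        SLIST_ENTRY(job) entries;   /* embedded, typed link field */
    };

    SLIST_HEAD(job_list, job);      /* declares struct job_list */

    int main(void)
    {
        struct job_list jobs = SLIST_HEAD_INITIALIZER(jobs);

        struct job *j = malloc(sizeof(*j));
        j->id = 42;
        SLIST_INSERT_HEAD(&jobs, j, entries);

        struct job *it;
        SLIST_FOREACH(it, &jobs, entries)
            printf("job %d\n", it->id);

        while (!SLIST_EMPTY(&jobs)) {
            struct job *head = SLIST_FIRST(&jobs);
            SLIST_REMOVE_HEAD(&jobs, entries);
            free(head);
        }
        return 0;
    }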
The only real universal sore spot IME has been arrays and vectors. But nobody seems to pitch glib as a way to get a fast and ergonomic FIFO buffer. There are many other areas without simple, go-to solutions, but then that's the nature of C programming. Most of the C programmers I interact with are multi-language programmers, as opposed to many C++, Java, etc engineers who lean toward single-language, monolithic approaches.
I can understand using glib for GUI applications, considering it's already a requirement for Gtk, and because of the OOP emphasis in GUI programming. But IMNSHO, in most other areas the right reasons for selecting C as your implementation language are mutually exclusive with the need for cookie-cutter, void-pointer heavy data structure implementations a la glib.
EDIT: Removed outdated discussion of systemd + glib.
In addition, using void pointers harms performance, as this strategy often incurs additional heap allocations, hurts memory locality and prevents compiler optimization. Personally, I avoid glib like the plague.
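To make the locality point concrete, here is a toy comparison (made-up data, glib used only because it offers both styles): a pointer-based container costs one heap block per element, while a value-based array keeps everything in one contiguous buffer:

    #include <glib.h>

    typedef struct { double x, y; } Vec2;

    int main(void)
    {
        /* one heap allocation per element, elements scattered across the heap */
        GPtrArray *scattered = g_ptr_array_new_with_free_func(g_free);
        for (int i = 0; i < 1000; i++) {
            Vec2 *v = g_new(Vec2, 1);
            v->x = i;
            v->y = -i;
            g_ptr_array_add(scattered, v);
        }

        /* GArray stores the Vec2 values themselves in one contiguous buffer */
        GArray *packed = g_array_new(FALSE, FALSE, sizeof(Vec2));
        for (int i = 0; i < 1000; i++) {
            Vec2 v = { i, -i };
            g_array_append_val(packed, v);
        }

        g_ptr_array_free(scattered, TRUE);
        g_array_free(packed, TRUE);
        return 0;
    }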
I think it becomes an application specific or domain specific thing.
For example, I've done some work with audio or video. Nobody working on that goes straight to malloc on every packet or frame. It'd just be asking for pain.
But a general purpose allocator doing its own free-list on the assumption that libc is going to suck? I think that's outdated. If you do want to support it, I think it's better to allow the caller to replace the allocator through a function pointer, rather than just do it by default in a library.
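Something like this is all I have in mind, sketched with made-up mylib_* names rather than any real library's API; the default goes straight to libc, and callers who pool packets or frames can install their own hooks:

    #include <stdlib.h>

    typedef struct {
        void *(*alloc)(size_t size, void *user_data);
        void  (*release)(void *ptr, void *user_data);
        void  *user_data;
    } mylib_allocator;

    /* default: plain libc, no caching behind the caller's back */
    static void *default_alloc(size_t size, void *ud) { (void)ud; return malloc(size); }
    static void  default_release(void *ptr, void *ud) { (void)ud; free(ptr); }

    static mylib_allocator current = { default_alloc, default_release, NULL };

    /* callers with their own pooling strategy install replacement hooks here */
    void mylib_set_allocator(const mylib_allocator *a)
    {
        current = *a;
    }

    void *mylib_alloc(size_t size) { return current.alloc(size, current.user_data); }
    void  mylib_free(void *ptr)    { current.release(ptr, current.user_data); }

    int main(void)
    {
        int *p = mylib_alloc(sizeof *p);   /* goes through whichever allocator is installed */
        mylib_free(p);
        return 0;
    }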
One reason I can think of for wrapping malloc is that C is more likely to give you malloc than whatever OS-specific API you would otherwise use to get memory from the kernel (if you even have one!).
I think that 3/4 of their issues would probably be solved by a well-thought-out use of modern C++. I've seen too many C projects (including some of mine) start by swearing off C++ for one reason or another and end up reimplementing half of it with macros or poorly written hashmap implementations taken from somewhere.
Glib2 is an excellent example of how people shun C++ because of its complexity, while on the other hand implementing overly complex libraries to mitigate the fact that C is too barebones, which is oxymoronic to me. Either you say C is better because of its simplicity and you keep stuff simple, or you are just being a zealot for the sake of it.
Basically everything I can put my hands on supports C++; I've been running massive applications on embedded microcontrollers and it works just as well as C. If you don't like some features, just write C-style C++, use the C ABI and #include <> all the containers you need.
Absolutely. It makes perfect sense, and in an ideal world this would be the direction which would benefit many, if not most, projects using C and/or GLib2. Unfortunately, there are far too many people who are wedded to the philosophy that C is perfect for every task, and C++ is the devil. Despite the fact that GLib/GObject are more complex than C++, more error-prone than C++, slower than C++ and make static code analysis impossible (due to all of the unsafe typecasting).
A few years ago now, I ported a C GLib/GObject-based application to C++. In removing all of the unnecessary typecasts I found a couple of minor (but real) bugs which were previously hidden from the compiler. Simple use of real classes, along with basic containers like vector and map, was the vast majority of the C++ usage in the whole application. It benefitted greatly in becoming smaller, simpler, easier to read, easier to maintain, and having the compiler able to typecheck everything.
Like yourself, I've also used C++ on MCUs. Some vendors even provide an "Embedded C++" C++ subset you can use, which is "safe" for safety-critical real-time code. Works fine. A lot of C embedded projects would benefit from the extra safety it provides. So long as you don't go overboard with the features; stick to a simple and easy to understand subset.
My gateway drug to C++ was Turbo C++ 1.0 for MS-DOS, released in 1990 as per Wikipedia. I got my copy in 1992.
Basically C++ARM as the language standard, on a 386SX running at 20 MHz with 2 MB of RAM, but 640 KB was more than enough, right? :)
Our high school teacher, who gave us C classes with Turbo C 2.0, also had it around, so, as things went back in those days, I eventually got a copy.
My gateway drug to programming until then had been Turbo Pascal 6.0 and TASM, and C++ was in the same ballpark of features and culture for safer systems programming.
Never had any issues using it on that kind of PC, including with my own bounds-checked string and array classes, hardware that most modern MCUs can easily outperform.
Absolutely. Modern MCUs are phenomenal. More powerful than mid '90s top-end PCs. Add some external SDRAM and storage, and they have more and faster memory and storage as well. C++ is perfectly good. Better than all the nasty C macros in the vendor HALs, if you replace that nastiness with some type-safe enums and inline functions. There's plenty of capacity for running Python or Lua as embedded scripting languages even on smaller variants, both of which wrap C++ nicely. I'm fairly new to the embedded world, but so far it's been mostly a pleasure to write code for a variety of TI, ST and Nordic MCUs; some with C, some with C++.
I do see why people like C for embedded use; when it comes to hardware interaction you have complete visibility into all of the interactions with special registers. Looking through the disassembly when debugging is nice and straightforward. But C++ does this and more, so long as you don't go overboard with unnecessary complexity. It's fine with a bit of self-discipline, and all that extra bounds checking and such is of value.
Agreed in general on using C++ over C, or C-style C++ in constrained environments. One thing that is nice about using C, though, is that it is quite easy (compared to C++) to expose to other languages. Glib, and most libraries built on it like GTK, GStreamer, etc., can be used in languages such as Python, JavaScript and so on. And a lot of development of applications and tools has been moving to such languages over, say, C++.
So C is still useful for low-level libraries, due to being the lowest common denominator. Though arguably one could today write the library core in a better language like Rust maybe, and expose a C API/ABI from that. Rsvg is growing into an example of that.
C++ seems difficult to embed because there is no standard way to expose features that don't map well to the C calling convention. Just use `extern "C"` if you want a C API. For everything else you'd have to commit to an application binary interface first, such as GObject, COM or the CLR. You need these to define calling conventions and semantics (such as initialization, exceptions, memory and resource management) well enough that other languages can bind to them.
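For example, a facade header along these lines (names invented for the sake of the example) is the whole commitment needed on the C side, while the implementation behind it can be C++ or anything else:

    /* mylib.h -- a made-up C ABI facade over a C++ implementation */
    #ifndef MYLIB_H
    #define MYLIB_H

    #include <stddef.h>

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* opaque handle: the real type lives on the C++ side */
    typedef struct mylib_parser mylib_parser;

    mylib_parser *mylib_parser_new(void);
    int           mylib_parser_feed(mylib_parser *p, const char *buf, size_t len);
    void          mylib_parser_free(mylib_parser *p);

    #ifdef __cplusplus
    }
    #endif

    #endif /* MYLIB_H */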
Do you mean glib? It is in every server and desktop Linux distro, including those with commercial support from RedHat, Canonical and SUSE. For embedded Linux devices glib is included in Ubuntu Core from Canonical, as well as Wind River Linux from Intel.
The GStreamer SDK includes glib and is supported by Fluendo and Collabora on Android, Windows and Mac.
There is one significant thing that C has and C++ has not: a stable ABI. This is particularly important if you want to create a library that can be shared and used by everyone. You cannot do that with C++ without a lot of caveats. Google, for instance, forbids creating libraries in C++, for good reasons. Creating a C++ library with a C ABI sounds stupid. I.e. if the library's functions and types can be presented as simple C, you have already folded down to C, C++ is just an implementation detail, and you will probably also have to statically link libstdc++. That's no longer a small, simple library anymore. The stable ABI also makes C famously easy to include and interop with from other languages and frameworks. Swift, for example, can easily use C libraries and code but cannot interop with C++ (yet; apparently that's a work in progress).
C++ was already everywhere in desktop computing back in the '90s, with Apple, IBM, Microsoft and BeOS adopting it in their frameworks, and even on mobile with Symbian.
Then FOSS happened, with its manifesto to use C for portability; the war on KDE due to licensing gave rise to Gtk and the related ecosystem, and here we are.
Thankfully I learned C++ on MS-DOS and became enlightened; even with its 640 KB limit it was already so much better than primitive C with regard to type safety and generic code, and yes, RAII was already a thing.
The main practical benefit of C is the simplicity and stability of its ABI.
There are currently 8 different language bindings listed on LibVirt's website. It's not clear to me that this situation would improve by switching to a language as notoriously difficult to interop with as C++.
The point is, C++ supports the C ABI. You can write your entire application in C++ and expose only a C-style API with the stable ABI you wish. I've done it a million times and it's absolutely fine.
> which parts of libstdc++ can you safely use without exceptions?
Almost all of it, as long as you're happy to abort on memory allocation failure - which, according to the article, libvirt is now willing to do.
In fact for me, that's one of the main questions to ask when deciding between C and C++ for a project. Is it OK to abort on allocation failure? If so, use C++ (without using exceptions). If not, use C.
> These problems are common to many applications / libraries that are written in C and thus there are a number of libraries that attempt to provide a high level “standard library”. The GLib library is one such effort from the GNOME project developers that has long been appealing.
I've been a fan of glib for some time, other than glib and apache libapr, what other "high level standard libraries" for C should I know about?
> Netscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions. The API is used in the Mozilla clients, many of Red Hat's and Oracle's server applications, and other software offerings.
I can't offer extra recommendations, but just wanted to offer full agreement.
GLib is the awesome standard library I get to use everywhere thanks to gobject-introspection: from work projects in Vala to my window manager¹, image viewer² and video player³. It is especially useful for Lua-configurable projects, given the sparsity of the language itself.
Incidentally, mpv is extensible enough to replicate sxiv's features with the right configuration and scripts. I like to think of mpv as the emacs of multimedia.
There is QtCore, which is more similar in scope to glib: just containers, JSON and a few more useful bits. It's a sort of alternative C++ standard library.
Personally, I wonder why people today would choose C and Glib over C++ for systems programming. I could understand why not Rust, but having had to deal with glib in the past, it's such a pain.
As a mostly C and sometimes Rust programmer, I don't know why I'd ever reach for C++ instead of Rust these days. If I'm going to take on the mental complexity of these big languages, I'd rather have Rust's safety properties. (Not to mention, superior, portable, single-vendor standard library features — something C++ struggled with for a long time and probably still struggles with.)
Using C++ in moderation, without getting too crazy with classes, multiple inheritance, lambdas and other such things, works very well if you want to port a legacy C code base.
For a new project, there are many choices: rust, go etc.
Even the Qt folks accepted, though, that for event-loop integration between glib and Qt code, it was Qt that provided a way to run on the glib event loop, not vice versa (because Qt doesn't offer a sufficiently hookable event-loop abstraction).
Qt is more like Gtk than like glib, although it does come with a fantastic core library.
But if you can move to C++, the standard library already offers great replacements for most glib features. Glib exists mostly because the C standard library is lacking in many aspects.
That's only partially true. You'd really need something like Boost to cover a lot of the things that glib provides for C. glib covers many, many things that are outside the scope of both the C++ and C standard libraries.
In the same way that using glib does not mean you are forced to use Gtk, you can use pieces of Qt without pulling in the GUI library. That's why I mentioned "non-GUI" explicitly.
I sure love gvoid and gint.
I also love my program crashing instead of returning an error when I use glib lists and its internal malloc invocation fails.
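To be fair there is g_try_malloc() when you want a NULL back, but the point stands for the container API itself: something like g_slist_append() has no failure path at all, so an allocation failure inside it simply aborts the process. A small sketch of the distinction:

    #include <glib.h>

    int main(void)
    {
        gpointer buf = g_try_malloc(64);    /* returns NULL on failure, caller decides */
        if (buf == NULL)
            return 1;

        GSList *list = NULL;
        list = g_slist_append(list, buf);   /* the new node is allocated internally;
                                               if that fails the process just aborts,
                                               there is no error return to check */
        g_slist_free(list);
        g_free(buf);
        return 0;
    }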