I did not know that Pascal is still in use to this day. What kind of freelancing projects are in demand? I mean in Delphi/Pascal, if you do not mind my asking.
Anything a business needs. The idea is to actually hunt for a business owner who has problems and doesn't care about the programming language you use to solve them. When I quote the estimated hours for some application they need in Delphi vs React or C# or whatever other programming language they heard through the grapevine is good for them, they always choose Delphi. In the end they care about problem solving, not the tech behind it.
OOP looked really neat back when computers had one CPU, very little RAM, and no easy in-process concurrency.
OOP made a few genuinely good things popular in the common software practice: modules, public vs private interface separation, the idea of interfaces. Other features, like inheritance, or encapsulated mutable state, proved to be more problematic, both in general, and in the modern computing environment in particular.
UML, at the time, was exceedingly useful if you just used the whiteboard version of it. Being able to clearly communicate with another developer by drawing a class diagram and having each line, box, and cardinality mean something precise was very helpful.
UML as a programming language was awful. UML as a specification language was often even worse.
There is some connection between object-oriented programming and desktop GUI paradigms that I remember being written about in computer magazines at the time, but which is never mentioned now. Like, if you want to delete this document, you drag it to the trashcan! Object oriented!
Am I right or do I remember it wrong?
Not clear to me how that's object-oriented. You're not sending a "put yourself in the trashcan" message to the document, or sending a "pull the document to yourself" message to the trashcan. If anything, you're invoking the environment's "drag" function with two arguments, "document" and "trashcan"; that's object-oriented only insofar as the environment contains every function and can be sent messages to invoke them.
To continue the over-analysis in good faith: I think the trashcan example points out the inheritance of draggability. Any file object in a directory gets it by default, without having to think too much about it.
Then the trashcan inherits from a directory, making it able to have things dropped on it. But it overrides the action taken, from movement to deletion. That maps "nicely" onto the human mind.
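To make the metaphor concrete, here is a loose sketch in Rust terms; there's no real inheritance here, just a trait with a default method that one type overrides, and the names DropTarget, Folder and Trashcan are made up for illustration:

    trait DropTarget {
        // Default behaviour every drop target gets for free.
        fn receive(&self, item: &str) {
            println!("moved '{item}' into this folder");
        }
    }

    struct Folder;
    struct Trashcan;

    impl DropTarget for Folder {} // keeps the default "move" behaviour

    impl DropTarget for Trashcan {
        // Same gesture, different meaning: dropping here deletes.
        fn receive(&self, item: &str) {
            println!("deleted '{item}'");
        }
    }

    fn main() {
        Folder.receive("report.txt");
        Trashcan.receive("report.txt");
    }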
> If anything, you're invoking the environment's "drag" function with two arguments, "document" and "trashcan";
A perfectly valid alternative description. But having a do-it-all environment with every function doesn't sound OOP to me.
I have to agree. I saw FSF members argue with uutils coreutils [0] developers about having their project licensed under MIT. And for no good reason, they started attacking the project and the people behind it.
I mean, if it continues to get better, it will be just a matter of time till we see a cloud-provider-specific usershell that is closed source and built on top of the Rust rewrite.
And naturally it's going to have its own features and options, diverging from the GNU standard.
Then we start seeing every provider doing the same, out goes the GNU standard, and welcome to proprietary cloud Linux.
Okay, not yet. I don't think RISC-V rivals you in any market except micro-controllers. However, it's only a matter of time before RISC-V dominates every market, due to its openness and lack of licensing fees.
>I don't think RISC-V rivals you in any market except micro-controllers.
I used to think like this. Then I saw the sad state of mobile SoCs[0], and learned that Cortex-A55 is pretty much still the peak, with full awareness of U74[1] and U74-MC[2], which is considerably better (less area, faster, and more efficient to boot).
Considering that P650[3] has existed for a while, competing with ARM's performance-focused cores, and X280[4], basically a U74 with Vector, also exists today, I simply do not see 2023 passing without a pack of new SoCs based on these cores (in particular, I expect to see X280 pop up everywhere), and phones based on these SoCs. Android was already demonstrated on RISC-V years ago, and a lot of effort is being put into polishing and upstreaming this work.
That's how immediate we're talking. As for the ARM story we're discussing, it's clear to me: SoftBank wants to sell ARM ASAP, and that's about the only thing they care about. They thought that making this statement would help them, but I am not sure it does.
This is not a standard memory leak, and would not have been avoided by using rust.
Edited and re-edited: I was too quick to presume the commenter was just spouting the common “rust is a panacea” theme. Kernels are all about “unsafe” concurrent access and reentrant code, so Rust is not a panacea. For this case of multi-threaded/multi-process access (presumably from ring-0 kernel code accessing shared kernel memory), using Rust primitives to help prevent race conditions could make sense (smart pointers), because the code is unlikely to be performance sensitive and the feature is there to protect against a fairly extreme corner case (crazy ad hoc GC for a cyclic graph of processes sending each other file descriptors). Reliable discussion on Rust for kernel drivers here: https://security.googleblog.com/2021/04/rust-in-linux-kernel... Disclaimer: neither a kernel nor a Rust dev. In the past I dabbled with embedded kernel debugging. I keep tweaking this edit, because it is complicated!
By my understanding, Rust's ownership model would prevent concurrent access to the socket buffer garbage collector data structures without proper synchronization, which was the source of this bug.
This is in fact an example of a class of bug that Rust's compiler is uniquely able to protect from - other memory safe languages don't make guarantees about concurrent accesses at all - at least not Java, C#, Go, Python, Haskell, OCaml etc. Perhaps Ada does have something?
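As a minimal sketch of what that guarantee looks like in practice (GcState and its field are invented stand-ins, not the kernel's actual data structures):

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};
    use std::thread;

    // Hypothetical stand-in for the in-flight fd bookkeeping.
    struct GcState {
        inflight: HashMap<u64, u32>,
    }

    fn main() {
        // Sharing a bare `&mut GcState` across threads is rejected outright:
        //
        //     let mut state = GcState { inflight: HashMap::new() };
        //     thread::spawn(|| { state.inflight.insert(1, 1); }); // does not compile
        //     state.inflight.insert(2, 1);
        //
        // The version the compiler accepts: shared ownership plus a lock.
        let state = Arc::new(Mutex::new(GcState { inflight: HashMap::new() }));

        let handles: Vec<_> = (0..4)
            .map(|i| {
                let state = Arc::clone(&state);
                thread::spawn(move || {
                    state.lock().unwrap().inflight.insert(i, 1);
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("tracked entries: {}", state.lock().unwrap().inflight.len());
    }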
This CVE appears to be due to a race condition despite using atomics, so likely this could've happened in Rust code. Really, to implement this sort of GC, I'd wager that unsafe Rust would also be required unless an entirely different algorithm was used.
Also this is kernel code running in a kernel context, so the code can’t just use std::sync::{Arc, Mutex}[1] because Mutex uses user-space pthread_mutex_lock[2]. The implementation is gnarly™ multi-threaded kernel code, so it would probably require unsafe custom code (rather than using kernel locking and concurrency-management mechanisms, which were not used by the existing code for presumably valid reasons). The Rust code could easily have had the same fault. Excerpts from [3]:
> The VFS layer is a complicated beast; it must manage the complexities of the filesystem namespace in a way that provides the highest possible performance while maintaining security and correctness. Achieving that requires making use of almost all of the locking and concurrency-management mechanisms that the kernel offers, plus a couple more implemented internally
> the kernel may find itself with a set of in-flight Unix-domain sockets that are only referenced by unconsumed (and unconsumable) SCM_RIGHTS datagrams; at this point, it has a cycle of file structures holding the only references to each other.
> there is more complexity than has been described above and some gnarly locking issues involved in carrying out these operations. See Viro's message for the gory details.
Today the Rust std::sync::Mutex type (on Linux) just uses a futex, not the unwieldy pthread_mutex_lock (which on Linux ultimately has a futex inside it anyway).
This is why Rust's Mutex<[u8; 16]> (a Mutex protecting an array of 16 bytes) is significantly smaller than C++ std::mutex (which doesn't protect anything itself). This was already true on Windows, and for a few months it has been true on Linux too.
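If you want to check the size claim yourself, something like this is enough; the exact number depends on the platform and Rust version, so it's printed rather than asserted:

    use std::sync::Mutex;

    fn main() {
        // Futex-based Mutex on Linux: a 32-bit atomic plus a poison flag,
        // with the protected data stored inline right after it.
        println!(
            "size_of::<Mutex<[u8; 16]>>() = {} bytes",
            std::mem::size_of::<Mutex<[u8; 16]>>()
        );
        // For comparison, C++ std::mutex on glibc wraps pthread_mutex_t,
        // which is 40 bytes on x86-64 Linux and protects no data itself.
    }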
But you're correct that Rust for Linux doesn't have std, and so doesn't have std::sync::Mutex. However it does have kernel::sync::Mutex which is reminiscent of the standard library Mutex, e.g. Mutex<T> is a thing, locking defaults to giving you a guard with access to the protected contents, and so on. But being the kernel it has unsafe methods that look way more dangerous than I'd be comfortable with. The Linux kernel already needs a mutex type (in C) and so kernel::sync::Mutex<T> builds on that.
That's my guess too. These GC data structures need to be accessed from multiple threads, if I understood TFA, which means they won't compile normally in Rust. That is exactly Rust doing its job and preventing bugs, but it means that the developer then needs to use unsafe (or find a workaround with runtime checks, at the cost of overhead).
You're absolutely right, I had initially understood that recvmsg with the MSG_PEEK flag was concurrently accessing GC data structures and presumably corrupting them.
Instead yes, this is essentially a logic bug in the presence of concurrency, and no programming language can help with those. It would have happened just as well with Software Transactional Memory or with Erlang message passing.
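For what it's worth, here is a toy illustration of that kind of logic race in safe Rust: every individual operation is atomic, the check-then-act sequence is not, and the compiler is perfectly happy with it (the `claimed` flag is an invented example, nothing to do with the actual GC code):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let claimed = Arc::new(AtomicBool::new(false));
        let mut handles = Vec::new();

        for id in 0..4 {
            let claimed = Arc::clone(&claimed);
            handles.push(thread::spawn(move || {
                // Check...
                if !claimed.load(Ordering::SeqCst) {
                    // ...then act. Another thread can slip in between the two,
                    // so more than one thread may "win". No data race, though.
                    claimed.store(true, Ordering::SeqCst);
                    println!("thread {id} thinks it won");
                }
            }));
        }
        for h in handles {
            h.join().unwrap();
        }
        // compare_exchange would turn check-and-claim into one atomic step.
    }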
> By my understanding, Rust's ownership model would prevent concurrent access to the socket buffer garbage collector data structures without proper synchronization
Possibly. But the first question is whether the person writing this in Rust would have used unsafe. Without knowing more details here, it's hard for me to guess.
> other memory safe languages don't make guarantees about concurrent accesses at all - at least not Java
Well, Java does have synchronized methods. Those lock on the whole object. You can imagine writing a "manager" class that encapsulates all the GC data structures here, and that would have made this perfectly safe in Java using existing language features.
Of course, that would have been slower - so, again, it is tempting to use unsafe approaches, even in a memory-safe language like Java, but then you do risk bugs like this.
But of course I do agree that Rust, even with some amount of unsafe, would be a far safer language than C!
The difference though would still be that, if they don't use unsafe or proper synchronization in Rust, their code won't compile. In Java, their code will compile just the same whether they use `synchronized` or not.
Of course the Rust compiler can't force you to write correct synchronization, but it can at least prevent you from forgetting about synchronization entirely.
D sort of does. We have a type qualifier for shared data that is picky about accesses, but it's not completely there yet, i.e. it still requires some knowledge.
Proprietary compilers, plus some limitations in the type system, meant the standard library wasn't as useful, even though it is a safer language as a result. The verbosity also turned me off initially.
C and C++ also only had proprietary compilers, mostly.
However both were born alongside UNIX and that helped C++ to be quickly adopted by all major C compiler vendors, whereas Ada was always something extra to pay on top.
When targeting UNIX, with C and C++ compilers on the box, who is going to pay extra for the Ada compiler unless required to do so?
> C and C++ also only had proprietary compilers, mostly.
I was thinking more in the 90s where GNU already had a freely available C compiler, but GNAT didn't get a free version until the late 90s, and even then it was built on a fork of gcc you had to download separately until like 2000. It was just a lot of work to get up and running, as opposed to the bundling of C/C++ compilers with Linux that was common, as you say. The initial C compatibility helped C++ a lot too.
GCC only took off because Sun decided to split their UNIX into user and developer editions, and other UNIX vendors followed.
Still, the same rule applies: when a UNIX shop paid for UNIX developer tooling, languages like Ada and Modula-2 usually weren't in the box; you needed to pay extra.
Ada has some really good synchronization primitives, I don’t know if that would have helped here as I haven’t looked at the problem that closely. I work at a much higher level so I lack the experience at this level. Ada was phased out for C++ primarily because the devs are cheaper.
It would be impossible to say because it depends on the hypothetical Rust implementation. A kernel needs a huge amount of unsafe, all of which is surface area for these types of bugs.
Amen brother. Most people will claim that Rust would probably take years to compile on 30 year old hardware, but I say to them "why is your heart so full of doubt?". You have to believe.
The more you believe and trust Rust, the more limitless your possibilities become for your family, your career and your life!
What is the benefit of having multiple compilers for programming languages? Is there a scenario where a GCC compiled rust program would do something that an LLVM one can't do?
Doesn't this cause fragmentation in the rust ecosystem?
P.S.: I understand that people can work on any project they want. And I don't have the right to tell them not to. I'm just curious about the technical reasons for having multiple compilers.
1. GCC has more backends than LLVM.
2. Competition is good in general.
3. I expect this will trigger inconsistencies between GCC and rustc, because Rust doesn't really have a specification, which will force both parties to discuss and resolve them.
Being on gcc, a long-lived platform, also helps ensure the survival of the language even if development of the current compiler (or LLVM) dies or withers.
> I expect this will trigger inconsistencies between GCC and rustc, because Rust doesn't really have a specification, which will force both parties to discuss and resolve them.
More likely that GCC will have to follow all the bugs and quirks of rustc, or nobody will use GCC for Rust.
One advantage is it forces the language to articulate standards instead of the implementation defining the feature set. Standards tend to give stability and longevity to the language, as well as making it possible to write new compilers and making the language more portable.
Ah, a Lisp user. Common Lisp is Exhibit A for standardization. Every Lisp user claims it is great because of either standardization of advanced features in the days when the Berlin Wall had barely fallen or the mere existence of macros. No real first-party improvement to the language in almost three decades after ANSI standardisation. Massive fragmentation in the compiler ecosystem, rarely do libraries work out of the box on non-SBCL tooling. Yes, I can definitely see the advantage of standardization now, very much so.
> No real first-party improvement to the language in almost three decades
That sounds like a benefit to me :)
However, I think that's due to the general lack of interest in Lisp. You can see the C++ community has a similar ANSI standard and updates it every few years.
> Massive fragmentation in the compiler ecosystem
I wouldn't call it massive. They are pretty consistent, up until things like POSIX and FFI APIs. Let's agree there is some fragmentation. Isn't this still a better situation than if nothing was guaranteed?
Yup I have had the same exact experience with Common Lisp and every time somebody talks about standardization being so great I think back on this.
Scheme is suffering from the same issues. Scheme is standardized, but implementations end up being incompatible with each other in subtle ways, and the level of fragmentation is very painful. Scheme does get some updates, unlike CL, I guess, but all of the implementations either don't implement the modern standard, don't have useful extensions for real world programming, or are simply immature and don't have enough people working on them to get them into a nice state. In practice it's very difficult to use Scheme for anything non-trivial because of these issues.
I would much rather have no standard at all, and a single high quality implementation that everyone targeted instead of the current mess for both CL and Scheme. Until we get a new dialect that solves these issues Lisp is going to be more or less dead and irrelevant.
> I would much rather have no standard at all, and a single high quality implementation that everyone targeted instead of the current mess for both CL and Scheme.
That would be Chez Scheme [0], maintained actively by Cisco, a company that you may have heard of - who also use the language extensively.
Racket is being ported to run on Chez, because it is the industry standard, performant, and rock solid.
The GNU alternative to Chez is Guile. Emacs can run with Guile, and Guix is built on it. It's got a fairly large community.
Outside of Chez and Guile, there are implementations and communities, but comparatively, they're tiny. Those two are the only big names you need. Like GCC and Clang for C. There are other C compilers. But you only need to know those two.
Language standards simply lack such strong requirements. C/C++ even have the specific "linkage" concept to abstract away the binary details behind the source form. And as you may know, many libraries are distributed as binaries.
Standards that imply binary compatibility rules are about the ABI (application binary interface), which usually depends on the ISA (instruction-set architecture) or the OS (if any) being used. You cannot have a single unique one once multiple ISAs/OSes are supported. Even if you only want to rely on some external "exchange" representation not tied to specific ISAs, there are already plenty of candidates: CLI, JVM, WebAssembly... Plus there is more than one widely used executable (and mostly runtime-loadable) image format (PE/COFF, ELF, Mach-O...). You will not get the unique combination, and any attempt to ensure it "works across compilers" that way will likely just add a new instance not fully compatible with the existing ones, making things more fragile.
Standards are often incomplete, or full of "implementation specific" behaviour, since AIUI standards often end up catering to implementations instead of the other way around (for example, with the C/C++ standards you can read a plethora of blog posts about people's experiences trying to contribute to them, and some of the hurdles are related to how strongly tied to existing implementations they are). That means you can often have two "standards compliant" compilers that are wildly different. Another reason is compiler extensions. Sometimes a compiler is "standards compliant" but also implements a superset of the standard (sometimes by default, sometimes under a flag), which means code gets written for that superset instead of according to the standard (for example, the Linux kernel and GCC extensions to C).
Typically, some library features contain things that require cooperation with a specific compiler to work correctly. Consider something like std::is_standard_layout in C++, or java.lang.Object in Java, or std::panic::catch_unwind in Rust.
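catch_unwind is a nice illustration of that cooperation: it only works because the compiler emits the unwinding machinery the library call relies on, which a library alone could not retrofit:

    use std::panic;

    fn main() {
        // The closure panics, but the unwind is caught thanks to the landing
        // pads the compiler generates for the standard library to hook into.
        let result = panic::catch_unwind(|| {
            panic!("boom");
        });
        assert!(result.is_err());
        println!("caught the panic, still running");
    }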
That sounds a little circular. The benefit of alternate compilers is that it makes making alternate compilers easier.
For stability, compilers already have a large incentive not to break old programs. For longevity I don't really see how a standard affects it that much. For being more portable you do not need an entirely new compiler.
Being able to specify a language outside an implementation is extremely useful to prevent hidden logical inconsistencies between different parts of the language, and makes the language more robust.
It also allows people to design new backends (looking at CUDA LLVM backends) by finding out the right abstraction to support performance. For example, implementing a C or C++ compatible CUDA backend required the C++ committee to make changes to the memory model / consistency guarantees of C++ atomics.
If C or C++ had only depended on compiler implementation for it, then there would have just been different implementations with different guarantees with no consistencies between them, and no single way to even define why they were different.
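Rust's atomics follow the same C++-style memory model, so the point translates directly: the release/acquire pair below only means anything because the standard pins down the guarantee for every conforming implementation (toy example with invented names):

    use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let data = Arc::new(AtomicU64::new(0));
        let ready = Arc::new(AtomicBool::new(false));

        let producer = {
            let (data, ready) = (Arc::clone(&data), Arc::clone(&ready));
            thread::spawn(move || {
                data.store(42, Ordering::Relaxed);
                ready.store(true, Ordering::Release); // publish
            })
        };

        let consumer = {
            let (data, ready) = (Arc::clone(&data), Arc::clone(&ready));
            thread::spawn(move || {
                while !ready.load(Ordering::Acquire) {} // wait for publication
                // The Release store above happens-before this load, so 42 is visible.
                assert_eq!(data.load(Ordering::Relaxed), 42);
            })
        };

        producer.join().unwrap();
        consumer.join().unwrap();
        println!("release/acquire handoff observed the published value");
    }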
The reason it isn't circular is that if there is one implementation, even with good documentation, there will inevitably be lots of corner cases where the implementation does something, but it isn't written down anywhere. Independent implementations will discover many of these issues and they get clarified as part of the standards process.
So you can't really produce a high quality standard with only one implementation. You'll miss important details.
Rust is "A language empowering everyone to build reliable and efficient software." (from the home page)
Reliability at the extremes (where it may even be a life or death situation) requires the developer to know exactly what the program they are writing expresses in Rust.
Right now Rust often limits what it does to what is supported by LLVM. An example is the become statement. This is a reserved keyword which will eventually act as a jump to a function without saving a return address; the current stack frame becomes the new one. This is tricky since things like destructors still need to work. It is only recently that LLVM supported this well, and Clang did it first. Separate implementations, and having a standard or some other form of communication between implementers, can help with these delays.
EDIT: Clang's version is the attribute 'musttail,' if anyone is interested.
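Since `become` isn't implemented yet, any sketch has to use current Rust; the commented line just marks where the guaranteed tail call would go (countdown is an invented example):

    fn countdown(n: u64) -> u64 {
        if n == 0 {
            return 0;
        }
        // become countdown(n - 1);   // hypothetical: reuse the current stack frame
        countdown(n - 1) // today: an ordinary call, so deep recursion can overflow
    }

    fn main() {
        // Small enough input that the plain recursive version finishes without
        // depending on the optimizer happening to turn it into a loop.
        println!("{}", countdown(10_000));
    }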
There's also political/legal considerations: The modern GCC codebase is derived from the egcs codebase, which forked off the original GCC codebase because the developers of GCC at that time didn't want to prioritize faster development speed:
Doesn't GCC support more architectures than LLVM? Wasn't that the issue a while back with the Rust dependency that a cryptography module for Python introduced?
You already need an existing C++ compiler to bootstrap GCC, though, so I don't see how this is much different. Plus there is already mrustc, a C++ Rust implementation specifically designed for bootstrapping.
One big benefit in this case is that bootstrapping gcc is somewhat easier than rustc, and presumably gcc-rust can then be used to compile rustc if needed.
In addition to what others have said, since there is a plan to introduce Rust kernel modules into the Linux kernel, being able to compile with GCC helps avoid dependence on another toolchain, which was something I've seen mentioned as a concern w.r.t. Rust in the kernel.
To be clear, while it's a concern some people on the internet have expressed, it's not an actual problem for landing the current work to get Rust in as a framework for writing drivers.
> What is the benefit of having multiple compilers for programming languages?
I'll give you one, or two, depending on what you'd like to count. At some point in the conceivable future we'll be able to compile some meaningful Rust code base with both compilers and measure: a) how long it takes to compile, and b) the performance of the compiled code.
Obviously that will induce what it always has: incentive to improve.
>What is the benefit of having multiple compilers for programming languages?
Rust will need a standard.
The main reason why I don't take it seriously is that code written 5 years ago will often not compile today. For a language that pretends to be a systems language that is a non-starter. If you can't guarantee a 40 year shelf life of your code then no one working on systems cares.
People working on systems in the wild don't have the brain power to learn a new tool chain every decade, let alone every year. They are solving real problems and not writing blog posts.
> code written 5 years ago will often not compile today.
citation needed. Yes, there's a few programs that relied on unsound things for which this is true, but that's a relatively small part of the overall amount of code.
Doesn’t every implementation of a new function on a standard type possibly break existing code?
For example, if I have a trait Foo with a function bar, and I impl Foo for HashMap, and then a new version of std comes out that names something HashMap::bar, now every call to my_map.bar() is ambiguous
In that specific case, the inherent impl is preferred, so there's no ambiguity. However, this can still cause a breaking change if the inherent impl has a different type signature than the trait.
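A small illustration of that resolution rule; Wrapper and Foo are made-up names standing in for "my trait on a std type that later grows an inherent method with the same name":

    trait Foo {
        fn bar(&self) -> &'static str {
            "trait Foo::bar"
        }
    }

    struct Wrapper;

    impl Wrapper {
        // Imagine this inherent method appearing in a new library release.
        fn bar(&self) -> &'static str {
            "inherent Wrapper::bar"
        }
    }

    impl Foo for Wrapper {}

    fn main() {
        let w = Wrapper;
        // Method resolution prefers the inherent impl, so this silently stops
        // calling the trait method once the inherent one exists.
        println!("{}", w.bar()); // "inherent Wrapper::bar"
        // Explicit syntax still reaches the trait method:
        println!("{}", Foo::bar(&w)); // "trait Foo::bar"
    }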
And yes, there are tons of things that can subtly break code. That's why the Rust project runs the entire open source ecosystem's tests as part of the testing process for the compiler. It's not all of the code in existence, but it's pretty good at flushing out whether something is going to cause disruption or not.
In practice, the experience that the vast majority of users report to us is that they do not experience breakage when upgrading the compiler.
> In that specific case, the inherent impl is preferred, so there’s no ambiguity.
That is even worse. It means your code can silently start doing the wrong thing rather than erroring, if the inherent impl does something different from the trait.
> And yes, there are tons of things that can subtly break code.
This isn't some obscure bug in some deep edge case though, it's a completely normal and common way of using the language (implementing your own traits on foreign types) predictably leading to breakage in an obvious way. I am not sure why it should be called "subtle".
Anyway, given this issue, I think the meme that Rust is backwards-compatible is really oversold. It'd be more honest to frame it as "we hope releases are backwards-compatible, but we like adding new functions to the stdlib, so no promises" rather than marketing BC as a major selling point as is done now.
> In practice, the experience that the vast majority of users report to us is that they do not experience breakage when upgrading the compiler.
It's anecdotal for sure, but I'm personally aware of times when the exact situation I'm describing has happened and caused headaches for people.
>but that's a relatively small part of the overall amount of code.
Yes and?
Systems programming isn't front end JS work where breaking things doesn't matter. It's no surprise that Rust came out of the browser space. Only people who don't take their work seriously could ever think the above is a justification and not a red flag for never using it.
I welcome Rust becoming ossified in GCC so I can build 30 year old code without modification like I can in C. Until then, it's a toy for people with more time than responsibility.
A language with one implementation can't really be said to have a specification.
It may have very detailed accompanying technical documentation of what the implementation is supposed to do, and that may be called a specification, but a specification only deserves the name once there are at least two implementations.
This is technically incorrect. A programming language can be designed with a specification in mind, even a formal one (e.g. SML). It is just that the specification is unlikely to be effectively verified before more than one real implementation has landed, if it is not formally verified. (Anyway, verification by testing of existing implementations _is_ the fallback where people cannot afford the cost of formal methods.)
It helps ensure that the language standard is relevant, and it can help separate the standards committee from the compiler developers, helping bring more interest groups to the table (since they're not entirely beholden to a single compiler dev team).