When you add those to the mix you're dealing with destructors, ownership, etc. so I don't see anything wrong with using stuff like unique_ptr/shared_ptr.
And C tends to reinvent object oriented programming with each library so you might as well use language defined classes.
So I don't really see the point.
You don't need to use stuff like concepts, or "showing off how smart I can be with template libraries" (looking at boost), but C++ has a lot of features that make it much more productive than C.
Yeah, that's basically how I use it for embedded programming: C with classes, RAII, better strings, and smart pointers :). I generally avoid rolling my own pointers and anything much fancier than the simpler algorithms/sorts/etc.
C# still doesn't deliver on all platforms where there is a JVM available, and has yet to even offer something as portable as Swing across all those platforms.
Kotlin really only matters on Android.
In fact, right now we are having issues with .NET RFPs, because everything that comes through the door is a Java-related RFP.
Not really - Java designers were just plain wrong and it took forever to admit it - the lack of things like local type inference (var) made the language pointlessly verbose for ages, and the absence of simple features like lambdas made the standard libraries terrible.
There was an 8-year gap where C# users could use things like enumerable operators while Java users were stuck in the stone age of writing loops for collection manipulation in a high-level language ...
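To make that concrete, here's a rough sketch (made-up data, illustrative only) of the kind of pipeline C# users had with enumerable operators while the Java side was still writing loops:

    using System;
    using System.Linq;

    class Example
    {
        static void Main()
        {
            int[] scores = { 42, 97, 63, 88, 15 };

            // Filter, transform, and sort in one expression instead of three loops.
            var topScores = scores
                .Where(s => s >= 60)
                .Select(s => s * 2)
                .OrderByDescending(s => s)
                .ToList();

            Console.WriteLine(string.Join(", ", topScores)); // 194, 176, 126
        }
    }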
Java is stable, and evolves with a huge amount of thought (they even employ theorists to model the impact of changes to the Java language).
The benefit of this is a long-term platform, and an excellent implementation. Due to this Java has become the platform where the majority of VM research is done, which means Java has many excellent state-of-the-art garbage collectors and a JIT a level beyond that of C#.
Java has a long history of saying “we don’t support that feature because it’s d-a-a-a-angerous.” Personally, I think that’s an insult to Java programmers, but Java programmers don’t seem to take it the same way. They’re happy that somebody at Sun or Oracle is able to keep all the sharp corners away from them.
You know, coming from Java and seeing new languages like Go say the same thing except for features that Java had essentially forever is just maddening.
I’m annoyed that things aren’t “complex,” “brittle,” “difficult,” etc.; they’re just “dangerous.” They’ll draw blood through the computer screen. I know programmers who have actual fear of pointer arithmetic even though they have no idea what it is.
To be fair, Java has a lot to recommend it. For one, there are, of course, an incredible number of packages available to build off of. But I find it incredible that the language that replaced finalizers with phantom references still says pointers are too dangerous for programmers.
>which means Java has many excellent state-of-the-art garbage collectors and a JIT a level beyond that of C#.
Unfortunately this isn't enough to make up for the latency hit from not being able to directly stack allocate things like C# allows (especially with its recent addition of Span<T>).
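As a rough illustration of what that looks like on the C# side (a minimal sketch, not production code): stackalloc puts the buffer on the stack and Span<T> gives safe, slice-able access to it, so nothing touches the GC:

    using System;

    class SpanDemo
    {
        static void Main()
        {
            // Stack-allocated buffer: no heap allocation, no GC pressure.
            Span<int> buffer = stackalloc int[8];

            for (int i = 0; i < buffer.Length; i++)
                buffer[i] = i * i;

            // Slicing is also allocation-free.
            Span<int> tail = buffer.Slice(4);
            Console.WriteLine(tail[0]); // 16
        }
    }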
Can automatic scalar replacement turn my ArrayList<Pair<Long, Double>> into the equivalent of std::array<std::pair<int64_t, double>, N> (or Span<Tuple<long, double>>)?
It doesn't turn it into the equivalent of another data structure, and it doesn't allocate it on the stack, that's why it's better. It turns each value in the collection into a dataflow edge which then goes into a register, or the stack, or anywhere.
Java designers don't follow fashion; they would rather wait to see what actually works once the dust settles, and figure out how to keep those binary-only 25-year-old jars running on modern JVMs.
I have been working with Java, .NET and C++ for as long as they have existed, and C# being more feature-rich than Java doesn't help if the libraries or OS support that a customer needs aren't there.
C# doesn't follow fashion either, it does things that actually work before the dust settles on them.
I think you'll find many people outside the HN crowd also appreciate their approach over trying to appease the kinds of customer needs that involve keeping 25 year old binary only jars running.
They sometimes deprecate higher-level stuff, but very conservatively so. E.g. asp.net web forms (2002) is deprecated, while windows forms (also 2002) is still supported.
.NET Core has been so well managed that I had a project back in the .NET Core 2.0 days to port an application to Java, because the customer wanted to move the application to UNIX (not just Linux flavours), and not all necessary features could be done in Core.
To this day there are plenty of libraries that are yet to run on either Core or outside Windows.
Plus not everyone is happy with how the whole Core, .NET 5, .NET Native, Reunion, UWP, CoreRT, Xamarin, MAUI, Blazor situation is being managed.
I mean .NET Core is targeting Linux, is that really a sign of mismanagement?
And to your second point.. I mean you're comparing concepts at very different levels of a stack. Your list includes a language runtime, a web framework, an OS specific application format/framework?
It's like saying "Java, HotspotVM, JavaFX, Tomcat, Android APK, Vert.X"...
In reality it's ".NET Core and .NET Framework".
And while there have been some growing pains as the term .NET became overloaded, it's always been clear .NET Core is where they want people to be, .NET Framework exists because migration to a new platform wasn't going to happen overnight.
Every year there's more and more .NET Core compatibility, they've done a good job with .NET Core so more people are willing to use it (and port packages to it)
Naturally I am comparing all levels of the stack; as I mentioned in my comment, a language alone is useless.
Yes, the future is .NET Core, however pretending that outside Windows it can match Java offerings just reveals a complete lack of knowledge of all kinds of platforms that have Java support available for them.
Guys like PTC, Aicas, Gemalto, and microEJ doing embedded real-time Java and selling M2M devices, IBM and Unisys mainframes, 80% of the mobile world (even if it is an adulterated flavour of coffee), smartcards, Blu-ray players, healthcare and TV set-top boxes, kiosks, and plenty of other use cases.
.NET is catching up with 25 years of Java doing cross platform development, while anything that came out of Redmond has been mostly Windows only for 20 years.
.NET Core only supports the three major mainstream OSes, zero support for anything else, and has the growing pains of a platform where plenty of third parties are yet to release anything on Core.
Sitecore just released their first version on .NET Core earlier this month, and I don't see anyone rushing to upgrade.
Doing WPF, Forms? Good luck with many GUI component libraries.
Apparently the visual designers are going to miss .NET 5 before reaching full stability.
The beautiful thing with being a polyglot consultant is that I don't have to convince myself that I am using the best stuff as "Developer X", I just use whatever stack the customer asks for and then move on.
> Yes, the future is .NET Core, however pretending that outside Windows it can match Java offerings just reveals a complete lack of knowledge of all kinds of platforms that have Java support available for them.
I have no idea what on earth you're on about... you didn't bring up anything I didn't know as far as places where various JVMs live.
This just reads like another distraction from the topic of language direction just like your last comment trying to confuse web frameworks and language runtimes...
Why is .NET Core supposed to blindly chase platform parity with the JVM, especially down to embedded devices?
You realize the JVMs used on embedded devices aren't the same ones used on desktop right?
Like there are C# frameworks for embedded development on microcontrollers, why on earth would that be .NET Core's domain? Do you think HotspotVM is running on those smartcard microcontrollers?
-
Your entire comment you seem to be under the impression .NET Core exists to be a drop-in replacement for the JVM for every usage.
Which is especially strange because you're using JVM as a generic term for every JVM in parts of your comment, which would maybe be comparable to the CLR at best (but still be an odd comparison to make)
If anything you're speaking to the strength of C# and a product like .NET Core, they're not chasing the same goals that skewered the development pace of Java.
The C# team is not worried that their language standard changes might be hard on people running embedded runtimes in smart cards, or on 25-year-old binary-only jars.
The same mentality is why C# paid the price to break backwards compatibility on generics back in the 2.0 days, and enjoyed a much more powerful implementation going forward in perpetuity.
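For anyone who hasn't seen the difference, a minimal, hypothetical illustration of what reification buys you: the runtime actually knows the type argument, and List<int> stores unboxed ints instead of being erased to a list of Object.

    using System;
    using System.Collections.Generic;

    class GenericsDemo
    {
        // Because generics are reified, the runtime knows T at execution time.
        static void Describe<T>(List<T> list)
        {
            Console.WriteLine($"{typeof(T).Name}, {list.Count} items");
        }

        static void Main()
        {
            // List<int> stores unboxed ints; nothing is erased to Object.
            var numbers = new List<int> { 1, 2, 3 };
            Describe(numbers); // prints "Int32, 3 items"
        }
    }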
You're free to feel one approach is better than the other, but it's a non sequitur at best (and disingenuous at worst) to start spouting off about how the JVM runs on smartcards and claim that means .NET Core is supposed to match that, in a conversation about language growth...
Sure it does, because what is relevant are ecosystems, not language grammars.
Yes, C# the language is better designed than Java the language, while .NET the ecosystem is a tiny spot compared to Java the ecosystem.
When I mention JVM, I mean any implementation, I don't mix Java with Hotspot.
And yes PTC and Aicas sell full Java SE compliant implementations for embedded development.
The only embedded options for .NET are Netduino, hardly market relevant, and Wilderness Labs, which doesn't at all cover the kind of industrial deployments PTC, Aicas, microEJ and Gemalto are doing.
I have worked with Java and .NET alongside each other for as long as they have existed, so I know pretty well the pros and cons of each platform, specifically beyond the implementations that people conflate with the language.
> .NET Core only supports the three major mainstream OSes, zero support for anything else
This is true, but this level of support covers the majority of use cases. I do personally find it very annoying that they refuse to support 32-bit Linux though.
> has the growing pains of a platform where plenty of third parties are yet to release anything on Core
This was true a few years ago, but certainly isn't now. I honestly can't even remember when I last tried to use a library that didn't have dotnet Core support.
No it won't, because .NET Core doesn't support everything from .NET Framework, like WCF server happily running on 4.7.2.
Here is another example: Oracle's drivers for .NET only support a subset of their capabilities on Core.
I can keep feeding examples of stuff that is actually relevant for Fortune 500s, which Microsoft keeps out of their .NET Core marketing or just hand-waves away as yet another porting effort, as if we didn't have better ways to spend our money.
You can keep feeding off examples of the subset of Fortune 500s that refuse to compete for top talent in the wider market and instead try to keep zombie projects shuffling along, a result of their mismanagement, using contractors and outsourced labor... they also tend to have the kind of engineering culture that pushes away innovators, treats development as a cost center, and generally avoids investment in any sort of long-term growth of their talent...
But yeah, why compete with the rest of the Fortune 500 that hire top talent to build out new systems...
when you can pay a "fixer" who prides themselves on self-flagellation, keeping legacy systems with no source code alive and targeting platforms designed for computing as it existed decades ago (systems that only exist because of gross mismanagement and a general view of development as a cost center)?
What's interesting to me is that someone would actually try to hold this dance up as something the rest of the tech industry should aspire to uphold, lol. Maybe because it makes for less "war stories" deep in the bowels of decade-old systems with no documentation, successfully migrated to the latest Jenga block in their leaning towers?
Imagine if other fields worked like this
"Our materials science company prides itself on only releasing plastics that can be molded using what was state of the art 20 years ago"
"Our competitor Macrohard insists on occasionally releasing new plastics that are stronger and cheaper to develop with, but we cater to those shops that refuse to invest in techs who know newer processes!"
Mainly because it's the colored function problem. You end up having to change the call stack all the way up to Task<T>. Once you start using them, everything is a Task.
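A quick sketch of that propagation with hypothetical names: once the leaf call is async, every caller above it has to grow a Task return type as well.

    using System.Net.Http;
    using System.Threading.Tasks;

    class ColoredFunctions
    {
        static readonly HttpClient Http = new HttpClient();

        // The leaf call is async...
        static Task<string> FetchAsync(string url) => Http.GetStringAsync(url);

        // ...so its caller has to become async and return Task<T>...
        static async Task<int> CountCharsAsync(string url)
        {
            string body = await FetchAsync(url);
            return body.Length;
        }

        // ...and so does the caller above that, all the way up the stack.
        static async Task Main()
        {
            int n = await CountCharsAsync("https://example.com");
            System.Console.WriteLine(n);
        }
    }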
I've never had anything but problems trying to run Java apps. Maybe in server land it's been great but in desktop land, outside of Minecraft, it's always been hell for me.
> "Java designers don't follow fashion, they rather wait to see what actually works when the dust settles and how to keep those binary only 25 year old jars running on modern JVMs."
Yup, for better or for worse, Java will be the COBOL of the 21st century.
Apparently 21st-century developers don't have many issues dealing with UNIX and C, which are just about 10 years younger than COBOL; speaking of which, both are about 30 years older than Java.
> local type inference (var) made the language pointlessly verbose for ages
I guess that this is a question of taste.
What for you is "less verbose" is, for me, more confusing to read. I like to see the types, as they complement variable naming. To avoid typing a few letters, the code will forever require me to double-check the types with the help of the IDE.
I have worked in medium-sized corporate companies. The code base is quite big, and one of the 20+ development teams may get a project transferred from another team (it does not happen super often, but it happens), or they may create pull requests for bug fixes (this is more common).
Clear and easy-to-understand code is life-saving. In one of the companies I worked for, a JavaScript team sent a -1 instead of a "-1"; the cost ramped up to hundreds of thousands of dollars and our clients were not happy about it. Rollback mechanisms were used as fast as our clients detected revenue problems with their own customers.
And the tests did not catch the error, as the value is used at the integration layer between our clients and us.
I see safety as increasingly valuable as our programs control more and more money and more and more services. And, I have to admit, I feel more comfortable with more verbose code.
That is a very good point. I find the diamond operator more to my taste, as it requires you to think about whether you want to expose ArrayList or List.
List<String> list = new ArrayList<>();
var list = new ArrayList<String>();
But, as you point out, the second example removes information, which will require spending time to check types each time someone reads the code. I do not like that.
I use auto-complete and I type quite fast, so I do not see writing code as a problem. When I "write code", I spend most of my time understanding the functional needs, looking for better patterns or algorithms to implement performance-critical sections, or finding names for exposed APIs that clearly state their function. But most of the time, I am just reading my old or other people's code to change a few lines or decide on a local refactoring.
> Did you spend any significant time working with C#?
I have no experience with it at the corporate level. There, from what I have seen, it mixes a lot with .NET. So, I guess that the corporate equivalent to Java is "C#/.NET".
I spent some time several years ago with C# in Unity3D, though.
I really liked the C# language. I found auto-implemented properties a neat compromise between encapsulation and verbosity (verbosity has no value if it does not add information).
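For readers who haven't used them, a minimal sketch of what auto-implemented properties look like (illustrative class and names):

    using System;

    class Player
    {
        // Auto-implemented property: no hand-written backing field or
        // getter/setter boilerplate, but still a real property.
        public string Name { get; set; }

        // Read-only from the outside, settable only inside the class.
        public int Score { get; private set; }

        public void AddPoints(int points) => Score += points;
    }

    class Demo
    {
        static void Main()
        {
            var p = new Player { Name = "Ada" };
            p.AddPoints(10);
            Console.WriteLine($"{p.Name}: {p.Score}"); // Ada: 10
        }
    }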
Java is trying to be everything to everyone and that is a mistake. I liked Java more in the past, and I would have added just a few things from the past iterations of the language (e.g. Modules is a good idea that actually simplifies the language and moves much code to "frameworks" instead of being part of the core language).
C# seemed more focused on its initial style, whereas Java is stretching all over the place.
You are right. I see that some of those features were added much later on.
For non-null references: I used to work with C++, and I really liked some of its reference/pointer/smart-pointer semantics, even though everything can also be used really badly.
I have done some basic training in Rust, and I am happy to see that it seems quite close to the way I was using C++.
But, I see C# in the future replacing part of what Javascript does nowadays once WebAssembly takes off. I would like to learn more about that.
Pointless typing and verbose code have nothing to do with writing code - it's all about code being readable.
Java type declarations can be 20+ characters - just scanning through the code and having to skip all that junk makes my eyes more tired reading through. Types are implicitly deducible 90% of the time when you know the codebase (and should be added when they are not), and if you don't know the context you will be slow no matter what.
Yes. When I was younger I worked in solo projects. I knew my code almost line by line.
In my last decade, in middle-sized companies, nobody knows the code of all the hundreds of micro-services. And code changes while you are on vacation, which can be 6 weeks of the team working without you. That is not ideal, but in such a big code base it is difficult to have everyone reviewing all the changes on a single micro-service, and impossible to have all 20+ teams reviewing all of each other's code.
Different problems need different solutions and code styles, I guess.
My point is that in a good code base types should be obvious from context. If you don't know the context then you will be slow (and make a lot of mistakes) no matter what the types say, because you'll likely misunderstand the domain logic (context) unless it's something trivial. I would hate to work somewhere where I'm expected to randomly drop into micro-services I didn't have anything to do with and debug/support them - sounds stressful.
Yes, but that seems like a clunky add-on. Something as basic as iterating the characters in a string should not require special constructs like this IMO.
Swing is a horror show from both a developer and an end-user perspective, don't know why you would even mention that. Windows desktop apps based on Forms or WPF are in another league both in terms of ease of development and user experience.
Swing is only a horror show for those who don't bother to read books like "Filthy Rich Clients", or don't want to spend money on a proper design team.
Why do I mention it? Because until now .NET Core still doesn't have anything like Swing across all supported platforms.
MAUI might do it (Xamarin renamed), or you might just get Blazor running on WebWidgets.
I would rather have Swing if the alternative is running Blazor that way.
Not counting community toolkits here, just what is available out of the box.
So C# definitely benefited with the second mover advantage. It learned a bunch of lessons from Java such as:
- No checked exceptions
- Properties (I wish Java would add this to the core language instead of relying on things like Lombok as it shouldn't change the IR and really is just syntactic sugar)
- Partial classes. This really isn't in contrast to Java because Java has nothing like it. But it is a neat feature for partial code generation;
- LINQ. Java eventually added streams but last I checked it still had performance issues and IMHO LINQ is just cleaner;
- Conditional compilation. This is really a huge oversight. One of the huge benefits of the preprocessor in C/C++ was conditional compilation. It's great that C# included it. It's bizarre to me that Java hasn't.
- Async/await. Honestly I still find C#'s version of this more awkward than Hack's. Experience has taught me that whenever you spawn a thread, you've probably made a mistake, as subtle multithreading bugs are the devil, and cooperative multitasking like you have in Go, C# and Hack is usually far safer and sufficient most of the time. Still, C# is better than Java here.
- C#'s reified generics vs Java's type erasure. I think it was the right decision to break backwards compatibility here (and I don't usually say this). This was pretty early too (IIRC generics were added in C# 2.0).
But all that being said, I still think Java has a larger mindshare and install base than C# by a mile. It's not sexy so it gets less attention on HN, but Java is still massive.
As for Kotlin? Much like Scala I see this as nothing more than a curiosity. Android developers seem to like it but I think it's a tiny fraction of Java still.
“One of the huge benefits of the preprocessor in C/C++ was conditional compilation. It's great that C# included it. It's bizarre to me that Java hasn't.”
The C preprocessor can make it way to easy to break code, for example when it is used for feature flags and/or multi-platform support. If you have N feature flags, you have to compile 2^N different programs. Multiply by M for supporting M platforms.
It also makes it impossible to check whether source code can be compiled. The text inside a
#ifdef FOO
...
#endif
block doesn’t have to be valid source code, but the compiler cannot know whether it has to be.
I think that’s why Java ditched it, and I think that makes sense. Adding a more limited feature, like C# did, makes sense, too, though. I’m not sure C#’s variant is limited enough, though. It is still subject to that 2^N problem, but gets saved from its main problems because C# isn’t running in environments as diverse as those where lots of C code evolved.
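For reference, C#'s variant looks roughly like this (a minimal sketch; DEBUG is the symbol conventionally defined by the Debug build configuration):

    using System;

    class ConditionalDemo
    {
        static void Main()
        {
    #if DEBUG
            // Only compiled into Debug builds.
            Console.WriteLine("Debug build");
    #else
            Console.WriteLine("Release build");
    #endif
        }
    }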
That’s true, but it is done a lot cleaner than in the wild west days of the C preprocessor, where feature-specific, compiler-specific, and platform-specific #ifdef’s were sprinkled throughout the source code seemingly without much thought (but keeping things working must have taken lots of thought), nested #ifdef’s were common, and often not all cases had separate paths in the code.
Dependency injection can be made messy, too, but that takes more of an effort. You can also test injected code in isolation. That may take some effort, but those preprocessor messes only could be tested as part of the entire product.
As I said, I can see why they wanted to get rid of that. Not adding a sane replacement may not have been the best choice, but I am not sure of that. Programming culture also had to change, and that sometimes requires drastic action. Apple also did that on the Mac by not providing any text mode (forcing programmers to make windowed applications) and by removing cursor keys from the first keyboard (forcing programmers to provide a good mouse interface).
LINQ is cleaner, but I find streams to be more elegant when considering the language as a whole (it's just a pure library - no new syntax/rules/etc to learn).
I feel like a lot of languages these days trend towards "kitchen sink" languages that toss in everything and the kitchen sink in the name of clean looking code. IMHO this tends to sacrifice language elegance. This is probably why I tend to like languages like Java/Go/Clojure.
There is no new syntax for LINQ either -- it's just fluent-style via extension methods.
I don't know many people that use the SQL-like syntax any more -- and it only covers a tiny amount of the functionality.
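For anyone comparing the two, a minimal made-up example of both styles side by side:

    using System;
    using System.Linq;

    class LinqStyles
    {
        static void Main()
        {
            string[] names = { "Ada", "Grace", "Linus", "Anders" };

            // Query (SQL-like) syntax: the form that sees less use in practice.
            var queryStyle = from n in names
                             where n.StartsWith("A")
                             orderby n
                             select n.ToUpper();

            // Fluent syntax: plain extension methods; covers the full API surface.
            var fluentStyle = names
                .Where(n => n.StartsWith("A"))
                .OrderBy(n => n)
                .Select(n => n.ToUpper());

            Console.WriteLine(string.Join(", ", queryStyle));  // ADA, ANDERS
            Console.WriteLine(string.Join(", ", fluentStyle)); // ADA, ANDERS
        }
    }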
> As for Kotlin? Much like Scala I see this as nothing more than a curiosity. Android developers seem to like it but I think it's a tiny fraction of Java still.
Java is massive for legacy reasons, but it would be nice to have some statistics regarding new projects. The last two companies I've worked for had backend teams using Kotlin, so I've been assuming this is also preferred by backend people. Could well be that this was an anomaly though.
I mostly develop for Android so my own perception is obviously biased, as Java is pretty thoroughly erased from this world now.
Anything else on the JVM is nice to have, but isn't where all the goodies are.
Google has a special interest in getting rid of Java, hence Kotlin.
In fact, it is going to be fun to watch how they will keep up with the pace of the JVM, because when the majority of key libraries move to modern Java, many of them will stop being compatible with D8/R8, forcing Android developers to use pure Kotlin libraries.
Then JetBrains needs to think how to make use of Java libraries that use the new FFI, virtual threads, inline classes, generics, SIMD vector types, while keeping the language compatible with what ART is capable of.
If you want to avoid changing the complete chain to Task, there’re workarounds.
You can block the caller thread waiting for the task. Deadlocks are possible with this approach due to synchronization context shenanigans, but they’re easy to fix, VS debugger is quite good at multithreading.
Another method, you can run whatever dispatcher on your main thread, in modern .NET that’s usually Dispatcher.PushFrame.
Also, if the async method returns no values, you don’t need to change the call chain, you just need to catch and handle exceptions.
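A minimal sketch of the blocking workaround mentioned above (hypothetical names); as noted, blocking can deadlock if the awaited code captures the caller's synchronization context, which is why ConfigureAwait(false) is used here:

    using System;
    using System.Threading.Tasks;

    class BlockingBridge
    {
        static async Task<int> ComputeAsync()
        {
            // ConfigureAwait(false) avoids capturing the caller's synchronization
            // context, the usual source of the deadlocks mentioned above.
            await Task.Delay(100).ConfigureAwait(false);
            return 42;
        }

        // A synchronous caller that refuses to turn its whole call chain into
        // Task<T>: block on the task instead.
        static int ComputeBlocking()
        {
            return ComputeAsync().GetAwaiter().GetResult();
        }

        static void Main()
        {
            Console.WriteLine(ComputeBlocking()); // 42
        }
    }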
If we look at programming languages as tools, it makes sense for them to get out of date and new ones taking their place, with all the lessons learned.
So it's possible that programming languages like C++, which keep extending their own life by adding features (while keeping their weaknesses), will ultimately cost the community/industry more in the long run.
I'm not saying Java should have introduced ground breaking features - I'm saying they made design mistakes that they should have corrected far sooner (var/type inference) and refused to add some basic features (lambdas) that would have made the code a lot better for it.
They did add those features eventually (Java 8) - about 8 years behind C# (since C# 3.0)
>If we look at programming languages as tools, it makes sense for them to get out of date and new ones taking their place, with all the lessons learned.
Part of me thinks that this is how it should be. Extending a language constantly while providing backwards compatibility can lead to some awkward syntax too.
The thing is that this is not a huge issue in practice. I stick to uniform initialization in 99% of cases and it works out pretty well. I have far fewer bugs from initialization screwups than from other things.
But what kind of an effect does this have on someone new learning the language? It seems to me like it would make it slightly harder to learn and get used to.
It seems to me that for many developers, all they want from a language is more and more features. I’m grateful for languages like Go and Lua that try to buck this trend.
Oracle gets all the hate, but when it comes to Java, Oracle really got the show rolling again. Prior to Oracle, little happened. After Oracle, a lot is happening.
Oracle haters seem to disregard that Oracle was the only company bothering to actually make an offer and keep it; without that offer, everyone would be busy maintaining Java 6 codebases or porting code to something else.
People put scare quotes in seemingly at random these days - are you suggesting it's possibly not their strategy and actually they're lying? What do these scare quotes mean?
I'd guess there is too much money in it due to the fact that central banks killed interest rates and started buying government bonds as needed - where pension funds would once hold those, they now need to search for positive returns, and there just aren't enough good bets there - which would explain why the valuations are sky high.
> and someone brings up the fact that ISIS used one of them...
They used Facebook, Twitter and YT for propaganda and Gmail for coordination (that thing about saving a mail draft on gmail instead of sending it), it's obvious the other side isn't using logic to make the argument (but beliefs, feelings or simply pushing an agenda) so using logic to argue back is pointless.
>Yes, it gains you some of the economics of factory construction and that you can start small and scale a location, but on the other side you lose that again because you lose the economics of scale that traditional PWR gets.
You mean they lose operational efficiency ? Economies of scale come from the ability to mass produce.
You forgot to mention the largest differentiator - eliminates the possibility of a global catastrophe.
Not in nuclear they haven't historically. Economies of scale drove light water reactor designs from tens of megawatts to hundreds to over a thousand universally from all vendors around the world historically. The big institutional nuclear economics reports all agree that going big improves nuclear economics. The hypothesis that SMRs will somehow overpower this is popular but is very much unproven. This agrees with OECD reports like last month's [1] and all the older ones listed in [2].
Historically, one of the few successful ways to lower the price per generated power from a nuclear power plant has been to make the reactor larger. So yeah, there's a reason why the latest traditional PWR designs such as the French EPR are huge (1600 MWe).
The gamble with these small reactors like Nuscale is that series production of the reactors in a factory would make up for the loss of the traditional economy of scale due to size. It remains to be seen how well that will work out.
Economies of scale are the reason modern Gen3+ reactors are so huge.
From AP1000 wikipedia:
> The design traces its history to the System 80 design, which was produced in various locations around the world. Further development of the System 80 initially led to the AP600 concept, with a smaller 600 to 700 MWe output, but this saw limited interest. In order to compete with other designs that were scaling up in size in order to improve capital costs, the design re-emerged as the AP1000 and found a number of design wins at this larger size.
So modern PWRs are usually built as 1 GWe, one location, one reactor - huge economies of scale in terms of the size of the power plant. An AP1000 is not much bigger than an AP600.
> You forgot to mention the largest differentiator - eliminates the possibility of a global catastrophe.
I disagree. First of all, I think the possibility of a global catastrophe with a traditional PWR is already incredibly small, and when talking about a modern build like an AP1000, the NuScale doesn't have that much better safety characteristics.
PWRs are inherently problematic and require tons and tons of complex engineering to make them safe, and the error potential in such a solution is always there.
If you can pump out standardized large scale reactors like France did, then they are way more efficient than these smaller reactors.
The problem of course is that it takes a large government to mandate a huge public project, which is not really likely these days. The advantage of these small reactors for now is that they hopefully prevent expensive, bloated, one-off site designs that go over budget and miss their schedules.
Thing is this kind of design benefits massively from economies of scale, same kind of thing that has drawn the price of PV down and other green energy.
> same kind of thing that has drawn the price of PV down
A nuclear power generator is a complex beast, that requires a ton of material, of very different kinds, worked into some detailed and non-repetitive patterns. (Have you looked at a steam turbine?)
PV is a simple pattern of a few different substances, repeated over and over again.
Even if scale were all that went into the PV price, nuclear will never be able to achieve the same amount of it.
Up until recently the "economy of scale" meant massive reactors when it came to nuclear, and small modular reactors were abandoned because people didn't think they could be economical.
We have radically different construction skill sets now than we did in the 1970s, so the economics may be different now, and it could have been that the planners were wrong before.
But in any case, until a few of these have shipped, I'm not sure we'll know the true cost.
These are manufactured like airplanes, a few at a time. Whereas solar has massive plants with hundreds of thousands of the same part assembled and shipped. I'm hopeful that they will provide another tool in the fight against climate change, but not super optimistic. There are many many technologies that are at a similar stage of development that could be used instead, such as cheap hydrogen electrolyzers, long-duration storage flow batteries, etc. And if these other techs succeed, they will also help SMR nuclear, assuming SMR nuclear can compete with renewables on cost!
Given the general cost disease affecting large construction projects, I think that massive reactors are an unviable proposition in western countries at this point. While the reactor core designs seem to be templatized, the projects to build them are not, and so there is huge inefficiency. E.g. see https://www.nytimes.com/2017/07/31/climate/nuclear-power-pro....
If NuScale can build hundreds or thousands of these small reactors, they should be able to perfect a turnkey installation playbook that would hopefully reduce costs significantly, and perhaps more importantly, reduce variance on project spend/timelines. I think an unpredictable total cost of ownership is one of the things hurting nuclear projects.
The big question in my mind is whether they can deploy enough of these to get to that scale, given that there's a fairly universal NIMBYism against nuclear power, even where this would be displacing CO2-emitting sources.
This kind of design actually starts out by losing the massive economies of scale of traditional PWRs, and hopes to win them back through economies of scale in manufacturing.
I don't think NuScale will have much trouble competing with traditional PWR, but if they can compete in the overall market is a huge question.
>As I said in another thread, the problem is not writing the SQL. The problem is everything else after you have written it (applying on deploy, automating for other's dev envs, rollback, etc)
What's wrong with migration managers? They handle everything you mentioned just fine. The only problem I've seen with this approach was when working on a team that didn't have a proper CI flow; then it was possible for people to check in bad migrations and screw up branches, etc.
I've worked with both approaches in multiple languages, and frankly I prefer the no-magic migration manager approach.
Migrations automatically generated from model also mean your DB is tied to your app, which I'm not a fan of, if anything the reverse should be true, and the fat model approaches of rails are terrible IMO (I haven't used Django for anything serious in over 10 years so I don't remember how fat their model layer is - I know it has enough metadata to generate the admin CRUD but don't remember if it also encourages having logic on models).
The best solution I've seen is F#, which uses dynamic type providers (basically a compiler plugin) to auto-generate models from the DB/schema at compile time, and yesql in Clojure (Clojure being dynamic and built around working with data, using SQL results inside of it is natural and amazing).
IMO watching a few sci-fi movies in 4k+ looks ridiculous - I start noticing the difference between CGI environment and actors and it kills the immersion completely, it goes from "that character is really there" to "this guy is larping in front of a green screen"
Higher resolution, better color reproduction, and higher frame rates make it very obvious that there seems to be a magical glowing orb following the characters around right behind the camera. It's immersion-breaking; with lower quality you could get away with it because it's more difficult to notice.
Something that I've also found more and more irritating is the foley artists doing ridiculous things for sound effects, especially in nature documentaries, but all over the place really.
that's not the point - they were saying that retroactively going back and viewing old content on 4k, you can easily spot the CG/issues etc.
ex: Farscape - all the CG was rendered in low res, so even when you view a 1080p copy it looks silly. Imagine all the content that would have to be re-done and upscaled to make it watchable in 4k.
> I am really stoked for the future of programming languages, when localization is just a matter of translating some words.
My experience with localised codebases (in my native language) has been horrifying - like it or not, terminology is developed in EN; you either get unnatural-sounding "borrow" words, making the translation pointless, or worse, you get people coining new terminology nobody but them understands. Not to mention fragmented communities, knowledge bases, etc.
IMO localised codebases would be a regression not progress and I dread every time I see Chinese in a codebase (simply because they are a large enough market to split the dev community)
> I dread every time I see Chinese in a codebase (simply because they are a large enough market to split the dev community)
They aren't going to split it, the rest of us will adapt. English was arbitrary, so was French and Latin and Greek before it. Probably in the form of some Romaji-like equivalent (ideograms are too high a bar) but we will start to adopt it. The largest economy dictates the lingua franca because they produce the most output.
> English was arbitrary, so was French and Latin and Greek before it.
Might be arbitrary, but there isn't really a good reason to change. Compared to the past, right now we have way more people than ever before from around the planet being able to communicate in a single language - English might not be everyone's first language, but it is a fine second language (if anything I'm certain that there are way more people speaking English as a second language than there are people speaking it as a first language).
The main point of a language is communication, why spoil that?
(and FWIW my first language isn't English but I have worked in a couple of other countries with other people whose first language also wasn't English - actually it was several different languages - yet thanks to English everyone was able to communicate, which I think is something to be treasured, not disrupted... I mean... we're just discussing things here in English after all)
> right now we have way more people than ever before from around the planet being able to communicate in a single language
My father worked in international trading. He would talk (or Telex) with people in 50 different countries each week. He always said, "English is the international language."
> They aren't going to split it, the rest of us will adapt
Yup, but there's no real reason to go from one arbitrary language to another (and I'm saying this as someone who struggles to learn languages, and English was definitely not my first).
So it would be a straight setback (the overhead of switching), for no real benefit (arbitrary to arbitrary).
Another big issue is the split in resources. Right now, anyone can learn English and get access to most programming resources. You can post your code online and get the majority of the programming community around the world to help. A long time (10+ years) ago, I was heavily involved in forums for a programming language that had a large number of Chinese developers. They'd post their code, and to help them I'd have to start pattern-matching symbols to try to figure out which function was which (or paste it into my IDE and use my IDE's tools to figure it out). It was suboptimal at best. Starting over with all the community building that's been done would be a major (if temporary) setback, in a field that reinvents the wheel way too much as it is.
I realize being able to learn English is a privilege, and requiring it acts as a form of gatekeeping. But having everyone on the same natural language provides a fantastic global maximum (at the cost of gatekeeping at the local level), and no matter which language it is, someone will have to learn it. Furthermore, asking people who went through the trouble to learn this one to learn ANOTHER is even worse (if also temporary).
> So it would be a straight set back (overhead of switching), for no real benefit (arbitrary to arbitrary).
The convenience of the millions of Chinese speakers dwarfs your inconvenience. That is why it will happen.
Already there are plenty of data sheets for electronic components where the English is barebones and there is a lot more Chinese text. Presumably, most of their customers are Chinese and thus their effort goes there. It makes me tempted to learn to read it so I can make use of it... but electronics for me is just a hobby.
>Probably in the form of some Romaji-like equivalent (ideograms are too high a bar) but we will start to adopt it.
CCP will make you adopt it as-is, or GTFO.
I find their approach to language interesting compared to Japanese. Modern Japanese borrows so heavily from English, especially if you're doing technical work.
Chinese, at the government's request, hasn't done that. Instead new words are coined as needed. They're keen on protecting the language.
How is that? I don't know about Chinese, but surely Japanese has much better entropy in bytes? As would other languages with more expressive character sets.
That's the issue. English can be represented with 7 bits. Good luck doing that for any logographic language.
And that doesn't even take into account that English (and a lot of alphabet-based languages) uses spaces to mark where words begin and end. In Japanese, you can have a word that consists of a kanji plus a few hiragana characters as a grammatical marker. But there's no space between that word and the next. How do you decide where to insert a line break?
And even 5 bits was enough for a long time, across a few European languages: https://en.wikipedia.org/wiki/Baudot_code (which I just learned is also where "baud" comes from!)
Yeah, world languages like Chinese, Russian, and Arabic have a chance of building their own thriving developer communities, but for a language like Italian, or even German, forget it. It's too small to stand up to a language with 10 times as many speakers. Writing in the local language would put you at a far greater disadvantage than being a second-language speaker of English does.
And comments, and documentation, and stack overflow, and blog posts talking about your problem, and books, and so on. No, the cost caused by translation is not just solved by some function content hashing.
>Simply, if a restaurant offers 20 different dishes, it means each dish gets 20x less attention than if you had only one choice. Less fresh ingredients, less time perfecting the recipe and preparation. Serving only one dish makes things simpler and the restaurant is going to spend more energy making sure it is worth it.
One of my favourite places has a decently sized menu (I'd say 40 items on there including starters/sides/desserts); while they aren't "the best" at anything, it's really good and I love the atmosphere.
I like the flexibility and that I can invite anyone there since the selection is varied and quality is reliable. I can't call my vegan sister to a smokehouse even if their ribs are the best I tried, can't take my wife to a fish restaurant but my friends could be in the mood for fish, I have no clue what my in-laws will want to eat when I have to take them somewhere, etc.