I've done some porting of VB6 into C# with WinForms. It's actually a pretty direct mapping. I wrote a converter, and then spent a few minutes cleaning up the converted code per form. It took me a couple of weeks to convert a 500KB program, which mostly involved going through and converting VB6-isms that failed to auto-convert into C#. I can't imagine trying to translate something like that into such a different language and UI environment as Python. It's unfortunate that they'd reject such a thing just because C# came from Microsoft.
If the statement is "if (!isGreen)", it's much clearer to say "if is not green" than it is to say "if is green is false". Putting == true or == false makes you convert a clear statement "is green" into "true" or "false" instead of just being a natural English statement. It would be like saying in conversation, "I want to go to the store is false" instead of "I don't want to go to the store".
> If the statement is "if (!isGreen)", it's much clearer to say "if is not green" than it is to say "if is green is false"
I agree that when you read it, it's clearer. And yet I still prefer "if(isGreen == false)" for reasons of clarity in another sense.
The "!" being right next to the "(" makes it easier to miss the "!" when scanning quickly through the code, hence reading the logic the wrong way round and seeing "(isGreen" instead of "(!isGreen". And that's enough of a risk to ignore the readability advantage of "(!".
(Edit: To be clear, I don't suggest "== true" for the opposite cases, as the lack of a "!" in those means the risk is gone)
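For what it's worth, here's the trade-off sketched in Python (my translation of the C# snippets above, not anyone's actual code; note that Python's spelled-out `not` largely avoids the easy-to-miss `!` problem):

```python
is_green = False

# Negation style: reads like English, and "not" is harder to skip than a bare "!"
if not is_green:
    print("not green")

# Explicit-comparison style from the comment above; Python linters (pycodestyle E712)
# actually discourage this form, since "not" already solves the scanning problem
if is_green == False:
    print("still not green")
```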
A lot of the improvements are just logically extending the language to remove arbitrary restrictions. That's what most of C# 10 and 9 appear to be. So most of the features aren't things you'd go out of your way to use, but stuff that used to be impossible is now possible. I can't say that I've ever wanted a generic attribute or a constant interpolated string, but if I did want them, I'd be surprised when they didn't work. C# 8 was the last version with major new language features.
But a lot of them seem totally arbitrary and inelegant. CallerArgumentExpression requires a lot of attributes. CallerMemberName gives you the name of the caller but not the class name. This is hard to explain. It just seems random.
It's not hard to explain, it's a logical and small extension of other compiler-driven attributes. Just because you don't like it doesn't mean it's a bad change to the language.
StackOverflow actually has a checkbox when you ask a question that says: "Answer your own question – share your knowledge, Q&A-style" which lets you post your answer at the same time as your question.
A lot of things a human can do can be found in other species, but I think the ability to read and write arbitrarily complex ideas really does set us apart, and caused a phase transition. It allows us to transmit ideas across time and space that would otherwise be lost. It allows the accumulation of knowledge that no brain is large enough to hold. Outsourcing our memories to persistent storage. Someone can write down an idea, it can be forgotten for hundreds of years, and then rediscovered and applied. It can allow one person to design something, and communicate that design to thousands of other people. Take that away, and local achievements in knowledge remain local until they are forgotten because other knowledge has taken precedence.
A beaver might instinctively know how to build a dam because it's built into its DNA, but a human can read the writings of past dam builders, learn the abstract theory of dam building, come up with their own improvements, debate those changes with other people interested in dam building, communicate their design to dam builders, and publish their work to become a permanent part of dam building literature. For a beaver to change how it builds dams would require a change in its DNA, passed on to future generations.
> but I think the ability to read and write arbitrarily complex ideas really does set us apart,
I'm not denying that humans, in their current form, as a species are set apart from the rest of life on this planet.
Rather, what I'm trying to get at is that if you traced back the evolution of humans, at no particular point in time could you make a clear distinction of "this generation of pre-homo-sapiens species fundamentally differs in its linguistic capabilities from the next evolutionary step".
The linguistic capabilities of hominids and humans more likely than not developed gradually, just like every other feature that makes a distinct species. Eventually you'll be able to clearly tell them apart. But when applying a "derivative" operator on it, you'll find that evolutionary development is smooth and continuous.
And I think that also applies to linguistic capabilities. The proposition I'm making is that the linguogenesis of homo sapiens cannot be explained within the boundaries of that species. Rather, I'd say that the roots of our language can be traced back far further than you'd presume from the presence of certain vocal anatomical features alone.
And then they decided to end that line of succession. .NET Framework 4.8 is the last version of the framework. I've started experimenting with converting code to .NET 5, but it's a rather large jump compared to any previous upgrade. Going from 2.0 to 4.0 had a few minor hiccups, but going to .NET 5 is basically a rewrite of the framework and runtime, and I'm not sure how old and new assemblies will co-exist. It feels like a fragmenting of the ecosystem, where a bunch of code will be stuck on .NET 4 forever, and other code will move to .NET 5 and later.
I'm almost expecting a few years down the road, Microsoft will go back on 4.8 being end-of-the-line for .NET 4 and start releasing new minor versions of it because of all the customer code that can't be ported to .NET 5. Or maybe it will just end up like VB6. Stuff written in it still works, and will continue to work, but it's considered a dead language.
As a full time developer, I'm sure my typing speed directly affects my productivity (probably between 80-100 wpm in Dvorak). There are a lot of times I'm typing at full speed for hours, aided by code auto-completion and vim macros, and since the program already exists in my head, the limiting factor is how fast I can type it out. Other times not so much, I'm sitting there reading code, or debugging line by line, or whatever. But when I'm ready to code, my typing speed is the single biggest factor of how fast I can get it out.
Hmm, I've somewhat forgotten how things were before Colemak. I was typing mostly software things when I decided to switch. I think I felt a bit like you do.
> But when I'm ready to code, my typing speed is the single biggest factor
Especially when writing docs / comments for that code, then it's mostly English words (rather than `(){};!+-,[]` things)
So did you switch from Dvorak to Colemak? I just did a typing test on code-like things, and my typing speed is much slower, around 45 wpm, although that's normally accelerated by an IDE. I started learning Colemak for about a day a few years ago, and then decided that it wouldn't actually benefit me over Dvorak, especially because vi keybindings are quite important. Is the big improvement that it leaves {} in their original place instead of the farther reach?
I just had a quick look at Dvorak, and then picked Colemak instead because of more similar shortcuts. — It took weeks or months to get up to speed in Colemak. But it was a bit fun, fortunately. (How long did it take, with Dvorak?)
When the typing speed difference between Dvorak and Colemak is so small (it is?), I would say to myself that either of them is better than good enough and be happy :- )
There's also tools like tmux and fzf / skim, don't know if you're using them already; if not, I'd guess they'd save more time than learning Colemak.
> especially because vi keybindings are quite important
Oh I have nice Vi keybindings in Colemak — I use Vi always: VSCode, IntelliJ etc.
On the keyboard, I've mapped the N physical key to down, H to up, and left is Backspace (and Y which I never use), and right is Space and U. So I don't use the HJKL keys for navigation (well, H but it means Up for me).
This leaves the J K L keys available to do their usual things, in Vi — they map to N E I in Colemak (apparently you've noticed this :- )) which I use all the time.
> Is the big improvement that it leaves {} in their original place
I don't think so. I'd say, it's probably the shortcuts, and that many shortcuts continue working with the left hand only (e.g. using the mouse to select text, then the left hand to CTRL+C +V copy-paste).
Dvorak plays pretty nice with default vi keybindings, for the most part. Both y (yank) and p (put) are on the left hand, as well as x (delete characters), . (repeat last), u (undo), q (record macro), @ (execute macro) and " (choose register). And even though d is technically on the right hand, it's still in easy stretch range of the left. You can do a lot quite quickly with those keys and a mouse. So I don't miss ctrl-c/v too much when I have vi keys available. Whereas the right hand tends to be a lot of movement keys, which are more important when not using a mouse: h (move left) l (move right), b (previous word), f (jump to character), t (jump before character), n (next search result), g (goto top/bottom/line number/etc).
The worst thing I've experienced with Dvorak is typing "ls<enter>" repeatedly. It's really painful on the pinky. Putting L on the right pinky stretch was a really bad decision.
One advantage Dvorak has over Colemak is that it's included with every OS and easy to switch to, whereas Colemak often requires a download and an install to use. And some games include Dvorak keybindings, but I haven't seen one with Colemak keybindings yet.
I learned Dvorak back in college, and got fast by playing muds. Type fast, or you're dead. Took a few months to get up to speed.
> The worst thing I've experienced with Dvorak is typing "ls<enter>" repeatedly. It's really painful on the pinky. Putting L on the right pinky stretch was a really bad decision.
What about `alias s=ls`, a way to avoid the bad decision :- )
> Both y (yank) and p (put) are on the left hand ... right hand .. movement keys, which are more important when not using a mouse ...
Oh I didn't know. That sounds nice. Hmm makes me wonder if maybe I'd preferred Dvorak instead.
> I learned Dvorak back in college, and got fast by playing muds. Type fast, or you're dead
Hmm, I wonder if "type fast, spell correctly, and use the right grammar, or you're dead" MUDs could be a fun way to learn languages in high school :- )
The only two places I use Chrome are Netflix and Costco. Costco's behavior is just plain weird:
"Access Denied
You don't have permission to access "http://www.costco.com/" on this server."
Is this from running NoScript? Or does it affect all Firefox users? (Also the URL is https://, not http://, so the error message doesn't match the URL).
I've used Costco's site plenty of times on Firefox. I just double-checked Windows right now, and I'm pretty sure I've used it on OSX/Firefox in the past.
I cleared my cookies in Firefox for everything Costco related, and it works now. Thanks for pointing out that it works. No clue how it got in that state.
Nope, I get Error Code F7701-1003. I have Widevine enabled, and I tried completely disabling NoScript. It's easier to just use Chrome for that one thing than to troubleshoot the problem.
I think I figured out what it is. I turned off WebAssembly in Firefox to reduce my attack surface for general web browsing (I wish I could turn off JavaScript completely, but that doesn't really work these days, so NoScript is as close as I can come). I think Netflix must be the only site I actually care about that won't work without WASM, so I'm fine relegating it to a separate browser with a higher exposed surface that I never use for untrusted sites.
Maybe you can help enlighten me on this. I've been struggling to understand the basic thermodynamics of carbon capture for quite some time.
So we have a hydrocarbon, we mix it with oxygen, and the oxygen combines with the hydrogen and the carbon, releasing heat. The heat energy increases the pressure of the newly created CO2. This higher pressure is placed on one side of a turbine or a piston, and we extract useful work by moving it from a high-density state to a low-density state, causing it to cool in the process.
Now it seems like if you want to re-concentrate that CO2, it should take at least as much work to compress it back to its original state as the energy it released when you burned it in the first place, and probably a lot more, because the CO2 has diffused into the general atmosphere.
To state it more succinctly, we extract work through a pressure differential, and by reversing that pressure differential, won't that require more work than we got out in the first place by the second law of thermodynamics?
I ignored the part where part of the energy is coming from the hydrogen. Is the hydrogen -> water where most of the energy is coming from, and the carbon part relatively insignificant?
This is a really good question, and a bit deeper than it first appears. So here is some semi-educated spitballing (I'm a chemist, but thermodynamics was a while ago):
1. Immediately after ignition, you have a low-volume, high-pressure, high-temperature amount of gas. Sequestration does not aim to turn CO2 back to this exact same state, but only a high-ish, average-temperature state.
2. Combustion often evolves more molecules of gas (look at the formula for the combustion of octane, and remember that water after combustion will be a gas). This increases the pressure, but is not something that needs to be reversed during sequestration.
3. Carbon dioxide isn't bad, but having too much in the atmosphere is. Sequestration doesn't aim to completely reverse the reaction in the first place, it just aims to remove it from the atmosphere so that it can't act as a greenhouse gas.
That's not how chemistry works. Burning 1 CH4 is not the same as burning 1 C and 2 H2. The formation of carbon dioxide (aka the combustion of carbon) contributes 393.5 kJ per mole of CH4 combusted, the formation of water (aka the combustion of hydrogen) contributes 483.6 kJ per mole of CH4, and the enthalpy of formation of CH4 (the energy cost of pulling the molecule apart into its elements) takes back 74.8 kJ per mole of CH4 combusted. So about 55% of the heat released per mole is from the hydrogen.
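Using those standard enthalpies, the arithmetic works out like this (a back-of-envelope check, values in kJ per mole of CH4 burned to gaseous water):

```python
# Standard enthalpies for CH4 + 2 O2 -> CO2 + 2 H2O(g), all in kJ per mole of CH4
h_co2 = 393.5   # released forming CO2 (the "carbon" part)
h_h2o = 483.6   # released forming the two waters (the "hydrogen" part)
h_ch4 = 74.8    # cost of pulling CH4 apart (its enthalpy of formation)

total = h_co2 + h_h2o - h_ch4             # ~802.3 kJ/mol: methane's heat of combustion
hydrogen_share = h_h2o / (h_co2 + h_h2o)  # ~0.55: hydrogen's share of the heat released
print(total, hydrogen_share)
```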
> The heat energy increases the pressure of the newly created CO2. This higher pressure is placed on one side of a turbine or a piston, and we extract useful work by moving it from a high density state to a low density state, causing it to cool in the process.
You run the hot, high-pressure gas through a turbine to give you less-hot, lower-pressure gas. You then extract as much waste heat out of that stream as you can via a heat exchanger process, to pre-heat incoming fuel/air and to recover more energy by boiling water to run through another turbine.
At the end of the process, you have a medium-temperature stream of combustion product that has high concentrations of CO2. You capture the carbon from this stream, before releasing the last bits of gas to the atmosphere.
You gain usable energy out of the process because all of the heat movement happens through a turbine (to directly generate energy) or through a heat exchanger (to recycle the heat to other more useful parts of the process).
Heat engines take advantage of the fact that expanding a hot fluid releases more energy than it takes to compress a cold fluid - the difference being the combustion energy. Indeed, the pressure difference is only there to make the process more efficient - so long as there is a temperature gradient, you can extract energy even with no pressure gradient.
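The upper bound on that extraction is the Carnot efficiency, which depends only on the two temperatures (a standard textbook formula; the 1500 K / 300 K numbers below are my own illustrative assumptions, not from the thread):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of input heat convertible to work (temperatures in kelvin)."""
    return 1 - t_cold / t_hot

# e.g. combustion gases at ~1500 K rejecting waste heat to ~300 K ambient air
print(carnot_efficiency(1500.0, 300.0))  # 0.8
```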
You are mostly right, but the missing part is that you don't need to turn the CO2 back into a hydrocarbon fuel, you just need to turn it into something that isn't gaseous CO2.
So carbon capture hinges on the idea that we can find a low-energy route that involves a chemical reaction with CO2 that produces something that isn't a fuel but isn't gaseous CO2 either. And that we can find a LOT of it.
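For a sense of scale on "low-energy": the thermodynamic floor for just re-concentrating CO2 from air is set by the entropy of mixing, roughly RT·ln of the dilution factor (a standard ideal-gas estimate; the ~420 ppm figure is my own assumption):

```python
import math

R = 8.314        # gas constant, J/(mol*K)
T = 298.0        # ambient temperature, K
x_co2 = 420e-6   # CO2 mole fraction in air (~420 ppm)

# Reversible (best-case) work to un-mix one mole of CO2 from the atmosphere;
# real capture processes need several times this, but it's far below the
# ~394 kJ/mol released when the CO2 was formed in the first place
w_min_kj = R * T * math.log(1 / x_co2) / 1000
print(w_min_kj)  # roughly 19 kJ/mol
```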