Money is just a means to an end. It serves as a medium of exchange, a unit of account, and a store of value.
What we really want are goods and services to consume. If people aren't working to produce goods and provide services, there will be less to go around, making us all poorer.
> That's thanks to competition between companies. That's market forces at play. You want it there? Accept it on salary negotiation too.
This is itself a bit naive. In a basic market with voluntary transactions, the distribution of the gains from trade is undefined.
If your labor sells for $100/hour and costs you $10/hour to produce (basic food, water, and shelter), then it is a perfectly legitimate equilibrium for the company to pay you $10.01/hour and keep the other $89.99/hour as profit.
It's only through politics that we can make sure that the gains from trade are evenly split. The math of markets doesn't define it.
> It's only through politics that we can make sure that the gains from trade are evenly split. The math of markets doesn't define it.
That's clearly not the case, since my employer could legally pay me $15/hour for my work but pays significantly more. They do so, because they are competing with other employers for talent.
> They do so, because they are competing with other employers for talent.
Yes, but the RESULT of the two-way competition, in which both employees and employers are competing, is undefined in a simple model of the market.
If you work as a cleaner or an Uber driver, you may find that even though you generate some surplus, you don't actually get any of it (or you get almost none of it).
Yes, it CAN, but whether it actually does depends on the details. If you are a cleaner or an Uber driver, you might generate some surplus, yet see the other parties take all or almost all of it.
A theme that comes out again and again in the comments here is that in a technical community, complaining that a tool could be better is treated as a reliable signal of a dumb/lazy worker.
At the same time, many tools could actually be a lot better. If we all magically traveled 50 years into the future, we would find all these better tools and see that it wasn't only the dumb/lazy people using them: it would be everyone. For example, how many people write applications in low-level languages like assembler today? Not many, but at one point that was the only option, and anyone who complained about it would have been labelled a lazy worker.
The signalling aspect of this certainly distorts the discussion, but bear this in mind:
It is simultaneously true that git is a tool with a poor interface and a bunch of warts AND every aspiring developer should do the work and learn it in detail so they can use that knowledge to signal to people that they're not lazy and/or dumb (and the lazy/dumb people - even knowing this - will not do it, so the signal works).
One of the reasons that we are able to achieve so much with computers is that there is a separation of concerns between different areas.
Requiring everyone to fully understand git is like requiring people who code in high-level languages to understand and apply chip design in their day-to-day work.
Imagine if when you tested some Python or Java code you got an error from your CPU and needed to take it out and debug it with an electron microscope.
Contemporary git has this disease. Its (command line) user interface is a mess. I could say more about what you would do to make a better version but a comment isn't the place for that.
The defence that you are using - that smart people should be able to learn git fully - is like saying that any and every smart programmer should learn IC design and buy their own electron microscope, and that's why it's OK that the chips keep breaking.
There is a minimum bar for programmers I'd want to work with. Fully understanding git, for me, is part of that bar. The problem is that many programmers don't know how fundamental git is:
1) It's like understanding the basics of databases, network protocols, or compilers. It gives a lot of insight to how things work in a pretty deep and generalizable way. How do you organize data, and why are DAGs, Merkle trees, and hashes awesome? It's a beautiful case study in data engineering.
2) It's like knowing the shortcuts in your editor. It makes you more productive. If a programmer is hunt-and-pecking to type, and gets confused by shortcut keys, they'll be less productive.
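The Merkle-tree point is easy to make concrete: a git object ID is just the SHA-1 of a small typed header plus the content, which is why identical content always deduplicates to the same object. A minimal sketch in Python, standard library only:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the object ID git assigns to a blob: SHA-1 over a
    'blob <size>\\0' header followed by the raw content."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo "hello world" | git hash-object --stdin`
print(git_blob_id(b"hello world\n"))  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

Trees and commits are hashed the same way over contents that themselves contain hashes, which is all a Merkle tree is.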
Yes, I understand not all programmers will know how important it is to know this stuff, and I won't disadvantage someone who hasn't done this YET in hiring. But I would never hire the type of programmer who says "I don't need to know this." You do.
I'm sorry, but it takes a couple of weekends of work to write your own git end-to-end from scratch. That's 0.5% of the time you put into a CS degree. If you don't have the interest, discipline, or drive to do that, there are plenty of other jobs out there.
git internals are simple, but hard. Like Go. If you don't understand them, the userspace is a near-infinite pile of arcane complexity and incantations, half of which break something in counter-intuitive ways. If you do, it's a matter of looking up the right command in the docs in a few minutes.
Yes, compilers, databases, and other tools abstract away a lot of stuff. But if you don't understand the internals, you're likely to hurt yourself and my system in very bad ways. I don't want that on my team. My experience is that good programmers are fluent one or two abstractions up and down: they don't, e.g., write a database query that does a full table scan, or run out of stack space with a compiler that doesn't do tail-call elimination (and conversely, they know they can use tail recursion with ones that do). A tool you use every day definitely falls into the category of Stuff You Ought to Know, in a way that understanding how quantum tunneling is used in an SSD is in the category of Stuff You Don't Need to Know.
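The "one abstraction down" point about queries can be demonstrated with SQLite (an illustrative sketch with throwaway table and index names): without an index, the planner has to scan every row, and `EXPLAIN QUERY PLAN` says so.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

def plan(sql: str) -> str:
    # Each EXPLAIN QUERY PLAN row's last column is a human-readable step.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM users WHERE email = 'a@b.c'"
p_before = plan(query)   # without an index, the plan mentions a SCAN of users
conn.execute("CREATE INDEX idx_email ON users(email)")
p_after = plan(query)    # with the index, a SEARCH using idx_email
print(p_before)
print(p_after)
```

The exact wording of the plan text varies between SQLite versions, but the SCAN-versus-SEARCH distinction is the part worth internalizing.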
If you're hurting yourself with git, that's a good signal it's in the Stuff to Know category. And if you've wasted more than a few hours fighting git, as it sounds you have, it sounds like making a focused effort to learn it will save you time in the long term. Probably in a few months, even.
> A tool you use every day definitely falls into the category of Stuff You Ought to Know, in a way that understanding how quantum tunneling is used in an SSD is in the category of Stuff You Don't Need to Know.
I use an SSD every day though, as well as an LCD display and a laser in my mouse. So by this reasoning, I need to study quantum mechanics; it would only take a few weekends of focused study to understand the Schrödinger equation, etc.
These things fall into the "Don't need to know" category because we as a species have made very effective user interfaces to them whereby you really need to know almost nothing about their internals to use them.
The ideal version control system would work like a mouse or a monitor. Completely intuitive, just works™.
With that attitude, I would never, ever, ever, hire you.
As a footnote, I would expect you to be able to understand things like SSD performance and reliability, and how they're affected by complex algorithms in the drive controller (e.g. wear leveling, garbage collection, write block size, etc.). I would also expect you to be able to understand things like how subpixel rendering works, how rendering engines coordinate with LCD refresh, or how displays advertise their parameters to computers.
You shouldn't take those as black boxes either. You do run into bugs and issues that touch on them, and an experienced software engineer will have a depth of knowledge around oddball topics like that. That brings huge value.
It sounds like you're not a nerd. Why did you go into software engineering? You don't seem interested in the stuff. There are lots of career tracks which don't expect people to do those sorts of deep dives, and where willful ignorance is okay. Engineering, including software engineering, just doesn't happen to be one of them. All the good software engineers I know will dive into this stuff, and that expertise accumulates over time.
The key thing is most of us enjoy those deep dives. That's what makes the career track a good fit.
If you don't, you'll be doing the equivalent of maintaining a COBOL database on a mainframe as you get older.
For my engineers, I'm not looking for tools which are "Completely intuitive, just works™." That's Scratch. I'd advise you to code in Scratch if that's what you want. I want tools which enable people to be productive, efficient, and get stuff done at a high level of quality. If that has a learning curve, that's okay. People are coding 40 hours per week. If my programmers spend a month learning each year, and that makes them 50% more productive, they'll beat your Scratch team. That's why good programmers get paid the big bucks, and mediocre programmers can't find jobs.
> With that attitude, I would never, ever, ever, hire you.
There are two separate issues here though.
(A) How much work a given person wants to put in.
(B) How much work a given tool requires.
It can simultaneously be the case that git is bad/overcomplicated AND that you should only hire people who bother to learn it really well.
Why?
Well, learning hard things is a reliable signal of diligence and hard work, which are generally useful traits.
But at the same time, forcing everyone to learn something annoying and time-consuming just as a test of grit isn't maximally efficient. The same effort could be put into more productive tasks.
> Why did you go into software engineering?
Well, I'm not a software engineer; I work in the Data/ML area, so I'm much more interested in the properties of data than the properties of code. But having said that, I certainly like clean, efficient code and I care about languages (maybe just spoiled by Python?!).
I can't see myself as a software engineer so I think your instinct is right. My passion is data and ML.
It's not a test of grit. git happens to exemplify -- as well as any system I know -- many aspects of good data engineering. If you're into data and ML, those are things you ought to know too.
For a data/ML position, in most cases, I'd expect you to be able to handle data cleanly and efficiently.
If you can't, there are jobs far over on the data side, but:
1) As a business data analyst, you're fine with Excel and PPT, but you'll be paid roughly 1/3 of an ML/SWE position, and you should have excellent communication skills.
2) There are primary mathematical positions, where you work with a data engineer, but you'd better be awesome at math. AND it still helps to be able to handle data cleanly.
Even so, good data workflows require knowing what you did, when, and to which version of data. Properly used, git provides an archival log of some of that. I use very similar data structures when I build some of my own data pipelines too, with data stored under its hashes, Merkle trees, DAGs, and similar. If you find that "annoying and time-consuming," I'd hire you for a business data analyst, and not much more.
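The content-addressed layout described above can be sketched in a few lines (hypothetical `store_blob`/`manifest_id` names, standard library only): each artifact is stored under the hash of its bytes, and hashing the sorted (name, digest) pairs gives a Merkle-style fingerprint of a whole dataset version.

```python
import hashlib, json, os, tempfile

def store_blob(root: str, data: bytes) -> str:
    """Store data under the SHA-256 of its bytes; identical data dedupes."""
    digest = hashlib.sha256(data).hexdigest()
    path = os.path.join(root, digest)
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(data)
    return digest

def manifest_id(name_to_digest: dict) -> str:
    """Hash over the sorted (name, digest) pairs: a Merkle-style root that
    changes if and only if any named input changes."""
    canonical = json.dumps(sorted(name_to_digest.items())).encode()
    return hashlib.sha256(canonical).hexdigest()

root = tempfile.mkdtemp()
files = {"train.csv": b"1,2\n", "labels.csv": b"0\n"}
digests = {name: store_blob(root, data) for name, data in files.items()}
print(manifest_id(digests))  # a stable ID for this exact version of the data
```

This is the same trick git's trees use: you can tell whether two runs saw identical data by comparing one hash.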
It sounds like you find that stuff boring, though. It's a test of interest, passion, and drive, much more so than diligence and grit. Although those are important too.
Probably the biggest problem with git is that the interface exposed to the user isn't sufficiently abstracted from the implementation.
Insanely complicated commands to undo things are one symptom of this.
As a user, you want to "save" some code and you also want to "share" it with others. You also want a historical record of what you did.
But obviously you will occasionally save and share things you didn't mean to. Like your Python virtual environment, like a bunch of pictures in their binary format, etc.
A sane version control system would provide a point-and-click way to make these disappear from history (though with adequate security protections to make sure that only authorized people can do it).
Then you'd need centralised protections to decide who can delete, and you instantly lose the distributed features.
If I'm a user and I want to save, I do it with my text editor. If I want to share, I can do it with Github, Gerrit, Gitlab, email, pigeon.
I never email people things I didn't mean to and, if I did, I wouldn't expect my email software to let me delete it. The centralised services allow it to some degree but even they don't allow information to be un-disseminated.
VCS is necessary to deal with changes that overlap and conflict. It's not for backup and it's not a method of communication. If all you need to do is save and share, you don't need VCS.
That doesn't matter. The way people actually use git, almost all repositories have someone, or some small group, who rules them, so nothing is really lost by having an easy option to purge things from history. And an inconvenient option (possibly more than one!) does exist.
It matters a lot, because the use cases you've seen aren't the only ones that exist. In a tool so widespread as git, that is really not surprising.
By removing its decentralised nature you've fundamentally built a different VCS. Perhaps SVN is acceptable for your use case; that's great! It's definitely not git though: basic expected use cases were lost, as predicted at the start of this thread.
Any system will get dragged down by the need to support all sorts of legacy use-cases. I think there's an xkcd cartoon about this?
The core task of a VCS is versioning and collaborating on text of some kind. Git doesn't do this in an optimal way, so eventually it will get replaced by something better. In the meantime we'll all get on with learning its ins and outs, just like previous generations learned how to use punchcards.
You mostly lost me at "point-and-click", but I'll bite.
    git rebase -i {hash before your changes}
    git push -f
And since you talked about point and click, you could easily enough use github permissions on branches to prevent this on protected branches or you could configure your git repo to disallow force pushes. To get specific branch protections on a normal git repo you would need to use a hook to validate the update before it's accepted.
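For what it's worth, a plain git server already has coarse switches for this with no hook required; a minimal sketch on a throwaway bare repository:

```shell
# Create a throwaway bare repository (the kind a plain git server hosts).
repo=$(mktemp -d)/protected.git
git init --bare -q "$repo"

# Standard git config keys: refuse non-fast-forward (force) pushes
# and branch deletion for every ref in this repository.
git -C "$repo" config receive.denyNonFastForwards true
git -C "$repo" config receive.denyDeletes true
```

Per-branch rules still need a hook (e.g. `pre-receive`) or a hosting layer like GitHub's branch protections, as mentioned above.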
For example, suppose I init a new repo and accidentally commit my entire virtual environment, then push, then do some real work, push a few times more and then a colleague notices (after they have pulled, worked on and pushed) that the venv stuff is there.
In an ideal VCS, you would have a simple command like git purge /badfolder that would make it as if it never existed.
But AFAIK that doesn't exist, or at least the ways to accomplish that are pretty gnarly and dangerous.
You can run a command against every commit and it will then recommit. That would let you remove, for instance, an entire subdirectory. The downside here being that you are rewriting history on something you've shared with the world and that has larger potentials for causing issues with contributors.
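To make the venv scenario above concrete, here is a sketch using a throwaway repo. Note that this only stops tracking the directory going forward; its blobs stay in history, which is exactly the gap being complained about:

```shell
# Throwaway demo repo with an accidentally committed venv/ directory.
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email demo@example.com
git config user.name demo
mkdir venv
echo "junk" > venv/lib.py
echo "print('hi')" > app.py
git add .
git commit -qm "oops: committed the venv"

# Untrack venv/ going forward and ignore it from now on.
git rm -r -q --cached venv
echo "venv/" >> .gitignore
git add .gitignore
git commit -qm "untrack venv"
git ls-files   # app.py and .gitignore remain; venv/ is no longer tracked
```

Actually erasing venv/ from past commits requires a history rewrite; the maintained tool for that is git-filter-repo (`git filter-repo --path venv --invert-paths`) followed by a force push, after which every collaborator must re-clone. That is the coordination cost discussed in this thread.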
I guess my point is really this, git is simply one of the many tools you likely have to use on a daily basis. If you have to use a tool in your daily job it's in your interest to really grok the various ways your tool can be used. You'll want to really understand the primary use cases in detail and the less used ones you'll want to know in passing at least. That allows you to realize that something is possible with the tool, even if you don't recall the exact specifics. A machinist would have the Machinery's Handbook, a programmer will have multiple internet references. Maybe the real point is that it's _ok_ to not know the exact syntax and need to reference it for more esoteric operations.
Yes, but filter-branch is horrendous and can very easily result in unintended side effects, as well as being potentially very slow. I think this command is a great example of the problem: what a user really wants is an undo button, but what git gives them is this thing.
I use an x86 CPU every day for work and I have no idea how it works in detail, and thanks to the magic of separation of concerns I don't have to (perhaps apart from a few specific things, like vectorizing instead of loops, that I really do need to know about).
Git demanding a large chunk of user mindspace isn't an advantage for git; it's a signal that git is bad and needs replacing.
It comes down to your distance from a specific 'tool' or system.
You aren't writing x86 assembler. You are presumably writing some other, higher-level language. I would fully expect you to know that language in detail and even better to understand the performance implications of the choices you make in that language. Knowing the lower level details helps there, but it's not 100% required.
With git, it's something you _directly_ interact with so I would expect you to understand it in great detail.
> With git, it's something you _directly_ interact with so I would expect you to understand it in great detail.
Yes, you are describing what is broken about git: its abstractions leak so much that people who touch it have to know all its internals.
I touch x86 assembler every time I run high-level code; it's just that other kind folks have gone to a lot of effort so that I don't have to know how the internals of that low-level stuff work.
Abstractions allow people to be productive without knowing in great detail how absolutely everything in the universe works. A good tool has simple, non-leaky abstractions with a simple interface. Git is not a good tool.
So do interpreters, compilers, package managers, virtual environments, build systems, CI systems, test frameworks, targets, and hosts. Your code doesn't exist in isolation; you need to grok how it works with the code others have written and will write.
But for each one of these, the less I have to know about its internals, the better.
The ideal option for each of these things is that it "just works". When you have to think about the internals of your package manager or your CI or your virtual environment, that's a flaw in it, not a reason to celebrate.
You want simple internals, and one would expect programmers to understand those internals. You want the userspace to do obvious things with those internals. You don't want magic in between. Understanding internals is a feature, not a bug.
I picked Debian over Red Hat since, at the time, I could understand how .deb packages worked, and look over the state of the system. Red Hat had more opaque internals. If something broke on a Debian system, rare as it was, I could go in and fix it manually. If something broke on a Red Hat system, it was generally in a binary database file, and meant a reinstall. Red Hat also broke more often, I think for very similar reasons in design philosophy.
If I were making a tool for grandma to manage her photos, that'd be something different. If you're making a coding tool for 3rd graders, perhaps you want to hide more stuff too, but even there, many modern coding environments translate Blockly into Python/JS/etc. code and show the code to kids so they can see under the hood.
I have a car, and as a car user, I want things to just work. As a car mechanic, I'd like things to be understandable, fixable, documented, and transparent.
git is like that. It has simple internals. Once you understand them, the userspace becomes very understandable too. The upsides of the elegant internals far outweigh the downside of a slightly clumsy userspace, which is why it's the dominant VCS right now.
It took over precisely from things which "just worked" with a simple userspace, and clumsy internals, like SVN and CVS.
I take the view that developers shouldn't be spending their time fixing their VCS.
I use PyCharm. I don't know how PyCharm works internally. I don't even know what language it is written in. I know that it provides syntax highlighting, smart replace, code completion, etc.
Similarly, my car mechanic has a bunch of tools that he doesn't understand in detail; they have interfaces (like a gauge on a pressure sensor).
Progress requires these interfaces, it requires these abstractions, and over time I'm pretty confident that we'll get a better VCS than git that has better, more user-friendly abstractions and it will take over the market.
> The more we learn about nuclear, the more expensive it seems to get
This is a pretty gross falsehood.
Nuclear power could be cheap and safe, but we _make_ it expensive by banning any form of innovation and by lying to the public about how risky it is compared to the alternatives. The idea that nuclear is bad and costly becomes a self-fulfilling prophecy as the public incentivizes politicians to ban it or regulate it into oblivion.
We should not still be building the same basic reactor designs we had in the 1960s and 1970s. The fact that we're stuck on big, expensive containment vessels and water is a clue about what has happened to nuclear.
As at Chernobyl, the chain reaction produced excess heat, and this likely caused water to dissociate into oxygen and hydrogen, which caused the big explosion. In that sense it was not a nuclear explosion, but one caused by an out-of-control chain reaction.
If I remember correctly, there were also reports of some observations of short-lived fission products. But we still do not know exactly what happened.
Roughly the same could have happened at Three Mile Island, because the reactor was melting down, but the process was stopped, possibly shortly before the reactor would have been destroyed.
The Chernobyl explosion was not due to hydrogen and oxygen reacting. If you heat water hot enough to dissociate, it will explode just from the pressure of the steam. There's no need to add endothermic chemical reactions to the scene.
There was no nuclear explosion at Chernobyl. I guarantee you will find zero sources for your theory other than conspiracy sites. It was physically impossible for there to have been a nuclear explosion.
Odd how nearly every non-market alternative is one of those human-rights-violating machines, and actively collaborates with them, far more than market participants do.
Who is the largest trading partner of the US? China. The market is global these days, and it is very hard to isolate oneself without strong policies and market regulation. The market alone is not ushering in new eras of freedom; it's feeding the beasts, and no one really puts their money where their mouth is.
That is evil. Consumer companies (e.g. Apple) must work for their customers, the citizens. If consumer companies work to betray citizens, their customers, then those corporations are evil. Customers pay the bill, and Apple chooses to betray the very people who pay it.