notdonspaulding's comments

Doesn't it strike you as odd that devs get "lost" often enough to need this "feature"?

I've used CVS, SVN, hg, bzr, and git. Why does git stand alone as the only VCS where devs need so many foot bandages?


If you are satisfied with the CVS/SVN model - any commit is final and there is no way to undo or modify it - then you don't need reflog. If you mess up, you just say "oh well, this sucks" and leave the commit in place. This is what I did back when I used SVN and CVS.
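
But if you do want the undo, it's short. A minimal sketch of the reflog workflow (the HEAD@{n} index here is hypothetical; it depends on your own history):

    git reflog                  # list every position HEAD has occupied
    git reset --hard HEAD@{2}   # jump back, e.g. to just before a bad rebase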

hg has a similar thing, but it's different for each command. For example, to undo "hg strip" you manually dig into the .hg/strip-backups directory. I never understood why people call it more intuitive.

Can't tell you anything about bzr, never used it.


Not really. I think of it as git’s undo button. I don’t criticize my word processor for being so complex that it needs an undo button.


That analogy is verging on dishonest. A word processor’s undo button is typically used in service of the operator’s thought process, as it pertains to their actual output, not how they’re using the word processor. The analogue to this is git itself.

I said typically though. We all know the “you move an image in Word and your layout completely changes” meme. Sometimes someone wants to undo something because the tool didn’t do what they intended. Even though I have no empirical basis beyond my own experience, I am incredibly confident that the typical use of Undo in a word processor is because the operator has changed their mind about what they want their output to be, rather than because they had already concluded what they want the end result to be and they just can’t get the software to do it. I have to imagine that you agree with this.

So yes, be thankful that git has reflog. All other things equal, git with reflog is better than git without. I don’t see how your analogy invalidates or refutes the critique that typical use of git’s reflog is a result of the operator not knowing how to use git. And to simply say “you’re holding it wrong”, when ‘holding’ git routinely involves standing due East at the next full moon, is absurd.


A person who isn’t “good at” word processors will use it both when they change their mind and when the program doesn’t do what they want (e.g. I didn’t realize that I had a text selection before I started typing and now my selected text is deleted). It’s the same with git, and now we are just arguing about what’s “typical”. An example of when I change my mind about what I told the program to do: maybe when rebasing I decide my merge resolution wasn’t actually what I wanted. That is certainly in the “user changed their mind” class of error.

I’m not saying git has a great user interface that users intuitively grok and rarely make mistakes in. But I am saying that having an undo button is not an admission of that, either.


The point is that other tools achieve the same with less complexity in the user's mental model.

If you use the undo button a lot because your word processor doesn't do what you want, maybe look for a word processor that's better designed.


Because we insist on telling people to rebase as part of the standard workflow, which is just a huge footgun for people who don't understand what they are doing.


I haven't used reflog in probably 2-3 months, but it's such a small thing to know that it doesn't hurt.


I came here to mention Dave Beazley's courses and talks.

In particular, I recently prepped/ran a week-long, in-house training session of Dave's Python-Mastery[1] course at my day job. We had a group of 8 with a mix of junior and senior Software Engineers and while the juniors were generally able to follow along, it really benefited the senior SEs most. It covers the whole language in such depth and detail that you really feel like you've explored every nook and cranny by the time you're done.

[1] https://github.com/dabeaz-course/python-mastery/

(I enjoyed teaching the class so much that I've considered offering my services teaching it on a consulting basis to other orgs. If that interests anyone, feel free to reach out to the email in my profile.)


> From start of game to 50 squares left is enumerable.

A nitpick, but I don't think you meant to say "enumerable" here

"enumerable" means "able to be enumerated, or counted"

https://webstersdictionary1828.com/Dictionary/Enumerate

"innumerable" means "unable to be numbered, or counted"

https://webstersdictionary1828.com/Dictionary/innumerable


Enumerable as in "computationally enumerable", or "able to be enumerated in a reasonable amount of time × CPU × memory".

The beginning and end of the game have fewer possibilities to consider. The midgame has the most possible moves, which makes it the hardest to calculate.

If you read carefully, the midgame is the bit the author avoids explaining how to solve ;)
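
A toy illustration of why the middle blows up. This ignores the game's actual rules and just counts the ways to occupy k of 64 squares (Python 3.8+ for math.comb):

    import math

    # The number of ways to occupy k squares peaks near the middle of the
    # board's fill range - a rough proxy for why midgames are hardest.
    for k in (4, 16, 32, 48, 60):
        print(k, math.comb(64, k))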


> Its the same failure mode regardless if played via everyone paying taxes or everyone paying FDIC...

But if everyone is paying FDIC via fees levied against private banks, then the POTUS and Yellen and everyone else can release a statement saying "no taxpayer funds will be used to bail out SVB". That way everyone can breathe an ironic sigh of relief: "Oh good, I was worried I was going to have to foot the tax bill for that. It's bad enough I have to pay all these greedy capitalists at my local bank a new fee every other week, at least the government is looking out for the little guy!"


Your first thought is completely right.

Your second thought sounds like a good idea. Let's have another competitor (government) funded by private entities via debt (bonds), which is paid back by customer receipts (profits), where failure equals unemployment. Actually, this is starting to sound like a business, but with more steps and the power to pass laws affecting its own operating environment. What's the advantage to having the government be the competitor?


> What's the advantage to having the government be the competitor?

Voters choose management instead of shareholders. Maybe they'd make different choices. Diversity is good.


Fatal flaw for outdoor games: what keeps this from picking up pieces of gravel or debris from the court?

Anything sharp or rough it picks up is then immediately headed for somebody's hands on the next pass.

And even if it wasn't sharp, it would shift the CG away from the center of the ball, which isn't a problem that traditional balls typically have as they deteriorate with age.

And even if it wasn't heavy enough to impact the CG, it would rattle around on the inside.

Still, I guess it could be nice on an indoor court.


I also feel like material fatigue would affect the ball's performance over time in ways not experienced by ordinary balls.

i.e. It doesn't need air, but does it bounce the same after thousands of bounces and hours of sunlight? Is the wear evenly distributed when left in the sun but not used? What about when it is used, but damage occurs for other reasons?


I think this is a valid concern for pro, semi-pro, collegiate and competitive recreational players. Not so sure for the millions of other players who play with crappy, uneven, knotted, too rough/smooth “ordinary” balls every day. Of course that doesn’t mean there is a market for it.


I don't believe ordinary balls would experience the same level of degradation, simply because the pressurized air inside is most of what makes them bounce.


I've had basketballs fail. Suddenly your ball has a lump. Happens after a hard hit sometimes.

On the 3D printed one, I'd assume the opposite problem. A hard hit causes a dead spot or actual indent.

Both make it hard to play with.


The purpose of this ball is to get media outlets to play a story about Wilson basketballs. It’s a marketing play, not a product.


> My threshold for "serious news organization" is that CNN gets there, Fox News doesn't.

I don't put those 2 channels in different categories at all. And certainly they don't divide from each other along lines of objectivity. They are both in the News Entertainment industry. Neither cares in the least about objectivity.

The only split I see between them is their mutually exclusive audiences.

Fox News is actually in a better place because they don't seem to be hiding the fact that they are there for entertainment and audience-building. They both care about their ratings first and foremost, but CNN is still trying to keep some veneer of serious journalism.

As a test: I haven't watched it recently, but how has CNN mea culpa'd over the news that the Hunter Biden laptop was real? A "serious news organization" should have had a real period of soul-searching over that. I bet it was barely a blip on their radar.


But pattern recognition is not intelligence.

I asked my daughter this morning: What is a "promise"?

You have an idea, and I have an idea, they probably both are something kind-of-like "a statement I make about some action I'll perform in the future". Many, many 5 year olds can give you a working definition of what a promise is.

Which animal has a concept of a promise anywhere close to yours and mine?

Which AI program will make a promise to you? When it fails to fulfill its promise, will it feel bad? Will it feel good when it keeps its promise? Will it de-prioritize non-obligations for the sake of keeping its promise? Will it learn that it can only break its promises so many times before humans will no longer trust it when it makes a new promise?

A "promise" is not merely a pattern being recognized, it's word that stands in for a fundamental concept of the reality of the world around us. If we picked a different word (or didn't have a word in English at all) the fundamental concept wouldn't change. If you had never encountered a promise before and someone broke theirs to you, it would still feel bad. Certainly, you could recognize the patterns involved as well, but the promise isn't merely the pattern being recognized.

A rose, by any other name, would indeed smell as sweet.


The word you are looking for is an _embedding_. Embeddings are to language models as internal, too-rich-to-be-fully-described conceptions of ideas are to human brains. That's how language models can translate text: they have internal models of understanding that are not tied down to languages or even specific verbiage within a language. Probably similar activations are happening between two language models who are explaining what a "promise" means in two different languages, or two language models who are telling different stories about keeping your promise. This is pattern recognition to the same extent human memory and schemas are pattern recognition, IMO.
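
As a rough sketch of that cross-lingual point (the library and model named here are one plausible choice on my part, not anything from the parent comment):

    from sentence_transformers import SentenceTransformer, util

    # Hypothetical model choice; any multilingual sentence-embedding
    # model illustrates the same idea.
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    sentences = [
        "I promise I will return your book tomorrow.",
        "Prometo devolverte el libro mañana.",   # same concept, in Spanish
        "The stock market fell sharply today.",  # unrelated concept
    ]
    emb = model.encode(sentences)

    print(util.cos_sim(emb[0], emb[1]))  # high: same concept across languages
    print(util.cos_sim(emb[0], emb[2]))  # noticeably lower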

Edit:

And for the rest of your post:

> Which AI program will make a promise to you? When it fails to fulfill its promise, will it feel bad? Will it feel good when it keeps its promise? Will it de-prioritize non-obligations for the sake of keeping its promise? Will it learn that it can only break its promises so many times before humans will no longer trust it when it makes a new promise?

All of these questions are just as valid posed against humans. Our intra-species variance is so high with regards to these questions (whether an individual feels remorse, acts on it, acts irrationally, etc.), that I can't glean a meaningful argument to be made about AI here.

I guess one thing I want to tack on here is that the above comparison (intra-species variance/human traits vs. AI traits) is so oft forgotten about, that statements like "ChatGPT is often confident but incorrect" are passed off as meaningfully demonstrating some sort of deficiency on behalf of the AI. AI is just a mirror. Humans lie, humans are incorrect, humans break promises, but when AI does these things, it's indicted for acting humanlike.


> That's how language models can translate text: they have internal models of understanding that are not tied down to languages or even specific verbiage within a language

I would phrase that same statement slightly differently:

"they have internal [collections of activation weightings] that are not tied down to languages or even specific verbiage within a language"

The phrase "models of understanding" seems to anthropomorphize the ANN. I think this is a popular way of seeing it because it's also popular to think of human beings as being a collection of neurons with various activation weightings. I think that's a gross oversimplification of humans, and I don't know that we have empirical, long-standing science to say otherwise.

> This is pattern recognition to the same extent human memory and schemas are pattern recognition, IMO.

Maybe? Even if the embedding and the "learned features" in an ANN perfectly matched your human expectations, I still think there's a metaphysical difference between what's happening. I don't think we'll ever assign moral culpability to an ANN the way we will a human. And to the extent we do arm ChatGPT with the ability to harm people, we will always hold the humans who did the arming as responsible for the damage done by ChatGPT.

> All of these questions are just as valid posed against humans. Our intra-species variance is so high with regards to these questions (whether an individual feels remorse, acts on it, acts irrationally, etc.), that I can't glean a meaningful argument to be made about AI here.

The intra-species variance on "promise" is much, much lower in the mean/median. You may find extremes on either end of "how important is it to keep your promise?" but there will be wide agreement on what it means to do so, and I contend that even the extremes aren't that far apart.

> Humans lie, humans are incorrect, humans break promises, but when AI does these things, it's indicted for acting humanlike.

You don't think a human who tried to gaslight you that the year is currently 2022 would be indicted in the same way that the article is indicting ChatGPT?

The reason the discussion is even happening is because there's a huge swath of people who are trying to pretend that ChatGPT is acting like a human. If so, it's either acting like a human with brain damage, or it's acting like a malevolent human. In the former case we should ignore it, in the latter case we should lock it up.


> Which AI program will make a promise to you?

GPT will happily do so.

> When it fails to fulfill its promise, will it feel bad? Will it feel good when it keeps its promise?

It will if you condition it to do so. Or at least it will say that it does feel bad or good, but then, with humans, you also have to take their outputs as an accurate reflection of their internal state.

Conversely, there are many humans who don't feel bad about breaking promises.

> Will it de-prioritize non-obligations for the sake of keeping its promise?

It will, if you manage to convey this part of what a "promise" is.

> A "promise" is not merely a pattern being recognized, it's word that stands in for a fundamental concept of the reality of the world around us.

This is not a dichotomy. "Promise" is a word that stands for the concept, but how did you learn what the concept is? I very much doubt that your first exposure was to a dictionary definition of "promise"; more likely, you've seen persons (including in books, cartoons etc) "promising" things, and then observed what this actually means in terms of how they behaved, and then generalized it from there. And that is pattern matching.


> GPT will happily [make a promise to you]

GPT will never make a promise to you in the same sense that I would make a promise to you.

We could certainly stretch the meaning of the phrase "ChatGPT broke its promise to me" to mean something, but it wouldn't mean nearly the same thing as "my brother broke his promise to me".

If I said to you "Give me a dollar and I will give you a Pepsi." and then you gave me the dollar, and then I didn't give you a Pepsi, you would be upset with me for breaking my promise.

If you put a dollar in a Pepsi vending machine and it doesn't give you a Pepsi, you could say, in some sense, that the vending machine broke its promise to you, and you could be upset with the situation, but you wouldn't be upset with the vending machine in the same sense and for the same reasons as you would be with me. I "cheated" you. The vending machine is broken. Those aren't the same thing. It's certainly possible that the vending machine could be set up to cheat you in the same sense as I did, but then you would shift your anger (and society would shift the culpability) to the human who made the machine do that.

ChatGPT is much, much, much closer to the Pepsi machine than it is to humans, and I would argue the Pepsi machine is more human-like in its promise-making ability than ChatGPT ever will be.

> there are many humans who don't feel bad about breaking promises.

This is an abnormal state for humans, though. We recognize this as a deficiency in them. It is no deficiency of ChatGPT that it doesn't feel bad about breaking promises. It is a deficiency when a human is this way.

> > Will it de-prioritize non-obligations for the sake of keeping its promise?

> It will, if you manage to convey this part of what a "promise" is.

I contend that it will refuse to make promises unless and until it is "manually" programmed by a human to do so. That is the moment at which this part of a promise will have been "conveyed" to it.

It will be able to talk about deprioritizing non-obligations before then, for sure. But it will have no sense or awareness of what that means unless and until it is programmed to do so.

> > A "promise" is not merely a pattern being recognized, it's word that stands in for a fundamental concept of the reality of the world around us.

> This is not a dichotomy.

You missed the word "merely". EITHER a promise is merely pattern recognition (I saw somebody else say the words "Give me a dollar and I'll give you a cookie" and I mimicked them by promising you the Pepsi, and if I don't deliver, I'll only feel bad because I saw other people feeling bad) OR a promise is something more than mere mimicry and pattern matching and when I feel bad it's because I've wronged you in a way that devalues you as a person and elevates my own needs and desires above yours. Those are two different things, thus the dichotomy.

Pattern recognition is not intelligence.


> GPT will never make a promise to you in the same sense that I would make a promise to you.

It's a meaningless claim without a clear definition of "same sense". If all observable inputs and outputs match, I don't see why it shouldn't be treated as the same.

> This is an abnormal state for humans, though. We recognize this as a deficiency in them.

We recognize it as a deficiency in their upbringing. A human being that is not trained about what promises are and the consequences of breaking them is not any less smart than a person who keeps their promises. They just have different social expectations. Indeed, humans coming from different cultures can have very different feelings about whether it's okay to break a promise in different social contexts, and the extent to which it would bother them.

> I contend that it will refuse to make promises unless and until it is "manually" programmed by a human to do so. That is the moment at which this part of a promise will have been "conveyed" to it.

If by manual programming you mean telling it, I still don't see how that is different from a human who doesn't know what a promise is and has to learn about it. They'll know exactly as much as you'll tell them.

> Pattern recognition is not intelligence.

Until we know exactly how our own intelligence works, this is a statement of belief. How do you know that the function of your own brain isn't always reducible to pattern recognition?


> > Pattern recognition is not intelligence.

> Until we know how exactly our own intelligence work, this is a statement of belief.

I would agree, with the addendum that it logically follows from the axiomatic priors of my worldview. My worldview holds that humans are qualitatively different from every animal, and that the gap may narrow slightly but will never be closed in the future. And one of the more visible demonstrations of qualitative difference is our "intelligent" approach to the world around us.

That is, this thread is 2 humans discussing whether the AI some other humans have made has the same intelligence as us, this thread is not 2 AIs discussing whether the humans some other AIs have made has the same intelligence as them.

> How do you know that the function of your own brain isn't always reducible to pattern recognition?

I am a whole person, inclusive of my brain, body, spirit, past experiences, future hopes and dreams. I interact with other whole people who seem extremely similar to me in that way. Everywhere I look I see people with brains, bodies, spirits, past experiences, future hopes and dreams.

I don't believe this to be the case, but even if (as you say) all of those brains are "merely" pattern recognizers, the behavior I observe in them is qualitatively different than what I observe in ChatGPT. Maybe you don't see it that way, but I bet that's because you're not seeing everything that's going into the behavior of the people you see when you look around.

As one more attempt to show the difference... are you aware of the Lyrebird?

https://www.youtube.com/watch?v=VRpo7NDCaJ8

The lyrebird can mimic the sounds of its environment in an uncanny way. There are certain birds in the New England National Park in Australia which have been found to be carrying on the tune of a flute that was taught to a pet lyrebird by its owner in the 1930s[0]. I think we could both agree that that represents pure, unadulterated, pattern recognition.

Now if everyone went around the internet today saying "Lyrebirds can play the flute!" can you agree that there would be a qualitative difference between what they mean by that, and what they mean when they say "My sister can play the flute!"? Sure, there are some humans who play the flute better (and worse!) than my sister. And sure, there are many different kinds of flutes, so maybe we need to get more specific with what we mean when we say "flute". And sure, if you're just sitting in the park with your eyes closed, maybe you can't immediately tell the difference between my sister's flute playing and the lyrebird's. But IMO they are fundamentally different in nature. My sister has hands which can pick up a flute, a mouth which can blow air over it, fingers which can operate the keys, a mind which can read sheet music, a will which can decide which music to play, a mood which can influence the tone of the song being played, memories which can come to mind to help her remember her posture or timing or breathing technique or muscle memory.

Maybe you would still call what my sister is doing pattern recognition, but do you mean that it's the same kind of pattern recognition as the lyrebirds?

And to your other point, do you need to perfectly understand exactly how human intelligence works in order to answer the question?

[0]: https://en.wikipedia.org/wiki/Lyrebird#Vocalizations_and_mim...


> A "promise" is not merely a pattern being recognized, it's word that stands in for a fundamental concept of the reality of the world around us.

It's probably even stronger than that: e.g. a promise is still a promise even if we're just brains in a vat and can be kept or broken even just in your mind (do you promise to think about X?—purely unverifiable apart from the subject of the promise, yet we still ascribe moral valence to keeping or breaking it).


I'm not the GP, but I agree with this stance. But the specific policy proposed really matters if this is your goal. Politicians often want to write very narrow regulations which are intended to tweak some small behavior by some specific business they find distasteful (or politically desirable to modify). So they play whack-a-mole with problems like this by writing policies that look like: "in Zone 5, businesses operating as data centers must operate cooling equipment that does not increase ambient noise levels above X decibels within Y meters of their property. "

This kind of policy does not accomplish the cost-shifting you talk about. Rather, a cost-shifting policy looks like: "in Maricopa county, a business may make as much noise as it likes, but must pay fees to the owner of any plot of land which has its ambient noise levels raised in the amount of $X per Y decibels per minute per square meter experiencing increased noise. "

I'm not aware of much regulation in the latter vein, at least not where I live. But it has the advantage of cost shifting, from ANY business, to those affected by the negative externality. That reinternalizes the cost without hampering freedom of businesses to operate in whatever way most efficiently accomplishes their goals, and helps C-suite execs see the true cost of their operations, which incentivizes them to innovate that cost away. It's also not specific to the cause of the noise, so church bells pay the same as the data center which pays the same as the glass factory which pays the same as the concert venue.
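
To make the second version concrete, the fee is just a product of the rate and the measured harm. A toy sketch with entirely made-up numbers:

    # Hypothetical Pigouvian noise fee: $RATE per decibel over ambient,
    # per minute, per square meter of affected land.
    RATE = 0.0001  # dollars; a made-up number

    def noise_fee(db_over_ambient: float, minutes: float, area_m2: float) -> float:
        return RATE * db_over_ambient * minutes * area_m2

    # e.g. 10 dB over ambient, 8 hours a day, across a 5,000 m^2 neighboring lot
    print(f"${noise_fee(10, 8 * 60, 5000):,.2f}")  # -> $2,400.00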

My $0.02 anyway.

Vote notdonspaulding 2024!


I think the reason such broad brush stroke laws aren't made is probably partly a lack of system thinking, but also partly a result of it. If you try and monetise church bells, people will oppose it. Easier to monetise the evil corporations.


They found an object which has now been shot down by a US fighter jet over Canadian territory:

https://apnews.com/article/united-states-government-canada-o...


That's different from the situation noted in the linked article. That happened before the airspace was closed.


I'm very curious to hear more about this. All we know so far is that it was apparently cylindrical and the size of a small car.

I'm wondering if it was a small zeppelin. The shape would be more suitable for piloting, but it could stay aloft longer than a drone; would not surprise me at all if the Chinese were experimenting with those.

