Hacker News | madsbuch's comments

Based on the image with the reflection, the photographer is probably closer to 12 - so there is indeed a chance they are still alive.

The meaning of copyright is changing as these images are being divorced from their physical medium.

The article can be read as saying the film was found and sold - a practice that was probably fine when an image had a physical manifestation.

But yes, we are at a juncture now with copyright, as more and more things are virtual.


Numbers do lie - it would be nice to have a breakdown of this.

Maybe the difference, 16 percentage points, is what affords Europeans fair treatment upon illness, where half of that is "shareholder value" in the US.

Feel free to link the study you used so we don't just have to trust some internet rando.


Just google 'percentage of GDP driven by government spending' for your country of choice.

This is some of the most basic and widely available economic data that exists.

https://www.imf.org/external/datamapper/exp@FPP/USA/FRA/JPN/...

(edited my original comment to include the source as well)


Common source, great!

First, you seem to conflate "free markets" (whatever "free" means here) with decentralized spending. How does that make sense?

And more importantly: If we can agree that a democracy ought to be an aspiration for a society, and a functioning democracy requires some minimum level of equality, how will you ensure that under your "free market"?


Definitionally, government spending is centralized activity.

Definitionally, non-government spending is market activity (controlled by each market participant, so decentralized).

I would question why this data upsets you so much instead of trying to shoot the messenger.


Fair enough.

Now, can you justify your sentiment? Which was what I asked about?

(Or is it more fun to just cycle around in indifferent ontological ramblings?)

Answer to the edit:

I am not upset by the numbers. I am asking how you will ensure a society with equality and democracy under your proposed system.

In particular, taxes are an effective way to ensure equality - something that you are probably seeing in the numbers you refer to.

In the US we see that Trump is increasing taxes on consumers (tariffs) while lifting taxes from the highest earners.

To cut to the core: I am asking if you are pro-oligarchy? And if not, how do you propose that we ensure the equality needed to uphold a democracy?


> The first duty of your documentation is to be complete and correct.

It is probably unrealistic to expect documentation to be complete, as it is not clear where the boundary of inclusiveness lies.

Take e.g. AWS's Node packages. There they spend words recommending async/await, which IMHO is firmly outside the scope of library docs.

I am curious what you feel is missing: is it details specific to the piece of software you use, or is it material that can be deemed expected knowledge for a professional software developer?


The former.


Yep, let's wing serving 400 million sandwiches - whatever the risk that the US population dies of salmonella or listeria.

Anyways, one of the things about growing up is realizing that there is more to the world than just innovation.


I think one of the things about growing up is accepting personal responsibility and not looking at the government/daddy to protect you from everything. If I sell 400 million skateboards - do we need a regulatory board to approve skateboard design changes?

I'm sure millions of people make unregulated sandwiches at home just fine.


>I think one of the things about growing up is accepting personal responsibility and not looking at the government/daddy to protect you from everything. If I sell 400 million skateboards - do we need a regulatory board to approve skateboard design changes?

Yes, especially if your target market for those skateboards is kids / minors.

>I'm sure millions of people make unregulated sandwiches at home just fine.

If someone makes a sandwich for themselves incentives are aligned to prevent unhygienic practices. I'm not going to cut corners to maximize some different measure. If some restaurant produces food for me, they are incentivized to maximize profit margin, which is not directly aligned with my desire for non-dangerous food.


What I hate about this argument is that the FDA does not predate civilization. In fact, it's a relatively recent development. Not only has this idea been tried, but throughout most of human history, people lived in the world you describe, died of salmonella, and the people who lived in that world decided they'd be better off if that wasn't a thing anymore.


The world predating the FDA didn't have single factories serving hundreds of millions of people - such a concentration of risk very much merits an FDA.

It is all about risk.

The FDA enables civilization to grow above a certain threshold.


Yeah, making sure there's a standard of cleanliness or food safety in restaurants seems kind of pointless, right? If the consumer eats that food, it's their fault for sure.


Well, even without regulations, restaurants that poison their customers will get a bad reputation and go out of business.

So the market incentivizes cost cutting but not too much of it.


I mean, I didn't get poisoned my whole life! Let's get rid of all the regulations obviously they are useless.


> I'm sure millions of people make unregulated sandwiches at home just fine.

Very little about that sandwich is unregulated. The bread they bought in the store is regulated. Whatever they put on the sandwich is regulated.

Without the FDA, companies would put profits above food safety.


> I think one of the things about growing up is accepting personal responsibility

What could I have done here to know that the sandwich is contaminated with salmonella before eating it?


I can see a world where there's a private alternative to the FDA going around and certifying that food is safe for consumption. I just know that the world before the FDA didn't have one, and the FDA works well enough that I'm not willing to find out. I think this has a lot of parallels to software - if it ain't broke don't fix it.


And that organization would be bought off by Big Food quickly


That is a really good point, what would be the business model of such an organization? Who funds them?

If it is the government, then that is just the FDA with extra steps

I could imagine food companies funding it to keep their competitors in check, don't know how likely that is in practice

Maybe there could be a way to make the consumer pay for the service. Provide a website where customers pay a fee, enter the name of the product/restaurant, then get its safety levels. You could even include fancy graphs and charts to sweeten the deal. How to do that profitably I don't know.


Part of the thing about growing up is realizing that you are a PRIVILEGED little product of a stable society. And maybe it's worth caring about others in that society instead of "corporate innovation" that threatens to fully destabilize said society.


You don't know anything about me. By the way, how many regulators/states have "fully destabilized" society through war and genocide?


Do you seriously think corporate "innovation" isn't involved in wars and genocide?


> personal responsibility

A sense of personal responsibility dilutes very quickly as more people get involved. This is a well-researched dynamic in groups and collectives.

As it turns out, it's very easy to rationalize your own actions if you can defer your responsibility to a wider context. On an operational level: "My job - HR, SRE engineering, project management,... - didn't hurt anyone.", "I received an industry award last year for my work",... On a strategic level: "Too many people rely on us, so we can't fail.", "Our original mission didn't change.", "Our mission was, is and will be a net positive", ... Not just that: people are actually convinced that those rationalizations are 100% true, and unable to consciously notice how their own actions, in a small or large way, contribute to a negative impact. Just listen to testimonies of these people; they truly are convinced to their core that their work is a net positive for humanity.

> If I sell 400 million skateboards - do we need a regulatory board to approve skateboard design changes?

Suppose your design involves a wonky wheel. If you sell 10 skateboards, and 1 person falls, breaks their leg and decides to sue you for damages: that's a private problem between you and that person. If you sell 400 million skateboards, and millions of people break their legs: that's a problem for the entirety of society.

Safety is also why car design is heavily regulated. Not necessarily to ensure individual safety, but to make sure that society, as a whole, isn't crippled by hundreds of thousands of people requiring care or getting killed in car accidents.

If you are able to sell 400 million skateboards, I sure hope there are regulations that enforce the safety of your product design.


> I'm sure millions of people make unregulated sandwiches at home just fine.

You're on the verge of uncovering the actual meaning of personal responsibility.


The market doesn't protect all those kids who were maimed or died trying out your regulation-free skateboard.

A basic level of safety might mean that your skateboards sell faster, now that parents don't have to risk the health of their offspring.


This is a nice fantasy; it's just a shame we live in a world full of psychotic C-suites that would do anything and everything they could if it meant the magic line goes up half a percentage point. I guess you could just "take personal responsibility" to not drink polluted water tainted by unfiltered chemical dumps, but I'd much rather we tell companies to get bent when they try to pollute rivers and lakes en masse to save a buck.


There is a concept I'd recommend you get familiar with: systemic risk.

Nobody really cares about you and your sandwich.

But whenever we introduce single points of risk into society, these need to be managed.

Fair enough, you are personally responsible and don't eat the sandwich.

The rest of the US was not.

- at least you retain your right to claim "What did I say".


> do we need a regulatory board to ...

yes, because it's clear from history that companies can't be trusted to not cut corners to boost profits at the expense of consumer safety


What I always find hilarious about these naive libertarian types is they never even bother to check their hypotheticals against reality. For example, FutureMotion had to have a regulatory body intervene because they were killing and injuring people with their skateboard designs:

https://www.theguardian.com/sport/2023/oct/03/future-motion-...

So the answer to your question is, “yes, that needs to and did happen.”


google survivorship bias


Why shouldn't it apply to larger contracts?

There is a very easy way to get around such a requirement from legislation: Just call it licensing instead of a purchase.

The need for such legislation arises from corporations' reckless use of the words "purchase" and "buy" for goods that have been licensed.


> very easy way to get around such a requirement from legislation: Just call it licensing instead of a purchase.

Licensing should come with enhanced consumer rights e.g. an explicit license duration and allow the consumer to return the license for pro-rata refund within that duration. The license should be for the IP and honoured for all formats and platforms that meet some regularity threshold. The absurdity of having to rebuy things you "own" because you switch device or format has to end.

Along similar lines, hardware should be called "hire" not "sell" when the manufacturer maintains control over the device e.g. locked bootloaders, encryption keys, online service dependencies, forced updates with no downgrade, remote-access privileges, telemetry, not meeting "right-to-repair", hardware locking preventing component replacement or choice of consumable (e.g. ink) etc.

Similar return rights should apply if hardware is leased. Seller would need to be insured/escrow to meet consumer refunds if they break their side of the lease (e.g. going bust and shutting down required online services).


This is a part of it, but I don't think it's all there is. Words mean little when there is such a large power and information asymmetry between consumers and sellers.

The law should formalize the concepts of rentals (licenses) and purchases. Purchases create non-waivable rights of transferability and allow the owner to demand compensation when a service closes: for example, if I shut down my movie platform, you should get to download a copy of everything you own.

Licenses on the other hand do not confer such rights, but should still be transferable under a certain value and have a set period during which its terms must be fulfilled, otherwise the licensee needs to be compensated. No "we change the terms at our sole discretion at any moment" nonsense.

Vendors can give you extra rights (like prorated returns or exchanges), but they can't take any of the codified rights away. I assume there need to be some details about companies just setting a license of one day but never revoking access to avoid the regulation, but someone smarter than me can probably figure that out.

For larger licenses I think the customer (usually customers) has a greater negotiating leverage, so it isn't as necessary to codify these terms, but of course this is contingent on there not existing trillion-dollar corporations, which is not the world we live in.


Licenses or rights can be sold, too.

If you buy a physical book or DVD, you basically have a perpetual right to read, watch, listen to that copyrighted material. But you can sell that physical item to anyone and with that the right to consume that material.

This is not (or at least should not be) different even if there is no physical item or even if the copyrighted material is software and not audiovisual media.


But licensing is a very obfuscatory word. What are the terms of the licenses? Consumers don't really know and they vary widely anyway.

Buying may have been a misnomer but it had some useful baggage that is shed with the terminology shift.


We have this in Denmark, where you can get up to 2 years of unemployment benefit - even when you quit voluntarily (you will get a one-month quarantine, though).

Denmark has a 2.9% unemployment rate: https://www.dst.dk/en/Statistik/emner/arbejde-og-indkomst/be...

I don't entirely know how that compares to the US.


The government of Denmark does not provide unemployment insurance. Unemployment insurance is voluntary and provided through privately owned insurance funds. The government does not pay out unemployment benefits at all.

To get the extended benefits, you have to pay more. Like any private insurance, pay more to get better benefits.


That is wrong - while you need the insurance to be eligible, the government finances most of the unemployment payout. It is a hybrid model.

The insurance premium is also entirely tax deductible and nowhere near enough to cover the scheme - a scheme which is highly regulated, well, because the government pays for it.


There is an immensely strong dogma that, to my best knowledge, is not founded in any science or philosophy:

        First we must lay down certain axioms (smart word for the common sense/ground rules we all agree upon and accept as true).
        
        One of such would be the fact that currently computers do not really understand words. ...
The author is at least honest about his assumptions, which I can appreciate. Most other people just have it as a latent thing.

For articles like this to be interesting, this cannot be accepted as an axiom. Its justification is what's interesting.


It’s a reasonable axiom, because for many people understanding involves qualia. If you believe LLMs have qualia, you also believe a very large Excel sheet with the right numbers has an experience of consciousness and feels pain or something when the document is closed.


As I wrote, I appreciate that the author wrote it out as they did. It might be reasonable in the context of the article. But fixing it as an axiom just makes the discussion boring (for me).

> If you believe LLM have qualia, you also believe a ...

You use the word believe twice here. I am actively not talking about beliefs.

I just realised that the author indeed gave themselves an out:

> ... currently computers do not really understand words.

The author might believe that future computers can understand words. This is interesting. Questions being _what_ needs to be in order for them to understand? Could that be an emergent feature of current architectures? That would also contradict large parts of the article.


Amusingly, the author does not appear to fully understand the meaning of "axiom".

While in practice axioms are often statements that we all agree on and accept as true, that isn't necessarily the case and isn't the core of the term's meaning.

Axioms are something we postulate as true, without providing an argument for its truth, for the purposes of making an argument.

In this case, the assertion isn't really used as part of an argument, but to bootstrap an explanation of how words are represented in LLMs.

Edit: I find this so amusing because it is an example of learning a word without understanding it.


> Axioms are something we postulate as true, without providing an argument for its truth, for the purposes of making an argument.

Uhm… no?

They are literally things that can't be proven but allow us to prove a lot of other things.


It seems like you fully agree with the parent.

I also agree that the author probably did not mean to establish an axiom: the axiom being established, while not having any support right now, does seem like something we could settle in the future. The author also uses the word "currently" in their axiom, which contradicts the nature of axioms (or are temporal axioms a thing?).

I think the author merely meant to establish the scene for the article. Something I truly appreciate.


"unprovability" is not a property that it is necessary to prove to pick something as an axiom.

There is generally a project to reduce axioms to the simplest and weakest forms required to make a proof. This does result in axioms that are unprovable, but it does not mean that "unprovable" is a necessary property of axioms.


Yeah, for axioms like the above my next question is: define 'understand'. Does my dog understand words when it completes specific actions because of what I say? I'm also learning a new language; do I understand a word when I attach a meaning (often a bunch of other words) to it? Turns out computers can do this pretty well.


Oh please, enough with the semantics. It reminds me of a postmodernist asking me to define what "is" is. The LLM does not understand words in the way a human understands them, and that's obvious. Even the creators of LLMs implicitly take this as a given and would rarely openly say they think otherwise, no matter how strong the urge to create a more interesting narrative.

Yes, we attach meaning to certain words based on previous experience, but we do so in the context of a conscious awareness of the world around us and our experiences within it. An LLM doesn't even have a notion of self, much less a mechanism for attaching meaning to words and phrases based on conscious reasoning.

Computers can imitate understanding "pretty well" but they have nothing resembling a pretty good or bad or any kind of notion of comprehension about what they're saying.


It's the most incredible coincidence. Three million paying OpenAI customers spend $20 per month (compare: Netflix standard: $15.49/month) thinking they're chatting with something in natural language that actually understands what they're saying, but it's just statistics and they're only getting high-probability responses without any understanding behind it! Can you imagine spending a full year showing up to talk to a brick wall that definitely doesn't understand a word you say? What are the chances of three million people doing that! It's the biggest fraud since Theranos!! We should make this illegal! OpenAI should put at the bottom of every one of the millions of responses it sends each day: "ChatGPT does not actually understand words. When it appears to show understanding, it's just a coincidence."

You have kids talking to this thing asking it to teach them stuff without knowing that it doesn't understand shit! "How did you become a doctor?" "I was scammed. I asked ChatGPT to teach me how to make a doctor pepper at home and based on simple keyword matching it got me into medical school (based on the word doctor) and when I protested that I just want to make a doctor pepper it taught me how to make salsa (based on the word pepper)! Next thing you know I'm in medical school and it's answering all my organic chemistry questions, my grades are good, the salsa is delicious but dammit I still can't make my own doctor pepper. This thing is useless!

/s


Maps are useful, but they don't understand the geography they describe. LLMs are maps of semantic structures and as such, can absolutely be useful without having an understanding of that which they map.

If LLMs were capable of understanding, they wouldn't be so easy to trick on novel problems.


> If LLMs were capable of understanding, they wouldn't be so easy to trick on novel problems.

Got it, so an LLM only understands my words if it has full mastery of every new problem domain within a few thousand milliseconds of the first time the problem has been posed in the history of the world.

Thanks for letting me know what it means to understand words, here I was thinking it meant translating them to the concepts the speaker intended.

Neat party trick to have a perfect map of all semantic structures and use it to trick users to get what they want through simple natural-language conversation, all without understanding the language at all.


> Got it, so an LLM only understands my words if it has full mastery of every new problem domain within a few thousand milliseconds of the first time the problem has been posed in the history of the world.

That's not what I said. Please try to have a good faith discussion. Sarcastically misrepresenting what I said does not contribute to a healthy discussion.

There have been plenty of examples of taking simple, easy problems, presenting them in a novel way that doesn't occur in the training material, and having the LLM get the answer wrong.


Sounds like you want the LLM to get the answer right in all simple, easy cases before you will say it understands words. I hate to break it to you, but people do not meet that standard either and misunderstand each other plenty. For three million paying customers, ChatGPT understands their questions well enough, and they are happy to pay more than for any other widespread Internet service for the chance to ask it questions in natural language, even though there is a free tier available with generous free usage.

It is as though you said a dog couldn't really play chess if it plays legal moves all day every day from any position and for millions of people, but sometimes fails to see obvious mates in one in novel positions that never occur in the real world.

You're entitled to your own standard of what it means to understand words but for millions of people it's doing great at it.


> I hate to break it to you but people do not meet that standard either and misunderstand each other plenty

Sure, and there are ways to tell when people don't understand the words they use.

One of the ways to check how well people understand a word or concept is to ask them a question they haven't seen the answer for.

It is the difference in performance on novel tasks that allows us to separate understanding from memorization in both people and computer models.

The confusing thing here is that these LLMs are capable of memorization at a scale that makes the lack of understanding less immediately apparent.

> You're entitled to your own standard of what it means to understand words but for millions of people it's doing great at it.

It's not mine, the distinction I am drawing is widespread and common knowledge. You see it throughout education and pedagogy.

> It is as though you said a dog couldn't really play chess if it plays legal moves all day every day from any position and for millions of people, but sometimes fails to see obvious mates in one in novel positions that never occur in the real world.

While I would say chess engines can play chess, I would not say a chess engine understands chess. Conflating utility with understanding simply serves to erase an important distinction.

I would say that LLMs can talk and listen, and perhaps even that they understand how people use language. Indeed, as you say, millions of people show this every day. I would, however, not say that LLMs understand what they are saying or hearing. The words are themselves meaningless to the LLM beyond their use in matching memorized patterns.

Edit: Let me qualify my claims a little further. There may indeed be some words that are understood by some LLMs, but it seems pretty clear there are definitely some important ones that aren't. Given the scale of memorized material, demonstrating understanding is hard but assuming it is not safe.


Some of us care about actual understanding and intelligence. Other people just want something useful enough that can mimic it. I don't know why he feels the need to be an ass about it though.


> Maps are useful, but they don't understand the geography they describe. LLMs are maps of semantic structures and as such, can absolutely be useful without having an understanding of that which they map.

That's a really interesting analogy I've never heard before! That's going to stick in my head right alongside Simon Willison's "calculator for words".


I am not sure where this comment fits as an answer to my comment.

Firstly, do understand that I am not saying that LLMs (or ChatGPT) do understand.

I am merely saying that we don't have any sound frameworks to assess it.

For the rest of your rant: I definitely see that you don't derive any value from ChatGPT. As such, I really hope you are not paying for it - or wasting your time on it. What other people decide to spend their money on is really their business. I don't think any normally functioning person expects that a real person is answering them when they use ChatGPT - as such, it is hardly a fraud.


I added an /s tag to my comment.


Sorry, I answered too fast to see that.


I think the critics (myself included) perfectly understood that point.

> What they really mean is that good programmers should think ahead and craft their code with an eye minimizing future modifications.

The critique is exactly that this cannot happen in real-world projects, because you can only speculate about what the requirements for the code base will be down the road.

To counter this I usually apply two principles:

1. Occam's razor - implement the simplest solution

2. Write code that is readable and understandable, so it is easier to change the code with the requirements.

The last being completely opposite to what the author of the article thinks.

The worst thing I can think of is somebody needlessly DRYing up a code base prematurely - this is, in my opinion, junior behavior.


Exactly my thoughts.

The reason why good code is code that is easy to read, is because products evolve, and so does the code.

Suddenly the taxonomy of that enum starts to shift, and the name that was perfect yesterday does not make sense tomorrow.

These changes happen gradually, and a basic acceptance that the code base is not on par with the product understanding is necessary in order to have any kind of velocity and not only spend time refactoring.


I disagree.

"good code is easy to read" - that does not work.

I can write a bubble sort instead of quicksort and that code will be bad.

Maybe you can do the same thing with privacy policies. Most complicated privacy policies are bad, so they make them hard to read so that people do NOT understand them and give up.

But you could have a privacy policy that is bad and easy to read. "We can do anything".

I think good code is primarily easy to read. And I think it should not attract attention through bad behavior, so it should additionally not come under scrutiny for that.


Of course your code should live up to the requirements and be correct for it to be good - and the requirements can also be performance requirements.

If you have a list of at most 10 elements that needs to be sorted, and you opt for quicksort over bubble sort in a context where bubble sort's time/space guarantees perfectly satisfy the requirements, well, then you absolutely wrote bad code.

This is what a more senior developer understands, where a junior would jump in and write worse code.
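To make the trade-off concrete, here is a minimal sketch in Python (the helper name `bubble_sort_small` is invented for this example, not from the thread): for a list bounded at around 10 elements, bubble sort is a handful of lines, easy to verify by inspection, and fast enough that quicksort's asymptotic advantage never materializes.

```python
def bubble_sort_small(items):
    """In-place bubble sort on a copy - O(n^2) worst case, but trivial
    to verify and perfectly adequate when len(items) is bounded by a
    small constant (say, 10)."""
    xs = list(items)  # don't mutate the caller's list
    n = len(xs)
    for i in range(n):
        swapped = False
        # each pass bubbles the largest remaining element to the end
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                swapped = True
        if not swapped:  # already sorted - early exit
            break
    return xs

print(bubble_sort_small([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The point is not that bubble sort is good in general; it is that matching the solution's complexity to the problem's actual bounds is part of what "good code" means.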


I still rely heavily on principles acquired from courses on Software Architecture and PL studies as part of my CS degree - and I can definitely see a difference in how people organise their code with the same tenure but no schooling.


What are the most common principles you use?


Using types ;)
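As one hedged illustration of what "using types" can buy (a hypothetical Python sketch; the `Money` and `Currency` names are invented for the example, not from the thread): a dedicated type for a domain concept lets the type checker, rather than a code reviewer, catch a whole class of mix-ups.

```python
from dataclasses import dataclass
from enum import Enum

class Currency(Enum):
    DKK = "DKK"
    USD = "USD"

@dataclass(frozen=True)
class Money:
    """A dedicated type instead of a bare number: amounts can no longer
    be silently mixed with unrelated integers or other currencies."""
    amount: int  # minor units (cents / oere) to avoid float rounding
    currency: Currency

    def add(self, other: "Money") -> "Money":
        if self.currency is not other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount + other.amount, self.currency)

total = Money(1000, Currency.DKK).add(Money(250, Currency.DKK))
print(total.amount)  # 1250
```

The design choice here is making illegal states hard to express: adding DKK to USD fails loudly at the boundary instead of producing a quietly wrong number.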

