You're never really wrestling the computer. You're typically wrestling with design choices and the technical debt of decisions that were, in hindsight, bad ones. And it's always in hindsight; at the time, those decisions always seemed smart.
With the rise of frameworks and abstractions, who is actually doing anything with actual computation?
Most of the time it's wasted learning some bs framework or implementing some other poorly designed system that some engineer who no longer works at the company created. In fact, the entire industry is basically one poorly designed system whose technical debt grows more burdensome year by year.
It's very rarely about actual programming, actual computation, or even "engineering". Usually it's just one giant kludge pile.
Are people still in denial about the daily usage of AI?
It's interesting how people from the old technological sphere viciously revolt against the emerging new thing.
Actually, I think this is the clearest indication of a new technology emerging.
If people are viciously attacking a new technology, you can be sure it's important, because what's actually happening is that the new thing is a direct threat to the people attacking it.
>Because leaded gas is the same thing as people using a new technology like AI.
It's not the same, but it's not necessarily any good. I've observed the following, after ~2 weeks of free ChatGPT Plus access (as an artist who is trying to give the technology a chance, despite the vociferous (not vicious, geez) objections of many of my peers):
It's addictive (possibly on purpose). AI systems frequently return imperfect outputs. Users are trained to repeat until the desired output comes. Obviously, this can be abused by sophisticated-enough systems, pushing outputs that are JUST outside the user's desire so that they have to continue using it. This could conceivably happen independent of obvious incentives like ads or pay credits; even free systems are incentivized to use this dark pattern, as it keeps the user coming back, building a habit that can be monetized later.
Which leads into: it's gambling. It's a crapshoot whether the output will be what the user desires. As a result, every prompt is like a slot pull, exacerbated by the wait to generate an answer. (This is also why the generation is shown being typed/developed; the information in those preliminary outputs is not high-enough fidelity or presented in a readable way; instead, they're bits of visual stimuli meant to inure your reward system to the task, similar to how Robinhood's stock prices don't simply change second-to-second, but "roll" to them with a stimulating animation).
That's just a small subset of the possible effects on a user over time. Far from freeing users to create, my experience has been one of having to fight ChatGPT and its Images model, as well as the undesirable behaviors it seems to be trying to draw out of me.
I don't think there is anything that can be said to actually change people's minds here. Because people that are against it aren't interested in actually engaging with this new technology.
People who are interested in it and are using it on a daily basis see value in it. There are now hundreds of millions of active users who find a lot of value in using it.
The other factor here is the speed of adoption, which I think has seriously taken a lot of people by surprise, especially those attempting a wholesale boycott campaign against AI. For that reason, the people boycotting this new technology are, imo, deluded.
If it were advocacy for open-source models instead, it would be far more reasonable.
>People who are interested in it and are using it on a daily basis see value in it.
I'm one of them. I've got plenty of image gens to prove it (and I'd have more if OpenAI hadn't killed Dall-E labs with almost no heads-up). I'm telling you that I still think contemporary implementations of the technology are just this side of vile, and that I hope that the industry collapses soon, so that grassroots start-ups with actual moral scruples, and a desire to enable rather than control their customers, have the chance to emerge and compete. Also: for said customers, such a collapse wouldn't even be THAT different from the way in which tech companies currently snatch away tools on a whim.
> Because people that are against it aren't interested in actually engaging with this new technology.
How do you know that? Are you just assuming anyone who has something negative to say just hasn't used it?
In my case that's absolutely not true. I've used it nearly daily for coding tasks, and a handful of times for other random writing or research tasks. In a few cases I've actively encouraged others to try it.
From direct experience I can say it's definitely not ready for prime time. And I like the way most companies are trying to deploy it even less.
There is something there with LLMs, but the way they're being productized and commercialized does not seem healthy. I would rather see more research, slow testing and trials, and a clear understanding of the potential negatives for society before we simply dump it into the public sphere.
The only mind I see unwilling to change is yours, when you characterize any pushback against AI as simply ignorant haters. You are clearly wrong about that.
>There is something there with LLMs, but the way they're being productized and commercialized does not seem healthy. I would rather see more research, slow testing and trials, and a clear understanding of the potential negatives for society before we simply dump it into the public sphere.
There's something incredibly harmful about this kind of mentality.
It's a weird kind of paternalizing, diminishing, and degrading view of common people.
> NFTs are still being used. Along with a lot of the crypto ecosystem. In fact we're increasingly finding legitimate use cases for it.
Look at this. I think people need to realize that it's the same kind of folks migrating from gold rush to gold rush. Whether it's complete bullshit or somewhat useful doesn't really matter to them.
In fact I would make the converse statement to yours: you can be certain a product is a grift if the slightest criticism or skepticism of it is treated as a "vicious attack" and shouted down.
Did you even click the link? It's a rant I would get banned for repeating here. Even the title here says "nuclear".
So yes. Vicious.
Your problem is actually with my point, which you didn't really address; instead you resort to petty remarks that try to discredit what's being said.
Yep. I hear that "vicious attack" phrase from plenty of people with narcissistic personality disorders in the tech industry, in an attempt to shift the narrative. It's sick, really.
It's the entrenchment of a particular kind of parasitic elite.
The logic that made them into "elites" has turned in on itself and is now self-cannibalizing.
The saving grace is only the capacity for the American people to see through this, but with the derangement of information pathways we're increasingly at the behest of these people and their narratives that only serve their aggrandizement.
All the talk about "saving the West" or "individualism" or some other talk of spirit that these preachers sermonize about serves only themselves and no one else.
"Calling out evil" is another one of those victims to their self-serving motivations. Along with "climate change", "environmentalism", "democracy", "freedom", or a whole host of otherwise genuinely noble causes.
Really? Anthropic is /the/ AI company known for anthropomorphizing their models, giving them ethics and “souls”, considering their existential crises, etc.
Anthropic was founded by a group of 7 former OpenAI employees who left over differences in opinions about AI Safety. I do not see any public documentation that the specific difference in opinion was that that group thought that OpenAI was too focused on scaling and that there needed to be a purely safety-focused org that still scaled, though that is my impression based on conversations I've had.
But regardless, anthropic reasoning was very much in the intellectual water supply of the Anthropic founders, and they explicitly were not aiming to produce a human-like model.
No. UPI. It's an initiative by the Indian government.
It's controlled by the RBI, just through a complex public-private corporate structure through NPCI.
UPI is much larger and more international than PIX. It's currently processing iirc something like 200 billion transactions. UPI is also used in several countries, France being among the most recent examples.
As such UPI has a broader scope than PIX and requires a public-private corporate structure with stakeholders from both sides.
But this is off topic. The Indian government's competence to, at the very minimum, partner with industry shows that such software preloaded on phones is a threat to people's civil liberties, one the State shouldn't be encroaching on. This is a violation of individual privacy.
By "strong" I assumed that the GP meant provocative, and that's the sort of linkbait that the site guidelines call for submitters (or moderators) to dampen - see https://news.ycombinator.com/newsguidelines.html.
I used the word "weaker" as a sort of play on GP's "strong" just for fun.
> I don't know how you even come to that kind of conclusion at all actually.
Because most products, including iOS/macOS now, have glaring annoyances or shortcomings that have gone unfixed for a long time.
If Tim Cook or even Craig Federighi etc. actually used iOS/macOS in their day to day lives, they would have run into those issues sooner or later and they'd be fixed in a day.
Plenty of CEOs do. The comment you replied to already questioned Tim Cook's usage of Apple products.
Most Apple executives are probably using a Mac. Most engineers at Apple probably code on a Mac. Most engineers in the Bay already use Macs and have been using them for many years.
Such a silly comment. Is your theory that everyone with any decision making authority at Apple doesn't actually use the product? Even when it comes to "glaring annoyances or shortcomings"?
So odd of you to frame this as some sort of personal outrage. Like I'm so annoyed by this "glaring issue" on my device clearly the people working on this don't even use it or "it would be fixed in a day". Lol. Maybe people who actually have to get things done at a trillion dollar company don't have the same constraints as you, or relatedly, the luxury to obsess over your so-called glaring issues.
It’s not a silly comment, both macOS and iOS have been decaying into dog shit over the years from obvious bugs that anyone who uses the apps and features being sold would run into very quickly.
Tim and other executives might be using their devices as email machines, but it’s not obvious they’re using everything they’re quite literally selling us.
2: The Music app is barely functional, and will regularly fail to play music. Here it is bugging out, and stacking multiple album covers https://imgur.com/a/Sg8oU1p
> So odd of you to frame this as some sort of personal outrage.
Hey you try waiting 5+ years on a bug report/feature request for a simple thing. Or things like a rendering bug that survives all year throughout beta into the X.1 release (see the Tahoe Contacts app)
You'd give up. This "outrage" is all the outlet we have left. Shame the system that lets such crap get through!
Yah... It's not as if the healthcare/pharma industry has ever run false multi-year propaganda campaigns that later turned out to be outright harmful to people.
They'd never lie and conspire for years and years. That couldn't possibly happen.
I would point out that the anti-vaxx campaign about vaccines causing autism is a multi-decade propaganda campaign that absolutely harms people.
However, since that is merely a tu quoque fallacy, I'm rather curious: do you have any examples of said campaigns run by the healthcare/pharma industry? And, more importantly, do you have any evidence such campaigns have anything to do with vaccines?
Note: the Purdue/Sackler campaign surrounding opioids is already well-known, but AFAICT it has no relationship with vaccines.
Purdue is one pharmaceutical company. Given their behavior, I find no fault with anyone who'd distrust products from Purdue specifically. However, it appears Purdue doesn't produce any vaccines, so it is orthogonal to the discussion.
> You think the opioid campaign is the only real wrongdoing by pharmaceutical companies??
I'm open to the possibility of there being additional wrongdoing by Purdue or other pharmaceutical companies, perhaps even related to vaccines. However, the fact that one pharmaceutical company engaged in (admittedly pretty egregious) wrongdoing with respect to opioids does not itself prove any wrongdoing regarding vaccines made by itself or other companies. Assuming otherwise is falling victim to a syllogistic fallacy.
Answering my call for evidence of wrongdoing specific to vaccines with such a conspiratorial-minded question suggests you have no such evidence. I implore you to prove me incorrect.
I'm talking about perceived industry-wide reputational damage among the public as a cause for distrust.
Who cares about fallacies?
Beliefs triumph over logic. Public perception > truth.
Further reputational damage is not unwarranted. Thalidomide is an old example; there are many more recent ones outside of opioids. You're free to look up actual court cases.
A lava lamp that just produces randomness, e.g. for cryptographic purposes, is different from the benefit here, which is producing specific randomness at low energy cost.
One can build a true random number generator by plugging a moving computer mouse into its input.
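As a rough illustration of the mouse-entropy idea, not a cryptographically sound design: the sketch below hashes the noisy low bits of many high-resolution clock readings into random bytes. The clock readings stand in for real mouse-event timings, which would need an OS-level hook to capture; all function names here are made up for the example.

```python
import hashlib
import time

def collect_jitter_samples(n=256):
    """Collect n timing samples; the low byte of a high-resolution
    clock stands in for mouse-movement timing jitter."""
    samples = []
    for _ in range(n):
        t = time.perf_counter_ns()
        samples.append(t & 0xFF)  # keep only the noisy low byte
    return bytes(samples)

def random_bytes_from_jitter(n_out=32):
    """Condense the raw samples through SHA-256 so the unpredictability
    is spread evenly across the output bytes."""
    raw = collect_jitter_samples()
    return hashlib.sha256(raw).digest()[:n_out]

print(random_bytes_from_jitter().hex())
```

The hashing step matters: raw timing samples are biased and correlated, so they are condensed through a hash rather than used directly, the same principle real entropy-gathering designs follow.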
It would be easy to put a dozen cages with mouse wheels in them, real mammals inside, to generate a lot of random numbers. Everyone would understand it, so it would only be funny; they want mysterious!