Not maybe; I owned a 2009 MBP. Everyone I knew with a MacBook from that period had the same issue: the light was absurdly bright, and you couldn't keep the machine anywhere near a bedroom without putting very thick tape over it.
It was a poorly thought-out design that put aesthetics over ergonomics.
Nope. Actually, I remember I had that model first, and I still don't care. It was simply the least annoying light compared to other bright colored LEDs in a room. It doesn't come close to the Liquid Glass chaos.
Loved the battery level indicators on old MacBooks too. They kind of brought that back with the LED on MagSafe, except the new LED is more annoying.
> I started making deliberate grammar and spelling mistakes in professional context.
I've also noticed an increase of this in myself and others. I used to edit a lot more before sending anything, but now it seems more authentic to just hit send, so it's more off the cuff, typos, broken sentences and all.
I'm sure an LLM could easily mimic this, but it's not their default.
> the number of bus stops might matter at the margins, we’re not talking about a system where marginal improvements will matter
The central argument for reducing stops is increasing bus speed, not reducing margins; it's in the second paragraph.
[edit]
The top comment is a straw man, and an attempt to correct course gets downvoted... I'm not sure how much value HN has left for useful discourse. Who the fuck are you people, if you even are people?
You're being downvoted because you misunderstood the post you're replying to. They aren't referring to profit margins, but marginal utility—i.e. incremental improvements to stop spacing (purportedly) would not be enough to fix a fundamentally broken system.
I have the same fears. Last year they publicly stated they are not interested in acquisition [0]:
> Pennarun confirmed the company had been approached by potential acquirers, but told BetaKit that the company intends to grow as a private company and work towards an initial public offering (IPO).
> “Tailscale intends to remain independent and we are on a likely IPO track, although any IPO is several years out,” Pennarun said. “Meanwhile, we have an extremely efficient business model, rapid revenue acceleration, and a long runway that allows us to become profitable when needed, which means we can weather all kinds of economic storms.”
Nothing is set in stone; after all, it's VC-backed. I have a strong aversion to becoming dependent on proprietary services, but I've chosen to integrate TS into my infrastructure because the value and simplicity it provides are worth it. I considered the various copycat services and pure FOSS clones, but TS is the one that started this space and is the one continuously innovating in it. I'm on board with their ethos and mission, and I have made use of apenwarr's previous work. In other words, they are the experts, and they appear to be pretty dedicated to this space, so I'm putting my trust in them... I hope I'm right!
Just note, I doubt Tailscale was the first popular VPN manager; I remember many hobby users are ZeroTier converts, and there are also much older products like Hamachi.
Tailscale has built a great product around WireGuard (which is quite young), and they have great marketing and docs. But they are hardly the first VPN service; they might not even be the most popular one.
Yes, I ambiguously said "started this space"... and to be honest, even in the most generous interpretation that's probably incorrect. Maybe ZeroTier started "this space", in that it had NAT-busting mesh networking first.
As far as I understand, Tailscale brought NAT-busting mesh networking to WireGuard, added identity-first access control, and reduced configuration complexity. I think they were the first to approach it from an end-to-end user perspective, and each feature they add definitely has that spin on it. It makes the whole thing feel effortless and transparent (in both the networking sense and the cryptography sense)... So I suppose that's what I mean by "started": TS is when it first really clicked for a larger group of people; it felt right.
How about inverting the issue: highlight posts with an opt-in label, e.g.
Show HN [NOAI]:
Since it's too controversial to ban LLM posts, and it would be too easy for submitters to omit an [LLM] label... having an opt-in [NOAI] label lets people highlight their posts, and LLM posts misusing it would be easy to flag, which disincentivises polluting the label.
This wouldn't necessarily need to be a technical change, just an intuitive agreement that posts containing LLM or vibe-coded content are not allowed to lie by using the tag, or will be flagged... Then again, it could also be used to elevate their rank above other Show HN content, to give us humanoids some edge if deemed necessary, or to build a segregated [NOAI] page.
[edit]
The label might need more thought. Although "NOAI" is short and intelligible, it might seem a bit ironic to have to add a tag containing "AI" to your title. [HUMAN]?
I'm 90% sure this will end in endless squabbles over whether the label is correct or incorrect, rather than actual conversations about the project the person is showing. That already happens without labels; it feels like enforcing this one would only increase the frequency.
Is the problem that the app was written with AI assistance, or that it's low-effort/bad? I don't care if you used Claude to fix a bug or something if you have a cool app, but I do care if you vibe coded something I could've vibe coded in an hour. That's boring.
Feels like effort needs to be the barrier (which unfortunately requires human review), not "AI or not". In lieu of that, a 100-karma or minimum account age requirement to post a Show HN might be a dumb way to do it (it gives you enough time to have read other people's submissions so you understand the vibe).
This study is measuring the wrong thing. Any diet that restricts calories will cause weight loss; that's just physics, not biology. As long as the person strictly sticks to the diet, it will work.
Strategies like intermittent fasting, or diets that moderate what you eat rather than how much, are focused on the latter aspect: "strictly sticking to that diet". Being strict is not sustainable; willpower is limited and inconsistent, so spending it on strategies that are hard to stick to is futile. Changing what and when you eat accounts for biology instead of just physics, because those variables have a huge impact on satiety.
The study has a minimum interval of 4 weeks, which does not take much willpower. Not to mention the psychological impact of being part of a study.
Like magic pixie dust, nobody knows in detail how AI models work. They are not explicitly created like GOFAI or arbitrary software. The machine learning algorithms are explicitly written by humans, but the model in turn is "written" by a machine learning algorithm, in the form of billions of neural network weights.
I think we do know how they work, no? We give a model some input, it travels through the big neural net of probabilities (obtained through training), and then arrives at a result.
Sure, you don't know upfront what the exact constellation of a trained model will be. But similarly, you don't know what, e.g., the average age of some group of people is until you compute it.
If it solves a problem, we generally don't know how it did it. We can't just look at its billions of weights and read what they did. They are incomprehensible to us. This is very different from GOFAI, which is just a piece of software whose code can be read and understood.
The number can be anything; is there a number at which "we don't know" starts?
The model's parameters are in your RAM; you insert the prompt, it runs through the model, and it gives you a result. I'm sure if you spent a bit of time, you could add some software scaffolding around the process to show you each step of the way. How is this different from a statistical model, where you "do know"?
For just a few parameters, you can understand the model, because you can hold it in your mind. But for machine learning models that's not possible, as they are far more complex.
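To make that contrast concrete, here's a minimal sketch (hypothetical toy numbers, stdlib only): a two-parameter model you can read directly and hold in your mind, next to a randomly initialized many-parameter net whose weights you can print in full without learning anything about what the function does.

```python
import random

# Two parameters: fit y = w*x + b to the points (1, 3) and (2, 5).
# You can read the model directly: "double the input, then add one".
w, b = 2.0, 1.0
assert w * 1 + b == 3 and w * 2 + b == 5

# Many parameters: three layers of random 64x64 weights. Every number is
# in RAM and printable, but listing them explains nothing about behavior.
layers = [[[random.gauss(0, 1) for _ in range(64)] for _ in range(64)]
          for _ in range(3)]
n_params = sum(len(row) for layer in layers for row in layer)
print(n_params)  # 12288 inspectable weights, none of which "says" anything
```

The scaffolding the other commenter describes is easy to build; the hard part is that "seeing each step" for 12,288 (or billions of) weights yields numbers, not explanations.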
May I point out that we don't know in detail how most code runs? I'm not talking about assembly; I'm talking about edge cases, instabilities, etc. We know the happy path and a bit around it. All complex systems based on code are unpredictable from static code alone.
We know at least reasonably well how code runs if we read it. But we know almost nothing about how a specific AI model works. Looking at the weights is pointless; it's like looking into Beethoven's brain to figure out how he came up with the Moonlight Sonata.
When we built nuclear power plants, we had no idea what really mattered for safety or maintenance, or even what day-to-day operations would be like, and we discovered a lot of things as we ran them (which is why we have been able to extend their lifetimes well beyond what they were planned for).
Same for airplanes: there's tons of empirical knowledge about them, and people are still trying to build better models of why the things that work do work the way they do (a former roommate of mine did a PhD on modeling combustion in jet engines, and she told me how many of the details were unknown, despite the technology having been widely used for the past 70 years).
By the way, this is the fundamental reason why waterfall often fails: we generally don't understand enough about something before we build it and use it extensively.
Horses and cars had a clearly defined, tangible, measurable purpose: transport. They were 100% comparable as market goods, so predicting an inflection point was very reasonable. Same with chess: a clearly defined problem in a finite space with a binary, measurable outcome. Funny how chess AI replacing humans in general was never considered a serious possibility by most.
Now LLMs, what is their purpose? What is the purpose of a human?
I'm not denying that some legitimate yet tedious human tasks amount to regurgitating text... and a fuzzy text predictor can do a fairly good job of that at lower cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting; not a coincidence).
They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been four years now; we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is far off the mark.
> Road upkeep is from general taxation. Road tax was abolished in 1937
I was skeptical of this being true since fuel duty is notoriously high in the UK, so I did a quick fact check.
Based on the change in 1937, you are "technically" correct, in that none of the motoring taxes have been ring-fenced for road funds since 1937.
However, the opposite of what you are implying is true... income from fuel duty alone is generally around three times larger than all road maintenance spending (a fairly steady +£25bn/yr [0] vs -£8bn/yr [1] over the last decade).
In other words, although it all officially goes into one big tax pot, motoring taxes pay for road network expenditure more than three times over.
This is why they are introducing the per-mile EV tax: fuel duty provided a tax proportional to road use, but EVs skip it, and electricity can't easily be taxed for road use specifically.
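A quick back-of-envelope check of that ratio, using the rough decade-average figures cited above (approximations, not official statistics, and counting fuel duty only, not VED or other motoring taxes):

```python
# Approximate UK decade averages cited above (fuel duty only)
fuel_duty_per_year = 25e9     # ~ £25bn/yr fuel duty receipts
road_spend_per_year = 8e9     # ~ £8bn/yr total road maintenance spending

ratio = fuel_duty_per_year / road_spend_per_year
print(f"fuel duty covers road spending {ratio:.1f}x over")  # → 3.1x
```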
TLDR, UK road users pay for far more than the road network.
> TLDR, UK road users pay for far more than the road network.
Right, but driving has far more externalities than just the cost of the roads. For example:
> Results suggest that each kilometer driven by car incurs an external cost of €0.11, while cycling and walking represent benefits of €0.18 and €0.37 per kilometer. Extrapolated to the total number of passenger kilometers driven, cycled or walked in the European Union, the cost of automobility is about €500 billion per year. Due to positive health effects, cycling is an external benefit worth €24 billion per year and walking €66 billion per year.