"The MLP (multilayer perceptron) is a two-layer feed-forward network: project up to 64 dimensions, apply ReLU (zero out negatives), project back to 16"
Which starts to feel pretty owly indeed.
I think the whole thing could be expanded to cover some more of it in greater depth.
I think the big frustration I've had in learning modern ML is that the entire owl is just so complicated. A poor explainer reads like "black box is black boxing the other black box", completely undecipherable. A mediocre-to-above-average explanation will be like "(loosely introduced concept) is (doing something that sounds meaningful) to black box", which is a little better. However, when explanations start getting more accurate, you run into the sheer volume of concepts/data transforms taking place in a transformer, and there's too much information to be useful as a pedagogical device.
I tried to include tooltips in some places that go into more depth, but I understand there's a jump. I'm not sure what will be the best way to go about it tbh
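For readers who want the concrete version of the quoted MLP description, here is a minimal sketch. The dimensions (16 and 64) come from the quote; the weight matrices are random placeholders, since the original doesn't specify trained values, and real transformer MLPs also include bias terms and batched inputs omitted here:

```python
import numpy as np

# Widths from the quoted description: model width 16, hidden width 64.
d_model, d_hidden = 16, 64

rng = np.random.default_rng(0)
W_up = rng.standard_normal((d_model, d_hidden)) * 0.02    # project up: 16 -> 64
W_down = rng.standard_normal((d_hidden, d_model)) * 0.02  # project back: 64 -> 16

def mlp(x):
    h = x @ W_up          # (16,) -> (64,)
    h = np.maximum(h, 0)  # ReLU: zero out negatives
    return h @ W_down     # (64,) -> (16,)

x = rng.standard_normal(d_model)
y = mlp(x)
print(y.shape)  # (16,)
```

Three lines of math, yet as the thread notes, each one hides further questions (why widen, why ReLU, what the weights learn), which is exactly the pedagogical problem being described.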
eBikes are such a game changer. I do most of our family of four's grocery shopping with ours.
Because of the assist, I find myself more comfortable in a wider range of weather conditions:
* If it's hot, I use more assist and there's an instant cooling effect. Much better than climbing into a hot car.
* If it's cold, I dress up to be warm outside and if I start to warm up on the ride, I use more assist. I don't have to try and balance staying warm and not getting sweaty.
* Same thing if it's wet out: I can wear heavier waterproof gear and not get sweaty.
I see loads of those around my neighborhood, usually ferrying kids.
At the same time, I don't need to go 5 miles for groceries, so you might be picturing using a cargo bike in sparse suburbs. If your built environment is car centric then almost definitionally using any other mode of locomotion is going to be subpar.
You moved the goalposts, but that also brings up another problem in the US: land use that forcibly segregates different things - like making corner stores illegal in newer suburban developments.
We’ve moved the goalposts from “Food, beer, and cat litter would be too heavy for a bike.”
Also, my grocery stores are 0.7, 1.1, and 1.6 miles away, not that it matters. 5 miles is just not very much time at 20-28 mph. I think theft and weather/comfort are bigger obstacles to most people than distance.
I don’t know how often you’re buying cat litter, but carrying food and beer in a pannier on a pedal-powered bike is perfectly reasonable, let alone an ebike
I live about 3 miles out of town, fortunately directly on a rail trail. I ride my e-bike into town to get groceries weekly. I have saddlebags on the bike and I pull a kids' trailer with the seat folded down, and have never run out of room or had issues with weight. Sometimes I'll even get a few bags of water softener salt. I have a fat tire ebike (Aventon); it's pretty sturdy. I've got about 2k miles on the bike, and I'd guess half those are from grocery runs.
For years we’ve been grocery shopping with e-bikes and a burley flatpack trailer. The trailer can hold 50kg/100lbs and we used to live up a steep hill. No problem at all. If it fit on the trailer we could haul it back. 52V e-bikes limited to 25km/h.
You don't think a family of four buys 'food' ? I also get beer occasionally, although sometimes I get it from the corner store a few blocks away.
I do get kitty litter with the car on the occasional trip to Costco because I'm not set on using the bike for every last thing. Just that the eBike makes a lot of things a lot more convenient.
I can get three or four days of food for my family of four on my regular bike with no problem (I also have a cat). I live somewhere where I ride past half a dozen super markets on my regular commute, so stopping at the shop is no big inconvenience.
It's why I think "software engineer" is a misnomer. We don't have a license, we don't have an ethics code, we don't sign off on stuff. In other disciplines, an engineer could topple a project they feel is unsafe or against code, and be backed by their union if replaced. A software engineer just says yes if their stocks aren't vested, and will be replaced if not.
I just looked this up so this might not be fully accurate, but it seems most private sector "engineers" don't require a license. You only need a PE license when providing a service to the public. That is quite a narrow restriction on the title.
Might also be different between countries. But a bridge wouldn't be built without an engineer signing off on it being safe. A process change at a production facility would need an engineer to approve that the output will still follow specs, etc.
> Nowhere is it stated that it is a score out of 100.
It says it right on the homepage. Twice. Once for people, once for organisations. It’s right there in green: “BEST (SCORED OUT OF 100)”. And if you go into any of them, you see a score like N/100.
Found the methodology page, and it clarifies it goes from -100 to 100.
Also, it's probably tricky to find a Schelling point that a broad range of people can agree to.
* no military use
* no lethal use
* no use in support of law enforcement
* no use in support of immigration enforcement
* no use in mass surveillance
* no use in domestic mass surveillance (but mass surveillance of foreigners is OK)
* no use in domestic surveillance
* no use in surveillance
* require independent audits
* require court oversight
* require company to monitor use
* require company to monitor use and divulge it to employees
* some other form of human rights monitoring or auditing
* some other form of restriction on theaters/conflicts/targets
* company will permit some of these uses (not purport to forbid them by license, contract, or ToS) but not customize software to facilitate them
* company can unilaterally block inappropriate uses
* company can publicly disclose uses it thinks are inappropriate
* some other form of remedy
* government literally has to explain why some uses are necessary or appropriate to reassure people developing capabilities, and they have some kind of ongoing bargaining power to push back
It feels normal to me that a lot of people would want some of those things, but kind of unlikely that they would readily agree on exactly which ones.
I even think there's a different intuition about the baseline because one version is "nobody works on weapons except for people who specifically make a decision to work for an arms company because they have decided that's OK according to their moral views" (working on weapons is an abnormal, deliberate decision) and another version is "every company might sell every technology as part of a weapons system or military application, and a few people then object because they've decided that's not OK according to their moral views" (refusing to work on weapons is an abnormal, deliberate decision).

I imagine a fair number of people in computing fields effectively thought that the norm or default for their industry was the latter, because of the perception that there are "special" military contractors where people get security clearances and navigate military procurement processes, and most companies are not like that, so you were not working on any form of weapon unless you intentionally chose to do so. But, having just been to the Computer History Museum earlier this week, I also see that a lot of Silicon Valley companies have actually been making weapons systems for as long as there has been a Silicon Valley.
There is definitely a muddle on so many levels about signaling and agreeing on ethics in technology.
But as innovation slows globally, it is implementation, ethics, and ideology that will once again be the dominant metrics of progress, so there's a new window emerging to push for this social/moral change in technology once again.
So it's still critically important that we actively work towards finding a meaningful, socially contagious differentiator other than "ethical technologist" even if it's difficult; look at what OpenAI gets away with under that flimsy banner.
"Starting today I will be asking prominent members of the tech community to sign their name onto this. A code of conduct, authored by me, that pledges them to a universal ethos, which I created, that I call tech ethics or Tethics for short."
I think it would still be useful. Call me cynical, but gone are the days when the individual comp and benefits available to SWEs outweighed the benefits of collective bargaining.
This, honestly. Seeing all those billionaires on inauguration day lined up to kiss the ring was utterly pathetic. Like what is the fucking point of having billions of dollars if you're just going to be someone else's bitch. And for what? A couple more billion dollars. Oof
> I might sign up just to stay on top of a market change that I don’t have an employer paying me to learn.
This is the thing I hate most about AI. It is a huge shift in power towards big companies that have the capital to throw at it. And towards those few mega corporations that control the tech.
It's a big shift away from hobbyists, tinkerers and people exploring ideas on their own time.
I don't know if there is one. There are models people can run on fairly expensive hardware at home, but will those be 'good enough' compared to the heavier duty resources that a big, well-funded corporation can deploy?
Like... while the open source models are improving, does that converge with the fancy models with tons of money behind them?
My understanding of the economics of it - so far - is that "more computing resources leads to better results" and capital wins that game.
"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
Mankind is doing what it does best at scale: sprinting mindlessly into problematic scenarios because the species is fragmented and has arbitrarily established concepts of groups defined by region, race, ideology, etc.
Yeah, it's a nice gesture, but having watched Google handle the protests in recent years and their culture inching a step closer to Amazon, I do not foresee their leadership being swayed by employee resistance. They'll either quietly sign an agreement and discreetly implement it, or they will go scorched earth on their employees again.
If they're truly principled, and these are true red lines, given no other recourse, I would be impressed if Anthropic decided to shut down the company. Won't happen, but I would be smashing that F key if they did.
The other two definitely never would in a million years.
If I had decision input at Anthropic I'd be giving serious consideration to reincorporating in the EU or Japan, and also doubling or tripling my personal legal and security budget.
They’ll go after their bank accounts and their financing, in effect killing them outright, no matter where they’d be headquartered (other than China or Russia, that is). Also, the EU and Japan would not risk their nuclear umbrella protection in order to defend the interests of a US company that is fighting the US government, not in a million years.
France doesn't even have a nuclear triad in place, and last time they offered any big assurances at the international level Munich '38 happened and then June '40. Macron and the people running the French State are well aware of this, no matter their public statements.
I fail to see how the inane failures brought by a dysfunctional IIIrd Republic are in any way relevant to the post-WW2 world order where both nuclear weapons and the EU began to exist, and in which France has been extremely relevant multiple times over, both geopolitically and on operational theatres.
The importance of the land component of the triad is vastly overstated; it simply does not make sense at the landmass size of France, and only matters when your doctrine is USA-vs-USSR cold-war-era complete retaliatory annihilation anyway.
Well look at it this way. Europe wants to ramp up on defense given Putin and Trump's moves, so having a big AI company they can keep close probably fits into that.
Unless you are saying Europe is basically submissive to the US due to the nuclear situation.
Anthropic have a pretty progressive corporate governance structure, so there is a good argument that they will stay true to their principles. However, this will likely be the biggest test for how strong that governance structure is up to now.
There is one tiny problem in your assessment. That statement was written by the employees of Google and OpenAI, in solidarity with their counterparts at Anthropic. It doesn't really matter what Anthropic does. We're doomed! (cue the dramatic music!)
More like a nightmare. This isn't happening by accident. They aren't being opportunistic either. They're playing a game that they planned at least two decades ago. If the books they wrote and published openly aren't evidence enough, you can look at the Epstein files. Look past all the obvious horrific crimes in it, and you'll see the signs of their numerous interventions in society through large-scale social engineering, that got us to the dystopia we're in now.
I don't know why you're being downvoted. This letter is completely toothless, and what you're suggesting is literally the only thing that these people could do that would make a difference.
The other unions are also run by their members. And they had a constitution. It's just the truth that most people who join a union are trying to kick out minorities. And when the minorities band together and the majority bands together one of these bands is bigger than the other.
And people like to flag kill the truth but it was a union who got the Koreans deported and it was a union that made it so the Chinese couldn't get citizenship. These are facts and the guys who would be their victims haven't forgotten it. Obviously the majority would like to hide this inconvenient truth using the tool this site offers to do that, but it doesn't change the truth, and these people know it.
> You sound like an unhinged person if you in plain words describe what’s happening, but the Trump admin demanded Anthropic’s AI be able to kill things for it without human approval and also do mass surveillance.
> Anthropic said no, and now the admin is trying to destroy the company in retaliation.
I would not defend all of Google's decisions in the Trump era, but complying immediately with politicized name changes has always been the status quo. Even in healthy democracies, the precise names of geographic features can be extremely controversial, and no sane company wants to get in a debate with the Japanese government about the real names of various islands.