> The injunction doesn't cut off all contact between the Biden administration and social media companies. Doughty's ruling said the government may continue to inform social networks about posts involving criminal activity or criminal conspiracies, national security threats, extortion, criminal efforts to suppress voting, illegal campaign contributions, cyberattacks against election infrastructure, foreign attempts to influence elections, threats to public safety and security, and posts intending to mislead voters about voting requirements and procedures.
This sounds like the injunction does literally nothing. If there were a weird political conspiracy between the White House and social media companies, they could easily couch their complaints as one of these issues. This really reads like a judge trying to make a ruling so broad it can't be overturned, for the press.
Yet they canceled their weekly meetings. From that you can extrapolate that whatever they were talking about did not fall into these categories. Most of the disinformation I’ve seen comes from the government. That they are “policing” social media by removing objective truth that might be “misinterpreted” tells you all you need to know.
I feel like them cancelling their meetings is a reasonable response no matter the content. Clearly there's some amount of attention going towards these meetings, and they have a literal court case directly policing what can be discussed. If I were some rando working for Facebook doing communications, I'd cancel that shit until my boss told me what to do. Conspiracy or not, I don't think the cancellation can be used as a strong signal.
Much more likely they'll resume meetings but with transcripts vetted by lawyers
It's interesting to think about whether the grift of book sales and random convention talks can equal the ~$30 billion of material value wiped out by Twitter.
Somewhat reminds me of Andrew Wakefield who went from being a licensed research MD to a guest of honor for "Conspira-Sea", a novelty cruise. Just a complete loss of authenticity and meaningful career prospects.
Writing extremely high-quality software is hard and expensive. In the vast majority of cases, all you need is okay software developed by people who just want to get the job done. Understanding how software creates value, what kind of software is valuable at any given moment, and which parts of your job are tied to that process makes this much clearer.
If anyone reading this just wants to write bash scripts that don't violate fundamental principles, there are jobs out there, but they're typically low-paying and academic.
> Understanding how software creates value, what kind of software is valuable at any given moment, and which parts of your job are tied to that process makes this much clearer.
This is a good point, and something that indeed requires a lot of care, thought, and experience to tackle properly, but I think it strongly hinges on the assumption that software just does a thing, so to speak. I personally view software as something that does a thing, but also, at the same time, as a communication system and an encoding of domain knowledge. So if the communication and encoding parts of the software are missing, and all you've got is it doing something, then in my view the software is leaving value on the table over the long term. Poorly tested, understood, documented, and specified software produces waste over time, siphoning value away. It does so much more quietly, but in no less magnitude, than software that is not doing what it is supposed to be doing.
You are correct, but current geopolitical conditions and economic policy have created financial situations that basically enforce short-sightedness on the part of corporations. If your employer is publicly traded or has institutional investors, it needs to report quarterly earnings. Investors won't care about code health or the longevity of a product; they only care about sales and revenue.
It's a fundamental systemic issue, which is why we see so many tech companies produce terrible products. But we (the programmers) have no ability to fix the short-sightedness of the American financial system.
Do you have any examples from your experience about making that distinction? Sometimes I struggle with the boundary between good maintainable code and meeting deadlines.
Not sure if you were responding to the right comment, but yeah, this is a pretty major argument for why unchecked private centralization is very dangerous. The main solutions to this contradiction are either keeping healthy competition between firms so there's a rich ecosystem of alternatives, or placing everything in a centralized location controlled by the government where things like access are intensely regulated (i.e., why every subway and post office is ADA accessible).
They could easily reduce those response times through increased funding, training, and recruitment though... this is just a band-aid on a fundamentally dysfunctional public service in need of more state support.
Most problems can be solved with increased funding, training, and recruitment. But there are just not enough people. Yes, we could surely employ enough emergency call responders, but in many, many governmental institutions you find a lack of employees, and our society simply cannot afford to employ enough bureaucrats, teachers, and emergency call responders. These issues will simply never be resolved via political means, because they are an instance of plain economic scarcity. You can always say that X can just be fixed with more funding, but you cannot do that for every single one of these subjects without ballooning government expenses 3x.
Technological advances have a chance to address the core scarcity and therefore actually solve the issue. If we can double the total labor of people employed by the government in such roles, then we have just created a massive gain for society.
I'm not sure this level of analysis is appropriate for the context here. We have extremely specific objectives with metrics (call wait times and calls correctly resolved). There's nothing fundamentally scarce about the labor involved in responding to 112 calls, it seems like a job pretty poorly suited to automation, and we as a group decide how to allocate our resources. If we want to cut tax rates at the expense of ambulances getting to people with heart attacks, that's the right of the Portuguese population; but let's not pretend that this is any kind of revolution in business theory.
It's going to be really disturbing if the deteriorating real-world conditions caused by these weird technocratic decisions lead to the government being replaced. Revolution always leads to a lot of suffering; not sure why the current crop of people in charge can't just do their jobs, the stakes are pretty damn high.
I mean, it has a bit of weight when a person who won a Nobel Prize in meteorology says it. At least it's worth giving the sky a glance once in a while to make sure you're safe.
If only climate scientists were given .01% of the credence these buffoons get.
edit: Henry Kissinger has a Nobel Peace Prize. If the Nobel committee ever corrects that error and makes the world safe for political satire again, I might start giving a shit who has a medal.
I think it's hard to get people to give credence to experts whose claims can be demonstrated to be outlandish. E.g. James Anderson, famously known for helping to discover and mitigate the Antarctic ozone holes in the late 20th century, said in 2018 that the chance there will be any permanent ice left in the Arctic by 2022 is "essentially zero"[0].
Yet a NASA site reports that in September 2022 (when the most recent measurement was taken), the Arctic sea ice minimum extent was ~4.67 million square kilometers. [1]
To be very explicit: I'm not saying that climate change doesn't exist. I'm not saying that Arctic sea ice is not diminishing (the NASA site says it's diminishing at ~12% per decade). I'm not saying that the Nobel prize is a good indicator of expertise.
I'm saying specifically that I believe it's more difficult to convince people to trust a source making claims of negative consequences when those consequences are less bad than the source says.
An analogy I might use is drugs (specifically in the US). I've heard a few people, who went through an anti-drug education program forced on them in their adolescence by parents/teachers, mention that marijuana was portrayed as just as bad as other, harder drugs. Then, when they went on in high school and college to smoke weed and discovered that they did not ruin their lives by getting stoned a few times a week, or even every day, they subsequently gave less credence to what the anti-drug advocates were saying.
So basically, the original article is about an AI huckster amping the FUD in order to push through some sort of corporate control of the technology, using fear of the bullshit they're spinning as the justification.
I brought in the analogy of Chicken Little, inflating the scope of a threat to one of apocalyptic proportions, which is exactly what is taking place here.
The first person to respond to me brought in the climate analogy, presumably as a means of getting me to think that maybe I'm the fool here by ignoring the real scientist who, hey, has a Nobel Prize! Or at least, the theoretical meteorologist in his metaphor does, and therefore maybe I, with no Nobel Prize, should just be respectful of the expert here.
I responded by pointing out that the Nobel committee are morons who gave a Peace Prize to one of the worst war criminals of the 20th century, and pressed the fact that we have thirty-plus years of scientific consensus about climate change, along with a lot of corporate-funded think tank noise that is running ideological interference, successfully so far. But you can only fool people for so long; it caught up to the tobacco industry and it will catch up to us.
The idiots amping up the FUD to seize control and the assholes pumping money into think tanks that generate endless climate denialist noise are the same people.
> If only climate scientists were given .01% of the credence these buffoons get.
Climate scientists are extrapolating complex models of a system that's literally planet-sized, with lot of chaotic elements, and where it takes decades before you can distinguish real changes from noise. And, while their extrapolations are plausible, many of them are highly sensitive to parameters we only have approximate measurements for. Finally, the threat itself is quite abstract, multifaceted, and set to play out over decades - as are any mitigation methods proposed.
The "buffoons", on the other hand, are extrapolating from well-established mathematical and CS theorems, using clear logic and common sense, both of which point to the same conclusions. Moreover, the last few years - and especially last few months - provide ample and direct evidence their overall extrapolations are on point. The threat itself is rather easy to imagine, even if through anthropomorphism, and set to play out near-instantly. There are no known workable solutions.
It's not hard to see why the latter group has it easier - at least now. A year ago, it's them who got .01% of the credence climate people got. But now, they have a proof of concept, and one that everyone can play with, for free, to see that it's real.
The buffoons are definitely talking about a scary beast that's easy to imagine, because we've been watching it in Terminator movies for forty years. But this is not that beast.
The beast here is humanity, and capitalism, by which I mean, the idea that you can collect money without working and that that is at all ethically permissible. The threat of AI is what is happening with kids' books on the Kindle platform, where a deluge of ChatGPT-generated kids' books are gaming the algorithm and filling kids' heads with the inane doggerel that this thing spits out and which people seem to believe passes for "writing".
And people keep saying how amazing the writing is. Show me some writing by an AI that a kindergartener couldn't do better. What they do is not writing, it's a simulacrum of the form of a story but there is nothing in it that constitutes art, just an assemblage of plagiarized structures and sequences. A mad-lib.
Everyone is freaking out, and the people who should be calming folks down and pushing for a rational distribution of this new tool which will be extremely useful for some things, eventually, are abdicating their responsibility in hopes of lots of money in their bank account.
When silent movies came out, there were people who freaked out and couldn't handle seeing pictures move, even though the pictures weren't actually moving. It was an illusion of movement. This is an illusion of AI; it's just a parlor trick, like a Victorian séance where your grandpa banged on the table. Scary, because they set the whole scenario up so you would only look at the stuff they wanted you to see. We still spend months assembling a single shot of a movie, and even if AI starts doing some of that work, all that work still has to happen; the pictures still don't move. A hundred years from now, what you're all freaking out about still won't be intelligent.
I do agree this is a world-changing technology, but not in the way they're telling you it is, and the only body I see which is approaching this with even a shred of rational thinking is the EU parliament. The danger is what people will do with it, the fact is it's out and it's not going back in the bottle.
We don't solve this by building a moat around a private corporation and attempting to pitchfork all the AI into the castle. Using this technology requires two things: a bit of Python, and a LOT of compute capacity. The first is the actual hard part. The second is in theory easier for a capitalist to muster, but we can get it in other ways, without handing control of our society to private equity. It's time we get straight on that.
The AI apocalypse only happens if we cling to capitalism as the organizing principle of our society. What this is definitely going to kill is capitalism, because capitalists are already using it to take huge bites of the meat on our limbs. Ever seen a baboon eat lunch? That's us right now, the baboon's lunch. As long as we tolerate this idea that people who have money should be able to do whatever they want, yes, AI will kill us (edit: because it works for free, however absurdly badly).
How many submarines, how many Martin Shkrelis, before we recognize the real threat?
Yah, being cynical about a giant corporation inflating the scope of a new parlor trick in an attempt to establish a legal moat is exactly the same as ignoring over thirty years of scientific consensus against a torrent of tobacco-industry-style denialism to keep the line going up.
A giant corporation? You know that Hinton doesn't work for Google, and Bengio, the most cited computer scientist of all time, is saying the same thing?
Too lazy? There are over 100 CS professors and scientists
Plus neither of the CEO's of Microsoft or Google are on there
It's the corporate camp, companies and investors, that are gung-ho about pushing capabilities immediately because there's big $$ in their eyes. You're the one falling for the safety denialism pushed by corporate interests, a la tobacco
> A giant corporation? You know that Hinton doesn't work for Google
U of T is a giant corporation in its own right.
> Too lazy? There are over 100 CS professors and scientists
And also Grimes. But what do these particular experts really know about humans and what vulnerabilities they have? This isn't a computer science problem. Being an expert in something doesn't make you an expert in everything.
So what's the plan for putting it back in the bottle? llama is already out there, the chatbots are already out there.
I think the solution is that government should standup a bunch of compute farms and give all citizens equal access to the pool, and the FOSS community should develop all tools for it, out in the open where everyone can see.
There isn't a feasible plan; we're at the "sounding the alarm" part. Unfortunately, we're still there because most people don't even acknowledge the possible danger. We can't get to a feasible plan until people actually agree there's a danger. Climate change is one step past that: people agree there is a danger, but there's still no feasible plan.
However, your solution is first-day naivety about the problems machine intelligence poses to us. It's akin to saying everybody should have powerful mini-nukes so that they can defend themselves.
a) what is currently being touted as AI is neither artificial, nor is it intelligent. This is a bunch of hucksters saying "we made a scary demon with powers and now we're scared it's going to kill us all!" but in fact it's just a plagiarism machine, a stochastic parrot. Yes, it will get more useful as time goes on, but the main blockade is always going to be access to compute capacity, and the only viable solution to that is a socialist approach to all data processing infrastructure.
b) even if we stipulate that there is a scary daemon that could consume us all (and meanwhile teach me linear algebra and C++), and we transform that into pocket nukes as a more terrifying metaphor cause why not, your solution seems to be to pretend that your mini-nukes cannot be assembled from parts at hand by anyone who knows a bit of Python.
Andrew Ng doesn't agree but that is a boring story.
It seems to me the big names in AI research are having a moment that appeals to their vanity. Easy to get your name in the headlines by out-dooming the next guy.
There is also no downside to these predictions about AI eating us, since when they are totally wrong you can just counter that it hasn't eaten us, yet.
The fundamental cultural conditions of isolation and low meaning really haven't moved the needle much since the 90s (look at the writing of David Foster Wallace to see what I'm talking about).
The Internet has just exacerbated an already present issue of low empathy, reactionary politics, and get-rich-quick schemes.