why does ChatGPT passing tests or interviews continue to make headlines?
all they're proving is that tests, and interviews, are bullshit constructs that merely attempt to evaluate someone's ability to retain and regurgitate information
When I read this I feel people must be using it in the wrong way. I use it all the time to quickly solve tech problems I mostly know something about, and it's so smart that it regularly turns 1-2 hour problems into 10-minute ones for me. That is definitely not dumb from my perspective. Obviously it's also not smart in the sense that it will give me a profound understanding of something, but ok, whatever, it's still a massive productivity booster for many problems.
When you call it dumb, what do you mean? Can you give some examples?
Please don't give computational examples; we all already understand that it does inference and doesn't have floating-point computational capabilities or reasoning, yet so many people give such examples for some silly reason.
It's dumb in the sense that it doesn't actually have a symbolic understanding of what it's saying.
I use it quite frequently too, mostly for solving coding problems, but at the end of the day it's just regurgitating information that it read online.
If we took an adversarial approach and deliberately tried to feed it false information, it would have no way of knowing what's bullshit and what's legit, in the way that a human could figure out.
A lot of people who've never used ChatGPT make the mistake of thinking it has symbolic reasoning like a human does, because its language output reads like a human's too.
How many book mistakes have you found, as a human, so far? How about deliberate mistakes hidden in plain sight? I once re-validated the same test set three times and was still finding mistakes.
Agreed on getting excellent hints from it, and on shortening the time to figure things out. But e.g. just now it gave me an example algorithm that, only on a second look, turned out to be complete nonsense. You know that colleague who shines in the eyes of his managers, while his peers know that half of what he does is garbage.
Its ability to correct itself is very impressive when given the feedback. I imagine with the proper feedback loop it can advance very fast. E.g., when asked to write a piece of html markup, if it could "see" how the rendered layout is different from what was asked for, it could adjust its solution without human involvement. If it could run the deployment script and see where it fails, it could apply all the fixes itself until it works. If it could run the unit tests and see where its solution breaks the other parts of the system, it would need much less handholding.
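This kind of loop is easy to sketch. Below is a minimal, hypothetical Python version of the unit-test case: the model proposes a file, the test suite runs, and any failure output is fed back as the next prompt. To be clear, ask_model is a made-up stand-in for whatever chat-completion API you'd use, and writing the suggestion straight into a single target file is a simplifying assumption, not how a real system would apply changes.

    # Sketch of the feedback loop described above. Nothing here is a real
    # LLM API: ask_model is a hypothetical stub you would replace with your
    # own chat-completion client.
    import subprocess
    from pathlib import Path

    def ask_model(prompt: str) -> str:
        """Hypothetical LLM call; plug in your own client here."""
        raise NotImplementedError

    def run_tests() -> tuple[bool, str]:
        """Run the test suite and return (passed, combined output)."""
        proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def fix_until_green(task: str, target: Path, max_rounds: int = 5) -> bool:
        """Ask for code, run the tests, and feed failures back until green."""
        prompt = task
        for _ in range(max_rounds):
            target.write_text(ask_model(prompt))  # apply the model's suggestion
            passed, output = run_tests()
            if passed:
                return True
            # The failing test output is the "feedback" the model gets to see.
            prompt = (task + "\n\nYour last attempt failed these tests:\n"
                      + output + "\nPlease fix it.")
        return False

In practice you'd want sandboxing and a diff/patch format rather than whole-file rewrites, but the shape of the loop (generate, check against reality, feed the error back) is the same for rendered HTML and deployment scripts too.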
And this is the AI effect in practice. The original idea of the Turing test has been met by machine intelligence; we are past that point now.
The problem with people is that we keep pushing the bar to "only AGI/machine superintelligence is good enough". We are getting models that behave closer to human level: doesn't know everything, is good at some things, and bullshits pretty well. Yeah, that's the average person. Instead we raise the bar and say "well, it needs to be a domain expert in every field", and it absolutely terrifies me that it will get to that stage before humanity is ready for it. Trying to deal with it after the fact is not going to work well.
Sure, but in our present reality those tests and interviews are how we currently gatekeep upper middle class jobs, so it is at least of some practical interest.
Also, I think this is a bit overstated. Programmers (and smart people in general) like to think that their real job is high level system design or something, and that "mere regurgitation" is somehow the work of lesser craftsmen. When in reality what GPT shows is that high-dimensional regurgitation actually gets you a good fraction of the way down the road of understanding (or at least prediction). If there is a "buried lede" here it's that human intelligence is less impressive than we think.
While I agree with this sentiment, I think we should be careful assuming that our jobs as knowledge workers are much more than “retaining and regurgitating information”. Even the emotional intelligence and organizational strategy portions of what we do may boil down to this as well.
> all they're proving is that tests, and interviews, are bullshit constructs that merely attempt to evaluate someone's ability to retain and regurgitate information
No, all they are proving is that either tests are bullshit constructs, or ChatGPT is human-level.
The ability to retrieve information and then synthesize it into an answer tailored to the question is completely new. The applications of this go far beyond passing an interview; it's a fundamental capability of humans that gets used every day at work.
It's probably not slave labor. It may be really poorly paid labor, but if you had slave labor you'd probably use it for something profitable like construction, as they do in the Persian Gulf countries, instead of solving captchas that people pay $3 per 1000 for.
nice work to whoever archived it in time... the significance of the internet archive is going to grow and grow as the bigots, sleazeballs, and grifters continue to multiply
and yes, i was playing league of legends while typing this monumental reply
I also think it's kind of pointless as anything other than a demonstration of the speech synthesis (which is getting pretty good). The Feynman character especially shows that while these models can pick up a bunch of the details about the surface style, they certainly can't be clever or insightful, which is the appeal of the people being imitated.
With respect to using the voices or faces of dead people, I do think it's interesting that the estates of the deceased can have some control over commercial uses of personal images. The estate of MLK I think still controls use of the whole audio from some of his famous speeches. But my understanding is we haven't yet settled whether synthetic / derived artifacts which rely heavily on existing photos or recordings should themselves be controlled in the same way.
This is such an insane worldview. Lots of things are technically possible but socially or legally discouraged. You could infringe copyright. You could steal. You could walk around naked. It's not whining to say that something is outside of social norms.
When it's trivially easy to break the law and there are no obvious victims, and it can be done on computers, those rules go out the window.
Almost everyone who shared music on Napster would have never shoplifted a CD from a record store. Almost everyone who shares TV episodes on Bittorrent would never have stolen a DVD from a store or climbed up a power pole to illegally tap into cable.
Your single counter example is "people do media piracy". I personally do torrent media, and I'm pretty clear eyed that I do it because there are no negative consequences for me. It has nothing to do with computers or victims. There's no enforcement. If ISPs started fining people or cutting off access for pirating media I would stop because those consequences are worse than the benefit I get.
Who benefits from making deepfakes, and how do we respond to them? By simply throwing our hands up in the air and upvoting you're encouraging this behaviour by normalizing it. If people recognize it as deception that violates consent it will be discouraged.
When it becomes trivial to 3d print gray goo (https://en.wikipedia.org/wiki/Gray_goo) I wonder if you'll sing the same tune to humanity's destruction or if you'd admit that there's a spectrum and there's some threshold past which a technology becomes condemnable. Because right now your statement basically implies there is no such threshold (since clearly the person you're replying to thinks podcast.ai is past it).
Imagine an enemy nation-state of [insert your country] decides to create a "recording" of a political candidate admitting to pedophilia.
Yes, those of us in tech would be skeptical. But 99.9% of people don't read technology or political news and don't know this is possible. They haven't been inoculated against it.
What would happen? You can't make that stigma go away.
One of Trump's only drops in polling during his first presidential campaign was due to an audio recording. In his 2023 campaign, any recording that comes out could easily be AI-generated.
If that doesn't terrify you, I don't think you've been paying enough attention to the tactics of modern authoritarian regimes.
It will likely eventually be possible to create bioweapons like https://en.wikipedia.org/wiki/Gray_goo, but we would still condemn someone for doing so. Your statement makes literally no sense.
> Arc, which has been in an invite-only beta for more than a year, is trying to rethink the whole browser UI
no one is ever going to dethrone chrome unless they start selling hardware filled with their crapware at a loss solely to suck ppl into the ecosystem and/or start giving away laptops as fast as possible for the same reasons
&
no one is ever going to compete with firefox or brave on UI alone, because the ppl that have the wherewithal to move away from the edge and chrome apps that are thrust upon them with every hardware purchase, care more about if your shit is secure and privacy respecting than if your 'tabs are on the left' :|
> he thinks browsers are due for a reinvention — and why he thinks a startup is the best place to do it
ah yes, a loss-leading category of software is best done at cash-strapped startups... wonder where the revenue will come from
Nobody thought anybody could dethrone IE when Firefox came around. I don't think the goal of Arc is to be the browser that everybody uses, so much to be a useful browser for a lot of power-users.
With the industry standardizing (for better or worse) on a handful of rendering engines, it's not a big stretch to imagine a world where you can choose between different browsers and still experience the same web as everybody else (as opposed to the days past where choosing a niche browser meant accepting a worse web).
I imagine they'll charge money. Mighty's $30/month is a bit out of my price range, but given that I spend ~70% of the workday in a browser, if one could improve my productivity the amount I'd pay for it isn't nothing.
I'd argue that IE6 got worse and didn't keep up with developments while Firefox got consistently better. Firefox's problem was "only" that it was really slow, and that's what Chrome solved.
Yet, I also think that a new UI doesn't really change the browser experience much. It must be a completely different UX.
Firefox came at a time when the web was really annoying to use, and had features to make it better. I don't think IE had even shipped a pop-up blocker at the time. But Firefox's extensibility also made things like ad blockers and Greasemonkey scripts possible, which was the only way to stay sane as a power user in that era. And tabs! That was a big one.
I think the web has gotten annoying again, in different ways, because of the thousand things that nag you on every website to click to close them or otherwise obstruct your reading (cookie banners, too). If a browser can solve that reliably, I'd switch to it.
I can only speak for myself, but I personally would switch to a fast, open-source browser if it came with a set of built-in and trusted features. Those include: ad blocking, privacy features, fleshed-out developer tools, better bookmark management, proper session management, etc.
Chrome/FF are fine and all, but I wouldn't mind trying out a browser that rethinks the way things are done, especially considering how many resources Chrome/FF currently eat up with the ~15 tabs I have open at any point.
exactly what he's getting... menchies in the press and general attention lol
aside from that, he might try to leverage this to appeal to MAGAs for some political play in the future too - the 'white lives matter' shirt shenanigans is further evidence of his desperation for attention & desire to appeal to a certain demographic
> then they'll have to raise taxes to offset the reduction in CIA and DEA revenues
Is this sarcasm?
Is the CIA & DEA even cash-flow positive?
I would imagine that by not needing to enforce cocaine prohibition, we would have a lower tax burden. Then you also get to levy an excise tax - like with weed, alcohol, & cigarettes - and bring in a ton of revenue.
I can't imagine a world in which shitty, dangerous drugs that empower cartels have fewer negative externalities on the world than legal alternatives.
The creation of this very large black market has created opportunities for clandestine funding sources for organizations that don't mind acting in extra-legal ways.
Yep, while they _can_ make some amount of money from fines it'll never come close to paying salaries to officers, covering the cost of prison, court, etc etc.
Yeah, cause the best thing our federal agencies can do is just let whatever drugs laced with whatever shit come through the borders and kill however many people all at once because of massive overdosing.
We control the supply for a reason. 50K people dying in a single weekend from spiked drugs isn't something the US can handle.
By relegating drugs to a black market filled with fast cash, they incentivize cutting drugs and other antisocial profit-seeking behavior that would be unacceptable in an open market. If heroin users could get measured, packaged, laboratory-grade product from Rite Aid, they wouldn't bother playing (car)fentanyl roulette. The reduction of costs for medical, prison, and social services alone is compelling, as is the elimination of a major financial support for organized crime groups. And the removal of non-job occupational options from people's choices should please employers, as it has the potential to depress wages and inflation (there are no solid statistics on the prevalence of drug dealing as a livelihood or side hustle, for obvious reasons, but simply observing the number of US adults who admit to recreational use in surveys shows that the number must be substantial).
> If heroin users could get measured, packaged, laboratory grade product from Rite Aid
Yeah exactly, this is particularly urgent with opiates. If people could just buy pure oxycodone pills at cost - stuff that's already manufactured - opiate addicts would be much much safer.
And as a casual user (once or twice a month), I'm extremely skeptical that this would lead to an explosion in addiction, even without any kind of legal safeguards like purchase limits.