There are plenty of non-blockchain, non-NFT, non-online-gambling, non-adtech, non-fascist software jobs. In fact, the vast majority of software jobs are none of those things. You can refuse to work on all of them and not notice a meaningful difference in career opportunities.
If you refuse to work with AI, however, you're already significantly limiting your opportunities. And at the pace things are going, you're probably going to find yourself constrained to a small niche sooner rather than later.
If your argument is that there are more jobs that require morally dubious developments (stealing people's IP without licensing it, etc.) than jobs that don't, I don't think that's news.
There are always more shady jobs than ethically satisfying ones. There are increasingly more jobs in prediction markets and other sorts of gambling, and in adtech (Meta, Google). Moral compromise pays.
But if you really think about it and set limits on what is acceptable for you to work on (interesting new challenges, no morally dubious developments like stealing IP for ML training, etc.) then you simply don't have that FOMO of "I am sacrificing my career" when you screen those jobs out. Those jobs just don't exist for you.
Also, people who tag everybody like that as some sort of "anti-AI" tinfoil-hatters are making a straw-man argument. Most people with an informed opinion don't like the ways this tech is applied and rolled out: unsustainably, exploitatively towards ordinary people and the open-source ecosystem, amid confused hype, circular investment, etc. They do not object to the underlying tech on its own. Being vocally against these matters does not make one an unemployable pariah in the slightest, especially considering most jobs these days build on open source, and being anti license-violating LLMs is being pro sustainable open source.
> There are always more shady jobs than ethically satisfying ones. There are increasingly more jobs in prediction markets and other sorts of gambling, and in adtech (Meta, Google). Moral compromise pays.
I would say, this is not about the final product, but a way of creating a product. Akin to writing your code in TextPad vs. using VSCode. Imo, having a moral stance on AI-generated art is valid, but one on AI-generated code isn't, simply because I don't consider "code" to be "art".
I've been doing it for about 20 or so years at this point, throughout literally every stage of my life. Personally, I'd judge a person who is using AI to copy someone's art, but someone using AI to generate code gets a pass from me. That being said, a person who considers code "art" (I have friends like that, so I definitely get the argument!) would not agree with me.
> Most people with an informed opinion don't like the ways this tech is applied
Yeah, I'm not sure this tracks? I don't think LLMs are good/proficient tools for very specialized or ultra-hard tasks; however, for any boilerplate-coding-task-and-all-CRUD-stuff, they would speed up any senior engineer's task completion.
> I would say, this is not about the final product, but a way of creating a product.
It is the same logic as not wanting to use some blockchain/crypto-related platform to get paid. If you believe it is mostly used for crime, you don't want to use it to get paid, to avoid legitimizing a bad thing. Even if there's no doubt you will get paid and the end result is the same, you know you would be creating that side effect.
If some way of creating a product supports something bad (and simply using any LLM always entails helping train it and benefit the company running it), I can choose another way.
That is because your views appear to align with staunch progressives: rejecting conservative politics ("fascism"), AI, advertising, and gambling.
From my side the only thing I would be hesitant about is gambling. The rest is arguably not objectively bad but more personal or political opinion from your side.
There seems to be some confusion. I wouldn't call conservative politics as a whole fascist; that's your choice of words. I doubt that "anti-AI progressive" is a thing, either.
> The rest is arguably not objectively bad but more personal or political opinion from your side.
Nothing is objectively bad. Plenty of people argue that gambling should be legal, if anything, on the basis of personal freedom. All of this is a matter of personal choice.
(Incidentally, while you are putting people in buckets like that, note that one person very much can simultaneously be against gambling and drug legalization and be a pro-personal-freedom, open-source libertarian maximalist. Things are much more nuanced than “progressive” vs. “conservative”; whatever you put in those buckets is on you.)
It is just that, in my experience, political discussions online are very partisan. "Fascism" in relation to the current US government, combined with anti-AI sentiment, is almost always a sure indicator of a certain bucket of politics.
To play devil's advocate: all the people using AI are not significantly more productive on brownfield applications. If GP manages to find a Big Co (tech or non-tech) that doesn't particularly care about AI usage, just about delivering features, and where the bottleneck is not software dev (as is the case in the majority of old-school companies), he/she would be fine.
This does not fall under that “all discussion” because the topic is not the same.
If the title of the thread is “Microsoft restructuring” and I care about them becoming for-profit, I am not clicking on that thread; the headline is not interesting.
I come here pretty often and have not seen a proper discussion about them becoming for-profit in the past month. The last time it came up was a whole year ago, and back then it had only just been announced and was not confirmed.
Why is it relevant to the topic of becoming for-profit that someone recently discussed the Microsoft topic specifically? Unless the topic of becoming for-profit is intentionally being kept under wraps.
Bad: 1) of poor quality or a low standard, 2) not such as to be hoped for or desired, 3) failing to conform to standards of moral virtue or acceptable conduct.
(Oxford Dictionary of English.)
A broken tool is of poor quality and therefore can be called bad. If a broken tool accidentally causes an ethically good thing to happen by not functioning as designed, that does not make such a tool a good tool.
A mere tool like an LLM does not decide the ethics of good or bad and cannot be “taught” basic ethical behavior.
Examples of bad as in “morally dubious”:
— Using some tool for morally bad purposes (or profit from others using the tool for bad purposes).
— Knowingly creating/installing/deploying a broken or harmful tool for use in an important situation for personal benefit, for example making your company use some tool because you are invested in it, while ignoring that the tool is problematic.
— Creating/installing/deploying a tool knowing it causes harm to others (or refusing to even consider the harm to others), for example using other people’s work to create a tool that makes those same people lose their jobs.
Examples of bad as in “low quality”:
— A malfunctioning tool, for example a tool that is not supposed to access some data and yet accesses it anyway.
Examples of a combination of both versions of bad:
— A low quality tool that accesses data it isn’t supposed to access, which was built using other people’s work with the foreseeable end result of those people losing their jobs (so that their former employers pay the company that built that tool instead).
That’s why everybody uses context to understand the exact meaning.
The context was “when would an AI agent doing something it’s not permitted to do ever not be bad”. Since we are talking about a tool and not a being capable of ethical evaluation, reasoning, and therefore morally good or bad actions, the only useful meaning of “bad” or “wrong” here is as in “broken” or “malfunctioning”, not as in “unethical”. After all, you wouldn’t talk about a gun’s trigger failing as being “morally good”.
An LLM is a tool. If the tool is not supposed to do something yet does it anyway, then the tool is broken. That is radically different from, say, a soldier not following an illegal order, because a soldier, being human, possesses free will and agency.
That is not true, and several people have already made the same mistake in this thread. What is done now is speculatively executing one path, not two or more paths in parallel.
True, it was incorrect for me to say they already do parallel execution. However, given that parallel execution is a special case of speculative execution, the security concern I meant to highlight still applies, doesn't it?
Put a sine wave emitter (or multiple) on the scene. Enable head tracking. Analyze the stereo sound at the output. Mute the output. There you go: you can now track the user’s head without direct access to gyroscope data.
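A minimal sketch of the estimation step in that thought experiment, assuming you could tap the rendered stereo buffers before muting them (the function names and the crude level-difference math below are purely hypothetical illustrations, not any actual API):

    // Hypothetical sketch: if a known sine is placed in a head-tracked
    // spatial-audio scene, the renderer attenuates/pans it per ear, so the
    // left/right level difference of the (muted) output hints at head yaw.
    func rms(_ samples: [Float]) -> Float {
        guard !samples.isEmpty else { return 0 }
        return (samples.reduce(0) { $0 + $1 * $1 } / Float(samples.count)).squareRoot()
    }

    // Very crude yaw estimate from the interaural level difference; real
    // spatial rendering (HRTFs, reverb) is far more involved, this only
    // illustrates the direction of the idea.
    func estimateYaw(left: [Float], right: [Float]) -> Float {
        let l = rms(left), r = rms(right)
        guard l + r > 0 else { return 0 }
        let balance = (r - l) / (l + r)        // -1 (hard left) ... +1 (hard right)
        return balance * (Float.pi / 2)        // map to roughly -90°...+90°, in radians
    }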
Apple does not secretly analyze sine waves to infer head motion. Instead, AirPods Pro/Max/gen-3 include actual IMUs (inertial measurement units), and iOS exposes their readings through Core Motion.
What you described is a known research technique called acoustic motion tracking (some labs use inaudible chirps to locate phones or headsets), but it’s not how AirPods head tracking works.
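For reference, the Core Motion route looks roughly like this (a minimal sketch; in a real app the manager must be retained, and the NSMotionUsageDescription entry in Info.plist is what triggers the motion permission prompt mentioned below):

    import CoreMotion

    // Minimal sketch of reading AirPods head pose through Core Motion.
    let manager = CMHeadphoneMotionManager()
    if manager.isDeviceMotionAvailable {
        manager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }
            // Head orientation in radians, straight from the headphone IMUs.
            print("pitch:", attitude.pitch, "yaw:", attitude.yaw, "roll:", attitude.roll)
        }
    }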
I think they're more so talking about measuring the attenuation that Apple applies for the "spatial audio" effect (after Apple does all of the fancy IMU tracking for you): using a known amplitude of signal in, plus the ability to programmatically monitor the signal out after the effect, you can reverse-engineer a crude estimated angle from the delta between the two.
I don't think that's how this app works, though; after installing it I got a permission prompt for motion tracking.
Since the author of the app mentioned reverse engineering, analyzing audio is a way that immediately came to mind. It should be quite precise, too, only at the expense of extra CPU cycles.
I did not imply that there is no API to get head tracking data (even though Google search overview straight up says that). It’s mostly a thought experiment. Kudos for digging up CMHeadphoneMotionManager.
> Apple does not secretly analyze sine waves to infer head motion.
Duh. The mechanism I described hinges on Apple being able to track head movements in the first place in order to convert that virtual 3D scene to stereo sound.
There are intangible kinds of property and assets that are more valuable than “real property”. Trade secrets are an obvious one, being a special case of intellectual property. It would be absolutely irrational for a legal system to classify theft of an iPhone as criminal and theft of IP as civil, when the former costs $700 and monetizing the latter could finance the creator’s entire life.
From a purely business and career perspective, being anti-blockchain/NFT/online gambling/adtech/fascism (at least for now, in the US)/etc. is a self-own, too.
I'm sure everybody making a choice against that knows it.
Thankfully purely business and career perspectives don't dictate everything.