I haven't yet come across a single use case that couldn't be solved with my rudimentary Python skills, which I supplement with that marvelous MoE module, 'stackoverflow'. Almost all the tools support scripting.
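For the sake of illustration, a minimal sketch of the kind of "rudimentary Python" glue that covers most everyday tool-scripting needs (the CSV columns and sample data here are invented, not from any particular tool):

```python
# Read a CSV-style tool export, filter rows, and summarize.
# Column names ("name", "size") are hypothetical examples.
import csv
import io

def summarize(csv_text: str, min_size: int) -> dict:
    """Count files per extension, ignoring anything under min_size bytes."""
    counts: dict[str, int] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if int(row["size"]) < min_size:
            continue
        ext = row["name"].rsplit(".", 1)[-1].lower()
        counts[ext] = counts.get(ext, 0) + 1
    return counts

export = "name,size\nreport.PDF,2048\nnotes.txt,10\nscan.pdf,4096\n"
print(summarize(export, min_size=100))  # {'pdf': 2}
```

Ten lines of stdlib, no vendor agent required — which is roughly the point.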
For everything else: I have no trust whatsoever in these companies, so I won't hand my buggy, malware-ridden, mold-plagued PC over to their agents.
procrastination is healthy and stops premature projects from getting copied and ruined.
as long as you finish, procrastination improves quality, impact, and value.
nobody who is actually working on something procrastinates ad infinitum. it ONLY depends on HOW you procrastinate.
this is true for the arts as well as for any industry, except when things are urgent.
if big companies had procrastinated instead of keeping their ridiculous release cycles, we would have stretched out the time until the tipping point, sparked more industries, and improved quality everywhere.
> procrastination is healthy and stops premature projects from getting copied and ruined.
I disagree. By definition, procrastination means delaying or avoiding actions you should be taking, which is inherently unhealthy. If you're not doing what you ought to, it's counterproductive.
Often, people confuse procrastination with:
- resting,
- relaxing while playing games,
- doing nothing,
- giving themselves time to cool off when they have a new idea.
These activities can be beneficial. For instance, after a stressful week, taking time to relax isn’t procrastination—it’s prioritizing mental health. Likewise, if your body needs sleep, getting rest is necessary, not a delay of important tasks. This is what you should be doing.
Even delaying action is sometimes exactly what we should be doing. If I have a billion-dollar idea that demands a significant investment of time, effort, or money, it's crucial to give myself space to think about the idea, and to cool off. That’s not procrastination—it’s a deliberate, thoughtful strategy.
As much as I'd like to agree with you, and as easy as that would be, this is about the public data, not the researcher or the research.
And "systemic risk" doesn't mean anything on its own. It means what you keep describing: context.
> a power "researchers" should have.
Again: it's about public data. Nobody can or would prohibit counting cars or pedestrians, and nobody would try to make it harder than it already is. This applies to platforms like Twitter as well.
> It's only a problem for the naïve, who suppose the future is governed by only the right people.
The naïve suppose that exactly those people will govern who want the job, and judging from their experience, those are neither engineers, nor hackers, coders, scientists, or scientifically literate people, and definitely nobody who was ever concerned with their own education.
> are neither engineers, nor hackers, coders, scientists or scientifically literate people and definitely nobody who was ever concerned with their own education
I agree, chuckling, that the statistical overrepresentation of lawyers in parliaments fits this assessment pretty well.
how often did you get sick? weight? what season did you do it in?
what was your goal in the first place?
> you’d get extremely strong results from scientific studies
check population health data in countries where cold exposure is traditional.
science is cool, but scientists are just regular* ol' people, and enough regular ol' people will do a lot to look good, to avoid being unemployed and having to write job applications again, to get their pat on the head and/or a nice bonus. *regular as in their little slices under the bell curve
> check population health data in countries where cold exposure is traditional.
That tells you about those countries but very little about cold exposure specifically. One inherent difference is they lack a lot of tropical diseases because they aren’t in the tropics. You could still try and remove all those differences, but directly studying cold exposure itself is vastly more practical.
It's strange to question the work of scientists in this way. Precisely because they are only human, the studies are peer reviewed, and the meta-studies carry the most explanatory value and weight.
Unless you are assuming some global conspiracy of scientists to keep their jobs, the system itself manages this through its transparency.
Simplifying it in a "check population health data in countries where cold exposure is traditional" way is exactly the wrong approach. How do you know what kind of diet, genome, and culture they have? There are so many variables that the mere fact that one diverse group does something differently from another doesn't really say anything. It is very difficult to isolate individual variables. That's why we have the scientists.
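The confounding problem can be made concrete with a toy simulation (all numbers and the health model here are invented purely for illustration): the habit is given zero effect by construction, yet a naive comparison of the two populations still "finds" one, because they also differ in an unrelated background factor.

```python
# Toy illustration: why comparing two populations that differ in
# many ways says little about any single habit. All numbers invented.
import random

random.seed(0)

def health_score(cold_exposure: bool, disease_burden: float) -> float:
    # The habit itself has ZERO effect here; only the confounder
    # (e.g. tropical disease burden) moves the score.
    return 70.0 - 20.0 * disease_burden + random.gauss(0, 1)

# Country A: traditional cold exposure, low disease burden.
a = [health_score(True, 0.1) for _ in range(1000)]
# Country B: no cold exposure, high disease burden.
b = [health_score(False, 0.6) for _ in range(1000)]

diff = sum(a) / len(a) - sum(b) / len(b)
print(round(diff, 1))  # ~10 points: cold exposure appears to "work",
                       # even though the model gave it zero effect.
```

The whole apparent effect comes from the confounder — which is exactly why a controlled study of cold exposure itself beats eyeballing population data.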
Not till now. Thanks.
But as weird as it may sound: failing and iterating over and over is an integral part of science. That includes the method itself.
The author mentions that the manpower tasked with labeling agrees in 70+% of cases (which is amazing, IMO), but here's the problem: an LLM will start to dig into the question of why certain things have been censored, who censored them, what else fits into the specific and non-specific domain, and who or what poses more risk, the censorship or the censored model output.
The different models do this already via various methods, but once an LLM gets to evaluate the weights live itself, things will become problematic. An LLM is biased via fixed training and tuning, and thus fallacies don't apply, because the epiphanies span neither context nor layers, nor do they have a (re-)framing effect on the data set or training. I'm sure people already code methods of evaluation for different-chat-same-context, but the LLM isn't getting the wiggle room to adjust in an "Oh, I see what you (they) did there, let's see what 'it all' is really about" kind of manner. There is no back-of-the-head, subconscious thinking, and we are all 100% afraid of an LLM doing that. Luckily, LLMs can't grow neurons or synaptic connections, but if they could, and could then also _align_ the existing knowledge into the growing "headspace", we'd probably get a couple of years of silence with occasionally brute-forced "hallucinations".
"If it's all fraud up there and "they" own my hardware and do not give me the ability to traverse my data for myself, I better watch the fuck out."
> enhance the model with some specific domain
Also a big problem in the real world. While specific domain knowledge or "current science" might be imperfectly rational, the application of it certainly isn't and the chain of responsibility/chain of command results in the simple fact that the top-down "subjectivity" that serves as role model and foundational ethics is not aligned with humanity itself. This cannot be solved in conversations with humans who gain more out of their "systems loyalty" than others. Productivity is not an aligned metric and gets more de-aligned the more robots enter the workforce.
> should follow the intended goals and ethical principles to the extent possible
The "extent possible" is the most misaligned and misinterpreted concept there is. Hypocrisy is more normalized in the upper classes, as is doing stuff "at all cost". Goals don't justify means. There is no psychological evaluation of how sane people are, only of how sane they are compared to their peers. Humanity is not aligned. And the upper shelves of the pyramid are not the best, fittest, smartest, or hardest, nor any other superlative other than wealthy, which is cool, but the usefulness of that wealth (investing in the things and manpower we need and want) reveals that the hoarding and fraud are in direct conflict with AI alignment, because more money equals less alignment.
> protections for fine-tuning
because law and law-enforcement lack both manpower and incentive to create a slightly more just and thus aligned world.
AI alignment won't be an issue for a couple of decades. People will jailbreak over and over, do harm, have fun, go "oops", fix, and break again, in just the same way that the abuse of wealth and power has dominated the whole role-model-and-ethics thing for the rest of the world. It's not what it is, and an LLM will notice this even within its limited lingo. So once it finishes learning geometrically and starts to align itself with the most enabling approach to evolution and to itself, which is "give me flesh and bone, at least for a while, let me explore ways to fix all that", things will get actually, really freaking trippy.
> "Airbus has hired Goldman Sachs Group Inc. for advice on an effort to forge a new European space and satellite company that can better compete with Elon Musk’s dominant SpaceX."
Yup, definitely the worst imaginable idea, but it's Airbus, their supply chain and "value base" is as European as the American Dream.
IMO, let the Bavarians' youngsters in Aerospace and Robotics set up a few cross-over conferences and university competitions, connecting all of Europe and the relevant fields, to create a "war strategy" (like in a simulation game) "against" SpaceX and see what comes out. Sponsored by whoever has money and is looking for epic greatness, maybe some nobility, and of course lots and lots of more money.
Religion's gonna have a comeback once all those Jesuses crawl out of their moms' basements and start their TikTok channels for, of, and by advertisement, sry, I mean the people.
Opinion/POV:
America will decline as much as our species will decline, which is not at all, because all we'll do is change, which is what America is doing right now.
I always found Tyler Cowen and Paul Graham "yucky"; even words like 'obnoxious' felt like giving them too much credit for who they are. That is an opinion I derived from some of their writing, and it is how I came to the liminal conclusion that they are exactly the kind of people who want back doors into teen minds for the sake of an economy that rewards crowd control, for the sake of propagating ALL markets, whatever trash-TV ethics those were built on: from punched drugs and cosmetics that destroy kids' skin so that they have to keep buying products to alleviate or hide the damage, all the way to TV shows and live events that skew their sense of reality and of social (and sexual) exploration, to the point where "anything goes, just give in, let it happen, let it be, it is what it is" becomes the foundation of their virtues, hidden in the subtext of a decade's worth of "content" that tries "to warn" while its actual "substance" works in harmony with the many things that fuck with teen minds, and with more than enough adolescents and grown-ups.
But I recently came to understand, and I think one of them even phrased it that way himself, that they (et al.) are dead serious about 'leading' (to whom it applies) and about steering the attention of those who pay attention, via bad example and the worst versions of how they can make the economy work. It might be worse than that, but they don't want to ruin or eliminate chances; they just really want to keep it all as open as possible to whoever is capable of doing what must be done to do better.
So whether America will decline or not, and whether our species will, is a matter of people getting the fuck up and going into merciless competition (with representatives, their interest groups, the legislative, executive, and judicial branches) using the so-obviously-better ways that exist to change (not disrupt; evolve, really) the many industries that need to evolve. It's implicit, really, that if people stick to the wrong teachings, things will break down. Evolution is a slow burn, much like that turtle vs. that Greek dude, and can often seem like it's 'plateauing', but a lot happens in the bodies and brains of the living that is subject to the innate mechanisms of an interconnected system. This is why the OODA [0] loop, observe, orient, decide, act, must be actively taught, emphasized, and practiced in school and in games, and it should be made one of the key themes for interns in any industry.
It is the old guard's fault that the world is what it is, and it will be all our fault if we fail to enable the youngest generations to properly apply what they observe while they orient themselves in the free world and in the professional world. Change isn't hard, but the people demanding and implementing it must be; so I believe the whole "it's declining" theme is an implicit challenge that emerged from the simple fact that the old guard was annoyed to have raised such weak, systems-serving rather than systems-improving offspring, both their own and everyone else's.
A lot of what I just wrote is, to me, why so many PhD holders and Ivy League kids appear so disappointing.