I know plenty of people who like to create, but who also have a better technical understanding of LLMs, who use LLMs in their workflows (some even finetune LLMs on their own work and then incorporate them into their workflows).
Most non-technical people (including most creators) have an extremely naive view of what LLMs are, driven mostly by the media and by shills targeting audiences that aren't creative, and their response to LLMs is shaped by that.
I should have said "the people who like to create don't use LLMs for the parts they like creating". I like making products that are easy to use and useful; I have LLMs write 100% of the code, but I still do all the UX by hand, because that's what I enjoy.
The name makes sense because Aluminium has an -ium suffix like Chromium. There's also no reason for the project name to agree with the US pronunciation of the element.
Well, it makes sense and it doesn't, because it makes it sound like this is a 'lightweight' version of the Chromium-based products while the opposite seems to be true. Call it Osmium instead; that's got '-ium' and some weight to it, just like this thing.
My dad always pronounced it a-luna-min, so my whole life I thought that there were 3 pronunciations, and the fact that there are only two correct ones feels strange to me. Not sure where he got that from, maybe he had special metal from the moon.
Online writing before 2022 is the low-background steel of the information age. Now these models will all be training on their own output. What will the consequences of this be?
I love how we have such a poor model of how LLMs work (or more aptly don't work) that we are developing an entire alchemical practice around them. Definitely seems healthy for the industry and the species.
I am heavily tattooed and I tried generating a few designs. I'm sorry to say that everything it suggested was awful, and if an artist ever showed me any of this in their flash collection I would block them on Instagram and potentially call the police.
I asked it for "one hand stabbing another with an ornate dagger, traditional style." It got the style right-ish but everything else was terribly wrong. Daggers with multiple blades, hands with hands on them, fingers with multiple fingernails, etc.
This could be useful if it can reliably get the core requirements correct though. I can see myself generating some ideas with this and taking them to an artist as a basis for them to start with.
OP: It would probably be good to have a "these aren't even close, please try again" button which allows for a couple free retries.
but here's the thing: why spend a bunch of time trying to formulate a prompt to give to a machine to generate a bad image to bring to an artist who would then refine it, when you could just say to the artist "one hand stabbing another with an ornate dagger, traditional style" and get what you want in one shot?
I wouldn't get annoyed at the study—they're trying to discover objective facts about the world, and it is very important to know that a commonly-accepted sugar substitute causes a drastic and long-lasting increase in clotting behavior after consumption. What you do with that information is up to you.
To add to that, they seem to have anticipated the OP's reaction, since it was mirrored by the Calorie Control Council (industry cartel?), and are literally saying: if the study is sound and you are worried about clotting or heart disease, you need to watch your intake. That's because they claim the amount used in the study is the same as what's in common sugar-free sodas.
If the OP is saying their intake is way higher than that and that the only way they can reduce that is to return to their old sugar habit and become clinically obese, then yeah, without additional information I guess it's better to stay on sweeteners. But if you're escaping a burning building and are in danger of getting hit by a car when doing so, the course of action is to avoid getting hit by the car -- not run back into the burning building or wait till the car hits you.
If you live in a place with accessible healthcare (easy to reach, fast appointments, or cheap, depending on your criteria), you should at least keep tabs on your heart.
Reading this it sounds like 'AI' is when you build a heuristic model (which we've had for a while now) but pass some threshold of cost in terms of input data, GPUs, energy, and training.
The classical approach was to understand how genes transcribe to mRNA, and how mRNA translates to polypeptides; how those are cleaved by the cell, and fold in 3D space; and how those 3D shapes result in actual biological function. It required real-world measurement, experiment, and modeling in silico using biophysical models. Those are all hard research efforts. And it seems like the mindset now is: we've done enough hard research, let's feed what we know into a model, hope we've chosen the right hyperparameters, and see what we get. Hidden in the weights and biases of the model will be that deeper map of the real world that we have not yet fully grasped through research.
But the AI cannot provide a 'why'. Its network of weights and biases is as unintelligible to us as the underlying scientific principles of the real world we gave up trying to understand along the way. When AI produces a result that is surprising, we still have to validate it in the real world, and work backwards through the hard research to understand why we are surprised.
If AI is just a tool for a shotgun approach to discovery, that may be fine. However, I fear it is sucking a lot of air out of the room from the classical approaches. When 'AI' produces incorrect, misleading, or underwhelming results? Well, throw more GPUs at it; more tokens; more joules; more parameters. We have blind faith it'll work itself out.
But because the AI can never provide a guarantee of correctness, it is only useful to those with the infrastructure to carry out those real-world validations on its output, so it's not really going to create a paradigm shift. It can provide only a marginal improvement at the top of the funnel for existing discovery pipelines. And because AI is very expensive and getting more so, there's a pretty hard cap on how valuable it would be to a drugmaker.
I know I'm not the only one worried about a bubble here.
You're using "AI" quite broadly here. Here's a perspective from computer vision (my field).
For decades, CV was focused on trying to 'understand' how to do the task. This meant a lot of hand-crafting of low-level features that are common in images, and finding clever ways to make them invariant to typical 3D transformations. This works well for some tasks, and is still used today in things like robotics, SLAM, etc. However, when we then want to add an extra level of complexity, e.g. to try and model an abstract concept like "cat", we hit a bit of a brick wall. This happens to be a task where feeding a large dataset into a (mostly) unconstrained machine learning model does very well.
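To make that concrete, here is a rough sketch of what the hand-crafted era looks like in practice: ORB keypoints and descriptor matching of the kind still used in SLAM pipelines. (This is my own illustration, not something from the comment; the filenames and parameters are placeholders.) Nothing in it has any notion of "cat"; the descriptors only encode local image geometry.

    # Sketch of the hand-crafted feature approach (requires opencv-python;
    # img1.jpg / img2.jpg are placeholder filenames for two overlapping photos).
    import cv2

    img1 = cv2.imread("img1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("img2.jpg", cv2.IMREAD_GRAYSCALE)

    # ORB: engineered corner descriptors, designed to be invariant to rotation
    # and partially to scale -- the kind of feature still used in SLAM.
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force descriptor matching gives correspondences you could feed into
    # pose estimation, but nothing here knows what a "cat" is -- that level of
    # abstraction is where learned models took over.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(len(matches), "matches")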
> The classical approach was to understand how genes transcribe to mRNA, and how mRNA translates to polypeptides; how those are cleaved by the cell, and fold in 3D space; and how those 3D shapes result in actual biological function.
I don't have the expertise to critique this, but it does sound like we're in the extreme 'high complexity' zone to me. Some questions for you:
- how accurate does each stage of this need to be to get to useful performance? Are you sure there are no brick walls here? How long do you think this approach will take to deliver results?
- do you not have to validate a surprising classical finding in the same way that you would an AI model, i.e. how much does the "why" matter? "The AI can never provide a guarantee of correctness" is true, but what if it were merely extremely accurate, in the same way that many computer vision models are?
> do you not have to validate a surprising classical finding in the same way that you would an AI model, i.e. how much does the "why" matter? "The AI can never provide a guarantee of correctness" is true, but what if it were merely extremely accurate, in the same way that many computer vision models are?
The lack of asking "why" is one of my biggest frustrations in much of the research I have seen in biology and genetics today. The why is hugely important: without knowing why something happens or how it works, we're left knowing only what happened. When we go to use that as knowledge, we have no idea what unintended side effects may occur and no real information telling us where to look or how to identify side effects should they occur.
Researching what happens when we throw crap at the wall can occasionally lead to a sellable product but is a far cry from the scientific method.
I mean, it's more than a sellable product; the reason we're doing this is to be able to advance medicine. A good understanding of the "why" would be great, but if we can advance medicine quicker in the here and now without it, I think that's worth doing?
> When we go to use that as knowledge we have no idea what unintended side effects may occur and no real information telling us where to look or how to identify side effects should they occur.
Alright and what if this is also a lot quicker to solve with AI?
> I mean, it's more than a sellable product; the reason we're doing this is to be able to advance medicine
I get this approach for trauma care, but that's not really what we're talking about here. With medicine, how do we know we aren't making things worse without knowing how and why it works? We can focus on immediate symptom relief, but that's a very narrow window with regard to unintended harm.
> Alright and what if this is also a lot quicker to solve with AI?
Can we really call it solved if we don't know how or why it works, or what the limitations are?
It's extremely important to remember that we don't have Artificial Intelligence today; we have LLMs and similar tools designed to mimic human behaviors. An LLM will never invent a medical treatment or medication, or, more precisely, it may invent one by complete accident and it will look exactly like all the wrong answers it gave along the way. LLMs are tasked with answering questions in a way that statistically matches what humans might say, with variance based on randomness factors and a few other control knobs.
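For what it's worth, those "randomness factors and control knobs" are mostly just sampling parameters over next-token probabilities. A minimal sketch (mine, not the commenter's; the logits are made-up numbers) of temperature and top-k sampling:

    # Toy next-token sampler showing the usual knobs: temperature and top-k.
    import numpy as np

    rng = np.random.default_rng()
    logits = np.array([2.0, 1.0, 0.2, -1.0])   # hypothetical scores for 4 candidate tokens

    def sample(logits, temperature=1.0, top_k=None):
        z = logits / temperature                   # lower temperature -> sharper, less random
        if top_k is not None:
            cutoff = np.sort(z)[-top_k]
            z = np.where(z >= cutoff, z, -np.inf)  # discard everything outside the top k
        p = np.exp(z - z.max())                    # softmax over the remaining candidates
        p /= p.sum()
        return rng.choice(len(logits), p=p)

    print(sample(logits, temperature=0.7, top_k=3))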
If we do get to actual AI, that's a different story. It takes intelligence to invent these new miracle cures we hope they will invent. The AI has to reason about how the human body works, about the complex interactions between the body, the environment, and any interventions, and it has to reason through the necessary mechanisms for a novel treatment. It would also need to understand how to model these complex systems in ways that humans have yet to figure out; if we could already model the human body in a computer algorithm, we wouldn't need AI to do it for us.
Even at that point, let's say an AI invents a cure for cancer. Is that really worth all the potential downsides of all the dangerous things such a powerful AI could do? Is a cure for cancer worth knowing that the same AI could also be used to create bioweapons on a level that no human would be able to create? And that doesn't even get into the unknown risks of what an AI would want to do for itself, what its motivations would be, or what emotions and consciousness would look like when they emerge in an entirely new evolutionary system separate from biological life.
> how much does the "why" matter? [...] merely extremely accurate, in the same way that many computer vision models are?
Because without a "why" (causal reasoning) they cannot generalize, and their accuracy is always liable to tank when they encounter out-of-(training)-distribution samples. And when an ML system is deployed among other live actors, those actors are highly incentivized to figure out how to perturb inputs to exploit the system. Adversarial examples in computer vision, adversarial prompts / jailbreaks for large language models, etc.
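A toy illustration of that perturbation game (my own sketch with made-up numbers, not anything from the thread): for a linear scorer, moving the input a small step against the sign of the weights flips the decision, which is the same gradient-sign idea behind FGSM-style adversarial examples in vision.

    import numpy as np

    # Hypothetical linear classifier: predicts positive when w @ x + b > 0
    w = np.array([1.0, -2.0])
    b = 0.1
    x = np.array([0.6, 0.1])
    print(w @ x + b)              # 0.5 -> classified positive

    # Adversary nudges each input dimension by eps against the score's gradient (here just w)
    eps = 0.3
    x_adv = x - eps * np.sign(w)
    print(w @ x_adv + b)          # -0.4 -> the decision flips despite a small L-infinity change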
"AI" has always been a marketing term first, there's a great clip of John McCarthy on twitter/X basically pointing out that he invented the term "Artificial Intelligence" for marketing purposes [0].
Don't read too deeply into what exactly AI is. Likewise, I recommend not being too cynical about it either. McCarthy and those around him absolutely did pioneer some incredible, world-changing work under that moniker.
Regarding your particular critiques, natural intelligence very often also cannot provide a "why". If you follow any particular technical field deep enough, it's not uncommon to come across ideas that now have deep, rigorous proofs behind them but basically started as a hunch. Consider the very idea of "correlation", which seems rooted in mathematical truths but was basically invented by Galton because he couldn't find causal methods to prove his theories of eugenics (it was his student Pearson who later took the idea and refined it further).
Are we in an AI bubble? Very likely, but that doesn't mean there aren't incredible finds to be had with all this cash flowing around. AI winters can be just as irrational (remember that the perceptron's XOR problem basically caused the first AI winter, despite the fact that it was well known this could be solved with trivial modifications).
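On the XOR point: the "trivial modification" is just adding a hidden layer. A single-layer perceptron provably can't separate XOR, but a tiny two-layer net learns it. Minimal sketch (my own, with arbitrary hyperparameters):

    # Two-layer network trained on XOR with plain gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer of 4 units -- the "trivial modification" to the perceptron.
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

    lr = 1.0
    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)                  # forward pass
        p = sigmoid(h @ W2 + b2)
        d_p = (p - y) * p * (1 - p)               # backprop of squared-error loss
        d_h = (d_p @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_p;  b2 -= lr * d_p.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(np.round(p, 2))   # should approach [0, 1, 1, 0]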
I feel like the invariant with "AI" is the software engineer saying "with enough data and statistics I can not understand the problem domain." It's fundamentally a rejection of expertise.
Take weather prediction for instance. This is something that the AI companies are pushing hard on. There are very good physics-based weather prediction models. They have been improved incrementally over many years and are probably pretty close to the theoretical peak accuracy given the available initial state data. They are run by governments and their output is often freely accessible by the public.
So firstly, where on earth is the business model when your competition is free?
Secondly, how do you think you will do better than the current state of the art? Oh yeah, because AI is magic. All those people studying fluid dynamics were just wasting their time when they could have just cut a check to nvidia.
> I feel like the invariant with "AI" is the software engineer saying "with enough data and statistics I can not understand the problem domain." It's fundamentally a rejection of expertise.
Nature doesn't understand the problem domain, and yet it produced us, capable of extraordinary achievements.
> The classical approach was to understand how genes transcribe to mRNA, and how mRNA translates to polypeptides; how those are cleaved by the cell, and fold in 3D space; and how those 3D shapes result in actual biological function.
Do you have references for this approach? It's my understanding that structure solutions mostly lag drug development quite significantly, and that the underlying biological understanding is typically either pre-existing or won't exist for the drug until later. Case in point: look at recent Alzheimer's drugs, where the biological hypothesis has even been straight-up disproven.
Hopefully this bubble will pop when the GenAI bubble does. (Not that it necessarily should, since it both predates GenAI and is unrelated to it… but hype isn't rational to begin with.)