I bristled at the title, article contents, and their spreadsheet example, but this does actually touch on a real pain point that I have had - how do you enable power users to learn more powerful tools already present in the software? By corollary, how do you turn more casual users into power users?
I do a lot of CAD. Every single keyboard shortcut I know was learned only because I needed to do something that was either *highly repetitive* or *highly frustrating*, leading me to dig into Google and find the fast way to do it.
However, everything that is only moderately repetitive/frustrating and below is still being done the simple way. And I've used these programs for years.
I have always dreamed of user interfaces having competent, contextual user tutorials that space out learning about advanced and useful features over the entire time you use the software. Video games do this well, having long since replaced singular "tutorial sections" with a stepped rollout of gameplay mechanics that gradually teaches people incredibly complex systems over time.
A simple example to counter the auto-configuration interpretation most of the other commenters are thinking of. In a toolbar dropdown, highlight all the features I already know how to use regularly. When you detect me trying to learn a new feature, help me find it, highlight it in a "currently learning" color, and slowly change the highlight color to "learned" in proportion to my muscle memory.
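To make that concrete, here's a minimal sketch of the kind of tracking I mean, in Python. The thresholds, state names, and feature IDs are all made up for illustration:

    from collections import defaultdict

    # Hypothetical usage tracker: count how often each feature is invoked and
    # map the count to a toolbar highlight state. Thresholds are arbitrary.
    LEARNING_THRESHOLD = 3    # uses before a feature counts as "being learned"
    LEARNED_THRESHOLD = 20    # uses before it counts as muscle memory

    usage_counts = defaultdict(int)

    def record_use(feature_id):
        usage_counts[feature_id] += 1

    def highlight_state(feature_id):
        uses = usage_counts[feature_id]
        if uses >= LEARNED_THRESHOLD:
            return "learned"        # subdued highlight: already routine
        if uses >= LEARNING_THRESHOLD:
            return "learning"       # bright highlight: currently being learned
        return "undiscovered"       # no highlight yet

The UI would just read highlight_state() when drawing the dropdown; the hard part is the detection and decay logic, not the bookkeeping.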
> how do you enable power users to learn more powerful tools already present in the software?
On-the-job training, honestly; like we've been doing for decades, restated as:
Employer-mandated training in ${Product} competence: a "proper" guided introduction to the advanced and undiscovered features of a product, combined with a proficiency examination where the end-user must demonstrate both understanding a feature and actually using it.
(With the obvious caveat that you'll probably want to cut off Internet access during the exam part to avoid people delegating their thinking to an LLM again; or mindlessly following someone else's instructions in general.)
My pet example is ("normal") people using MS Word without understanding how defined styles work, instead treating everything in Word as a very literal 1:1 WYSIWYG: to "make a heading" they'll select a line of text, then manually set the font, size, and alignment (bonus points if they think underlining text for emphasis is ever appropriate typography (it isn't)), and they probably think there's nothing more to learn. I'll bet that someone like that is never going to explore and understand the Styles system of their own volition (they're here to do a job, not to spontaneously decide to learn Word inside out, even on company time).
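For a concrete sense of the difference, a rough sketch using the python-docx library (the document and heading text are made up); the first paragraph is formatted the manual 1:1 WYSIWYG way, the second uses a defined style:

    from docx import Document
    from docx.shared import Pt

    doc = Document()

    # Manual formatting: font properties set run by run; invisible to the
    # navigation pane, and impossible to restyle in one place later.
    p = doc.add_paragraph()
    run = p.add_run("Quarterly Report")
    run.font.size = Pt(16)
    run.font.bold = True

    # Defined style: one assignment. Change the "Heading 1" style once and
    # every heading in the document follows; the outline picks it up too.
    doc.add_paragraph("Quarterly Report", style="Heading 1")

    doc.save("demo.docx")

Both produce text that looks roughly the same on screen, which is exactly why the difference never gets discovered.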
Separately, there are things like the "onboarding popups" you see in web applications these days, where users are prompted to learn about new and underused features, but I feel they're ineffective or user-hostile because those popups only appear when users are trying to use the software for something else, so they'll ignore or dismiss them, never to be seen again.
> By corollary, how do you turn more casual users into power users?
Unfortunately for our purposes, autism isn't transmissible.
Yes, and that's the point. In my opinion, this is the perfect use case for generative AI, one that takes advantage of the strengths of the technology while avoiding its weaknesses.
The generative UI example in the article is the complete opposite of this idea - an obtuse implementation of generative AI that creates more problems than it solves. Yes, there is value in the idea of personalized UI. But UI/UX derives a lot of its value from consistency, as the other comments in this thread have mentioned. Losing that in exchange for personalization is a huge net negative, in my opinion.
Generative UI is incompatible with learning. It means every user sees something different, so you can't watch a tutorial or have a coworker show you what they do or have tech support send you a screenshot.
The solution could be search. It's not a House of Leaves.
I break out Blender every six months or so in order to create a model for 3D printing. It needs to be precise and often has threads or other repetitive structures.
Every. Single. Time. I spend at least the first 3 hours relearning how to use all the tools again with Claude reminding me where modifiers are, and which modifier allows what. And which hotkey slices. Etc etc.
Yeah, but when you then need to do the same action 4 times in a row, getting Claude to provide the correct action all 4 times takes a lot more brainpower on my part than just learning the menus yet again, right?
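For repeated actions like that, Blender's Scripting tab is one way out: ask once, keep the snippet, rerun it. A rough sketch, assuming you want the same Solidify modifier on several selected objects (the modifier choice and thickness are just illustrative):

    import bpy

    # Add the same Solidify modifier to every selected mesh object instead of
    # repeating the menu clicks for each one.
    for obj in bpy.context.selected_objects:
        if obj.type != 'MESH':
            continue
        mod = obj.modifiers.new(name="Solidify", type='SOLIDIFY')
        mod.thickness = 0.002  # 2 mm walls, arbitrary example value

Whether saving and rediscovering that snippet six months later is any easier than relearning the menus is a fair question.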
Writing is an expression of an individual, while code is a tool used to solve a problem or achieve a purpose.
The more examples of different types of problems being solved in similar ways present in an LLM's dataset, the better it gets at solving problems. Generally speaking, if it's a solution that works well, it gets used a lot, so "good solutions" become well represented in the dataset.
Human expression, however, is diverse by definition. The expression of the human experience is the expression of a data point on a statistical field with standard deviations the size of chasms. An expression of the mean (which is what an LLM does) goes against why we care about human expression in the first place. "Interesting" is a value closely paired with "different".
We value diversity of thought in expression, but we value efficiency of problem solving for code.
There is definitely an argument to be made that LLM usage fundamentally restrains an individual from solving unsolved problems. It also doesn't consider the question of "where do we get more data from".
>the code you actually want to ship is so far from what LLMs write
I think this is a fairly common consensus, and my understanding is that the reason for it is the limited context window.
I argue that the intent of an engineer is contained coherently across the code of a project. I have yet to get an LLM to pick up on the deeper idioms present in a codebase that help constrain the overall solution towards these more particular patterns. I’m not talking about syntax or style, either. I’m talking about e.g. semantic connections within an object graph, understanding what sort of things belong in the data layer based on how it is intended to be read/written, etc. Even when I point it at a file and say, “Use the patterns you see there, with these small differences and a different target type,” I find that LLMs struggle. Until they can clear that hurdle without requiring me to restructure my entire engineering org, they will remain fancy code completion suggestions and hobby project accelerators, and not much else.
Text, images, art, and music are all methods of expressing our internal ideas to other human beings. Our thoughts are the source, and these methods are how they are expressed. Our true goal in any form of communication is to understand the internal ideas of others.
An LLM expresses itself in all the same ways, but the source doesn't come from an individual - it comes from a giant dataset. This could be considered an expression of the aggregate thoughts of humanity, which is fine in some contexts (like retrieval of ideas and information highly represented in the data/world), but not when presented in a context of expressing the thoughts of an individual.
LLMs express the statistical summation of everyone's thoughts. They present the mean, when what we're really interested in are the data points a couple of standard deviations away from the mean. That's where all the interesting, unique, and thought-provoking ideas are. Diversity is at the core of the human experience.
---
An interesting paradox is the use of LLMs for translation into a non-native language. LLMs are actively being used to express an individual's ideas in better words than they could manage with their limited language proficiency, but for those of us on the receiving end, we expect the expression to mirror the source, and so we immediately suspect the legitimacy of the individual's thoughts. Which is a little unfortunate for those who just want to express themselves better.
3D printing is to mechanical engineering what vibe coding is to computer science.
With the rise of accessible 3D printers that can print engineering materials, there are a lot of people who try to create functional parts without any engineering background. Loading conditions, material properties, failure modes, and fatigue cycling are all important but invisible engineering steps that must be taken for a part to function safely.
As a consumer with a 3D printer, none of this is apparent when you look at a static, non-moving part. Even when you do start to learn more technical details like glass transition temperature, non-isotropic strength, and material creep, it's still not enough to cover everything you need to consider.
Much of this is also taught experimentally, not analytically - everyone will tell you "increasing walls increases strength more than increasing infill", but very few can actually point to the area moment of inertia equation that explains why.
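To put a rough number on that claim, a quick back-of-the-envelope sketch in Python using I = b*h^3/12 for a rectangular section; the cross-section dimensions are made up, and the point is the ratio, not the exact values:

    # Why walls beat infill, via the area moment of inertia of a rectangle.
    def I_rect(b, h):
        """Second moment of area of a solid rectangle about its centroid (mm^4)."""
        return b * h**3 / 12.0

    b, h = 20.0, 10.0   # printed beam cross-section, mm (illustrative)
    t = 1.2             # wall thickness, mm

    # Option 1: material only in the perimeter walls (hollow box section).
    I_walls = I_rect(b, h) - I_rect(b - 2 * t, h - 2 * t)
    A_walls = b * h - (b - 2 * t) * (h - 2 * t)

    # Option 2: the same amount of material as a solid slab centred on the
    # neutral axis (a crude stand-in for putting the material in the middle).
    h_infill = A_walls / b
    I_infill = I_rect(b, h_infill)

    print(f"walls:  A = {A_walls:.1f} mm^2, I = {I_walls:.0f} mm^4")   # ~1023 mm^4
    print(f"middle: A = {A_walls:.1f} mm^2, I = {I_infill:.0f} mm^4")  # ~61 mm^4
    # Same material, roughly 17x the bending stiffness when it sits at the
    # perimeter, because distance from the neutral axis enters as h^3.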
3D printing has been an incredible boon for increasing accessibility for making parts in small businesses, but it has also allowed for big mistakes to be made by small players. My interpretation is the airshow vendor is probably one of these "small businesses".
You don't need to be able to mathematically jerk the equation off to understand why increasing material at the perimeter adds more strength than the center (within reason and in typical cases), or why you probably shouldn't use something that melts around 200 °C in an engine bay.
Note that the actual material used has a glass transition temperature around 50 °C, not 200. If the part was actually made from ABS-CF (as the pilot thought it was) it'd stand a decent chance of surviving for a long time, given that it gets a lot of air cooling.
Everything you need to consider really isn't that much when it comes to most typical consumer 3D printing projects, mostly because they are usually about stuff like fixing a broken trashcan. The engineers who made that bullshit plastic part that broke after a year probably knew all about the area moment of inertia, but that doesn't mean I need to in order to print a replacement part that lasts longer - or not, in which case I'll just iterate on my process.
I really don't get the dismissiveness, and frankly, I've never experienced that from engineers in my life. They just seem delighted when someone, kid or adult, tinkers with additive manufacturing.
Hmm, I suppose the analogy could be interpreted as dismissive, which is not my intent.
I think both vibe coding and 3D printing are wonderful things. Lowering the barrier to entry and increasing technology accessibility allows those without formal training to create incredibly capable things that were previously difficult or not possible to do.
What I meant to specifically highlight is the 3D printing of functional parts that have some level of impact on safety, things that can lead to significant property damage, harm, or loss of life. Common examples include 3D printed car parts (so many) and load bearing components in all sorts of applications (bike mounts, TV mounts, brackets, I even saw a ceiling mounted pull-up bar once).
This isn't to say it can't or shouldn't be done. What I'm saying is that both on the digital side (files for personal use) and the production/sale side (selling finished parts), there is no guarantee of engineering due diligence. 3D printers enable low-volume small businesses to exist, but it also means that, purposefully or not, they can go quite a while without running into the safety regulations and standards meant to keep people safe.
I call bullshit. 3d-printing is just a manufacturing method. Basic woodworking is much cheaper and more accessible than 3d-printing, do you call it vibe-coding?
If you carve a wooden part with "the right shape" for an engineering application, but the part lacks the physical properties that allow it to perform under load stress... then yes, that's vibe carving.
Looks good - falls apart in practice, and a junior can't tell the difference as they "look the same" to the inexperienced eye.
From practical experience, you cannot just replace a tyre on a car with any old bit of wood - you really need to use hard-wearing mulga (or equivalent) as an emergency skid. (And replace that as soon as possible.)
What you're describing is more like someone who doesn't know computer science principles hacking on code, manually. Part of the definition of "vibe coding" is that AI agents (of questionable quality) did the actual work.
This whole thread is a stretch, IMO. But, I like this phrase.
As a fabricator (large wood CNC, laser cutting and engraving, 3D printing, UV printing, welding), I put engineering into a whole different job scope. I can make whatever you tell me to, really well - that's not vibe-carving.
I don't necessarily write the specs or "engineer" anything. I'm just saying, don't blame the medium, 3D printing. The fact is a fabricator is not necessarily an engineer, regardless of the medium.
Don't get me wrong, wood is great, I've made a lot of things and replacement parts from appropriate woods.
Using scrublands wood (slow growing tough long grain mulga) as a skid when a rubber tyre destroys itself is an old old hack passed on by my father (he's still kicking about despite being born in the early 1930s).
Point being, I don't blame processes (3D printing, etc) for part failure, that comes down to whether the shape and material are fit for purpose, whether material grain structure can be aligned for sufficient strength if required, whether expansion coefficients match to avoid stress under thermal changes, etc.
Engineering manufacturing can sometimes be surprisingly holistic in the sense that every small thing matters, including the order in which steps are performed (hysteresis)... there's more to things than meets the eye.
A quick primer. There are two forms of 3D modeling - parametric solid body modeling, used in engineering CAD programs like Solidworks, and mesh modeling, used in the CGI industry in programs like Blender. Hobbyist 3D printing currently sits between these two audiences - engineers who design for function and designers who design for aesthetics - and all the newbies get caught up in the mosh pit between them, which gets crazy confusing. It doesn't help that some software (like Fusion360) integrates both in the same package, or that STL is a mesh format and not a solid body format (like STEP).
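To make the STL point concrete: an STL file is literally just a bag of triangles. Writing one by hand makes that obvious (this single made-up facet is a complete, valid file; real parts have thousands):

    # An ASCII STL file is nothing but triangles: no dimensions, no tolerances,
    # no feature history.
    facet = """solid demo
      facet normal 0 0 1
        outer loop
          vertex 0 0 0
          vertex 10 0 0
          vertex 0 10 0
        endloop
      endfacet
    endsolid demo
    """

    with open("demo.stl", "w") as f:
        f.write(facet)
    # STEP, by contrast, carries actual solid-body geometry, which is why it's
    # the preferred interchange format for engineering CAD.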
If you want to make things where fit, function, dimensions, tolerances, etc. actually matter, then you want to learn CAD (Solidworks) and find resources that teach the basics of mechanical engineering part design (intro to CAD courses, basically). If you want to design from a more artistic standpoint, then use mesh modeling software (Blender).
Fusion360 is actually quite usable for both, but my problem is that the Fusion resources for functional design are frequently non-engineers trying to teach engineering concepts and it's just a longer and more frustrating process.
BTW, Solidworks' Maker version locks Maker-created files to ONLY be editable in Maker, which means upgrading to normal Solidworks renders your previous files unusable. The $60/year student edition is better. Avoid cloud versions of anything you pick. Up to you based on your use case.
Direct comparison of the shortcomings of conventional therapy versus LLM therapy:
- The patient/therapist "matching" process. This takes MONTHS, if not YEARS. For a large variety of quantifiable and unquantifiable reasons (examples of the former include cultural alignment, gender, etc.), the only way to find an effective therapist for you is to sign up, spend your first few sessions bringing them up to speed(1), then spend another 5-10+ sessions trying to figure out if this "works for you". It doesn't help that there are no quantitative metrics, only qualitative ones made by the patient themselves, so figuring it out can take even longer if you make the wrong call and continue with the wrong therapist. By comparison, an LLM can go through this iteration miles faster than conventional therapy.
- Your therapist's "retirement"(2). Whether they're actually retiring, switching from a mental health clinic to a different clinic or to private practice, or your insurance no longer covers them. An LLM will last as long as you have the electricity to power a little Llama at home.
If you immediately relate to these points, please comment below so I know I'm not alone in this. I'm so mad at my long history with therapy that I don't even want to write about it. The extrapolation exercise is left to the reader.
(1) "Thank you for sharing with me, but unfortunately we are out of time, and we can continue this next session". Pain.
(2) Of note, this unfortunately applies to conventional doctors as well.
"These scrolls were scanned at Diamond Light Source, a particle accelerator near Oxford, England. The facility produces a parallel beam of X-rays at high flux, allowing for fast, accurate, and high-resolution imaging. The X-ray photos are turned into a 3D volume of voxels using tomographic reconstruction algorithms, resulting in a stack of slice images."
That article is really cool BTW, amazing stuff. Give it a read.
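If you want to play with the reconstruction step they describe, scikit-image has a toy-scale version of the same idea. A rough sketch; the phantom image and angle count are just for illustration, and the real scroll data is vastly larger and noisier:

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    # Simulate parallel-beam projections of a single 2D slice, then recover
    # the slice with filtered back projection.
    image = shepp_logan_phantom()                    # stand-in for one slice
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)

    sinogram = radon(image, theta=angles)            # what the detector records
    reconstruction = iradon(sinogram, theta=angles)  # back to a voxel slice

    print("mean absolute error:", np.abs(reconstruction - image).mean())

Stack thousands of reconstructed slices and you get the voxel volume the article talks about.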