As a delivery consultant in a Generative AI specialty practice for an extremely large cloud services consultancy, I can say with confidence that failure to achieve results with the latest models is far more a reflection of the abilities of the user than of the abilities of the model.
A lot of people look at LLMs through the same lens they have applied to every other technology up to this point: learn and master the interface, and you have effectively mastered the technology itself. That view is normalizing in the sense that an objective technology has a finite, perceptible floor and ceiling to mastery, which democratizes both learning it and using it productively.
But interacting with LLMs in the class of higher-reasoning agents does not follow the same pattern of mastery. The user’s prompts are embedded into a high-dimensional space that is, for all practical purposes, infinitely multi-faceted, and it takes a real knack for abstract thought even to begin understanding how to craft a prompt suited to the problem at hand. It also takes good intuition for managing your own expectations about what LLMs excel at, what they do only marginally well, and what they can fail at miserably.
Users with backgrounds in the humanities, language arts, philosophy, and a host of other liberal arts, provided they also keep a good handle on empirical logic and reason, are the ones who consistently excel and who keep unlocking and discovering new capabilities in their LLM workflows.
I’ve used LLMs to solve particularly hairy DevOps problems. I’ve used them to refactor and modularize complicated procedural prototype code. I’ve used them to help develop UX strategy on multimillion-dollar accounts. I’ve also used them to teach myself mycology and scale up a small home lab.
When it comes to highly objective, logical tasks, such as writing source code, they perform fairly well, and if you figure out the tricks to managing the context window, you can save yourself many hours of banging your head against the desk, or even of weeping and gnashing of teeth.
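To make “managing the context window” a little more concrete, here is a minimal sketch of one tactic I have in mind: keep the system prompt and the most recent turns, and drop the oldest turns once a rough token budget is exceeded. The budget, the word-count token estimate, and the message shape are illustrative assumptions on my part, not any particular vendor’s API.

```python
def trim_history(messages, budget_tokens=6000):
    """Return a copy of `messages` that fits a rough token budget.

    `messages` is a list of {"role": ..., "content": ...} dicts with the
    system prompt (if any) first. The system prompt and the newest turns
    are always kept; the oldest non-system turns are dropped first.
    """
    def approx_tokens(text):
        # Crude estimate: roughly 1.3 tokens per word (an assumption, not exact).
        return int(len(text.split()) * 1.3)

    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]

    used = sum(approx_tokens(m["content"]) for m in system)
    kept = []
    # Walk backwards from the newest turn, keeping turns until the budget runs out.
    for m in reversed(turns):
        cost = approx_tokens(m["content"])
        if used + cost > budget_tokens:
            break
        kept.append(m)
        used += cost

    return system + list(reversed(kept))


if __name__ == "__main__":
    history = [
        {"role": "system", "content": "You are a careful DevOps assistant."},
        {"role": "user", "content": "Why is the deploy failing?"},
        {"role": "assistant", "content": "Check the pipeline logs for the failing stage."},
        {"role": "user", "content": "The logs show a permissions error on the artifact bucket."},
    ]
    # With a deliberately tiny budget, only the newest turns survive.
    print(trim_history(history, budget_tokens=30))
```

In practice you would swap the crude word-count estimate for the model’s own tokenizer and tune the budget to the model’s actual window, but the shape of the workflow is the same.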
When it comes to more subjective tasks, I’ve found it’s better to switch gears and expect something a little different from the workflow. As a UX design assistant, an LLM shines at comprehensive abstract thinking: identifying gaps, looking around corners, guiding your own thoughts, and generally acting as a “living notebook”.
It’s very easy for people who lack any personal or educational grounding in the liberal arts, or any affinity or aptitude for abstract thought, to type some half-cocked, pathetic prompt into the text area, fire it off, and blame the model. In this way, the LLM acts as a sort of mirror, highlighting their ignorance, metaphorically tapping its foot while it waits for them to get their shit together. Their lament is a form of denial.
The coming age will separate the wheat from the chaff.