I've asked before - I am not clear what the massive boost is other than saving a few days or weeks at the beginning. I believe all taxation, labor compliance and other such regulation is still the same as the status quo. The one time ease of setup doesn't seem to be worth the ongoing hassle of dealing with the same crap, just with the added friction of a new and unfamiliar entity structure. What am I missing?
Engineers work with non-deterministic systems all the time. Getting them to work predictably within a known tolerance window and/or with a quantified and acceptable failure rate is absolutely engineering.
Same way as any other production model in ML. Or any field that requires quality control. Really, this is not fundamentally different in conceptual approach from implementing any other technology or area of knowledge, which is a near-verbatim definition of engineering.
Depends on the failure mode and application. But a first approximation is the same way you would for a human output. E.g. process engineering for a support chatbot has many of the same principles as process engineering for a human staffed call center.
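As a toy illustration of working to a quantified, acceptable failure rate: the `generate` and `is_acceptable` functions below are hypothetical stand-ins for a real model call and a real quality check, and the 10% failure probability is made up.

```python
import random

random.seed(0)  # seeded only so this sketch is reproducible

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a non-deterministic model call
    # that fails (returns nothing useful) about 10% of the time.
    return prompt.upper() if random.random() < 0.9 else ""

def is_acceptable(output: str) -> bool:
    # Hypothetical quality check on a single output.
    return len(output) > 0

def estimated_failure_rate(prompt: str, trials: int = 1000) -> float:
    failures = sum(1 for _ in range(trials)
                   if not is_acceptable(generate(prompt)))
    return failures / trials

rate = estimated_failure_rate("hello")
# Ship only if the measured rate sits inside the agreed tolerance window.
print(f"estimated failure rate: {rate:.3f}")
```

The point is just that "non-deterministic" and "engineered" are compatible once you measure against a tolerance.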
I agree with your points but I'm also reminded of one of my bigger learnings as a manager - the stuff I'm best at is the hardest, but most important, to delegate.
Sure it was easier to do it myself. But putting in the time to train, give context, develop guardrails, learn how to monitor etc ultimately taught me the skills needed to delegate effectively and multiply the team's output massively as we added people.
It's early days but I'm getting the same feeling with LLMs. It's as exhausting as training an overconfident but talented intern, but if you can work through it and somehow get it to produce something as good as you would do yourself, it's a massive multiplier.
I don't totally understand the parallel you're drawing here. As a manager, I assume you're training more junior (in terms of their career or the company) engineers up so they can perform more autonomously in the future.
But you're not training LLMs as you use them really - do you mean that it's best to develop your own skill using LLMs in an area you already understand well?
I'm finding it a bit hard to square your comment about it being exhausting to cat-herd the LLM with it being a force multiplier.
No, I'm talking about my own skills. How I onboard, structure 1on1s, run meetings, create and reuse certain processes, manage documentation (a form of org memory), check in on status, devise metrics and other indicators of system health. All of these compound and provide leverage even if the person leaves and a new one enters. The 30th person I onboarded and managed was orders of magnitude easier (for both of us) than the first.
With LLMs, the better I get at the scaffolding and prompting, the less it feels like cat-herding (so far at least). Hence the comparison.
Humans really like to anthropomorphize things. Loud rumbles in the clouds? There must be a dude on top of a mountain somewhere who's in charge of it. Impressed by that tree? It must have a spirit that's like our spirits.
I think a lot of the reason LLMs are enjoying such a huge hype wave is that they invite that sort of anthropomorphization. It can be really hard to think about them in terms of what they actually are, because both our head-meat and our culture have so much support for casting things as other people.
No, LLMs don't learn - each new conversation effectively clears the slate and resets them to their original state.
If you know what you're doing you can still "teach" them though, but it's on you to do that - you need to keep on iterating on things like the system prompt you are using and the context you feed in to the model.
This sounds like trying to glue on supervised learning post-hoc.
Makes me wonder whether, if there had been equal investment into specialized tools using more fine-tuned statistical methods (like supervised learning), we would have something much better than LLMs.
I keep thinking about spell checkers and auto-translators, which have been using machine learning for a while, with pretty impressive results (unless I’m mistaken I think most of those use supervised learning models). I have no doubt we will start seeing companies replacing these proven models with an LLM and a noticeable reduction in quality.
That's mostly, but not completely true. There are various strategies to get LLMs to remember previous conversations. ChatGPT, for example, remembers (for some loose definition of "remembers") all previous conversations you've had with it.
I think if you use a very loose definition of learning - a stimulus which alters subsequent behavior - you can claim this is learning. But if you tell a human to replace the word “is” with “are” in the next two sentences, this could hardly be considered learning; rather it is just following commands, even though it meets the previous loose definition. This is why in psychology we usually include some timescale for how long the altered behavior must last for it to be considered learning. A short-term altered behavior is usually called priming. But even then I wouldn’t consider “following commands” to be either priming or learning; I would simply call it obeying.
If an LLM learned something when you gave it commands, it would probably be reflected in some adjusted weights in some of its operational matrix. This is true of human learning: we strengthen some neural connection, and when we receive a similar stimulus in a similar situation sometime in the future, the new stimulus will follow a slightly different path along its neural pathway and result in an altered behavior (or at least have a greater probability of an altered behavior). For an LLM to “learn” I would like to see something similar.
I think you have an overly strict definition of what "learning" means. ChatGPT now has memory that lasts beyond the lifetime of its context buffer, so it has at least medium-term memory. (Actually I'm not entirely sure they are not just using long persistent context buffers, but anyway.)
Admittedly, you have to wrap LLMs with stuff to get them to do that. If you want to rewrite the rules to exclude that, then I will have to revise my statement that it is "mostly, but not completely true".
You also have to alter some neural pathways in your brain to follow commands. That doesn’t make it learning. Learned behavior is usually (but not always) reflected in long-term changes to neural pathways outside of the language centers of the brain, and outside of short-term memory. Once you forget the command and still apply the behavior, that is learning.
I think SRS (spaced repetition) schedulers are a good example of a machine learning algorithm that learns from its previous interactions. If you run the optimizer you will end up with a different weight matrix, and flashcards will be scheduled differently. It has learned how well you retain these cards. But an LLM that is simply following orders has not learned anything, unless you feed the previous interaction back into the system to alter future outcomes, regardless of whether it “remembers” the original interactions. With the SRS, your review history can be completely forgotten about. You could delete it, but the weight matrix keeps the optimized weights. If you delete your chat history with ChatGPT, it will not behave any differently based on the previous interaction.
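A toy sketch of the "learning lives in the fitted weights, not the raw history" point. This is loosely in the spirit of spaced-repetition optimizers like FSRS, but the class, the numbers, and the update rule here are all invented for illustration:

```python
class ToyScheduler:
    """Toy spaced-repetition scheduler: the 'learning' lives in the
    fitted parameter, not in the raw review history. All numbers and
    the update rule are invented for illustration."""

    def __init__(self):
        self.ease = 2.5     # learned parameter (interval multiplier)
        self.history = []   # raw interactions

    def review(self, recalled: bool):
        self.history.append(recalled)
        # 'Optimizer' step: nudge the parameter based on the outcome.
        self.ease = max(1.3, self.ease + (0.1 if recalled else -0.2))

    def next_interval(self, days: float) -> float:
        return days * self.ease

s = ToyScheduler()
for outcome in [True, True, False, True]:
    s.review(outcome)

s.history.clear()        # delete the raw review history...
assert s.ease != 2.5     # ...but the learned parameter remains
```

Deleting `history` changes nothing about future scheduling, which is exactly the asymmetry being described: the interactions are disposable once they've been folded into the weights.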
Yes, with few-shot prompting. You need to provide at least two examples of similar instructions and their corresponding solutions. But when you have to build the few-shot examples every time you prompt, it feels like you're doing the work already.
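Assembling such a few-shot prompt is just string formatting; a minimal sketch, where the `Input:`/`Output:` format and the example pairs are made up:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a prompt from (instruction, solution) pairs plus the new query."""
    parts = [f"Input: {instruction}\nOutput: {solution}"
             for instruction, solution in examples]
    parts.append(f"Input: {query}\nOutput:")  # the model completes from here
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    [("replace 'is' with 'are': it is here", "it are here"),
     ("replace 'is' with 'are': this is fine", "this are fine")],
    "replace 'is' with 'are': that is odd",
)
print(prompt)
```

Which also illustrates the complaint: writing two worked solutions by hand is most of the work the model was supposed to save.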
You just explained how your work was affected by a big multiplier. At the end of training an intern you get a trained intern -- potentially a huge multiplier. ChatGPT is like an intern you can never train and will never get much better.
These are the same people who would no longer create or participate deeply in OSS (a +100x multiplier) bragging about the +2x multiplier they got in exchange.
The first person you pass your knowledge onto can pass it onto a second. ChatGPT will not only never build knowledge, it will never turn from the learner to the mentor passing hard-won knowledge on to another learner.
Motivation is some combination of real and perceived effort vs. expected reward. Shorter isn't always better. E.g. counting every single calorie is the shorter way to lose weight, but for most people, eating approximately healthy is more optimal from an effort/motivation point of view.
I think both have a place. When someone is starting for the first time, they're enthusiastic, but they haven't built faith in the process yet. It's easy for them to lose confidence if they're putting in work but the results are slow or ambiguous. I think it's best to take advantage of their beginner's enthusiasm and kick them off with something higher effort that is guaranteed to show them clear results. After they build confidence they can settle in to something lower effort (aka "more sustainable") where the benefit is longer-term and you don't see dramatic results every week.
The patterns are different on different string sets. You don't need to learn DEF with the same pattern again, but you do need to learn all the ways of playing CDE
But then the pattern across strings is also "relative" and only depends on the guitar tuning you're playing. For instance in the standard tuning, two neighboring strings are always a perfect fourth apart (five frets) unless they're the G-B strings in which case they're a major third apart (four frets). So if you know where you'd be playing a note on one string, that same note is just five frets back or four frets back on the next one. Which is again a totally "relative" framing that works for any individual note the same way. You can even figure out where you'd have to play if the tuning was non-standard. These patterns only have to be practiced a little bit, there's not really any need to learn them from scratch.
(If anything, I would want a "guitar learning" app to automatically come up with its own exercises, similar to ear training apps for learning to recognize intervals - and using something like a spaced repetition approach to evaluate how the user is doing.)
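The interval arithmetic described above can be written down directly; a small sketch assuming standard EADGBE tuning:

```python
# Open-string pitches in semitones above low E, standard tuning (E A D G B e).
# Adjacent strings are a perfect fourth (5 semitones) apart,
# except G -> B, which is a major third (4 semitones).
OPEN_STRINGS = [0, 5, 10, 15, 19, 24]

def same_note_on_next_string(string_index: int, fret: int) -> int:
    """Fret on the next-higher string that sounds the same pitch."""
    interval = OPEN_STRINGS[string_index + 1] - OPEN_STRINGS[string_index]
    return fret - interval  # 5 frets back, or 4 back across G -> B

# 5th fret on the low E string sounds the open A string.
assert same_note_on_next_string(0, 5) == 0
# 4th fret on the G string sounds the open B string.
assert same_note_on_next_string(3, 4) == 0
```

Swapping out `OPEN_STRINGS` covers any non-standard tuning, which is the "relative framing" point: nothing here has to be memorized per note.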
This is why learning guitar when I was younger was so difficult for me; people just presented things like "you have to learn these 5 scale patterns" but they didn't really go into why, it was just "memorize this stuff and then you'll be good!", but I hate rote memorization without understanding the underlying principles. I'm old and didn't have the internet back then so I was just learning from various books or friends and it was slow going, but I still see things like this presented in tons of Youtube videos today.
I've since gone back and learned a bit of music theory as an adult and it's been super helpful understanding the underlying principles so I can work things out vs. having to just memorize things without understanding why they work.
I think then you can go practice the various scale patterns and get good at them with the knowledge that you can always work out the scale from first principles if you need to.
Different strokes for different folks though I guess, I'm sure there's an argument to be made for not overwhelming folks with too much theory out of the gate. Not sure if I had started with a bunch of theory if I would have stuck with it when I was younger.
> not overwhelming folks with too much theory out of the gate.
Thing is, it's not even "too much theory". It really is just a simple tone-semitone pattern and a few bare facts about how the usual guitar tuning works, that you'd know anyway if you've ever had to tune your guitar by hand. That's all it takes to make the guitar explainable from first principles. Then sure, you can practice the "patterns" all you want for convenience's sake, but you don't have to commit anything to memory that you could not figure out again from scratch if needed.
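For what it's worth, that tone-semitone pattern (whole, whole, half, whole, whole, whole, half) really is all there is to it; a sketch deriving any major scale from it:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # tone/semitone pattern, in semitones

def major_scale(root: str):
    i = NOTES.index(root)
    scale = [root]
    for step in MAJOR_STEPS[:-1]:  # the final step just closes the octave
        i = (i + step) % 12
        scale.append(NOTES[i])
    return scale

assert major_scale("C") == ["C", "D", "E", "F", "G", "A", "B"]
```

(Spelling flats as sharps for simplicity.) Combine this with where the open strings sit and every "pattern" on the neck falls out, rather than being a separate thing to memorize.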
I would agree with you. Watching videos where someone goes "hey, so here's this pattern that completely unlocks the neck for you, don't be stuck in a box anymore!", when all they're really showing you is where the various notes in a key are along the neck (now I know that, but I wouldn't have known it before), is WAY more confusing than just learning how the pentatonic scale works and how to find the notes in a key. And the funny thing is, the only reason I was stuck in that box in the first place is silly rote memorization without understanding why you play the various notes in a scale; the confusion compounds when you just learn patterns instead of the underlying principles.
But again, I'm completely amateur at this stuff still, and I don't have any experience teaching other folks an instrument, so it's hard for me to say with any certainty that we should be teaching it one way or another I guess.
> This is why learning guitar when I was younger was so difficult for me
I agree. The downvoted op is right in a way. Guitarists have a way of making things difficult. Just learn to play the 1-octave major scale/arpeggio, and triads, then 7th shells. The guitar is relative: a 1-octave scale shape is the same anywhere on the neck, whereas on a keyboard it's positional.
However it's worth mentioning that I think Berklee does teach patterns, and a few jazz guitarists say to learn them too. It almost seems like learning guitar is not as worked out as other instruments. Everybody that gets good winds up having to learn all the things other guitarists have had to figure out over the years, after they rote learned it.
Nearly every popular-music guitar lesson series in the world teaches the five pentatonic patterns--the few exceptions being those that focus on classical guitar or non-Western music. You might find this article interesting.
There is definitely theory to how they are constructed, and you are right that the shifts and adjustments can be derived if you think about it and practice it. But that's just a longer way to the five patterns.
Lots of this rings true. Especially the "Solutions to plateaus are straightforward but not easy." part. So much of this is psychological - both in being objective about your failings and in having to live with them every day until you fix them.
P.S. Nice of him to not mention the "built an unscalable org and burnt myself out" plateau :)
Some thoughts (I used to run a chain of 20+ coworking spaces)
* Remote individual workers are the least attractive customer for the typical coworking space (that isn't a cafe). Small and unpredictable revenue, plus higher support per head, since the account is only one head.
* Remote workers also aren't a great fit for you. Very few like to hop around. Their reasons are typically avoiding loneliness, finding a reliable place to focus etc. And so would only use your platform until they find a space they like.
* SMBs are the sweet spot. Coworking is cheaper AND less overhead for them. In many countries they don't get access to grade A space even if they're willing to pay. For the coworking space, it's only slightly more work than a solo account and significantly more recurring and reliable revenue. You may be better off targeting small teams.
* One particular pain point is expiring inventory - remote workers actually fit this well since they're willing to go for a floating desk. Most spaces would be willing to offer discounts on this. Kind of like last minute flight or hotel deals.
* Another related product is meeting and conference room bookings. Also expires and has a market in WFH teams.
* The last two also have a better business model fit since they are intermittent and people may be more inclined to shop around, allowing you to take a cut of every transaction. For any kind of recurring contract, you're probably limited to taking a one time lead gen or brokerage fee since you have no grounds to maintain a relationship with the customer after the initial match.