This is mainly meant as a way to conversate with the model while you are programming with it. It's not meant to pull questions out to a team, more to pair program. A markdown file is the easiest syntax to put in an LLM prompt, and also just the easiest thing to have open and answer questions in. If I had more time I would build an extension for Cursor.
Why not have the model ask in the chat? It's a lot easier to just talk to it than to open a file. The article mentions Cursor, so it sounds like you're already using Cursor?
Because I only have 500 requests in my Cursor usage plan, so if there's a way for Claude to ask me questions (e.g. for missing context) without it consuming an entire new request, I'll take it. Haven't tried it yet but looking forward to it.
That would probably work better; this is just how I threw it together as an internal tool a long time ago. I just improved it and shipped it in order to open-source it.
The word "conversate" is in the dictionary [1], labelled as "non-standard". That doesn't mean it's not a word. Most people would be able to easily infer its meaning.
I wouldn't agree that using an understandable word that's in the Merriam-Webster dictionary is being ignorant or lazy. Nor would I call something AI slop because of a single word, without otherwise engaging with the content.
I do genuinely wonder why some people can be so derailed by odd or unfamiliar words and grammar. Are they stressed? Not wanting to engage with a conversation? Trying to assert status or intellectual superiority? Being aggressive and domineering to assuage their self-worth? Perhaps they feel threatened by cultural change? I assume it has something to do with emotional regulation, given that I can't recall bumping into too many mature people who do such things.
Human in the loop means despite your best efforts at initial prompting (which is what rules are), there will always be the need to say "no, that's wrong, now do this instead". Expecting to be able to write enough rules for the model to work fully autonomously through your problem is indeed wishing for AGI.
In my example, the human would be in the loop in exactly the same way as the technique in the article. The human can tell the model that it's wrong and what to do instead.
Tools like the one in the article are also "rules".
"If you don't know the answer to a question and need the answer to continue, ask me before continuing"
Will you have some other person answer the question?