"Looping in others" is an opportunity for others to say NO, or to try and redirect you, or to jump (unwanted and unneeded) onto your band wagon. If I'm working with true peers (not age or title peers, but truly similar in talent and ability), then their early involvement or collaboration is great.
Generally, however, I prefer to get solutions developed to the point where they are their own proof. It's much harder to argue against something that's (near or entirely) completed and which proves to work. It's fine for me to then involve others to get their feedback, etc.
Proof to whoever the stakeholders are, presumably the customers in your model.
Proof that it satisfies what the customer needs or wants - something which can't be demonstrated until the would-be solution reaches some level of demonstrable maturity.
For many engineering solutions, you can talk about them early, handwave, give presentations - but nothing conveys that a thing actually works, or sometimes even makes sense, short of a working demonstration.
Why so quick to assume it's a "pet" project? If the developer is doing it right, they are making exactly what the customer needs, but the customer does not have the developer's expertise to recognise what will work at an early stage.
We are not talking about things everyone understands and could have an opinion on, like say the colour and layout of the login window.
Take something that requires deeper work and hired expertise. For example, suppose you've started to design a new kind of GPU technique that will double the customer's calculation throughput for the same power consumption. They hired you for your expertise to solve that sort of problem. They have no idea how it works, though, and for the first month you have only an outline of a new algorithm that only peers at your own level could understand well - and you don't have any peers in that area. Most of figuring out the details happens on paper and whiteboards - math and logic, along with throwaway experiments to measure. The first line of code in a working demo isn't possible to write until 3 weeks in, and after that it takes only 1 day of coding and 2 days of polish to finish.
That sort of thing isn't a pet project. It might not even be fun; it might be quite stressful - after all, pursuing such paths is a higher risk to the developer. It doesn't fit into daily PRs and isn't even a coding problem at first. Yet it's exactly what the customer needs, what they said they hired you to accomplish, and what they will appreciate if it has reached the point of "proof" when shown.
It will be incorrectly assumed to be the developer's pet project if it's demonstrated before it works, though. Thus the reason to build it up to "proof" level, if you're sure the customer needs it.
So... how do you know what those people want? How do you know what you're building in 3 weeks isn't something totally different from what the users now want?
The answer is that you, alone, don't. By spending more than a small amount of time building out the simplest possible version of what the customer wants, you get their feedback along the way.
It's generally not true that you need many weeks, or even a full week, to build something you can show a user. If you can't think of a way to provide value in a few days, usually you're just not being creative enough about what to show your customer.
There are rare exceptions, but in the vast majority of cases you can build something for the user quickly, even when you initially think you can't.
This isn't even my opinion; it's the observed practice of a whole generation of developers. Read "The Art of Agile Development" if you want to know more - it's extremely helpful if you're trying to understand how to achieve the kind of delivery I'm talking about.
You know what they want because they told you, up front, when they hired you.
In the example, let's say the customer has said they would find it really useful if the GPU could calculate twice as much on the same power, but they don't know how that could be done. Perhaps it's in a self-driving car and these are hard specifications. And then they crank up some agile "creative" process, encouraging devs, including you, to work on whatever areas of the system make sense to improve it - as long as you ship code PRs daily along with a little report.
You talk to them about your idea, and they don't understand it. It's not convincing to them, but it is to you - and you're the expert on this. You were hired for your expertise, and the customer doesn't have anyone else with it in the area of GPU algorithm design. Your team peers are either busy or don't understand the area well enough. The customer can't recognise what will solve their stated problem, and you can't convince anyone that your method will work until you have reached the "proof" stage - after 3 weeks of not producing any PRs, which runs against what the customer or your managers incorrectly believe a competent developer would invariably produce almost daily.
I'm not saying you should do that as a dev. I'm saying it's an example of what "proof" refers to, while being something that gives a customer what they said they need, and not a dev's pet project. It addresses the incorrect assumptions implied in "Proof to whom? Shouldn't you be developing things your customers need, not some pet project you enjoy working on for its own sake?"
There is no pet project, and you don't actually enjoy working on it all that much. You're just trying to solve the customer's stated problem in an environment where they openly say what they'd really want, yet have set up a process which prevents it - a very common combination.
> You know what they want because they told you, up front, when they hired you.
This is, very often, a fatal mistake, and what they hired you to do is not what they need, nor what they want you to do in three weeks, let alone three days.
Sure, but now that's just arguing in circles. Whatever I'd say about what they've shown they need in a simplified example for discussion, you can find a matching argument that it's not what they really need - but that argument doesn't address anything. It's a "no true Scotsman".
Assume, for the sake of the point, that you are a competent and experienced dev, and that you have somehow correctly ascertained something the customer needs - most likely by talking with them.
The point of this discussion is to ask: are there circumstances where someone needs to "work on one's own" for a while before others will recognise that a solution to the clear need of their customer is in progress? And are there circumstances where not doing that will result in the solution being blocked, without any useful replacement taking its place? There are countless anecdotes, in all walks of life and not just software development, pointing to the answer to both questions being a strong yes.
That's your mistake; it's not a matter of competency and experience. You are not a solo venture; you are a member of a team, and part of that team needs to be a customer representative who can tell you, in real time, what the customer wants.
There is no room for a hero engineer in modern software development. That's not how it works anymore, and it may never have worked that way.
Your countless anecdotes of this working stack only a fraction as high as the countless anecdotes of what you're describing failing completely.
This battle was fought in the 00s and won in the early 2010s. Agile produces results, waterfall generally does not.