What do you consider to be deep research? I agree that engineering research is not actually deep, but once you eliminate that, almost all of the current Machine Learning work (applied, and maybe theoretical too), for instance, is also dismissed.
As a guiding principle, I think deep research is answering questions that no one has approached in quite the same way before. In my own PhD work, I used the time to learn how to answer questions sufficiently (methodology), as well as how to recognize, shape, and distill questions so that they can be approached. Developing these skills doesn't require a PhD, but the dedicated time helped me.
I've used these skills to learn deeply, answer questions thoroughly in business roles, and drive some solid change in a few large orgs. That doesn't mean I do the job better than someone without a PhD. But the PhD does signal that I have answered some deep questions to the satisfaction of others who answer deep questions (the PhD committee).
I don't have a PhD, but I have run research programs.
The deeper you get into a field, the more you realize that the parts which seem like deep research aren't, and the parts which seem like incremental improvements are actually very deep.
I can think of a number of things in machine learning that appear hard but are easy, and vice versa.
Theoretical justification for GAN improvements (e.g., the WGAN paper)? Elegant, but obvious in hindsight, even though I'm not a mathematician.
Generative models for text including entities that remain coherent for longer than a sentence? We barely know how to even start thinking about this problem.
I'm not who you asked, but here's my current take:
Deep research maximizes uncertainty reduction (in other words, information gain). Uncertainty here could be model uncertainty if you are developing models, or, more generally, a shift in the probability distribution over a particular question. "Does P=NP?" would be an example of the latter.
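To make "uncertainty reduction" concrete: for a binary question, you can measure it as the drop in Shannon entropy between your prior belief and your posterior after some result. This is a minimal sketch with made-up numbers (the prior of 0.1 and posterior of 0.02 are illustrative, not claims about any real question):

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a Bernoulli distribution with P(true) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Hypothetical prior belief in a yes/no research question.
prior = 0.1

# Suppose a new result shifts that belief down to 0.02.
posterior = 0.02

# Information gained = entropy before minus entropy after.
info_gain = entropy(prior) - entropy(posterior)
print(info_gain)
```

A result that barely moves the distribution yields almost no gain; a result that collapses it toward 0 or 1 yields a lot, which is one way to cash out "deep".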
It might be very general, applicable in many fields. Or it could be targeted at a particular field, but in a way which answers many questions.
Bayesian experimental design can do what I think is the easy part of the problem: maximizing information gained for a particular experimental problem statement. In my view, most of the time you can reasonably guess what Bayesian experimental design would tell you by looking at the state space of your experimental data, so the math may not be strictly necessary. Unfortunately, not all research is experimental. And it won't tell you, for example, if you are missing a variable.
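The "easy part" above can be sketched in a few lines: score each candidate experiment by its expected information gain (prior entropy minus the outcome-weighted posterior entropy) and pick the highest scorer. The two probe experiments and their likelihoods below are hypothetical placeholders:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli distribution with P(true) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(prior, p_pos_if_true, p_pos_if_false):
    """Expected entropy reduction about a binary hypothesis H from a binary
    experiment, given P(positive | H) and P(positive | not H)."""
    p_pos = prior * p_pos_if_true + (1 - prior) * p_pos_if_false
    gain = entropy(prior)
    for p_outcome, lik_true in [(p_pos, p_pos_if_true),
                                (1 - p_pos, 1 - p_pos_if_true)]:
        if p_outcome == 0:
            continue
        posterior = prior * lik_true / p_outcome  # Bayes' rule
        gain -= p_outcome * entropy(posterior)
    return gain

# Hypothetical comparison: a weakly discriminating probe vs. a sharp one.
prior = 0.5
weak_probe = expected_info_gain(prior, 0.6, 0.5)
sharp_probe = expected_info_gain(prior, 0.9, 0.1)
print(weak_probe, sharp_probe)
```

The machinery picks the sharp probe, as you'd guess by eye from the likelihoods alone, which is the point: the formalism often just confirms the obvious ranking, and it says nothing about variables missing from the model.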
Framing the problem (which questions to ask, and how to answer them) seems like the most important part to me. Or at least it has been in my nearly complete PhD.
These thoughts are in flux. I may have a different view in a year.
That's why it's called Machine Learning and not Artificial Intelligence. It was an intentional differentiation to avoid the pure research academics who start every presentation with 'assuming infinite compute resources'.
ML is an Applied Research discipline and all the better for it.
There's plenty of academic research in machine learning.
I've never been to an academic presentation that starts like that. In fact, more often they complain about the huge compute resources in industry.