You can't assign a category to the tokens in something like "T * p" unless you know whether "T" is a type or not. Suppose you have the following line of code:
T *p;
It means one thing if it were preceded by this, making the line a variable declaration:
struct T;
typedef struct T T;
It means quite another if it were preceded by this, making the line an expression (T multiplied by p, with the result discarded):
int T, p;
The specific details aren't too important. (Another example: "(T)-1" - very different interpretations depending on whether we previously had "typedef int T" or "int T=10".) The problem is that a line of code can't be understood - not even at the most basic level of dividing it up into appropriately-categorized tokens and arranging them into an appropriate tree-like structure - without having previously interpreted (to some extent) the code that precedes it.
From a practical perspective, this makes the compiler code more complicated, because the results of parsing now have to be analyzed and fed back into the lexer as it runs. Besides the obvious downsides of more complicated code, it also means any analysis tools that work with the source have to do this same work themselves. (For C, this probably isn't too much of a difficulty, but it's still annoying having to keep track of typedef names and so on just to parse the code! For C++, it's a very big problem indeed, on account of how much state you have to keep on top of - which is why C++ source analysis tools basically never worked properly past a certain level of complexity, until people started just using the compiler itself to do the job.)
I don't have an in-depth background in this stuff, so it's possible there are also more abstract benefits from having a grammar that doesn't suffer from this sort of problem.
Are you saying strong typing is a problem with C++ as a language? Or am I misunderstanding?