It's a bit like saying algorithms don't matter for solving computational problems. Two different algorithms might produce equivalent results, but if you have to wait years for an answer when seconds matter, the slow algorithm isn't helpful.
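To make that concrete, here's a toy sketch (plain Python, names are my own) of two functions that return identical results at wildly different cost:

```python
def fib_naive(n: int) -> int:
    # O(2^n): recomputes the same subproblems over and over
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_iterative(n: int) -> int:
    # O(n): the same answer, reached in a handful of additions
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_naive(30) == fib_iterative(30)  # equivalent results
# fib_iterative(500) returns instantly; fib_naive(500) would not
# finish in your lifetime.
```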
I believe the current approach, a mostly feed-forward architecture at inference time, with well-filtered training data and backpropagation applied in discrete "training cycles", has limitations. I know this has been tried in the past, but something modelling how animal brains actually function, with continuous feedback and no explicit "training" phase (we're always being trained), might be the key.
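A toy sketch of the distinction I mean (illustrative Python, a one-weight "network" learning y = 2x; everything here is invented for the example, not any real framework):

```python
import random

def sample():
    x = random.uniform(-1.0, 1.0)
    return x, 2.0 * x  # the target function the weight should learn

# Current paradigm: a discrete training phase, then frozen feed-forward inference.
w = 0.0
for _ in range(1000):                  # explicit "training cycle"
    x, y = sample()
    w -= 0.1 * 2.0 * (w * x - y) * x   # gradient step on squared error
predict = lambda x: w * x              # w is frozen from here on

# Continuous alternative: every interaction is also a learning signal,
# so there is no separate training phase at all.
w2 = 0.0
def act_and_learn(x, feedback):
    global w2
    y_hat = w2 * x                            # act (inference)
    w2 -= 0.1 * 2.0 * (y_hat - feedback) * x  # adapt immediately from feedback
    return y_hat
```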
Unfortunately our knowledge of "what's really going on" in the brain is still limited; investigative methods are crude because the brain is difficult to image at the resolution we need, and in real time. Last I checked no one's quite figured out how memory works, for example: whether it's "stored in the network" somehow through feedback (like an SR latch or flip-flop in electronics), or whether there's some underlying chemical process within the neuron itself (we know that chemicals definitely regulate brain function; we don't know how much it goes the other way, i.e. whether brain activity can use them to encode state).
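For the electronics analogy, here's a toy SR latch (cross-coupled NOR gates) in Python, showing how pure feedback can store a bit even though no variable is designated as "memory". Purely illustrative, all names mine:

```python
def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int, nq: int):
    # Let the cross-coupled feedback loop settle for a few rounds.
    for _ in range(4):
        q = nor(r, nq)
        nq = nor(s, q)
    return q, nq

q, nq = 0, 1                   # initial state: q == 0
q, nq = sr_latch(1, 0, q, nq)  # Set   -> q == 1
q, nq = sr_latch(0, 0, q, nq)  # Hold  -> q stays 1: the loop *remembers*
q, nq = sr_latch(0, 1, q, nq)  # Reset -> q == 0
q, nq = sr_latch(0, 0, q, nq)  # Hold  -> q stays 0
```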
> I don't think architecture matters. It seems to be more a function of the data somehow.
of course it matters
if I supply the ants in my garden with instructions on how to build tanks and stealth bombers, they're still not going to be able to conquer my front room
When we arrive at AGI, you can be certain it will not contain a Transformer.