I'd be very curious to hear some elaboration on this claim; it's the first time I've ever seen it.
From my point of view, the fact that TensorFlow completely scrapped its entire ≤v1.0 graph-based API in favor of a PyTorch-style eager-execution API from v2.0 onwards does not suggest that it was “designed from the ground up” in any way, shape, or form. We're talking about a total rewrite of the entire frontend codebase going from v1.0 to v2.0, and a total abandonment of all v1.0-style code. TF 2.0 is effectively a brand-new framework.
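For anyone who hasn't seen both styles, here's a minimal sketch of the break (the two snippets target TF 1.x and TF 2.x respectively, so they can't run under the same installed version):

```python
# TF 1.x: declaratively build a static graph, then execute it in a Session.
import tensorflow as tf  # requires tensorflow < 2.0

x = tf.placeholder(tf.float32, shape=())
y = x * 2.0  # adds a node to the graph; nothing is computed yet
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 3.0}))  # 6.0
```

```python
# TF 2.x: eager execution by default, PyTorch-style -- ops run immediately.
import tensorflow as tf  # requires tensorflow >= 2.0

x = tf.constant(3.0)
y = x * 2.0  # computed right away, no Session or feed_dict
print(y)  # tf.Tensor(6.0, shape=(), dtype=float32)
```

None of the v1 idioms (placeholders, sessions, feed dicts) survive in idiomatic v2 code, which is the sense in which I mean "total abandonment."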
Under the hood it's all the same BLAS/LAPACK operations anyway. I've found TF superior to PyTorch for statistical methods and probabilistic programming in general (far richer API), but for standard linear algebra the two are equivalent performance-wise, with PyTorch having the superior API, IMHO.
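To make "far richer API" concrete (hedged example: I'm assuming the relevant comparison is TensorFlow Probability vs. torch.distributions, which is my guess at what's being compared; both libraries must be installed to run this):

```python
import tensorflow_probability as tfp
import torch

tfd = tfp.distributions

# Both sides expose the same basic distribution API...
print(tfd.Normal(loc=0.0, scale=1.0).log_prob(0.5))
print(torch.distributions.Normal(loc=0.0, scale=1.0).log_prob(torch.tensor(0.5)))

# ...but TFP also ships bijectors, joint distributions, and MCMC
# transition kernels out of the box. Illustrative example: a
# Hamiltonian Monte Carlo kernel targeting a standard normal.
kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=tfd.Normal(0.0, 1.0).log_prob,
    step_size=0.1,
    num_leapfrog_steps=3,
)
```

In core PyTorch you'd pull in a third-party library (e.g. Pyro) for that second half, which is roughly what I mean by the richer stats API on the TF side.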