I've worked in BI (end-to-end: data modelling, reporting, ETL, etc.) for more than 10 years now across various organisations, and since "data science" became all the rage, I've had the pleasure of working with a few data scientists. From what I've seen so far, they are very good statisticians (some of them university lecturers), but when it comes to building ETL pipelines, I don't think any of them could actually do it properly. Properly as in an ETL process that connects to various data sources, writes to logs, and is repeatable, restartable and so on (a sketch of what I mean is at the end of this comment). Learning to build a proper ETL process is not easy, and neither is learning to "do data science" correctly.

I see it as more productive (from my personal experience) to let the "data engineers" do the "data engineering" work - build data models, ETLs, etc. - and let the "data scientists" do the "data science" work - build and fiddle with statistical models. Just as with "full stack" developers and the separation between "back end" and "front end" work, it is usually better to let each do what they do best, unless you have people who can genuinely do both properly (they are hard to find, and even then they tend to be stronger in one area than the other).

The frustration between the two camps - data "engineers" and "scientists" - is usually due to mismanagement (distinct teams doing each bit separately, coordinated through one or more management layers) rather than to a suboptimal division and allocation of labour. Small teams of two to four people with the right mix of experts would benefit from the strengths of both types of data professional and would avoid the problems of syncing the effort.
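To make "properly" concrete, here is a minimal sketch of a restartable, idempotent load step with logging, in Python with SQLite. All the names here (the etl_checkpoint table, load_batch, the sales schema) are hypothetical and only illustrate the properties - a real pipeline would use your warehouse and orchestrator of choice:

```python
import logging
import sqlite3

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("etl")

def already_loaded(conn, batch_id):
    # Restartability: batches recorded as complete are skipped on rerun.
    return conn.execute(
        "SELECT 1 FROM etl_checkpoint WHERE batch_id = ?", (batch_id,)
    ).fetchone() is not None

def load_batch(conn, batch_id, rows):
    if already_loaded(conn, batch_id):
        log.info("batch %s already loaded, skipping", batch_id)
        return
    try:
        # One transaction: the rows and the checkpoint both land, or neither does,
        # so a crash mid-load leaves the batch cleanly rerunnable.
        with conn:
            conn.executemany("INSERT INTO sales (id, amount) VALUES (?, ?)", rows)
            conn.execute("INSERT INTO etl_checkpoint (batch_id) VALUES (?)", (batch_id,))
        log.info("batch %s loaded (%d rows)", batch_id, len(rows))
    except Exception:
        log.exception("batch %s failed and was rolled back", batch_id)
        raise

conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("CREATE TABLE IF NOT EXISTS etl_checkpoint (batch_id TEXT PRIMARY KEY)")
load_batch(conn, "2024-01-15", [(1, 9.99), (2, 24.50)])
load_batch(conn, "2024-01-15", [(1, 9.99), (2, 24.50)])  # rerun: skipped, no duplicates
```

Nothing fancy, but it is exactly this kind of plumbing (checkpointing, transactions, logging, safe reruns) that I rarely see in pipelines written by people whose main strength is the statistics.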