Well, you know I simply can’t resist throwing my hat in the ring. If everyone else is going to stand on a soapbox and decry the end of all data pipelines as we know it, you can be sure I’m going to climb to the top of the old oak tree and scream my truth to the glassy-eyed masses who’ve been sitting in their echo chambers for far too long.
I can see all you old-school SQL Server buggers just squirming in your seats, hot under the collar. How dare they speak of such sacred, holy things without reference and a nod to the past? Methinks someone, somewhere, once said that there is nothing new under the sun. This indeed is true.
So, you want to join the cool kids on the block and don’t know where to start? Do you have stars in your eyes, thinking that adopting Databricks as a Data Platform is the cure-all for the data woes you have today?
In this episode, I sit down with industry veteran Robin Moffatt, Sr. Principal Advisor in Streaming Data Technologies (Kafka, etc.) and a longtime voice in the data engineering community, to unpack the journey from old-school data architectures to today’s real-time streaming ecosystems. From early mainframe data processing and COBOL through the rise of Apache Kafka, streaming ETL, and event-driven systems, Robin shares lived experience from across decades of building, scaling, and evolving data platforms.
I rarely get excited about happenings in the data world. When you’ve been staring at CSV files for 20 years, heard the same problems argued over and over again with only a different tool name at the front of the sentence, things can get a little … mundane.
Andy’s career didn’t start in software at all. It started with physical circuits, literally wiring systems as an electrician, before moving into programming, databases, and eventually decades of hands-on data engineering work.
It’s an interesting time to be in software and data; the world of generative AI is changing the landscape beneath our feet. I don’t see this as a bad thing for software folk, but as an opportunity to learn, build, and understand the technologies used in an LLM and AI context.
You can’t expect an LLM trained two years ago to be up-to-date on what the new and best approaches are for X, Y, Z tech.
Every day I go out and do the Lord’s work for y’all. On an average evening or weekend, you can find me setting up cloud instances, generating data, deploying code, and running tests. I don’t answer to anyone. I write what I please and what I find. The simple approach: poke things, turn over rocks, ruffle the feathers of many powerful people.
Anyone who’s been around for more than a decade or so in the programming, development, and data world might get a slight eye twitch when the words “database driver” appear. Before the modern times we live in came along, the entire data world was driven by SQL Server and Oracle, with just a sprinkling of Postgres and MySQL … à la AWS. That’s just the way it was.
It seems we have several cadres of people when it comes to “clean code.” I know there is a lot of baggage that comes with that nomenclature, good and bad. But I think we can approach “clean code” from a simplistic point of view. It doesn’t have to be that complex.