Some of the worst code I've had to work with in my nearly 10-year career so far was written by one of the smartest programmers I've met. His problem was that everything became an exercise in designing this all-encompassing abstracted-to-hell-and-back web of interfaces, services, aggregates, domains, etc etc.
Indicative of this was a request to take a table of data that already existed (on a website) and add a button to export a CSV of that data. For anyone familiar with .NET (and I assume most other languages), if you've already got the data in the format you want to export, this is literally a 10-minute task, including a unit test or two for the functionality. His quote was something on the order of 18 hours, which included time to write a set of TableDataExportService methods that would support a whole host of file formats in the future.
I've consumed enough CSV data to be wary of CSV data that was generated in ten minutes - including the unit tests. Are you sure you already have the data 'in the right format', for example? Maybe the data that's presented on the page has already been formatted for localization, so when you use that in your CSV export you put out dates in US or European format depending on the user's preferences, creating some hard-to-track-down integration bugs later. Or maybe it only includes the display name for the status code, not the status code itself, so when three months later you change the display name from 'cancelled' to 'removed', all your clients' Excel macros break.
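To make the pitfall concrete, here's a minimal sketch (in Python for illustration; the thread's context is .NET, and the row fields are hypothetical): export stable representations - ISO 8601 dates and the raw status code - rather than the already-localized display values shown on the page.

```python
import csv
import io
from datetime import date

# Hypothetical rows as they might back the page's table. The trap described
# above: exporting the *display* values (a localized date string, a display
# name) bakes presentation into the data. Export stable forms instead.
rows = [
    {"created": date(2023, 4, 5), "status_code": "CANCELLED", "status_label": "Cancelled"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["created", "status_code", "status_label"])
writer.writeheader()
for row in rows:
    writer.writerow({
        "created": row["created"].isoformat(),  # ISO 8601, not a locale-formatted string
        "status_code": row["status_code"],      # stable code survives display-name renames
        "status_label": row["status_label"],    # label still included for human readers
    })

print(buf.getvalue())
```

With the raw `status_code` column present, renaming the label from 'Cancelled' to 'Removed' later doesn't break any consumer keyed on the code.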
And once you've done that ten minute job on this page, how long will it take you to add it to the other 25 pages which also have tables of data that need CSV export? And when the table format changes to add another column, does the next developer also have to adjust your CSV output code?
Sure, YAGNI, but... there's no excuse to just throw a bunch of CSV-export logic inline into a page that previously had shown no interest in knowing how to format CSV files. Take a little longer, think about where to put the logic. There's a middle ground here.
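That middle ground might look like a single small shared helper - not inline CSV logic pasted into 25 pages, not a TableDataExportService supporting every future format. A sketch (Python for illustration; names are made up):

```python
import csv
import io
from typing import Iterable, Mapping


def rows_to_csv(rows: Iterable[Mapping[str, object]], columns: list[str]) -> str:
    """Serialize dict-like rows to a CSV string.

    One shared helper for the whole codebase: each page passes its own rows
    and column order, so adding a column later means changing only that
    page's column list, not the CSV code itself.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    for row in rows:
        writer.writerow({col: row.get(col, "") for col in columns})
    return buf.getvalue()


# Hypothetical caller: one of the 25 pages with a table to export.
print(rows_to_csv([{"id": 1, "name": "widget"}], ["id", "name"]))
```

Roughly the ten-minute job, but the next developer who adds a column or the next page that needs an export button touches one call site, not copy-pasted formatting logic.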
I started on a project filled entirely with 'senior people' once, and was really pumped about the prospect of doing Serious Programming with serious people, instead of doing a bunch of mickey mouse crap.
Six months later everything was going horribly wrong because there was not a single one of us who was willing to solve a simple problem with simple code. Everybody was engineering the hell out of every single 'solution' and the code was impossible to read.
From this I learned a couple of things. One was that I had not learned as much from my Second System Syndrome as I thought I had. The second was that every project benefits from having people who are entertained by solving 'mundane' problems, to whom you can assign all concerns that are not part of the information architecture.
But the most important is that the best solution is -never- the one that is dazzlingly brilliant. It's often the one that's subtly clever (everyone agrees "that works", but some can wax poetic about how great it is at satisfying the concern), but sometimes it's the one that's dead simple.
Few solutions are easier to replace than the dead simple one.
I don't mean to advocate against YAGNI or writing simple code. I can't speak authoritatively to your specific example, but what I am suggesting is that you default to the assumption that this developer had some reason for the choices that were made.
Maybe this client was notorious for asking for CSV export but really meant CSV, XLS, XLSX, PDF? Maybe the build and release infrastructure is so slow that any change - no matter how small - needs 3 days to be built, tested, and deployed? Maybe the complexity makes sense in other areas of the system and they decided to adopt patterns across the codebase to aid in teaching/onboarding new developers?
Just to be clear, maybe you will find that the reasons are totally invalid and this is gold-plated-abstracted-to-hell code. (It sounds like you probably will.)
But if you assume from the start that everything is an over-complicated pile of junk and you could rewrite it in a day, I think you will find yourself jaded and unhappy with your environment.
Software should be implemented as simply as possible, and then refactored as necessary when new functionality is required (the only exception is things you are very sure are going to be needed, e.g. password reset functionality on a password dialog).
Yeah this was some of the most anti-YAGNI stuff I've seen.
Which is not meant to detract from the fact that the code was great 99% of the time. It just took 10x longer than it should have and cost 10x as much and half of it was never used.