> However this time I didn’t rush to implement, thanks mainly to the influence of Ed Ashcroft. In retrospect Ed’s first impulse when he had an idea was to document, not implement. We produced a series of widely read papers and soon had other researchers and brilliant students of our own working on it.
I'm not sure this actually applies to most people. The document vs implement struggle is something I see in modern software shops a lot. It certainly depends on the personality: some people just like thinking through everything and producing a coherent writeup before writing a single line of code, but for some writing code itself is an act that clarifies the still-murky concepts and helps to produce a good writeup.
But in the end, most people aren't producing technologies novel enough that a complete writeup pays off. You don't write something brilliant and have other researchers work on it; you merely write something so that a future colleague can understand your code quickly. It's a very different class of software development.
> "for some writing code itself is an act that clarifies the still-murky concepts and helps to produce a good writeup."
This is my philosophy after ~2 years of working on distributed systems. I got sick of vague design discussions where everyone has a 5-minute memory.
I am much happier to sit at my desk and figure it out by writing code, then produce a design doc once I have a solid prototype working. It feels more honest and real.
At the same time, I wonder if I'm missing out on a different way of doing things.
If I'm working within a system I like to code the minimum golden path, add a test that passes, then add more checks or a new test and go back to the code, etc. Usually the constraints of the system point the way to an obvious skeletal implementation and you can learn as you go along.
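As a toy sketch of that loop (every name here is invented for illustration, not taken from any real project), the first couple of iterations might look like this: write the golden path, pin it with one passing test, then add the next check and circle back to the code.

```python
# Iteration 1: minimum golden path, plus one test that passes.
def parse_duration(text):
    """Parse a duration like '5m' into seconds. Golden path only."""
    value, unit = int(text[:-1]), text[-1]
    return value * {"s": 1, "m": 60, "h": 3600}[unit]

def test_golden_path():
    assert parse_duration("5m") == 300
    assert parse_duration("2h") == 7200

# Iteration 2: add the next check first; it exposes that unknown units
# currently blow up with a raw KeyError, pointing at the next code change.
def test_unknown_unit_not_yet_handled():
    try:
        parse_duration("5d")
    except KeyError:
        pass  # next iteration: replace this with a clear ValueError

test_golden_path()
test_unknown_unit_not_yet_handled()
```

The constraints show up as you go: each new test names the next gap, so the skeleton grows one check at a time instead of being designed up front.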
On a greenfield project it is especially valuable to have discussions beforehand if people have done something similar before but talking about code at a high level tends to become nebulous rather quickly.
I think the philosophy behind TDD is really valuable in the sense of "start with a goal in mind", but I find that I get better results with running and iterating than going to the effort of writing tests all the way out.
Probably just a mismatch between our testing tools and the types of things I usually need to develop.
If the problem is in a space that we know what the solution 'should' look like, getting as much of that down in the most abstract sense is something I like to do. However that design is only a -proposed- solution, and of course subject to all sorts of changes.
But where things are murky in a design, yeah, sometimes the easiest thing to do is code it out.
In either case, (up to date) diagrams of data flows are one's friend in any distributed system.
I think it depends on the size/scope of what you're doing, how complex it is, and how much novelty (technical and domain) there is in it.
For things that are not terribly new, are small enough to knock out with 1-2 devs, and aren't massively complex, just writing some damn code works really well. But the larger it gets the more useful it is to do some thinking beforehand, especially about the overall architecture, technology choices, potentially tricky error cases, etc. Also it can be particularly useful to flesh out public/widely-used APIs and data storage details, as those tend to be difficult to change once put in place.
That said, I don't think a fully fleshed-out "specification" is useful in most cases. A wiki page, or maybe a few, is usually enough. You just want to spot major problems far enough ahead that you can avoid them instead of running into them.
But hey, YMMV, this is just what I've found. TBH I wish it was different, because I actually _like_ hacking out code more. It's just that thinking ahead _works_ better IME.
I do this a lot too, it works great. For any non-trivial work, we don’t really consider a design doc “good” unless the author wrote a proof-of-concept demonstrating the feasibility of whatever approach is being advocated, specifically focused on whatever is novel or risky with the project. Iterative development starts before the doc is ever reviewed.
> I am much happier to sit at my desk and figure it out by writing code, then produce a design doc once I have a solid prototype working. It feels more honest and real
Can relate, every single time I did the opposite, I ended up rewriting anyway because I wasn't following the design doc.
I don't think that writing software is a linear process, as much as we software developers would like it to be. Broadly speaking, there tend to be two approaches. The first is documenting and designing. The second is prototyping some code. I actually sit somewhere right in the middle: I prototype and use that prototype to inform my document, then switch back and forth between the two. Really, for me, it's a circular process, not a linear one.
Most modern software shops don't document at all. They would definitely benefit from writing down a problem statement and description of the solution so that everyone is on the same page.
Documenting doesn't have to mean design-first development, it's a way to make sure you're working the right problems and building the desired solutions before you waste your time writing code.
I've tried to force myself to write something in English before coding, and... it's not 100% followed, but it's often helped. Sometimes I'm not even prescribing the full 'solution', but it's a sketch of the pros/cons of a couple approaches. This can help me remind myself later of why a particular option was chosen, even if it turns out to be bad.
I can't say if anyone else who looks at the code ever also reads what I've written - it seems that generally they don't (even though I announce it and it's in the commit activity stream and PRs).
Writing a proper, decent software specification is what, IMO, is missing from most modern-day software development. The idea is that someone who knows deeply enough what AND how to build pens down the vague ideas as concretely as needed, and hands off the rest to the team. In fact, every engineer needs a plan, whether for themselves or for the team, whether implicit or explicit. Having an explicit plan for a project's development is what I find results in a coherent, high-fidelity product.
Agile software development is used by many as a disguise for not knowing what to build. Countless hours have been wasted on "iteration". I'm not saying agile is bad, just that many adopt agile without knowing why they should.
I had the bad fortune of taking this guy's "computer science" class a long time ago as an undergrad. He spent the entire first week covering WW2 and other history. Then during the rest of the semester didn't teach anything.
The icing on the cake was when we were handed evaluation forms he said to the class "It doesn't matter what you fill out because I have tenure."
I clearly remember teaching the resolution proof method, adversarial graph search (games), the state of automatic language translation, Bayesian spam filtering ... maybe you weren't paying attention.
Not really. I don't remember learning anything and I still got an A just like most of the other students. Teaching is more than just providing a syllabus and then lazing around in the classroom.
I didn’t find any valuable insight in this. Get help and build for the future... ok? Also, I felt his anecdotes barely got whatever point he was trying to make across. And half the post consisted of name drops of people I’d never heard of.
Hey, Bill Wadge was a prof of mine! It was a topics in Artificial Intelligence course just before the Deep Learning boom. It was all about entailment and modal logics and frankly seemed really out of date. It was interesting, however.
I took his meaning as "assume better hardware will be available in the future", which is not what you seem to mean, and may not be as true these days anyway.
That said, when I've seen people over-engineer things (and maybe this isn't your problem, it's just what I've seen most) it's usually because instead of solving the problem in front of them, they created a flexible framework to solve a class of problems including the one in front of them. And it usually fails because they don't actually know what other problems they'll encounter - they guess and get it wrong. So their framework is flexible on Dimension X, but they actually need it to be flexible on Dimension Y.
The best advice I read on this was to never build a framework until you have at least 3 examples of the problem. Once you've got 3 examples, you have a good feel for which ways it needs to be flexible. I would add that you should also make sure you'll eventually have more than 3, because if 3 is the total number of concrete examples you still don't need a framework.
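A toy illustration of that rule of three (all names hypothetical): write the three concrete cases first, and only then extract the shared abstraction, once the axis of variation is actually visible.

```python
# Three concrete examples first -- no framework yet.
def csv_report(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

def tsv_report(rows):
    return "\n".join("\t".join(map(str, r)) for r in rows)

def pipe_report(rows):
    return "\n".join("|".join(map(str, r)) for r in rows)

# With three examples in hand, the flexible dimension is obvious:
# only the delimiter varies, so that is the one parameter to extract.
def delimited_report(rows, sep):
    return "\n".join(sep.join(map(str, r)) for r in rows)
```

Had you generalized after the first example, you might have parameterized a dimension that never varies (quoting, encoding, row ordering) and still hard-coded the one that does. Three examples make Dimension X vs Dimension Y an observation rather than a guess.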
Also, it's entirely possible some smart guy/gal can give you those 3 examples before you even start building. A good product person can do that. If you have that, and they're pretty sure you'll actually get there, then go ahead and build it.
Difficulty of changing it in the future also plays into this, but I've typed too much already. :)
I find the opposite. This topic is fascinating to me actually.
I'm a system oriented thinker. I don't know how I do it, but I'm able to design forward-thinking systems in my head... really well. Like large complex systems. I've since validated this with retrospectives (checking my past decisions, how they worked out, and how they compare to others). The systems turn out to be fairly elegant in their simplicity and ability to scale.
So what's going on here? I think maybe the answer is pretty simple. Some people are good at building large abstract systems and some people aren't. In fact one thing I'm bad at is narrowly focusing on problems. My solutions are always good enough but not great there.
Oversimplified example: if you were to ask me to build a highly performant sorting algorithm I'd probably just give you an okay solution. But if you were to ask me to design a distributed system that runs these sorting algorithms (where the algorithms itself is a black box; implemented already) I'd do really well!
Back in the 1990s, when I was a software engineer at a database company in the Bay Area (rhymed with "PsyBass", if you pronounce "bass" as a musical instrument :), there was a joke among some of the architects along the lines of, "I DREW the boxes and arrows! If you can't implement it, that's your fault!" Now, as a "software architect" in a really, really small shop, I basically just try to lead by example, and make sure my ideas work as code before inflicting them on anyone else. But that sorta sounds like keeping it all to myself until I am sure it works, per the article. I like to document (more than most), but have found over the years that the only person who reads my documentation tends to be me. So writing docs or writing code, either way, tends to help me refine my ideas.
Yep. You can write for a future where desktops boast single-core speeds in the 10GHz range. That's how you get Crysis, which still has performance issues on the latest CPUs.
I was a CS student at Warwick (UK university) in the late 80s, and clearly remember doing functional programming with Lucid. Never heard of it since.