I seem to recall reading that McCarthy was actually surprised to discover that Lisp _could_ be run by a real computer; he intended it to be a completely theoretical tool.
I have also heard that, but only from secondary sources. Here is a clip of Russell talking about the time he wrote the first Lisp interpreter. He doesn't mention McCarthy being surprised, but he seems to imply that he, Russell, was quicker to grasp the idea of translating the functions McCarthy had been writing into machine code.
No, that's definitely wrong. Initially they compiled LISP code into assembly by hand (when LISP still looked a lot like Fortran), and the plan was to write a compiler in assembly to automate that. Instead, McCarthy came up with a way to express LISP code (aka M-expressions) as data (aka S-expressions), which let him define the LISP semantics in LISP itself. Steve Russell then hand-compiled this definition, and lo, they had a working interpreter of S-expressions. M-expressions were never implemented, and the compiler was written in LISP instead of assembly.
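To make the M-expression/S-expression distinction concrete: McCarthy's notation looked like car[cons[A; B]] on paper, while the data form the machine actually read was (CAR (CONS (QUOTE A) (QUOTE B))). Here's a minimal sketch of the "code as data" idea in Python - obviously not the original LISP 1 definition (which also handled lambda and label); it just evaluates a handful of primitives over nested lists:

```python
def evaluate(expr, env):
    # Atoms are symbols: look them up in the environment.
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    if op == "quote":                 # (quote x) returns x unevaluated
        return args[0]
    if op == "cond":                  # (cond (p1 e1) (p2 e2) ...)
        for test, branch in args:
            if evaluate(test, env):
                return evaluate(branch, env)
        return None
    vals = [evaluate(a, env) for a in args]
    if op == "atom":
        return not isinstance(vals[0], list)
    if op == "eq":
        return vals[0] == vals[1]
    if op == "car":
        return vals[0][0]
    if op == "cdr":
        return vals[0][1:]
    if op == "cons":
        return [vals[0]] + list(vals[1])
    raise ValueError(f"unknown operator: {op!r}")

# (car (cons (quote A) (quote (B C))))  ->  "A"
print(evaluate(["car", ["cons", ["quote", "A"], ["quote", ["B", "C"]]]], {}))
```

The point being: once the program is just nested lists, an evaluator for it is ordinary code walking a data structure, which is exactly what Russell could hand-compile.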
I really don't think that's what he's saying, and if at least some of this doesn't resonate with you, you've never looked for (programming) work after 40. The problem is that programming is, by its nature, "strange" work. In every other job, you perform some function for a long time, learn all the ins and outs of it, and then move on to manage other people who are performing that function: break their work down into tasks, assign different people to different tasks based on skill sets, suggest timeframes, and so on. That's true from sandwich making up to neurosurgery.

Programming work seems to defy that natural progression. I've been doing this for 25 years now, and I'm no better at breaking software development projects down into discrete tasks for _other people_ to carry out than I was when I started - and I've never met nor worked with anybody else who could, or even pretended they could. So we have this odd career where you start out as a programmer, and you stay a programmer until you retire. Couple that with the outsider's expectation that programming is getting easier, when the reality is that the opposite is true, and you have a _lot_ of people with a very low opinion of programmers as professionals - i.e. if you were actually any good, you wouldn't have to be doing this any more.
Yeah, I've never really understood being _offended_ by being asked to do something relatively simple (and honestly, those are actually pretty good problems). I might consider it a little amusing if somebody asked me to write FizzBuzz or something really basic, but I would shrug my shoulders and do it.
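For anyone who hasn't seen it, FizzBuzz really is about as basic as these screening problems get - something like this quick sketch (Python here, but any language would do):

```python
# FizzBuzz: print 1..100, but multiples of 3 print "Fizz",
# multiples of 5 print "Buzz", and multiples of both print "FizzBuzz".
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```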
I agree, although I might be offended if an interviewer spent more than a few minutes on such things once it was clear that you could do them. However, that's more about not respecting the value of your time than the difficulty of the problems - just like expecting you to do a day's worth of homework problems before they even meet you.
I think a lot of this religious behavior stems from an underlying belief, mostly by people who've never tried it themselves, that programming _must be_ easy and anything that makes it appear hard (and especially slow) means that somebody is making a mistake somewhere. So they go looking for a silver bullet, and the latest fad seems to fit the bill.
Please don't argue like this here. It has troll effects whether you mean to or not. Instead, please argue substantively, and get less inflammatory (not more) as a topic becomes more divisive.
There's a great, fun, programmer-centric website called thedailywtf.com (sorry, I just scheduled the rest of your afternoon for you). Developers can submit WTFs that they've found in others' code - as the site admin says, "curious perversions in information technology".
One thing that strikes me about the majority of the submissions, as funny as they are, is that they mostly boil down to "so-and-so didn't know that such-and-such feature existed, so wrote reams of code to implement that feature in a complex way". It also strikes me that just this article's sort of analysis of "prolific" (aka "good") engineers/programmers drives this same sort of behavior.

If every developer is supposed to be committing code all day, every day, there's no time left over to read the product documentation, try out a new feature, review a reference implementation, or read a blog post: to be "good", you must be spending as much time as possible _typing_, because that's what you're paid to do. This (ubiquitous) management mentality is how we end up with roll-your-own crypto, five competing JavaScript frameworks, or parsing with regular expressions... it's not so much that what they did was wrong - and trust me, if it works, it won't be removed - it's that it's pointless.
It can also work the opposite way. You can think you understand the requirements and go hole yourself away for a couple of days pumping out code. When you finally merge your work, another developer asks you, "Why didn't you just use function X or package Y?"
If you're doing your work in small, sharp bursts, you actually end up with more time in between work for talking with other devs, reading up on technology, etc.
A lot of DailyWTF posts are about people not knowing a function existed and then reimplementing it horribly. Many of them include a note like "the other 10,000 lines were similar".
Ignorance is not embarrassing; it's the default state when all of our tools grow and change daily. But a lot of the worst WTF moments are clearly cases where something wasn't sanity-checked by anyone until it was presented as a finished product - sometimes too late to keep it out of production. That's the benefit of frequent commits, reviews, and research breaks; they might not prevent our ignorance, but they keep it under control.
I think you missed what this article was on about. They're comparing against "impact", not LOC. From what I understood, "impact" is actually LOWER for high-LOC changes; it's highest for commits that touch a few lines across many files.
But, the overall conclusion - small bites, many times - definitely fits both my experience and my intuition.
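The article doesn't publish its formula, but just to make the "few lines in many files" idea concrete, here's a toy score over git history. To be clear, the git invocation, the revision range, and the 1/(1 + lines/50) weighting are entirely my own guesses for illustration, not whatever the article actually measures:

```python
import subprocess
from collections import defaultdict

def crude_impact_by_author(rev_range="HEAD~100..HEAD"):
    """Toy 'impact'-style score: reward touching many files with small,
    focused changes rather than dumping a huge line count into one file.
    The weighting is invented for illustration only."""
    out = subprocess.run(
        ["git", "log", rev_range, "--numstat", "--format=@%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    scores = defaultdict(float)
    author = None
    for line in out.splitlines():
        if line.startswith("@"):          # author line from --format=@%ae
            author = line[1:]
        elif line.strip():                # numstat line: "added<TAB>deleted<TAB>path"
            added, deleted, _path = line.split("\t")
            if added == "-":              # binary file, skip
                continue
            lines_changed = int(added) + int(deleted)
            # one point per file touched, damped by sheer volume of lines
            scores[author] += 1.0 / (1.0 + lines_changed / 50.0)
    return dict(scores)

if __name__ == "__main__":
    ranked = sorted(crude_impact_by_author().items(), key=lambda kv: -kv[1])
    for author, score in ranked:
        print(f"{score:6.1f}  {author}")
```

Even a crude score like this rewards a one-line fix spread across twenty files more than a 2,000-line dump into one, which matches the behavior the article seems to be describing.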
You either have to a) create a library/module or b) use a library/module. "A" is rare, and it's easier to see the impact of it - "you're the person that made that thing". "B" is the most common, and it usually involves many small bites. It's also harder to notice who's accomplishing what this way, since you have to point at the whole team to say "you made that thing".
Impact sounds like a decent way to see who's meaningfully contributing to a group project, although it definitely sounds like it has major blind spots. They could do with more detail as to how "churn" relates.
Some rolled-their-own solutions may have been written before feature/library X emerged - ironically, sometimes because the libraries that came along later were initially excluded in order to follow a "take small bites" approach.
That assumes an equal likelihood of success for product and library. It may feel that way to us, but there is a survivorship bias embedded in that assumption. I'd hazard a guess and say the incremental value added per unit of effort is greater for product development, because of the inherent bias developers have toward building libraries under the false assumption that a domain can be conquered for good.
Are you joking? I've worked as a programmer for 13 years, mostly in the same two languages. But my reading list is longer than ever and my thinking-to-coding ratio is higher than ever. Unless my employers are regularly lying to my face, they are pretty happy with the outcome.
This is going to be domain-dependent. The lower in the stack you go, the slower things move and the less likely it is you're going through wholly unfamiliar territory. Top-of-stack, user-facing code (of which web browser stuff is the poster child) churns endlessly; UI code you write today is likely to have a lifespan of a few years at best, if the system continues adding features and staying up to date with new platforms.
This is true, at least, until you hit code that touches hardware directly, and then you still get to encounter lots of new problems because the hardware is doing things that break your code.
I think that if you are reading more than 50% of the time, vs. coding, then you are still learning your trade. I write for macOS and iOS, which bring lots of new material each year, but it doesn't take that long to get through the relevant updates.
The times I've spent the most time reading are when I've transferred across industries: to capital markets trading systems, to web media, to web apps, to mobile apps, to big data, to VR/graphics apps. Each transfer involves an initial period of furious research, but even though the ratio of reading to coding stays high initially, I am generally able to start writing more than reading within the first few weeks.
I did spend more time reading during my first years of programming. But I think you hit a watershed where it becomes harder to find a treasure trove of ideas - about software architecture, for example - in a new book, and it becomes more helpful to advance your art by experimenting, developing, and analyzing your own output.
For what it's worth, my main point was about the ratio of coding to research, and although the previous commenter disagreed, I think he also introduced a new point about the ratio of thinking to writing. I remember I used to spend more time staring at walls of code, or at paper notes. Not anymore. Where my work used to come in chunks, now it is more consistent and smooth. That is a skill/discipline I developed. The article resonates with me because my commit frequency has increased since I improved my analytical process in this way.
> how many people here seriously believe the result would not be exams in Agile Software Craftsmanship Manifesto Driven Development
Well, since the sort of people who set up regulatory exams are likely to be the same sort of people who design college curricula, I don't think that would be the case at all. I suspect that a software licensing exam would cover things like algorithmic complexity, NP-completeness, pushdown automata, Turing completeness, LR parsing, etc. etc.
> Well, since the sort of people who set up regulatory exams are likely to be the same sort of people who design college curricula, I don't think that would be the case at all.
OK, you win. There actually would be a worse option than letting the consultants do it...
Well, I'm 42 and I believe I'm 20... until I spend any time hanging around 20-year-olds.