For whatever reason, this big-picture knowledge about how to bring everything together to make effective software is rarely taught explicitly. Rather, we mostly see guides to the latest syntax or features of the framework du jour. Having been in this game for over a decade, I know full well that this sort of surface knowledge has the shortest shelf life.
If you've got more of a taste for the slower-changing, thoroughly unsexy principles behind writing stable and maintainable software, I'm devoting myself to exploring these ideas with my screencasts - topics like data integrity, building instrumentation for knowing what's happening in your code (esp. in production), non-brittle tests, leaning on unix tools and the OS to debug things, the timeless editor vim, etc.
The first 12 videos are available over at https://www.semicolonandsons.com/ -- it's a new side-project of mine that I hope will bring some value.
I'd like to watch these and appreciate you've put a decent chunk of work into the content.
But... as a desktop user, the video player only has two sizes: too small to be readable, and full screen, which is pretty useless because I like to multitask. Could I suggest offering different-sized embedded players, or something that resizes with the browser window?
Not that it excuses the poor choice of embedded player, per se, but:
youtube-dl https://www.semicolonandsons.com/episode/sql-cascade-transactions-db-design -f hd_mp4-720p
# -F lists alternative formats to hd_mp4-720p
works fine. (I don't care for video tutorials (vs. text) due to temporal illegibility (things not being on screen long enough to read), but it definitely looks readable enough in terms of spatial size.)
I very much welcome the feedback. Full-screen is something I personally didn't consider.
I use the Wistia player and it seems to be fully responsive on page resizes -- the issue is more that the player is constrained by the size of its column in the CSS. I'll put that on the list of things to sort out once I get around to hiring a designer. The CSS at present is an extremely rough proof of concept.
This comes off a bit as "knows how to produce a product on a particular platform (iOS, with Swift via XCode), doesn't demonstrate the theory or fundamentals". Talking about "retain cycles" from an iOS perspective with no mention of "memory leak" or "reference counting" or "garbage collection", which are more widely-applicable terms.
I don't think going wholly to the other end of the theory-praxis spectrum is the answer in addressing the gap :-)
There is a good main thrust here, but it requires a lot of work/editing to be really useful, IMO.
I had very little idea what retain cycles were - what a weird term (to an outsider)! I thought it was related to using a variable for looping or something.
Agreed; what the OP explains isn't bad, though indeed a bit platform-restricted, but with a title like that I expected more of the architectural points/standard good practices/design principles/... to come up. While those will not always have a direct link to making your software usable from the user's point of view, they do (IMO) matter a lot. They also matter for making it usable for the developer, and maintainable (i.e. still usable in the future).
Valid point you've got there; I'll take it into consideration in my next articles. I just started blogging 3 days ago and never expected this to blow up. Thanks again for the advice!
All solid advice. This is a nice checklist of things to remember if you have been through it before. However, I don't think the target audience of "knows how to code, but not how to build a product" will be able to do much with this individual article. Any one of these points can spiral into days, weeks, or months of learning and experimentation. Even setting an environment variable can be a difficult task for a new developer who doesn't have a good concept of what the shell is, beyond it being a place to type magic commands copied from Stack Overflow.
I really think more established apprenticeships would be good for the software industry. This is a list of things that you only learn if you have been through launching a product, and it takes a few years of experience to have a chance to both do it and have time to digest and understand it.
Also, almost 30 points on a business-advice article with no comments smells a bit fishy.
The gap between learning to code and producing usable software is...
... the ability to perform complicated abstract reasoning, the ability to reflect on your own performance, plus some experience.
Almost everybody is able to perform simple programming tasks in isolation: for example, follow a tutorial to build a simple app using framework X, or google the solution to problem Y and ctrl-c/ctrl-v it into your codebase.
Complicated abstract reasoning is necessary to put together abstract concepts and flows in the new patterns that the application requires.
The ability to reflect on your own performance is necessary to learn from mistakes, look critically at the code, understand strengths and weaknesses, evaluate solutions, etc. A person who lacks this ability might, for example, be deaf to other people's complaints, blind to problems with the code they are producing (for example, overcomplicated code), or inclined to find fault anywhere but in themselves.
If you can't reflect on yourself, you are cut off from being able to learn and improve effectively, and that is critical to producing usable software.
Regardless of how intelligent and self-reflective you are, some experience is needed.
Quoting this because I couldn't phrase it any better:
> Getting hooked on computers is easy — almost anybody can make a program work, just as almost anybody can nail two pieces of wood together in a few tries. The trouble is that the market for two pieces of wood nailed together — inexpertly — is fairly small outside of the "proud grandfather" segment, and getting from there to a decent set of chairs or fitted cupboards takes talent, practice, and education.
The thing is, there actually is a big market for nailing two pieces of wood together; there is just no market for someone whose only skill is nailing two pieces of wood together [0]. Consider a chair where one of the cross supports has come loose. You could buy a new chair, or hire a carpenter to fix it. In most cases, unless you already have a maintenance staff, you would probably just fix it yourself.
It's similar with programming. If your core product is software, you should probably have an expert. However, if the software is just ancillary, you can often get by with just the basic skills.
[0] Maybe assembly lines, but with modern manufacturing technology, I doubt that is the case anymore.
> ... ability to perform complicated abstract reasoning
This seems like it's kicking the can down the road. It is like saying that people need "the ability to perform memory." It's tautologically true that these are both necessary for complex learning.
A lot of the challenge for learning systems is figuring out how to help people develop the mental schemas they'll be pumping abstract reasoning through. (Similar to the design of procedural pieces, which most coding lessons focus on!)
This is not my experience at all. I see the gap as much more related to identifying and streamlining the real use cases, the things people really want out of the software, as opposed to whatever untethered flights of fancy originally drive the creator.
This is the part where you learn to reflect on your own performance. As you build more and more software, you learn from your mistakes to build what the clients want/need, and you learn which of your flights of fancy are helpful and which are not.
I know very intelligent people who nevertheless build shit software because they think they are right and everybody around them is wrong. You can't identify and streamline anything unless you are first open to other people's opinions.
Any sort of HCI or design stuff was absent from my CS education, and it seems to be one of the bigger gaps I see with engineers: not many people will try to have someone actually use their software after creating it, and there's a ton you can learn by seeing how people fail to effectively navigate your UX, or try to use it for something they wanted but that wasn't originally in scope.
I got that through years of providing helpdesk service to end-users, and it's staggering how much UX kludginess gets into production that would have seemingly been caught by watching 1-2 people (other than the software authors) actually try to use it.
Those responsibilities are ostensibly handled by other business units, but that either doesn't actually happen in startups (what's a design team? we don't have one!) or would be smoothed by the engineering side having a basic foundation of the concepts involved, enough to understand the rationale behind UX stuff and challenge bad design before it gets in front of users.
> Those responsibilities are ostensibly handled by other business units, but that either doesn't actually happen in startups (what's a design team? we don't have one!) or would be smoothed by the engineering side having a basic foundation of the concepts involved, enough to understand the rationale behind UX stuff and challenge bad design before it gets in front of users.
Even more directly, someone has to operate the systems you build (from the back-end, not as an end-user), and the usability (or even existence) of tools for that is often terrible.
> would have seemingly been caught by watching 1-2 people (other than the software authors) actually try to use it.
A thing I always do when doing UI work is actually try to use it. I feel like that is glossed over by my coworkers so often and it's maddening. Yes, you might have fulfilled the specific wording of the ticket, but we're going to get a bug report as soon as any user actually touches that interface.
Similarly, logging too much makes logs meaningless.
Purpose-specific log platforms can be useful: stack traces go to Sentry, tracing goes to Jaeger, stdout is for global-context stuff, etc. Dumping all of these into one stream makes it almost as useless as having no logs.
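As a rough sketch of what that routing can look like in code (the sink names and the LogEvent shape here are made-up placeholders, not a real library; a real setup would call the Sentry/Jaeger SDKs inside the sinks):

// Hypothetical event shapes; real ones would carry much more context.
type LogEvent =
  | { kind: "error"; error: Error }
  | { kind: "trace"; span: string; durationMs: number }
  | { kind: "app"; message: string };

interface Sink {
  write(event: LogEvent): void;
}

// Placeholder sinks; a real setup would wrap the Sentry / Jaeger clients here.
const errorSink: Sink = { write: (event) => { /* send stack trace to error tracker */ } };
const traceSink: Sink = { write: (event) => { /* ship span timing to tracing backend */ } };
const stdoutSink: Sink = { write: (event) => console.log(JSON.stringify(event)) };

// One entry point, but each purpose gets its own stream instead of a single dump.
function log(event: LogEvent): void {
  switch (event.kind) {
    case "error": errorSink.write(event); break;
    case "trace": traceSink.write(event); break;
    case "app":   stdoutSink.write(event); break;
  }
}

log({ kind: "app", message: "worker started" });
log({ kind: "trace", span: "db.query", durationMs: 42 });

The point is just that the "where does this kind of log go?" decision lives in one place, instead of everything landing in the same stream.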
> I include a random environment variable that I can keep track of which dictates how the password will be generated for a specific user depending on the information provided. And we are set! I don’t need to know my users’ password, I trigger the “sign-in” by triggering the same hashing function using the environment variable I stored before and everything is secure.
> and if you are logging sensitive information you’ll get in trouble.
This is one of the areas where strong types can help you. Keep all your sensitive information in separate types from your non-sensitive info. For example, if you store the user's name in a type called SensitiveString, you can write methods/traits/overloads/etc. that either make it a compile-time error to log a value of that type, or log a placeholder instead - i.e. "SensitiveInfoHidden". This also helps ensure that you don't accidentally assign or append sensitive information to a non-sensitive variable. Put the compiler and type system to work for you.
This sounds interesting; my only experience with types is via TypeScript. Would you mind sharing a really simple example of how this could work in TS? Or maybe some links/resources you think are appropriate for learning more on this topic. Thanks!
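A minimal sketch of the idea in TypeScript (the SensitiveString class and its method names are made up for illustration, not from any library):

// Wrap sensitive values so stringification shows a placeholder instead of the secret.
class SensitiveString {
  readonly #value: string;   // private field: not reachable by ordinary property access

  constructor(value: string) {
    this.#value = value;
  }

  // Only code that explicitly asks for the raw value gets it.
  reveal(): string {
    return this.#value;
  }

  // Template strings and string concatenation go through toString();
  // JSON.stringify goes through toJSON(). Both return a placeholder.
  toString(): string {
    return "[SensitiveInfoHidden]";
  }

  toJSON(): string {
    return "[SensitiveInfoHidden]";
  }
}

const password = new SensitiveString("hunter2");

console.log(`login attempt: ${password}`);   // "login attempt: [SensitiveInfoHidden]"
console.log(JSON.stringify({ password }));   // {"password":"[SensitiveInfoHidden]"}

// And the type system blocks accidental leaks into plain strings:
// const leaked: string = password;          // compile-time error

Caveat: logging the wrapper object directly (rather than interpolating or serializing it) may still expose internals depending on the runtime's inspector, so a real version needs more care; this just shows the general shape.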
When I saw the title of the post, I thought it would be talking about principles that you learn the first time you have to write production code, such as reliability, maintainability, validation (testing, continuous integration, rollout processes), performance tuning, security, contingency planning (backups), etc. Those are definitely part of the steep learning curve right after you learn to sling code but before you are able to work in a team setting on apps/services with substantial usage.
I suppose the actual contents of the article make sense for apps on a specific platform, but I feel like there's a more generalizable set of lessons that fits the bill for "all of the problems you get once your hacked-together app/website/service has real users."
Luca Palmieri[0] is trying to fill that gap with his series called "Zero to Production" in Rust, which assumes the reader can solve Project Euler problems and instead focuses on "how do I build and deploy production software."
You can read the introduction[1] to the series to get a better sense.
The gap between writing code ("programming") and producing software that will still be in use after a while, built by a team ("software engineering"), is maintainability.
Maintainability is composed of documentation, version control, testing, building, issue tracking, soft management, automating all of the above, adhering to conventions even if they are crazy, and more. All of these are non-trivial, but they won't be gleaned from your average programming course.
The article brushes with some of these but misses the mark overall.