Many years ago I worked with a team that couldn't figure out why the code ran so fast on Bob's laptop but was slow as shit on the production server hardware. The production hardware was state of the art at the time and they were in a pinch deadline-wise. They ended up literally deploying Bob's laptop to the ISP data center.
We had "Rafał's computer" where there was a development environment for building our app on windows with mvc 6 and commercial QT 3 licence.
Every customer except one used our app on Linux machines, and every developer had a Linux environment, but sometimes you had to debug or build a new version of the software for that one Windows customer.
In that case we connected through remote desktop to Rafał's Computer and worked there, because nobody could figure out how to make it work anywhere else.
Rafał quit the company a year after I started. When I left six years later, "Rafał's Computer" was part of the critical infrastructure and there was an effort to virtualize it :) It's a miracle it worked that long without the disk failing, TBH :)
I sometimes wonder whether now, 5 years later, they still have Rafał's Computer around. Hopefully they managed to virtualize it eventually :)
Similarly we had to ship something targeting WinCE 4, and the official "Platform Builder" toolkit wouldn't even install on Vista or above. So even into the Win10 era we had to have XP VMs to build from.
We had, and probably still have, a build machine somewhere for our legacy software for pSOS on PowerQUICC. This build PC was forgotten, lost, and then found several times while I was at the company, and there is probably one person left who even knows what it is and how to work with it, and he is close to retirement. Thankfully the target platform is being end-of-lifed and there will be no more development for it.
At my internship, one of the senior developers (a great guy, I wish we had stayed in touch more) joked about building a company out of developers' machines (a Developer Machines as a Service type thing), so that the software would always work. After all, it works on MY machine!
I recently got a job and that’s how it is here. (I’m not sure if they’re actually VMs, they’re very powerful machines.)
One of the first things you do when you get hired here as a developer is request that a machine get provisioned for you in the data center and provide your public ssh key. You’re not allowed to do anything on the MacBook they give you other than mail/wiki/chat/ssh. IMO it’s pretty brilliant.
One extra nice thing is that the ops people, rather than the corporate IT people, manage it, so it actually works well.
>They ended up literally deploying Bob's laptop to the ISP data center.
I guess in a pinch, but that's a solution I just don't think I'd be comfortable with... I'd want to feel like we're going to figure this out, not surrender to the absurd...
Sometimes that requires expertise in very low-level topics. For example, I've read a similar story, but there the developer dug out the truth. Production was running on a VMware cluster with live migration or something like that. The servers used slightly different CPUs (the frequencies differed a bit), so VMware emulated an important instruction (something related to getting the current timestamp). MSSQL used that instruction heavily in one specific scenario, so it ran much slower on the virtualized production server.
They were able to figure it out and rewrite one stored procedure so it ran fast again. But I'm sure that I, for example, don't possess the necessary expertise to figure this out. I wouldn't be surprised if many small-to-medium companies have no experts at that level either.
In a live migration scenario the TSC on the source and destination machines will necessarily differ, so emulation is required to keep it from warping.
That said, because of past TSC instability, all operating systems deal with a TSC warp pretty well. Whether applications deal with it well is probably the reason emulation is the default, but it may be possible to disable it.
Probably VMware Fault Tolerance, which keeps a virtual machine in sync across two physical machines to allow instant failover. I can imagine that requiring a lot of instruction trapping.
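In case anyone wants to check what their VM is actually using for timekeeping, here's a minimal, Linux-only Python sketch reading the standard sysfs clocksource files; the interpretation (anything other than plain "tsc" hinting at paravirtualized or trapped timestamp reads) is my assumption, not something from the article:

```python
# Minimal, Linux-only sketch: report the kernel clocksource in use.
# On a VM, seeing something other than plain "tsc" (e.g. "kvm-clock",
# "hyperv_clocksource_tsc_page") hints that timestamp reads may be
# paravirtualized or trapped rather than served by a raw RDTSC.
from pathlib import Path

BASE = Path("/sys/devices/system/clocksource/clocksource0")

def main() -> None:
    current = (BASE / "current_clocksource").read_text().strip()
    available = (BASE / "available_clocksource").read_text().split()
    print(f"current clocksource:    {current}")
    print(f"available clocksources: {', '.join(available)}")

if __name__ == "__main__":
    main()
```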
Maybe putting the laptop in the data center lets you do the launch. Maybe the launch is a flop because you forgot to advertise it. Maybe you raise series A funding; maybe you don't and the site dies in three months, still happily running on the laptop. Maybe someone decides to re-write the product in a different stack. Maybe the product was wrong and needs to pivot.
A lot of products have shipped and succeeded with absurd hacks in them. Hex-editing the binary to change "EMM386 Memory Error.." to "Thank you for playing Wing Commander", etc.
I know that I'm a small sample size - but in my career so far, "later" means "never" a lot more often than it actually means "later". Especially when it comes to paying down technical debt.
In my similarly limited experience, this is directly correlated, in equal parts, with 1) how strong-willed the architects and technical leadership are, and 2) how [dys]functional your non-technical leadership is.
Most good developers want to pay down technical debt. Most good architects are able to balance business and technical needs well enough to know when and how to pay down which debt without spending four sprints renaming variables. The trouble comes when you have either dysfunctional leadership who refuse to listen to their employees (most likely, IME), or architects weak in either skill or constitution who can't or won't stand up to the bosses and explain why we need to spend some time renaming variables.
The article talks about the difference between a laptop and a server, but no other comments here so far have really touched on that topic. I think this post isn't about how good or bad cloud/VPS servers are, but really about the difference between running on a laptop vs. running on servers.
It seems that starting around maybe 2012-13, devs in the tech industry started moving from Windows laptops to MacBooks at large scale, and this was when it became much more common to run your server stack locally on the laptop. This was really strange to me at first, since prior to this I was very used to having "dev servers". I came from the LAMP world, where we'd write code on Windows, upload it to a dev server, run the app there and test. This seems very amateur in retrospect, but I also worked at a relatively large unicorn at the time (from 2008 to 2013) and I don't think we were even the only company doing that -- from what I knew, this was pretty much standard practice, especially if you ran a LAMP stack. Which is to say, it was way more common back in the day to dev on a dev server than on your own laptop. (Maybe that's a point against Windows at the time, and how difficult it was to have a reasonable local dev environment on it.)
So when I joined a startup later and found that everyone on the team had MacBooks (with no option of going Windows), and everyone ran the entire server app on their laptops (and this was the Node.js world, no longer LAMP), I was almost in disbelief -- are we not to worry about any difference between developing, testing and running this server code on a Mac laptop, vs. deploying it to actually run on a Linux server later? After a short while it seemed like it was not really a concern; perhaps macOS is close enough to Linux servers, or Node is compatible and high-level enough between Mac and Linux.
Reading this article is kind of a wake-up call -- no, it's really not exactly the same.
The thing is, the dev servers of the past were never set up identically to the prod servers either: different software versions, failovers, proxies, crons, backups, etc. That was just a lie we told ourselves.
In my opinion it is much better to develop a cross-platform/environment product that can work everywhere, and the key to that is to avoid assumptions about where the product is executing. In the long run, this makes your software much more flexible.
There is a significant difference between "works everywhere" and "works everywhere efficiently". For the former you can get away with just writing portable code. For the latter you basically have to benchmark everywhere. Performance is a hell of a leaky abstraction.
Yes, which is why it is futile to try to keep an exact copy of the production server: the smallest difference will ruin your performance profiling. To be absolutely sure, you have to do the profiling on the production server itself.
Usually every developer has their own login user on the development machine, but even that is enough to skew performance profiling, because usernames populate environment variables, and that changes the memory layout of any program the shell executes.
If memory-layout changes due to environment variables significantly change performance characteristics, then that looks like an opportunity to optimize this aspect of loading a binary. I assume cache alignment is in play here.
It's great that they can average out the random effect of layout, but what I meant by optimization opportunity is to deliberately aim for the "good" memory layouts. Something like fixing the alignment of the stack at the program entry point, or marking specific functions as "stack aligned".
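One blunt way to keep usernames and other per-user environment differences from skewing a benchmark run is to launch the workload under a fixed, minimal environment. A rough Python sketch, assuming Linux and a hypothetical `./bench` binary standing in for whatever you're measuring:

```python
# Rough sketch: run a benchmark binary under a fixed-size, minimal
# environment so per-user environment variables can't perturb the
# process's initial memory layout between machines/users.
# "./bench" is a placeholder for whatever you're measuring.
import subprocess
import time

FIXED_ENV = {
    "PATH": "/usr/bin:/bin",
    "HOME": "/tmp/bench-home",   # constant length, regardless of user
    "LANG": "C",
}

def run_once() -> float:
    start = time.perf_counter()
    subprocess.run(["./bench"], env=FIXED_ENV, check=True,
                   stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

if __name__ == "__main__":
    times = sorted(run_once() for _ in range(10))
    print(f"median: {times[len(times) // 2]:.4f}s  min: {times[0]:.4f}s")
```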
Also, when everything is running locally, people tend to forget network latency and the cost of additional round trips. This comes at a high price when, for example, doing lots of database queries or waiting on majority confirmation of a clustered transaction.
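A back-of-the-envelope way to see it, as a toy Python model rather than a real benchmark (the round-trip and per-query figures are assumed, illustrative numbers):

```python
# Toy model of why chatty query patterns hurt once a real network hop is
# involved. All numbers are illustrative assumptions, not measurements.
LOCAL_RTT_MS = 0.02   # loopback on the dev laptop
PROD_RTT_MS = 0.5     # app server -> database in the same datacenter
QUERY_COST_MS = 0.1   # time the database itself spends per query

def page_load_ms(queries: int, rtt_ms: float) -> float:
    """One round trip per query (the classic N+1 pattern)."""
    return queries * (rtt_ms + QUERY_COST_MS)

for n in (1, 10, 100):
    print(f"{n:>3} queries: laptop {page_load_ms(n, LOCAL_RTT_MS):6.1f} ms"
          f"  vs prod {page_load_ms(n, PROD_RTT_MS):6.1f} ms")
# 100 chatty queries cost ~12 ms on loopback but ~60 ms across the wire;
# batching them into one request pays the RTT once instead of 100 times.
```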
Another problem with developing locally is that the dev is the only simultaneous user, and doesn’t have to share the resources with other users.
I guess the takeaway is to always validate your assumptions through measurements like profiling etc.
Another thing is that if you're using Node.js (or worse, Ruby or Python), your code is so inefficient that you probably don't care about performance differences; just throw more dynos at it.
I can see Go slowly eating Python's lunch for web services here. It's close to Python's productivity and much less resource-hungry while being more performant for many use cases.
Go still has a way to go in terms of productivity-boosting frameworks in comparison to Python, though. I hate the tight coupling and patterns of Django, but if I had to spin up a company in a month, there's no doubt I would do it in Django.
Lots of people will say "frameworks are an anti-pattern in Go" and I generally agree with the sentiment of writing boilerplate to stitch together various libraries > giving control to the framework.
Still, some people want big hefty, quick-to-production frameworks with everything & the kitchen sink built in. I'm certain we will see those come into existence for Go.
I like Go and Elixir for the minimalist approach, but I've delivered so many projects recently using Django and Django REST Framework with a React front-end, simply because you get the ORM, admin, form validation, etc. out of the box.
In a couple of days you can have a production-ready site and I've never seen anything quite like it for productivity.
Yeah I think probably the biggest accelerating factor is out of the box user account & auth management. That gives you a huge leap from “hey my app works” to “I can literally deploy this to the public right now”.
The models and rest framework are convenient but you could probably do the same in close to the same amount of time in another language or framework.
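For what it's worth, here's roughly what that out-of-the-box feel looks like: a hedged sketch assuming a configured Django project with rest_framework installed, and a hypothetical `Invoice` model used purely for illustration:

```python
# Sketch of why Django REST Framework feels fast to ship with: a model,
# a serializer, and a viewset give you a CRUD API (with auth hooks,
# pagination, and the browsable API) in a few dozen lines.
# Assumes a configured Django project with 'rest_framework' in INSTALLED_APPS.
from django.db import models
from rest_framework import routers, serializers, viewsets

class Invoice(models.Model):          # hypothetical example model
    number = models.CharField(max_length=32, unique=True)
    total = models.DecimalField(max_digits=10, decimal_places=2)
    paid = models.BooleanField(default=False)

class InvoiceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Invoice
        fields = ["id", "number", "total", "paid"]

class InvoiceViewSet(viewsets.ModelViewSet):
    queryset = Invoice.objects.all()
    serializer_class = InvoiceSerializer

router = routers.DefaultRouter()
router.register("invoices", InvoiceViewSet)
# then include router.urls in urls.py, e.g. path("api/", include(router.urls))
```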
I always ran my code locally on Windows, too, and never had a dev server. I had staging and test servers, but not a dev server. I can't imagine working that way. I like to make a change and see it instantly.
> It seems starting around maybe 2012~13, devs in the tech industry started moving from Windows laptops to Macbooks in large scale
I really doubt it has been that recent — I think the Great Migration to Macs started a decade earlier.
Meanwhile, a lot of us have been developing on Linux and deploying on Linux even longer still. Heck, that's a good part of how Linux got popular, although once upon a time it was develop on Linux in order to deploy on Solaris or HP-UX (or even AIX)!
macOS is also just really different from Linux... it is madness to me that people run so much code on their local laptops and then expect it to work in production <- and I say this both as someone who long felt it was ridiculous to do that at all, and now as someone who started doing it myself, got addicted, and just found a serious bug in libwebrtc of all things that only manifests on Linux.
M.2 (NVMe) is so fast and cheap it's absurd. I blinked (for a few years) on hardware, and when I went shopping for a new disk I was blown away by the price-to-performance.
I was equally shocked to discover the crap-show that is the M.2 controller space. I felt sure Supermicro would have chassis you could load up with those suckers, hot-swappable and changing the face of storage. Instead I found proprietary (Intel/AMD) RAID drivers coupled to specific CPU models, and expansion cards that could take a measly 4 drives (no hot-swap).
The last 15 years of computing can be described as "extract maximum revenue" rather than what people expected of computers during the 80/90/early 2000's.
Anyway, I've been considering building some PCIe/M.2 boards for a couple of markets, because the price difference between a "consumer grade" board and the server/embedded board with nearly the same features (minus a couple of things that matter for some markets) is about $25 in parts, but the markup is about $500. It's even worse in some of the built-for-purpose devices, where wrapping a bit of sheet metal around said machine adds another $2k+.
The M.2 vs U.2 market is such a shitshow at the moment. And if one of the major players decides to create a form factor that happens to be compatible with M.2? Huge industry outcry, because it might remove everyone's fat margins on "enterprise" gear.
The lack of hot-swap is unfortunate, and likely a result of server/consumer space differentiation among all kinds of vendors (OS, CPU, motherboard, and even the NVMe manufacturers all have to play along). But the limit of 4 is at least a real technical limitation: each of those add-in cards uses a PCIe 16x slot (either real or "DIMM.2") and each drive needs a 4x from it. You could use a mux to add more, but they're already getting to the point of being able to saturate the links. PCIe 4.0 and 5.0 will give a lot of headroom for more drives in a system.
Sounds like we might need to go back to the kind of mainframe architecture that has IO offload. Split the PCIe bus into NUMA-like zones; give each zone its own (probably ARM) CPU, running its own kernel; then use "application processors" (probably x86) to command-and-control the IO zones, allocating e.g. IOMMU-subvirtualized ethernet channels to them. Control plane/data plane separation.
There's some (to me) interesting work in this area. See for example this talk[1], where they show how a RISC-V CPU with a narrow and slow PCIe link can orchestrate the direct transfer of data between two PCIe devices (say NVME and Ethernet card), saturating the x16 link between them.
Sort of; the network part of that ends up being a huge bottleneck too. With 16 drives at 5 GB/s each (the max I've seen so far), you've got 80 GB/s the network needs to deliver to each server. You start getting into the really expensive side of things, speed-wise.
Also, most CPUs you can buy have around 40-64 PCIe lanes, limiting you to 10-16 drives if you want full speed out of them (this also leaves you with no lanes for ethernet).
Which raises the question: why is server rental so expensive, if not to maintain artificial scarcity for the sake of a cloud service's bottom line?
Cloud computing pricing is still based around (illegally monopolized and price-fixed) memory prices, which is absurd because two load-balanced NVMe disks are comparable in speed to DDR3 RAM.
This is grossly underappreciated. It's also why servers and sustained performance remain one of Intel's strengths, and why, despite the number of ARM developers out there, ARM hasn't managed more than a toehold in the server market. Nobody has designed an ARM CPU for that kind of workload.
Intel hardly has an impregnable lead in this regard, but I expect AMD to get noticeable server market share before any ARM part does.
There's also the similar problem where a developer usually has a much better laptop, phone, internet or lte connection, etc, than the average end user.
I remember the YouTube dev story posted here. They optimized the YouTube page at some point, deployed it, and saw a much worse average load time for users. As it turned out, a whole new batch of users saw load times drop from unusable tens of seconds or minutes on slow connections to a tolerable 10-20 seconds or so, and they started using the site, which pushed the average way up :)
Too many modern developers ignore how their websites, overloaded with mostly unneeded scripts and trackers, load on slow networks and PCs.
For server workloads, it often makes sense to give up clock frequency to “fit” more cores in the same heat / power space.
In other words, server hardware is usually designed for parallelism/maximum throughput, while client hardware is designed for single-threaded performance and decreasing latency.
I come from a field where it would be unthinkable to draw conclusions about application performance by running it on a developer's laptop instead of the target platform.
Judging by the comments here this is not as obvious as I previously thought.
A bit off-topic, but can someone tell me why server firmware takes so long to initialize compared to laptop firmware? And why does even laptop firmware take longer to initialize now than on laptops from, say, 7 years ago? Why is firmware initialization getting so darn slow when computers are getting faster?
Seems like most of the comments are talking about the reasons and they're all right of course but there's a bit more to it.
First, there's the out-of-band management engine, which is technically in control of everything. The BIOS and the Lights-Out management system are interdependent. They will communicate with each other.
Just initialising this communication channel is more complicated than a normal computer's entire BIOS boot.
Then there's activating every sensor and hardware device. Most servers aren't really just the 2 x86_64 CPUs socketed to the motherboard; they're more like 30+ CPUs plus various hardware controllers: network, fan, drive, and power-supply controllers.
Each of them also has firmware equivalent to a whole BIOS, and they also communicate with each other.
Thirdly, every sensor/firmware/device test is synchronous and logged. That logging is painfully slow as it's logging via the out-of-band management engine and that engine is super tiny and low power.
Finally, all that memory will be tested and not 'assumed' to be fine. Each individual DIMM is blasted with voltage to test whether it's seated well, and then there is some memory testing which is not only more extensive than the 'fast check' your laptop will do, it's heavier than the "heavy" check your laptop will do.
There's a bit more to it too, like the fact that memory gets "extended" into the out of band system so that it can control VGA/Serial/USB.
Memory initialization is really slow. On servers there is that, plus the fact that you usually don't want to do a fast ram check like you can afford on a laptop. Add to that initialization of the remote management, SAS/raid,... A server does a pretty thorough quality control when it boots. Also if you have a lot of mechanical drives, there may not be enough power budget to spin them all up at the same time, so they start progressively.
Memory training with a lot of channels is slow. Laptops have 2 channels and 2 DIMM slots.
> Why is firmware initialization getting so darn slow when computers are getting faster?
That alone has zero bearing on the initialization process, as it doesn't have to compute much so much as wait for signals and for much, much slower IO/memory.
I have no idea why you're getting downvoted; this is very much my experience. An actual physical server is usually either a database server that can literally manage years of up time (no I didn't like it, but they were on an isolated network and the Oracle DBAs were very particular about updates), or else VM hosts that support live migration so nobody cares if we happen to have one host offline for an hour or two while it's patched and rebooted.
We've had software perform wildly different on our dev servers from prod servers (which are owned by customers). One time it was VM overprovisioning, another time we were just getting completely different performance depending on what hypervisor was being used.
I remember when I was building my first PC with some friends doing the same.
One of us managed to get hold of a 'server' and we just assumed it had to be fast, right? We very much learned the opposite... and the lack of a sound card was a big turn-off...
Good article though. I'm sick of seeing comparative benchmarks on a MacBook. If you don't have real servers you should always benchmark your code in the cloud (bare metal instances if you can afford them).
Well, you should benchmark on similar setups to what you will deploy on: benchmarking on "real servers" or "bare metal instances" is also going to be highly misinformative if you expect most of your deployments to be on virtualized hardware under KVM or Xen.
Benchmarking on a thermally limited, turbo-boost-enabled machine is usually a very bad idea (single socket, etc.).
Edit: if you wish to microbenchmark, lock the CPU frequencies (core/uncore for Intel), lock the voltage too, and measure the power draw as well. You'll want to use taskset too, depending on how many cores the benchmark uses.
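On Linux you can do the pinning part from inside the process too; a small Python sketch of that basic hygiene (frequency/voltage locking and power measurement still have to happen out-of-band, e.g. via the BIOS or cpufreq, and aren't shown here):

```python
# Sketch of basic microbenchmark hygiene from inside a Python process on Linux:
# pin to one core (the in-process equivalent of `taskset`) and take the best
# of several repetitions. Frequency/voltage locking and power measurement
# must be done outside the process and are not shown.
import os
import timeit

os.sched_setaffinity(0, {2})  # pin this process to core 2 (arbitrary choice)

def workload() -> int:
    return sum(i * i for i in range(100_000))

reps = timeit.repeat(workload, number=100, repeat=7)
print(f"best of 7: {min(reps) / 100 * 1e6:.1f} µs per call")
```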
Interesting, but I'd think that just running your profiling in a server environment would have the same effect as doing everything in one, no? And for much less hassle
You'd be amazed how few developers even know about profiling, or they treat it as a method of last resort when other attempts to track down performance issues have failed. One of my standard interview questions is: "A customer comes to you and says that [app] 'is slow', with no additional information. You look at the code and there are no glaring inefficiencies, no O(n!) algorithms, etc. What's your next step?" Less than a quarter of candidates ever mention profiling. "Sympathy for the hardware" is, IMO, sort of an imperfect hack around this problem.
The reality is that a lot of profilers cause a sort of Heisenberg effect when running in production: they can slow down your code so much that the results aren't meaningful anymore. This is the point where ops engineers like me will be wagging their finger, saying the application should have had instrumentation or APM support built in (e.g. via OpenTracing), or that eBPF and friends could be useful if your production machines were reasonably up to date. In most cases I've seen, the majority of engineering teams outside massive-scale companies with gobs of resources still wind up debugging performance problems like it's 1999, with application-external hypotheses and checks for smoking guns like high page-fault rates, dropped packets, etc. (e.g. the USE methodology).
Sampling cycle counts or LBR with Linux perf events is almost invisible to performance, maybe a 1% hit to throughput as a general rule. The problems come from languages where the PC is irrelevant, like Python, but nobody uses Python because it's fast, so using a Python profiler like pyflame should be fine, even in production.
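For the interview-question case upthread, even the stdlib gets you surprisingly far. A minimal cProfile sketch (deterministic, so heavier than a sampler like pyflame and better suited to dev/staging runs; `slow_endpoint` is just a stand-in workload):

```python
# Minimal deterministic profiling with the stdlib (cProfile + pstats).
# Heavier overhead than a sampling profiler like pyflame/perf, so better
# suited to dev/staging runs; `slow_endpoint` is a stand-in workload.
import cProfile
import pstats

def parse(rows):
    return [tuple(r.split(",")) for r in rows]

def slow_endpoint():
    rows = [f"{i},name-{i},{i * 3}" for i in range(200_000)]
    return sorted(parse(rows), key=lambda t: t[1])

profiler = cProfile.Profile()
profiler.enable()
slow_endpoint()
profiler.disable()

# print the ten most expensive call sites by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```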
Profiling in production with the JVM using something like YourKit or JProfiler is the typical case for myself. Ironically, I've found profiling with Python in production easier for the reasons you've mentioned. If something is down or running slowly already, adding another 3%+ latency is hardly going to be an issue. Architecturally, with big monolithic programs that do too many things attaching a profiler to try to analyze 1% of the program's responsibilities or surface area becomes a risk to other production operations unfortunately. In most cases slowdowns happen because of resource saturation, things timing out, blocking on shared resources. In the first scenario, trying to run a profiler can exacerbate the problem or even fail to start, so the only way forensics can be done there is by emitting observability data prior to the failure point.
Other approaches taken have been the more Erlang style "let it fail" methodology which is fine for newer projects but represents a rewrite for most systems in practice and is thus far, far beyond profiling discussions.
Is profiling actually a good answer to that question? Profiling is very useful for improving the performance of software under a particular workload, but unless you have very good non-anonymized telemetry anything other than trying to get more information about the customer's situation is going to be a wild goose chase. I would definitely start with talking to the customer to get more details, and ideally observe the customer using the app before I got anywhere near a profiler.
Profiling is really useful for any moderately complex application, particularly where you have dependencies upon a bunch of third party code.
While your code may look all nice and sane, not have any glaring performance issues - what you may not realise is that you're using a framework that does things in unexpected ways.
The devs I work with are running into the reality of this at the moment - they blame the servers/network/database/whatever as being slow/shit/etc.
I go take a look and see that network traffic is at capacity.
They've got some 'magic' caching framework that promises to speed things up by storing data in Redis.
What's happening in the background though is that this caching framework is doing dumb things like asking for the same key over and over again - so we've got a multi-Gbit stream of the same handful of keys being asked for thousands of times a second.
These are the sorts of things profiling can tell you just by looking at the number of calls to Redis.
We had a customer who complained about a part of our site being very slow; it drew a bunch of Canvas stuff, back in like 2011.
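You don't even need a full profiler to catch that particular failure mode. A small sketch using redis-py (the `redis.Redis` client and its `get` are the real API; the counting wrapper and the 1000-call threshold are just illustrative):

```python
# Sketch: wrap redis-py's get() to count per-key calls and flag keys that
# are fetched absurdly often, i.e. a "cache" layer that never actually caches.
from collections import Counter
import redis

class CountingRedis:
    def __init__(self, client: redis.Redis):
        self._client = client
        self.calls = Counter()

    def get(self, key):
        self.calls[key] += 1
        return self._client.get(key)

    def report(self, threshold: int = 1000):
        for key, n in self.calls.most_common(10):
            if n >= threshold:
                print(f"hot key: {key!r} fetched {n} times -- "
                      f"consider an in-process cache in front of Redis")

# usage: r = CountingRedis(redis.Redis(host="localhost")); ...; r.report()
```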
Eventually we got them online to do a screen share, and sure enough it took over a minute to render the page (it was a map). They were using an 8 year old CPU.
Switching them from IE6 to Firefox fixed the issue.
Had a similar issue, but switching browsers wasn't an option. This was in a hospital environment in 2002: slow computers, running IE as the standard. A large form with a ton of HTML select dropdowns totally killed performance. Had to rewrite the thing to use just a few (dynamically generating options, before heavy Javascript apps were common), which fixed things.
I also see this, and I think it's worse than that, because so many devs concentrate only on the static expression of the code and have little understanding of what things look like at runtime. Ironically, client-side coders are much better at this, thanks to excellent tools. Speaking of tools, server-side profiling is almost always weird, and it pays to have someone show you the ropes. (Java has some of the best profiling tools, interestingly.)
The upside is if you know how to use the profiler, especially if you know a lot about PMUs and whatnot, you'll look like a damned wizard to many people.
Performance testing is hugely valuable but nothing beats having performance metrics captured and made available from production. You never really know how things are going to perform until you see them under real load against real data.
There’s been many times I’ve been bitten by this, including the one time we found that a small portion of users would set the number of results returned to the max and scroll through hundreds of pages of results, blowing the cache for everyone.
I tend to hear “we run performance tests so we don’t get regressions” a lot more than “we monitor the performance of releases to canaries and rollback”. It shouldn’t be one or the other, both are invaluable.
Sure, but my IDE doesn't have to be running there. Code compiles and is benchmarked on prod hardware (and OS environment); I write code on a machine designed for my productivity.
I'm not saying remove it from the dev cycle; you can stand up a remote server for whatever testing happens during your normal workflow without a) buying a physical server box to use as your workstation, or b) constraining yourself to working via a terminal.
It's fairly obvious that some major cloud providers are very expensive, when consumer hardware has offered so much better performance for the price for years.
But they can have such a pricing model. What they offer is good and valuable anyway. And you need to pay Jeff Bezos.
I have two computers at work: A Xeon workstation that was low-mid range when new, 7 years ago, and still has an HDD. And I have new mid-range i5 laptop with an SSD. The old Xeon performs much better for everything except boot time. Even program load times.
"Hanging out in the old data center, rendering web pages for a thousand people simultaneously? A different story. For server workloads, it often makes sense to give up clock frequency to “fit” more cores in the same heat / power space."
Servers have larger caches; I'd expect better IPC than a laptop CPU from a better microarchitecture, at least double the memory channels, etc. I guess that was his point, though.
> Servers have larger caches, I'd expect better IPC than a laptop CPU from better microarchitecture, at least double the memory channels etc.
In aggregate yes the server cpu numbers are great. But looking at them per core makes them much worse. I checked some random Xeon Platinum CPU vs a mobile i7, both being Skylake (so essentially same microarch). The Xeon had 28 cores, 38.5M cache (=1.4M/core) and 6 memory channels(=0.2/core). In comparison the i7 had 4 cores, 8M cache (=2M/core) and 2 memory channels (0.5/core). The Xeon uses 25% faster memory, but that doesn't really make up the difference.
My understanding kind of matches yours, but I assume it will depend. If you have a memory-bound problem, then yes, I'd expect a laptop to perhaps be competitive; but if the workload substantially hits cache on the server, then the laptop, with its smaller cache, may spend more time waiting.
> But looking at [the server cpu numbers] per core makes them much worse
but for what benchmark? (sorry if you said and I missed it)
If you have a memory-bound problem running on a Xeon with a couple dozen cores... you either have a couple dozen memory-bound problems (which brings into scope the cache per core concept) or you're using the CPU wrong.
Then why should you even allow them to access any server, VPS or bare metal, at all? Also, please take ops code away from anyone who doesn't understand ops but calls himself devops.
We're not talking about obsolete hardware. There is a fundamental tradeoff between frequency and power efficiency and even the very newest servers generally choose a point on that tradeoff with more cores at lower frequency. There are also scale effects where you can put a small amount of very fast NVMe storage in a laptop without much of a price premium but to store all your production data on NVMe is possible but might not be financially justifiable.
Giving each developer access to a full mirror of the production environment might also not be financially viable. If a production-scale database fits on a small SSD, then your deployed production DB should probably not be on spinning rust, trying to mitigate the orders-of-magnitude performance hit with a boatload of expensive RAM. If it doesn't fit, then obviously you have plenty of other issues that should be screaming at you that your laptop performance profiling is not going to make much sense.
I synced an Ethereum full node on one of Linode's most powerful instances; it was fast, but it seemed like it should be faster. I was posting issues on GitHub about how it wasn't syncing fast enough.
A lot of tutorials online are about people trying to get their Raspberry Pi to sync blockchains, so I was prepared for the worst.
But then I put together an 8-year-old desktop and it synced in 2 days.
Kind of underwhelmed by the "cloud" options, but they work.
Good article, but one thing that annoys me deeply is the author’s intimation that a server ‘renders’ a web-page. A server does not ‘render’ a web-page, that’s a client-side (browser) task (unless he’s referring to running a browser in a Remote Desktop environment, which seems to be definitely not the case).
1. provide or give (a service, help, etc.).
"money serves as a reward for services rendered"
2. cause to be or become; make.
"the rains rendered his escape impossible"
3. represent or depict artistically.
"the eyes and the cheeks are exceptionally well rendered"
You seem to be stuck on the 3rd definition (there are more, but 1, 2 and 3 cover everything we need here to see how it's appropriate, based on both the server- and client-side actions).
The other way of looking at it is that the server renders [something] to HTML, and the client renders HTML to pixels.
It's quite common to refer to things being rendered into other forms; the term is not owned by processes with visual output.
Heh, sorry to beat you over the head with it. You're certainly right that they are transformations. That feels like a more general term (within this context, obvs not in the context of render as "to give"), but I can't tease out what the difference is in my understanding of them.
Render is the term used by many templating languages to denote transforming a template to a result using parameters. So in the case of web apps that use templates, it's the terminology commonly used (you could argue about its correctness, but it is this word). For example, the jinja2 library names the transform function 'render'.
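E.g., a two-liner with jinja2 (this is the library's real API; "render" here produces a string of HTML, no pixels involved):

```python
# jinja2's own terminology: a template is "rendered" to a string of HTML.
from jinja2 import Template

html = Template("<h1>Hello, {{ name }}!</h1>").render(name="HN")
print(html)  # <h1>Hello, HN!</h1>
```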
Yes, I do. But as you say, it’s in circulation. I guess I dislike it in the same way that I consider ‘literally’ and ‘figuratively’ to be antonyms, but how the former has unfortunately come to be considered a synonym of the latter, which basically hobbles the language permanently.
It really doesn't "hobble" language. In all the time since the muddying of the word literally, I've yet to come across a use of it that can't be inferred contextually. The end result is that now there's even more flavor to language, not that it's hobbled.
People complaining about literally being watered down, literally need to chill. Go read a non-technical book or something and re/learn that language is absolutely unstrict. You can do whatever the hell you want with it and use words in ridiculous ways, and it's all valid. The only thing that matters is communication. If you understand that "I literally died when I saw it come off" and "he literally couldn't fit through the doorway for 20 minutes" have different uses of literally, what's the issue? You want every word to be a reserved keyword?
Who considers "literally" to be a synonym for "figuratively"? I would venture that almost no one. In fact, it is used exactly for its proper meaning, as a hyperbole.
If I say "I feel so embarrassed I could literally die", I am using literally in its proper sense, but hyperbolically. Just like if I say someone was "a mountain of a man" I am using the proper sense of "mountain", even though the man is not literally a mountain.
> If I say "I feel so embarrassed I could literally die", I am using literally in its proper sense, but hyperbolically.
I'd rather say they were using "literally" as a meaningless intensifier. But here's the thing: We already have loads of intensifiers. But we don't have very many ways of saying, "I'm not speaking hyperbolically, this actually happened."
Think of all the words that, etymologically speaking, should mean "this actually happened", but in fact don't mean anything anymore: really, truly, definitely, and so on. The reason they didn't say "I feel so embarrassed, I really could die" is that "really" has lost its strength from overuse; now they've moved on and are trying to do the same thing to "literally".
"Literally" is my (figurative) line in the sand: This far and no farther. Language is defined by its speakers: I'm an English speaker, and so I get to vote on how the language is used. As such, I will continue to make fun of people who say "I could literally die" as long as it is possible, and I encourage everyone to do the same.
Dictionaries are descriptive but conservative, so by the time a change gets included in major dictionaries, it's a sign the language shift has already happened; it would be futile to try to reverse it, because it's already ingrained and here to stay.
I think we’re getting a bit distracted here by the fact that the frequently used “I literally died [of embarrassment]” is obviously figurative because nobody would be able to use the verb “to die” in the first person in the past tense.
When you get to things like "She literally died" it becomes murkier. "I could literally die if I ate shrimp" is murkier still.
This is one of the many reasons I’d like a firm firewall between ‘figuratively’ and ‘literally’. It seems I’m constantly besieged by the “language evolves, deal with it” crowd, but what they don’t get is that it’s all fine and well when language evolves in such a way to become more precise (perhaps by adding terms that distinguish between things that were previously lumped together, such as ‘dumbphone’ and ‘smartphone’ splitting from ‘cellphone’, itself a portmanteau of ‘cellular’ and ‘telephone’, to distinguish it from a fixed line) or to make meanings quicker to convey in conversation (by abridging the latter to ‘convo’, for example), but the ‘figurative’ versus ‘literal’ debate is very different, because it’s one of those relatively few instances that add ambiguity, making meanings (at least potentially) harder to convey.
"I could literally die if I eat shrimp." "I could literally die of this disease." Those sound like "this actually happened" to me.
"I can't believe I did that in front of all those people -- I literally could die of embarrassment." "I can't believe I did that in front of all those people -- I could literally die of embarrassment". Those both sound like normal "meaningless emphasis" usages of the word 'literally'.
I take the completely opposite stance: things like the dual usage of "literally" are less symptoms of a hobbled language and more symptoms of hobbled language skills in those that complain about them.
And I mean that literally; rigid educational systems have stamped the fluidity of language out of far too many people.
Then you’re entirely willing to concede that “Quack quack quack, quack quack, quack quack. Quack quack quack quack.” is entirely intelligible because you can easily ascribe multiple meanings to various instances of ‘quack’ so that it makes sense?
The point being that there’s definitely a threshold beyond which ambiguity of meaning makes parsing a sentence and resolving its meaning impossible. The more specific (and distinguishable) terms are, the less risk there is of a misunderstanding. This is Information Theory 101.
"It works on my machine" remains a problem to this day.
While you can make various workarounds to rectify this, it still seems like "so make your machine the machine it runs on" is a very effective solution. Unlike the alternatives, once you commit to it, you can't forget or revert back to bad old patterns.
I've had this thought before: What if Amazon, one day, just decreed that all employees' laptops had to run Amazon Linux?
Maybe it's a good thing I'm not Jeff Bezos, but I think it would have a lot of positive effects.
Ex-Amazon here: development at Amazon is very different from that at many other companies. There is no Docker, and every small change is deployed to test stages using a CD system. Most tests and experiments are done on servers.