>What? Objective C is a lot more dynamic than Swift
Sure, that could be argued. But with Objective-C, you still had exposure to the "metal". You had to understand what a pointer is, and why you would want to use it. You had to understand memory management, even though ARC does the job (mostly). It was firmly entrenched in the world of memory-efficient embedded programming that mobile started out as, but now that developers coming from JavaScript/Python land see a familiar syntax, they've brought their patterns with them as well.
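To make "exposure to the metal" concrete, here's a minimal sketch of the bookkeeping the pre-ARC world forced on you, written with Swift's unsafe pointer API as a stand-in (the buffer and sizes are purely illustrative):

```swift
// Manual allocation: the pairing that pre-ARC Objective-C made you do by hand
// with alloc/retain/release. Miss the release and you leak; do it twice and
// you crash. ARC automates this for objects, but the cost model never left.
let count = 1_024
let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: count)
buffer.initialize(repeating: 0, count: count)

// ... use the buffer ...

buffer.deinitialize(count: count)
buffer.deallocate()  // your responsibility, not the runtime's
```

Swift hides almost all of this by default, which is exactly the argument: the costs are still there, the average app developer just never sees them.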
>So… your theory is that if people understood pointers this wouldn’t have happened?
Yes. But it has nothing to do with pointers specifically, just the mindset and training of the average developer who has had experience with them, vs. the average developer who has not.
There's an entire generation of developers now graduating from CS programs, hiring into Apple, and getting dumped on these application teams with zero real-world experience and their only language being Python or Swift. The result is you have tons of brilliant people who can quickly whip up a DFS algorithm, but don't understand that using 4MB of RAM for a JPEG is unacceptable, or that whatever dynamic thing they are asking the runtime to do might not always work as intended. That's why we get these massive feature lists nobody asked for with every iOS release, and zero emphasis on performance.
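To put a number on the JPEG point: a decoded bitmap costs roughly width × height × 4 bytes no matter how small the file was, so a 1024×1024 photo already eats about 4 MB. A rough sketch of the memory-conscious alternative, downsampling at decode time with ImageIO (the helper name and parameters here are mine, just for illustration):

```swift
import ImageIO
import UIKit

// Decode a JPEG at no more than `maxPixelSize` on its longest side instead of
// inflating the full-resolution bitmap. The decoded bitmap costs roughly
// width * height * 4 bytes regardless of how small the JPEG file was.
func downsampledImage(at url: URL, maxPixelSize: CGFloat) -> UIImage? {
    // Don't cache a decoded full-size image just to open the source.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
        return nil
    }

    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

The specific API doesn't matter; the point is that someone has to know the cost model before calling UIImage(contentsOfFile:) on a full-resolution photo and handing the result to a table view.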
I think you over-simplify — engineers slot into different roles/disciplines within large companies.
There are going to be engineers who have to deal with driver-level code and who know full well the limitations of memory constraints, thread overhead, etc.
No doubt you're describing the other half — the app engineers that use the API/SPIs. It might even be argued, though, that given a well-defined API they should not have to worry about how much memory a JPEG requires ... the API decompressing the image only when rendering to the destination or what-have-you. Pointers and memory management should be handled by the low-level parts of the language or the OS/kernel.
I happen to like the bit-banging, pointer walking, free-wheeling world of straight C but I don't begrudge higher level languages that are designed to tackle the more modern pressures of concurrency and "asynchronicity".
> The result is you have tons of brilliant people who can quickly whip up a DFS algorithm, but don't understand that using 4MB of RAM for a JPEG is unacceptable, or that whatever dynamic thing they are asking the runtime to do might not always work as intended.
I’m not sure where you’re getting this anecdote from, because I have not found it to be at all true in practice.
I’m not sure about the situation at Apple, but this is 100% true in the web development world today, with the exception that many of the new programmers hired straight out of General Assembly and the like can’t implement a DFS algorithm either.
It’s a nightmare for security and performance - the number of obvious, blatant security issues I’ve spotted and fixed just through luck alone is horrifying.
I'm sorry, but I don't know what "General Assembly" is. I live in the US, so is this some part of a degree program in your country (if you live somewhere else) that I'm not aware of?
But coming back to your point, there have always been new engineers with weak skills, just like there have always been smart engineers as well. I don't think the choice of programming language changes this fact significantly, although certain languages may have a slightly higher proportion of inexperienced programmers than others.
General Assembly is one of the popular US three-month coding bootcamps. There are others in the US, but I’m not familiar with their curricula.
As programming gets easier to learn, people spend less time learning programming. This has a number of negative knock-on effects, e.g. less understanding of and focus on correctness, performance, security, etc. Obviously there are lots of wider benefits too - but I suspect that the average person writing Objective-C today spent more time studying programming than the average person writing Swift today.
It sounds like we generally agree on that - but my claim is that this effect size is big enough to dominate almost all other considerations. I suspect the average C program is more secure than the average JavaScript web app, despite the absurd difficulty of writing correct C, just because of the ratio of new and old programmers in both communities.
Then how do you explain that the last update improved performance a lot on older devices? Last time I checked, there were no hardware upgrades in the downloaded update...