I completely agree. "Smart" isn't used sarcastically here. It's an adjective that most young devs would (rightfully) like to be referred to as. But I see experienced devs as less interested in looking/being "smart" (or clever, or whatever word you want to use) and more interested in just getting things done in a way that lets the org make money, and in getting rid of as much BS (unrelated to the above) as possible.
Maybe there's a better way to outline this difference.
Obviously a great article (for real), but it's sad that so many people still misunderstand containers and orchestration.
> I’m not smart enough to figure out docker confiuration [sic], so instead I compile my web application into a self contained statically linked binary executable file.
That might be fine for something that needs to run once. What if it crashes? And what about everything else that's required: a database, a centralized cache, queues, workers, load balancers?
You can't build a fully functional product without those, so what are you going to do? Install all of the dependencies on a VM, by hand? What if you have to migrate to a different machine for whatever reason? Are you going to write scripts to automate all that, and test them, and change them, and test them again? What if your program crashes? Are you installing supervisor? Will you turn your nice little binary into a daemon and deal with the whole init.d/systemd configuration? What about the development environment? Are you going to do all that again when you switch laptops or when you hire someone?
(I personally prefer a PaaS like Heroku, Vercel or Render, but that's beside the point.)
Complexity very quickly gets out of control. What if you could, instead, define what your project requires in terms of environment, services, dependencies, networking, etc., and just hand it to a magical (today we'd say "AI-powered") DevOps entity that makes sure everything runs smoothly no matter what machine it's running on, assuming it has enough resources? Locally you'd just run a single command and the whole thing would magically come to life.
That's what Docker and Compose/Kubernetes are: a very convenient abstraction layer that makes it easy to define your underlying architecture in a declarative way and have something else worry about how to get there.
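To make that concrete, here's a minimal sketch of the kind of declaration I mean (the service names, images, and ports are illustrative, not from any particular project):

```yaml
# docker-compose.yml — one file declares the app, its database,
# and its cache; Compose worries about how to get there.
services:
  web:
    build: .                 # build the app image from the local Dockerfile
    ports:
      - "8000:8000"          # expose the app on the host
    depends_on:
      - db
      - cache
    restart: unless-stopped  # crash recovery, no systemd units needed
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data  # persist data across restarts
  cache:
    image: redis:7
volumes:
  db-data:
```

Then `docker compose up` brings the whole stack to life; run the same command on a new laptop, or a new hire's machine, and you get the same environment.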
And, let's be honest, it's not even that complicated. You get 80% of the benefits with (less than) 20% of what Docker etc. are capable of. I learned how to use it once, 8 years ago, and I'm still using it to this day, just occasionally googling the odd command or Dockerfile keyword. I've seen the same with git: so many people just memorize a few commands and can't be bothered to learn how it conceptually works.
I guess I just went on a long rant here, but I'm really surprised how, in a constantly evolving industry like tech, there are so many people who are afraid of learning something new or of changing how they do things, sometimes out of pure irrational spite or fear of whatever is new. Reminds me of the bit from Bret Victor's talk "The Future of Programming" where (paraphrasing) he says "binary people thought assembly was a bad idea and assembly people thought C was a bad [...]" etc. etc.
I've never spent more than $400 on a smartphone; I always bought second-hand Android phones. My income went up in the last couple of years, and a few months ago my phone broke. I bought a $900 iPhone.
If it's good people will buy it. I will buy it. No doubt about that.
> It's interesting that HN is completely overloaded right now...with people coming to announce how unimpressed they are and how it isn't for them.
Agreed, polarization is a good sign that this is going to make an impact. Ironically "unimpressed" is communicated by a lack of response, not by a negative one (which more likely indicates people's beliefs are being challenged). The only way this would be a flop is if they shipped something really buggy and worse than the competition (which at the time will be the Meta Quest 3). Otherwise...
> it's going to be a hit. HN is going to be swamped with "How I used Vision Pro to..." posts when it comes out.
100%!
> did they talk about using it as a display for a Mac? I'd love to use a real keyboard mouse interacting with flexible Mac displays.
Looks like it's going to be a standalone device that you can pair with a Magic Keyboard and trackpad. Considering it ships with an M2, I expect iPad/Air level performance (assuming the spatial stuff is handled solely by the R1). I can totally see myself using it as "the one device" (pun intended) and getting rid of my MacBook, assuming there's an easy way to share content with someone who's next to me, e.g. on my iPhone.
> Agreed, polarization is a good sign that this is going to make an impact.
Virtually every new Apple product generates this sort of response, and while many Apple products have had a large impact, just as many haven't. I don't know how much predictive strength "this new Apple product generated a lot of conversation on HN" has.
Exactly. Especially in this case, where we knew this was coming for months. It's generating a response because people have been waiting to talk about it since the rumors started.
For myself, my "unimpressed" reaction is because the experience they're selling is the same as what Meta has been trying and failing to sell for years now. It's definitely typical Apple—wait for the tech to mature and execute better than anyone else—but I'm unconvinced there's an actual need being filled here.
The iPhone took a market that had already taken off in business—PDAs—and blew the roof off of it by revolutionizing the tech. The VR-for-productivity market is practically non-existent, and even in gaming it's still very niche. Neither is anywhere near where PDAs and BlackBerrys were when the iPhone made it big.
I'm just not convinced the "execute better" strategy will work when there is no proven market.
The YouTube stream I watched mentioned it can detect when you are looking at your Mac and offer the screen up in the goggles with full sizing and layout control. Your Mac appears just as another app and you can multitask as usual.
If you can put up all the windows from your 3 display workstation, why would you want to simulate displays?
There’s a similar approach available for the Meta Quest 2 (and I’m sure the Quest Pro and Quest 3), but it takes a little reorienting to stop thinking in terms of “screens”.
Wow, this makes me think of the fabled zooming interface[0]. Why limit yourself to a "monitor" or a set of windows when the sky's (literally) the limit? With a ZUI you could have the entire world at your fingertips. Browser history (or git commits) could just be further away from you in the Z direction. Or maybe it's behind you and you just have to turn around to see it.
An old term for it is "Spatial UI". The window you need is right where you left it in the other room; navigating between your apps becomes like navigating around your house. Your coding apps are in your office and your social media apps are in your bedroom, and now you won't get the two confused and "accidentally" scroll social media while working.
In some ways this is particularly great, because humans evolved to have a lot of spatial memory in this way.
(It's an interesting footnote here that the early pre-OS X Mac OS Finder was sometimes much beloved [or hated, depending on your OCD predilection and/or personality type] because it was a Spatial UI. Files and folders would "stay" where you placed them and you could have and build all sorts of interesting muscle memory of where on your desktop a file was or even a deep tree of folder navigations, with scenic landmarks along the way. Apple discarded that a long time ago now, but there was something delightful in that old Spatial UI.)
Check out the “Platform State of the Union” video stream for more detail on that. They discuss windows, volumes, shared space, spaces.
https://developer.apple.com/wwdc23/102
Ah yes, but those apps appear to be just some kind of iOS type thing, so at least for me I couldn’t really use them for productivity. (The lack of coding tools on iOS also really kills any possible productive uses of my iPad in this way, unfortunately)
(That’s why I was focusing more on the mirror-your-Mac functionality)
We might be at a point where that could change. This device seems like it could be close to providing the performance needed to start running productivity apps, and it also provides the screen real estate. Those types of apps have to be coming to iOS in the next few years.
I don't think it can simulate any app. More likely it's a feature akin to Continuity, and you have to have the corresponding app installed on your Vision Pro to pick it up from the Mac and continue working on the headset.
While they did confirm Continuity would work across visionOS, they also showed direct footage of your Mac monitor being displayed as an app while using your Mac.
I don't get this. There is no live demo so far, only a pre-rendered ad. So you have no idea what the actual experience will be like (in an industry famed for over-promising and under-delivering; remember Magic Leap?). The use-cases are also dubious: you can... watch TV alone? Scroll through photos alone? Take pictures? Only the virtual desktop thing was something that I thought "that's useful".
I'm unimpressed so far, maybe that will change maybe it won't. But right now I don't see anything worth being impressed by.
They gave 30 min demos to WWDC attendees the following day.
I'm excited mainly for two reasons: fantastic eye and hand tracking (according to reviewers such as MKBHD) and replicating my office/entertainment setup wherever I am (except for shared experiences, that is).
I think Apple tried to nail the seamlessness of the experience, rather than give you some amazing use case nobody ever thought of. That will be a good challenge for developers.
>Agreed, polarization is a good sign that this is going to make an impact. Ironically "unimpressed" is communicated by a lack of response, not by a negative one (which more likely indicates people's beliefs are being challenged).
To quote Elie Wiesel: "The opposite of love isn't hate, it's indifference." It's an extremely good barometer.
I took Andrew Ng’s famous Coursera ML course and I still find this stuff extremely fascinating and as close to magic as you can get (in the digital realm).
I agree that a higher degree of understanding lessens the emotional impact, but often you just need to pause for a second, look back, and appreciate what we've accomplished as a species.
Sometimes I get on a plane and mid flight I realize “wait a second, I’m in a metal box flying in the sky at 800 km/h and I can breathe, eat, drink and watch tv” and I get goosebumps, even though I’m kind of familiar with the physics of it.
Maybe some things just have an inherent “awe factor” and no matter how well you understand them you still get those butterflies. Or maybe I’m just blabbering :)