I found this: https://textslashplain.com/2016/03/06/using-https-properly/ Seems like it at least partially corroborates OP's recollection!


Thanks, I stand corrected! Apologies to the OP.


move fast break things


Agreed. These technologies are the backbone of a lot (most?) of IoT devices, so unlike the article's framing of "devices" as "consumer handheld phones (that are often replaced every 3 years)", the impact here would be much deeper. And these technologies have been, and continue to be, sold as stable. For example, see https://www.nordicsemi.com/Products/Wireless/Low-power-cellu... . Quote: "Future-proof: LTE-M and NB-IoT are slated for support beyond 2040, ensuring devices' long lifespans. Subscriptions guarantee a reliable network, in contrast to other LPWANs that could shut down preemptively, risking your business." On the other hand, I guess I haven't seen as many IoT devices choose T-Mobile as a carrier either, so maybe it's just that T-Mobile knows their market.


I have worked at such a place as well and would strongly recommend this. If it works out right, you can remain in software and do adjacent jobs to manufacturing. I suspect industrial / specialist products would be the better side to get in on, at least at first.


This is an interesting article, and the subject of alternate paths to payment seems quite relevant. It listed a strategy or two I hadn't seen before. What other strategies have folks from HN seen? Did they work or not?


Fair point. But putting on my user hat, that Smart Paste sounds pretty handy if it works half decently. I'm thinking CRM entry use cases and the like.


But if Smart Paste makes errors even 1% of the time, that would be astonishingly good from an LLM benchmark POV and still completely unacceptable from a CRM reliability POV. Nobody wants a data entry system where you have to fix 1 in 100 rows because the computer made an error.


My experience was quite a bit different from SWE. For me, it was as part of an R&D group, assigned more closely to, say, signals processing, and my coworker was a physicist. The big change was that the skillset was more about rigorous thinking about the model itself and challenging how it worked. I had other parts, wiring up the AI/ML, that were more SWE for sure, though. What was your experience like?


Yeah, I went looking for debug.exe on the listing as well. There was something just so visceral and direct about its usage that I enjoyed.


As an interesting contrast, I work on a WPF app professionally that has been around since at least .NET 3.5 (with references to .NET 2.0 DLLs at times) - at least 12 years - and as much as we give MS crap for abandoning WPF, I can still crank the project open in the latest Visual Studio, probably transparently upgrade to .NET 6 - and everything just works. There are a lot of advantages to web-based frontends, but sometimes I think desktop apps are underrated from a stability perspective.


Windows native desktop apps are OK in some controlled (or mandated) environments. For example, 20 years ago I was working for a large company that handed nothing but Windows XP laptops to every single employee.

But what if the customer has a mix of Mac, Windows, and Linux laptops? Or if, before even getting there, they decide they want a web app and so only engage companies that build web apps? A native desktop app will never happen.

By the way, that company I was working at 20 years ago, despite being Windows-only, had a number of web apps, including time tracking and everything else. I didn't investigate the reasons, but I could guess one: distribution of updates.


Yes, update speed, iteration speed, and general ease of deployment are a huge part of it. Also the ability to develop on Mac/Linux but deploy to Windows.

One of the popular features of the desktop deployment tool I like to shill here sometimes is web-style "aggressive updates", which basically means synchronous update checks on every launch. If the user starts the app and there's an update available, a delta is downloaded and applied, and the app then launches, all without any prompts or user interaction required. As long as users restart the app from time to time, they stay fully up to date, so bugfixes can be deployed very quickly. This isn't quite as fast a deployment cycle as a multi-page web app, but it is comparable to a SPA, as users can leave tabs open for quite long periods.
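In pseudocode, the launch flow is roughly this (a simplified Python sketch of the general pattern, not Conveyor's actual implementation; the update URL, JSON fields, and patching step are made up for illustration):

    # Rough sketch of a synchronous "check, patch, then launch" cycle.
    # The endpoint and file layout here are hypothetical.
    import json
    import sys
    import urllib.request
    from pathlib import Path

    CURRENT_VERSION = "1.4.2"
    UPDATE_URL = "https://updates.example.com/myapp/latest.json"  # hypothetical

    def check_for_update():
        """Synchronously ask the update server whether a newer version exists."""
        try:
            with urllib.request.urlopen(UPDATE_URL, timeout=3) as resp:
                latest = json.load(resp)
        except OSError:
            return None  # offline or server down: just launch what we have
        return latest if latest["version"] != CURRENT_VERSION else None

    def apply_delta(delta_url, install_dir):
        """Download a delta patch and stage it in the install dir (placeholder)."""
        data = urllib.request.urlopen(delta_url, timeout=30).read()
        (install_dir / "pending.patch").write_bytes(data)
        # a real tool would verify signatures and patch the binaries here

    def main():
        install_dir = Path(sys.argv[0]).resolve().parent
        update = check_for_update()
        if update is not None:
            apply_delta(update["delta_url"], install_dir)
        print("launching app")  # hand off to the real entry point

    if __name__ == "__main__":
        main()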

Weirdly, AFAIK no other deployment tool has a feature like that. It's unique to Conveyor. Desktop update tech is pretty stagnant and is optimized for a world where updates are assumed to be rare and optional, so support for fast delta updates or synchronous updates is often rough or missing. When your frontend has a complex relationship with a rapidly changing backend/protocol, though, that isn't good enough.

Also, from the start we made it able to build packages for every OS from your dev laptop, whatever you choose to run, with no native tooling required. So you've got a deployment process very similar to an HTML static site generator. Hopefully this closes the gap between web and desktop deployment enough for more people to explore non-web dev.


If memory serves correctly, I think that's close to how ClickOnce worked/works? - but Windows only. One of the apps I worked on does this, but with a homegrown framework. Definitely the sort of thing it's nice to delegate to a specialized system where possible.


Well, Java Web Start could also do it, I think, but none of these systems are active anymore, and of course none of them were a "native" desktop experience.


Too bad Microsoft refuses to work on proper cross-platform WPF support. I've tried Avalonia UI[0], but it's just not the same. For instance, the lack of a proper out-of-the-box virtualized list.

[0] https://avaloniaui.net/


For a side project on image classification, I use a simple folder system where the images and metadata are both files, with a hash of the image acting as the key/filename - e.g. 123.img and 123.metadata. This keeps every item independent of any database. Then, as needed, I compile a CSV of all the image-to-metadata mappings and version that. It works because I treat the images as immutable, which is not true for some datasets. On a local SSD, it has scaled to >300K images.

Professionally, I've used something similar but with S3 storage for the images and a Postgres database for the metadata. This scales up better beyond a single physical machine for team interaction, of course. I'd be curious how others have handled data costs as the datasets grow. The professional dataset got into the terabytes of S3 storage, and it gets a bit more frustrating when you want to move data but are looking at thousands of dollars in projected egress costs... and that's with S3, let alone a more expensive service. In many ways S3 is so much better than a hard drive, but it's hard not to compare it to the relative cost of local storage when the gap gets big enough.
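To make the local folder scheme concrete, here's roughly what it looks like in Python (a simplified sketch; the hash choice and the "label" metadata field are just illustrative, not how my actual project names things):

    # Minimal sketch of the hash-keyed folder layout described above.
    # The dataset path and the "label" field are arbitrary examples.
    import csv
    import hashlib
    import json
    from pathlib import Path

    DATASET_DIR = Path("dataset")

    def add_image(image_bytes: bytes, metadata: dict) -> str:
        """Store an image and its metadata side by side, keyed by content hash."""
        key = hashlib.sha256(image_bytes).hexdigest()
        DATASET_DIR.mkdir(parents=True, exist_ok=True)
        (DATASET_DIR / f"{key}.img").write_bytes(image_bytes)
        (DATASET_DIR / f"{key}.metadata").write_text(json.dumps(metadata))
        return key

    def compile_csv(out_path: str = "dataset.csv") -> None:
        """Roll the per-image metadata files up into one versionable CSV."""
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["key", "label"])
            for meta_file in sorted(DATASET_DIR.glob("*.metadata")):
                meta = json.loads(meta_file.read_text())
                writer.writerow([meta_file.stem, meta.get("label", "")])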


Also, the TensorFlow.js WebGPU backend has been in the works for quite some time: https://github.com/tensorflow/tfjs/tree/master/tfjs-backend-...

