jasoncchild's comments | Hacker News

Indeed. There is also the fact that internal and external messaging is likely much different. I'd not be too surprised if there had been some attrition within their ranks (especially devs) leading up to this point.


A good deal of this comes down to personalities: yours and the rest of the team's. Even with tooling it can be difficult to stay connected... but for someone else it could be difficult to stay focused amid the ambiance that comes with a lot of "modern" office configs.


I agree with your disagreement. Furthermore, I'm curious how this edge case could have been mitigated using an alternative tool? Most of my VCS experience is with git, and a little with SVN.


As an experienced Hg user, I will say that what the author did would be about the same pain level in Hg (maybe even less if you use some extensions). Regardless, it is probably less painful with whatever tool you are most familiar with.

I would argue the author probably could have even used SVN (not that I would recommend that at all), given that he did some fairly manual scripting (looping through commit ids, etc.).

Of course, it is hard for me to tell without seeing his exact repository (files).


In SVN, this case would completely explode into manually applying a big patch.


"The easiest strategy in a case like this is usually to (go) back in time [...] That strategy wasn't available because X had already published the master with his back-end files, and a hundred other programmers had copies of them."

"a hundred other programmers had copies of them." isn't a problem/cannot happen with svn because svn uses a single master repository. Fix it, and everybody can fetch it.

That leaves the "going back in time" step. That isn't easy with svn. If you have a backup of your repository, you can create patches for the commits made after it, restore from backup, and replay those you want to keep.

If you don't have a backup, the svn administrator can use svnadmin to dump and restore revisions and remove them (http://superuser.com/a/315138). I suspect that will be a lengthy operation if your repository has lots of history. It also requires free space for a copy of your repository. Disclaimer: I have never used this tool.
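As a rough sketch of that dump-and-reload approach (paths and the revision cutoff here are placeholders, and this is untested, per the disclaimer above):

```shell
# Dump only the revisions worth keeping (here: 1 through 99),
# excluding the bad commits that came after.
svnadmin dump /srv/svn/repo -r 1:99 > keep.dump

# Create a fresh repository and load the kept history into it.
svnadmin create /srv/svn/repo-clean
svnadmin load /srv/svn/repo-clean < keep.dump
```

Note that this rewrites the repository wholesale, so every working copy would need a fresh checkout afterward.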


> If you have a backup of your repository, you can create patches for the commits made after it, restore from backup, and replay those you want to keep.

Maybe I'm just too young to have a good perspective on this (having only become a developer after git was fairly well established), but I thought that version control itself is supposed to be your backup.


A backup is by necessity a copy; with centralized version control like svn, you don't have a copy of the repository, so you have to make one somehow.

(Decentralized version control is only slightly different: if you lose the 'main' repository and don't have an exact clone (i.e., a backup), you may be able to glue together something close to or equal to what you had from the other repositories, but there is no way to know for sure.)


It is, for as long as your repository is undamaged.

It's rare, but sometimes a repository can get damaged beyond repair. Keeping it inside a cloud storage directory like Google Drive, Dropbox, or similar makes it prone to fatal errors.

In such a scenario a backup of the full repository is required. In git's case, remote sources can fill the 'backup' role pretty easily, as everything gets pushed to a remote server. This is obviously not enough safety for a bigger enterprise, which will need additional safeguards/backups, but it's sufficient for most people.
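Concretely, a full standalone backup of a git repository can be taken with a mirror clone (the paths here are placeholders):

```shell
# A mirror clone copies all refs (branches, tags, remote refs),
# which makes it a complete backup of the repository's history.
git clone --mirror /srv/git/project.git /backups/project.git

# Restoring is just pushing everything back to a fresh remote.
git -C /backups/project.git push --mirror /srv/git/project-restored.git
```

Unlike an ordinary clone, a mirror clone leaves nothing behind: every ref the remote has is reproduced exactly.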


The edge case could be mitigated by having a pre-defined development workflow and not seat-of-the-pants cowboy commits of "backend" and "frontend" changes that were in reality interwoven. Or even a basic code review where someone hopefully would say "WTF are you doing!?"


Agreed - this should have been a single push, and the dev should have squashed commits together where it made sense. I get that in the dev stage you might be all over the place, but a well-designed codebase with separation of concerns should have allowed the dev to group like commits into single areas and bring it down to something more manageable, like 20 commits instead of 400.


I'm an advocate of edge computing for sensor data aggregation. I don't think round-trip computation will be viable for soft real-time applications for quite some time, unless that computation requires operating over large datasets (where it's prohibitive to store all of it locally). Just my opinion (and some experience), hardly an authority.


It's funny to me; when I got my first taste of "web apps" it was writing C apps to process CGI requests a bazillion years ago. It is neat to see people advocating for natively compiled code in this area. I do lots of Node these days, and full stack, so I appreciate the client-side leaning for routing and rendering. If you don't need SSR then I'm sure you could create a very performant server in Rust or C. Sounds fun!


I sometimes toy with the idea of using FastCGI C++14 programs for a RESTful JSON service, to be able to handle a lot of requests on not much hardware. Though maybe spending a lot of time shaving tens of dollars off your monthly VPS bill is not a fantastic use of your time :D


There are some C and C++ web frameworks floating around that seem quite nice.



CGI and compiled Scheme over here. If you write some fairly simple wrapper functions, you can get an almost Sinatra-like API: very nice to work with.


Can you share anything more? I'm very interested. Chicken Scheme?


Yes, CHICKEN Scheme. Although I imagine Gambit would work equally well. I have ~3/4 of a single-file CGI framework on my hard drive. It's feature complete: the other 1/4 is just some optimizations and sugar over GET and POST handling, as they are by far the most common methods used.

In the future, I might add some wrappers over access of CGI environment vars, but you can already look those up by hand, using the standard environment access routines.


SSR?


Server Side Rendering


Ah ha, thanks both! Rust web frameworks can absolutely do server-side rendering; there are multiple template engines, even.


My guess is Server Side Routing.


This is interesting; why are dev tools not seen in these FOSS offerings?


Some want to be small and lightweight, e.g. Dillo, so I can imagine them rejecting such features altogether.

Others don't have features like a DOM, JavaScript, etc., so there's not much they could offer that "view source" doesn't already do.

Others don't have a suitable interaction model, e.g. w3m is effectively a pager for turning HTML into ANSI escape codes; conkeror is keyboard driven so would need a radically different UI for it to be effective; etc.

Others, like Netsurf, Konqueror and the Webkit wrappers (Midori, Arora, etc.) would probably like a dev tools feature, but don't have enough developer power to implement one.


Agreed. A lot of the OP is a bit hyperbolic in its expressiveness.


I mean, you don't have to use JSX...

...but I agree with many of these comments; I thought it was odd (to be charitable), but after using it I find it to be quite nice. Redux is what I see overused (imo), and at times it's almost an antipattern.


Fantastically dramatic assessment! I'm curious just what aspects of the pre-iPhone Apple were preferable?


Off the top of my head I can think of a couple:

1) OSX was a viable operating system that was first class in the Apple ecosystem.

2) Laptops and desktops that were actually better than the competition including models geared toward power users.


1) OSX is not viable any longer? Really? As evidenced by what? I mean, looking at WWDC talks, there's a ton of improvement year over year. Of course there's always more things that could be done, but remember, the core OS team is very small (especially compared to Microsoft, for example). Not to say that there isn't stagnation in some UI aspects (Finder, I'm looking at you...) but saying that the entire OS is no longer viable is a bit dramatic.

2) I don't see much competition that can even match Apple's build quality. The ones I do see, I am excited about, because that means they are striving to improve, and can challenge Apple and force them to improve more. Much better that, than having Apple rest on their laurels because nobody is even trying to get close.


I agree, but on the viable operating system part, I think it's important not to discount the impressive research that has gone into the power-saving and security aspects of the Darwin kernel, as well as innovations in saving application state and state sharing between devices.


Thanks! I've not much personal experience of pre-iPhone Apple. I'm curious about why OSX is no longer viable.


Mac OS X or whatever they're calling it at the moment is buggier than it used to be. 10.11 is where I really started noticing it. The OS no longer deals with memory pressure well, causing the computer to freeze up when Safari's memory usage spikes. The new USB stack in 10.11 often has me restarting the computer to get my devices working again. In addition to big things like that being broken, I'm finding more and more little things not working like they used to. I recently came across a bug that would sometimes not allow me to select a file in an open dialog with my cursor, but I could select it with the keyboard and arrow keys.

Apple really needs a Snow Sierra release to just concentrate on squashing bugs.


I'm not so sure it's no longer viable so much as stagnant.

Looking at some of the major "features" - more emoticons and reactions in Messages... Siri.

Some were certainly nice - Apple Pay... but I don't really see PIP being used, or Continuity (which usually actually pisses me off, because I'll switch from my Mac to my phone with an image or similar on the Clipboard, and have to wait several seconds while Continuity syncs - shame, such potential).


OSX/MacOS is still viable but it doesn't receive the attention it once did. It feels second-rate compared to the mobile OS. I get the feeling one day Apple is going to release the new MacBook and it will just be an iPad with a keyboard.


An iPad with a keyboard. God help us all.

And looking back on ye olden MacOS 9, prior to OSX, there were things that just felt nice about the interface that no longer hold sway.

Clicking on an icon gave an immediate response. The mouse cursor felt more precise. Compared to these nightmarish touchpad/button slabs (and worse still, touchscreens), mouse movement and pointer precision were lightyears beyond the way things seem to work lately.

I have to retry things a solid 1/3 of the time, because some kind of gesture or taptic garbage tripped me up, and pushed me into an unintended outcome. I have to focus, and concentrate on finessing my hand motions and pressure or I am punished by mistakes that need an undo. When I'm in a rush, life is hell. This makes me hate life.

No zealously decorative animations meant (on an unstressed system with low CPU/RAM load) the menus flashed before you like lightning. Text was crisp, unshaded pixels, with no font smoothing. There were no transparency overlays, and so everything was high contrast. Nothing was EVER lagged by network traffic except browser images and FTP/SMB shares. (and doom deathmatches)

Most of this was also true of Windows 2000 at the time.

I've often held the belief that had operating systems stood frozen (particularly GUIs) while terabytes of disk space, gigabytes of RAM, and dozens of CPU cores inflated our hardware resources, we'd like our computers more, and fewer people would be as obnoxiously incompetent with computers as we see. I'm probably wrong, but the idea feels right.


It's poorly maintained. Large parts haven't been updated in a decade.

Take a basic command, `readlink -f`. It works everywhere except MacOS X. readlink comes from FreeBSD, which added `-f` years ago, but MacOS X hasn't resynced its code with FreeBSD in ages, so it's running a version from around 2007. I can show you plenty of examples of this sort.

It would take very little effort to resync on a regular basis, and it could probably be automated.

This creeping incompatibility due to being outdated and unmaintained is becoming increasingly problematic. Not something a regular user cares about, but for development and technical users, it's lacking.


The company's name was "Apple Computer," for one thing, and not just "Apple."


Idk, pointing to issues like deep dependencies within the Node community isn't the best argument. It's healthy to be skeptical of dependencies imo, especially in light of issues like the infamous left-pad problem.


My comment wasn't about deep dependencies; it was about parent suggesting 200 lines is too few to be a legitimate library.


Ah, I see. Noted!

