
tl;dr: In the 1950s, doughnut shops were some of the first food businesses commonly open late at night. They became hot spots for police working the night shift, since they offered a place to grab a snack, fill out paperwork, or even just take a break.


Fair point, thanks. Changed the title to be more accurate.


Rich made some interesting points on developing libraries in a way that doesn't introduce breaking changes (for the calling code). Does anyone here agree with (or have counterpoints to) his suggested approach?


Clojure has been the most stable ecosystem I've ever dealt with. Once you get into it, it's not uncommon to see libraries that are years old and still function perfectly. I'm not even scared to update Clojure to the latest alpha builds, because things just always work.

There's also this to look at on the subject: https://lkml.org/lkml/2018/8/3/621

Reddit discussion: https://www.reddit.com/r/linux/comments/95b1hf/linus_torvald...


It is amusing, because the very thing that leads to stability is among the things that causes people to claim ecosystems like CL have gone "stagnant."

Similarly, TeX shows the hallmarks of stability. I can reliably rebuild any document I've ever written. However, that means the styles I used to write documents still have to be supported. Contrast that with HTML, where they constantly introduce new methods to do something because they changed their mind about how it should be done. Worse, they no longer support the old ways, because those were supposedly "never supported."

I think there is some merit to change driving progress. It is just frustrating when so much change is effectively user hostile.


Yea, that was my impression of the CL ecosystem, and Clojure made me rethink it. Nowadays it's pretty much the other way around for me: CL serves as an example of how Clojure could keep "dying" and still be a useful tool for years to come.


I think part of the point of clojure is a hosted, small core, programmable programming language can't really die, it would just be dormant and could spring back whenever someone needed it.

It's weird to me when people claim Clojure is dying, yet the ecosystem is evolving rapidly and people are making money with it. It's not a model that's easy to starve, it's too lightweight to collapse under its own load, and there's no way to kill it, so how is it dying?

Mindshare ebbs and flows, but being THE programming language was never clojure's goal.


I think most of the crowd claiming that Clojure is dying feel that way by virtue of it being a Lisp. And, oddly, most languages tend to pull more and more features into themselves, as if they are all trying to become an uber language.


You might be right. I do think that by being intentionally hosted, it avoids a number of traditional lisp problems. If it was a stand alone implementation, I wouldn't be nearly as bullish on it.


I have enormous respect for Alex et al. who are maintaining Clojure at Cognitect. From my observations there tend to be few changes in core, which is reinforced by a fireside chat with the guys at Cognitect showing that the level of Java code churn has continually decreased over time. Everything has benefits and tradeoffs. The benefit of this lack of churn is a stable foundation, but I think the tradeoff is that there aren't as many eyes on that code, and the depth of familiarity required to improve the code for legibility, maintainability, performance, and security is lost with that lack of intimacy. Much of the Java code is undocumented and at times can be hard to follow, with unused variables and few if any tests available to validate behaviour. That's delegated much higher up the stack.

During the release of 1.9 there were some issues relating to spec, libraries, and build tools in the pre-release versions. So I'm not sure I would say the rule is followed to the letter such that consuming alpha releases is always safe. It was a prerelease candidate, so no biggie. I know there were some discussions on the mailing list about a formal security audit of Clojure; I'm not sure where that went, but I'd be interested in seeing the results.


> Maybe it worked because the user had taken the bug into account, maybe it worked because the user didn't notice - again, it doesn't matter. It worked for the user.

That's also an argument for keeping security holes around. By definition those are bugs and users (albeit malicious ones) depend on them. How dare you fix a security hole and break a perfectly working exploit!


Only at an absurd reduction. Few security holes are actually built on features for users.

Now, there is a tradeoff between security and convenience. That is also why fraud techniques haven't taken away credit cards as a thing. To that end, sometimes it is better to engineer a detection/mitigation strategy instead of removing the convenience.

And life seems to be full of more strawmen than makes sense. So, yes, you can easily find examples for this in either direction.


I imagine in the case of a kernel that should be a rule beyond any scope of discussion. And maybe for any piece of software that sits at the bottom layers of any stack.

You just can't break backwards compatibility.

This was a big mistake Python 3 made.


The problem with almost all non-lisps is that you have to add / modify syntax to get a lot of new features (think async/await), whereas in a lisp, most such changes are done via macros and require no breaking changes to the language itself.


> You just can't break backwards compatibility. This was a big mistake Python 3 made.

At least Python's backers had the guts to release a breaking version 3, unlike some other dynamic languages, specifically Ruby and Apache Groovy. Ruby's backers have tentatively slated 2020 for a (MRI) Ruby 3 release after two more Ruby 2.x versions. As for Groovy, 2 months ago its backers canceled its version 2.6 release which was to backport the planned features of Groovy 3 into a JDK 7 compatible release. Looks like we won't be seeing version 3 of either language anytime soon.


> At least Python's backers had the guts to release a breaking version 3, unlike some other dynamic languages, specifically Ruby and Apache Groovy.

The backward-compatibility breaking Ruby release often held up as a parallel to Python 3—Ruby 1.9—was released December 25, 2007, about a year before Python 3.

If Python had as much guts as Ruby in this area we'd be seeing a backward-compatibility breaking Python 4 around 2021.


>>At least Python's backers had the guts to release a breaking version 3,

If you absolutely must break backwards compatibility, break it once, and never again, in a way that fixes your language permanently. Like Perl 6.

Python broke backwards compatibility for a print statement.


Gradle is the only thing holding Groovy alive, beyond maintenance projects.

I remember attending JSF Days in 2009 and hearing how we would all be writing Groovy JEE Beans in the near future, shown in different ways across a few sessions, and here we are.


... and Android Studio could be the only thing keeping Gradle alive, beyond 20-line build scripts. When Google finally releases Fuchsia, it will virtually replace Android's market share within 2-3 years. Because Flutter uses Dart for building apps, it'll probably go with Dart for specifying build scripts as well. Although one of the Apache Groovy PMC members is probably busy politicking inside Google to get them to use Gradle for Flutter, hopefully that team won't repeat Android's mistake. Otherwise the third-party app market for Fuchsia will be stillborn.


Yep, if it wasn't for Google having the silly idea of adopting Gradle, I would never have bothered to learn it.

I don't suffer from XML allergy and am pretty fine with Maven in what concerns Maven projects.

Actually I was pretty happy with Eclipse + Ant + ndk-build as well.

To this day, the NDK has already gone through a couple of Gradle build variations, and it still doesn't support NDK libraries across NDK projects, as AARs only support binary libs for Java apps.

I follow Fuchsia with attention, but it might end up just as Brillo.


There was a freedom I feel Ruby had that Python didn't, which allowed Ruby to successfully go from 1.8 to 1.9:

* Ruby had better dependency management. Python only recently introduced Pipenv, which provides a standard approach for managing development and production dependencies. Virtualenv has been around for a while but didn't handle scope.

* Ruby wasn't used as extensively as Python in stuff like system libs and start-up scripts so system packages weren't constrained in the same way.

* Ruby's ecosystem was more focused around web development. Rails in particular was fairly early in adopting new versions of Ruby, within about 1 year of the release of Ruby 1.9. Django, on the other hand, took about 5 years to ship its first Python 3 compatible release.

* "microbenchmarks" generally got better with Ruby releases whereas Python 3 seemed to have gotten worse from what I remember. I don't think microbenchmarks are terribly useful out of the context of an actual application but many people use them as indicators.

* subjectively I think the Ruby community was more committed to unit testing which made "fearless upgrades" a little more palatable.


That's why it's called Python 3 and not Python 2.x.

Think about it like a totally new language.


If it's like totally a new language, it should be called TotallyNewython 1.0


AlmostTotallyNewthon would be better, correct. It would shift the discussion from 'upgrade' to 'migration' without all the usual cries. Other than that, it's understandable that some architecture decisions from 1991 haven't aged well.


I think “Perl 6” was already taken, so they couldn’t use that :-)


This is essentially the model the Go community is aiming for with its versioning standards. It's also the philosophy the Linux kernel takes with e.g. syscalls.

It's something a lot of libraries could learn from. Most users want to build software for the long term and tend to avoid upgrading libraries too often.

Fwiw, this is a large part of Rich's Spec-ulation [0] talk.

[0] https://www.youtube.com/watch?v=oyLBGkS5ICk


I don't know the specifics of the Go standards, but semantic versioning doesn't add anything if you never break your consumers.


It sounds like the philosophy is "don't break your consumers", which I agree with. Somebody else already linked his Spec-ulation talk, but I think somewhere in there he compares the 10 minutes a library author spends on deprecating or removing some piece of code versus the 10 minutes each and every one of your consumers spends trying to determine why a dependency they put trust in broke their application.

I do think it can be very difficult to know if you, as a producer, are breaking any consumers. I'd love to see some sort of OSS build tool and CI system that automatically rebuilds and tests all consumers of your library to give you, the library author, more information about the changes you make before releasing.


Rust (core) uses the cargo ecosystem as tests


Similar idea in ClojureScript: https://github.com/cljs-oss/canary


Around 14:50 in, Rich talks about breaking changes and mentions a library he worked on (codeq) to track function changes (via checksums, not semantically, but still) instead of looking at a library as 'changed' as a whole. This makes a ton of sense.

Knowing a set of functions I rely on did not change (at all) provides more peace of mind to me than just knowing the function signature is still compatible.

Versioning (or at least understanding changes) on the function level in addition to the library level sounds so powerful to me that I wonder why this concept isn't more mainstream yet. Are there other examples?

[edit: I found codeq: https://github.com/Datomic/codeq#codeq]
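To make the checksum idea concrete, here's a minimal sketch (not codeq itself, which works at a deeper semantic level via Datomic) that hashes each function's source text so two releases can be diffed function-by-function. All names here are illustrative.

```python
# Sketch of function-level change tracking: hash each function's source
# so a consumer can see exactly which functions changed between releases.
import hashlib
import inspect

def function_checksums(module):
    """Map each top-level function name in `module` to a hash of its source."""
    sums = {}
    for name, obj in vars(module).items():
        if inspect.isfunction(obj) and obj.__module__ == module.__name__:
            src = inspect.getsource(obj)
            sums[name] = hashlib.sha256(src.encode()).hexdigest()
    return sums

def changed_functions(old, new):
    """Given two name->checksum maps, list functions whose body changed."""
    return sorted(name for name in old.keys() & new.keys()
                  if old[name] != new[name])
```

If `changed_functions` comes back empty for the functions you actually call, the upgrade is a non-event for you regardless of what the library's version number says.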


He’s been thinking some really interesting thoughts about dependency management and libraries. It sounds like if he had his way, library versions might disappear entirely.

From the interview: “But there are really two very distinct kinds of change - there are breaking changes, where your expectations have been violated, and there are accretions, where there's just some more stuff where there wasn't stuff before. And in general, accretion is not breaking.”

To paraphrase stuff I’ve heard him talk about elsewhere, your library should never change a function to require more input or provide less output. If you decide to do that, call it something else and keep the old function around. He’s talked about function-level versioning too, touches on it here a bit. If I use library functions x y and z and the maintainers only broke (the contract for) functions a b and c, then I probably don’t care and can grab the latest version for its other new goodies. But that decision still requires me to read the library source or at least a changelog, even though it’s simple enough for the right tool to be able to report it to me. Spec looks like a step in that direction.


I agree with the principle that you shouldn't arbitrarily change APIs. I disagree if it is extended to being an unbreakable contract. There are specific scenarios where people should consider breaking changes, most of which fall under some form of improving the usability of the library:

1. improving/providing secure defaults.
2. reducing the API surface so that usage is more evident.
3. refactoring that eases maintenance.
4. improving performance where appropriate.
5. probably others I haven't thought of.

In RFC style I would say it's a SHOULD rule rather than a MUST.


In all of these cases you can add new functions without removing (and breaking) the old ones.


Agreed, in many cases you can do this; however, with point 2 the removal of functions is literally the aim. The accretion of functions benefits legacy systems, but its tradeoff is potential harm to users new to and unfamiliar with the library. Accretion creates a cognitive overhead (even if only minor) for both the maintainer and new users: maintainers when they return to the code to update and modify behaviour, and new users when they seek to understand the library through documentation, examples, and usage. I don't think it's a coincidence that a number of languages and libraries acknowledge this by having "one correct way to do X".

Using a concrete example relating to security: LibreSSL maintained much of the API surface that OpenSSL provides. In essence, they aimed to provide a "drop-in" replacement. However, there were whole families of algorithms and functions which they deemed unsafe/unfit for purpose (e.g. the FIPS algorithms). I think that's a perfectly valid exception to the rule. It acts as a canary in the coal mine, and you have options: fix your code or defer upgrading.

I would advocate for thoughtful deprecation cycles over ossification of poor APIs and algorithms.


Elm goes a step further: it enforces semantic versioning. If the package has any breaking interface changes, it forces a major version change.
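The core of what Elm's package manager does can be sketched in a few lines. This toy version treats an API as just a set of exported names; the real tool also diffs type signatures, but the bump logic is the same shape.

```python
# Toy Elm-style semver enforcement: compare the old and new exported API
# and compute the minimum version bump the publisher is allowed to make.

def required_bump(old_api, new_api):
    """old_api/new_api are sets of exported names."""
    if old_api - new_api:      # something was removed: breaking change
        return "major"
    if new_api - old_api:      # something was added: pure accretion
        return "minor"
    return "patch"             # same interface: implementation-only change
```

Because the tool computes the bump from the interface diff rather than trusting the author, a "minor" release that secretly removes a function is simply impossible to publish.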


That's not a step further, you lose the ability to use the latest version without code changes.


Which in any case you'd have to do in other languages. What Elm provides is a way for library users to decide whether to upgrade the library or not and more importantly not automatically upgrade to a library which needs code changes.


If you just don't have breaking interface changes you never need a major version change.


All your points (1 through 4) match what I was looking for. I chose Clojure. It's Lispy, focuses on immutable data (but not necessarily pure functions), and has a variant (cljs) that compiles to JavaScript (if you're writing web apps, it's great to write both your front-end and back-end code in one language). I've broadly found Clojure to be a "practical" functional language to work in.

(If it helps, I was primarily a Python/JS developer before picking up Clojure.)


For someone coming from JS/Python, I think a major familiar property of Clojure is the dynamic nature. You can quickly try something out or iterate and it runs right away, without asking you to specify the program completely type-wise.

Clojure's data checking and testing stories are good too, with spec and the various test libs - including generative tests from specs. And of course you can share the same specs on front and back end, demonstrating a payoff of the front/back code sharing.


Yes, and here's why.

A team's picture gets better when it sees collective participation from a large number of people on the team, which is why I feel this will beat company LI/FB pages, which are updated by a single person (or a select few).


Good point.

In (email) targeted recruiting, the recipient will check the company out first. Isn't it worthwhile to build what they would like to see?


There's a hypothesis there, and it may be a good one. My thought is that getting someone to hand over money for a tool based on the hypothesis will be easier if the hypothesis is supported by data indicating to what degree the product will affect the bottom line.


Got it. Agreed.


IMO Glassdoor reviews suffer from extremity bias. For some reason, most reviews there either sound artificial or angry.

More importantly, Glassdoor gathers "what people think of a company". Isn't it more interesting to know "how people think in a company"?


Yep, starters on me. Or Recruiterbox, to be more precise :)

Looking forward to it.


I'll try to make it


If you guys are ok, I don't mind sponsoring pizza for the meet. Let me know.


I'll try to make it and have some grease pizza


@Jonovono How do you ensure that your search is constrained over music videos only?


I don't, really. I can't remember if I'm able to search only music videos through the YouTube API or not. I'll check it out now and edit. But right now I basically just search YouTube with the song name, artist, and album, and use the time.
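For what it's worth, the YouTube Data API v3 search endpoint does accept a `videoCategoryId` filter (category 10 is "Music", and it requires `type=video`), which should roughly constrain results to music videos. A hedged sketch of building such a request, where `YOUR_API_KEY` is a placeholder:

```python
# Build a YouTube Data API v3 search URL restricted to the Music category.
# This only constructs the URL; actually calling it needs a real API key.
from urllib.parse import urlencode

def music_search_url(song, artist, api_key="YOUR_API_KEY"):
    params = {
        "part": "snippet",
        "type": "video",          # videoCategoryId only works with type=video
        "videoCategoryId": "10",  # 10 = Music
        "q": f"{song} {artist}",
        "key": api_key,
    }
    return "https://www.googleapis.com/youtube/v3/search?" + urlencode(params)
```

Note the category filter is per-region metadata on YouTube's side, so it narrows results rather than guaranteeing every hit is an official music video.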

