Hacker News | travv0's comments

What tools? That's a git config file.


`ga` and `gcp`. Are those part of git?


The only part I left out is that as part of my shell config I automatically alias all my short git aliases (as well as a few special cases). So “gcp” -> “git cp” etc.

https://github.com/kbd/setup/blob/d95653da5ab367b1e628b97537...


I assumed those were aliases for `git a` and `git cp`, which are in the config file.


No, they're not in the config file.

`gcp` and `ga` are part of forgit, not OP's config. That's why searching the repo didn't find anything. From the phrasing, I thought they were part of OP's repo.

`gcp` https://github.com/wfxr/forgit/search?q=gcp

`ga` https://github.com/wfxr/forgit/search?q=ga



> The fact most kids don't have a relationship with their parents or guardians where they can talk to them about anything is highlighted with metadata like you have highlighted.

You have a source for that? The statistics you're referring to certainly don't say anything at all to back up your claim.


Have a look online, but here is one plucked from a search engine ( https://www.highlights.com/about-us/press-room/national-surv... ), again metadata. But there is also a Thomas Pynchon quote to think about: "If they can get you asking the wrong questions, they don't have to worry about answers."


I also haven't seen an ad on the Gmail web client in years. Maybe because I pay for extra storage through Google One?

Edit: It looks like it doesn't happen in the primary tab, and I have all the other tabs disabled, which would explain it.


Reading through the GitHub issue, it seems like the one place it makes sense in my eyes is that you can refer to stuff in external libraries with your convention even if the library uses a different one. I'm not sure if there's something I'm not thinking of, but to me it'd make sense if the library boundary were the only place it let you do it.


> Lately when I search for something like "recipe for borgelnuski" I get a page of links to sites with names like "Molly and Audrey" that first tell you a long story about their grandmother's pet kangaroo, then go through very long and tedious explanations about ingredients, then vaguely go through the recipe step by step with a lot of pictures, but if you don't mind scrolling for 20 or 30 minutes, you get a very good recipe for borgelnuski.

So what you're saying is Google took you to a page that had a very good version of the thing you were searching for?


> FP is terrible at logging

Um, what? You can't just drop this without elaborating. I've never seen anybody have problems logging in a functional language.


They probably mean Haskell. Note: I don't know Haskell, so below is speculation.

To log you need IO. To get IO you need to provide it to the function that will do the logging. And now your function has to be marked as doing IO. So now you need to thread all that IO through all your functions and "turn your code into monadic code" (I think that's the term).

Other languages (like Erlang) don't care, and you can log whenever you want.
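The "thread all that IO through" point can be illustrated even in Python: a hypothetical sketch where the logging capability is passed explicitly down a call chain, analogous to how an IO-marked Haskell function forces its callers into IO too (function names here are illustrative).

```python
def inner(log):
    log("inner ran")
    return 1

def middle(log):
    log("middle ran")   # to log here, `log` had to be passed in...
    return inner(log)   # ...and must be passed further down

def outer(log):
    return middle(log)  # outer doesn't log, but still has to carry `log`

outer(print)
```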

---

Flamewar off-topic: it looks like hardly anyone is doing any useful logging in Haskell, because if you search for "logging in Haskell" you end up in:

- highly academic discussions on "logging actions" vs "logging of computations"

- extremely convoluted solutions that turn even the simplest examples into a mess

- a couple of libraries whose entire documentation is usually "believe me we're the shit", and if they do have examples, they are an impenetrable mess of custom types and ascii art

For every other language it literally is `logger.info(something)`.
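For concreteness, a minimal sketch of that in Python with the stdlib `logging` module (the function name is illustrative):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
logger = logging.getLogger(__name__)

def place_order(item):
    logger.info("ordering %s", item)  # no effect plumbing required
    return item.upper()

place_order("coffee")
```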


And here we see the damage caused by the modern OOP. People that complain about that want to replicate in Haskell the log4j philosophy of adding logging into every interface, because with data and IO chunked everywhere inside object interfaces, you never know if you can ever repeat an execution in a development environment to verify it.

The thing is, if for some reason you really think you need to log inside a pure function, you either need an intermediate variable or your perceived needs are severely misguided.
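One way to read the "intermediate variable" suggestion, sketched in Python (the Haskell analogue would be something like a Writer monad): the pure function returns its log lines as data, and I/O happens only at the boundary. Names here are illustrative, not from the thread.

```python
def step(x):
    # pure: returns its log lines as data instead of performing I/O
    logs = [f"step called with {x}"]
    result = x * 2
    logs.append(f"step produced {result}")
    return result, logs

def run():
    value, logs = step(21)  # the caller decides what to do with the logs
    for line in logs:
        print(line)         # I/O only at the boundary
    return value
```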


> And here we see the damage caused by the modern OOP. People that complain about that want to replicate in Haskell the log4j philosophy of adding logging into every interface

And here we see a person slinging unsubstantiated accusations

> if for some reason you really think you need to log inside a pure function, you either need an intermediate variable or your perceived needs are severely misguided.

Clear demonstration of "it looks like hardly anyone is doing any useful logging in Haskell".

Because, as we know, the fact that "you can repeat an execution" immediately makes your need to log anything in that execution "misguided".


What exactly do you expect to gain by logging inside a pure function?


Thanks, but an answer from someone that knows the language that they're talking about would be much more productive.


> The vast majority of the code in that repository is in the IO monad and uses carefully placed “!” eager evaluation annotations.

Looking through that repository, I believe we have extremely different definitions of "vast majority."


Nearly everything in Render.hs is in the IO monad. That is a fact, not a subjective opinion. I point out those routines because they are the heart of the application.


Yes, rendering is by definition I/O. That's also one file. You stated:

> The vast majority of the code in that repository is in the IO monad and uses carefully placed “!” eager evaluation annotations.

Do you have anything to back that up?


> Yes, rendering is by definition I/O.

First of all, rendering is not by definition I/O, except in the trivial sense that all functions take input and produce output. A pure 3D rendering function takes game state as input and produces a list of triangles + attributes to draw as output.

Even if that were true it would only further validate my point.

> Do you have anything to back that up?

Yes, I already showed that Render.hs validates my point.


> The vast majority of the code in that repository is in the IO monad and uses carefully placed “!” eager evaluation annotations.

This is the claim we're talking about. Since you're into facts, not subjective opinions, show some evidence that the vast majority of the code in the repository is in the IO monad and uses carefully placed "!" eager evaluation annotations. Just to be clear, you haven't done that yet. That's a fact, not a subjective opinion.



Here's one more file than you listed, without BangPatterns enabled:

https://github.com/rainbyte/frag/blob/master/src/BitSet.hs

https://github.com/rainbyte/frag/blob/master/src/Command.hs

https://github.com/rainbyte/frag/blob/master/src/Curves.hs

BangPatterns is a normal language extension to have enabled; was it used heavily? (Hint: it's not enabled on "every source file.") You listed two of 28 files there, I'm assuming to try to show that the vast majority of the code in that repository is in the IO monad? You went from "vast majority" supported by one file to now two. I'm looking for objectivity here, which you seem to be into. Let's see some numbers. I think anyone who glances at that repo would need some convincing of your claims.

Render.hs is 211 of 5580 lines of Haskell in the repository, by the way.


Curves.hs may not have BangPatterns but it is heavily in the IO monad. My point is sufficiently supported by the provided evidence, even evidence provided by you, exceptional cases notwithstanding. Thank you.


You still haven't shown me any data to back up your assertion, but you're welcome I guess.


I find VS Code with the Haskell extension to be very good for displaying type signatures, code completion and navigation, etc. Holes (which are covered by the article) are the go-to way to see what's possible at the current location. You type an underscore in your code and the compiler tells you a bunch of information about what goes there, including things you could put there that would typecheck.


Tail call optimization has nothing to do with optimizing for different CPUs; it's about dropping a function's stack frame when it's evaluating its return expression and its stack frame isn't needed anymore.


Is this the difference in calling conventions, i.e. callee or caller stack clean-up? https://en.wikipedia.org/wiki/X86_calling_conventions#List_o...

This is half the problem: different terminology is in use between different age groups, just like slang is different between age groups.


I'm not sure, so here are a couple examples.

    def f() =
        g()
In a language implementation that doesn't optimize tail calls, the stack would look like the following after the call to g:

    g
    f
    main
In a language implementation that does optimize tail calls, the stack would look like this, because the result of f is whatever the result of g is so f is no longer needed:

    g
    main
If a language implementation doesn't optimize recursive tail calls, the following code will quickly overflow the stack and the program will crash:

    def loop() =
        do something...
        loop()
In a language implementation that does optimize recursive tail calls, this code can run forever because loop's stack frame gets replaced with the stack frame of the new call to loop.

The reason people want recursive tail calls optimized out is at a much higher level than anything to do with the actual CPU instructions being used; they just want a way to write recursive functions without worrying about the stack overflowing.

What's my age group, by the way?


I have never seen this referred to as anything other than tail-call recursion, tail-call optimization, etc.

Languages like Python make it impossible to write simple loops like:

    def loop():
        <whatever>
        loop()

Python will reach a maximum recursion depth and error.
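A quick sketch of that behavior in CPython (the exact limit is configurable via `sys.setrecursionlimit`; 1000 by default):

```python
def countdown(n):
    if n == 0:
        return "done"
    return countdown(n - 1)  # a tail call, but CPython keeps every frame

countdown(10)  # fine

try:
    countdown(10**6)  # far past the default recursion limit
except RecursionError:
    print("RecursionError: CPython does not eliminate tail calls")
```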

Why is this important? Like I said, it makes looping very easy. For example, actors can be almost trivially implemented in languages with tail-call recursion.

It’s not in Python because, like most things in Python, van Rossum doesn’t like it for <reasons>.

https://stackoverflow.com/questions/13591970/does-python-opt...

There’s little point in having full traces if the data going in and out of the tail-call loop is immutable, so you only really care about the current call of the function.
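For what it's worth, a common workaround in languages without TCO is a trampoline; a minimal sketch (the names here are illustrative):

```python
def trampoline(f, *args):
    # run a "tail-recursive" function that returns thunks instead of recursing
    result = f(*args)
    while callable(result):
        result = result()
    return result

def loop(n, acc=0):
    if n == 0:
        return acc
    return lambda: loop(n - 1, acc + n)  # return a thunk; no stack growth

trampoline(loop, 100_000)
```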


<goes away to find out if it's in my preferred language & tool>

It's not something I've ever heard of before; I guess it's peculiar to Python, but don't most languages have some eccentricity?


It's absolutely not peculiar to Python, it's something that every single language implementation has to make a decision on one way or the other.

Here's an SO answer from 2008 about how to enable TCO in various C and C++ compilers: https://stackoverflow.com/questions/34125/which-if-any-c-com...

There are many things both you and I have never heard of before. That's normal.


Yes, the different terminology is a reflection of so many new entrants going for what is easy in the short term instead of learning the theory of their industry, and thus never learning the better approaches that are not 'immediately' obvious.

This lack of learning theory in our industry, in favor of whatever is 'easy to get started with', explains the popularity of Python and JavaScript, and at the same time why Python and JavaScript are littered with problems that have already been solved, cluttering up the field of knowledge by reinventing terminology because their communities never learned the original existing terms.


I'd also be interested in a Haskell example.


Generally this problem is because someone went crazy with Template Haskell or generics.


What problem? I'm looking for an example of where poor scalability has been a problem in a Haskell codebase.

