Hacker News | skrishnamurthi's comments

Proof-of-work was originally proposed precisely for email (specifically spam), by Dwork and Naor back in 1993!

https://en.wikipedia.org/wiki/Proof_of_work


Author here. That's not true.

GTD asks you to figure out, right now, the action for each thing; think about how long that will take; figure out whether it will take more than 2 (or N) minutes; and if ≤ that, do it now. The "do it now"s can add up to a lot of time and distraction. DBTC is the sorting step, but without the "figure out the action" step or (most critically) the "do it now" step. And there's no reflection step, either.

So it's not "literally reinvented", not even "almost".


What you've done above and beyond GTD, as I noted in an earlier comment (<https://news.ycombinator.com/item?id=46376558>), is time-block when you make that assessment, on the (IMO correct and insightful) notion that triage itself is expensive. That is, determining the importance and length of a task / item of email itself consumes limited cognitive resources.

Adding to what I wrote earlier, another advantage of postal mail is that it arrives at fixed intervals, typically once a day (historically sometimes more often, especially in cities with "morning" and "afternoon" deliveries, making one-day responses possible; with current curtailments in service, possibly only a few days a week). This automatically batches mail processing.

Early in the corporate adoption of email, a firm I worked at only polled periodically for new external email (every 20--30 minutes or so). Whilst internal email was pretty much instant, this meant that external emails could cause an interruption at most a few times an hour, rather than at any given moment. I've given thought to reimplementing this on my own systems from time to time, perhaps even only 2--3 times a day: say, "morning email" (limited to priority recipients) and "afternoon email" (when the Great Unwashed Masses have their opportunity).

In reality, I've adopted Inbox Black Hole, in which I rarely if ever check personal email. Circumstances make this reasonably viable, though those are decidedly atypical and most professionals would be unable to adopt a similar tactic.


I agree “literally” was too strong, but you’ve implemented the core idea of GTD inbox, which is to have a place to put stuff you don’t want to do right now, plus the confidence that you’ll look there later (which is what lets you forget about it now).

> If this message is not urgent, and if dealing with it now will distract me, and if it’s either not long, or if it’s personal, it goes straight into the folder.

How do you know if it’s urgent, or if dealing with it will distract you, if you don’t know what the action is?

Anyway, I didn’t mean it as a criticism; that sort of thing happens to me all the time so I thought I recognized the phenomenon.


I'm the author of the original article, I'm a tenured professor, and none of these things is optional for me. Indeed, I wrote this article several years into being a full professor, because my obligations had only grown, not shrunk, with each promotion. Of course different people have different senses of how "obliged" they are or should be.


It is important to clarify what we mean by "obligated / not optional", as I think there is a terminology mismatch.

When I said that a particular job task is not optional, I meant that not doing it will lead to disciplinary action from the employer (being fired, being put on a performance-improvement clock with HR, etc.). Reducing those tasks brings in one set of considerations.

Your definition of "obliged", if I understand it correctly, is primarily a self-assigned or a community-expected one: "if I stop running this seminar or remove myself from that editorial board, I will feel I am not doing all I can / my colleagues will look at me askance". But it will not trigger retaliation from the employer. Reducing overload from those tasks brings a completely different set of considerations.


It's not the same thing. `recur` in Clojure must be in tail-position. This program

https://news.ycombinator.com/item?id=45154253

would therefore not work.


I was trying to say something like that with my note in the GP comment:

  > "nb. Clojure doesn't have automatic tail call optimisation. We need to explicitly emulate it with `recur`."

Just an average joe programmer here... advanced macrology is way above my pay grade :sweat-smile:.


Tail calls are especially useful in languages with macros. You don't know what context you are in; you just generate the call that makes sense. If the call happens to be in tail position, you get the benefit of it.

Moreover, you can design cooperating macros that induce and take advantage of tail-position calls.

Here's a simple example that motivates tail-calls that are not tail-recursive:

https://cs.brown.edu/~sk/Publications/Papers/Published/sk-au...


Yeah, absent automatic TCO, we have to do it all, explicitly, by hand... `recur` and `trampoline`.

recur: https://clojuredocs.org/clojure.core/recur

  > Evaluates the exprs in order, then, in parallel, rebinds the bindings of
  > the recursion point to the values of the exprs.

  (def factorial
    (fn [n]
      (loop [cnt n
             acc 1]
        (if (zero? cnt)
          acc
          ;; in loop, cnt will take the value (dec cnt)
          ;; and acc will take the value (* acc cnt)
          (recur (dec cnt) (* acc cnt))))))
trampoline: https://clojuredocs.org/clojure.core/trampoline

  > trampoline can be used to convert algorithms requiring mutual recursion without stack consumption.

i.e. these emulate TCO, with similar stack-consumption properties (they don't implement real TCO).

(edit: formatting)


Thanks for the pointers. Trampolining is an old idea for obtaining tail-calls. It's a kind of folk-wisdom that has been rediscovered many times, as the related work here shows:

https://dl.acm.org/doi/pdf/10.1145/317636.317779

Usually the trampoline is implemented automatically by the language rather than forcing the author to confront it, though I can see why Clojure might have chosen to put the burden on the user.


Yeah, Rich's HOPL lecture covers that ground...

https://clojure.org/about/history

  > Clojure is not the product of traditional research
  > and (as may be evident) writing a paper for this setting 
  > was a different and challenging exercise.
  > I hope the paper provides some insight into why 
  > Clojure is the way it is and the process and people
  > behind its creation and development.


Ah, I didn't know there was a HOPL paper! Some day I will have time to run a course reading HOPL papers. Some day I will have the time to read HOPL papers myself (-:. Thanks for the pointer.


Thanks for the suggestion to replace the reference to MzLib with SRFI-31. I've done that now.

https://github.com/shriram/anonymous-recursive-function/comm...


The correlation is likely causal in both directions.

They're niche because they're doing weird, interesting things. Like creating their own VMs to support funky features. So nobody wants to depend on them: low bus-factor.

They can do weird, interesting things because they don't have a large user-base that will yell at them about how they're breaking prod.


This isn't the Y-combinator.


1. This isn't the same. `rec` names the function. This does not name the function. The point is to illustrate how the name-capture works.

2. The README literally says "Don't Use This Macro!" and references `rec` to use instead:

https://github.com/shriram/anonymous-recursive-function?tab=...


This isn't meant to be a good programming mechanism; it's meant to be an illustration of how to use the macro system.

But also, if you're processing non-linear data, you're going to want to do it with a recursive function anyway. E.g., when dealing with a tree. Code below; can't seem to get multi-line code-formatting so it looks hideous:

    #lang racket

    (require "anon-rec.rkt")
    (require rackunit)

    (struct mt ())
    (struct node (v l r))

    (define sum-tree
      (lam/anon (t)
        (cond [(mt? t) 0]
              [(node? t) (+ (node-v t)
                            ($MyInvocation (node-l t))
                            ($MyInvocation (node-r t)))])))

    (define t
      (node 5
            (node 3 (mt) (mt))
            (node 7 (node 9 (mt) (mt)) (mt))))

    (check-equal? (sum-tree t) 24)


For formatting code blocks on HN, prefixing each line with 4+ leading spaces works:

    (define sum-tree
      (lam/anon (t)
        (cond ((mt?   t) 0)
              ((node? t) (+ (node-v t)
                            ($MyInvocation (node-l t))
                            ($MyInvocation (node-r t)))))))


Aaah, thanks Neil!


Recursion just ends up using the call stack as a stack data structure. I would much rather use an actual stack data structure, which will be easier to debug and have better locality, since there isn't an entire call frame of overhead just to put one value onto the stack.


You’d be right if this were 1950. Since then, hardware and compilers have optimized this specific use case so heavily that you’ll likely see the opposite if you benchmark it.


Prove it. You can put your stack data structure on the stack anyway. A balanced tree isn't going to have more depth than your memory address bit length. Why would copying a single value be slower than pushing an entire call frame to the stack? Locality is what matters and there is no truth to what you're saying.

More important is the debugability. If you have a normal data structure you can see the full stack of values. If you use recursion you have to unwind through multiple call frames and look at each one individually.

Recursion is for people who want to show a neat clever trick, it isn't the best way to program.


Here you go: https://godbolt.org/z/haroWGa3c

    Recursive DFS took 4 ms
    Iterative DFS took 8 ms

I'm sure you could optimize the explicit-stack version a bit more to reach parity, at the cost of a significantly more complex program.

But might as well let ~75 years of hardware, OS, and compiler advancements do that for you when possible.

> Why would copying a single value be slower than pushing an entire call frame to the stack

Because that's not what happens. The stack arithmetic is handled in hardware, increasing IPC significantly, and the 'frame' you are talking about is almost the same size as a single value in the happy path, when all the relevant optimizations work out.

> More important is the debugability

Debugging recursive programs is pretty neat with most debuggers. No, you don't unwind through anything manually, just generate a backtrace.


One reason it's slower is that your stack doesn't reserve any memory: it grows to 40441 elements at its peak and then shrinks back down. std::stack uses a std::deque by default, which stores elements in chunks, likely causing lots of memory allocations (and deallocations) that don't happen in the recursive version. Also, at n=80,000 your recursive version blows the stack.

https://en.cppreference.com/w/cpp/container/deque.html

> The stack arithmetic is handled in hardware increasing IPC significantly, and the 'frame' you are talking about is almost the same size as a single value in the happy path when all the relevant optimizations work out.

The program stack isn't magically special, it isn't going to beat writing a single value to memory, especially if that memory is some constant sized array already on the stack.

> Debugging recursive programs is pretty neat with most debuggers. No, you don't unwind through anything manually, just generate a backtrace.

No matter what kind of debugger it is, you're still going to be wading through a lot of extra information to find the values you're looking for, instead of just looking at them directly in an array.

Recursion gets used because it's quick, dirty and clever, not because it's the best way to do it.


Go on, 'Prove it'. Write a version that's faster.

I know it's doable, because I have done it.

You don't seem to understand yet how complex it will be. My guess is ~10x the number of lines of code. It'll be significantly less readable, let alone debuggable.

(btw, changing from stack to vector and reserving memory up front makes virtually no difference in performance.)

> The program stack isn't magically special

This is what you're missing. Yes, it is magical because the hardware optimizes for that path. That's why it's faster than what you'd think from first principles.

> it isn't going to beat writing a single value to memory

If you examine the kernel trace from this, you'll find that it has the exact same memory usage and bandwidth (and about twice the IPC). Magical, yes.


> Go on, 'Prove it'. Write a version that's faster.

You're trying to say that pushing a function call frame is faster than writing a single value to memory and incrementing an index, and when asked to prove it you made something that allocates and deallocates chunks of memory for a deque, where every access takes an extra pointer dereference.

