Hacker News | Thaliana's comments

The News Agents has almost entirely replaced The Rest Is Politics (TRIP) for me. I still like some of their Leading interviews, but TRIP itself I've gone right off.


I've heard about The News Agents before but never really checked it out.

I fear Emily Maitlis will annoy me more than Alastair Campbell does, though. Can't win.

There's always The Rest Is History for neutral interesting information!


I'd highly recommend you give the reverse sear a go. It definitely works best with thick steaks: a nice low oven at 120C (250F), pull when the steak reaches 40C (104F) internally, then into a ripping-hot pan with oil for a fantastic sear.

I'm not suggesting it's better than your method or anything but well worth trying imo!


An easy way to convert lbs -> kg is divide by 2 and subtract 10%


Just divide by 2. No need for the 10% bit.

10% is small enough that it does not matter for these types of informal conversational scenarios, and in cases where it does, do a proper conversion.


3000 lbs to kg, mental math:

3000 / 2 = 1500; 10% of 1500 is 150; 1500 - 150 = 1350 kg

Google's result: 1360.777kg
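The rule of thumb above can be sketched in Go (a toy check of the thread's arithmetic, not code from the thread; the function name is mine):

```go
package main

import "fmt"

// approxLbsToKg applies the mental-math rule: halve, then knock off 10%.
// Halving gives a factor of 0.5; subtracting 10% of that gives 0.45,
// close to the exact factor of 0.45359237 kg per lb.
func approxLbsToKg(lbs float64) float64 {
	half := lbs / 2
	return half - half/10
}

func main() {
	fmt.Println(approxLbsToKg(3000)) // 1350
	fmt.Println(3000 * 0.45359237)   // 1360.77711, matching Google's result
}
```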


> why is it significant that we slice like a[i:i+4:i+4] rather than just a[i:i+4]?

Well, I had never seen that "full slice" expression syntax before. It turns out to be important because it controls the capacity of the new slice: the capacity becomes (i+4) - i = 4.

So by using the full slice expression you get a slice of length 4 and capacity 4. Without doing this the capacity would be equal to the capacity of the original slice.

I suppose that by controlling the capacity you eliminate the bounds check.
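The length/capacity difference is easy to check; a minimal sketch (the slice `a` and the index here are mine):

```go
package main

import "fmt"

func main() {
	a := make([]int, 10)
	i := 2
	plain := a[i : i+4]      // len 4, cap runs to the end of a
	full := a[i : i+4 : i+4] // full slice expression: cap limited to 4
	fmt.Println(len(plain), cap(plain)) // 4 8
	fmt.Println(len(full), cap(full))   // 4 4
}
```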


In my testing [1] that doesn't eliminate bounds checks. Instead, if my reading is correct, it avoids computing the otherwise unused value `cap(a[i:i+4]) = len(a) - i`.

[1] https://go.godbolt.org/z/63n6hTGGq (original) vs. https://go.godbolt.org/z/YYPrzjxP5 (capacity not limited)

> Well I had never seen that "full slice" expression syntax before.

Go's notion of capacity is pragmatic but at the same time confusing. I learned the hard way that the excess capacity is always available for the sake of optimization:

    a := []int{1, 2, 3, 4, 5}
    lo, hi := a[:2], a[2:]
    lo = append(lo, 6, 7, 8)      // Oops, it tries to reuse `lo[2:5]`!
    fmt.Printf("%v %v\n", lo, hi) // Prints `[1 2 6 7 8] [6 7 8]`
While I do understand the rationale, it is too unintuitive because there is no indication of the excess capacity in this code. I would prefer `a[x:y]` to be a shorthand for `a[x:y:y]` instead. The `a[x:y:len(a)]` case is of course useful though, so maybe a different shorthand like `a[x:y:$]` can be added.


I think slices are wonderfully intuitive... if you've worked with C.

Slices encapsulate a ubiquitous C pattern, where you pass to a function a pointer (to an initial array element) and a length or capacity (sometimes both). Go's slices capture this directly and can be thought of as something like:

  type Slice struct {
      Ptr *Elem
      Len int
      Cap int
  }
I love Go's slice syntax. It's the right piece of syntactic sugar. It removes tedium and room-for-mistakes from this prevalent C pattern. It lets me work as precisely with memory as I do in C, yet everything is simpler, lighter, and breezier.

For example, I'm making a game in Go. I don't want to be allocating memory throughout the game (which can cause frame drops), so instead I allocate giant arrays of memory on launch. Then I use slices to partition this memory as needed.

Some of these arrays get their elements re-accumulated at some interval (every level, or every frame, etc.). And so it works nicely to first set them to zero length:

  myMem = myMem[:0]
Notice that the myMem slice now has zero length, but still points to its same underlying array. Then I perform the accumulation:

  for ... {
      myMem = append(myMem, elem)
  }
Again, I don't want to be allocating in general, so I care very much that append continues to use myMem's preexisting capacity.

All this is to say, I don't see slices as being the way they are for the "sake of optimization." Rather I see them as an ideal tool for working with, referring to, and partitioning memory.
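The reset-and-refill pattern described above, as a self-contained sketch (the buffer size and names are made up):

```go
package main

import "fmt"

func main() {
	buf := make([]int, 0, 1024) // one up-front allocation
	for frame := 0; frame < 3; frame++ {
		buf = buf[:0] // zero length, same backing array
		for i := 0; i < 4; i++ {
			buf = append(buf, frame*10+i) // within cap: no allocation
		}
		fmt.Println(buf, cap(buf)) // cap stays 1024 every frame
	}
}
```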


C doesn't have any slicing operator like Go's `a[x:y]`, which is the main problem I want to point out. Slice itself is just a natural construction.


Yes, I'm saying Go's slice operator (and slice type, and related functions) grew out of experience from the way arrays are used in C.

To me, this becomes obvious if you read The Practice of Programming by Kernighan and Pike, the latter of which was a co-designer of Go. If you read the book, which was written well before Go, and pay attention to how it uses C arrays, you can almost feel Go slices emerging. The slice syntax perfectly encapsulates the C array bookkeeping.


I'm not sure how it can be possible. In my experience the notion of three-part slices does exist in C but only implicitly. For example,

    size_t trim_end(const char *p, size_t len) {
        while (len > 0) {
            if (p[len - 1] != ' ') break;
            --len;
        }
        return len;
    }
Conceptually this function accepts a slice `(p, len, cap)` and returns a slice `(p, len2, cap)` where `len2 <= len` and the capacity never changes. But the actual argument doesn't carry `cap`, and the return value doesn't carry `p`. Everything is implicit, and it's typical for C programmers to document and follow such implicit contracts. Go's slice operator can't have come out of such implicit practices, in my opinion.

In comparison, your claim only makes sense when the following was somehow normal:

    struct slice { const char *p; size_t len, cap; };
    struct slice trim_end(const struct slice *s) {
        struct slice out = *s;
        while (out.len > 0) {
            if (out.p[out.len - 1] != ' ') break;
            out = subslice(out, 0, out.len - 1);
        }
        return out;
    }
Note that a hypothetical `subslice` function call maps perfectly to a Go code `out[0:len(out)-1]`, and my complaint will equally apply: there should be two clearly named variants of `subslice` that may or may not keep the capacity. But I hardly saw such construction in C.


I feel like we're talking past each other. I'm not saying that C has a slicing operator, or that it's typical to define one as a function, or that it's typical to define a slice-like struct in C.

I'm saying that if you look at how arrays get used in C, you'll see that you're usually passing around extra numbers with them. So Go added syntax that encapsulates this. And it encapsulates the most general case (hence both a length and a capacity, even though most cases in C only use one number).

Instead of passing (char *, int) in C, you just pass (slice) in Go. And the Go slicing operator gives a nice syntax for selecting ranges of buffers.

But a Go slice is just a pointer to an array underneath, and I think it's best to always think about them that way. And then it's natural that mySlice[:2] would keep the same underlying capacity, and that future appends would mutate that capacity. Defaulting mySlice[:2] to mean mySlice[:2:2] seems less convenient overall to me (though prevents mistakes like in your original hi/lo example, but those come from not thinking of slices in terms of the underlying array).


> Instead of passing (char *, int) in C, you just pass (slice) in Go.

Maybe that's a point of disagreement here. I don't exactly see (char *, int) as a single entity, so it cannot be replaced with a slice (a definitely single entity) in my view.

They do appear together in arguments, but they have to be separate variables when you manipulate them and no C syntax will suggest that they are indeed related. So you have to rely on conventions (say, `name`, `name_len` and `name_cap`) which tend to be fragile. You can't expect such annoyance from Go slices, so I argue they are necessarily different from each other.


It works fine if you replace that pair with a single slice value, which is what most languages (with slices) do.

The problem is that Go does not do that: because its designers could not be arsed to provide a separate vector type, a slice is (ptr, int, int), and now you can start stomping on out-of-slice memory unless the creator of the slice used the extended slicing form.


Wow that seems pretty unsafe...

In D, for example, this works as most people would expect:

    import std.stdio;
    void main() {
      int[] a = [1, 2, 3, 4, 5];
      auto lo = a[0 .. 2], hi = a[2 .. $];
      lo ~= [6,7,8]; // new allocation
      writefln("%s %s", lo, hi);  // prints [1, 2, 6, 7, 8] [3, 4, 5]
    }
You simply can't overwrite a backing array from a slice (unless you do unsafe stuff very explicitly).


This, like most issues with slices, stems directly from their pulling double duty as vectors.

Capacity being conserved by slicing is important to the “slice tricks” of removing items, I assume.


You're also never increasing the size of those slices, right? So it's a bit better memory-wise, and maybe faster? I last used Go a while ago, but I recall the capacity was the length of the underlying array? Internally it may even reuse the same arrays for slices, since they're not changing size each loop iteration.

Edit: weird, this was supposed to be an update to a previous comment I made, but this is a different comment now


Edit2: (I’m throttled and I can’t post a new comment I guess? Makes me feel like my voice was stolen! I guess I’m not wanted around HN.)

Thanks for the correction, I would delete my comment but I found a bug in HN while updating it so I’ll preserve my stupidity here in duplicate for now.


There is no copying of memory going on; the `a[i:i+4]` or `a[i:i+4:i+4]` slice references the same underlying data as `a`.
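A quick way to see the aliasing (the example values are mine):

```go
package main

import "fmt"

func main() {
	a := []int{10, 20, 30, 40, 50}
	i := 1
	s := a[i : i+2 : i+2] // no copy: s still aliases a[1] and a[2]
	s[0] = 99
	fmt.Println(a) // [10 99 30 40 50]
}
```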


The way I learned to write the WHERE clause of a SQL UPDATE statement first was by updating an entire column of a very important table in a TV station's automation software database.

I also took CNBC off air briefly, although that was their man's fault as he told me to unplug the wrong video server.


The plants used in this study are Arabidopsis thaliana, which is a very commonly used model organism in plant biology.

I think it's not a particularly wild assumption that a lot of plants will react in this way. The MYC2 protein and jasmonic acid are important regulators of plant defenses, so priming of defenses when it rains makes sense. There are pathogens (such as Pseudomonas syringae) that can trick a plant into opening its stomata, which are pores on the leaves, as a means of gaining access to the interior of the leaf.

As the paper describes how the mechanical action of rain can wash pathogens around the leaf or onto other plants I think it's reasonable to think other plants would react like this.

The interplay between the jasmonic acid pathway and other plant hormone pathways is super interesting, as plants in general have two ways of defending themselves. For pathogens that feed on dead tissue, the name of the game is to keep the cells alive (jasmonic acid can be thought of as generally responsible for this sort of response), whereas for pathogens that feed on live tissue the plant will kill off cells local to the infection site in order to deny the pathogen food (salicylic acid is largely responsible here).


I still stand by my original comment: it would have been nice to have the type of plant studied included in the article and/or in the synopsis of the paywalled paper.

"The plants used in this study are Arabidopsis thaliana" was this information in the story and I just missed it? Or are you making that assumption due to that being a "very commonly used model organism in plant biology"? The plant image used at the top of the article looks nothing like rock cres but instead more like some sort of hosta or lilly pad, I wonder if the image plant is shaped that way to catch and funnel water away from its roots? Just as a palm does the opposite? Anyways I could go on, just wish they had included the species in the article.


Checking the article again, I got it from the "More Information" box at the bottom. The paper is titled "In vivo evidence for a regulatory role of phosphorylation of Arabidopsis Rubisco activase at the Thr78 site", so that means A. thaliana.

The picture is certainly misleading, I guess they just grabbed a generic "plants in a rainforest" picture for evocative reasons.


I understand the premise of the article and it makes sense. A very common example of this occurring in many gardens is powdery mildew on squashes and the like. The leaves don't even have to be penetrated for the plant to be damaged, and rain definitely encourages the spreading from leaf to leaf.

