Hacker News | ybaumes's comments

I've read it multiple times, and I still do not understand the simpleton strategy...


Essentially, the simpleton is treating your previous move as either a reward or punishment.

If you cooperated, then the simpleton thinks the thing it did last time must have been good and does it again. If you cheated, the simpleton thinks the thing it did last time must have been bad and does the opposite.

It follows these rules based on its own last move, even if that move was flipped from what it should have been due to the chance of a mistake.


"If I 'won', do the same thing again. If I 'lost', do the other thing."

Also called win-stay, lose-shift (or Pavlov).


I think 4ad has a good point here. I've had the same kind of thinking for a long time: Shuttleworth won't spend all his money on Ubuntu indefinitely, and/or he wants his firm to be financially sustainable over time. They've tried hard to make a living out of the desktop market, and it never worked out. Now they've cut it and are focusing on B2B markets. Don't expect Canonical to innovate for the end-user market in the foreseeable future.

The way he expresses his bitterness toward the open source community in his Google+ post is telling.

We may expect Ubuntu to drift slowly into the graveyard of once-popular distributions.


I'm glad someone finally noticed it. Thank you and +1.


There's actually an item about that in Scott Meyers's Effective STL book.

Item 23: Consider replacing associative containers with sorted vectors.


I thought having multiple cache levels was about a trade-off between performance and cost. The closer to the CPU (or the faster the cache level), the more expensive it is.


Yes. I don't know where he gets the idea that a large L1 cache is to a CPU what a 150m x 150m desk is to a human. Address decoding is done in parallel, not sequentially. And desks are as large as people are comfortable producing and using.

Likewise, if SRAM were as cheap to produce as DRAM, main memory would be as fast as the CPU (since it would use the same technology as the CPU) and we would not need caches at all. Imagine gigabytes of L1 cache!


Well, address decoding can be started in parallel if your page size lets you do virtually indexed, physically tagged caches, which applies only to some processors. But that's a separate issue from the relationship between cache size and cache speed. That's governed by three things.

First, the larger your cache, the more layers of muxing you need to select the data you want, meaning more FO4s (fan-out-of-4 gate delays) of transistor delay.

Second, the larger your cache the physically bigger it is. That means more physical distance between the memory location and where it is used. That means more speed of light delay.

And third there's the issue of resolving contention for shared versus unshared caches.

So even though you're using the same SRAM in both your L1 and L3, access to the former takes 4 clock cycles while access to the latter takes 80.


There's also the fact that as you go down the cache hierarchy, the cache becomes more complicated. An L1 does lookups for a single processor and responds to snoops. An L3 probably has several processors hanging off it and may deal with running the cache coherency protocol (e.g. it implements a directory of which lines are where and sends clean or invalidation snoops when someone wants to upgrade a line from shared to unique). As a result you've got layers of buffering, arbitration and hazarding to get through before you can even touch the memory array.


> And desks are as large as people are comfortable to produce and use.

Think about what this implies though -- a desk that is too large becomes difficult for a person to use (for one, the person would have to start walking to access certain parts of it).

Likewise, L1 cache sizes are bounded, because the larger the cache becomes, the more difficult it is to address a particular location, and the cache also becomes physically larger such that speed-of-light propagation delays will slow the entire cache down.


No, a cell of L1 cache is exactly as expensive as a cell of L3 cache (ignoring weird stuff like eDRAM).

Now, SRAM these days is made with 6 or 8 transistors per cell, while the DRAM you use in your main memory takes only 1 transistor per cell. Also, your DRAM is built with a different sort of silicon process, so it's cheaper on a transistor-to-transistor basis. But generally the dollar cost is the same for memory in any given location.


Given a fixed area which fits a fixed number of transistors at a fixed cost, you allocate some portion of those transistors to compute and the rest to memory cells.

If you want to maintain your number of memory cells without decreasing the number of compute transistors, you need to grow your area, which increases costs. That can be a very expensive thing here.

Additionally, engineering time for layout and the architectural costs differ across those different placements and cache requirements, so the cost is not uniform; but amortized, it is not as significant as things like chip area.


Changing a chip from having 8kB of L1 Dcache to 16kB might be far more expensive in design terms than making a similar change to the L3 cache, but starting from a blank slate, would either one be more expensive to design in the first place? When I look at the layout of a late-model x86, the regular structures of the caches stand out in the die photos among the irregular hand-tuned logic. Yes, there are follow-on effects on the layout from changes in cache size, but I don't see any a priori reason to say whether increasing the L1 size will tend to make designing the rest of the core logic harder or easier.

So I still don't see any reason to back off from saying that a cell of L1 costs as much as a cell of L3, modulo concerns about keeping the cache size a power of 2.


Is the SMS channel really secure? Isn't it a plain-text channel, as opposed to an encrypted one?


SMS is encrypted on GSM control channels using the broken A5/1 stream cipher, which has well-known weaknesses.


and the base station can _disable_ encryption completely without any warning given to the cellphone user.


I thought Andrei Alexandrescu invented the D language.


Nope, D hit 1.0 before Andrei got involved in development, or at least it was very close.


After reading the article I found the diagrams cute, that's true. But I still don't understand how they are generated; it's not even discussed in the article. I don't know what the benefits are, or how you analyse such a diagram.

It amounts to a long text advertising a cute poster.


As far as I know, people go to Kickstarter in order to kick-start projects. Not every game is 100% complete when submitted to kickstarter.com. Yes, this is a simple prototype, but it's still an exciting one, as far as I'm concerned.

So no, not disappointed at all.


I was thinking that Kickstarter was all about helping projects to "kick start". So for me it's obviously normal that there is still no game at this early stage of development. Or rather: 10% complete so far.

It looks like people have "wrongly" gotten used to dev studios submitting nearly completed projects. But those are more the exception than the rule. If you look closely at the list of Kickstarter submissions, I think that's the case most of the time for software development: the projects are far from complete. The lead developer has an idea and needs funds to work full-time on it and/or hire folks to help him out. For instance: Light Table by Chris Granger. And I think that's what is thrilling about Kickstarter and launching a startup: it's all about taking risks.


I get your point, but I think you're identifying the wrong problem. Kickstarter backers being averse to "here's my idea, give me money" pitches is rational. Most professional game developers are pretty bad at estimation, pretty bad at risk forecasting, and just generally aren't great at bringing stuff to market on time and on budget. Kickstarters with a bunch of people with no real personal credibility or track record (individually or as a group) are serious risks and it's perfectly understandable that they'd be treated as such by potential backers.

As I noted upthread, I have a Kickstarter coming up sooner rather than later (like, May-ish) and a lot of my time right now is building out my engine to have a workable hands-on demo for people to play with. It won't be designed to be "fun" yet, but rather a synthetic demo where I can go "these features will be used for X, Y, and Z, and you can see them already basically done". I need money to pay my artists to build assets more than I need money to write the damn game.


You can call it "wrongly" if you want, but I think there's a pretty big difference between "look at my cool game (that doesn't exist)" and "look at my cool idea for a game, that I need Kickstarter to fund". The OP isn't 10% done...they're 1% done, and while they look better than most of the folks at the same level of development, it still gives me precisely zero confidence in their ability to deliver when they overstate so obviously what they're bringing to the table.

