
I'm not sure that I enjoy the idea of communicating with millions or billions of people. That's not why I use the language I use.

The incidental fact that I share a common language with certain other individuals of high status isn't why I speak this language. If I could choose another language, would that be the reason to choose it?

If that's true, certainly there are people with more status than me, operating in other languages, so why don't I simply open doors for myself and choose to learn one of those languages?

(hint: you can't just walk up to someone and kick it to them, not in any language; there are more rules to the equation, and they are neither trivial nor obvious; I mean look at how I'm disagreeing with you right now, and in your own language, no less!)

The reality is that all I really get from this language is the house and food that keeps me going. The quantity of people I could theoretically interact with isn't a benefit, because the quality of people I could possibly interact with is much lower than the peaks and unicorns of whatever metric you might use to chart the value of social interaction in an objective manner, which, by the way, is conceptually repugnant to me.

So, preserving a dying language, what's that worth? Well, in the sense of using selective breeding to artificially create stunted or distorted animals as curiosities for entertainment, yeah, it seems ethically distasteful to force a child to learn Esperanto without an endgame in mind, other than for the purpose of injecting an artificial sense of nostalgia into a custom that will never see practical use. Under the hood, the mechanics of this almost translate to wasting a child's time as investment in future emotional blackmail.

So, then, consider that idea as a corollary to preserving and forcing an enclave to remain isolated and backwards because it's so quaint. Or perhaps because it serves as a visceral teaching tool, to demonstrate an interactive example of human existence that predates certain discoveries. What does that really mean?

Does blocking access to modernity mean, if not so much for the mature members of a primitive clan within an isolated tribe (outsiders blocking said access), then more so permitting those older members to engage in what is effectively child abuse? All this transpiring as they foster a new generation of children who will grow up without refrigerators and freezers, and effectively be restricted from knowing the delight of ice cream. Ice cream, Mandrake? Children's ice cream?

What could have been? What would we see, as we watch these children blossom into adults (as modern citizens of the global village), were it not for our morbid fascination with the primitive cultures of indigenous peoples? On the one hand, children raised in mud huts could certainly do without the tape worms and eye infections that leave some of them blind, malnourished and half-starved. And what of all the patently wrong misinformation of superstition and ritual? But then again, the hollow, selfish emptiness of modernity, and the auto-immune problems, endocrine disorders and psychiatric medication required to cope with the cognitive dissonance of living beneath an RFID geolocated surveillance ad tech apparatus designed by quantitative economists leaves something to be desired, so where's the goldilocks zone in all this?


BTW, as an addendum, I wrote this as a pure response to the parent comment, without RTFA. The original HN title of the submission was that of the article itself (The Right to Kill), which didn't disclose that this was specifically an article about murder as a crime of passion, since framing an idea as a Right is more abstract, whether the victims are children or not.

Moral relativism aside, the more civilized approach, producing nearly the same outcome as murder, is exile.

This is an option that was explored and proven successful by ancient and medieval societies across the globe. When something unforgivable and unforgettable happens, those responsible may be banished instead of executed, and in a civilization of unlimited resources, dungeon-like conditions need not be a term of excommunication.

But, within the context of a scenario like this, where the sterility of contact with the outside world is, in fact, a prerequisite to the concept at hand, the approach to handle the preservation of alien customs apart from society at large demands elaborate effort.

Beyond adopting the effort to carry out what amounts to a charade, one still needs to seek a rational basis for the effort. Why bother? Why should one tread lightly when maybe the life of a child is at stake?

The handy answer is because we can. We can have it both ways. The primitive culture can be preserved, if a channel of willful exile is opened, as an alternative to murder. And having it both ways affords the option of live and let live, which is better.

Some perspectives may say that this condones something akin to honor killing, as a mitigating circumstance, but sometimes you have to cut your losses and look at what you still have left.

It would require a carefully designed protocol, to handle the exfiltration of exiled tribal citizens that primitive clans rejected for perceived crimes, but it could be done. Is it worth the effort? I think a quantitative analyst would question how much it costs, and the answer is that it's probably quite affordable.


Responding to both of your comments here:

It seems to me that generally we agree. But I want to clarify one bit: I don't advocate a monocultural/monolingual life. English is my L2, I also have an L3, and more are coming. My main argument was that if something requires preservation efforts, it's by definition moribund. In deciding whether or not to try to revive a thing that needs preservation (including the environment, cultures, languages, traditions, etc.), I think that we should take a pragmatic approach: preserving the ecosystem is useful to us, so do it. Preserving a language with 100 or so speakers by means other than making recordings, analysing it, etc. -- e.g. trying to teach it to kids or adults -- is basically impractical, and generally useless. Same with a culture that is detrimental to life in every way. If we think it's interesting, we can document it. But we should help the actual people out of primitive life.

The "emptiness of modern life" is nothing more than a poetical, romantic little sorrow in comparison to being killed for being a twin, or for being born to the wrong mother, or for being slightly disabled. I'm more than 100% sure that a starving kid in Africa or most of these tribespeople would take the "emptiness of modern life" over whatever they have without giving it a second thought.


An interesting side-effect of this, is that it would enable a standard of synchronization, across geographic regions, such that one could treat a set of virtual machines as one ultra-wide-bus CPU with a 1 GHz clock speed.

All of the local overhead of real system resources and network synchronization could be handled by the remainder of the real CPU clock available to the bare metal, while still contributing to the computation of a segment of a virtual bit field, at speed.

So, now maybe we get a commodity 4096-bit 1 GHz CPU as a service. Which is maybe comparable to a 64 core processor, but without the overhead of chunking down to the width of 64 bits.


"...across geographic regions, such that one could treat a set of virtual machines as one ultra-wide-bus CPU with a 1 GHz clock speed."

I'm not entirely sure what you're trying to say here, but I am entirely sure that it's wrong.

A precise clock isn't the same thing as the removal of latency, and the operations of a CPU are ordered. That is, I can't start working on the multiplication of A * (B + C) until the addition result is available. Furthermore, if the elements of the operation, B and C, or parts of those elements, were separated by miles (or even feet), the latency of that operation would increase by orders of magnitude.

I doubt that even a 1MHz distributed processor would be achievable as a large distributed bit field computer as you've laid out here.

If you're worried about overhead in computing, it is critical to remember that a foot is a nanosecond. I'd much rather break my data down to register size (and I often do) than ship my data over a wire or fiber (which I also often do).
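That rule of thumb is easy to check; a quick back-of-envelope in Python, assuming straight-line propagation at vacuum light speed (fiber is roughly two thirds of this, so real numbers are worse):

```python
# Back-of-envelope: signal propagation time vs. a 1 GHz clock period.
C = 299_792_458.0          # metres per second (vacuum light speed)
CLOCK_PERIOD_NS = 1.0      # one cycle at 1 GHz = 1 ns

def propagation_ns(distance_m: float) -> float:
    """One-way propagation delay in nanoseconds."""
    return distance_m / C * 1e9

# One foot (~0.3048 m) is almost exactly one nanosecond of light travel.
print(f"1 foot : {propagation_ns(0.3048):.3f} ns")
# Two nodes 100 km apart: hundreds of thousands of 1 GHz cycles one-way.
d = propagation_ns(100_000)
print(f"100 km : {d:.0f} ns (~{d / CLOCK_PERIOD_NS:.0f} cycles at 1 GHz)")
```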


Actually, if you marshal all of your addressable units up front (4096-bit sentences, instead of 64-bit words), which aligns well with raw allocation units on many file systems, then as an end user of the service, the overhead (to you) is reduced to network I/O if the product is built correctly.

The only hard part requiring serialized synchronization is the carry bit, across compute nodes. Share the carry bits between nodes, and while relaying a sentence to a cluster of synchronized nodes, the pipeline can shoot the sentence into the cluster as a unit, proxy and chain together the carry bits with a coordinated execution plan, and on the other side of the pipe, you get your well-timed 4096 bit result, all at 1 GHz, because the service is designed and produced to handle input at nanosecond intervals.
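For what it's worth, the arithmetic each node would perform can be sketched in a single process (no network, just the limb-wise addition and the carry hand-off between "nodes"; the limb size and function names are my own):

```python
# Sketch: 4096-bit addition as 64 chained 64-bit "node" additions.
# Each node adds one 64-bit limb plus the carry from its neighbour --
# this serial carry chain is exactly the synchronization bottleneck:
# node k cannot finish until node k-1 has produced its carry.
LIMB_BITS = 64
LIMBS = 4096 // LIMB_BITS
MASK = (1 << LIMB_BITS) - 1

def to_limbs(x: int) -> list[int]:
    return [(x >> (LIMB_BITS * i)) & MASK for i in range(LIMBS)]

def add_4096(a: int, b: int) -> int:
    la, lb = to_limbs(a), to_limbs(b)
    out, carry = [], 0
    for x, y in zip(la, lb):          # serialized across "nodes"
        s = x + y + carry
        out.append(s & MASK)
        carry = s >> LIMB_BITS        # the one bit that must be relayed
    return sum(limb << (LIMB_BITS * i) for i, limb in enumerate(out))

a = (1 << 4096) - 1                   # all ones: worst-case carry ripple
assert add_4096(a, 1) & ((1 << 4096) - 1) == 0
assert add_4096(123456789, 987654321) == 123456789 + 987654321
```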

What are the advantages? Predictability, and expanded throughput.

Now you can look at an entire passage of text and make a determination about it in less time. Or stack many passages and composite them to assess or intuit variation. Designing the product this way makes it easy to reason about, and thus easier to market and sell. Is it possible to make a profitable system that works like this? Gee, great question! There's no obvious answer.

But anyway, from the perspective of a subscriber, it's on them to marshall their data, and then, if they have operations for which the scale of 4096 bit chunks improves results, they can get their granular operations done at 1 GHz, which allows them to predict time spent and overall cost more easily.

(e.g. I have all these [less-than-but-up-to] 4096-bit toots marshalled in a single data store, from a shit ton of Mastodon instances (I did all the crawling and retrieving, and saved them in one place, as a standardized data set), and I think this fact might be true about some of them; here is the rule set to interpret; please give me back the members of the toot array for which the function of this rule set returns true)

BTW, don't get hung up on 4096 as "the best number"; I just chose it because it's a nice square number.


"The only hard part requiring serialized synchronization is the carry bit, across compute nodes."

I don't think that's the only hard part. Branches, for instance, are rough.

"What are the advantages? Predictability, and expanded throughput."

I think the system you've described would definitely have some predictability, but I contend that it would be predictably slow. Furthermore, given that everything is going to have to be pipelined up to its eyeballs, you don't need nanosecond synchronization to achieve high throughput. Audio, for instance, often achieves higher throughput than its clock rate would suggest. Look at the AES MADI spec for an example of this (basic link at Wikipedia here: https://en.m.wikipedia.org/wiki/MADI ).

I'm just not seeing how this is practicable, or, more critically for this conversation, how it is particularly uncorked by precision clocking in a particularly meaningful way. It strikes me as an approach that would have to deal with edge cases robustly, largely using the same mechanisms that would be necessary for imprecise clocking (but with assured sequencing).

"But anyway, from the perspective of a subscriber, it's on them to marshall their data, and then, if they have operations for which the scale of 4096 bit chunks improves results, they can get their granular operations done at 1 GHz, which allows them to predict time spent and overall cost more easily."

This strikes me as similar to the complexity sizing in Craig Gentry's fully homomorphic encryption system, in that all operation sets up to a configured encodable complexity require the same computational effort, which is effectively inefficient for smaller operations. For resisting timing attacks in cryptosystems, it actually seems reasonable to retain fixed effort, even if Gentry's original system was largely impractical.

For general computation? I think that the sweet spot between job chunking and dataset chunking for the system you've described may not actually exist.


Are you saying that 64 bit CPU + 64 bit CPU = 128 bit CPU (as long as they are time synced)?

1. It doesn't work this way.

2. Why would you want a 4096-bit CPU?


For financial transactions, it would certainly allow for fast high-precision floating point math. Imagine IEEE 754 4096-bit floats. Not sure anyone would actually use this, and you'd still have to standardize the rounding precision, but it might be an interesting vein of research.

Still, I agree with you -- what the OP described is not a 4096-bit processor.

Now highly-synchronized VMs -- that's an entirely different matter. Probably a boatload of use cases for those.


64 bits already gives you 16 digits; that is enough for a trillion dollars to one one-hundredth of a cent. So maybe there is someone who needs 128 bits, which has been part of IEEE 754 since 2008, but that then is probably enough to calculate the total of all financial transactions ever done.
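The 16-digit claim checks out; a quick sanity check in Python:

```python
# Does a 64-bit signed integer cover a trillion dollars at
# hundredth-of-a-cent resolution?
units = 10**12 * 10**4        # 10^16 hundredths-of-a-cent in $1e12
i64_max = 2**63 - 1
assert units < i64_max        # fits, with a lot of headroom
print(i64_max // units)       # → 922: roughly $922 trillion at that precision
```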


Where's that useful? Options pricing? I have no idea.


Why would you use floating point math for finance?


The alpha calculation can (and should) use floating point math. If the market has a midpoint of $99.99 with a bid/ask of $99.98/$100.00, you could compute a bunch of signals and end up with an alpha-adjusted midpoint of $100.00383736383..., at which point you’d convert it back to fixed-point and then try to buy at $100.00
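A toy sketch of that round trip, with an illustrative signal value and function names of my own invention:

```python
# Sketch: compute an alpha-adjusted midpoint in floating point, then
# snap it back to fixed-point (integer ticks) before sending an order.
TICK = 0.01                       # one cent

def alpha_adjusted_mid(bid: float, ask: float, signal: float) -> float:
    mid = (bid + ask) / 2
    return mid * (1 + signal)     # toy alpha adjustment

def to_ticks(price: float) -> int:
    return round(price / TICK)    # fixed-point: integer number of cents

fair = alpha_adjusted_mid(99.98, 100.00, 1.384e-4)  # ~100.00384
limit_cents = to_ticks(fair)      # float math stays internal;
print(limit_cents / 100)          # the order goes out in whole cents
```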


A floating point representation is not really the issue; the issue is not using base 10, and IEEE 754 specifies both base 2 and base 10 floating point formats and operations. But I am of course not sure whether the original comment referred to base 2 or base 10, and given how common the mistake of using base 2 floating point numbers for financial calculations is, you may be correct about the intention of your comment.


I'm aware of the fact that you don't use floating point math for finance -- for exactly the reason you described -- but the academic in me wonders if you could formally specify a high-enough degree of precision -- and all the corner cases -- to allow FP math for even just a subset of transactions. This would (in theory) allow programmers to bypass the Decimal classes in your favorite OO language (or GMP if you're a C fan).

Again, purely an academic inquiry :-)


My point was more that it is wrong to say that financial calculations should not be done using floating point formats, for example Decimal in .NET and BigDecimal in Java are floating point formats and they are the types you should use for financial calculations. The important difference as compared to formats like IEEE 754 binary32 (formerly single) and binary64 (formerly double) is that the representation is based on base 10 instead of base 2. Fixed point or floating point and base 2 or base 10 are two orthogonal choices.

So when you initially mentioned high precision floating point numbers for financial calculations, that was not necessarily a bad idea, because you might have been thinking about base 10 floating point numbers. The comment I replied to, however, assumed you meant base 2, which of course most people do when they say floating point numbers without specifying the base, and which of course is a bad idea for financial calculations more often than not. I just pointed out that assuming base 2 is usually, but not necessarily, correct.

And you can of course use base 2 floating point numbers for financial calculations - 32 bit, 64 bit, or 4096 bit - you just have to keep track of the accumulated errors and stop or correct the result before the error grows into the digits you are interested in. But why would one want to do this? The only thing I can really think of is that you need maximum performance and there is no hardware support for base 10 floating point numbers, and just using integers as base 10 fixed point numbers, which would often be an even better solution, is not an option.
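Python's decimal module makes the base-2 vs base-10 distinction concrete:

```python
from decimal import Decimal

# Both are "floating point", but only the base-10 format represents
# cent amounts exactly.
print(0.10 + 0.20)                          # binary float: 0.30000000000000004
print(Decimal("0.10") + Decimal("0.20"))    # decimal float: 0.30

# Python's Decimal is a base-10 floating-point type, like .NET Decimal
# and Java BigDecimal -- "floating point" is not the problem, base 2 is.
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")
assert 0.10 + 0.20 != 0.30
```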


I don't know how you found your way onto the addition operator (+) on your keyboard, because that's not at all what I was driving at.

I think you are... JUMPING! TO CONCLUSIONS! (get it?)

Anyway, at its core, much of the logic within a Turing machine winds up being addition in an accumulator. So, you widen the pipeline, and that adds place settings to the numeric values addressed at a location in RAM.

I think we both know that each place setting increases the maximum value of the addressable unit by an exponential factor of the base, which in computing, and so in this instance, is binary.

Specifically: 2^4096 instead of 2^64
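Python's arbitrary-precision ints make the comparison easy to eyeball:

```python
# Range comparison: maximum unsigned value at 64 vs 4096 bits.
print(len(str(2**64 - 1)))      # 20 decimal digits
print(len(str(2**4096 - 1)))    # 1234 decimal digits
```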

Golly, did I get my math right? This sure is difficult for me to understand!

Why would anyone want a 4096 bit CPU? Oh, I dunno. I suppose 640K ought to be enough for anyone.


I'm having a hard time trying to figure out if you are serious or if you are masterfully trolling everyone.


Good job, guys! Nice downvotes! Real nice!

I answered substantively, addressing each point carefully, and I was pleasantly rewarded for the time I took to respond.

Great incentive system you guys have worked out! Glad to see it being used as intended! Works like a charm!


You were likely down-voted for the snark with which you addressed the points.


Suppose you do a simple addition on your 4096-bit "CPU": you have to propagate the carry from the first 64 bits to the next 64. How do you do that within your clock cycle over the internet? You'd have to pipeline them so that each subsequent 64-bit add waits for the previous carry, but then wouldn't it be orders of magnitude faster to just do it on the same CPU, rather than taking the time and resources to do a single 64-bit add followed by a high-latency network transfer? At any rate, what does clock synchronization buy you here exactly? Data transfers are still high-latency and high-jitter; at best you're isochronous, but definitely not synchronous.

Either I completely misunderstand what you're proposing or it doesn't make sense at all.


I’m not quite sure what GP is getting at, either, but I can sort of see the lockstep synchronization described letting you build something like the original Thinking Machines Connection Machine out of more distributed parts.

The original Cray supercomputers also benefitted from a design where every wire in the pipeline was the same length for “free” synchronization courtesy of the speed of light.


I don’t think you understand how memory bus width is calculated or what it means. You are an order of magnitude off on the layer in question.


An order of magnitude. Jeepers, that sounds really bad.


How would the math work on that? Simple addition now requires coordination of results across many CPUs. Worst case is N-1 ticks where N is the CPU count. What operation would get faster by such a virtual CPU?


Economy of scale, my dude.

An organization seeking to market a product based on any spare slack or wastage of their bare metal could stitch together a niche product like this from enough resources, and price it in the space where it nets them money and is cheaper than anything an individual or small business might be capable of building on their own with the cheapest possible parts.

That's basically the core principle of every cloud product being sold.


Honestly, 1994 or not, styling a passage of text with CSS just isn’t as useful as anyone would like to imagine it being.


I don't mean styling for the sake of looking stylish, I mean styling for readability. Line spacing, page width (reading text that takes up the full width of a 27" monitor is not nice), section grouping/separation, title size, use of colour (e.g. for titles to increase visibility of section grouping, for background/font to decrease stark contrast), sans-serif font (greater readability on a screen vs on paper), etc. are all valid design considerations.


Buses fucking suck.

Buses make you wait in the rain.

Buses get stuck in traffic very easily.

Buses create traffic and make it worse with their size.

Buses idle diesel fumes as they stand in traffic.

Buses are noisy and idle outside of homes.

Bus stops in front of homes are filled with noisy people.

Buses that permit requested stops go nowhere slower than ever.

Buses are a half measure.

Buses fucking suck.


>"Buses make you wait in the rain."

There's this thing called a bus shelter. They're actually really inexpensive to put up. Ask your city why they're not putting them up.

>"Buses get stuck in traffic very easily."

No more easily than a car. Bus-only lanes work amazingly well to mitigate the problems of traffic and congestion.

>"Buses create traffic and make it worse with their size."

Buses take up much less space per passenger than cars, so it's kind of ridiculous to make that argument.

>"Buses idle diesel fumes as they stand in traffic." >"Buses are noisy and idle outside of homes."

Natural gas buses and electric buses are being phased in; they're both quieter and cleaner than diesel buses. And bus lanes reduce idling time significantly.

>"Bus stops in front of homes are filled with noisy people."

Oh no, people. How horrible.


And, where I'm from (Cambridge, UK), they also:

Are more expensive than driving.

Are regulated in such a way that prevents a competitive market.

Are run by profit making companies that demand subsidies from government.

Don't run at night.

Are full at rush hour.

Spend ages at the bus stop while everyone buys tickets, argues with the driver, asks which bus this is etc etc.

A single ticket can't be used on buses from two different companies.

I cycle. The buses seem like a threat to my safety because they are too big to overtake safely on a busy street. Also their engines are at the back, so I can't hear when they're sneaking up behind me.

Once they don't need a driver and are electric, most of these problems could be solved. I'll re-assess then.


Come to Copenhagen and see how a unified public transport system is supposed to work. My tickets work for all buses and all trains equally.

Don't blame public transport as a concept, blame private profiteering and corruption.


I've been to Copenhagen. I noticed you have a lot of cyclists! But no, I agree with your point. I think that is the main problem.


I've been told by a US expat that we must have "the fittest fat people in the world", since everyone bikes here. Young, old, fit, fat, everyone bikes :-)


Bus shelters, bus-only lanes and electric buses solve nearly all of those problems.


Actually, HN does do /almost/ private messages, if you hellban your account, and the recipient of your reply comment has “show dead” turned on.

(of course, anybody can activate the “show dead” option, but, in reality, there’s no such thing as privacy on a web server, since there’s always a system administrator noticing unhashed passwords scroll through the log stream)


Yes, newts and salamanders in cave systems have lost their eyes, due to scarcity of nutrition and habitats devoid of light.

https://en.wikipedia.org/wiki/Texas_blind_salamander


This is a joke. Consumer electronics radio communication systems may be trivially jammed, with little to no effort. It's simply a matter of political protocol (aka: warrants) for when and where such things must happen.

The article states that the services used to relay data links were common wi-fi and ordinary cellular telephone service.

The first thing that usually happens in a war zone is that ham radio service experiences disruption. [0] Cell phone back doors via CALEA [1] are already used to disrupt would-be suicide bombers around the world, and yes, even in the United States. [2]

If push comes to shove, we'll all lose wi-fi and cell phones in an area of effect, around any hostile activity, whenever shit gets real.

[0] http://www.latimes.com/world/la-fg-radio14jan14-story.html

[1] https://en.wikipedia.org/wiki/Communications_Assistance_for_...

[2] http://codegreenprep.com/2013/04/boston-bombing-shows-you-ca...


The article mentions criminal use-cases where direct radio frequency isn't necessary (moving small quantities of high-value drugs across borders) and where jamming isn't an option (surveilling police stations for snitches). Crude military methods aren't always an option, although I'm sure we'll come up with smarter counter-measures, potentially at the cost of everyday privacy.


Not to mention, eventually someone is going to figure out how to get a drone to understand sign language. The only way I can think of to jam a drone's camera from reading sign language is shining a bunch of lasers at anything that looks like a camera (probably using your own counter-drones). Or someone is going to figure out how to take videogame AI and jam it into a drone.

Even if we somehow restrict criminals from getting drone AI (which is on its face reasonable ... most AI experts probably don't want their work falling into the hands of criminals building kill bots), I doubt we'll be willing to restrict people from owning Half-Life because it turns out its AI makes a good autonomous FBI-obstructing bot.


How would you "trivially" block LED/light sensor communication? It's not insanely hard to create a drone swarm with zero radio.


Two words: Trained Falcons.


I mean this theoretically works until the gangs decide to start putting small bombs in the drones that detonate on impact. I bet they can build drones faster than people can breed and train the birds.


Well yeah then you have a counter swarm of exploding drones. The government can always afford more than the criminals because they can just squeeze their tax cattle a little harder.


It doesn't just theoretically work.

Trained falcons and eagles are being used by police forces already:

https://www.google.dk/search?q=training+birds+to+attack+dron...

I don't know if any counter-measures are getting popular yet, but it's probably cheaper to poison or shoot them than explode them.


A brighter light to overwhelm the carrier signal


"Lasers." (although, line of sight, point-to-point communication is admittedly less trivial to interfere with)

The point being that, if "criminal gangs" are up to no good, but they're assembling their equipment from off-the-shelf Best Buy and Radio Shack products, they probably aren't programming DSP interfaces, aren't encoding and decoding raw bit streams, aren't masking binary objects with base64 blobs, aren't rolling their own encryption.

Or, if they are rolling their own encryption, they're using Excel spreadsheets to do it. [0]

[0] https://www.theregister.co.uk/2011/03/22/ba_jihadist_trial_s...


Block line of sight, use a smoke grenade.


To be honest, I really don't think laser/LED channels are being used.

I've never seen a remote control quadcopter sold off the shelf with IR LED TV-remote-style I/O for its transceiver control system.


There's a pretty bright light on between sunrise and sunset, but it doesn't affect free-air optical transmission at all unless the sun happens to align with the sender or the receiver.

The biggest limitations are water vapor in the air, precipitation and dust. Also, the distance is rather limited compared to radio.


This doesn't work if the receiver on the drone is directed backwards and you usually are in front of the drone as its target.


Does this work?

I had the impression, you can't out-light shadows for example.


These are early days - I expect an arms race of measures / countermeasures (many of them spilling over from the military ) to develop.


You could do waypoint navigation, so the drone isn't being controlled by the controller.


  This, kids, is why GET requests should be idempotent.
Or, like, you know, how about a browser only sends a request when I'm actually fucking asking for something, and doesn't try to fetch everything I've ever thought about, with my every slightest accidental finger twitch against its touch screen?

What happened to deterministic user interaction?


Well, ideally you'd have both. When I "GET" I assume that it's GET as opposed to any of the other standard HTTP methods, not "GET" as in Indiana Jones getting the golden idol from the pedestal.

This is not only a problem with browsers preemptively requesting URLs, but also when it comes to caching. What happens when the URL I'm GETing is cached? Absolutely nothing, as far as the original server is concerned.

The current situation with browsers doing smart things to make slow websites appear fast is a bit like compiler writers doing smart things with UB in C, though. Speed a lot of things up by utilizing every undefined nook and cranny of the spec, breaking tons of legacy software that make pretty sound assumptions about how things actually work. I use a bunch of poorly designed legacy systems where GET often has intentional side effects. They break because the browser starts issuing HTTP requests long before I have finished typing an address. Let slow sites be slow and leave the speed problem to the people that should be dealing with it instead.


GET is meant to get, not set. Since 1.0.

https://www.w3.org/Protocols/HTTP/1.0/spec.html#GET

> The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI


So what?

If I never intended for the GET to be GOT, then the browser is just as much at fault for such unintended consequences.


That’s the whole point of HTTP request methods: it isn’t. GET should never cause such side effects. The entire web is built with these principles in mind precisely so that problems like this don’t happen.

If you want a slower browsing experience feel free to disable prefetching in your browser, but this isn’t a hill worth dying on.


Darn kids and their prefetching, GET off my lawn!


Okay, but what about stannous fluoride and sodium fluoride?

No one ever discusses the toothpaste fluorine ions, in conjunction with the much more terrifying acid, which is the boogeyman that haunts the fears behind fluoridated water, and so on...


This is a good, interesting little video on fluorine (previously linked by someone else), and the ending is the best description of tin fluoride for teeth I've ever seen -- well worth the ~6 min:

https://www.youtube.com/watch?v=vtWp45Eewtw&feature=youtu.be

(edit: added link)


Or, more to the point, not one single person has any recourse, whether they're an evil serial killer/rapist or not.

But, I'm trying to think about a situation where a secret identity might leave behind trace evidence that could be linked to a readily identifiable relative's samples, thus unmasking... say Spiderman.

I guess, any situation involving trespassing or physical presence under other circumstances, where a person might leave behind a single hair follicle, including the root of the strand.


Definitely. As the number of sequenced individuals grows, it'll become that much harder to hide from a familial genetic search.

