Hacker News | winstonewert's comments

Perhaps you could refrain from slandering Rust proponents without any evidence.


I think the real question has to be: how do we determine what the regulations should be? Today, regulations are typically the product of dysfunctional political processes, and, no surprise, a lot of those regulations are unhelpful and a lot of helpful regulations are absent.


> but not everything is necessarily backed by the same kind of heap-allocated memory object.

Do you have an example? I thought literally everything in Python traced back to a PyObject*.


The problem is that sometimes it is not a necessary condition. Rather, the tests might have been checking implementation details, or might just have been wrong in the first place. Now, when tests fail, I have extra work to figure out if it's a real break or just a bad test.


Some thoughts:

- It says that 3/4 of people kept working; to me, that seems like a big drop.

- Data is based on a survey of people in the program; I distrust data from surveys on principle.

- There seems to have been a reduction in the payment as they earned money, so it's not really UBI as typically advocated.


> Lewchuk added that while some people did stop working, about half of them headed back to school in hopes of coming back to a better job.

I believe previous UBI experiments have shown the same results: most people keep working, some people stop, but they usually have decent reasons. Education, extending parental leave, or being a caregiver aren't necessarily things we want to discourage if they result in a greater return.


> if they result in a greater return.

Greater return than what and to whom?

We already have existing labor markets that are very capable of determining returns.


> Greater return than what and to whom?

Greater return for the government paying for a UBI, compared to not paying for a UBI.

> We already have existing labor markets that are very capable of determining returns.

I'm not sure I understand how "existing labour markets" are going to solve the three things I listed: education, caregiving, and parents taking time off to look after their kids.

The issue of parents being absent is that it results in negative externalities: crime rate, an alienated society, low literacy rates. The existing labour market is great at placing parents into a job efficiently, but it has absolutely nothing to do with keeping their kids out of prison. Nor should it, really, because externalities are a government-level coordination problem.

When it comes to education, the issue is again a coordination problem. Companies might do some training, but they generally prefer to foist the risk off onto employees, other companies, and governments by hiring people who are already educated. Again, this is a coordination problem, because any individual company that skips training and just hires educated workers directly will be more efficient, but those educated workers have to come from somewhere.

I will concede that it's more efficient not to take care of the elderly. I question whether it is desirable, however.


those labour markets are in shambles atm for most people who aren't upper middle class


In shambles compared to when? Quality of life is the highest it's ever been across socioeconomic strata. It's just our expectations outpace reality.


Then I'm sure you're willing to donate the cash to make it happen.


This should be such an infrequent occurrence that the cost should be negligible. Surely their $10/month plan has enough margin that this can be covered?


There is likely a cost to the infrastructure necessary to enable calling 911 that scales with the number of users not the number of 911 calls. Where I'm at, there is a 75 cent per month fee added to phone plans to cover the costs of access to 911. If most people are on the free plan, the margin from the few paying customers won't cover it.


Every cellphone without a valid service plan is still required to be able to call 911 in most places of the world, including the US, and carriers eat the costs. It should be obvious why.

Frankly it's weird they're making a clone of a classic touch tone corded phone and somehow get around this. Especially for a kids product when we teach kids to call 911 in an emergency.


Donate my time and services? Sure.

Donate the cash? To a business? … So, you mean, paying someone else's profit margin, while they hold lives hostage? Immanuel Kant says you don't negotiate with terrorists.


Perhaps - but they could just do what Microsoft did: bundle a version of Chromium.


As I’ve mentioned previously, WebKit is a mission critical framework that many thousands of apps use, including Apple’s.

Strategically it makes no sense to not own something that important.

Remember: Safari was created when Apple's five-year deal with Microsoft, which made Internet Explorer the default browser for Mac OS X, expired in 2003.

Ten years later, Google forked WebKit to create Blink for Chrome.


Apple never fully owned WebKit in the first place - most certainly not back in 2003. There was an extremely public and messy divorce period with the KDE codebase[0], and to this day there's still KHTML/KJS-derived code in WebKit that has to be sublicensed under GPLv2 for redistribution purposes.

If we're going to split hairs over the whole "Blink is an inferior WebKit fork" brouhaha, we shouldn't forget who Apple sherlocked to get there. After all, turnabout is fair play.

[0] https://blogs.kde.org/2005/04/29/bitter-failure-named-safari...


What actually prevents JSON from being used in these spaces? It seems to me that any XML structure can be represented in JSON. Personally, I've yet to come across an XML document I didn't wish was JSON, but perhaps in spaces I haven't worked with, it exists.


> It seems to me that any XML structure can be represented in JSON

Well, it can't: JSON has no processing instructions, no references, and no comments; JSON "numbers" are problematic; and JSON arrays can't have attributes, so you're stuck with some kind of additional protocol that maps between the two.

For something that is basically text (like an HTML document) or a list of dictionaries (like RSS) it may not be obvious what the value of these things is (or even what they mean, if you have little exposure to XML), so I'll try to explain some of that.

1. Processing instructions are like <?xml?> and <?xml-stylesheet?> -- these let your application embed linear processing instructions that you know are for the implementation, and so you know what your implementation needs to do with the information: If it doesn't need to do anything, you can ignore them easily, because they are (parsewise) distinct.

2. References (called entities) are created with <!ENTITY x ...> and then used as &x; in the document. Maybe you are familiar with &lt; representing <, but this is not mere string replacement: you can work with the pre-parsed entity object (for example, if it's an image), or treat it as a reference (which can make circular objects possible to represent in XML), neither of which is possible in JSON. Entities can sit behind an external URI as well.
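To make the mechanics concrete, here is a minimal Python sketch (stdlib only) showing an entity declared in an internal DTD subset being expanded by the parser before the application ever sees the text:

```python
import xml.etree.ElementTree as ET

# The internal DTD subset declares the entity; the parser substitutes
# the &greet; reference during parsing, so the tree contains only text.
doc = '<!DOCTYPE note [<!ENTITY greet "Hello, world">]><note>&greet;!</note>'
root = ET.fromstring(doc)
print(root.text)  # Hello, world!
```

(External entities are a different matter: stdlib parsers generally refuse to resolve them by default, for security reasons.)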

3. Comments are for humans. Lots of people put special {"comment":"xxx"} objects in their JSON, so you need to understand that protocol and filter it. They are obvious (like the processing instructions) in XML.

4. JSON numbers fold into floats of different sizes in different implementations, so you have to avoid them in interchange protocols. This is annoying and bug-prone.
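This folding is easy to reproduce. As a sketch, the Python below mimics a consumer that, like JavaScript's JSON.parse, reads every number as a 64-bit double (parse_int=float is just a way to simulate that behaviour):

```python
import json

big = "9007199254740993"  # 2**53 + 1: one past what a double stores exactly

# Python's decoder keeps integers exact...
print(json.loads(big))                    # 9007199254740993

# ...but a consumer that folds numbers into doubles silently
# loses the last digit.
as_double = json.loads(big, parse_int=float)
print(int(as_double))                     # 9007199254740992
```

Two conforming JSON implementations can thus disagree about the value of the same document, which is exactly the interchange hazard described above.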

5. Attributes are the things on xml tags <foo bar="42">...</foo> - Some people map this in JSON as {"bar":"42","children":[...],"tag":"foo"} and others like ["foo",{"bar":"42"},...] but you have to make a decision -- the former may be difficult to parse in a streaming way, but the latter creates additional nesting levels.
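As an illustration of that decision, here is a rough Python sketch of both mappings (the helper names are my own, and for brevity the sketch ignores text content):

```python
import xml.etree.ElementTree as ET

def to_dict_style(el):
    # Mapping 1: attributes merged into the object, children under a key.
    node = {"tag": el.tag, **el.attrib}
    kids = [to_dict_style(c) for c in el]
    if kids:
        node["children"] = kids
    return node

def to_array_style(el):
    # Mapping 2: ["tag", {attributes}, child, ...] -- extra nesting, but the
    # tag arrives first, which is friendlier to streaming parsers.
    return [el.tag, dict(el.attrib), *(to_array_style(c) for c in el)]

foo = ET.fromstring('<foo bar="42"><baz/></foo>')
print(to_dict_style(foo))   # {'tag': 'foo', 'bar': '42', 'children': [{'tag': 'baz'}]}
print(to_array_style(foo))  # ['foo', {'bar': '42'}, ['baz', {}]]
```

Neither output is canonical; two teams making this choice independently will produce incompatible JSON for the same XML.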

None of this is insurmountable: You can obviously encapsulate almost anything in almost anything else, but think about all the extra work you're doing, and how much risk there is in that code working forever!

For me: I process financial/business data mostly in XML, so it is very important I am confident my implementation is correct, because shit happens as the result of that document getting to me. Having the vendor provide a spec any XML software can understand helps us have a machine-readable contract, but I am getting a number of new vendors who want to use JSON, and I will tell you their APIs never work: They will give me openapi and swagger "templates" that just don't validate, and type-coding always requires extra parsing of the strings the JSON parsing comes back with. If there's a pager interface, I have to implement special logic for that (this is built into XML). If they implement dates, sometimes it's unix-time, sometimes it's 1000x off from that, sometimes it's an ISO8601-inspired string, and fuck sometimes I just get an HTTP date. And so on.

So I am always finding JSON that I wish were XML, because (in my use-cases) XML is just plain better than JSON, but if you do a lot in languages with poor XML support (like JavaScript, Python, etc) all of these things will seem hard enough you might think json+xyz is a good alternative (especially if you like JSON), so I understand the need for stuff like "xee" to make XML more accessible so that people stop doing so much with JSON. I don't know rust well enough to know if xee does that, but I understand fully the need.


> <!ENTITY x ...> and then you use them as &x; maybe you are familiar with &lt; representing <

Okay. This is syntactically painful, APL or J tier. C++ just uses "&" to indicate a reference. That's a lot of people's issue with XML, you get the syntactic pain of APL with the verbosity pain of Java.

> I have to implement special logic for that (this is built-in to XML). If they implement dates, sometimes it's unix-time, sometimes it's 1000x off from that, sometimes it's a ISO8601-inspired string, and fuck sometimes I just get an HTTP date. And so on.

Special logic is built into every real-world programming scenario ever. It just means the programmer had to diverge from the ideal to make something work. Unpleasant, but vanilla and common. I don't see how XML magically solved the date issue forever. For example, I could just toss in <date>UNIXtime</date> or <date time="microseconds since 1997">324234234</date> or <datecontainer><measurement units="femtoseconds since 1776"><value>3234234234234</value></measurement></datecontainer>. The argument seems to be "ah yes, but if everyone uses this XML date feature it's solved!", but not so. It's a special case of "if everyone did the same thing, it would be solved". But nobody does the same thing.


I think you have a totally skewed idea about what is going on.

Most protocols are used by exactly two parties; I meet someone who wants to have their computer talk to mine and so we have to agree on a protocol for doing so.

When we agree to use XML, they use that exact date format because I just ask for it. If someone wanted me to produce some weird timestamp-format, I'd ask for whatever xslt they want to include in the payload.

When we agree to use JSON, the schema says integers, the email says "unix time", during integration testing we discover it's "whatever Date.now() says", and a few months later I discover their computer doesn't know the difference between UTC and GMT.

Also: I like APL.


I think I can see something of where you're coming from. But a question:

You complain about dates in JSON (really a specific case of parsing text in JSON):

> If they implement dates, sometimes it's unix-time, sometimes it's 1000x off from that, sometimes it's a ISO8601-inspired string, and fuck sometimes I just get an HTTP date. And so on.

Sure, but doesn't XML have the exact same problem, since everything is just text?


> Sure, but doesn't XML have the exact same problem, since everything is just text?

No: you can specify what type an attribute (or element) has in the XSD (for example, xs:dateTime or xs:date). And there is only one way to write an xs:date, and it's ISO8601. JSON Schema does exist, of course, but it's mostly an afterthought.
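For instance, a schema might declare a date-typed attribute like this (a sketch; the element and attribute names here are made up):

```xml
<xs:element name="invoice">
  <xs:complexType>
    <xs:attribute name="issued" type="xs:date" use="required"/>
  </xs:complexType>
</xs:element>
```

A validating parser then accepts issued="2024-05-03" and rejects issued="05/03/2024" before your application code ever runs.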


It sounds to me like you are thinking something like: if they use XML, they'll have a well defined schema and will follow standardized XML types. But if they use JSON they may not have a well-defined schema at all, and may not follow any sort of standardized formats.

But to my mind, whether they have a well-defined schema and follow proper datatypes really has very little to do with the choice of XML or JSON.


Have you ever written Markdown? Markdown is typically mostly human-readable text, interspersed with occasional formatting instructions. That's what XML is good for, except that it's more verbose but also considerably more flexible, more precise, and more powerful. Sure, you can losslessly translate any structural format into almost any other structural format, but that doesn't mean that working with the latter format will be as convenient or as efficient as working with the former.

XML can really shine in the markup role. It got such a bad rap because people used it as a pure data format, something it isn't very suited for.


<p>How would you represent <b><i>mixed content</i></b> in JSON?</p>


`{type: "p", children: [{type: "text", text: "How would you represent "}, {type: "b", children: [{type: "i", children: [{type: "text", text: "mixed content"}]}]}, {type: "text", text: " in JSON?"}]}`

or:

`{paragraphs: [{spans: [{text: "How would you represent "}, {bold: true, italic: true, text: "mixed content"}, {text: " in JSON?"}]}]}`


Oh sure, it can be represented. But now JSON is the one being noisy, ugly and verbose.


My specific claim was that you could represent it in JSON, so you can't claim, as the post I responded to did, that JSON "cannot be used."

I'll fully grant, I don't want to write a document by hand in either of the JSON formats I suggested. Although, given the choice, I'd rather receive it in that format to be parsed than in any XML format.


But that's terrible! How is that better? And can you guarantee correct ordering?


  ["p", "How would you represent ", ["b", ["i", "mixed content"]], " in JSON?"]
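For what it's worth, that array form is mechanical to serialize back to markup; here is a small Python sketch (my own helper; attributes are omitted since the example has none):

```python
from xml.sax.saxutils import escape

def render(node):
    # A node is either a text string or a ["tag", child, child, ...] list.
    if isinstance(node, str):
        return escape(node)
    tag, *children = node
    return f"<{tag}>{''.join(render(c) for c in children)}</{tag}>"

doc = ["p", "How would you represent ", ["b", ["i", "mixed content"]], " in JSON?"]
print(render(doc))
# <p>How would you represent <b><i>mixed content</i></b> in JSON?</p>
```

Ordering is preserved for free, because JSON arrays (unlike JSON objects) are ordered by definition.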


in addition to all the things listed above, json has no practical advantage. json offers no compelling feature that would make anyone switch. what would be gained?


simplicity and ease of use is a pretty compelling feature to a lot of people, if not you


Really? Is there no level of compensation that would make you think that getting punched was worthwhile?


There is some level of compensation where our protagonist could have retired after his first paycheck. His paycheck was very high, but not that high. He made a bad career move, despite the pay. For some people it would have been ok. Experienced fighters would know how to take the hit better. For them it would not have been such a problem. In the story it was a big problem. I would say there was probably no level of compensation that would make it worthwhile for the character, given he had to keep doing it on a daily basis indefinitely. His old job was fine. His new job let him pay off his mortgage faster but made his whole life worse. What good is all this money if one's life is suffering? If money lets you buy your way out of suffering, that's good.


When I was younger one of my favorite punchlines was "What kind of (wo)man do you think I am?! ... Oh, we've already established that, we're just negotiating price"


One not as funny that I made up was, "if we have to be whores, I'd rather be an expensive whore."


Sure. Pay me a million dollars and I'll take a punch. It would still be a terrible job, though.


Well, if they did that, then people could expect/demand stability with regard to which scenarios get the checks/panics optimized out. This would be a bit of a burden for the Rust maintainers. It would effectively make the optimizer part of the language specification, and that's undesirable.

