
Very interesting work!

Did you guys get a chance to compare with [1]? These seem to be the standard for high-performance non-cryptographic RNGs.

[1]: https://github.com/DEShawResearch/random123
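(For anyone who wants to play with a Random123-family generator: NumPy ships Philox as a bit generator, so swapping it in against the default PCG64 is easy to sketch. The snippet below is just a rough illustration, not a proper benchmark.)

    import time
    import numpy as np

    def time_bitgen(bitgen, n=10_000_000):
        # Generate n uniform doubles and return the elapsed seconds.
        gen = np.random.Generator(bitgen)
        start = time.perf_counter()
        gen.random(n)
        return time.perf_counter() - start

    # Philox is a counter-based generator from the Random123 family.
    print("Philox (Random123):", time_bitgen(np.random.Philox(seed=42)))
    print("PCG64 (default):   ", time_bitgen(np.random.PCG64(seed=42)))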


With Python 3.10 and static type checkers, much of the benefit of algebraic data types is already available [1].

[1]: https://stackoverflow.com/questions/16258553/how-can-i-defin...
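A minimal sketch (my own example, not from the linked answer): frozen dataclasses plus a union alias give you the sum type, and structural pattern matching gives you the case analysis, with mypy/pyright able to flag a missed variant.

    import math
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Circle:
        radius: float

    @dataclass(frozen=True)
    class Rect:
        width: float
        height: float

    Shape = Circle | Rect  # the "sum type"

    def area(shape: Shape) -> float:
        match shape:
            case Circle(radius=r):
                return math.pi * r * r
            case Rect(width=w, height=h):
                return w * h
        # With an exhaustive match over the union, a checker like mypy
        # can verify that no variant falls through to this point.

    print(area(Circle(2.0)), area(Rect(3.0, 4.0)))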


There are two primary criticisms of this move in this thread:

C1) This is just hurting ordinary Russian citizens and serves no other purpose. I empathize strongly with the Russians. But I also empathize with the Ukrainians. Since Russia has nuclear weapons, the only thing the rest of the world can do is impose sanctions. Sanctions work in the long run, but they take a long time to show effects, and Ukrainians do not have a long time. All of these moves will cumulatively amplify the unrest among ordinary Russians, giving Putin less time to sit back and let his plan play out. Will these be enough? Only time will tell. But focusing only on the primary negative effects while ignoring the secondary and tertiary effects is short-sighted.

C2) This is just virtue signaling. Virtue signaling may be a component of this move, but this is business for Cogent. They make money moving data, and Russian money has lost a lot of its value. So I am not convinced this is just virtue signaling; there is definitely a business component.


It's crazy I had to scroll all the way down here for someone to mention that Cogent sells interconnects and, well, Russian clients can't pay them now. If you don't pay, you don't get peering.


A third option would be that the firewall requirements are pretty severe and they don't want to contribute to that (just mentioning).


Because grep is not a "full text search tool".

Full text search [1] is different from regular expression search.

[1] https://en.wikipedia.org/wiki/Full-text_search
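A toy sketch of the difference (purely illustrative, not how any real engine works): grep matches a regex against raw lines, while full-text search tokenizes and normalizes documents into an inverted index and matches query terms against that.

    import re
    from collections import defaultdict

    docs = {
        1: "The cats are running in the garden.",
        2: "A regular expression matches character patterns.",
    }

    # grep-style: regex against raw text; "cat running" matches nothing.
    print([i for i, d in docs.items() if re.search(r"cat running", d)])  # []

    # full-text style: tokenize + crude normalization into an inverted index.
    def tokens(text):
        # naive "stemming": strip trailing -s / -ing so cats/running ~ cat/runn
        return [re.sub(r"(ing|s)$", "", w) for w in re.findall(r"[a-z]+", text.lower())]

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for tok in tokens(text):
            index[tok].add(doc_id)

    query = "cat running"
    print(set.intersection(*(index[t] for t in tokens(query))))  # {1}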


Filling out tax forms and FBAR requirements were a pain.


One theoretical objection to this idea is that measured distances have an accuracy limit of the Planck length (1.616e-35 meters). So if your number needs more precision than that, the "just mark" step can't be done.
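Rough arithmetic for that limit (my own back-of-the-envelope, assuming a 1-meter interval): at Planck resolution there are only about 6e34 distinguishable positions, i.e. roughly 35 decimal digits of precision.

    import math

    planck_length = 1.616e-35   # meters
    interval = 1.0              # meters
    positions = interval / planck_length   # ~6.2e34 distinguishable marks
    digits = math.log10(positions)         # ~34.8 decimal digits
    print(positions, digits)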


Surprisingness is not a good metric for judging social/psychological research. Often a statement and its opposite can both seem intuitive.

If the opposite result had been reported here, one could easily have come up with many "intuitive" explanations.


What would be the benefits of this over neovim-qt?


"cut the ball into 5 pieces" is not the best description. A better one is: 2a. Split the ball into infinite pieces 2b. Divide the infinite pieces into 5 groups


> 2a. Split the ball into infinite pieces.

> 2b. Divide the infinite pieces into 5 groups.

Huh? The ball is already composed of infinitely many points. So in 2a you merely acknowledge that the ball exists, and then in 2b you cut it into pieces. It seems superfluous to mention 2a separately.


In regular natural language, cutting the ball into 5 pieces implies cutting 5 contiguous pieces.

If I say a cake is cut into 5 pieces, no one will take that to mean that each piece contains bits from every part of the cake.


The basic idea is as follows.

Let's say you are building an ML model to decide whether or not to give someone insurance. Let's also assume your past behavior had some bias (say, against some group). An ML model trained on that past data will likely learn the same bias.

Part of the focus in modern ML is then to understand what bias exists in the data, and how we can train models on that data while somehow counteracting the bias.
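As a hedged illustration of one piece of that (reweighting the training data so historical decisions for each group count equally; the data and group labels here are entirely made up):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, n)               # hypothetical: 0 = majority, 1 = minority
    income = rng.normal(50 + 5 * (group == 0), 10, n)
    # Historical approvals: driven by income, plus a penalty on the minority group.
    approved = (income - 8 * group + rng.normal(0, 5, n) > 50).astype(int)

    X = np.column_stack([income, group])

    # Reweight so each (group, outcome) cell contributes equally to training,
    # instead of letting the biased historical approval rates dominate.
    weights = np.ones(n)
    for g in (0, 1):
        for y in (0, 1):
            mask = (group == g) & (approved == y)
            if mask.any():
                weights[mask] = n / (4 * mask.sum())

    model = LogisticRegression().fit(X, approved, sample_weight=weights)
    print(model.coef_)   # how much the group feature still drives the decision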


How do you tell whether something is biased or not? Seems like the current system is "if people cry foul because it seems unfair, then the model is biased," which doesn't seem scientifically rigorous.

This seems like a hard problem. For example, say that you have an ML model that decides whether someone will be a good sports athlete or not purely based on biometrics (blood oxygen level, blood pressure, BMI, reflex time, etc.). If the model starts predicting black people will be better athletes at higher rates than white people, is the ML model biased? Or is the reality that black people have higher-than-average advantageous physical characteristics? How do you tell the difference between bias and reality?


> If the model starts predicting black people will be better athletes at higher rates than white people, is the ML model biased?

My comment is the naughtiest of wrong-think by HN standards, but the likely reality is this: most programmers will do their genuine best to deliver algorithms that treat all humans equally, without bias toward race, gender, or other immutable characteristics, and simply try to deliver the best objective result (picking good athletes, making the most money, hiring the best employees, or whatever the task is). But when that inevitably yields an unequal outcome or a decision that goes against the political correctness orthodoxy, they will be forced to do one of two things.

1) Reprogram their software to fit modern political correctness standards. Personally I think this is close to impossible. As an example: say you're creating some software to determine healthiness by various available data and it objectively determines that heavier people tend to be less healthy. You're boxed into an impossible corner here of either being politically incorrect or just lying to people about their health.

2) Go back to human decision makers for anything controversial because I don't even know how it will be possible to program an algorithm to take into account all of society's made-up, arbitrary, ever-changing rules on "equitable" outcomes. As far as I'm aware, Amazon had to abandon their effort to replace some of their HR efforts with algorithms because it yielded politically incorrect outcomes despite the programmers seemingly trying to just come up with the best possible employees and nothing else.


The bias would have to be determined by a board of experts who debate things based on facts, but it is ultimately subjective and tied to the time and place of the culture.

The ethics in AI folks, for the most part, seem to want models to predict what they would predict, based at least partly on subjective analysis of culture, not entirely based on scientific data.

At least that's what I think I've concluded about algorithmic bias. It's one of the situations where I really want to understand what they're saying before I make too many criticisms and counterarguments.


> ML model trained on this past data will likely learn that bias

That's the opposite of what the author is saying, though - or rather, she's saying that data bias exists, but the algorithm itself introduces bias that would be there even if the data itself were somehow totally fair, for some unspecified definition of "fair".


What you just described is a prior bias being encoded in the data. It's not algorithmic bias, because it's not encoded in the structure of the algorithm. Sara addresses that (data re-weighting) but says that's not all.

I honestly don't think it can be what you're describing, or else the debate is a very different one from what Sara and others mean by "algorithmic bias exists and it is distinct from data bias."


A reference I like, based on your last point:

https://www.frontiersin.org/articles/10.3389/fpsyg.2013.0050...

