Well, of course you get whoever you elected; that's a truism that holds for any method.
What method do you prefer? Trust in the market and choose the one with the highest price,
or choose the one recommended by most, aka the popular choice, the elected one?
You're offering two choices which prove the point that electing is a poor way to fill a post.
"popularity" does not imply competence. Popularity is easily gamed and bought. Given that unlimited business money can be spent on elections, it's mostly bought.
I'm not sure what you mean by market, or highest price, but I assume you mean the above?
The opposite of elections is appointment. Based on competence. So, for example, in my company I want job x done well, so I appoint a person based on their ability to do x.
Of course this assumes I want x done well. If I'm elected, and I want x done badly, then I can appoint someone based on other factors, like ideology or loyalty etc.
In the end this relies almost completely on proprietary AI-as-a-service offerings, right? I think the exact services should be advertised as well, to help understand the limitations of the device.
E.g. can I use this device with any language, or is it English only? Can it do translations?
Honestly, I've been away from the field for quite a long time, so I wouldn't be up to date. But if you want a good framing of the field, how it evolved, and how it differs from other kinds of visualization (like scientific visualization), maybe start here [0a][0b]
There used to be a lively research field for information visualization that studied current visualization techniques and proposed new ones to solve specific challenges -- I remember when treemaps were first introduced, for example [1]. Large networks were a pretty big area of research at the time, with all kinds of centrality, clustering, and edge-minimization techniques.
A few teams even tried various kinds of hyperbolic representations [2,3], so that areas under local inspection were magnified under your cursor and the rest of the hairball was pushed off to the edges of the display. But with big graphs you run into quite a few big problems very quickly, like local vs. global visibility, layout challenges, etc.
Not specifically graph related, but the best critical thinker I know of in the space is probably Edward Tufte [4]. I have some problems with a few bits of his thinking, and other than sparklines his contributions are mostly about critically challenging what should be represented, why, how, and with what methods of interaction; still, his critical analysis has stayed up there as some of the best. He has a book set that's a really great collection of his thoughts.
If you approach this problem critically, you end up at the inevitable conclusion that trying to globally visualize a massive graph is, in general, basically useless. Sure, there are specific topologies that can be abstracted into easier-to-display graphs, but the general case is not conducive to it. It's also somewhat surprising how small a graph can be before visualizing it gets out of hand -- maybe a few dozen nodes and edges.
I remember the U.S. DoE did some really pioneering studies in the field and produced some underappreciated experts like Thomas, Cook, and Risch [5,6]. I like Risch's concept of visualizations as formal metaphors of data; I think he succeeds in defining rigorous atomic components of visualization that you can build up from. Considering OP's request in view of Tufte and Risch, I think they really need to think about the potential for different metaphors at different levels of detail (since they specify zooming in and out). There may not exist a single metaphor that can visualize certain data at every conceivable scope and detail!
One interesting artifact from all of this is that most of the research has long since been captured and commoditized or made open source. There really isn't a market anymore for commercial visualization companies, or grant money for visualization research. D3.js [7] (and its derivatives) more or less took millions upon millions of dollars in R&D and commercial research and boiled it down into a free, open-source library that captured pretty much all of the major findings in one place. It's objectively better than anything that was on the market or in labs at the time I was in the space, and it's free.
How do you "extend" Z (what are you adding to it)? How do you prove there is a field containing Z? Assuming you are working in ZFC, what is an example of an element of Q? What is an example of an element of Z?
One way to do things is to define N as von Neumann ordinals:
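A sketch of the standard construction, where each natural number is the set of all smaller naturals:

  \[
    0 := \varnothing, \qquad n+1 := n \cup \{n\},
    \quad\text{so}\quad 1 = \{\varnothing\},\quad 2 = \{\varnothing,\{\varnothing\}\},\ \dots
  \]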
Then you define Z as the quotient of NxN by the equivalence relation (a,b) ~ (c,d) iff a+d = c+b (intuitively, (a,b) stands for a-b). This means each integer is itself an infinite set of pairs of natural numbers.
Then you define Q as the quotient of Zx(Z\{0}) by (a,b) ~ (c,d) iff ad = cb (intuitively, (a,b) stands for a/b). Again, each rational is now an infinite set of pairs of integers.
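To make the encodings concrete, here's a rough sketch of what two familiar numbers look like under these definitions (square brackets for equivalence classes):

  \[
    -1 \;=\; [(0,1)] \;=\; \{(0,1),\,(1,2),\,(2,3),\,\dots\} \subseteq \mathbb{N}\times\mathbb{N}
  \]
  \[
    \tfrac{1}{2} \;=\; [(1,2)] \;=\; \{(1,2),\,(2,4),\,(-1,-2),\,\dots\} \subseteq \mathbb{Z}\times(\mathbb{Z}\setminus\{0\})
  \]

And each pair in the class for 1/2 is itself a pair of such infinite sets of pairs of naturals -- exactly the kind of encoding detail you don't want leaking into proofs.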
The point of the OP post is that we want to define things by their properties, but then we are defining what a set of rational numbers is, not what the set of rational numbers is, and we need to make sure we do all proofs in terms of the properties we picked (e.g. that Q is a field, that there is a unique injective ring homomorphism i_ZQ: Z -> Q, and that if F is a field with an injective ring homomorphism i_ZF: Z -> F, then there is a unique field homomorphism i_QF: Q -> F such that i_ZF = i_QF ∘ i_ZQ), rather than relying on some specific encoding and handwaving that the proof translates to other encodings too. This might be easier or harder depending on which properties we use to characterize the thing. The OP paper gives adding inverses to a ring as one of its examples in section 5 ("localization" is a generalization of the process of constructing Q from Z; for example, you could add inverses just for powers of 2 without adding inverses of other primes), proposing a different set of properties that they assert is easier to work with in Lean.
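Written out, the characterizing property from the parenthesis above (the usual universal property of the field of fractions) is roughly:

  \[
    \forall\, F \text{ field},\ \forall\, i_{ZF} : \mathbb{Z} \hookrightarrow F \text{ injective ring hom},\quad
    \exists!\, i_{QF} : \mathbb{Q} \to F \text{ field hom with } i_{ZF} = i_{QF} \circ i_{ZQ}
  \]

Any two candidates for Q satisfying this (together with their i_ZQ) are isomorphic in a unique compatible way, which is why proofs written against these properties transfer between encodings.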
The behaviour seems perfectly reasonable to me. They are not in the business of reflecting reality; they are in the business of creating it. To me, what you call wokeness seems like a pretty good improvement.
You want large tech companies "creating reality" on behalf of everyone else? They're not even democratic institutions that we vote on. You trust they will get it right? Our benevolent super rich overlords.
It's not really a question about want, it's a question about facts. Their actions will make a significant mark on the future. So far it seems like they are trying to promote positive changes such as inclusion and equality, which is far, far, far, fucking really infinitely far better than trying to promote exclusion and inequality.
Can you please explain how outright refusing to draw an image from the prompt "white male scientist", and instead giving a lecture on how their race is irrelevant to their occupation, but then happily drawing the requested image when prompted for "black female scientist", is promoting inclusion and equality?
Saying most scientists in the world are white males seems like a very Anglo-centric perspective, at least based on the numbers available from statista.com.
You are so right! Just not the way you want to be.
Google and the rest of “tech’s” ham-fisted approach has opened the eyes of millions to the bigotry these companies are forcing on everyone in the name of “improvement”, as you put it.
There’s a huge difference between filling in gaps with diversity and refusing to make innocuous pictures a user explicitly asked for (but only when “white” is involved), while happily making any picture with black people in it, even when it's ahistorical.
If it's a question of facts, why are you allowing blind assumptions to lead your opinion? Do you have sources and evidence for their agenda that matches your beliefs?
I don't know; I would never put these tools in a decision-making position, and never provide them with my CC. Gather information, sure; mutate my financials or even my inbox, no way in hell.
This reminds me. There was a paper published a couple of years ago and posted here on HN that actually calculated the probability of amino-acid-based life emerging, based on the complexity of the chain needed to start replicating.
The conclusion was that it was vanishingly small in the observable universe but only close to 0 in the full universe.
I've since tried to find it without luck. Does anybody here know where I can read it or remember the article I'm talking about?
I posted a comment there. They are using a very long protein instead of a short one. Nobody expects that the first functional protein was so long.
Also, they are generating the protein using a "random dice roll" instead of assuming a short, crappy version and using "branch and prune" to find a longer and more efficient one.
Not sure what the term for this is, but rather than looking at the probability of X happening, we should look at the "inevitability" of X happening in the context of the environment.
In my experience, even though nature looks chaotic, there is a very strict order to things, one which has evolved over millions of years as a result of searching for the "most optimal way" to achieve a goal. A good example might be mycelium optimizing routes to nearby resources; another might be ant colonies creating tunnels that are efficient to navigate.
The problem, in my opinion, is that we do not know what the final goal is. Therefore we cannot begin to analyze the inevitability of something such as us, or life in general, happening. The answer may perhaps be found in religion or some similar "greater than life" endeavor.