Hacker News | Laakeri's comments

There are such things; they're called "arXiv overlay journals". A couple of examples from my research area: https://www.advancesincombinatorics.com/, https://theoretics.episciences.org/


This is great! I'm trying to think of any disadvantage.


But he made a good living out of it, so in the end it was a good idea?


It certainly puts a ceiling on a career. And I'd argue it probably gave him a pretty rough shelf life. At some point he has to understand what he's doing.

Unless he's so good at selling his services he can consistently find new clients. And if that's the case, he'd probably kill it in sales.


I'll bet the ceiling is CTO.


Sales engineer is quite a lucrative career. You don't have to be really good at it, just good enough to be useful.


Sales engineers have to be good enough to bluff their way through the layers of hyperbole/minor exaggeration/utter bullshit (delete as applicable) the sales team have spun. Whether their conscience gets involved before the deal closes, different question.


Not at my work. Around here sales engineers just say "this is a proof of concept, X will be different in the final version". Then, after they close the deal, they hand us the half-implemented feature they developed that none of us heard about before, and tell us that we need to finish it and include it in the next release.


Most people never reach the theoretical ceiling of their careers, so he probably did quite well.


Cope. People often make money on things they know nothing about


He may have made a good living, but his customer / employer bought low quality code with lots of tech debt.

That business model only works until customers are sophisticated enough to understand tech debt. In the future, more customers will be less willing to pay the same good wages for low quality code.


    > but his customer / employer bought low quality code with lots of tech debt.
Sarcastic reply: Isn't that most tech? Even good (above average) developers produce lots of tech debt and sometimes low quality code.


"Webdev" makes me think of wordpress, which is like planting 20 onions in your backyard, and comparing yourself to a farmer with acres of crops.

I can completely believe someone had no idea what they were doing when copy/pasting, and working on wordpress.


Yeah, and the business people could not care less. I am on a team taking in millions of dollars from a Delphi Windows app from 1997. Zero tests, horribly mangled business logic embedded in UI handlers. Maintaining that app is not feasible. I'm rebuilding a modern version of it only because it is embarrassing to demo and is such a UX nightmare that our distributor made us commit to a new app.


I'm not sure it's really a misunderstanding, when in 99% of cases it has turned out that if a problem is in P, then it has a polynomial-time algorithm with a quite small exponent.


I feel like your claim is begging the question.

It's certainly not true in principle that most problems in P have a small exponent. There is no shortage of graph related algorithms in P that have absolutely absurd exponents, like on the order of 10^100. It is true that practical decision problems that are in P have small exponents but that's exactly the point being made, namely that problems in P with large exponents are not practical and hence don't get much attention.


What are examples of natural graph related problems in P with absurd exponents? I think the reason they don't get attention is not that the algorithms are not practical, but that the problems are not natural. Really, the only examples of such problems I can think of are something like "Finding a clique of size 222222 is in P because we can try all possibilities in n^222222 time".
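To make the "n^222222" example concrete, here is a minimal sketch (mine, not from the comment) of the brute-force argument: for any fixed k, deciding whether a graph has a k-clique by trying all vertex subsets runs in time polynomial in n, but with k in the exponent.

```python
from itertools import combinations

def has_clique(adj, k):
    """Brute-force check for a k-clique: try all C(n, k) vertex subsets.

    For fixed k this runs in roughly O(n^k * k^2) time, which is
    technically polynomial in n, but useless in practice once k is large.
    adj maps each vertex to the set of its neighbors.
    """
    vertices = list(adj)
    for subset in combinations(vertices, k):
        # A subset is a clique iff every pair inside it is an edge.
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False

# A 4-cycle has edges (2-cliques) but no triangle (3-clique).
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(has_clique(cycle, 2))  # True
print(has_clique(cycle, 3))  # False
```

This is exactly the "try all possibilities in n^k time" construction: membership in P with an enormous exponent baked into the problem statement.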


As the amount of data grows, exponents that used to be small become large.

In many applications, the data grows at least as quickly as computer performance. If the fastest known algorithm is superlinear, today's computers take longer to solve today's problems than yesterday's computers solving yesterday's problems. While O(n^2) time algorithms used to be pretty fast in the 80s, today they are often unusable.
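As an illustration of the quadratic-vs-linear gap (my example, using duplicate detection, not anything from the comment): the same task written two ways shows why an O(n^2) algorithm that was fine in the 80s can be unusable today.

```python
def has_duplicate_quadratic(items):
    """O(n^2): compare every pair. Fine for n in the thousands,
    hopeless for n in the hundreds of millions."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    """O(n) expected: a hash set replaces pairwise comparison with
    lookups, the kind of rewrite that keeps an algorithm usable
    as the data grows."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = [3, 1, 4, 1, 5]
print(has_duplicate_quadratic(data), has_duplicate_linear(data))  # True True
```

Doubling n doubles the work for the second version but quadruples it for the first, so their gap widens exactly as fast as the data grows.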


I completely agree with you. My point was that the P vs NP distinction matters in practice, but of course also the subquadratic vs quadratic time distinction matters a lot.


"Luck favors the prepared mind"


For binary-encoded input it is in 2-EXPTIME by trying all graphs of size exponential in the input number and testing all subsets of the given size. It would be surprising if any hardness result for standard complexity classes were known.


Wait, that doesn't make sense, does it? The Ramsey function grows so fast that Peano arithmetic cannot prove that it is total.


I'm not sure what you're referring to here. Ramsey's theorem is constructive enough that you can extract an upper bound of 4^k on the kth diagonal Ramsey number, and the (j,k)th Ramsey number is bounded from above by the max(j,k)th diagonal number.
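For reference, the 4^k bound mentioned above follows from the standard Erdős–Szekeres recurrence (a sketch; the derivation is mine, not part of the comment):

```latex
R(j,k) \le R(j-1,k) + R(j,k-1), \qquad R(j,2) = j,\quad R(2,k) = k,
```

which unrolls, by induction on $j+k$ using Pascal's rule, to the binomial bound

```latex
R(j,k) \le \binom{j+k-2}{j-1},
\qquad\text{so}\qquad
R(k,k) \le \binom{2k-2}{k-1} \le 2^{2k-2} \le 4^{k}.
```

In particular the function is provably total with an elementary growth rate, well within the reach of Peano arithmetic.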


My mistake. I (and I suspect the original commenter) was thinking about the strengthened finite Ramsey theorem. From Wikipedia: https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington_theor...

For any positive integers n, k, m, such that m ≥ n, one can find N with the following property: if we color each of the n-element subsets of S = {1, 2, 3,..., N} with one of k colors, then we can find a subset Y of S with at least m elements, such that all n-element subsets of Y have the same color, and the number of elements of Y is at least the smallest element of Y.

This function is outside the reach of Peano arithmetic.


I read the essay, and my takeaway to "There are for example four fundamental forces. Are we going to find another four?" would be:

Maybe not, but we could find out that the model of fundamental forces wasn't really the final answer, and explain the universe much better with some alternative theory. In your comment you assume that science is more or less settled, and all there is to do is to tweak the present understanding a bit and fill some gaps. The same sentiment was shared by many prominent physicists at the end of the 1800s, who were obviously proved wrong by Einstein and others in the 1900s.


I tend to agree that rumors of the end of discovery have been greatly exaggerated; however, in the case of theoretical physics, we have the unsatisfying problem of having several alternative theories, but severely constrained access to the energy/technology required to test them.


It's enough to just take one random element and use it as a pivot.
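Assuming the context is quicksort pivot selection, a minimal sketch of the random-pivot idea (my illustration, not the commenter's code):

```python
import random

def quicksort(items):
    """Quicksort with a uniformly random pivot.

    Choosing the pivot at random makes the expected running time
    O(n log n) on every input: no fixed input ordering can
    consistently force unbalanced splits.
    """
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 3, 8, 1, 2]))  # [1, 2, 3, 5, 8]
```

The contrast is with deterministic pivot rules (first or last element), for which an adversary can supply an already-sorted input and force O(n^2) behavior.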


For example Google hires mostly generalists and lots of people who care about income and career progression work there.


Google does not. It hires a lot of specialists and fungible people.


From my personal observations, Google indeed hires a lot of generalists.

Source: been an SWE in a hardware org for the past year there, and my team is a mix of about 1-2 people with plenty of previous hardware experience and focus, with the rest being generalists without much previous related experience (including me) who can pick things up and resolve them quickly, whatever they are. Observed quite a similar pattern on other teams in the org as well, with the only exception (to a degree) being a few research teams filled with PhDs.


Specialist and fungible are contradictory. It's true that they don't hire specialists - but there are only 7-8 different software profiles they look for.

1. ML 2. Systems 3. Product 4. Data engineer 5. SRE ....


Are you implying most companies have 37 profiles? Even this article was giving multiple examples of a little web shop that probably has fewer than 7 profiles.


Obviously not.

What I meant is that they don't specifically try to hire nodejs/computer vision/graphics/kubernetes specialists etc.

Lots of startups and mid-level companies are looking to hire specialists in computer vision, NLP, Scala expertise, etc.

So startups can have 4-5 highly specialized profiles.


Red Bull is not a sponsor but the owner of the team.


Well, it has also not been proved not to be NP-hard, so to our current knowledge it could be NP-hard. But you are right that researchers believe it's not.


Good catch, you’re right: nobody has yet proven that factoring is NP-hard, but a ton of evidence indicates that it is highly unlikely to be.

