Hacker News | past | comments | ask | show | jobs | submit | vegannet's comments

I’ve always understood the chemical imbalance description of depression (and other mental health conditions) to be a casual way of describing the conditions as being part of the person rather than a choice — and not a way to describe the internal mechanics of the conditions. I’ve found it effective when having conversations about mental health conditions: how would you describe depression without using that phrase, based on what this paper reveals?


I'm not saying there isn't a biochemical component to depression (or any other mood disorders). The specific theory that I'm talking about is "low serotonin causes depression" (as in the proximate cause, not the ultimate cause). When SSRIs were first discovered to be useful for treating depression, one of the theories about why they worked was that they boosted levels of serotonin, but we know now that's not true. It doesn't mean it can't be explained in other ways (like the one this article discusses).

Also, if there is a behavioral component to depression as well, then it doesn't necessarily mean someone is to blame for their disorder. You don't control the environment you grow up in, which has an enormous impact even on traits that are highly heritable (the whole subject of heritability is very misunderstood anyway).

So basically, if I were going to describe depression's cause, I'd say it's a mixture of biochemical reactions, behavioural traits, and environmental stresses.


A behavioral component manifests itself as a chemical process within the brain. So it may be used to explain the cause of certain chemical processes, but it's still the chemical processes that have to be dealt with. Though it doesn't necessarily follow that a chemical intervention is necessary: if the cause of the chemical change was behavioral, the solution might be as well, such as various types of therapy.


In the article they discuss that known SSRIs weakly bind a different receptor which may be why they have an antidepressant effect. It would explain why more specific SSRIs that are thought to target serotonin receptors selectively do not work. That they see a bunch of them doing the same thing is pretty convincing.

to quote: "Now to the BDNF hypothesis. I used the phrase “unknown mechanism” above, and that’s exactly what this work may have cleared up. The authors show that when the TrkB protein forms a dimer in the cell membrane, a binding site for small molecules is formed at the interface. A whole list of known antidepressants (fluoxetine, imipramine, venlafaxine, moclobemide, ketamine, esketamine, and R,R-hydroxynorketamine) bind to this site at about 1 micromolar levels (and can displace each other in binding assays), while a set of control CNS compounds like chlorpromazine, diphenhydramine, and indeed S,S-hydroxynorketamine do not. It will not be lost on those who’ve done research in the field that the antidepressant compounds listed above have been thought to work through completely different mechanisms."


> you gotta do some guerrilla marketing

Like getting a piece in the nytimes?

> Or start selling direct to consumer using marketplaces like Amazon.

Per the article, most consumer destinations ban any mask advertising because of scalping.


> Like getting a piece in the nytimes?

Back in my day we called that a slashvertisement.


Yeah after this article they'll be fine.


Does anyone have a take on where Reddit might be, say, 5 years from now? A view on the path to profitability? The obvious outcome for an unprofitable Reddit is an acquisition by a big media company but I’m curious if anyone has a contrary view on how it can become profitable and what sort of shifts it’ll need to make.


Reddit seems to be following all the other social media platforms in increasing the time spent within their app.

Reddit Public Access Network (live streaming) could be further monetized by incentivizing tipping and venturing into gaming. They also purchased the short video app Dubsmash to compete with TikTok. Ads can be shown in between videos.

"...encourage under-represented creators to find a home on Reddit," seems to be hinting at making the platform appealing to influencers? If so, e-commerce and affiliate linking integration could be profitable. Not sure if the current reddit community would like that one, though.

I'm sure they could also squeeze in a Clubhouse copy. Could be popular with communities like the /r/wallstreetbets crowd.


They are a top ten site on the internet. I don't see why Twitter is at a $45B market cap but Reddit should be so hopeless, especially when their subreddit model lends itself much more clearly to ads than the "stream of consciousness" of Twitter. They don't even need to worry about tracking/privacy as much, since just the subreddit name and nothing else is a huge hint as to what ads should be relevant.


The problem is when the subreddit content itself becomes ads, although they may have sufficient time to milk it while people figure it out and/or don't have a better alternative.


Another problem is that subreddits are under the ownership of the creators/moderators, who may have their own ideas of monetization as well as where the revenue should go...


Their majority shareholder is a big media company.


I love the idea but one of the challenges I’ve found at quite a few companies is a difficulty across the organisation in understanding the true cost of infrastructure: I’ve experienced situations in which putting the $ upfront causes people to focus on the $ amount.

Do you have a guide for how this information should be used most effectively? I'm thinking of co-workers who would request a change that reduces the monthly cost by $10, but by the time the change goes through code review, gets made, and gets tested... any $ cost saving has already been spent on review time.

My instinct is that perhaps rather than reporting the $ or % change, instead projects could have alarms / limits on cost increases, but I’m not so sure if that is really tackling the problem.


You're right. It has to be a balance between the cost of the infrastructure and the developer's time. One feature we have for this is to only show comments in PRs if the cost increase is above a certain threshold.

Another idea we have is to allow developers to set alerts based on their actual project/IaC concepts instead of configuring alerts based on tags and services. Do you think this would help?


I’ll take that bet. How much? I’m not anti-Tesla but I do see Tesla as a brand more than a technology company: good for business but not good for breaking new ground with technology — I’d put a lot against Tesla on that bet (I don’t think it matters for the business much though).


I say we do $1,000, but what did you have in mind? We will have to put the bets in escrow.

We need to agree to what constitutes full level 5 self driving.

My interpretation of it is a car that can drive itself anywhere in the world, has been approved for commercial use in at least one country, AND recognized as the first to full self driving in at least 2 major publications.

We can put a 5 year time limit on the bet.


Unless I'm reading it wrong, it sounds like you (sixQuarks) and the parent (vegannet) want the same side of the bet :-) ie you're both betting that Tesla won't achieve FSD L5 in 5 years.


> sixQuarks:

> Anyone wanna bet me that Tesla will be first to full level 5 self driving?


L5 under those conditions won’t be achieved in 5 years. Might as well not make the bet.


We can make a different bet on this.


As a lay-person who reads scientific papers sometimes because they’re referenced on HN, how do I validate the credibility of a journal? Does a directory of journals exist with scoring, or is there a strategy I can use when evaluating a paper or journal to determine credibility?


One doesn't validate the credibility of a journal; one validates the credibility of the methodology in the research.

And it then turns out that most methodology, even in reputable journals, is rather wanting, with many objections that can be leveled against it.

A large amount of scientific research can't even be reproduced, and of much that can, even though the cold data can be reproduced, the conclusions that follow from the data are rather dubious leaps of faith.

It doesn't take much for something to be called “science”; it certainly doesn't take reproducibility, despite various claims to the contrary.


It absolutely makes sense to simply ignore articles based on the credibility of the journal; it's an effective and cheap first filter (and you absolutely need filters). There are many predatory journals (like this one) which will publish anything that's paid for, they probably even outnumber "real" journals, and it makes complete sense to automatically discard them without reading.

There is a lot of noise already in "proper" journals - but in the predatory journals, the signal-to-noise ratio is so extremely low that it's not worth looking into the credibility of the methodology of the paper, because that's far more time and effort than the paper deserves. If it was any good, it would have been published in a better venue. If it could pass peer review, it would have been published in a venue that actually does peer review, as opposed to these (many) predatory journals who just claim to do so. The authors have strong practical incentives not to publish there if they can avoid it, and the fact that they chose to do so anyway indicates that no respectable place would publish it.

Because of that, if a paper is published in a place like this, it is a completely reasonable prior to presume that the paper is overwhelmingly likely to be very bad, without even looking at the paper itself.


This is true; it works in one direction, but not in the other.

But the way the post I replied to was worded suggested that one can automatically trust “science”, provided that it be published in a credible journal.

I also find that the most sensational papers very often end up in the most reputable journals before any attempt at reproduction has been made, simply because the data they measured was far from the null hypothesis, which may be entirely a statistical fluke and not hold up under a reproduction attempt.

It is really quite easy to obtain spectacular data as a fluke.


There are impact factors and top lists, and you can check out the Wikipedia page of the journal if it exists. You can google what are the "top journals for <field in question>". You can check the publisher. IEEE and Springer for example tend to be genuine.

But you still have to look at the individual paper and can't believe it just based on a single article. Research articles are for sharing results among experts (and for advancing the careers of researchers), they are not aimed at laypeople. It's easy to misinterpret them if you lack the background knowledge.

Even in non-predatory journals, many results fail to replicate and are produced under publish-or-perish pressure. You're better off learning from textbooks so you get info that has been verified, digested, distilled, and represents consensus. Cutting-edge research proposes new ideas by one group of authors; it's not a consensus yet.


> You're better off learning from textbooks so you get info that has been verified, digested, distilled, and represents consensus.

A very large body of consensus ended up in textbooks that either had never been attempted to be reproduced, or was attempted much later and couldn't be reproduced, yet continued to remain in textbooks.

The truth of the matter is that most scientific research will never see an attempt at replication. The replication crisis has no doubt influenced this culture, but before it, as little as 0.16% of peer-reviewed results were attempts at replication, and most of those were unsuccessful. Of course, the successful ones were not as easily published, which is probably why so few were attempted.


That may happen in exceptional cases, but less so for sciences with more concrete results. To learn about human psychology and society perhaps you're better off reading great novels than p-hacked sexy "research".

Regarding replication... Replication is boring in the eyes of funding committees. They want new and sexy results for their money, preferably at a steady rate of a bunch of papers each year.


As a very rough first filter, for papers in the area of natural sciences, check if the paper is indexed in Pubmed (https://pubmed.ncbi.nlm.nih.gov/). Every legitimate peer-reviewed journal in that area is indexed there; anything that is not indexed is almost certainly a dubious journal. But you can't use this the other way around, Pubmed still contains some journals that publish questionable stuff.

Beyond that you can look a bit at the impact factor of the journal, you can generally find it by just googling for the name of the journal + "impact factor" or on the Wikipedia page of the journal. But it is kinda hard to interpret this on its own, and it does not translate directly into credibility. A high impact factor shows that articles in that journal are cited very often, and you can assume that journals with high impact factors generally have a reasonable peer review system. But that doesn't say much for a single paper, reputable journals can easily fail in individual cases.

The Pubmed check generally filters out most of the journals that would just publish anything without real review, so it's the most useful filter in my opinion. I would not try to gauge more meaning than that from the journal and focus on the individual article.


Maybe it's an unpopular opinion, and biased by the fields I've worked in as a grad, but I think you really just can't validate a particular paper without a background in the field. You need lots of context: recent works of that university department, the credibility/background of the last-author postdoc/professor, and the state-of-the-art and cornerstone works in that field.

You could probably gain an entry-level background (except for highly mathematical or medical things) by spending 5-10 hours a week for a few months reading various papers/online discussions, assuming you have access to those forums. You should know that's the lower bound of the amount of literature reading full-time students have to do to keep up with their own field.


> literature reading full-time students

Nobody keeps reading full time; that would not only be insane, it would also be counterproductive, since it's time not spent actually doing something or writing papers. In most places it's not necessary to know all the recent papers, and if a really important one is missing as a reference, a reviewer will note it in the comments.


I worded that poorly. I meant full-time students (as in they're not just reading about something out of curiosity while sitting on the toilet) that read literature as a part of their job.


Actually, asking for the credibility of a journal is the wrong question. The real one is about whether an article is credible. But the answer to that is disappointingly complicated.

Articles are peer-reviewed for their readability (for peers, not laymen), consistency (e.g. if there is a mathematical proof in there, the reviewer might check it if possible; if there is an experimental technique, the reviewer might check if that technique could produce the results claimed) and reproducibility (is the experiment described in sufficient detail? does the software provided produce the expected results? do the provided numerical results fit the claims?). All those checks are what the reviewer can do in a few hours, days at most. That is what peer-review means.

All the things to check beyond those are left as an exercise to other scientists after publication. Reproduction is a big task, usually almost as big as the original paper. Proofs are often extremely hard to check, so a few days by any old reviewer won't cut it. So if you want to know if the results in a paper are true, watch literature for the following years and look at publications citing that paper and (dis-)agreeing with its results. Same for (sometimes) letters to the editor, retractions and "everybody-knows"-rumours.

More reputable journals tend to attract better-quality papers (according to the peer-review criteria outlined above). That tends to correlate with a higher probability of the paper being true. But it is not a very strong correlation, there is utter nonsense, groupthink and polished turds even in very high profile journals.


Right, I think many people believe peer review to be some big expert committee giving a paper lots of analysis and evaluation. In reality, reviewers are mostly PhD students with 1-5 years of experience, not experts with decades of experience, and one person may get like 5-10 papers to review at a time. Oh, and it's unpaid work that takes time away from your own research and is not really incentivized beyond a vague sense of moral duty towards the progress of mankind's knowledge. The end result is that it's mostly pattern matching: does it look like the typical paper in the field? The gut reaction and first impression strongly influence the decision; the actual review is then about justifying that decision.

I mean it's not totally arbitrary, really good reviewers do give it like 2-5 hours, but it's best thought of as a rudimentary filter rather than a meticulous verification.


There are a number of metrics[1] for journal ranking. The journal in question[2] claims to have (or have had) an impact factor[3] of 0.593 (2017-18); "the ratio between the number of citations received in that year [...] and the total number of "citable items" published in that journal[...]". That's not a very good score. Nature has an impact factor of ~42 (2019), or to pick another example, because I recently referred to it here on HN, Eurosurveillance has an impact factor of ~6.

I don't think blindly following these metrics is a good idea, but it's not bad as a first approximation.

[1] https://en.wikipedia.org/wiki/Journal_ranking

[2] https://juniperpublishers.com/ofoaj/

[3] https://en.wikipedia.org/wiki/Impact_factor
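For concreteness, the impact-factor ratio quoted above can be sketched in a few lines of Python. The function name and the citation/item counts are made up for illustration; real figures come from citation databases, not from anything you could compute yourself.

```python
# Illustrative sketch of the impact-factor ratio described above.
# The counts below are hypothetical, chosen to reproduce the ~0.593 figure.

def impact_factor(citations: int, citable_items: int) -> float:
    """Citations received in a given year to the journal's items from the
    prior two years, divided by the number of citable items published in
    those two years."""
    return citations / citable_items

# e.g. 593 citations against 1000 citable items yields 0.593
print(round(impact_factor(593, 1000), 3))  # 0.593
```

The point of the sketch is just that the metric is a per-item citation rate, so small specialist journals can score low while still being reputable.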


Impact factor works if you just want to minimize the number of "bad" publications you look at. But if you rely on it too much, you are going to exclude lots of good publications too. Specialist journals tend to have lower impact factors, for one. The Journal of Artificial Societies and Social Simulation is a good journal, but its impact factor has never been especially high.


One possible way is looking at the list of editors of the journal, and seeing if the researchers in question have mentioned that journal back on their CV (or personal homepage). The fact that researchers/peers in a field are willing to openly connect their prestige to that of a journal is usually an indicator that there is at least something to it.

Most if not all scammy journals will fail this test, but it is also quite labor intensive.


I believe you can probably get a good answer to your question with Beall's list: https://beallslist.net/


^ That was the standard blacklist, but I thought it was shut down in 2017?

It's really tricky, because the move to open access publishing (which typically requires that authors pay) is a good thing on one hand, but facilitates these junk journals (and inflates publication count) on the other hand.

There are also Cabell's lists, both of predatory and "quality" journals.

https://en.wikipedia.org/wiki/Beall%27s_List

https://en.wikipedia.org/wiki/Cabells%27_Predatory_Reports


When you see a link to a paper with an interesting title posted to HN from the nature.com domain, it's almost certainly from "Scientific Reports". It's not as bad as described here, but nobody publishes there if they have a paper that is scientifically interesting. It's a place to report what you did when nothing interesting came out of it.

Nature publishes many journals and "Nature" is the highest quality general science publication. "Scientific Reports" does just quick review for methodology.


A really simple good-enough approach is to go to Google scholar, type in the title, and count how many citations it gets. Adjust for years since publication. Many citations: probably important. Few citations: less important. No citations, more than a year old: probably not much good.

It isn't perfect. Different disciplines tend to cite at different rates; and you may believe that some disciplines are just cabals of bullshitters citing each other. I couldn't possibly comment.
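The "adjust for years since publication" step above amounts to a citations-per-year rate. A minimal sketch, with a hypothetical helper and made-up numbers purely for illustration:

```python
def citation_rate(citations: int, year_published: int, current_year: int) -> float:
    """Citations per year since publication: a crude 'importance' heuristic."""
    years = max(current_year - year_published, 1)  # avoid dividing by zero for brand-new papers
    return citations / years

# A hypothetical 2015 paper with 120 citations vs. a 2022 paper with 30, as of 2024:
print(citation_rate(120, 2015, 2024))  # ~13.3 citations/year
print(citation_rate(30, 2022, 2024))   # 15.0 citations/year
```

Note the younger paper can come out ahead on rate despite fewer total citations, which is exactly the adjustment being described.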


You can start by making sure it has many citations, not self citations, and ideally also people challenging it.

Then you can judge the arguments of the paper and the papers referencing it.

Usually a paper with thousands of citations and only weak arguments against it is trustworthy.


Evaluate the authors directly.


I agree wholeheartedly that Bumble is the dating app for relationships — and it’s the app I recommend! — but from a financials perspective: casual dating is much more profitable, and Bumble will always be far behind Tinder on revenue. So as much as I love Bumble, I’d question the room for revenue growth it has vs. Tinder.


I’ve used Webflow directly and seen it used within companies: although not perfect, my assessment is that they’ve bridged the gap between WYSIWYG and website builder. They’ve got room for improvement but they’re definitely on the right path; I’d bet on Webflow. If you look at how WordPress is used in most companies, Webflow is all the good parts.


Having used it and subscribed after their last round, I will not be renewing. The cost is too high for such a buggy product that continues to not implement features. So little has improved or changed. The most frustrating thing for me is how they deal with different screen sizes and overrides. They made CSS worse; it is beyond frustrating.


They literally erected gallows on the grounds of the capitol, they literally marched through the building chanting "hang Mike Pence". The entire basis of the Q Anon conspiracy is that Donald Trump is preparing to destroy the deep state and retain power. Spend more than a few minutes browsing around Gab (you can just look at the top posts) and you'll see explicit references to overthrowing the government.

You can absolutely argue that this was a laughable attempt to install an illegitimate government and you can absolutely argue that they have no chance of succeeding, but you can't argue it isn't the intent -- it's the entire basis of the Q Anon conspiracy. The laughable nature of their attempt doesn't disprove the intent.


Didn't BLM install gallows outside Jeff Bezos's house or something like that? Were they trying to overthrow Amazon?



If so, that would be a crime and those people should have been arrested. So what? I love the conservative view that if any liberal anywhere did anything bad, then it’s a get out of jail card for them to also do that bad thing. No dude. A crime is a crime.


You can’t redeem Tether — that’s a core part of the fraud, they have “banking issues” and had them for years. You can sell it on an exchange but the Tether organisation have prevented any redemption of USD from USDT for years. Anybody turning USDT into USD is selling their USDT.


So you can't trade USDT for USD at all (eg. third party exchange trades, or withdrawing from an USDT exchange directly), or is it only that you can't trade USDT for USD by going through Tether Limited?


Though when you sell your USDT for euros or whatever on, say, Kraken, the Tether org must step in and buy to keep the price at US$1. So they effectively redeem, if not directly.


When is tether org involved in this?

From your example the only thing that happened is that now you hold x euros and Kraken holds x amount of USDT.

Unless Kraken goes to Tether org and asks "hey I got these USDTs, please give me the USD" no redemption has occurred.

That some people can in effect transact USDT for fiat via other mechanisms (like your example above) says nothing about Tether org backing or reserves.

Until you go to Tether org with your USDT's and ask them for USD you won't know if they have the reserves.


The reason tether stays at $1 is because the tether org or their delegates buy or sell to keep it there. They have to or the whole thing would become untethered as it were.


Plenty of companies redeem USDT for USD. I myself worked at a company that redeemed billions of dollars. The FUD on HN is approaching delusional levels.


And what does the company do with those USDT? If the money printing theory is correct, companies like yours would be the ones counteracting any sell-pressure on USDT. If the theory is not correct, companies like yours would see an at least equal buy-side demand for USDT, bringing real USD into the system.

I mean, does the company hold (increasingly?) large USDT positions on their balance sheet (and would therefore be the ones in the hole if Tether loses the peg), or is the company able to get actual dollars out of Tether inc. in exchange for those tokens?



That article was published in December 2019, when $4B of Tether had been issued in total, and covers an unspecified earlier period.

Are you sure that the $20B of Tether issued since then have an identical origin story?


https://tether.to/faqs/

> Unfortunately, Tether has decided to stop serving U.S. individual and corporate customers altogether. As of January 1, 2018, no issuance or redeeming services will be available to these users. Exceptions to these provisions may be made by Tether, in its sole discretion, for entities that are: Established or organized outside of the United States or its territorial or insular possessions; and, Eligible Contract Participants pursuant to U.S. law.

> An Eligible Contract Participant includes a corporation that has total assets exceeding $10,000,000 and is incorporated in a jurisdiction outside of the United States or its territories or insular possessions. This will be the principal basis upon which we will continue to do business with selected U.S. persons.

Basically, it looks like they only provide on and off ramps for large clients outside the US. Given their history, I am betting they might not want to reveal where they store their assets, and don't want to deal with US regulators (but not sure this is working).


So as source for the claim that you "can't redeem tether", you are linking to something that says you can redeem tether?


Tell you what: you give me $100, I'll give you back a cooldude coupon. I have sole discretion as to whether I'll redeem this coupon when you hand it back to me.

Question: Would you say you can redeem this coupon? It's possible I might decide to give you back your money, after all.


> something that says you can redeem tether

Tether, in its sole discretion, may choose to redeem tether, as long as you're a corporation with over $10M in total assets, incorporated outside the US.

I'd say that's not an ironclad guarantee that you can redeem tether.


When did I say I was providing a source for not being able to redeem tether? I'm just linking to information that I found useful when forming an opinion on the matter.


Yes, I worked at a company that had an entity outside the US that minted and redeemed tethers. This is how it works.

