
It certainly feels like one at times!


Not sure where to start with this one.

Can anybody briefly explain what a “rot economist” is? Is it meant to be capitalised “ROT economist” which stands for something? Has my browser not rendered the characters correctly or something?

This story of course includes the now almost mandatory attack on e2e encryption, which, according to this account, when coupled with the People You May Know feature is “a dangerous tool” - with little explanation as to the nature and size of the danger.

This part is interesting: “Worse still, accounts that were less than 15-days-old now made up 20 percent of all outgoing friend requests, and more than half of friend requests were sent by somebody who was making more than 50 of them a day…”

The explanation leaves a lot to be desired though:

“…heavily suggesting that Facebook was growing its platform’s “connections” through spam.”

Doesn’t this make perfect sense where a new user joins Facebook with no friends to start with, then in the first few weeks of using it finds all of their friends and adds them?

The whole thing reads like a grab bag of grievances rather than a forensic takedown, shame.


"Rot economists" appears to be a pejorative invented by this newsletter to describe an ecosystem of graphs-go-up growth-at-all-costs venture capital firms, management consultants, and corporate leaders who are pushing companies away from building things customers actually want: https://www.wheresyoured.at/the-rot-economy/

It's "rot" as in the companies/society are (metaphorically) rotting.


Off topic, but this is a real gripe of mine - analogue speedos that go all the way up to 240 km/h (~150 mph). I am only ever going to do half that speed, so why not just make the dial go to only 120 km/h (~80 mph) and give the needle (and me!) twice as much precision?


You would probably prefer a digital display that shows a number instead of a needle.


Some cars also use unevenly spaced tick marks on an analog dial. Nice in that it gives you the sweep needle and range of scale without overly compressing the range of most interest.

Analog tachometers for race cars do/did the same thing:

https://www.primusracingparts.com/Racetech-tachometer-0-8-rp...


Many gauges are typically designed to work best in the center 50% of their range.


Maybe limit the dial at 100 mph? Doing 80 on the Interstate near me will get you passed by the impatient. But the real question is: why do you need that much precision?


The report [0] itself even mentions Apple's abandonment of on-device scanning, as well as its plan to add end-to-end encryption to iCloud. eSafety has been curiously quiet about both of those announcements.

[0] https://www.esafety.gov.au/sites/default/files/2022-12/BOSE%...


I love ncdu and install it on all of my machines. But at the risk of sounding like a broken record - why isn’t its functionality baked into stock file managers on Windows and Linux?

Why can’t either of these systems do what the Mac has been able to do since the 90s, and display the recursive size of a directory in bytes in the file manager, allowing one to sort directories by recursive size?

I am not exaggerating to say this is the single biggest roadblock to my permanent migration to Linux!

(I would love nothing more than to hear I’m wrong and “you fool, Dolphin can do that with flag: foo”!)
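To make concrete what a file manager would have to do under the hood here, a minimal Python sketch (my own illustration, not any file manager's actual implementation) that computes recursive directory sizes and sorts subdirectories by them:

```python
import os

def recursive_size(path):
    """Total size in bytes of all regular files under path."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path, onerror=lambda e: None):
        for name in filenames:
            fp = os.path.join(dirpath, name)
            try:
                total += os.lstat(fp).st_size  # lstat: don't follow symlinks
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

def listing_by_size(directory="."):
    """Subdirectories of `directory`, largest first, Finder-style."""
    subdirs = [e for e in os.scandir(directory) if e.is_dir(follow_symlinks=False)]
    sized = [(e.name, recursive_size(e.path)) for e in subdirs]
    return sorted(sized, key=lambda pair: pair[1], reverse=True)
```

The full walk on every refresh is exactly why stock file managers shy away from this - which is where caching (as duc and others do) comes in.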


The Bash CLI is my file manager. So I've got ncdu built right in. Try it, you'll love it. I almost never touch the rodent.


Except that running "ls" doesn't show you directory content sizes, and "ncdu" requires the user to make a cup of tea first. The above poster is right in saying that having this built into the filesystem's metadata would be a huge win.


But `du -h -d1` does, though, or `tree --du -h`.


The time to scan a directory with a massive number of directories and files using ncdu can be long, and you don't get progressive stats.

I made jsdu to get progressive (and recursive) sizes.

I mostly only use jsdu on a few top-level directories, and use ncdu for the rest or after the stats are cached by jsdu.

You can install jsdu with "sudo npm i -g jsdu" or run it without installing with "npx jsdu"


duc!

Use a cronjob for `duc index`, then you can use `duc ui` to see the index. It doesn’t immediately update on change, so it’s not quite what you’re looking for, but it might be the closest thing.


Wow thank you for that! This whole thread is great - I've been missing a utility like this for ages but never took the time to go hunting for it.


If I ever need to know a directory size, du -sh foo/ is already muscle memory, and if OP needs it often he can alias it.


I assume the restriction is file system related. It's probably not always cheap to calculate the full size of a directory, especially if it's heavily nested.

Windows will tell you the size of a dir in the right click -> properties menu, but it takes a while to calculate for large/complicated directories.


WizTree (https://diskanalyzer.com/) on Windows seems to be faster than other tools I tried.


>Windows will tell you the size of a dir in the right click -> properties menu, but it takes a while to calculate for large/complicated directories.

Caja (and probably Nautilus and other Nautilus-based managers) does that as well. But although it can show the size in properties, arranging by size doesn't take it into consideration. (Rather, it just sorts directories by the number of items inside.)


Just lie to me a little bit. I wouldn't mind seeing quick cached approximations that assume I haven't changed the disk between reboots or recently moved huge files around (and the OS would know anyway).


> Why can’t either of these systems do what the Mac has been able to do since the 90s, and display the recursive size of a directory in bytes in the file manager

Many file managers can do that, although for obvious reasons it's built as a contextual action on a single directory rather than an always-on feature that would slow the filesystem down horribly by accessing it recursively on many levels. In Thunar (XFCE's file manager), for example, it's accessible from the contextual menu opened with the right mouse button on a directory name; other file managers work in a similar way.

I'm sure filesystems could be modified so that any write would automatically update a field in the containing directory and quickly propagate it to the upper levels, but that would imply many more write accesses, which on SSD media, for example, would do more harm than good.


Mac doesn't for me, for a folder it shows size as "--".


Open the view options (Cmd+J) and tick “Calculate all sizes”. It may take a second if you have some huge directories.


You fool, Dolphin's predecessor Konqueror had a directory view embedding the k4dirstat component! There you can sort by subtree percentage, subtree total (bytes) and amount of items, files, subdirs.

This broke some time in the past (KDE really jumped the shark) and is now available as stand-alone applications only: k4dirstat and filelight. The MIME type inode/directory is already associated with those, so you can run them from the context menu of a directory anywhere, including file managers.


I'm not sure what exactly you're asking for, but Dolphin shows me the size of a directory. You may have to right click and update it from time to time.


Almost every distro has a tool called "Disk Usage Analyser" that does exactly what you want. Very helpful when you start getting "no space left on device" errors.


ranger has this built in


Yeah, but I'm not aware of any distros that use it as the stock file manager.


> This is just completely wrong - banks only loan out money they have. If you go to a bank and get a loan the bank didn't just edit a database entry - they had that money. Banks loaning out money they don't have is extremely illegal.

I'm afraid to say that this is completely wrong. Commercial banks do in fact create money via lending! The 101 textbook explanation offered here is at best outdated and at worst misleadingly perpetuates a myth that simply must die.

The Bank of England's note on money creation in the modern economy [0] is the place to start - and more or less reflects the explanation in the article.

[0] https://www.bankofengland.co.uk/quarterly-bulletin/2014/q1/m...


Page 19 seems to show that the newly generated money has to be backed by somebody's deposit and not just minted, though. In any situation where the loaned money doesn't stay at the lending bank, the BoE paper calls out that the bank needs to attract deposits from wherever you're sending the loan.

So, you buy a house using, say, Chase, and the seller banks with Wells Fargo. Chase "mints" the $1M for your loan but then solicits $1M of deposits from Wells Fargo. Why does it matter whether the $1M for your loan came from Chase depositors or Wells Fargo depositors? The point is that it's backed 1:1 by cash that came from a person, which the BoE example shows.


> the newly generated money has to be backed by somebody's deposit

Ok. Suppose I am a bank, and pg deposits $10. Then I lend you, lesourac, $9. This $9 is "backed" by pg's deposit. But I, the bank, only hold $1, and you, lesourac, hold $9.

In your head, you hold $9. In pg's head, he has $10 of assets. There are $19 of imagined assets running around, even though the "real" assets are only $10.

It's all good until pg pulls his $10 sooner than I expected, or if you, lesourac, declare bankruptcy, default on your loan, and are unable to pay back those $9. This is why bankruptcies are deflationary.
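The arithmetic above fits in a few lines. A toy sketch reusing the example's numbers (this is an illustration of the claim, not how any real bank's ledger works):

```python
def toy_bank_example(deposit, loan):
    """One depositor (pg), one borrower (lesourac), one bank."""
    bank_reserves = deposit - loan   # cash the bank still holds: $1
    depositor_claim = deposit        # pg still 'has' his full $10
    borrower_cash = loan             # lesourac walks out with $9
    # Claims in circulation exceed the base money that came in.
    perceived_money = depositor_claim + borrower_cash
    return bank_reserves, perceived_money

reserves, perceived = toy_bank_example(deposit=10, loan=9)
# reserves == 1, perceived == 19: $19 of claims against $10 of cash
```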

We could have a safer banking system if loans were from individual to individual, possibly mediated by a bank, and the lender fully accepted the risk of default.


I think you've missed the argument.

Nobody is arguing that the Money Multiplier [1] doesn't exist. Nobody is arguing that a bank run won't cause loss of deposit (ignoring FDIC).

The argument is whether a bank takes in say $10 of deposits to then loan out $10 OR if a bank "generates" $10 at-will to make a loan of $10.

[1]: https://en.wikipedia.org/wiki/Money_multiplier
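For reference, the textbook money-multiplier arithmetic from that link: with a reserve ratio r, an initial deposit D supports at most D/r of deposits system-wide, via the geometric series D + D(1-r) + D(1-r)² + …. A quick sketch:

```python
def max_deposits(initial_deposit, reserve_ratio, rounds=10_000):
    """Sum the re-deposit geometric series; converges to D / r."""
    total, tranche = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += tranche
        tranche *= (1.0 - reserve_ratio)  # each round, r is held back as reserves
    return total

# With r = 10%, $10 of deposits can support up to $100 system-wide.
print(round(max_deposits(10, 0.10)))  # 100
```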


Oh interesting. That seems academic given the reserve requirements set by the Fed under Regulation D, which basically say every bank must hold $x in reserve for every $10 loaned out. Thus, the bank already has to have that $10 deposit before it's able to make the loan, but as long as those reserve requirements are met, it can make all the loans it wants. Given the size of a commercial bank, with deposits and withdrawals going on constantly across many customers, they run with some amount of margin, but basically they'll be able to make a good number of loans before they're anywhere close to their reserve limit. So to answer your question... sorta?

https://www.federalreserve.gov/monetarypolicy/reservereq.htm


I think you've also missed the argument.

So, the side I don't particularly like goes like this. Alice deposits $10 into Bob's Bank. Bob's bank now has $10 of liabilities (Alice's account) and $10 of assets (Alice's ex-Money). Charlie _wants_ a loan of $10. Bob records an increase of assets by $10 so now the bank has $20 in assets and records a corresponding increase in liabilities (to balance out the minted money) of $10 so now the bank has $20 in liabilities. Then the bank gives Charlie this _new_ money. In essence, the bank is not giving out depositor's money for loan but instead new money. (See the Bank of England (BOE) paper for details showing this [1]).

My argument is that (1) the same BOE paper shows that whenever that minted money needs to leave the minting bank's computer systems, an equal amount of money from somewhere (either that bank or the receiving bank) will be destroyed, and (2) that money will commonly move between banks. Therefore the fact that money is minted is irrelevant, because it's subsequently quickly destroyed.

Side note: it makes total sense to me that a bank would rather mint money in a lump sum exactly equal to the amount needed for a loan than figure out what fractions of the loan should come from which depositors. It's just practical.

[1]: https://www.bankofengland.co.uk/quarterly-bulletin/2014/q1/m...
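A toy double-entry sketch of points (1) and (2) above, reusing Bob's Bank, Alice, and Charlie from the example (the payee "Dana" and her bank are hypothetical, and this is a simplification of the BOE paper's mechanics, not a reproduction of them): the loan creates a deposit at the minting bank, and when the borrower pays someone at another bank, reserves move and the minting bank's newly created deposit is extinguished.

```python
class ToyBank:
    def __init__(self, name, reserves):
        self.name = name
        self.reserves = reserves   # central-bank money (asset)
        self.deposits = {}         # customer balances (liabilities)
        self.loans = {}            # loans outstanding (assets)

    def make_loan(self, borrower, amount):
        # Loan creation: assets (loan) and liabilities (deposit) grow together.
        self.loans[borrower] = self.loans.get(borrower, 0) + amount
        self.deposits[borrower] = self.deposits.get(borrower, 0) + amount

def settle(payer_bank, payer, payee_bank, payee, amount):
    # Interbank payment: the payer's deposit at the minting bank is
    # destroyed, reserves move, and a deposit appears at the payee's bank.
    payer_bank.deposits[payer] -= amount
    payer_bank.reserves -= amount
    payee_bank.reserves += amount
    payee_bank.deposits[payee] = payee_bank.deposits.get(payee, 0) + amount

bobs = ToyBank("Bob's Bank", reserves=10)   # Alice's $10 already on deposit
other = ToyBank("Other Bank", reserves=0)
bobs.deposits["Alice"] = 10

bobs.make_loan("Charlie", 10)               # $10 of new deposit money exists
settle(bobs, "Charlie", other, "Dana", 10)  # Charlie pays Dana at Other Bank
# Charlie's minted deposit at Bob's Bank is now gone; Dana holds $10 at Other.
```

Note what the sketch leaves open: the deposit at the minting bank is destroyed on settlement, but an equal deposit appears at the receiving bank, which is exactly the point the two sides here are arguing over.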


This link sure does do the rounds! It's practically a meme in itself these days.

But it's only part of the picture, for instance, large banks have capital requirements - https://www.federalreserve.gov/supervisionreg/large-bank-cap...


> The argument is whether a bank takes in say $10 of deposits to then loan out $10 OR if a bank "generates" $10 at-will to make a loan of $10.

In some sense, the loans are real and backed by real money, but it's your balance that is generated from thin air when the loan is issued.


Do you consider the value in your yield bearing savings account to "not be money" in the sense that your stocks "aren't money"? If the median American does, then your argument holds water. I don't think this is the case, though.


The BoE considers savings accounts to be part of the 97% of money in an economy. Although I'm still not too sure if you're on the side of a bank using a depositor's money or minted money for a loan.

> Of the two types of broad money, bank deposits make up the vast majority — 97% of the amount currently in circulation.


We had a lot of smoke-tainted “experimental” wines out of Australian vineyards as 2019 and 2020 releases - some were really interesting, others simply dreadful (with winemakers just glad you were happy to take a punt and get rid of it for them).

Not sure repeating the bushfires that created those releases is a long term strategy though…


What a lot of the comments in here are bumping up against with regards to “fairness” is the fact that willingness to pay is a function of both preference intensity and ability to pay.

The way to make sure TS tickets go to the biggest TS fans is to remove the influence of ability to pay and sort only by preference intensity.

Solving wealth inequality fixes this problem entirely - among others!

Edit: like all good economists, I leave solving that particular bit of the problem as an exercise to the reader.


In perfectly competitive markets, producers are price takers. That includes at a wholesale distribution level.

Being able to pick and choose your customers (and apply conditions to any sale that favour Apple and disadvantage the customer) is a perfect indication of the level of market power that apple possesses.

Most businesses in competitive industries don’t get to do that.


Tesla?


This from the EFF seems to create a bit of a circuit split with Electronic Frontiers Australia [0] - though I hasten to add the two are not affiliated.

My only comment is that this stuff is hard. Really hard. Consider that a two line HN comment may not be able to fit all of the relevant considerations and case-law in.

I recognise the irony.

[0] https://www.efa.org.au/2022/09/05/efa-statement-regarding-cl...


It really looks like the only motivation for Cloudflare stopping service to a client is whether it personally angers the CEO.

I think we've had three letters already where he's basically saying "gee I really hate doing that, but I'm doing it anyway".

