
You are best off buying a “USB condom”, a USB 2.0/3.0 connector with the data pins removed.


The problem there is that the USB standard requires the initial power transfer to be low current, and only switches to the high-current mode on request, with the request being delivered over the data pins.


USB condoms include a tiny chip that does the negotiation to get full power. Sometimes they even work better than the phone itself at getting the charger to give up the juice.


For up to 1 amp, three resistors are enough (that's how power banks work) -- and those easily fit inside the connector.
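The trick behind those three resistors is a voltage divider that holds the data pins at fixed levels so the phone treats the port as a dedicated charger. A minimal sketch of the arithmetic, assuming the commonly documented Apple-style ~2.0 V signaling for 1 A (the voltages and resistor values here are illustrative, not from a datasheet):

    # Divider from VBUS (5 V) down to ~2.0 V on the data pins.
    VBUS = 5.0

    def divider(r_top, r_bottom, v_in=VBUS):
        """Output voltage of a simple two-resistor divider."""
        return v_in * r_bottom / (r_top + r_bottom)

    # e.g. 75 kOhm over 49.9 kOhm
    print(f"D+/D- level: {divider(75e3, 49.9e3):.2f} V")  # ~2.00 V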


An actual USB condom should also include a surge suppressor.

However, the closest thing to that I have found is simple battery packs, which generally remove the need for third-party midday charging.



Former Imgur engineer here who worked on the desktop site and helped on mobile when I could. A lot of the code that is loaded supports features that are used by a long tail of users [1]. However, they do serve the JavaScript with appropriate cache-control headers and serve it from Fastly's CDN, so analyzing a cold load is a bit misleading, to say the least. Moreover, as other commenters have mentioned, they optimized more for subsequent images than for the initial pageload (they'd prefetch the next N images).
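(For the skeptical: a quick way to check the cache headers yourself, assuming the Python requests library. The asset URL below is made up; substitute a script URL from a real page load.)

    import requests

    # Hypothetical asset URL -- grab a real one from an Imgur page load.
    resp = requests.head("https://s.imgur.com/desktop-assets/js/main.js",
                         allow_redirects=True)
    for name in ("cache-control", "age", "x-served-by", "via"):
        print(name, "=", resp.headers.get(name))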

Keep in mind Imgur is not a large company despite their high traffic, even at their peak of employees the engineering team was pretty small (probably about 12-15 people after series A), and the mobile web team in particular was a handful of people, with a handful of people on iOS and Android, and a handful of people on desktop/backend/API (where I worked).

That said, I think Alan does care about these things. I know at some point they did support NoScript and did care about the experience with JavaScript off (and had code to support uploading images and viewing images with no JavaScript at all). But it's hard to have it as your top priority when Reddit and Instagram are trying to eat your lunch.

I'm sympathetic with the page bloat problem and noscript and I do think more effort should be spent on optimizing this stuff, especially because bandwidth is much of their opex.

[1] Posting, voting, commenting, uploading, accounts, tagging, albums, search. There is even a hidden-ish feature to "auto-browse" in a slideshow-like manner which you can find if you crawl around the source code.


> A lot of the code that is loaded supports features that are used by a long tail of users

Bounce rate is 53% according to Alexa, so the majority of Imgur users never benefit from that caching and do hit a cold load. A user would probably have to be dozens of interactions deep before the initial loading cost stops mattering; more likely, there is no way to ever offset the overhead of all that bloat for any user.

Personally, I use an extension to fix imgur brokenness and extract images from imgur pages without loading anything else.


Or the signal indicates that most visitors are the result of an accidental link click.


Care to share the extension? I found this, which seems similar: https://greasyfork.org/en/scripts/390194-imgur-redirect


Which extension? Sounds very useful.


> Keep in mind Imgur is not a large company despite their high traffic, even at their peak of employees the engineering team was pretty small (probably about 12-15 people after series A)

12-15 engineers is small? I'd call that a full-size team, for any single project.

"Officially" Google makes an entire web browser with less than double that: https://www.quora.com/How-large-is-the-Google-Chrome-team/an...

Conway's Law in action here. If it were one person, it'd be one IMG tag. When you put 12-15 engineers to work making a social website for serving one IMG, you get this.


Chrome... definitely has more people working on it than that. It's absolutely ludicrous to claim Google only pays 23 people to work on Chrome. Perhaps that Quora answer is being pedantic and counting only the people who work on the closed-source, non-Chromium bits?

Regardless, 15 engineers to build a web app and mobile apps with all of the features mentioned, for a site that gets "lots" of views (not sure how many, but I'd guess we're counting in hundreds of millions of clicks a day at this point), seems pretty efficient to me?


Chrome is at the top of a giant tower of abstraction. Sure, millions of people built the tower. Imgur is even higher on the tower of abstraction, though. It should be much simpler. That's the whole point of the tower.

15 engineers doesn't seem especially efficient to me for what it does (or ought to do). Just because you get a lot of views doesn't mean the software itself has to be terribly sophisticated. It usually means you host a ton of user-generated content. Websites that are relatively straightforward hosts of user content, like Wikipedia [1] and Reddit, tend to have orders-of-magnitude fewer employees than other types of equally popular websites.

[1]: https://meta.wikimedia.org/wiki/Wikimedia_Foundation/Annual_...

If you assume WMF were about 20% engineers back in 2010-2011, as they are today according to their Staff page, that would mean they had about 16 engineers. Is Imgur today as complex as all Wikipedia properties in 2011? That seems rather inefficient to me.


Chrome has hundreds of people working on it, at least. Aside from that Quora link likely being wrong anyway, it's also 7 years out of date.


Even Safari/WebKit has like a hundred people working on it, and the Chrome team is much larger. Probably an order of magnitude more.


> so analyzing a cold load is a bit misleading to say the least. Moreover, as other commentators have mentioned, they optimized more for the subsequent images than the initial pageloads (they'd prefetch the next N images).

I've been opening Imgur links on my phone and watching them do nothing for like ten seconds, and I just assumed it was intentionally slow/broken so I'd install the app or something. I'm flabbergasted that it's actually the outcome of a deliberate optimization.


I'm sympathetic to what you say, but optimizing for following pages might be a tad optimistic if I leave after the first failed load attempt.


It also doesn't help privacy-conscious users who clear their cache regularly. I use Firefox Focus on mobile and have Firefox in permanent private browsing mode on desktop, so I always get a cold load.


12 to 15 developers can be significant, if they are experienced and have good leadership. Lacking experience and good leadership, you get what we're seeing here. I'm older, and have been coding since the '70s; smaller teams than this wrote many well-known operating systems and many major brand-name applications, and the majority of the classic video games were created by teams a quarter to half that size. Looks to me like Imgur has inept engineering management.


Serving less code is more important than serving lots of code from a CDN. All that JavaScript still needs to be parsed after downloading, and that's probably taking the bulk of the time on mobile devices.
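A back-of-envelope sketch of that parse cost, assuming a ~1.4 MB JS payload and the often-quoted ~1 MB/s parse throughput on a mid-range phone (both numbers are assumptions, not measurements):

    js_bytes = 1.4 * 1024 * 1024           # assumed payload: ~1.4 MB of JS
    parse_bytes_per_s = 1.0 * 1024 * 1024  # assumed mobile parse throughput
    print(f"parse time ~ {js_bytes / parse_bytes_per_s:.1f} s")  # ~1.4 s

And that cost is paid on every cold load, CDN or not.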


Earning money is important, so serving ads is important. The masses don't care about them including 20 frameworks, so serving less code is not important.


I've built 4 adtech companies. Page load speed is directly related to ad revenue. Faster pages produce more ad impressions, not fewer.


Imgur removed most of those features from the mobile site, apparently.[1]

[1] https://twitter.com/martinbean/status/1185605846933352450


We are considering 12-15 people a "small team" now? I've worked on large enterprise applications and I've never worked on a team that big!


Reddit is trying to eat their lunch?

Imgur sprang up as a fast image host for Reddit, then turned into a social media site - a competitor to Reddit.

In that process Imgur became shitty at their core function... hosting images!


Suppose you follow your argument to the conclusion, and no companies issue dividends. What is the point of owning a share of a company then?


People happily own Amazon/FB/Alphabet stock with no dividend payments.


Alphabet just announced a 25 billion dollar stock buyback.

Amazon and Facebook are growth stocks, so investors don't mind them holding some cash to make investments, fund acquisitions and whatnot (which is what everyone here seems to want). However, they'll both probably hit $100 billion in cash in the next 5 years, and I would expect dividends/buybacks to follow soon after, as that is more cash than anyone reasonably needs.


They don't (currently) issue dividends because the expectation is they will buy back their stock or will issue dividends in the future.

If a company never issued dividends and never bought back their own stock, then there would be no reason for investors to buy stocks in the first place. Most investors don't invest out of the goodness of their hearts, they invest to get a return on capital.


Berkshire Hathaway doesn’t pay dividends and most probably never will.


But they have bought back their own stock, and they are even considering a whopping $100 billion buyback [0].

[0]: https://qz.com/1611997/warren-buffett-hints-on-timing-of-big...


Yes, but Berkshire is a holding company, made up of a lot of companies that do pay dividends, are expected to eventually pay dividends, issue buybacks, or are themselves holding companies.


Do you think that all of those people are insane? Surely you must realize that shares of non-dividend paying companies are valuable because they are expected to eventually begin paying dividends.


Owning its assets, as in the case of a holding company.


Unfortunately Dropbox recently dropped support for certain Linux file systems like XFS, BTRFS, etc.

https://news.ycombinator.com/item?id=17732912


And lost me as a customer for it.

Effectively Dropbox is no longer the universal solution guaranteed to work everywhere, which was the reason I chose them before. They deliberately killed off their unique selling point.

These days I run Nextcloud on iOS, Windows and Linux and it doesn't complain about ZFS.


For what it's worth, Dropbox recently added support for those file systems: https://www.dropboxforum.com/t5/Desktop-client-builds/Beta-B...


Make that re-added after pointlessly, explicitly and intentionally breaking setups which people/customers relied on and which had been working flawlessly for years.

That lost them a shit-ton of goodwill. Too little, too late.


Article is from April 2019.


Are facebook still "digital gangsters"?



Yup

> Good enough for me!

Sounds a bit like:

https://m.youtube.com/watch?v=PETk8eBbfN0&t=2m0s


Probably graduated to gangsters, without the qualification.


If rollbacks are not safe then you have a change management problem.

If you have a good CM system, you should have a timeline of changes that you can correlate against incidents. Most incidents are caused by changes, so you can narrow down most incidents to a handful of changes.

Then the question is, if you have a handful of changes that you could roll back, and rollbacks are risk free, then does it make sense to delay rolling back any particular change until the root cause is understood?
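As a concrete sketch of that correlation step (the field names and the one-hour window are hypothetical, not from any particular CM tool):

    from datetime import datetime, timedelta

    changes = [
        {"id": "deploy-142", "at": datetime(2019, 7, 10, 16, 20)},
        {"id": "config-77",  "at": datetime(2019, 7, 10, 16, 32)},
        {"id": "deploy-143", "at": datetime(2019, 7, 10, 16, 41)},
    ]

    def suspects(changes, incident_start, window=timedelta(hours=1)):
        """Changes that landed shortly before the incident began."""
        return [c["id"] for c in changes
                if incident_start - window <= c["at"] <= incident_start]

    print(suspects(changes, datetime(2019, 7, 10, 16, 45)))
    # ['deploy-142', 'config-77', 'deploy-143'] -- roll these back first,
    # then root-cause at leisure.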


It's not always as simple as that. What if the problem was that something in a change didn't behave as specified and wound up writing important data in an incorrect but retrievable format? Rolling back might not recognise that data properly and could end up either modifying it further so the true data could no longer be retrieved or causing data loss elsewhere as a consequence.


In that case you would probably still roll back to prevent further data corruption and restore the corrupted records from backups.

There are certainly changes that cannot be rolled back such that the affected users are magically fixed, which is not what I am suggesting. In the context of mission critical systems, mitigation is usually strongly preferred. For example, the Google SRE book says the following:

> Your first response in a major outage may be to start troubleshooting and try to find a root cause as quickly as possible. Ignore that instinct!

> Instead, your course of action should be to make the system work as well as it can under the circumstances. This may entail emergency options, such as diverting traffic from a broken cluster to others that are still working, dropping traffic wholesale to prevent a cascading failure, or disabling subsystems to lighten the load. Stopping the bleeding should be your first priority; you aren’t helping your users if the system dies while you’re root-causing. [...] The highest priority is to resolve the issue at hand quickly.

I have seen too many incidents (one in the last 2 days in fact) that were prolonged because people dismissed blindly rolling back changes, merely because they thought the changes were not the root cause.


> In that case you would probably still roll back to prevent further data corruption and restore the corrupted records from backups.

OK, but then what if it's new data being stored in real time, so there isn't any previous backup with the data in the intended form? In this case, we're talking about Stripe, which is presumably processing a high volume of financial transactions even in just a few minutes. Obviously there is no good option if your choice is between preventing some or all of your new transactions or losing data about some of your previous transactions, but it doesn't seem unreasonable to do at least some cursory checking about whether you're about to cause the latter effect before you roll back.


I think you guys are considering this from the wrong angle...

Rollbacks should always be safe. They should always be automatically tested. So a software release should do a gradual rollout (i.e. 1, 10, 100, 1000 servers), but it should also restart a few servers with the old software version just to check that a rollback still works.

The rollout should fail if health checks (including checking business metrics like conversion rates) on the new release or old release fails.

If only the new release fails, a rollback should be initiated automatically.

If only the old release fails, the system is in a fragile but still working state for a human to decide what to do.
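Roughly, as a sketch in Python (deploy/healthy/rollback are hypothetical stubs standing in for real orchestration and monitoring calls; the stage sizes are the ones above):

    def deploy(version, servers):
        print(f"deploying {version} to {servers} server(s)")

    def healthy(version):
        # Real check: error rates, latency, business metrics like conversion.
        return True

    def rollback(version):
        print(f"rolling everything back to {version}")

    def rollout(new, old, stages=(1, 10, 100, 1000)):
        for count in stages:
            deploy(new, servers=count)
            deploy(old, servers=2)  # restart a few servers on the old build
                                    # to prove the rollback path still works
            if not healthy(new):
                rollback(old)       # only the new release fails: auto-revert
                return "rolled back"
            if not healthy(old):
                # fragile but still working: leave the decision to a human
                return "paused for a human decision"
        return "fully rolled out"

    print(rollout("v2", "v1"))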


This is one of those ideas that looks simple enough until you actually have to do it, and then you realise all the problems with it.

For example, in order to avoid any possibility of data loss at all using such a system, you need to continue running all of your transactions through the previous version of your system as well as the new version until you're happy that the performance of the new version is satisfactory. In the event of any divergence you probably need to keep the output of the previous version but also report the anomaly to whoever should investigate it.

But then if you're monitoring your production system, how do you make that decision about the performance of the new version being acceptable? If you're looking at metrics like conversion rates, you're going to need a certain amount of time to get a statistically significant result if anything has broken. Depending on your system and what constitutes a conversion, that might take seconds or it might take days. And you can only make a single change, which can therefore be rolled back to exactly the previous version without any confounding factors, during that whole time.
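To put rough numbers on "seconds or days": with the standard ~16*p*(1-p)/delta^2 sample-size rule of thumb (80% power, 5% significance; the baseline rate and detectable drop below are assumptions):

    p, delta = 0.02, 0.002   # assumed 2% conversion rate, detect a 0.2% drop
    n = 16 * p * (1 - p) / delta ** 2
    print(f"~{n:,.0f} visitors per arm")  # ~78,400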

And even if you provide a doubled-up set of resources to run new versions in parallel and you insist on only rolling out a single change to your entire system during a period of time that might last for days in case extended use demonstrates a problem that should trigger an automatic rollback, you're still only protecting yourself against problems that would show up in whatever metric(s) you chose to monitor. The real horror stories are very often the result of failure modes that no-one anticipated or tried to guard against.


I think the 80 / 20 rule applies here.


My point was that it's all but impossible for any rollback to be entirely risk-free in this sort of situation. If everything was understood well enough and if everything was working to spec well enough for that to happen, you wouldn't be in a situation where you had to decide whether to make a quick rollback in the first place.

I'm not saying that the decision won't be to do the rollback much of the time. I'm just saying it's unlikely to be entirely without risk and so there is a decision to be considered. Rolling back on autopilot is probably a bad idea no matter how good a change management process you might use, unless perhaps we're talking about some sort of automatic mechanism that could do so almost immediately, before there was enough time for significant amounts of data to be accumulated and then potentially lost by the rollback.


Because people make mistakes. Mistakes get fixed in post mortems, retros, best practices, etc. But mistakes will still happen.


Private letters have long been acknowledged as covered by the Fourth Amendment; that is, searching them requires at least a warrant based on probable cause. It’s changed over time whether searches of papers are per se unreasonable [1]. Consent can authorize agents to open mail, of course, but it might be an interesting question whether it’s an unconstitutional condition to require waiver in order to access mail services.

[1] https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?r...


You should return the data in multiple pages, for example, or exempt that query from the latency SLA.
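A toy illustration of the pagination option (the cursor scheme here is hypothetical):

    def page(items, cursor=0, limit=100):
        """Return one bounded chunk plus a cursor for the next request."""
        chunk = items[cursor:cursor + limit]
        nxt = cursor + limit if cursor + limit < len(items) else None
        return {"data": chunk, "next": nxt}

    first = page(list(range(1000)))
    second = page(list(range(1000)), cursor=first["next"])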


People have programs that scan GitHub uploads for AWS and other credentials accidentally uploaded by victims, and then spawn instances that mine crypto for the attacker.

https://www.theregister.co.uk/2015/01/06/dev_blunder_shows_g...
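The scanning half is simple enough to sketch -- AWS access key IDs follow the well-known AKIA... format (real scanners watch the public events feed rather than one string, of course):

    import re

    # AWS access key IDs: "AKIA" followed by 16 uppercase alphanumerics.
    AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

    # The sample key below is the documentation example from AWS itself.
    sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
    print(AWS_KEY_RE.findall(sample))  # ['AKIAIOSFODNN7EXAMPLE']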


Wow. I’m always amazed at the lengths people will go to in order to abuse anything and everything.

