
We pin all of our npm dependencies and upgrade them via dependabot. Dependabot links to the GitHub or GitLab release for each dependency bump, and I typically skim / scan every single commit to each dependency. But there's no guarantee that what's on GH matches what is uploaded to npm (which is what happened in this case; there are no malicious commits).

Does anyone know of a good way to verify that a npm release matches what's on GH? Version controlling the entirety of node_modules/ and running untrusted updates in a sandbox would work in theory, but in practice many packages contain minified js which makes the diffs between version bumps unreadable.
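For what it's worth, the comparison half of that workflow can be sketched roughly like this (this is not a vetted tool, just the general idea): unpack the result of `npm pack <pkg>@<version>` into one directory, `git clone --depth 1 --branch v<version>` the repo into another, and diff the file hashes:

```python
import hashlib
import os

def file_hashes(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def diff_trees(npm_dir, git_dir):
    """Compare an unpacked npm tarball against a git checkout of the tag."""
    a, b = file_hashes(npm_dir), file_hashes(git_dir)
    return {
        "only_in_npm": sorted(a.keys() - b.keys()),
        "only_in_git": sorted(b.keys() - a.keys()),
        "changed": sorted(k for k in a.keys() & b.keys() if a[k] != b[k]),
    }
```

The catch is that many packages run a build step before publishing, so the tarball legitimately differs from the repo; to get a clean diff you'd have to run the package's own build first and compare against its output.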


Skip the nonsense and just check your dependencies in directly to your repo. The separation has no real world gains for developers and doesn't serve anyone except the host of your source repo. As it turns out most people's repo host is also the operator of the package registry they're using, so there aren't even theoretical gains for them, either.

Doing it this way doesn't preclude the ability to upgrade your dependencies, it _completely_ sidesteps the intentional or unintentional desync between a dependency's source and its releases, it means people have to go out of their way to get a deployment that isn't reproducible, and in 4 years when your project has rotted and someone tries to stand it up again even if just temporarily to effect some long-term migration, then they aren't going to run into problems because the packages and package manager changed out from beneath them. I run into this crap all the time to the point that people who claim it isn't a problem I know have to be lying.


> I run into this crap all the time to the point that people who claim it isn't a problem I know have to be lying.

I don't think that's right.

Just because someone denies a problem exists—a problem that you know for a fact, with 100% certainty exists—doesn't mean they're lying.

It may mean you know they are wrong, but wrong != lying, and it's a good thing to keep in mind.

If you have external reasons to believe that the person you're talking to should or does know better, then it's fair to say they are lying.

But, in general, if you accuse someone who is simply wrong of lying, you're going to immediately shut down any productive conversation that you could otherwise have.


People don't do this because `node_modules` can be absolutely massive (hundreds of megabytes or more), and a lot of people don't like (for various reasons) such large repositories.

There is a deprecated project at my work that committed the entire yarn offline cache to the repo. At least those were gzipped, but the repo still had a copy of every version of every dependency.

It isn't a good long term solution unless you really don't care at all about disk space or bandwidth (which you may or may not).


A middle ground that I've seen deployed is corporate node mirrors with whitelisted modules. Then individual repos can just point to the corporate repo. Same thing for jars, python packages, etc.


And build pipelines that fail due to the size of the repo.


Committing node_modules and reproducibility are somewhat orthogonal, though.

You can get reasonable degrees of reproducibility by choosing reasonable tools: Yarn lets you commit their binary and run that in the specified repo regardless of which version you have installed globally. Rush also allows you to enforce package manager versions. Bazel/rules_nodejs goes a step further and lets you pin node version per repo in addition to the package manager. Bazel+Bazelisk for version management of Bazel itself provides a very hermetic setup.
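As a concrete example of the Yarn approach: pinning amounts to committing the release file and a single config line (the filename below is just whichever version you've committed):

```yaml
# .yarnrc.yml (Yarn 2+) — run the committed release regardless of
# which Yarn version is installed globally
yarnPath: .yarn/releases/yarn-3.2.0.cjs
```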

Packages themselves are immutable as long as you don't blow away your lockfile. I used to occasionally run into very nasty non-reproducibility issues with ancient packages using npm shrinkwrap (or worse, nothing at all), but since npm/yarn got lockfiles, these problems largely went away.

These days, the non-hermeticity stuff that really grinds my gears is the very low plumbing stuff. On macOS, node-gyp uses Xcode tooling to compile C++ modules, so stuff breaks with macOS upgrades. I'm hoping someone can come up with some zig-based sanity here.

As for committing node_modules, there are pros and cons. Google famously does this at scale and my understanding is that they had to invest in custom tooling because upgrades and auditing were a nightmare otherwise. We briefly considered it at some point at work too but the version control noise was too much. At work, we've looked into committing tarballs (we're using yarn 3 now) but that also poses some challenges (our setup isn't quite able to deal w/ a large number of large blobs, and there are legal/auditing vs git performance trade-off concerns surrounding deletion of files from version control history).


Ill-advised tool adoption is exactly the problem I'm aiming to get people to wake up and say "no" to. You need only one version control system, not one reliable one plus one flaky one. Use the reliable one, and stop with the buzzword bandwagon, which is going to be a completely different landscape in 4 years.

> Packages themselves are immutable as long as you don't blow away your lockfile

Lockfiles mean nothing if it's not my project. "I just cloned a 4 year old repo and `npm install` is failing" is a ridiculous problem to have to deal with (which to repeat, is something that happens all the time whether people are willing to acknowledge it or not). This has to be addressed by making it part of the culture, which is where me telling you to commit your dependencies comes from.


This problem does happen, but committing node_modules won't fix it. Assuming the npm registry doesn't disappear, npm will download the exact same files you would have committed to your repo. Wherever those files came from, 4 years later you upgraded your OS, and now the install step will fail (in my experience usually because of node-gyp).

Unless you were talking about committing compiled binaries as well, in which case every contributor must be running the same arch/OS/C++ stdlib version/etc. M1 laptops didn't exist 4 years ago. If I'm using an M1 today, how can I stand up this 4 year old app?

The real problem is reproducible builds, and that's not something git can solve.


> Assuming the npm registry doesn't disappear, npm will download the exact same files you would have committed to your repo

I think that’s where you missed what the gp is saying.

If everything is checked in to source control, npm will have nothing to download. You won’t need to call npm install at all, and if you do it will just return immediately saying everything is good to go already.

The workflow for devs grabbing a 10 year old project is to check it out then npm start it.


You are probably not running the same OS/node/python version as you were 10 years ago. If you were to try this in real life, you'd get an error like this one. https://stackoverflow.com/questions/68479416/upgrading-node-....

The error helpfully suggests you:

>Run `npm rebuild node-sass` to download the binding for your current environment.

Download bindings? Now you're right back where you started. https://cdn140.picsart.com/300640394082201.jpg

Of course if you keep a 10 year old laptop in a closet running Ubuntu 10.04 and Node 0.4, and never update it through the years, then your suggestion will work. But that workflow isn't for me.


> which to repeat, is something that happens all the time

Are you sure this isn't just a problem in your organization? As I qualified, the issue you're describing was a real pain maybe two or three years ago, but not anymore IME. For context, my day job currently involves project migrations into a monorepo (we're talking several hundred packages here) and non-reproducibility due to missing lockfiles is just not an issue these days for me.

As the other commenter mentioned, node-gyp is the main culprit of non-reproducibility nowadays, and committing deps doesn't really solve that, precisely because you often cannot commit arch-specific binaries, lest your CI blow up trying to run mac binaries.


> Are you sure this isn't just a problem in your organization?

I'm really struggling to understand the kind of confusion that would be necessary in order for this question to make sense.

Why do you suspect that this might be a problem "in [my] organization"? How could it even be? When I do a random walk through projects on the weekend, and my sights land on one where `npm install` ends up failing because GitHub is returning 404 for a dependency, what does how things are done in my organization have to do with that?

I get the dreadful feeling that despite my saying "[That] means nothing if it's not my project", you're unable to understand the scope of the discussion. When people caution their loved ones about the risk of being the victim of a drunk driving accident on New Years Eve, it doesn't suffice to say, "I won't drink and drive, so that means I won't be involved in a drunk driving accident." The way we interact with the whole rest of the world and the way it interacts with us is what's important. I'm not concerned about projects under my control failing.

> non-reproducibility due to missing lockfiles is just not an issue

Why do you think that's what we're talking about? That's not what we're talking about. (I didn't even say anything about lockfiles until you brought it up.) You're not seeing the problem, because you're insisting on trying to understand it through a peephole.


I mean, of course I'm going to see this through the lens of my personal experience, which is that nasty non-reproducibility issues usually only happen when someone takes over some internal project that had been sitting in a closet for years and the original owner is no longer at the company. Stumbling upon reproducibility issues in 4 year old projects on GitHub is just not something that happens to me (and I have contributed to projects where, say, Travis CI had been broken in the master branch for node 0.10 or whatever). Getting 404s on dependencies is something I can't say I've experienced either, unless we're talking about very narrow cases like consuming hijacked versions of packages that were since unpublished, or possibly a different stack that uses git commits for package management (say, C) - and even then, that's not something I've run into (I've messed around w/ C, zig and go projects, if it matters). I don't think it's a matter of me having a narrow perspective, but maybe you could enlighten me.

As I mentioned, my experience involves seeing literally hundreds of packages, many of which were in a context where code rot is more likely to happen (because people typically don't maintain stuff after they leave a company and big tech attrition rate is high, and my company specifically had a huge NIH era). My negative OSS experience has mostly been that package owners abandon projects and don't even respond to github issues in the first place. I wouldn't be in a position to dictate that they should commit node_modules in that case.

Maybe you could give me an example of the types of projects you're talking about? I'm legitimately curious.


> I don't think it's a matter of me having a narrow perspective, but maybe you could enlighten me.

I have to think it is, because the "shape" of your responses here has been advice about things that I/we can do to keep my/our own projects (e.g. mission critical or corporate funded stuff being used in the line of business) rolling, and has completely neglected the injured-in-collision-with-other-intoxicated-driver aspect.

Again, I _have_ to think that your lack of contact with these problems has something to do with the particulars of your situation and the narrow patterns that your experience falls within. Of the projects that match the criteria, easily 40% of them are the type I described. (And, please, no pawning it off with a response like "must be a bad configuration on your machine"; these are consistent observations over the bigger part of a decade across many different systems on different machines. It's endemic, not something that can be explained away with the must-be-your-org/must-be-your-installation handwaving.)

> code rot is more likely [...] My negative OSS experience has mostly been that package owners abandon projects and don't even respond to github issues

Sure, but the existence of other problems doesn't negate the existence of this class of problems.

> I wouldn't be in a position to dictate that they should commit node_modules in that case.

Which is why I mentioned that this needs to be baked in to the culture. As it stands, the culture is to discourage simple, sensible solutions and prefers throwing even more overengineered tooling that ends up creating new problems of its own and only halfway solves a fraction of the original ones. (Why? Probably because NodeJS/NPM programmers seem to associate solutions that look simple as being too easy, too amateurish—probably because of how often other communities shit on JS. So NPMers always favor the option that looks like Real Serious Business because it either approaches the problem by heaping more LOC somewhere or—even better—it involves tooling that looks like it must have taken a Grown Up to come up with.)

> Maybe you could give me an example of the types of projects you're talking about? I'm legitimately curious.

Sure, I don't even have to reach since as I said this stuff happens constantly. In this case, at the time of writing the comments in question, it was literally the last project I attempted: check out the Web X-Ray repo <https://github.com/mozilla/goggles.mozilla.org/>.

This is conceptually and architecturally a very simple project, with even simpler engineering requirements, and yet trying to enter the time capsule and resurrect it will bring you to your knees on the zeroeth step of just trying to fetch the requisite packages. That's to say nothing of the (very legitimate) issue involving the expectation that even with a successful completion of `npm install` I'd still generally expect one or more packages for any given project to be broken and in need of being hacked in order to get working again, owing to platform changes. (Several other commenters have brought this up, but, bizarrely, they do so as if it's a retort to the point that I'm making and not as if they're actually helping make my case for me... The messy reasoning involved there is pretty odd.)


> check out the Web X-Ray repo <https://github.com/mozilla/goggles.mozilla.org/>.

Thanks for the example! Peeking a bit under the hood, it appears to be due to transitive dependencies referencing github urls (and transient ones at that) instead of semver, which admittedly is neither standard nor good practice...

FWIW, simply removing `"grunt-contrib-jshint": "~0.4.3",` from package.json and related jshint-related code from Gruntfile was sufficient to get `npm install` to complete successfully. The debugging just took me a few minutes grepping package-lock.json for the 404 URL in question (https://github.com/ariya/esprima/tarball/master) and tracing that back to a top-level dependency via recursively grepping for dependent packages. I imagine that upgrading relevant dependencies might also do the trick, seeing as jshint no longer depends on esprima[0]. A yarn resolution might also work.
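The reverse-dependency trace described above can be mechanized; a rough sketch of the core step (the lockfile parsing itself is omitted, since the format differs between npm versions — assume you've already built a map of each package to its direct dependencies):

```python
def reverse_reachable(deps, target):
    """Given {package: [direct dependencies]}, return every package that
    depends on `target`, directly or transitively."""
    dependents = set()
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for pkg, ds in deps.items():
            if pkg in dependents:
                continue
            if target in ds or dependents & set(ds):
                dependents.add(pkg)
                changed = True
    return dependents
```

Running this against the dependency graph from package-lock.json with the 404ing package as the target surfaces the top-level culprits to remove or upgrade.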

I'm not sure how representative this particular case is to the sort of issues you run into, but I'll tell you that reproducibility issues can get a lot worse in ways that committing deps doesn't help (for example, issues like this one[1] are downright nasty).

But assuming that installation in your link just happens to have a simple fix and that others are not as forgiving, can you clarify how is committing node_modules supposed to help if you're saying you can't even get it to a working state in the first place? Do you own the repo in order to be able to make the change? Or are you mostly just saying that hindsight is 20-20?

[0] https://github.com/jshint/jshint/blob/master/package.json#L4...

[1] https://github.com/node-ffi-napi/node-ffi-napi/issues/97


I don't understand your questions.

My message is that for a very large number of real world scenarios the value proposition of doing things the NPM way does not result in a Pareto improvement, despite conventional wisdom suggesting otherwise.

I also don't understand your motivation for posting an explanation of the grunt-related problem in the Web X-Ray repo. It reminds me of running into a bug and then going out of my way to track down the relevant issue and attach a test case crafted to demonstrate the bug in isolation, only to have people chime in with recommendations about what changes to make to the test case in order to not trigger the bug. (Gee, thanks...)

And to reiterate the message at the end of my last comment: the rationale of trying to point at how badly mainstream development manages to screw up other stuff, too, is without merit.


Personally, I see code as a fluid entity. If me spending a few minutes to find a way to unblock something that you claimed to "bring you to your knees on the zeroeth step" is a waste of time, then I guess you and I just have different ideas of what software is. For me, I don't see much value in simply putting up with some barely working codebase out of some sense of historical puritanism; if I'm diving into a codebase, the goal is to improve it, and if jshint or some other subsystem is hopelessly broken for whatever reason, it may well get thrown out in favor of something that actually works.

You may disagree that the way NPM does things works well enough to be widely adopted (and dare I say, liked by many), or that true reproducibility is a harder problem than merely committing files, but by and large, the ecosystem does hum along reasonably well. Personally, I only deal with NPM woes because others find value in it, not because I inherently think it's the best thing since sliced bread (in fact, there are plenty of aspects of NPM that I find absolutely atrocious). My actual personal preference is arguably even less popular than yours: to avoid dependencies wherever possible in the first place, and relentless leveraging of specifications/standards.


> My actual personal preference is arguably even less popular than yours: to avoid dependencies wherever possible in the first place

You lack the relevant facts to even be able to speculate about this. You haven't even done an adequate job grappling with the details presented here without the need for assumptions.

> If me spending a few minutes to find a way to unblock something

Imagine you have a coworker who frequently leaves unwashed dishes in the sink. You wash and put them away, but it happens enough that you decide to bring it up. Now imagine that when you do bring it up, your coworker lectures you, condescendingly and at length, about the steps you can take to "unblock" the dirty dishes problem (by washing them), as if there's some missing piece involving not knowing how and that that's the point of the discussion, instead of the fact that this (entirely avoidable) problem is occurring in the first place.

You're not unblocking anything, nor would this be the place to do so even if you were. Under discussion is the phenomenon that, for all the attention and energy that has gone into NPM, frequently old projects fail on some very basic preliminary step: just being able to complete the retrieval of the dependencies, i.e. the one thing that `npm install` is supposed to do. You voiced your skepticism and then with every response to this thread moved the goalposts further and further away, starting out generally dismissive that the problem exists, then wrapping up by offering your magnanimity to what you choose to see as someone struggling.

There is no struggle here, nor are any of the necessary conditions that would make your most recent comments relevant satisfied. Like the earlier anecdote about the annoyance of dealing with the type of person who jumps in to offer pointers about how to change a failing test case so that it no longer reveals the defect it was created to isolate, these are responses that fail at a fundamental level to understand the purpose and greater context that they're ostensibly trying to contribute to.

If `npm install` is broken on a Thursday and annoys the person who ends up forced to work through that problem, and then you show up on a Saturday with an explanation after-the-fact about what to do in situations like the one that happened on Thursday, what possible purpose does that serve? At best, it makes for an unfocused discussion, threatening to confuse onlookers and participants alike about what exactly the subject is (whether the phenomenon exists vs morphing this into a StackOverflow question where there's an opportunity to come by with the winning answer and subtly assert pecking order via experience and expertise). At worst, it does that and also comes across as incredibly insulting. And frankly, that is the case here.

> You may disagree that the way NPM does things works well enough to be widely adopted (and dare I say, liked by many)

By your count, how many times have you moved the goalposts during this, and how many more times do you plan on moving them further?

> I don't see much value in simply putting up with some barely working codebase out of some sense of historical puritanism

Major irony in relation to the comment above, and given the circumstances where this discussion started. Shocking advice, apparently: let your source control system manage your source code, and examine the facts about whether late-fetching dependencies the NPM way is worth the cost of putting up with the downsides I brought up and the recurring security problems that lead to writeups like the article that was originally posted here.


> At work, we've looked into committing tarballs (we're using yarn 3 now) but that also poses some challenges (our setup isn't quite able to deal w/ a large number of large blobs, and there are legal/auditing vs git performance trade-off concerns surrounding deletion of files from version control history

With Git LFS the performance hit should be relatively minimal (for instance, if you delete the file, it won't be downloaded on each new clone anyway).


And what happens when you need to update those dependencies?

Software is a living beast, you can't keep it alive on 4yr-old dependencies. In fact, you've cursed it with unpatched bugs and security issues.

Yes, keep a separate repo, but also keep it updated. The best approach is to maintain a lag between your packages and upstream so issues like these are hopefully detected & corrected before you update.


> And what happens when you need to update those dependencies?

Then you update them just like you do otherwise, like I already said is possible.

> you can't keep it alive on 4yr-old dependencies. In fact, you've cursed it with unpatched bugs and security issues

This is misdirection. No one is arguing for the bad thing you're trying to bring up.

Commit your dependencies.


Let's say that the day you update your dependencies is after this malware was injected but before it was noticed.

Now you have malware in your local repo :(

Having a local repo does not prevent malware. Your exposure to risk is less because you update your dependencies less frequently, but the risk still exists and needs to be managed. There's no silver bullet.


This is more misdirection. By no means am I arguing that if you're doing a thousand stupid things and then start checking in a copy of your dependencies, that you're magically good. _Yes_ you're still gonna need to sort yourself out re the 999 other dumb things.


Sounds like a great way to end up with what $lastco called "super builds" - massive repos with ridiculous amounts of cmake code to get the thing to compile somewhat reliably. It was a rite of passage for new hires to have them spend a week just trying to get it to compile.

All this does is concentrate what would be occasionally pruning and tending to dependencies to periodic massive deprecation "parties" when stuff flat out no longer works and deadlines are looming.


That’s the whole deal with yarn 2 isn’t it? With their plug’n’play it becomes feasible to actually vendor your npm deps, since instead of thousands upon thousands (upon thousands) of files you only check in a hundred or so zip files, which git handles much more gracefully.

I was skeptical at first as it all seemed like way too much hoops to jump through, but the more I think about it the more it feels that it’s worth it.


> Skip the nonsense and just check your dependencies in directly to your repo.

Haha, no.

That would increase the size of the repository greatly. Ideally, you would want a local proxy where the dependencies are downloaded and managed, or to tarball node_modules and save it in some artifacts manager, server, or s3 bucket.


What's the problem with a big repository? The files still need to be downloaded from somewhere. It's mostly just text anyway so no big blobs which is usually what causes git to choke.

For that one-off occasion when you are on 3G, have a new computer without an older clone, and need to edit files without compiling the project (which would have required npm install anyway), there is git partial clone.

Does npm have a shared cache if you have several projects using the same dependencies?


>Does npm have a shared cache if you have several projects using the same dependencies?

pnpm does, that's why I'm using it for everything. It's saving me many gigabytes of precious SSD space.

https://github.com/pnpm/pnpm


Now anything with native bindings is broken if you so much as sneeze.


> Does anyone know of a good way to verify that a npm release matches what's on GH?

I'm not aware of any way to do this, and it's a huge problem. It would be great if they introduced a Docker Hub verified/automated builds[0]-type thing for open source projects. I think that would be the only way we could be certain what we're seeing on GitHub is what we're running.

Honestly it’s hard to believe we all just run unverifiable, untrustable code. At the very least, NPM could require package signing, so we'd know the package came from the developer. But really NPM needs to build the package from GitHub source. Node is not a toy anymore, and hasn't been for some time—or is it?

[0] https://docs.docker.com/docker-hub/builds/


This is ~solvable at a third party level. Nearly everything on NPM (the host) is MIT licensed or similar. When packages are published, run their publish lifecycle and compare to the package that’s actually published.

I don’t have the resources or bandwidth to do this, but it’s pretty straightforward +- weird publishing setups.

Edit: of course this doesn’t apply to private repositories but… you’re in a whole different world of trust at that point.


I started working on this exact problem a few years ago. Didn't get far, though, I think I stopped because I assumed there just wouldn't be any real interest.


I couldn't find the code, so I just started over. Haven't hosted it anywhere yet.

https://github.com/connorjclark/npm-package-repro


Awesome! Thank you.


Doesn't npm have a facility to tell it to download releases directly from source? Most package managers have in one form or the other, but I'm not very familiar with npm.

To be honest I'm not sure if npm (the service, not the tool) and similar services really add all that much value. The only potential downside I see is that repos can disappear, but then again, npm packages can also disappear. I'd rather just fetch directly from the source.

This is how Go does it and I find it works quite well. It does have the GOPROXY now, but that's just an automatic cache managed by the Go team (not something where you can "login" or anything like that), so that already reduces the risk, and it's also quite easy to outright bypass by setting GOPROXY=direct.


Deno (https://deno.land/), another runtime based on v8, has a system similar to Go, with local and remote imports https://deno.land/manual@v1.11.5/examples/import_export.


You can’t really fetch from git because for the majority of packages there is a non-standard build step that packages do not consistently specify in package.json, if at all. Packages on NPM are just tarballs uploaded by the author. Furthermore, what about transitive dependencies?


> what about transitive dependencies?

What about them?

As for unspecified build steps: this seems like a solvable problem. I would just submit a patch.


Fetching from git is possible. The downsides are lack of semver, having to clone the full history of the repo, and having to clone the complete repo including files not needed for just using the lib, eg preprocessors, docs and tests.


I assume people use tags and such no? That gives you versions and you can just fetch it at a specific tag. Either way, this is very much a solvable problem.

A few docs and tests doesn't strike me as much of an issue.


Renovate gives you links to diffs on the published npm package: https://app.renovatebot.com/package-diff?name=hookem&from=1....

It's also great at doing bulk updates, so you get a lot less spam than you do from Dependabot.


Technically you could pin directly to a git commit instead of an NPM release
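For illustration, npm does accept git references in package.json — the names and commit hash below are made up:

```json
{
  "dependencies": {
    "some-lib": "github:someuser/some-lib#9f2a1c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b",
    "other-lib": "git+https://gitlab.com/someuser/other-lib.git#v2.1.0"
  }
}
```

This sidesteps the registry entirely, though as noted elsewhere in the thread, anything that requires a prepublish build step won't work this way.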


Although npm supports lifecycle methods that run before publish / on install, many packages fail to use those correctly (or at all) yet still require a build step, so using the GH repo directly very often does not work.


This is the right answer. Everyone else replying is patently wrong.


Edit: this was intended for a child comment sorry


Cool post! I don't have a ton of experience using Stripe, but shouldn't you at least be handling some sort of payment_failed webhook?

It looks like you call _createSubscription and set the initial value for currentPeriodEnds before you know the payment actually succeeded, and since you don't ever check or listen for failed payments, anyone could get a free month (or year) of Checkly by using a bad card, or if the payment just randomly fails.

Maybe this isn't a huge deal in the early days, but you and your customer might not even notice the failed payment for quite a while unless you happen to check your Stripe dashboard!
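For anyone wondering what the handling looks like, here's a rough sketch of just the dispatch logic (the event shapes follow Stripe's webhook payloads, but the subscription-state map is a made-up placeholder; a real app would verify the webhook signature first, e.g. with `stripe.Webhook.construct_event`, and persist state to a database):

```python
def handle_stripe_event(event, subscriptions):
    """Dispatch a parsed Stripe webhook event (a dict) against a
    placeholder map of customer id -> subscription state."""
    etype = event.get("type")
    obj = event.get("data", {}).get("object", {})
    customer = obj.get("customer")
    if etype == "invoice.payment_failed":
        # Don't extend access until payment actually clears.
        subscriptions[customer] = "past_due"
    elif etype == "invoice.paid":
        subscriptions[customer] = "active"
    return subscriptions.get(customer)
```

The point of wiring this up is exactly the scenario above: the failed payment flips the customer to `past_due` instead of silently leaving them on a free subscription until someone checks the dashboard.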


This is a great comment. And it's stupid I left this out of the post, as I made a conscious decision to not deal with that now. I should at some stage.

I actually had one failing credit card already, but my customer base is in the 30+ under 100 range, so I easily caught it. Also, it was totally benign, from an early customer that just had a bodged renewal for their card.


The library described in this blog post looks kind of interesting as an implementation of a Maybe monad in python, but the example case is pretty silly.

It re-implements a 4 line function as a 13 line class, but the logic at the caller doesn't get any simpler:

  try:
     result = get_user_profile(id)
  except:
     # handle any exceptions...
vs. with the library:

  result = FetchUserProfile(id)
  if (result is a Failure):
    # handle the failure
  # do something with the result
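For readers unfamiliar with the pattern, a stripped-down version of the idea looks something like this (this is not the library's actual API, just the general shape, with a stubbed-in user lookup):

```python
class Result:
    """Minimal Success/Failure container: holds either a value or an error,
    so the caller branches on state instead of catching exceptions."""
    def __init__(self, value=None, error=None):
        self.value = value
        self.error = error

    @property
    def is_failure(self):
        return self.error is not None

def fetch_user_profile(user_id, users):
    # `users` stands in for whatever I/O the real function would do.
    if user_id in users:
        return Result(value=users[user_id])
    return Result(error=f"no such user: {user_id}")
```

Whether that's clearer than try/except is largely taste, though it does force the caller to handle the failure case explicitly rather than letting exceptions propagate silently.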


Well, it does actually solve the problem in the sense that it actually causes houses to be built and students to go to college. Probably this indicates the real problem is more about bad monetary policy and bad tax policy, not a fundamental shortage of resources.

Of course, it would be better to improve monetary policy rather than just making a bunch of dumb, risky loans leading to a dramatic boom and bust.

But that would require policymakers understanding basic macroeconomics, so don't hold your breath...


Is that really solving the problem though, or does it just give us houses no one wants to own and college graduates no one wants to employ? That's what it looks like to me at least. I don't think it's bad monetary or tax policy per se, although those are bad, but just bad policy in general. But the badness of the policy comes from a fundamental overestimation of what the government is capable of doing for a society.


Well, to some degree, people probably do want to live in big, new houses. There's also probably some "real" underlying demand for an education, outside of people needing it (or believing that they need it) to get a good job.

But the point is, regardless of how much people "actually" want them, it is in fact technologically and ecologically possible to provide most people in the U.S. with these goods. (The evidence is that a lot of people do in fact have these things, even if on paper they are underwater on a mortgage or saddled with a big student loan.)

The bad policy is that by subsidizing and insuring these loans / mortgages far too cheaply, the government is driving up demand, and as a result people buy a bigger house than they need, or get an education that isn't useful for getting a high-paying job.

If the government stops subsidizing mortgages and student loans with bailouts, insurance, laws preventing default, etc., the result will probably be:

1) the cost of a college education goes down, and fewer people actually go to college

2) people don't buy as many homes, and buy smaller houses than they otherwise would have

If you then want more people to have houses, education, healthcare, food, etc., whether or not they are able / willing to sell enough of their labor to pay for these things themselves, the next step is to solve wealth inequality.

We could probably fix wealth inequality with good monetary policy (e.g. NGDP targeting) and "taxing the rich" in a sane way, e.g. with land value taxes, luxury consumption, VAT taxes, etc. After that, you're probably done, but for the remaining poor, the government can just give them cash, e.g. basic income, instead of trying to subsidize a million different services like healthcare and education and homes, in which case the subsidies end up in the pockets of doctors and administrators and landlords.


Okay you piqued my interest... "bad monetary policy" usually means we should be on the gold standard, but I've never heard that before as a solution for student loans. What are you proposing is the monetary fix?


A gold standard is terrible monetary policy, even worse than the status quo.

Good monetary policy would be the fed doing NGDP level targeting a la Scott Sumner: https://www.mercatus.org/system/files/NGDP_Sumner_v-10%20cop...

Monetary policy only goes so far though, if you actually want to distribute resources in an equitable way, you probably also need good tax policy.

Good tax policy would be taxes which decrease wealth inequality (e.g. luxury consumption taxes, land value taxes).


I don't think the Mercatus Center is the best place to look for good or sincere ideas on how to distribute resources in an equitable way


Rampant inflation would fix those student loans real quick. Ditto for discharging the crippling mortgage payments and credit card debt.

Not suggesting I recommended it but it is an option for an across the board sweep.


not OP, but taking a stab, since I agree with their assessment:

bad monetary policy, in regards to student loans, means subsidizing them in two ways (at least):

1. paying the difference in interest rate compared to going to a regular bank. I can get student loans w/3% interest, but a bank wouldn't give me that money for less than 7%.

2. Making it illegal to discharge student loans in bankruptcy.

Both of these are monetary policy decisions that "hide" risk.

The higher interest rate is to compensate for the riskiness of the loan, while making loans impossible to discharge is the gov't response to the risk.

both of these will hide increasingly large amounts of risk, where everything looks like it's going fine, until it all fails, catastrophically.

The politicians will claim no one could have seen it coming, but if they'd not allowed the risk to be so hidden, they'd have seen the institution crumbling decades before.

edit: formatting


Individual animals in the wild die cruel, slow, painful deaths all the time, regardless of whether their species as a whole is thriving or not.

So while this video might evoke a strong emotional response, it is just an anecdote and not actually evidence one way or the other for human-caused climate change.

Of course, human-caused climate change is obviously real, but it's not OK to use bad arguments even if your conclusion is correct.

EDIT: To clarify further, I am not disputing that polar bears are being harmed by climate change, or even that the particular bear in the video is starving because of climate change. I am merely pointing out that one could produce a video of a starving polar bear both in our world and in a world where climate change is not happening, because wild animals starve all the time.


We need to use whatever arguments are effective to steer people and governments toward good policy, as long as we know those arguments support a cause that is scientifically backed. Some people don't get, and don't want to get, scientific evidence. If there is a plausible causal chain from global climate change to that suffering polar bear, we should be using that polar bear to convince people climate change is real and bad. Most people don't have the patience for or interest in scientific arguments, and we have to convince most people, not just Hacker News, that this is a problem.


There is no way for the general public to distinguish between arguments presented in this way that are genuine, and ones that are pseudoscientific. It's never ok to knowingly present information which is unscientific, to achieve a political end. It can be discredited and may do more damage to the cause than good. In this particular case, polar bear populations are not observed to be in decline, so it's a poor argument to make, which can easily be discredited with real data.


I strongly disagree with you. There is research in social psychology showing it is difficult for most people to reason from technical, statistical arguments. One thing I have noticed reading political essays recently is that the emotional impact of scientific, technical and statistical arguments can be bolstered with fitting specific anecdotal evidence. Logic is often not enough to change people's views or motivate them to change. This applies to the most logically minded of us as well.


That’s called propaganda. You advocate using propaganda to justify a cause you have predetermined to be scientifically backed. Why not just use the scientific backing directly?


Those who understand Climate Change and love science should not cede the ground to those who disagree with them without a fight.

Most people don't understand science at the same level as many of the participants in HN. They are not bad people, and they are not necessarily ignorant people. But they don't base their decision-making process on scientific principles. Someone needs to reach out to these people and help them understand the world we live in.

Ten minutes on Fox News and similar sites should convince anyone that Climate Change deniers understand how to influence people and do all they can to advocate for those who profit from policies that harm our world. We have to use the available tools to counter their arguments.


You’re denying the people of their right to be treated as an intellectual equal.

If they don’t understand it is their loss. You can bet against them in the capitalist system. Sell them houses near coastlines. If they’re truly ignorant and you are in fact backed by science and facts, then you can profit off of them.

But if you can justify doing this for people you consider not worthy of the facts because they probably can’t understand it, then you can justify it with anyone.

This is the same rationale for why the FBI or law enforcement want more power. People don’t understand that they are the GOOD guys and they won’t abuse their power.

If the good guys—the people backed by science in this case—lose their accountability to actual science by being good at propaganda and manipulation, why even bother with science? They’ll become bad guys soon enough when they finally realize the facts never mattered.

When propaganda is spewed from both sides, the only way to tell is to have facts based on reality that people can independently verify. That’s the whole point of being backed by science. If you deny people the option to independently verify, then you deny them the ability to find out who’s right and who’s wrong. At that point, are you any different from the bad guys?


Because a large portion of people in the world are not swayed by logical arguments. So, if you want to be an effective communicator, you need to talk to them in a way that works.


The opposing side is currently employing a constant deluge of propaganda to push their side of the debate. Insisting that we educate all of humanity in enough science to sway them would be bringing the proverbial knife to a gunfight.


Yet the pro-global-warming side has been caught falsifying data on any number of occasions -- from "tweaking" a century-long trend via a correction curve to a recently publicized outright falsification of data in order to "prove" that sea levels are increasing.

This muddies the water and gives me great doubt to the veracity of the overall claim. If man-caused global warming were happening, the data would show it. Why fake the data?

Perhaps because data proving anthropogenic climate change does not exist and must be manufactured? But it has become a near-religious issue and dissent is not tolerated. Daring to ask for proof is a thoughtcrime.


...sigh...

They corrected a known design flaw in the temperature recorder. This isn't freaking tweaking, it's properly accounting for external variables. The fact that you bring this up shows once again that depth of knowledge and experience in one field does not automatically translate to another field. I'm going to break it down in a way I'm pretty sure you'll understand. Back when the browser wars were going on, web pages would have to "tweak" their code so that a page would display properly. Just because someone had to write a bit of code because of a flaw in the browser didn't mean that the data from the website was a lie.

The thing with the sea level rise? Is this that old Morner chestnut that's been thoroughly debunked numerous times and has been around since the early aughts? The guy was himself found to be falsifying data, but every few years someone trots this out as new, and it's just not. I mean, it's a pretty straightforward, well-known phenomenon that adding more water to a body of water will make it rise, and since we are losing land ice, we're adding more water to the ocean. You can easily replicate this by putting an ice cube in a glass of water.

There's mountains of proof of human caused global warming and really it's basic science, but none of that matters because if you choose not to believe the proof there is nothing anyone can say to you to make you believe otherwise. No one can make you stop believing hucksters and conspiracy theorists. You can dissent all you want, but it doesn't make you right.


Nope. This is yet another story, which broke last week: http://www.breitbart.com/big-government/2017/12/06/tidalgate...

"The whistle was blown by two Australian scientists Dr. Albert Parker and Dr. Clifford Ollier in a paper for Earth Systems and Environment."

It's another example in a long line of data "adjustments" that turn science on its head, showing the opposite of what is actually observed. I call it fakery. You have other words. We disagree.

As I said, it muddies the waters. There may be real evidence, but this fraud doesn't help the cause.


I wonder, is this the same Albert Parker who went out of their way to hide acceleration of sea-level rise [1] and the same Cliff Ollier that "is a member of...a climate change denial organization consisting of mostly retired engineers and scientists from the mining, manufacturing and construction industries" [2]?

[1] https://quantpalaeo.wordpress.com/2016/03/20/albert-parker-h...

[2] https://en.wikipedia.org/wiki/Cliff_Ollier


Yes, exactly. Faking data (and making emotional appeals like this one) make me doubt the argument. Or at least the motives of the people behind the argument.


Are you saying everything that exists outside of deeply researched technical arguments is propaganda? I'm not talking about misleading people, I'm talking about finding ways to reach people who will never be interested in digesting the type of science it takes to deeply engage with the scientific arguments. Those people exist and they vote.


Propaganda can be used for good. A lot of the stories that children are told by their parents and teachers when they are young are based on the same principles as propaganda. Propaganda is just done on a larger scale.


Using bad science to influence gullible people is a great way to end up with terrible policies.


This is not right. If ends justify means then we are in for a world full of strife and tribalism. This will make it difficult for people to trust anything. This is how we have ended up in this highly charged outrage porn filled world where everything and anything real or fake is being used to influence and recruit people towards one ideology or the other. This in the long term does more harm than good even for the people who think they are on winning side. It is more than ever important to be genuine and real and not do or say anything just so that we can win an argument, however right we might think we are.


Tangentially to this, I was wondering about the rather simple question of "Where do old ducks/birds go to die?". They mostly get eaten is the rather obvious answer [0].

[0]: http://ww2.rspb.org.uk/birds-and-wildlife/bird-and-wildlife-...


> I am merely pointing out that one could produce a video of a starving polar bear both in our world and in the world where climate change is not happening, because wild animals starve all the time.

Good thing most people already know we don't live in the world where climate change is not happening, so this doesn't need to act as proof of it happening, but rather as an individual example of what is happening all over.


Yes, this is probably effective propaganda, and moreover, propaganda for a just cause.

Nonetheless it is not evidence for climate change, and moreover it is not even necessarily an example of the effects of climate change - even we 100% fix climate change, there will still be plenty of starving wild animals available for sad videos.

The actual way to ensure there are no starving wild animals is to exterminate them in the wild. But that's probably not a good idea.


"propaganda for a just cause"

That's a troubling turn of phrase.


Has anyone in the linked article or in these comments tried to position this video as evidence of climate change? I don't see it if so.


Sure, it's in the original article.

> By telling the story of one polar bear, Nicklen hopes to convey a larger message about how a warming climate has deadly consequences.


I don't think the photographer would say that the polar bear should convince anyone that climate change is occurring. Instead, they assume that reasonable people are already convinced, and are trying to demonstrate a reason one should care.


No. "Here's an example of the effects of climate change" is not the same thing as "this is proof of climate change"


Being an animal is generally a terrible experience, so using an individual, terrible experience of an animal to make a political point seems disingenuous.


Isn't it much more likely that Tether is keeping some or most of its reserves in bitcoin, rather than USD, as they claim?

Most people get Tethers from trading on bitcoin exchanges, not from buying it with USD. If Tether accepts payment for tethers in bitcoin, they're supposed to immediately cash the bitcoins out to fiat. But my guess is they're not actually doing this, and just keeping a good portion of their reserves in bitcoin. If that's the case, they'll be solvent as long as the price of bitcoin doesn't collapse (big if) - in fact, they would have made a fortune over the last couple of months.
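A toy back-of-the-envelope illustration of the solvency claim (all numbers invented): if reserves were split between USD and BTC, coverage of outstanding tethers would move with the BTC price:

```python
def coverage(usd_reserves, btc_reserves, btc_price, tethers_outstanding):
    """Fraction of outstanding tethers backed at a given BTC price."""
    return (usd_reserves + btc_reserves * btc_price) / tethers_outstanding

# Hypothetical: $200M in USD, 50,000 BTC held, 800M tethers outstanding.
print(coverage(200e6, 50_000, 15_000, 800e6))  # 1.1875 -> fully backed
print(coverage(200e6, 50_000, 5_000, 800e6))   # 0.5625 -> insolvent after a crash
```

Same reserves, and whether the peg is backed depends entirely on where BTC trades.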


This seems like the most likely explanation to me. It's a classic "Heads I win, Tails you lose" calculation, but with much better odds and payoffs than usual! If you are bullish on BTC as I am, this will turn out to be fine for everyone.


Related to this, does anyone know of a way to bet against bitcoin (or cryptocurrencies in general), without buying cryptocurrencies themselves?

There are lots of more-or-less sketchy cryptocurrency exchanges which allow you to shortsell bitcoin, but these all require buying cryptocurrency in the first place as collateral.

The upshot is that if you bet against bitcoin and win, you win a bunch of... bitcoin. Not so useful if you want to bet on a big collapse of crypto in general.

Selling bitcoin futures seems like a way to bet against bitcoin in the long term, using actual USD, although I don't think that will ever be an option for smalltime investors / individuals.


I recently sold a bunch of Bitcoin, and I believe it will go down, but I would never, ever short it.

When you are long you have limited downside because the price can never go below zero. When you are short your downside is unlimited because there is no limit to how high the price can go.

Bitcoin could easily quadruple in a few months before any crash happens, and your short would probably just be liquidated at a total loss to you even if a crash comes later. As we all know, markets can remain irrational longer than you can remain solvent, and these markets are very irrational right now, as clearly demonstrated by the nonsensical valuation of recent scam Bitcoin forks "Bitcoin Gold" and "Bitcoin Diamond".
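The asymmetry is easy to see numerically (a toy calculation ignoring fees, funding, and margin interest):

```python
def long_pnl(entry, exit_price, size=1.0):
    # Worst case for a long: price goes to zero, loss is capped at entry * size.
    return (exit_price - entry) * size

def short_pnl(entry, exit_price, size=1.0):
    # A short loses money as the price rises, with no upper bound on the loss.
    return (entry - exit_price) * size

print(long_pnl(10_000, 0))        # -10000.0: the maximum possible long loss
print(short_pnl(10_000, 40_000))  # -30000.0: a 4x rally, and it can keep rising
```

In practice a leveraged short gets liquidated long before the loss reaches those levels, which is the "liquidated at a total loss even if the crash comes later" scenario.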


What is a crash, though? Even at current levels, if it quadrupled and then crashed 50%, is that a crash? Depending on when you acquired your initial position, you might not care much.

Apple added $250bn in market cap this year. The entire crypto space is about $250bn in market cap, and Visa as a whole is also worth about $250bn. That puts things into perspective. Given crypto is completely global, this space is still tiny.

My assumption is, if it does end up being a massive bubble and mania, it will run high (like the .com bubble -- into the many trillions) but there will be a major correction at some point.

But how likely is it that will be any time soon? And are today's prices still cheap then? I would wager digital gold is probably worth more than $150bn in total market cap, even after a major correction.


What would you rather own? All the shares of Visa Inc. or every cryptocurrency token in the world?


Not exactly the best comparison. Obviously you would pick the shares since they are less volatile -- if they could magically appear in your brokerage account.

However, say I had a net worth of $50m, then I'm quite certain I would rather own $5m in Bitcoin or Ethereum today than own $5m in Visa shares.


My point is that if you magically owned Visa tomorrow, you’d be very rich. If you magically owned all those cryptocurrencies tomorrow, would they be worth anything? Why would anyone want to buy them from you?


Visa is a company and produces profit from people using it, and the people using it don't need to be shareholders, so if you're the only shareholder, you're not reducing the utility for anyone else.

If you own all of the cryptocurrency in the world, nobody else can use it. Obviously it becomes worthless. The same would be true of any currency. That's just a small part of what makes a currency different to a company.


Companies are different in comparison to commodities, though. That said, you're right -- it is still early, and people aren't exactly sure how much these things are worth. But part of the value derived comes from having a network of people using it.

If tomorrow you owned all the gold in the world and no one would be able to get or buy more gold -- that gold wouldn't be worth much.


> If tomorrow you owned all the gold in the world and no one would be able to get or buy more gold -- that gold wouldn't be worth much.

Wouldn't it be worth a lot because you cornered the market on it?


I agree that gold is impossible to value (at least, the current valuation is impossible to justify). I always liked this argument by Buffett: http://www.businessinsider.com/buffett-on-gold-2012-2

But for bitcoin the situation is much worse. Gold would still be worth something. It is a physical object, a collection of atoms with some nice properties. It has industrial uses and the bling-bling factor wouldn’t disappear. If anyone needs or wants gold he would have to mine it, an expensive thing to do, or buy it from you. You could always sell it marginally cheaper.

Anyone could easily come up with an alternative cryptocurrency and what advantage would yours have over the others?


I can follow your reasoning, but: 1) just because something is digital does not make it worthless, 2) those gold use cases are really exaggerated when it comes to price setting. Gold derives its value mostly from the fact people say it is valuable, not from jewellery and industrial purposes.

One way to think about it: you have video games. There are marketplaces where people can buy and sell digital skins / clothes / gold etc. Every year, people buy and sell for more than $50bn in digital goods through these marketplaces. Ten years ago, I sold a top notch World of Warcraft character for a sizeable amount of money. Are all these people crazy? Maybe.

But how is the value of these items set? Supply and demand, and network effects. Why did a World of Warcraft character easily sell for €10,000 ten years ago, while now it goes for a fraction of that? Fewer people are playing the game.

The same applies to cryptocurrencies -- you have two major cryptocurrencies: Bitcoin and Ethereum. Bitcoin has limited use cases, and yet most of the money flows into it, due to its network effect and brand recognition. It has all the properties of gold that you described. It was created in the image of gold.

You could create something similar no doubt. But would it get traction? Unlikely. These are for the most part winner takes all markets for a specific value proposition. For Bitcoin, it's digital gold. Ethereum is smart contracts, etc.

I'm not saying investing in crypto is a sure thing. It is very risky and speculative -- a bet on the future, with great upside if things work out. But at the current stage crypto is in (at least Bitcoin and Ethereum), in comparison to, say, a high-risk investment in a startup or in a risky small-cap stock, it seems (to me at least) less risky relative to the potential upside.

If it doesn't end up being digital gold, it will most likely be worthless. But institutional money wouldn't be looking at buying in if they didn't think there was a decent chance of success.

Again, not investment advice, just a personal opinion.


Today the two major cryptocurrencies are bitcoin and ethereum. The two major cryptocurrencies two weeks ago were bitcoin and bitcoin cash. Three years ago they were bitcoin and ripple. Four years ago [1] the two major cryptocurrencies were bitcoin and litecoin. The next two major cryptocurrencies were peercoin and namecoin (they are 50-75% down over the last 4 years). Will bitcoin remain on top forever? Maybe. But it doesn’t have the properties that give some intrinsic value to gold. It’s not just the network effect. You cannot replicate gold, but you can create new cryptocurrencies (how many hundreds exist already?) which are just as good as bitcoin or better. Why does the network effect matter anyway if it’s not used for transactions?

[1] https://www.forbes.com/sites/reuvencohen/2013/11/27/the-top-...


A high market cap (for wealth-transfer bandwidth and lower volatility), a high hash rate (for security), and a large number of people willing to accept it as money (for conversion use) also give it value. Hence it isn't exactly true that you can just copy it; these other network-effect metrics matter.


Pointless comparison:

Cryptocurrencies aren't very useful if you own the entire supply.

Visa is still useful even if only one person owns all of the shares.

More interesting would be "if you have $X to invest, at what value of X does it become preferable to buy Visa shares instead of cryptocurrency?"


I agree, comparing the total quantity of bitcoin or dogecoin or dentacoin in circulation with the market capitalization of a traded stock (or the valuation of a private company, for that matter) is pointless.


Kraken allows you to short-sell. You can use USD on the exchange, so you can immediately short-sell and convert to USD if you want.

https://support.kraken.com/hc/en-us/articles/209238787-Margi...


Yeah if anyone knows how to short these futures for someone with a few grand in his pocket to throw away on a risky gamble, please let me know.


Interactive Brokers accounts can trade CME futures. Whenever these come out, they will be just what you're looking for. The margin requirements allow you to short with a lot of leverage too.

Good luck.


eToro lets you short BTC, Dash, Ethereum, LTC and XRP, though no leverage is available for those, and the spread can be a bit daunting.

Disclaimer: I don’t usually trade crypto there, this is not investment advice.

EDIT: apparently it’s not available in the US.

If you sign up with my referral link we both get $20: http://etoro.tw/2BmqxQb


I should clarify that I am in the U.S... seems like most of these sites are unavailable here.


Oh, sorry. I believe one of the main exchanges (Bitsane/Bitfinex maybe?) was going to make options and margin accounts available, don’t remember which one. There’s also cex.io, but it’s in the UK.

I guess you’ll have to wait for JP Morgan’s fund to go live...


I’m sure you could get odds in Vegas, though probably not long term.


I'm exploring buying options with the publicly traded Bitcoin-based funds. No way do I want to depend on one of the Bitcoin exchanges. Plus, if things really go south, I'm sure I'll still be able to get paid using the stuff on the NYSE or NASDAQ.


Which funds are you looking at? I haven't found any way to reliably trade Bitcoin options.


Some forex brokers offer bitcoin CFDs, where you can take the short side.


Why would you do that?


Because it seems like an obvious bubble, with little / no real value?

Bitcoin was / is a pretty cool technological innovation. But the main use-case seems to be buying illegal things or laundering money, and the vast majority of transactions are just speculation.

Also, the fact that it is difficult to short is further evidence of a bubble.


Two reasons you should not short Bitcoin.

1) Actual legitimate Bitcoin payments are increasing and real.

https://blog.bitpay.com/bitpay-growth-2017/

2) Even if this whole thing does collapse, never short something like Bitcoin unless you have unlimited cash to support such a short. Even if you're right, can you continue to sustain your short if the peak is $100,000 a coin before the collapse? (And the new bottom is $10,000, still higher than right now)

If the price goes there, can you afford the short?


There's a dead comment that says this:

> Traders will devise a suitable risk management plan prior to initiating a short position. Unlike the media driven nonsense about a large activist shorting a bull market permanently, many traders can and do make leveraged long and short trades all the time, with the intention of exiting at a loss if the trade doesn’t go immediately (for some definition of immediately) in their favor.

In my experience working at a prop firm that was trading (speculating) on futures, this is the case. We would enter leveraged short/long positions based on some kind of signals, within some kind of risk framework for all assets in the portfolio.


What about a put instead of a short? There aren't any options markets yet, but that might be interesting.

Also, a smart shorter will always put in a stop-loss price, but stop losses can slip because cryptocurrencies move fast.


I can't find any decent options market. Do you know one?


If I did I wouldn't have said what I said...


How can a single coin attain $100,000?


By someone being willing to spend $100k to obtain one.

I’ve been watching bitcoin since sub-$1 prices. A further 10x increase at some point is not so crazy.


Good luck with that. I mean no disrespect, but I'm guessing you don't have much experience in this sort of thing if you're asking how to short bitcoin and claiming you've predicted the top of the market when it's been steadily climbing for 7+ years. It doesn't matter what reasoning you've come up with, markets are rarely driven by reason. As a wise man once said, the market can stay irrational longer than you can stay solvent.

If you want to make money, buy BTC. You might win and you might lose, but your odds are far better than if you try to short something that has made a 1000% gain in the last 12 months.


And don't you think that buying illegal stuff is a very large market, enough to make some cryptocurrencies especially interesting?


No, not really. It makes it useful for criminals, and a small number of people who might have good reasons to buy things, but that doesn't mean it should have a market cap of billions.

I'm not saying there are no good uses for bitcoin, of course. For example, if someone used bitcoin to import prescription drugs from Canada to the U.S, that's great. But it doesn't justify a $100B+ market cap, and as it stands, the hard part of doing that is importing the drugs, not exporting the money.

Similarly, just because Tor is useful mostly to criminals and a few human-rights advocates or privacy enthusiasts, I don't expect the Tor Browser to ever gain a major market share.


You are confusing illegal stuff with being a criminal, and also underestimating the market size of just the casino industry.


Yes, gambling is arguably much larger than drugs by mail.


Ok, so you have no clue about this market and are making investment decisions based off your gut intuition and wild guesses. I’m sure you can find someone willing to sign the other side of that contract. Just don’t hurt yourself or gamble more than you can afford to lose.


Go long in real currency or physical commodities?


Nice. In general, I think cryptocurrency and blockchains are pretty over-hyped, but they are a good example of a non-trivial but still simple application to show off a programming language.

For example, compare this to https://news.ycombinator.com/item?id=14439789

for a simple blockchain implementation in Haskell.
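The hash-chained structure these demo projects implement can be sketched in a few lines of Python (simplified: no proof-of-work, consensus, or networking):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Build a block whose hash commits to its data and its predecessor."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev_hash": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    return block

def valid_chain(chain):
    """Each block must reference the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: alice -> bob", genesis["hash"])]
print(valid_chain(chain))  # True
```

Small enough to show off a language's hashing, serialization, and data-structure ergonomics in one go, which is presumably why it makes a good demo app.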


The comparisons in this article are mostly against the high-end Intel core line, but these CPUs support server / enterprise type features like ECC memory, lots of PCI-E lanes, and virtualization features (I think?).

Shouldn't Threadripper be compared to Xeons?

EDIT: Or rather, what I'm really wondering is what these CPUs lack that AMD's server line (EPYC) have.


EPYC has double the PCIe lanes, double the DRAM channels, and will have enterprise level support. Threadripper is classified by AMD as a Ryzen family product, and is consumer focused (or super high-end desktop focused) rather than enterprise focused. TR will be on shelves, EPYC will not.

AMD's 16-core EPYC part (the 1P 7351P) is around $750, but supports 2TB/socket and 128 PCIe lanes in exchange for a good chunk of frequency (2.4G base, 2.9G Turbo). Threadripper is also single socket only - most of EPYC is 2P.

Though given Intel's pricing, if AMD has the ecosystem, then the mid-range of the Xeon line might migrate to TR/EPYC.


I've never heard of Capture the Flag. Can someone a bit more familiar describe what the format is exactly, or what type of questions / challenges there are? Any examples from last year's competition?


In general, CTFs are a list of problems you're trying to solve in a set amount of time, ranging from a few hours to a week or so (some open CTFs are not time-bound, and exist just for learning). They will tell you something (normally very, very minimal info, or a hint) and you try to find or figure out a string to 'capture the flag' and get the points for that problem. The harder the problem, the more points you get. The person or team with the most points wins.

The types of questions vary depending on the CTF and its goal. Some draw on things from the OWASP top 10 (1), while others might have logic problems, math problems, or reference problems where you figure out the answer instead of finding it.

1)https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Proje...

See https://en.m.wikipedia.org/wiki/Capture_the_flag under 'software and gaming' for security CTFs
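As an invented example of an entry-level crypto-category task (not from any real event): recover a flag that was XOR'd with a single repeated byte by brute-forcing the key:

```python
def solve_single_byte_xor(ciphertext, known_prefix=b"flag{"):
    """Try every one-byte key; detect success via the expected flag format."""
    for key in range(256):
        plaintext = bytes(b ^ key for b in ciphertext)
        if plaintext.startswith(known_prefix):
            return plaintext
    return None

# Build the challenge: the flag XOR'd with the secret byte 0x42.
challenge = bytes(b ^ 0x42 for b in b"flag{h3llo_w0rld}")
print(solve_single_byte_xor(challenge))  # b'flag{h3llo_w0rld}'
```

Real challenges are harder, but the shape is the same: a blob, a hint, and a known flag format to search for.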


I used to participate in a few as a hobby back in my university days. The online ones are generally jeopardy-styled, where you get a bunch of challenges across a variety of categories: web, crypto, stego, binary analysis, reverse engineering, network packet analysis, etc. There's a bit of everything really, and they also try to have a range of difficulty with an appropriate point system.

CTFTime [0] is a great central hub for finding, tracking and reading about CTFs. After each CTF event, many people publish "writeups" which is basically their "walkthrough" of how they solved each challenge. You can try looking at those to get a rough idea of the various skill sets required for these events. The ctfs github group [1] is filled with them.

[0] https://ctftime.org/

[1] https://github.com/ctfs

