Consider a one-week trip to Hawaii in the darkest time of winter: supplementation studies have shown that the body can safely absorb and store 100k IU, and that it takes about 2 months to deplete such an amount. Your skin produces 20k IU in 30 minutes on a sunny day at the beach (wearing a swimsuit). If you're in the PNW (or central Europe at the southern German border), your skin cannot produce vitD from October through April because of how low the sun is in the sky, even at its zenith.
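If you want to sanity-check the arithmetic, here's a rough sketch using those same figures (back-of-the-envelope numbers from this comment, not medical guidance):

```python
# Back-of-the-envelope math using the figures above: 20k IU per 30 minutes of
# midday beach sun, ~100k IU safely storable, ~2 months to deplete a full store.
IU_PER_30_MIN = 20_000      # claimed skin production in a swimsuit at the beach
STORE_CAPACITY = 100_000    # claimed safe absorb-and-store amount
DEPLETION_DAYS = 60         # claimed ~2 months to run a full store down

beach_minutes_per_day = 30
trip_days = 7

produced = trip_days * (beach_minutes_per_day / 30) * IU_PER_30_MIN
stored = min(produced, STORE_CAPACITY)
covered_days = DEPLETION_DAYS * (stored / STORE_CAPACITY)

print(f"Produced over the trip: {produced:,.0f} IU")   # 140,000 IU
print(f"Stored (capped):        {stored:,.0f} IU")     # 100,000 IU
print(f"Roughly covers:         {covered_days:.0f} winter days")  # ~60 days
```

A week of daily half-hour beach sessions overshoots the ~100k IU store, which then covers roughly the two darkest months.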
I definitely should do more sunny trips in the fall/winter. I am going to try tanning in a UVB bed a couple times a week during the bad months this year. I have heard that it has helped some people.
If you decide to supplement vitD in wintertime, note that US government recommendations are way too low [1][2] compared to normal practice in the Nordic countries: daily doses should be 3000 IU for kids (toddlers and up) and 8000 IU for adults.
This is anecdotal, but I can confirm this from my experience. I was taking 2000 IU per day, and I felt the effects of seasonal depression. When I increased it to 5000, I felt so much better.
I agree that the recommended dose is probably too low, but part of the reason for the discrepancy may be poor absorption of many supplements. I ended up with side effects from only 2000 IU/day (as an adult): a racing heart and thirst that wouldn't go away, which reliably went away when I backed down the dose and came back when I upped it again. But I was taking it in the form of drops under the tongue (and as a combined formulation of D3 and K2), which I suspect is absorbed much better than the typical pills which are swallowed.
for OS class we had to write a distributed game on top of the Andrew File System; debugging a nasty crash I managed to distill our team's code down to 10 lines to remotely crash/reboot any chosen AFS workstation on campus; thus I regularly emptied workstation rooms at CMU (most students just gave up after a couple of sudden computer reboots) so that CS friends could work on and finish their homework; I may have also crashed some professor workstations from time to time, idk
For the last 25 years, at 8 different jobs, everything has been over NFS. Every company has used Unix groups, and some have used those groups to manage project access. Sometimes it can take 3 days to get added to a group.
When I started college in 1993, our first-semester "Intro to Computing Environment" class taught us how to create ACL groups.
I have a degree in computer engineering, so I understand binary, octal, and hexadecimal, but chmod 755 or 644 or whatever is not exactly intuitive.
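Just to illustrate how much translation those octal values need, here's a quick snippet using Python's stat module to turn them back into the familiar rwx string:

```python
# Decode the octal chmod values into ls-style permission strings.
import stat

for octal in ("755", "644"):
    mode = int(octal, 8)
    # stat.filemode() renders the full mode; drop the leading file-type slot.
    print(octal, "->", stat.filemode(mode)[1:])
# 755 -> rwxr-xr-x
# 644 -> rw-r--r--
```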
AFS permissions are much easier to understand. When we had a group project we would all make a directory for that project and only give access to the other people in our group. We could give them read or write and everything worked great.
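For illustration, that workflow looked roughly like this. This is a hedged sketch assuming an OpenAFS cell with the standard `pts` and `fs` tools on the PATH; the user, group, and directory names are made up:

```python
# Hypothetical sketch of the group-project workflow described above.
import subprocess

def run(*cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

project_dir = "proj15410"      # made-up project directory
group = "alice:proj15410"      # AFS groups are namespaced by their owner
teammates = ["bob", "carol"]

run("mkdir", project_dir)
run("pts", "creategroup", group)
for user in teammates:
    run("pts", "adduser", "-user", user, "-group", group)

# Grant the group read+write on the directory and shut everyone else out.
# "write" and "none" are the standard fs setacl shorthands.
run("fs", "setacl", "-dir", project_dir, "-acl", group, "write")
run("fs", "setacl", "-dir", project_dir, "-acl", "system:anyuser", "none")
```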
I know NFSv4 has ACL support but I've never seen it actually used anywhere.
I've been using the PurpleAir website (map.purpleair.com) for many years now, mostly to weigh heavy wildfire pollution and winter inversions against personal outdoor and athletic activities. One interesting aspect is that you can buy your own sensor and join the community of local sensors, which increases the accuracy of measurements. They let you pick between various international standards and types of pollutants. They seem to have a dense network of sensors in North America; I'm not sure about the rest of the world.
Strength Through Joy (KdF) was not the same as FKK. KdF was focused on the "beauty of labor", middle-class leisure, physical health, and (domestic) tourism.
Freikörperkultur (FKK) has roots in the 19th-century Lebensreform reactionary movement and gained an official structure in 1898, so it predates the division of Germany into BRD and DDR by several generations.
Yes! This book is a first-person account of product development at Apple, showing how both the design and the features of a system tool keep progressing under the selective pressure of iterative review sessions, with creative and specific challenges provided by managers at multiple levels, all the way to the top (Steve Jobs at the time of the book's tales). It's a great book, very well written, and the only one I've read that provides insight into (some of) Apple's creative process. When people say that Steve's DNA has profoundly shaped the company, I imagine this is one aspect of what they mean.
So they should promptly update their policies to a) stop logging so much, b) delete all past logs, and c) sharply limit how long they retain whatever logs they decide they really need for internal purposes.
They should avoid logging, and rapidly rotate logs, to thwart future subpoenas from the total surveillance state.
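Something as simple as a time-boxed retention sweep would go a long way. Here's a minimal sketch, assuming plain log files on disk; the directory and the 30-day window are made-up examples, not anything PyPI actually does:

```python
# Delete any log file older than the retention window, so there is
# nothing beyond that window left to subpoena.
import time
from pathlib import Path

LOG_DIR = Path("/var/log/pypi")   # hypothetical log location
RETENTION_DAYS = 30               # hypothetical retention window

cutoff = time.time() - RETENTION_DAYS * 86_400
for log_file in LOG_DIR.glob("*.log"):
    if log_file.stat().st_mtime < cutoff:
        log_file.unlink()
```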
PyPI is in a tough spot because they're also getting hit with an onslaught of malicious packages, which got so bad that they had to disable signups. How do they mitigate that kind of activity without logging basic metadata like the IP address that published a package? Also, as a user of PyPI, wouldn't you prefer that a malicious package is at least _somewhat_ traceable to an attacker? Of course most would be behind a VPN, but it's better than nothing (or maybe it's not, depending on the tradeoff).
Note that the blog post doesn't say they handed the entire database over to the feds. They received three warrants scoped to specific packages and returned only the data they had available that was associated with those packages.
That's also a highly effective mitigation against legitimate users, especially those already disadvantaged everywhere else by a lack of disposable income.
Charging works well for many online services. Hosted email, hosted VPSes, and hosted SaaS are some that come to mind. The App Store and Google Play charge to host mobile games.
Now they need to subpoena both PyPI and the payment processor. It does slow them down but effectively does nothing to "thwart future subpoenas from the total surveillance state".
For the kind of service they are providing I think the logging is appropriate.
I mean, if the DOJ is interested in PyPI logs, the only reason I can think of is that it was used as a supply-chain vector for breaking into other organizations.
Or locating the people responsible for DRM breakers like youtube-dl?
Keeping all the data makes you susceptible to subpoenas like this, which costs money to comply with. There is no reason to keep any data that isn't necessary to the service.
youtube-dl is an application; there's no benefit for the author in publishing it there.
I also checked the PyPI repo, and looking at it I don't think it is published there. There are a few packages that look like they might be it, but looking at them, they feel like something that would install something nasty on your computer.
As somebody who relies on PyPI for development, I want them to store all kinds of data that would discourage PyPI from being used for any shady purpose.
Also, perhaps there's a better example than youtube-dl, as they really don't do anything illegal. Yes, their GitHub repo was taken down by a DMCA request, but it was later reinstated. GitHub spins it as standing up for developers, but the DMCA claim was just frivolous. Still good for them, I guess, because they could have just taken it down and done nothing.
This is for package management. I want the supply chain to be secure and would rather know when something unusual happens. Not logging that data would be irresponsible on PyPI's part.
"As a result we are currently developing new data retention and disclosure policies. These policies will relate to our procedures for future government data requests, how and for what duration we store personally identifiable information such as user access records, and policies that make these explicit for our users and community."
PyPI still fully supports mirrors (though it is becoming increasingly hard to run a full mirror of PyPI; last I looked, a full copy of PyPI was about 30TB).
The only thing we ever removed was designating any particular mirror as official, along with an auto discovery protocol that was quite frankly extremely insecure and slow. That worked by giving every single mirror that wanted to be an "official" mirror for auto discovery a subdomain of `pypi.python.org`, labeled {a-z}.pypi.python.org. A client would determine what mirrors were available by querying last.pypi.python.org, which was a CNAME pointing to the last letter we had assigned; that told the client how many mirrors there were, and it could then work backwards from that letter. So if the CNAME pointed to c.pypi.python.org, the client would know that a, b, and c existed.
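From the client side, the scheme looked roughly like this. This is a sketch using the third-party dnspython package, purely illustrative since those subdomains no longer exist:

```python
# Reconstruct the old (removed) mirror auto discovery: resolve the
# last.pypi.python.org CNAME, then enumerate every letter up to it.
import string
import dns.resolver

def discover_mirrors():
    answer = dns.resolver.resolve("last.pypi.python.org", "CNAME")
    last_host = str(answer[0].target).rstrip(".")   # e.g. "c.pypi.python.org"
    last_letter = last_host.split(".")[0]

    # Every letter from 'a' up to the last assigned one was a mirror.
    letters = string.ascii_lowercase[: string.ascii_lowercase.index(last_letter) + 1]
    return [f"{letter}.pypi.python.org" for letter in letters]

print(discover_mirrors())
# e.g. ['a.pypi.python.org', 'b.pypi.python.org', 'c.pypi.python.org']
```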
Immediately you should be able to see a few problems with this:
- It is grossly insecure. Subdomains of a domain can set cookies on the parent domain, and depending on ~things~ they can also read cookies.
- It does not scale past having 26 mirrors.
- It does not support removing a mirror, since there can be no gaps in the letters.
So we needed to remove that auto discovery mechanism, which raised the question of what, if anything, we should replace it with.
Well, at the time we had only ever made it up to g.pypi.python.org, so there were only 7 mirrors in total that ever asked to become official. To my knowledge we never reused a letter; if a mirror went away we would just point its subdomain back at the main PyPI instance. I don't remember exactly, but my email references there being only 4 mirrors left.
From my memory, at the time most of those 4 mirrors were regularly hours or days behind PyPI, would frequently go offline, etc.
But again, we never stopped anyone from running a mirror; we just removed the auto discovery mechanism and required them to get their own domain name. We even linked to a third-party site that would index all of the servers and keep track of how "fresh" they were, along with other stats (at least until that site went away).
Running a mirror of PyPI is a non-trivial undertaking, and most people simply don't want to do it. We never had many mirrors of PyPI running, and as it turns out, once we improved PyPI most people decided they simply didn't care to use a mirror and preferred to use PyPI directly. But to this day we still support anyone who wants to mirror us.
Firstly, Debian's mirror network URLs allow a mirror operator to attack the base debian.org site if it relies on cookies on debian.org (it may not, I'm not sure). Specifically, the `ftp.<country>.debian.org` aliases cause this. On PyPI we did use cookies at the base URL, so this was a non-starter for us to keep.
The second thing is that, at a technical level, Debian and PyPI are generally similar in how mirrors are configured and hosted. Meaning that, other than the above aliases, mirrors are expected to have their own domain and users are expected to configure apt or pip to point to a specific domain. Debian does have a command that will attempt to do that configuration for you, to make it easier.
The third thing is that Debian's mirrors are as secure as the main repository against attacks from a compromised mirror operator. This isn't the case in PyPI, where you're forced to trust the mirror operator to serve you the correct packages. There is vestigial support for a scheme to address this in the mirroring PEP, but nothing ever really implemented it except the very old version of PyPI (none of the clients, etc.). That scheme is also very insecure, so it doesn't really provide the security level it was intended to.
The fourth thing is that a Debian mirror is easier to operate.
Packages on Debian don't live forever: as new versions are released, old versions get removed, and as OS releases move into end of life, entire chunks of packages get rotated out. However, on PyPI we don't have the concept of an OS release, or any sort of phasing out of old packages. All packages are valid for as long as the author makes them available. This means that the storage space needed to run a PyPI mirror (currently ~30TB) is a lot more than the storage space for a Debian mirror (~4TB).
On top of that, the way apt and pip function is inherently different. Apt has users occasionally download the entire package index so that apt has a local copy of the metadata, while pip asks the server for each package's metadata (it does some light caching, but not a lot). This means that to discover what packages are available, apt might make one request a day while pip might make 100 requests for every invocation. Packages on apt are also released a lot less often than on PyPI, so people frequently don't need to download more than a handful of packages there, whereas people generally need to download a lot of packages from PyPI at a time.
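To make that contrast concrete, here's a rough sketch of the pip side: resolving even a small dependency set means asking the index about each project separately via the Simple API (PEP 503). This is just an illustration using the standard library, not how pip is actually implemented:

```python
# One request per project name against PyPI's per-project Simple index pages.
from urllib.request import urlopen

def count_files(project: str) -> int:
    # https://pypi.org/simple/<project>/ lists one anchor per downloadable file.
    with urlopen(f"https://pypi.org/simple/{project}/") as resp:
        html = resp.read().decode()
    return html.count("<a ")

# Even this small, hypothetical dependency set costs one round trip per
# project, and a mirror has to be able to answer all of them.
for project in ("requests", "urllib3", "charset-normalizer", "idna", "certifi"):
    print(project, "->", count_files(project), "files on the index")
```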
I believe (though I'm not certain) the Debian mirroring protocol is rsync-based, which is generally pretty reliable, while the PyPI mirroring protocol is a custom one that works, but it has a tendency to get "stuck" every few months and require operators to notice and fix it themselves.
I suspect the difference in the strength of the mirror networks comes down to some combination of these, but the third and fourth things are probably the biggest, particularly since PyPI's CDN solved the problem that, in most users' minds, would have made them want to host or use a mirror.