Is this https://lccn.loc.gov/2003636872? It’s in Spanish and directed by George Melford, but it’s on VHS and it’s dated 1992. Plus it has English subtitles.
It would be fun to see this, but I get the impression that it would involve a visit to the Library of Congress. Or maybe Cuba?
Is there more information on the Library web site that I’m missing?
It's an extra on the Dracula Complete Legacy collection. I don't think it's rare or anything; it usually gets bundled with Universal's collections of Dracula films.
Not rare, and outside of Lugosi's performance, it's considered the better film. Think about it: the Spanish-language crew could watch the English dailies and improve on their work.
It looks like any registered reader at LOC can request it for on-site use, but that seems like the only way to see it. Anyone can become a reader, but you need to complete registration in person at the building.
The www-data user (or whatever the web server is running as) should not own any files that are served by the web server. The user should not be able to log in either (its shell should be /bin/false or something similar).
Use an entirely different user for file ownership.
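If you want to sanity-check that ownership rule, here's a rough Go sketch (Unix-only; the www-data name and /var/www path are just assumed examples, not anything from this thread) that walks a document root and flags anything owned by the server account:

    package main

    import (
        "fmt"
        "io/fs"
        "os"
        "os/user"
        "path/filepath"
        "strconv"
        "syscall"
    )

    func main() {
        const serverUser = "www-data" // assumed name of the web server account
        const webRoot = "/var/www"    // assumed document root

        u, err := user.Lookup(serverUser)
        if err != nil {
            fmt.Fprintln(os.Stderr, "lookup:", err)
            os.Exit(1)
        }
        serverUID, _ := strconv.ParseUint(u.Uid, 10, 32) // Uid is numeric on Unix

        _ = filepath.WalkDir(webRoot, func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return nil // skip unreadable entries rather than aborting the walk
            }
            info, err := d.Info()
            if err != nil {
                return nil
            }
            if st, ok := info.Sys().(*syscall.Stat_t); ok && uint64(st.Uid) == serverUID {
                fmt.Println("owned by", serverUser+":", path)
            }
            return nil
        })
    }

Anything it prints is a file the server process could rewrite if it were ever compromised, which is exactly what you're trying to rule out.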
Please please please use standard command line flag format (getopt_long) instead of Go's flag format. It's super irritating that Go programs do it in their own, less convenient way.
I believe GP's point is that a single hyphen followed by letters should be individual flags (one per letter), and a double hyphen followed by letters is a single flag in long form.
The application in question is using -generate rather than --generate.
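For what it's worth, Go's flag package accepts either one or two leading dashes for the same flag, so -generate and --generate behave identically; what it doesn't support is getopt-style bundling of single-letter flags. A minimal sketch (the flag name is just an example):

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        // One or two leading dashes are equivalent in Go's flag package, so both
        // -generate and --generate set this flag. What the package does not support
        // is getopt-style grouping such as -xvf meaning -x -v -f.
        generate := flag.Bool("generate", false, "generate output instead of reading it")
        flag.Parse()
        fmt.Println("generate:", *generate)
    }

Run it with either spelling and you get the same result; the real gap compared to getopt_long is the short-flag bundling, not the dash count.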
I use my iPhone X regularly at contactless terminals at stores and on the bus. The trick is to double click the lock button first, select a card if you want, then hold it by the reader.
I don’t have any problems with FaceID — it’s more reliable and easier than TouchID for me. I live in a rainy place, though. Obviously a YMMV situation.
With TouchID do you just put your finger on the home button and hold it by the reader?
Interesting, I thought I'd tried that before and it didn't work because it wasn't near a reader. I'll try again, thanks!
I'd argue that it's still a slightly more complicated interaction than with TouchID, though, requiring a bit more mental preparation, perhaps.
As for TouchID: Yes, that's how it works. It's a more fluid action because typically you'll have added your thumb as the fingerprint, and it's easy to hold the phone with your thumb on the home button while holding it near the reader.
> Check out your code into some random directory. Copy the directory to a USB drive and walk it over to an air gapped machine (no wi-fi, no ethernet, clean dev environment installed). Copy the directory to the box, make some small code change and try to build your binary.
I don’t follow. Why is the code change important?
Why not just use the go module cache? Seems like a much cleaner solution with very little overhead.
The idea of making a change and rebuilding is about proving that the build process does not require a network connection and that you have properly resolved the dependencies on that machine.
If you’ve built once using the standard go module functionality then you will be able to rebuild as long as you don’t pull in more dependencies. Naturally you can move the cache around.
This has been an increasingly difficult problem as more and more pipelines move to containers for testing and building. What other solutions have folks come up with?
I'm usually really obsessed with build speeds because I know how long build times can suck the life out of a team. Slow builds cause a lot of negative behavior and frustration. People sit on their hands waiting for builds to finish, many times per day. It breaks their flow and leads to procrastination. If your build takes half an hour, it's a blocker for doing CI or CD, because it's not really continuous if you need to take 30-minute breaks every time you commit something.
Here are a few tricks I use.
- Use a fast build server. This sounds obvious, but people try to cut costs for the wrong reasons. CPU matters when you are running a build. This is the reason I never liked Travis CI: you could not pay them for faster servers, only for more servers, and they used quite slow instances. When your laptop outperforms your CI server, something is deeply wrong.
- Run your CI/CD tooling in the same data center that your production and staging environments live in, and avoid long network delays from moving e.g. Docker containers or other dependencies around the planet. Amazon is great for this as it has local mirrors for a lot of things that you probably need (e.g. Ubuntu and Red Hat mirrors).
- Use build tools that do things concurrently. If you have multiple CPU cores and all but one of them are idling, that's lost time.
- Run tests in parallel. If you do this right, you can max out most of your CPU while your tests are running.
- Learn to test asynchronously and avoid sleep or other stopgap solutions where your test is basically waiting for something else to catch up, blocking a thread for many seconds while doing absolutely nothing useful. People set those timeouts conservatively, so most of that time is wasted. Consider polling instead (there's a sketch after this list).
- Avoid expensive cleanups in your integration tests. I've seen completely trivial database applications take twenty minutes to run a few integration tests because somebody decided it was a good idea to rebuild the database schema in between tests. If your tests are dropping and recreating tables, you are going to increase your build time by many seconds for every test you add.
- Randomize test data to avoid tests interacting with each other. Never re-use the same database ids or other identifiers, and avoid magical names. This lets you skip deleting data in between tests and can save a lot of time. Also, your real-world system is likely to have more than one user, and part of the point of integration tests is finding broken assumptions about people doing things at the same time.
- Dockerize your builds and use Docker layers to your advantage. E.g. dependency resolution is only needed if the file that lists the dependencies actually changed. If you are merging pull requests, you can avoid double work: right after the merge, the branches are identical and Docker can take advantage of that.
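To make the parallelism, polling, and randomized-data points concrete, here is a rough sketch (in Go rather than our Kotlin stack, just to keep it short; the waitFor helper and the indexDocument/documentExists placeholders are invented for illustration):

    package example

    import (
        "fmt"
        "sync"
        "testing"
        "time"
    )

    // Placeholders standing in for the real system under test; in practice this
    // would be an Elasticsearch client or an HTTP API.
    var store = struct {
        sync.Mutex
        m map[string]bool
    }{m: map[string]bool{}}

    func indexDocument(id string) {
        go func() { // simulate asynchronous indexing
            time.Sleep(200 * time.Millisecond)
            store.Lock()
            store.m[id] = true
            store.Unlock()
        }()
    }

    func documentExists(id string) bool {
        store.Lock()
        defer store.Unlock()
        return store.m[id]
    }

    // waitFor polls cond until it returns true or the deadline passes, instead of
    // blocking in one long pessimistic sleep.
    func waitFor(t *testing.T, timeout time.Duration, cond func() bool) {
        t.Helper()
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if cond() {
                return // returns as soon as the condition holds, not after a fixed wait
            }
            time.Sleep(25 * time.Millisecond)
        }
        t.Fatal("condition not met before deadline")
    }

    func TestIndexDocument(t *testing.T) {
        t.Parallel() // let the test runner interleave this test with others

        // Unique id per run so parallel tests never collide and nothing needs to
        // be deleted between tests.
        id := fmt.Sprintf("doc-%d", time.Now().UnixNano())
        indexDocument(id)

        waitFor(t, 5*time.Second, func() bool { return documentExists(id) })
    }

The same shape works in Kotlin with JUnit and a polling helper; the point is that each test uses its own randomized id, runs alongside the others, and returns as soon as the asynchronous work is visible instead of sleeping a fixed number of seconds.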
For reference, I have a Kotlin project that builds in about 3 minutes on my laptop. This includes running over 500 API integration tests against Elasticsearch (running as an ephemeral Docker container). None of the tests delete data (unless that is what we are testing). Our schema initializes just once.
A cold Docker build for this project on our CI server can take 15 minutes, because it just takes that long to download remote Docker layers, bootstrap all the stuff we need, download dependencies, etc. However, most of our builds don't run cold; typically, commit to finished deploy takes around 6 minutes and jumps straight into compiling and running tests. Our master branch deploys to a staging environment. When we merge master to our production branch to update production, the Docker images start deploying almost immediately, because most of the layers it needs were already built for the master branch and the branches are at this point identical. So a typical warm production push jumps straight to pushing out artifacts and is done in 2 minutes.
Not really worth it on our build since compilation is not that slow. In my experience, you get the biggest time savings from optimizing the process of gathering dependencies and making tests and deployments faster.
With other languages that build dependencies from source, doing that in a separate docker build step would probably be a good idea so you can cache the results as a separate docker layer.