The key thing that I don't think enough people understand is that this campaign finance reform that Lessig is working on is an issue for anyone who has ANY cause. Whether your cause is climate change, net neutrality, or anything else, meaningful change on that issue is largely blocked by THIS issue (money in politics).
Lessig's personal path of giving up his work on copyright reform as his "issue" to focus on corruption reflects exactly this understanding - that this needs to be treated as the core problem before any progress can be made on net neutrality, immigration, or other issues.
For this reason I'm very surprised that the tech community, which is so vocal about net neutrality, isn't speaking up more about this corruption. Sure, there are different approaches to fighting it, but I think Lessig's experiment is a great one and definitely worth our money.
1. Enacting this scheme can get a few extra honest politicians into office.
2. The pressure point is "re-election". You don't have to recall them, you just have to make it known to them that your continued support requires a certain kind of voting.
This issue is about limiting the ability to speak. Anyone who is against Super PACs and "getting money out of politics" is arguing that speech needs to be reduced. It's treating the symptom, not the cause. The reason so much money is being spent is that the amount of money the government controls and the amount of regulation it creates make it possible for groups to get significant advantages.
The correct solution is to decentralize power; it's easier to win one election than 50. And to reduce the amount of activity the government does: less pork spending and fewer market distortions.
> Anyone that is against Super PACs and "getting money out of politics" is arguing that speech needs to be reduced.
No, they're arguing that Super PACs distort and limit healthy, democratic speech; that Super PACs are detrimental to speech.
The corollary to your assertion is that Super PACs increase speech. But this 'increased speech' can similarly be 'increased' by foreign political donations, which highlights the absurdity of these 'speech' claims to anyone interested in reclaiming a properly functioning democracy.
His platform is not that speech needs to be reduced. It's that as many parties as possible need to be able to have a voice, and because certain entities can spend dramatically more money than the rest of society, they can drown out the voices of others.
It does reduce the ability for people to speak. You said it yourself, it makes it so they can't drown out the voices of others.
People are not equal and we need to recognize that; some are famous - their speech is heard by more people than mine - some are rich, some are well connected, some are beautiful, some own a newspaper, etc. Not everyone has the same voice, and that's ok.
If money is speech, and speech is free, where's my free money?
If money is speech, especially with regard to politics, lack of money is the stifling of speech (and to an extent, indirect disenfranchisement). Ergo, through not giving those without money any funds with which to perform their speech, the government is violating the first amendment.
> The correct solution is to decentralize power; it's easier to win one election than 50. And to reduce the amount of activity the government does: less pork spending and fewer market distortions.
The only way to implement any solution - whatever solution you think is needed - is to implement Lessig's solution first.
How do you plan to enact this sort of reform? If your plans include a popular movement, you will greatly benefit by having your voice amplified vs that of the financial interests... which is exactly what this issue is about!
I think one of the meta-takeaways is that understanding the fundamentals of web caching can help with your general CS knowledge ("There are only two hard problems in Computer Science: cache invalidation and naming things." -- Phil Karlton).
Looking at Apache, we see a few strategies:
* Include last-modified metadata
* Include content metadata (eTag/md5 of content)
* Include explicit expiration date
* Include a max-age
* Include metadata about who can cache (public/private/no-cache; private means browsers can cache but shared proxies cannot)
These approaches could be used when designing data flows with Memcache, Redis, etc.
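As a rough sketch, here is what assembling those headers might look like in Python (the header names are standard HTTP; the body, dates, and the MD5-as-ETag choice are just illustrative assumptions):

```python
import hashlib
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def cache_headers(content, last_modified, max_age=3600):
    """Build the caching headers listed above for one response body."""
    expires = datetime.now(timezone.utc) + timedelta(seconds=max_age)
    return {
        # Validation: lets the client ask "has this changed since X?"
        "Last-Modified": format_datetime(last_modified, usegmt=True),
        # Validation: a fingerprint of the content (an MD5 here)
        "ETag": '"%s"' % hashlib.md5(content).hexdigest(),
        # Expiration: an explicit date...
        "Expires": format_datetime(expires, usegmt=True),
        # ...and a relative max-age, plus who may cache it
        # ("public" means shared proxies may cache too)
        "Cache-Control": "public, max-age=%d" % max_age,
    }

headers = cache_headers(b"hello", datetime(2012, 1, 1, tzinfo=timezone.utc))
print(headers["ETag"])  # the quoted MD5 of the body
```

The same split - validators (Last-Modified/ETag) vs. expiration (Expires/max-age) - carries over to Memcache/Redis key design.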
The best way I've seen for dealing with cache expiry, which the article does not talk about, is to use version numbers on assets. We found this to be especially important with javascript, css, etc -- if all of that stuff doesn't expire at the same time, it can hose the layout of your site.
Also, there may be many layers of caching between you and the user: not only HTTP caching in the browser, but also any CDNs (Akamai, etc.) and sometimes even caching reverse proxies inside corporations.
At my previous job, we handled the versioning with deployment-time rewriting of the assets included in the base page to include the version number (as tagged by the build software with branch name + build number).
That said, enabling browser side caching was a huge win for page speed on the site.
One thing I don't understand. If the server has asked the client to cache an image for a year, and the image is indeed updated in that time, is there some way of telling the client to download that image anyway?
I'd take it to Google, but I have no idea how I'd ask that in Google query form.
This is actually referenced in the article. You can use the Last-Modified date and the server will either return a 304 (Not Modified) or the modified image if it is newer.
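The server-side decision can be sketched like this (a simplified illustration, not any particular framework's API - function and variable names are made up):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def respond(image_mtime, if_modified_since=None):
    """Return (status, body_needed) for a conditional GET.

    If the client sent If-Modified-Since and the image hasn't changed
    since then, a bare 304 is enough and no body is re-sent.
    """
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        if image_mtime <= client_time:
            return 304, False   # Not Modified: client reuses its cached copy
    return 200, True            # send the (possibly newer) image

mtime = datetime(2012, 6, 1, tzinfo=timezone.utc)
print(respond(mtime, "Fri, 01 Jun 2012 00:00:00 GMT"))  # (304, False)
```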
I read that, but if you say "this image won't change for exactly one year" and the client doesn't even request that resource from the server any more, how do you start that dialogue again?
pork has suggested adding a junk parameter to the end of a GET request to disrupt the cache; I'll need to read into this. I'm interested in optimizing web speed as much as possible, and caching has always been something I've understood poorly.
Yep, that's the problem with long expiration dates -- the client may never check again (that's what we wanted, right?). The workaround is to request a new url which restarts the process.
Separately, the easiest way to get started with all these optimizations is to run the page speed check online:
I've actually been playing around with this stuff all day, pretty much since my last comment above. I've enabled smarter caching on my website, replaced multiple image requests with a single spritesheet, optimized my images, and cleaned up my CSS file to remove unused code. Google's PageSpeed has been an invaluable tool, as well as webpagetest.org which breaks down the data in an intelligent way.
Turns out Google Analytics is actually doubling my page load time, but the data is too valuable to give up.
In HTTP, since it's stateless, you don't "tell" the client anything without it first asking. The usual way to bust the cache is to add a junk parameter to the end of a GET request.
I see. I assume you would change the html code to say <img src="image.png?cache=no"> or something like that to force the browser to redownload it? What if the html page itself were cached for a year? Is there an Apache setting that can give a global "no caching" command, or something like that?
Yep, exactly -- not only can the images be cached, but the HTML too!
The ideal way to do it is have the "loader file" (index.html) only cached with last-modified date, so as soon as it changes the client is aware. The client requests the file each time, and is returned the full file or a simple Not Modified response.
Within the file, you have references to permanently-cached, versioned resources (<img src="/images/foo.png?build=123" />). If the cache expiration is far enough away, the browser won't even issue the request to check for a new version.
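The versioning step itself is trivial - something like this (the build number is a made-up value your build tooling would supply):

```python
def versioned(url, build):
    """Append a build number so every deploy yields a brand-new URL.

    A new URL is a cache miss everywhere, which effectively "busts"
    any permanently-cached copy of the old asset.
    """
    sep = "&" if "?" in url else "?"
    return "%s%sbuild=%s" % (url, sep, build)

print(versioned("/images/foo.png", 123))  # /images/foo.png?build=123
```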
Some caches won't store URLs with query params, so you might use rewriting rules to change foo.png to foo.123.png. This rewriting is done automatically for you by the Google PageSpeed module for Apache.
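A rewrite rule along these lines could map the versioned name back to the real file on disk (a hedged sketch for Apache mod_rewrite, not the PageSpeed module's actual configuration):

```apache
# Serve "foo.123.png" (any build number) from the un-versioned "foo.png"
RewriteEngine On
RewriteRule ^(.+)\.\d+\.(png|jpg|gif|css|js)$ $1.$2 [L]
```

The browser sees a unique URL per build, while the filesystem keeps a single copy of each asset.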
It sounds like a much deeper subject than I first appreciated. I'll definitely read up on this. Using URL rewriting with caching is interesting; I've not seen that before.
I work with one lady who complains about a slow-loading jQuery slideshow, and smarter caching may very well be the solution (at least, after the first load).
I've experimented and managed to shave 2 seconds off a client's website upon reloads - that's significant! Still playing with it, but I've already learned a lot.
I know you say that in jest, but I would say that it is still good to know the underlying technology. It is much easier to debug issues in the future once you understand them.
Concur - I think that, once you have a certain amount of experience built up working in frameworks like Rails, you can only go so far before you hit a wall. At that point, it's necessary to start learning the protocols and the lower level stuff to advance your understanding and your craft.
To be fair, nor is this. We use CodeMirror (http://codemirror.net), which at least gives you line numbers, syntax highlighting, braces matching, and (coming soon) autocomplete. Sure, it's no [vim|emacs|Visual Studio|Eclipse|nano], but it is better than just "a text field on a web page".
Yes, I just looked at the form again (the link still works) and it asks:
University, Major, GPA, strongest programming language, second programming language, and preferences for location, and then a section where you rate yourself on a five-option scale:
Not my thing (none)
Can make do (fair)
Comfortable (good)
Comes easily (better)
World Class (best)
On the following:
1) Applications and Services: this is Google's term for the software that is directly visible to end users. People who focus on applications and services typically work on improving our capabilities in a variety of areas that span the range from new features to increasing performance and efficiency at Google scale. They will be tasked with devising and building new approaches to Google's problems, and exploring their effectiveness.
2) Systems: People who have a systems focus are oriented towards behind-the-scenes software and systems, often building them from underlying components and services. Systems work spans from platforms (hardware, OS, networking) to infrastructure (shared services such as storage, cluster management) and everything in between.
3) Sys-admin: Our system administrators keep all of Google's systems running, and help deploy new ones. They deal with issues involving single machines to those involving huge numbers. They work with native Linux environments, and Google extensions and services.
4) Verification and Test: Our test teams help make our systems resilient and reliable - we put a lot of effort into this. Building world class applications at world class scales doesn't happen by accident. It takes insight, innovation, and precision to verify our systems perform as expected.