This is something that's been going on for a while - Google killing small web apps: converters, calculators, movie listings, IP finders, weather stats, stocks. It's not all low-hanging fruit. I'm not saying they shouldn't be doing this, nor that it's intentional. Their goal is to be the best search engine, which means connecting searchers with answers as quickly as they can. But even so, it sucks for the web apps that get made redundant by Google.
Also, it's interesting to compare this with, say, Yahoo's approach. Yahoo would have put an "IP" widget and a "weather" widget on their portal homepage. Google waits until the user searches for the info before giving it to them - which keeps their homepage clean and more importantly keeps their message strong "we do search well", while Yahoo's always seemed to be "we do a whole bunch of stuff, some of which you may need". I know Google/Yahoo comparisons aren't really du jour, but still it's interesting.
This is why I mentioned that it's not all low-hanging fruit. Movie listings? That requires feed integration, handling a lot of data, and non-trivial presentation. Same with weather: getting a good weather app is not 10 seconds of coding. Some, like IP address, are simple things, but even so, whatismyip.com built a huge range of products around that one simple service.
I always hated how weather sites (e.g. weather.com) couldn't just detect my approximate location from my IP and required me to enter my zip code. Google fixed that.
Pretty sure they got that after Google. For years, wunderground was the best of a bunch of terrible web sites. They're still about 40% ads by pixel, though, and have a hugely cluttered UI. I'll use them as a second step (Google's "detailed forecast" link) after typing "weather" into the Chrome address bar. But broadly, they still suck compared to Google.
Actually, the only way for a "my IP" application to make some money is probably to be indexed as the first result in Google. At least, I never remember the domain of any of those 200 trivial apps and just search on Google. So it makes a lot of sense that they show you the result when you search for "ip" or "my ip".
I bet some of them find alternative ways to get visitors and that a decent number of people use the bookmark feature. Now, how valuable a visitor is to a page like that is another matter.
With the more significant things like movie listings and weather... it's about Google + the world's information. They want it to flow through them, be indexed by them, be found and offered up by them.
It still disadvantages the people who want to use that service while there is nothing around to do the job. Take Google recently killing Code Search: http://googleblog.blogspot.com/2011/10/fall-sweep.html. One of the commenters had been building their own code search platform before Google did. Google entering the market meant that it was no longer viable, though: http://news.ycombinator.com/item?id=3112444. A new code search platform will no doubt pop up, but it will take a while to get something that works well.
This is the problem with a giant company stepping into new areas: it often leaves destruction in its wake.
DDG in general seems to do a much better job of second-guessing what I'm looking for. Amazing how good it's become and that Google is (in my experience at least) playing catch-up.
Yeah, and I think Google has just accidentally "killed" whatismyip.com. Not totally, but I'm pretty sure the site's traffic will decrease significantly.
Remember, if the answer is delivered over HTTP, the reported IP may be the IP of your ISP's transparent proxy server. If you want the IP of your NAT box, you need a what-is-my-IP service whose response is delivered over HTTPS, which a transparent proxy can't silently intercept.
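An easy sanity check is to ask the same question over both schemes and compare - assuming a service like icanhazip.com that answers on plain HTTP as well as HTTPS (just an example endpoint, not one from this thread):

  # If these two differ, the plain-HTTP answer is probably your ISP's
  # transparent proxy rather than your NAT box.
  curl -s http://icanhazip.com
  curl -s https://icanhazip.com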
This seems like a case of some things being features, not applications. Entire web sites built just to report your IP back were probably going to be replaced by one thing or another eventually.
Both Google and Apple (and most other companies) are smart enough to see that if a simple feature is heavily used and the experience of using it can be improved for their users, they may want to make it a "native" part of their products. Let's face it, this is a better experience for that search, and you can still go to the indie sites if what you need isn't covered by it.
If your site is so sparse that Google can ruin you just by handling a search query, your business model was broken or non-existent. There must be something they can do with all that traffic data to differentiate. Where are the aggregate statistics?
Any of the bigger ones could spring off into an ISP review site.
OK, so I have a curl/awk one-liner that gets me my IP in shell scripts using checkip.dyndns.org. It's super simple because the results from checkip are super simple.
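For reference, it's roughly this (assuming checkip still returns its usual one-line HTML with "Current IP Address: x.x.x.x"):

  # Strip everything around the dotted quad in checkip's response.
  curl -s http://checkip.dyndns.org | awk '{ sub(/.*Current IP Address: /, ""); sub(/<.*/, ""); print }'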
Anyone want to take a stab at this for Google's result page?
Google really doesn't like it when you script their search. Better to use one of the many, many services designed for this purpose, or set one up yourself in roughly one line of code.
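As for setting one up yourself: if you happen to have Ncat (from the nmap package) on a box with a public address, a throwaway version really is about one line - just a sketch, nothing you'd leave running:

  # Answer every connection with the caller's address; Ncat exports it
  # as NCAT_REMOTE_ADDR for the --sh-exec command.
  ncat -lk 8080 --sh-exec 'printf "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\n%s\n" "$NCAT_REMOTE_ADDR"'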
How would a modify-headers add-on be able to make the server think that you are coming from a different address? It's not as if the browser sends the origin address as an HTTP header.
It doesn't have to. It's part of the IP packet which contains the TCP packet which contains the request headers.
Are you sure that you are not using a proxy server at the address Wolfram gives you?
Edit: On second thought, you could try to fool server-side detection by setting a non-standard X-Forwarded-For header, but a "what is my IP" service shouldn't trust that and should just report the real remote address.
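Easy enough to test with curl - if a service echoes back the spoofed value instead of your real address, it's trusting X-Forwarded-For when it shouldn't (203.0.113.7 is just a documentation address):

  # A well-behaved service ignores this header and reports the real remote address.
  curl -s -H 'X-Forwarded-For: 203.0.113.7' http://checkip.dyndns.org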
Wow, I like the IPv6 and IPv4 breakdown along with the ISP information. Wolfram would win, but as usual you can't scrape it with the sed tricks of earlier posts... or can you?
I recently noticed that searching for dictionary words, using the old define:something trick, or queries like "ubuntu release day" or "evanescence genre" returns related information or a 'best guess'. Nifty.
I'm slightly amazed at how inconsistently these special queries work. For international users (I simply assume it's not just me), this trick only works if I add &hl=en to the URL. Any explanations?
It's also great news for malware, which usually checks for its remote IP. But maybe that will be another way for Google to detect whether the local machine is infected in the near future.
Malware usually opens a reverse (TCP) connection and thus does not require the remote IP of the infected machine. It only needs to know the IP or domain name of the server it wants to communicate with.
Regularly reversing malware samples, we still see a lot of malware getting its remote IP from remote services. This is even used by some recent malware to update the bootstrap DHT with their own IP... In that case, they don't even need to contact the C&C directly.
I don't get why my previous comment was down-voted ;-)
ShieldsUP! - a free service from Gibson Research Corporation - will tell you your IP address and a whole lot more:
https://www.grc.com/x/ne.dll?bh0bkyd2