Offering apps only two extremes, either my sub-metre location or no location at all, seems a bit much. A more reasonable default for location sharing would be something like the rough suburb I'm in. That covers most use cases: showing the closest store locations, delivery options, dating apps showing nearby matches, etc. It's only the occasional navigation app that needs to know exactly where the user is.
I specifically get directions “home” to a neighbor’s house a bit away. Once I know where I am, I kill Google Maps. I’m sure they could, if they wanted to badly enough, figure the whole thing out, but at least my profile has a certain amount of uncertainty (though what utility that has is probably debatable).
I would love to be able to fuzz my location within a certain (randomized?) radius of my home for certain apps. Strava has a ‘privacy circle’ that essentially accomplishes this when sharing GPX tracks of runs around one’s home. An OS-level feature would be fantastic in many cases.
Your location can be derived from the wifi networks and cell towers your phone can see, so even without using GPS.
If you enable Google's location services, it keeps a history that is typically accurate to within 50 meters and a few minutes (at least in a city).
What google tracks if you disable location services, I don't know.
Just so you know, if you're using regular Google-infused Android, then depending on how you ‘kill’ Maps it may still be running in the background. And it may start in the background without you running the app.
I'd also bet that Google's other apps transmit your location anyway―if only because other apps use it too―and that it's not necessarily reflected in the timeline in the web profile.
I have a different maps app (Yandex) that keeps popping up in the process list despite me killing it off with ‘force stop.’ Probably not the only one, for that matter.
Flicking an app in the recent apps list doesn't close its background processes. And you won't see it there when such a process runs again. See e.g. the ‘OS Monitor’ app for the actual list of processes (for Android ≤6).
Google's apps are likely even more privileged. Play Store hogs the processor and network every time I enable wifi. On a past phone, Google Maps also ran conspicuously on boot and, iirc, when wifi was turned on.
Something might've changed in newer versions of Android, dunno. But I doubt that Google would limit its own abilities.
> I would love to be able to fuzz my location within a certain (randomized?) radius of my home for certain apps
I'm not sure about the implementation details, but with location you still want something reasonably accurate, so the random radius can't vary too wildly. And after you collect enough data points, couldn't you infer the real location from that circle?
That's a good feature, but quite different: if the user is at home when they first need to use the app, it reveals where the user lives and possibly personally identifies them.
Your IP address, when on WiFi, can almost certainly be connected to a specific street address by a data broker. That doesn't mean location shouldn't be limited in resolution, but there are other ways to get the same thing.
Isn't it usually SSIDs that are used for location mapping of WiFi access points? The mapping cars gather that information when they're doing Street View stuff.
There is a difference between sharing your IP address with a data broker to maybe get your location (which may not even be possible under GDPR?), and having your exact GPS coordinates sent directly to your database.
Oh, you're right of course; I was misremembering: they won't be able to get the SSIDs of nearby wifi networks. However, a VPN is a solution to the IP address problem.
People are actually working on something very similar to this in research [1]. By applying random noise to location data the user's individual privacy can be protected while still allowing for collection of usage (or in this case location) statistics etc. This is the key idea behind local differential privacy (which Apple also uses to collect anonymous statistics on usage data [2]).
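For a sense of what that looks like, here is a minimal sketch (my own, not taken from [1] or [2]) of the "planar Laplace" style of noise often used for location privacy: the reported point is the true point plus a random offset whose typical size is set by a privacy parameter epsilon.

// Minimal sketch of planar-Laplace location fuzzing (illustration only, not the
// exact mechanism from the cited papers). Smaller epsilon => more noise => more privacy.
function fuzzLocation(latDeg: number, lonDeg: number, epsilon: number) {
  // Radius in metres drawn from Gamma(2, epsilon), i.e. the sum of two exponentials,
  // which matches the planar Laplace radial density eps^2 * r * exp(-eps * r).
  const u1 = 1 - Math.random();
  const u2 = 1 - Math.random();
  const r = -(Math.log(u1) + Math.log(u2)) / epsilon;
  const theta = 2 * Math.PI * Math.random(); // uniform direction
  // Convert the metre offset to degrees (rough equirectangular approximation).
  const metresPerDegLat = 111320;
  const dLat = (r * Math.sin(theta)) / metresPerDegLat;
  const dLon = (r * Math.cos(theta)) / (metresPerDegLat * Math.cos((latDeg * Math.PI) / 180));
  return { lat: latDeg + dLat, lon: lonDeg + dLon };
}

// With epsilon = 1/300 the offset averages ~600 m, i.e. roughly "suburb" resolution.
// Caveat (as raised above): many reports from the same true location can be averaged
// back towards it, which is why repeated queries are treated as spending privacy budget.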
Yahoo's Fire Eagle was an attempt to make a "location broker" for apps. You could allow applications as much or as little detail as you liked. I think it was a product before its time.
Would this be possible with today's phones and hardware, etc.?
> Is the 'light' reflecting off the asteroid emitted from the sun? I had never seen, or even knew, that radar could distinguish detail like what is shown here.
Yes, it's emitted from the sun; that's clear given that the time-lapse shows "Inner moon eclipsed" as it passes behind the side of the asteroid that is not lit.
I was always under the assumption that radar images didn't rely on a light source, since technically the radar is the source, so the impression of a light and a dark side would be based purely on the motion towards and away from the receiver.
But you have pointed out something interesting that would appear to indicate otherwise. The clear rotation, independent of the source, already shows that I am wrong on that.
You are correct, the "light" is the transmitted radar pulse from Goldstone, not from the sun.
They do all sorts of fancy signal processing to get this sort of resolution.
Because received radar power falls off as 1/(distance^4), the inverse square law applied twice (once on the way out, once for the echo), this is hard to do at astronomical distances.
The differential between transmitted and received power can be on the order of ~10^15.
Goldstone transmits a 500kW radio pulse but will probably get nanowatts back, which is amplified millions of times by the big dish, amplifiers and signal processing.
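As a rough sanity check of that ratio (the received power here is an assumed round number, not a published figure):

500 kW / 0.5 nW = (5 × 10^5 W) / (5 × 10^-10 W) = 10^15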
There are two things here: first, the radar images are created with radio waves emitted by the radar installation; second, the visualisation shows a light source. They are unrelated to each other. I assume that the light source was added during rendering of the film and that it approximates the sun.
I don't either :) but sort of a silly question I guess. I presumed it was the sun due to the angle of illumination as well as common sense that the sun is probably the brightest object in the solar system along multiple bands of light; I was really just amazed at the relatively large amount of detail in the image.
Well, if the illumination was from the radio telescope, then the moon would be hidden behind the parent asteroid when it is eclipsed. We would not see it go dark, it would just be obscured.
That's just an artifact of the visualization; the point of view is above the north pole of Florence, which points decidedly away from Earth (probably +/- a few degrees from normal to the plane of the ecliptic).
Paradoxically, respecting the wishes of the dead is more about empowering the living. If there are no such guarantees then people alive will take actions to enforce their wishes past their death.
In the context of Wills, that would mean more people would transfer their estate before their death. In the context of Facebook, it would mean some people may delete certain data (or in some cases delete their entire account) if they don't want the data discovered after they die.
I don't know if it is related to the constitutional issue, but someone was convicted of possessing Simpsons pornography because it depicted child characters from the show.
The UK, Canada, NZ, etc., and other countries (to a weaker extent) have similar laws. It is ridiculous in my view.
Depends on the application of this, i.e. whether the total is more important or the individual values are more important.
Example: you're filling out a timesheet for a contracting job and you worked 8 hours across several different tasks for your client, but your client's software rounds everything to the nearest hour. If your pay is determined by this data entry, it makes sense to use an algorithm like this; if not, it may make sense to just round normally.
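To make that concrete, here is a minimal sketch (mine, not the article's, and assuming the algorithm in question is of the sum-preserving kind) of the largest-remainder method: round everything down, then hand the leftover whole units to the entries with the largest fractional parts, so the rounded entries still add up to the rounded total.

// Sketch of largest-remainder rounding: the results sum to the rounded total.
// e.g. roundPreservingSum([2.4, 2.4, 3.2]) -> [3, 2, 3], which still sums to 8.
function roundPreservingSum(values: number[], step = 1): number[] {
  const scaled = values.map(v => v / step);
  const floored = scaled.map(Math.floor);
  // Whole units still owed so that the parts match the rounded total.
  let deficit =
    Math.round(scaled.reduce((a, b) => a + b, 0)) -
    floored.reduce((a, b) => a + b, 0);
  // Hand the leftover units to the entries with the largest fractional parts.
  const byRemainder = scaled
    .map((v, i) => ({ i, frac: v - floored[i] }))
    .sort((a, b) => b.frac - a.frac);
  const result = [...floored];
  for (const { i } of byRemainder) {
    if (deficit <= 0) break;
    result[i] += 1;
    deficit -= 1;
  }
  return result.map(v => v * step);
}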
A lot of research in astronomy is conducted to figure out how bright certain objects are. One classic example is Cepheid variable stars, which pulsate at a rate that correlates with their brightness.
He uses the following example for when to throw your own errors:
var Controller = {
  addClass: function(element, className) {
    if (!element) {
      throw new Error("addClass: 1st argument missing.");
    }
    element.className += " " + className;
  }
};
I don't think this is very good, as a native error will already have all this information in a stack trace, and if you're running Chrome dev tools with 'Pause on exceptions' you'll be shown the exact place where it fails. Additionally, you now need to keep the function name in sync with the string (your IDE / build tool will not tell you if they get out of sync).
Better cases for custom errors are situations where a native error will not be thrown (see the sketch after this list), such as:
- in his example method: if className is undefined
- valid objects in a state you don't expect
- switch statements that don't match any expected case
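A rough sketch (mine, not from the article) of what that could look like for the first and last cases:

// Throw where JavaScript would otherwise fail silently rather than loudly.
function addClass(element: HTMLElement, className?: string): void {
  if (className === undefined) {
    // Without this check, element.className would silently gain the text "undefined".
    throw new Error("addClass: className is missing.");
  }
  element.className += " " + className;
}

function describeShape(kind: string): string {
  switch (kind) {
    case "circle": return "round";
    case "square": return "four equal sides";
    default:
      // No native error occurs for an unexpected value; surface it explicitly.
      throw new Error("describeShape: unexpected kind: " + kind);
  }
}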
First, they don't act like that, and if they do that's dumb.
Second, how is that different from writing in a statically-typed language like Haskell/Rust/what have you and then compiling down to assembly which is for all intents and purposes dynamically typed? We do it hoping to gain safety from the typing, but do we lose the type safety in the machine code? (We don't, the type-safe language rules out compiling to certain classes of erroneous code.)
Yes, that's what I vaguely recalled. But we don't have enough observations to test that hypothesis. I wonder, though, whether alternate-path gravitational lensing could provide enough snapshots of some other galaxy to be useful.
Not entirely sure, but could be using the data from the Orbiting Carbon Observatory satellite[0].
This satellite orbits the Earth about 16 times per day, which would not be enough granularity for the smoothness of this video, so I assume it's been combined with other weather data and advanced modelling to provide the smooth interpolation.
OCO blew up on launch, but OCO-2, its successor, has been producing data since 2014. This nature run was from 2006, which predates the OCO-2 era, and predates GOSAT, another CO2 satellite. It may overlap partly with SCIAMACHY, an ESA mission.
But in summary, I don't think the video was heavily constrained by actual CO2 observations, which are done only at a small number of sites on the ground (TCCON or FTIR). The video was probably constructed based on models of plant respiration (which is observed, indirectly, by remote sensing), winds (ditto, of course), and ground emitters.
OCO-2 has offered more significant constraints on global CO2, with a roughly 2km x 2km footprint (per pixel), 1ppm accuracy (in a ~400 ppm quantity), and global coverage every 16 days. There are some videos of observations (not models) at:
Sloppy of me! You're right, of course. I knew people who went to see the launch at Vandenberg, and (being on the science team) they were crushed [edited to clarify: metaphorically! -- having spent so much hard work in preparation for eventual results] just a few minutes after launch, so I put it into the wrong mental category.
You can also use 'type assertions' to minimise restructuring, e.g. the following is the same number of lines as your first example (and compiles to identical code):
const a = A();
a.b = new SubtypeOfB();
(<SubtypeOfB> a.b).attributeOfSubtypeOfB = 123; // works
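One caveat: the angle-bracket form clashes with JSX, so in .tsx files (and as the more common modern style) the equivalent 'as' form is used instead:

(a.b as SubtypeOfB).attributeOfSubtypeOfB = 123; // same effect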