You forgot to mention choosing a country that doesn't have an extradition treaty or similar with your own. So for me in the UK, it would rule out most of the EU and North America.
I read this comment when this story was on the front page, and it's occurred to me to ask: what would you consider the antithesis of Wired? You know, something you'd recommend someone read...
I would begin by figuring out a more secure, desired configuration before trying to automate it. (Especially given what is, IMHO, a very steep learning curve for Chef & Puppet.)
I couldn't figure out how the quadrocopters were coordinated. Turns out it's done with a high-speed motion capture system. More here: http://www.flyingmachinearena.org/
I've ranted on this topic before [1]. I still think it's a bit dishonest... academics shouldn't promote their work to general audiences without properly noting the caveats. In this case... they're using a ~$50k Vicon motion capture system that returns millimeter-precision pose estimates at ~1kHz.
Without the Vicon and (probably) a central control PC, this feat would be SUPER difficult -- by ~1-2 orders of magnitude. The real problem is: when someone does solve that herculean problem, the general public won't care. They'll just say, "eh, we've seen that before."
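For readers unfamiliar with the setup being described: the external mocap system streams precise poses to a central PC, which closes the control loop off-board and radios commands to each vehicle. A minimal sketch of that architecture (all names and the toy proportional controller here are hypothetical illustrations, not the group's actual code; the real system would talk to the Vicon SDK and a radio link):

```python
# Hypothetical stand-in for the mocap feed: the real Vicon SDK would
# supply millimeter-precision poses at ~1 kHz for each tracked vehicle.
def read_mocap_poses():
    """Return {vehicle_id: (x, y, z)} as the external cameras see it."""
    return {"quad1": (0.0, 0.0, 1.0), "quad2": (1.0, 0.0, 1.0)}

def proportional_command(pose, target, kp=2.0):
    """Toy proportional controller: command a velocity toward the target."""
    return tuple(kp * (t - p) for p, t in zip(pose, target))

def control_step(targets):
    """One iteration of the centralized loop running on the control PC.

    Perception (read_mocap_poses) is entirely off-board, so the
    quadrotors themselves only need to execute the radioed commands.
    """
    poses = read_mocap_poses()
    return {vid: proportional_command(pose, targets[vid])
            for vid, pose in poses.items()}

commands = control_step({"quad1": (0.0, 0.0, 1.5), "quad2": (1.0, 1.0, 1.0)})
```

The point of the sketch is how little intelligence has to live on the vehicles: remove `read_mocap_poses` and each quadrotor suddenly needs to solve self-localization on-board, which is the far harder problem.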
Who cares what the public thinks? The funding and eventual commercialization will come from people who do understand what is going on here.
I'm no expert, but I would imagine that research using remote viewing and centralized processing is just a first step that could make it easier to validate particular solutions, which can then be "baked into" self-contained platforms.
It's the same as industry... good PR goes a long way toward acquiring funding. Would you be upset if (say) Apple, Google, or Microsoft's PR machine was intentionally deceiving folks? Academics are even more sensitive to this -- it's why we're always railing against "shitty science reporting."
Also, I suggest reading the article. I am an expert... I greatly admire this group's research(!!), which is why I don't mind giving 'em some tough love. But I also want the group that "cracks the egg" for self-localization and decentralized control to get their deserved limelight. This video makes that less likely.... as evidenced by the fact that so few people in this thread (probably) know what a Vicon is, how it works, or that it's being used.
What if they demo it in a non-laboratory setting, especially one where noisy backgrounds would make Vicon impossibly difficult? If I understand correctly, the main problem with Vicon is that it can't really be used in a real-world scenario, right? So if the demo shows what can be done with the non-Vicon solution, it should still be effective.
Projects using quadrotors to do things like fly cooperatively and do acrobatics [1,2] will often use high speed motion capture systems with multiple fixed cameras. This decouples the problem of perceiving the environment from the problem of responding to the environment.
More broadly in robotics there are plenty of people working on ground-based robots that don't rely on multiple fixed high-speed cameras - self-driving cars [3] would be one example. There are also people working on cooperative ground-based robots that don't rely on multiple fixed high-speed cameras - for example, the RoboCupSoccer Middle-Size League [4]. And there are people working on quadrotors that don't cooperate or do acrobatics and who don't rely on fixed vision systems - for example, people working on autonomous quadrotor mapping [5] and using IMUs and on-robot vision [6].
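The decoupling described above can be pictured as a common state-estimation interface (the names here are illustrative, not from any real framework): the controller only ever sees poses, whether they come from fixed external cameras or from on-board sensing, so the two research problems can be attacked separately.

```python
from abc import ABC, abstractmethod

class PoseEstimator(ABC):
    """Anything that can tell a controller where the vehicle is."""
    @abstractmethod
    def pose(self):
        ...

class ExternalMocap(PoseEstimator):
    """Fixed multi-camera system: precise and fast, but lab-bound."""
    def __init__(self, feed):
        self.feed = feed
    def pose(self):
        return self.feed()

class OnboardEstimator(PoseEstimator):
    """IMU + on-robot vision: self-contained, but a much harder problem."""
    def __init__(self, imu, camera):
        self.imu, self.camera = imu, camera
    def pose(self):
        # A real implementation would fuse these sensors (e.g. with an
        # EKF); averaging the two toy readings just stands in for that.
        i, c = self.imu(), self.camera()
        return tuple((a + b) / 2 for a, b in zip(i, c))

def altitude_error(estimator, target_z):
    """Controller code is identical regardless of where poses come from."""
    return target_z - estimator.pose()[2]

mocap = ExternalMocap(lambda: (0.0, 0.0, 1.0))
onboard = OnboardEstimator(lambda: (0.0, 0.0, 0.9), lambda: (0.0, 0.0, 1.1))
```

Swapping `ExternalMocap` for `OnboardEstimator` leaves `altitude_error` untouched, which is the sense in which the acrobatics research can later be "baked into" self-contained platforms once on-board localization catches up.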
Yes, but none of them are (permanently) aerial and some of them use internal combustion for power. Computation requires weight and energy, and computer vision requires lots of computation.
Yeah, that kind of restricts it to the lab, unfortunately, although a secondary camera drone meshed with the others might serve. Lighting would be a problem... though they could use IR beacons or something.
That must be the purpose of the spherical marker balls on the "pendulum" (looks more like a rod???), two of them dividing it into three segments. I wondered what they were for; I'm guessing they help with the motion capture/tracking.
That's a great question. Right now you could include the git version in the comment of your DevJoist commit. In the future we plan to have much better integration with git/GitHub and other commonly used developer services like Heroku. Part of that will be opening up an API so that you can integrate DevJoist into your workflow however you like.