I always felt the Atrix and this sort of modular design was the future. Google effectively killed it with their push for cloud-synced devices, and with Sundar Pichai killing Android's laptop development project. I love the idea of Project Ara, but with Google taking it from Motorola, I fear it's going to be locked down into proprietary services, like Google does with all their products.
If they can get things like processing power to be modular, so you can have your apps and data on your phone, but be able to plug in and use desktop power when plugged into a larger station, that'd be when that sort of solution would really take off. I want everything on my phone, but for that phone to access desktop level UI and processing capability when I'm at my desk.
Meh, the Android laptop stuff was long dead before Pichai. Rubin was even stonewalling extending Android into tablets back in the day by insisting that Android was for phones.
And the Atrix ended up going nowhere, because Motorola could not keep their designers in check. The end result was that no two phones could share a dock, as the ports were spaced differently. Their last, "universal", dock basically had two rubberized wires that the user had to manually fit each time.
As for Ara being locked down, you may be right. So far it seems that modules go through a Google storefront, and they have yet to confirm or deny that trading used modules will be an option. Never mind that it seems Google will be the lone provider of the endoskeletons. I find myself somewhat reminded of the early days of the PC, back when it was the IBM PC. I wonder if there will be a clean-room endo firmware offered...
Sure, the Webtop design had implementation problems, but the concept was solid. I think its biggest deathblow was that it was released before Android supported large screens, so it was gimped to a version of Linux with only a couple of pre-installed apps, rather than the tablet UI that later versions of Webtop could provide, by which point the concept had already been dropped from future models.
Yeah, the Google that is making Project Ara is a long way away from the Google that embraced open protocols and standards, and open sourced platforms like Android and Chromium.
I never understood why Chrome OS had to be different from Android. I've also never had a Chrome OS device; maybe that's why I simply don't get it. Can anybody shed light on the differences?
As noted, an Android laptop was in the works. I think an Android laptop with Chrome browser, with extension support, would've been the holy grail. But Sundar Pichai wanted to protect his Chrome OS project, so when he took over Android, he killed that project. (And yes, I can dig up a reference to this, if anyone questions it.)
Chrome OS and Android may seem similar from a distance – they’re technically similar and do similar things, but they have very different goals.
Android was trying to imitate iOS and needed to establish itself and survive in a harsh market environment. It needed to be flashy, appeal to elitists, be affordable enough for the "poorer" market segments, foster an economy around "apps", etc. The result was an unholy monstrosity that, nevertheless, managed to beat iOS (on market share) as it was supposed to.
Chrome OS instead was an attempt at reducing a computer as much as possible to being just an interface to the internet – merely a technical artefact required to interface with the digital world, since humans don’t happen to have WiFi built in. Sort of like Google Glass, but envisioned in a world where smartphones did not yet exist. Market concerns and technical viability were secondary. The result is something that functionality-wise works as well as current technology allows, but is nevertheless the only laptop out there that truly "just works". If anything ever breaks, you can go to the store, buy another one and have it work exactly the same as your old one – just type in your login credentials and everything’s back to where you left it.
I agree that in a perfect world both these "things" should be achievable by a single product, but reality, being the mess that it is, led Google to develop two different products; now it is painfully trying to converge them into one as much as possible.
Android is an odd one. Google bought it, and until recently it pretty much acted as a separate fief within Google.
Chrome (OS), on the other hand, was, I think, started as one employee's personal project. Likely based on an observation that many of us spend our days mainly using a web browser, with the rest of the OS sitting idle around it.
I've always felt that in concept, Chrome OS belonged on phones and Android on laptops. Think about it -- Chrome OS really needs an always-connected device, which is a phone, whereas Android runs all apps locally (except apps which have a sole purpose of getting online such as web browsers).
My ideal mobile device is an x86 phone that can dock into a tablet which can then dock onto a laptop, which can then dock onto a stationary docking station with GPU, monitors, storage (I was one of the people who backed the Ubuntu Edge, such a shame that it didn't make the goal).
I think we're coming very close to making this possible on both the hardware and the software front. Core M seems only a few generations of optimizations away from being feasible for smartphones in terms of power consumption (Atom is already there if performance is not a top priority), and Windows 10 scales pretty well between tablet and desktop at the moment, and integrating the Windows Phone functionality into the core OS seems like it could be a possibility eventually. The power management improvements to Hyper-V in Windows 10 should make it possible to run other OS's (possibly even Android eventually) in parallel with very little overhead and power consumption penalties.
I would very much like a phone, tablet and laptop that are just different views into the same computing environment (obviously with accommodations for the different interfaces), but I don't see any reason why I would want them to snap together.
It would be sort of handy to be able to borrow larger devices, but plugging in a cable and having the phone "take over" doesn't have a whole lot of disadvantages compared to having them snap together.
In the case of portables like tablets or laptops, it might be preferable for the phone to snap in for easy carry. But for a desktop a cable or dock would be more than good enough.
I'm looking at it from a perspective that using borrowed hardware would be a rare thing, so instead of snapping the phone into the tablet, 99% of the time you leave it in your pocket. The hole in the tablet would usually be a hassle, while cabling a phone to a laptop would only occasionally be a hassle.
Hopefully the technical details of USB3 make it possible to build a smart cable that lets you plug a phone into an untrusted device (only allowing the phone to push video and pull power). I guess such a mode could also be built into the device, but the cable would be a handy place to put a fuse or whatever. A special cable can also have a very straightforward user interface: if it can't be configured, it can't be configured incorrectly.
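The "can't be configured incorrectly" property boils down to the cable acting as a fixed whitelist. Here's a toy sketch of that idea (the function names and capability strings are invented for illustration, not a real USB API):

```python
# A hypothetical "dumb-by-design" cable: no matter what functions the
# two endpoints request during negotiation, only a hard-wired subset
# ever passes through. There is nothing to configure, so nothing to
# misconfigure.
ALLOWED = {"video-out", "power-in"}  # phone pushes video, pulls power

def negotiate(requested):
    """Return only the functions the cable physically permits."""
    return sorted(set(requested) & ALLOWED)

# An untrusted dock asks for everything, including data access:
print(negotiate(["video-out", "power-in", "mass-storage", "adb"]))
# -> ['power-in', 'video-out']
```

The same filter could live in the phone's firmware instead, but as the comment says, a cable with no configuration surface is a much easier thing to trust.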
I feel like something like the Oculus would be better suited. GPU and storage are the size of a cell phone these days. Look at 128 GB SD cards. How many of those would fit in an additional 1 mm of thickness on an average cell phone? 50? 150? Won't work for the monitors, of course. Also, 30"+ monitors are expensive because they require a large amount of silicon to be precision manufactured, whereas resolution is only really limited by the speed of the electronics (that's why a cell phone 1080p screen is $60 whereas a 30" one is still $300+, and you can barely get higher-res displays in bigger sizes; it's cheaper than CPUs and the like because it's few layers and very large features, and even the worst fab can produce perfectly adequate monitors, but you still need the material).
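The "how many cards fit" question can be sanity-checked with rough numbers (the card and phone dimensions below are ballpark assumptions, not specs):

```python
# Back-of-the-envelope: a microSD card is roughly 15 x 11 x 1 mm,
# and a typical phone footprint is roughly 70 x 140 mm. Adding 1 mm
# of thickness gives a 70 x 140 x 1 mm slab to fill with cards.
CARD_MM3 = 15 * 11 * 1    # volume of one microSD card, mm^3
SLAB_MM3 = 70 * 140 * 1   # volume of the extra 1 mm slab, mm^3

cards = SLAB_MM3 // CARD_MM3
print(cards)                # -> 59
print(cards * 128)          # -> 7552, i.e. ~7.5 TB at 128 GB per card
```

So the "50?" guess is about right by pure volume; real packaging (connectors, controller, spacing) would bring that number down, but the point stands that storage density is not the bottleneck.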
But an Oculus Rift can use a $60 screen to make you think you're in a room with a 600" display (really, check their cinema demo. Wow. Just wow). Or a room with 50 600" displays, no problem. Nimble VR can track your hands, and their competitors claim they can do it accurately enough to make you see a usable virtual keyboard [1].
Why sit down at a docking station with GPU, monitors and storage when you can take it with you? Why can't I have five 30" monitors with full computer games while sitting on the train? In an airplane? In a 1 sq meter room? Why would employers bother with any computers/peripherals other than a cell phone at work when a cell phone + Oculus can provide a better and more useful experience? Why can't I sit down in the morning train, snap my fingers, and have three large monitors with my working environment appear around me, usable? Why can't I sit on a plane in an economy seat and do the same, all the while my mind is convinced it's really sitting in a huge outdoor open space...
Additionally I wear glasses. Let me tell you, there's no way to have the perfect glasses for every environment. Some part of the monitor is a bit blurry. Just enough to be irritating. The bigger the monitor, the more distance difference there is between the closest pixel to my eye and the furthest. That degrades quality. But I can see every pixel on an oculus rift display perfectly. So, at least for me, such an environment can actually be higher quality than reality (or at least compared to real displays).
From my perspective, Atrix-like scaling up is an illusory solution - the engineering work to make it work properly is more than that required to make the cloud work properly, and the latter is better in the long run.
I suppose I need to justify that the cloud is better. It seems like a chicken and egg problem to me: people don't expect all of their data to persist between devices, so most apps (for mobile and desktop platforms) either don't integrate cloud sync, or do it in a haphazard way that requires effort to set up and is usually buggy and/or limited. Because most apps don't integrate cloud sync, people's expectations are set low enough that they don't demand it to work...
And this applies twice over for power users. In part because they're just likely to use more applications and more obscure applications in general, straying further from the beaten path and encountering wilder terrain. In part because they tend to use the command line, and command line interfaces are very portable, which means people still use decades old software practically unmodified, which is great in itself but ensures that it will never be possible to break expectations (e.g. file accesses might take time, and might not succeed) and expect most programs to eventually be updated to adjust to the new expectations. Not even over the span of a decade.
But for heaven's sake, NFS is 30 years old and allowed you to access all your files from any client. It's also completely unsuitable for today's mobile networks (or really anything other than a perfectly reliable wired link), but we've learned a thing or two about syncing protocols since then. Dropbox does a reasonable job at the protocol level, and works so-so with the command line, but is a centralized service (I think an ideal one would be decentralized) and doesn't aim high enough (you only sync one directory, and some applications use Dropbox-specific APIs to get smarter syncing functionality; it's not integrated deeply into a whole operating system). iCloud (+ Continuity) has the right idea in general, but is also centralized, and not (yet) sufficiently pervasive.
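The core difference between NFS-style remote access and modern sync is that sync tools reconcile whole snapshots after the fact instead of blocking every file access on a live server. A deliberately crude sketch of the reconcile step (last-writer-wins, which is the simplest policy; real tools like Dropbox keep conflict copies instead — and the data shapes here are invented for the example):

```python
# Toy offline-friendly sync: merge two {path: (mtime, content)}
# snapshots taken on different devices; the newer version of each
# file wins. Nothing here ever blocks on the network.

def merge(local, remote):
    """Return a merged snapshot where the newest mtime wins per path."""
    merged = dict(local)
    for path, (mtime, content) in remote.items():
        if path not in merged or mtime > merged[path][0]:
            merged[path] = (mtime, content)
    return merged

local  = {"notes.txt": (100, "draft v1"), "todo.txt": (50, "buy milk")}
remote = {"notes.txt": (120, "draft v2")}
print(merge(local, remote))
# -> {'notes.txt': (120, 'draft v2'), 'todo.txt': (50, 'buy milk')}
```

Last-writer-wins silently drops concurrent edits, which is exactly why serious sync protocols add version vectors or conflict files on top, but even this toy version degrades gracefully on a flaky mobile link in a way NFS never could.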
If this is so hard, why not just do the Atrix thing for now? -- Because while that can maybe be an okay solution with some more work, it's only a good solution if apps seamlessly switch interfaces when you dock or undock. Which in practice is probably going to look very similar to cloud sync (data being transferred between separate applications), just on a single device.
There's the conservation-of-CPUs aspect, but I don't see a single CPU being suitable anytime soon for both maximum speed (when plugged in) and low power (when not). At best you arrive at a compromise that's not ideal for either. And do you really want a laptop dock that's a brick without a phone (i.e. if you leave the phone at a different location, or if you want one person to be able to use the phone while the other uses the laptop), if including a CPU and some minimal storage (enough to get what you need from the cloud...) would not be that much relative increase in price?
Do you really want to hold fast to a world where losing or damaging a device that you take with you everywhere can cause the loss of important data? I found Google's goofy "people in lab coats destroy Chromebooks in amusing ways" ads (conveying the concept that no data is lost) pretty compelling; too bad Chromebooks are so limited even today.
There's the NSA angle, but the fact that most of today's consumer cloud services do not include client-side encryption and/or are closed source is hardly inevitable or unfixable.
There's the offline experience, but "cloud" can always be a protocol that can operate totally standalone on a LAN if necessary, adding the Internet as a communications channel when available. (Certainly today's cloud services could do better with LAN optimizations - if my phone and laptop are on the same Wi-Fi network, I should never have to wait for the Internet to get content from one on the other. Actually, this is true if they are close together, period. Use Wi-Fi Direct if you have to.)
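The "never wait for the Internet when the peer is in the same room" rule amounts to a simple source-selection preference. A minimal sketch, with invented peer records and function names (this is not any real service's API):

```python
# Hypothetical "LAN first, Internet as fallback" source selection:
# given known peers that may hold the content, prefer a reachable
# peer on the same network; only go out to the cloud if none exists.

def choose_source(peers, cloud_url):
    """Return the URL of the first reachable same-LAN peer, else the cloud."""
    for peer in peers:
        if peer["same_lan"] and peer["reachable"]:
            return peer["url"]
    return cloud_url

peers = [
    {"url": "http://10.0.0.5/file", "same_lan": True, "reachable": False},
    {"url": "http://10.0.0.7/file", "same_lan": True, "reachable": True},
]
print(choose_source(peers, "https://cloud.example/file"))
# -> http://10.0.0.7/file
print(choose_source([], "https://cloud.example/file"))
# -> https://cloud.example/file
```

The hard parts in practice are discovery (finding that the laptop is on the same Wi-Fi at all, e.g. via mDNS or Wi-Fi Direct) and authenticating the peer, but the fallback logic itself really is this simple.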
The latter requires that you give all of your data to giant corporations that you can't control. It also requires that you sacrifice any freedom or choice in what software you run, where your data is stored, and more.
The cloud is one of the world's worst compromises, and we're all just sitting around waiting for it to bite everyone in the rear.
Today it usually does. As I said, though, since green-field development is required either way, there is nothing preventing the 'cloud' from being client-side encrypted, and there is also nothing preventing it from being open. (Well, other than political inertia, but I'd hate to see that compromise a technically superior solution. :/) Think of ownCloud...
Yeah, as long as companies can profit from mining your data, the wealth of cloud services will be built that way. And consumers' obsession with "free stuff" only encourages it. I've played with ownCloud extensively, though Sandstorm.io is the new hotness that I think has a lot of potential down the line.