This seems really...unlikely. Yes, laptops displaced desktops, but it's because they offer essentially the same benefits and experience, but with portability and convenience. No, they're not as powerful for the money, and they're not as expandable, but as computing power has grown, both of those have begun to matter less to most people.
But with a handheld device, you're giving up a tremendous set of features and benefits for very little additional benefit. Yeah, I can stick a phone in my pocket, so it's marginally more portable than a MacBook Air, but it's pretty rare that I'm out somewhere and wish I could develop something but haven't brought my laptop. Maybe super-developers just run around with their phones and long for the means to develop software 24/7, fingers and eyes glued to their 2.5 inch screen :)
The point is, I don't know that it's reasonable to assume that because laptops displaced desktops, handheld devices will displace laptops for development purposes. Writing code is still very much about handling text, and phones present huge barriers in terms of input, display, and ergonomics. Think about all the articles that show huge boosts in productivity as size and number of displays increase. And now we're going to go back to a 2.5 inch screen? Highly doubt it.
I really like most of what PG writes because it seems like he writes to explore topics, but I think he headed down the wrong rabbit trail with this one.
EDIT: To be clear, I definitely agree that handhelds will continue to replace more and more of people's laptop use for everyday tasks, but I sincerely doubt they'll replace developers' use of laptops and desktops for writing software.
Imagine having a monitor/keyboard at work and a separate monitor/keyboard at home. What if you could simply plug in (wired or wireless) your phone into either setup and have your OS available? That's incredibly convenient.
I think it far more likely that this kind of transparent OS portability will come from the cloud, rather than from carrying it around on my mobile phone.
I would tend to agree with you if you're only thinking about the US but 3rd world countries are a totally different ballgame.
There was a Nokia video out last week about the year 2015 and what mobile would look like at that point. While none of it was revolutionary what struck me the most was the thought that BILLIONS of people are going to begin making the kind of money necessary to get a smartphone in the next decade. Those 'phones' are going to be their only computer, their only access to the internet, and their only device that would allow application development.
If we, using the royal hacker we, could develop a mobile or web-based development environment that worked on mobile devices, it could enable a new population of developers that dwarfs the number of devs we have in the world now. I wouldn't be surprised if the number of mobile applications developed on mobile devices in the 3rd world were extremely high within the next decade.
laptops displaced desktops, but it's because they offer essentially the same benefits and experience, but with portability and convenience
But they didn't originally, and not for a long time, even though it was inevitable it would happen one day. I carried various laptops around for years before finally being able to buy one that was powerful enough to be my only development machine.
phones present huge barriers in terms of input, display, and ergonomics
Keyboards are available that are full-size, yet which take up no physical space - a laser projects an image of a keyboard onto any flat surface, and a camera figures out where you're typing.
Mini projectors are starting to come out, meaning you don't need any physical space for a good monitor either.
Combining these two, with a good handheld platform and two flat surfaces (e.g. a desk and a wall), I don't see any real barrier to making this happen.
Alas Heroku, you could have served us well here...
Instead I had to build my own ugly browser ide/code publisher to post code to an App Engine data store and make a class/controller to render it back as XML or HTML.
Since I am doing Opensocial development on GFC and Google Sites, my code is loaded from a published url by those containers (after the cache expires, that is). This means, when I am on the road and I have a thought, I can pull out my web-enabled phone and publish some small change to my code and it's live.
This tool I am describing allows me to develop Javascript client code easily enough, but I could see adding a module to my ide app that would load up dynamic python too (using exec). This would allow me to "edit" backend code. It would probably be good enough.
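The exec idea above can be sketched in a few lines: treat the stored source text as a module you can swap at runtime. (A minimal sketch — the names and the stand-in source string are illustrative, not the actual App Engine datastore API.)

```python
def load_module(name, source):
    """Compile stored source text into a fresh module-like namespace."""
    namespace = {"__name__": name}
    exec(compile(source, name, "exec"), namespace)
    return namespace

# Pretend this string was just saved from the browser IDE.
stored_source = '''
def handler(request):
    return "Hello from dynamically loaded code: " + request
'''

backend = load_module("backend", stored_source)
print(backend["handler"]("/ping"))
```

Each save in the web IDE would re-run `load_module`, so the "deployed" backend is just whatever source text was stored last.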
Anyway, couple this functionality with PhoneGap and I think you might have a workable solution... for the Android platform anyway.
This will work even better once Android reliably supports bluetooth keyboards. Typing on those small buttons really wears out my thumbs. :)
EDIT: I have also thought of setting up a deploy server for the backend piece. I could edit my backend code in the web ide, click a Save button and have it kick off a deploy command on the remote server. The process would pull down the code I scratched out in the browser-ide, save to SVN, then run the app engine deployment.
You know, I thought this was what Heroku was going to do for me. But, they stripped out the web ide which was the part that I really needed.
For display, you can hook up video display glasses to an iPhone today, then use ScreenSplittr to mirror the display to the glasses. With some Windows Mobile phones, like the Touch Pro or Touch Pro 2, you get TV out "out of the box" with no extra software (do need a special cable). That gives you a 640x480 screen for ~$300 with the myvu crystal glasses (myvu.com). Does anyone know of an Android phone that supports this? There were reports that the G1 has the appropriate hardware but I haven't seen anyone get the tv out to work.
The sticking point is the rest of the interface. There's the BlueInput drivers for Windows Mobile, that lets you hook up a bluetooth mouse and keyboard. You could then use RDP to connect to a machine elsewhere and have it host your IDE and compiler. Almost certainly not the "right" way to do it but it's a start.
Multi-touch interfaces turn out to be hard with video glasses, at least on current phones -- you don't see where your finger is. Of course you could ignore the video glasses and just look at the phone screen, but then you have a small screen.
May be worth thinking through what kinds of development are best suited for a person walking around. Are you planning to camp out at a coffee shop and work for six hours straight, are you planning to make quick changes to an existing app "on the fly" as you think about them, or something else?
I spend a lot of time staring at my iPhone's screen, and for Python/Ruby/bash? I'm sure the processor is powerful enough to write some useful things.
I think the real problem is input. Would it be enough to have a "hacking mode" software keyboard? Or would it require a physical keyboard at this point? Would the current iteration of physical keyboards even work? I've not used a Droid keyboard, but they have nice screens and I'm sure setting up a dev environment would be a lot easier in android than on the iPhone. That is to say, possible (sans jailbreak).
I think touchscreens and development go together well, but perhaps not in the way you're imagining. The best thing about a touchscreen is that generating any symbol is equivalent in complexity to generating any other. Unlike real keyboards, which have fixed symbols in fixed positions, a virtual keyboard can be as large and complex as you want—if you run out of space on one "layer", you can just add another set. This would make a language like APL perfect for mobile development, especially if combined with some manipulable visualization of the resulting parse tree (i.e., what most visual PLs try to be).
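The "layers" point is easy to make concrete: a virtual keyboard is just a stack of symbol maps, so adding a layer of APL glyphs costs nothing but another dict. (The layout below is invented for illustration.)

```python
# Each layer maps the same physical tap positions to different symbols.
LAYERS = {
    "base":  {"q": "q", "w": "w", "e": "e"},
    "shift": {"q": "Q", "w": "W", "e": "E"},
    "apl":   {"q": "⍴", "w": "⍵", "e": "∊"},  # rho, omega, epsilon
}

def keypress(layer, key):
    """Resolve a tap position to the symbol on the active layer."""
    return LAYERS[layer].get(key, key)

print(keypress("apl", "q"))  # the same tap that gives 'q' yields an APL glyph
```

On a real keyboard an APL glyph needs a special keycap or a chorded escape; here it's one more entry in a table.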
I love this idea. Is there an existing handheld device that is capable of having an external keyboard and monitor plugged in to it? Or would anyone doing this need to figure out how to make a good virtual keyboard for programming and a dev environment that will work in something tiny like 480x320 pixels?
The Pandora should be shipping soon, though it may be a bit bigger than what most people would want in a phone. It has (will have?) a usable but small screen and keyboard, but it also has USB and composite and s-video out.
As far as screen size, a readable 80x25 screen and a regular keyboard so vim works normally is sufficient for on-the-go development. I can get 80x34 tiny, tiny characters on my blackberry now, or half that (or quarter that, depending on how you're counting) with characters that more suit my eyesight. And while developing on it is a non-starter, it's sure come in handy when I've been out and about and needed to fix something (using ssh for access).
Anyone working on this idea and submitting it to YC will need to have a solid answer to the following question.
Apple can change their policies towards iPhone applications instantly (see footnote 4 in PG's essay; relevant quote: "The problem is not Apple's products but their policies. Fortunately policies are software; Apple can change them instantly if they want to."). What will be so special about your development platform that you will still stay alive even if/when Apple does so?
My first computer, a Timex-Sinclair ZX-81, had BASIC commands tied to each key which made typing easier on that membrane keyboard. Perhaps it could employ a similar input device where the user could configure a keyboard with shortcuts for a given language.
Since I am walking down memory lane...I remember programming a computer by setting 4 switches in octal to load the instructions. As funny as that would be today, it shows how a simple input device can be used to produce complex results.
Has anyone done a dissection of what ended up being the point of failure for OpenMoko?
It seems like they would have been well positioned for exactly this sort of thing. I'm not sure if they had problems with the hardware or whether they just couldn't get traction.
In general, hardware products that appeal only to hackers (OpenMoko, Neuros, OGP, etc.) fail while hackable mass-market hardware (G1, WRT54G) succeeds. You can't ship a "rough" product and expect the community to polish it, since polish is what the community is worst at and peer production is so slow that your product is obsolete before the community can finish the firmware. Also, hacker hardware is expensive because it lacks economy of scale.
Their hardware was epically bad. It used an ancient and buggy GPRS radio (1.5G, ~5KB/s), had naive drivers (no power management!), the sound chip didn't really work, the shitty resistive touchscreen was recessed so you always needed a stylus... It was not feasible to use it to place or receive telephone calls.
The software was just as bad, and was completely rewritten every couple months. It was based on each of [GTK+, QT, Enlightenment, DirectFB] at least once apiece.
I don't see how they could have possibly done a worse job.
There are quite a few smartphones and PDAs which are programmable. I own an old (circa 2000) Palm Pilot and have a C compiler and a Lisp (Scheme) interpreter there. (There also exists a Forth compiler.) The Nokia 770/N800/N810/N900 series is as powerful a development platform as your Linux machine. There exist on-board development environments for Windows CE/Pocket PC/Windows Mobile (even Common Lisp and Emacs) and Symbian (including Python).
So in this respect the iPhone/Android platforms are a step back from truly open mobile development environments. This step back is the result of the backwardness of operator-controlled American mobile market.
The main problem of the onboard development is the lack of convenient input. So I think the main task is to improve projector technology and ergonomics of projection keyboards (use more natural hand movements?).
The keyboard model is tied to the use of source code, and source code to the keyboard.
This implies wholesale replacement of source code, probably with an icon-based language and a drag-and-drop development environment. It also implies that a substantial library of icons be available for various common tasks; that is, very substantial task abstractions.
Classic development and debugging and tracing models are all tied to the screen space.
This implies having extra display space available that can be slid into view on demand whether for selecting tasks or for tracking, or a requirement for in-band debugging displays, or removing the need for debugging and tracing.
You will need to build a box of task blocks, an icon drag-n-drop IDE, and icon-based debug and trace. And you will want ancillary pieces, including dump and trace and update tools.
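The "box of task blocks" idea can be sketched briefly: each block is a small prebuilt function, and a "program" is just an ordered wiring of blocks — which is all a drag-and-drop canvas needs to record. (The block set below is invented.)

```python
# Each block takes the previous block's output; the vendor ships the blocks,
# the user only arranges them.
BLOCKS = {
    "fetch": lambda _: [3, 1, 2],   # stand-in for "get some data"
    "sort":  sorted,
    "top":   lambda xs: xs[-1],
}

def run(program, value=None):
    """Execute a wired-up sequence of block names."""
    for name in program:
        value = BLOCKS[name](value)
    return value

print(run(["fetch", "sort", "top"]))  # → 3
```

Debug and trace then become block-level too: you show the value flowing between icons rather than stepping through text.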
In Apple Xcode terms, a gonzo and finger-based and viewport-constrained IB.
You're not likely to build a web server nor other general application within these constraints, but purpose-targeted applications with capabilities somewhat past an etch-a-sketch UI do seem feasible.
Think Pokemon. Not "general purpose".
And the vendor can sell "widget packs" for these target applications.
- it's a tool for making small web apps. The web apps can be run locally or pushed out to a public URL (so other people could run the app, or, if they had the platform, could download it, run it locally, and make changes to it).
- the language would have to be really terse. Something like APL (per derefr) or K
- you'd want users to be able to do something interesting with the phone's data, like draw data from the calendar to tag photos.
As many comments say, with the right tweaks input is sortable - everything from the laser projected keyboard on a desk to the USB or bluetooth rollable keyboards, or a reworking of the aging Cykey ( http://www.bellaire.demon.co.uk/index.html ), the abandoned Twiddler ( http://www.handykey.com/ ) or the vaporware/endless prototype Senseboard ( http://www.senseboard.com/ ). Can't say I'm a fan of the thumboard, but... if the code was more like txt spk than Java it might be tolerable.
Also, output is achievable - either in small on screen, with a pocket projector up to around VGA on a nearby surface, or a head mounted display like the Vuzix ( no need to go all Steve Mann and use a head-mounted CRT these days http://wearcam.org/stevewearcomp6cropgrey.gif ), or a pluggable external small LCD might work.
Most of this is existing kit you could put together.
I wonder, though, if the best help would be a programming language that jointly minimizes typing, screen use, and encourages think-before-you-code. Sounds a bit like the brief for Arc to me.
So I think the sticking point is somewhat hardware, but mostly software - what software support would it need to make it usable? What auto-sync to the cloud dropbox style? What create-and-publish-in-one package like wordpress, twitter, posterous mobile clients?
So - what about a kind of twitter/posterous for code? Write/run and then upload for the world.
Also, make the normal phone/PDA functions simple and scriptable - with cron and with plugins/hooks/callbacks/events.
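The plugins/hooks/callbacks idea amounts to a tiny event registry: phone events fire named hooks that user scripts register against. (A toy sketch — the event names are invented, and a real phone OS would fire these itself.)

```python
hooks = {}

def on(event):
    """Decorator: register a callback for a named phone event."""
    def register(fn):
        hooks.setdefault(event, []).append(fn)
        return fn
    return register

def fire(event, *args):
    """Run every callback registered for an event; collect their results."""
    return [fn(*args) for fn in hooks.get(event, [])]

@on("sms_received")
def auto_reply(sender, body):
    return f"reply to {sender}: driving, back soon"

print(fire("sms_received", "+15551234", "where are you?"))
```

Cron-style scheduling is then just the OS calling `fire("timer_tick")` on a schedule; user scripts stay one decorator away from the phone's own behavior.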
Since I'm not going to make it myself in the near term, here's my idea:
+ Glove (or rings, or paint-onto-nails chips, for Pete's sake) motion sensors to sense hand, finger movement. Chording input, gesture interpretation, whatever you want. No more need for a keyboard and the real estate it demands.
The motion sensors are already ubiquitous. This is just a new form factor, plus probably wireless communication à la Bluetooth or similar.
+ Heads up in glasses (or contact lenses, or whatever) display. Again, eliminates the need for the real estate, and power consumption, of a traditional display. A side (or additional primary, to those who worry about these things) benefit is the recovery of privacy; shoulder surfing becomes much more difficult if not impossible, depending upon design.
+ Audio: Well, that's already solved as well. Just a matter of what you plug the earbuds into (wire-fully or wirelessly).
Goggles will also aid 3D display.
All of these elements are coming along nicely. All that's needed is for someone to put them together in an appropriate form factor.
You can develop Python, Perl, JRuby, and Lua programs right on your handset, and they have access to the Android API through a bridge to a native (Dalvik) activity.
Having a portable development environment is incredibly handy, and if there's a startup that can pull this off, I'll be thrilled.
I recently bought a netbook, because I'm doing quite a bit of travel, and I still wanted to be able to work on http://newsley.com while I was on the road. So, I purchased a 9 cell battery, juiced the RAM on it and installed Windows 7. Between the 9 cell battery and the original battery that came with the netbook, I get around 14 hours of compute time. It's been enough to allow me constant use during a trans-Atlantic flight. It's also light enough for me to not mind carrying it around in a small satchel when I go out and about. I find myself taking it with me more often than not, if I know I'm going to have down time somewhere. Small portable development machines are a great idea.
Is the display on a handheld large enough to be useful for programming? Studies show that larger monitors increase programmer productivity. It seems reasonable to conclude that a smaller monitor would decrease it even more significantly.
An iPhone screen (320x480) is similar to what the Mac Plus had (512x342), but it subtends a much smaller angle of your field of view.
Perhaps the appropriate measure is in square "characters". The VT100 is probably the bare minimum -- 80x24 characters, or about 1,900 character cells. There's no way the iPhone can display that legibly.
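A quick back-of-the-envelope check supports this, using the classic 80-column VT100 grid and the original iPhone's published specs (480x320 landscape at roughly 163 ppi):

```python
screen_w_px, ppi = 480, 163   # iPhone in landscape, approx. pixel density
cols = 80                     # VT100-width terminal

px_per_col = screen_w_px / cols           # pixels available per character cell
mm_per_col = px_per_col / ppi * 25.4      # physical width of each cell

print(px_per_col, round(mm_per_col, 2))   # 6 px wide, under 1 mm per character
```

A 6-pixel-wide cell under a millimeter across is well below comfortable reading size, so an 80-column view really is out of reach without external display hardware.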
Not saying the idea is impossible, but this is definitely a problem to overcome, either through hardware (HUDs in special wearable glasses? Big lenses to enlarge monitor size, like the movie Brazil?) or software (some other method of input?)
I do most of my arduino and processing development on a tiny eee pc.
The key thing that makes this easy is a compact programming language, I think. Plus I'm usually making embedded software, so it's small and cheap enough to leave lying around running sensor logging scripts etc.
Another commenter said:
it's pretty rare that I'm out somewhere and wish I could develop something but haven't brought my laptop
Perhaps hacking embedded "internet of things" type software, where the world around you is a rich hackable environment, is what will drive handheld development machines.
The high pixel density screen and physical keyboard actually make a Droid very possible for this. Mine is very comfortable for reading, but for creating documents it will take some practice--almost every element is part of the UI so it's easy to tap the wrong thing.
I don't see any reason that with the proper UI mapping, a program couldn't let me bang out some writing on this thing.
You wouldn't want to write your entire app (or novel) on it, but for quashing a bug or tweaking some CSS? Absolutely.
> in the same way that laptops displaced desktops.
Whoa there buddy, that's quite a leap/assertion to make with zero evidence.
I'm sure that's true for certain small niches such as young entrepreneurs working from coffee shops and coworking offices. But I and most developers I know work on one (or more) beefy desktops connected to real keyboards, real trackballs/mice, and 2+ very large displays. Many of us have a laptop for home/road work but we only use it if we have to.
Every engineer at my employer has a laptop with a 17" screen and full size keyboard, plus a 19" external display. The laptops are dual processor, with a minimum of 2GB of RAM.
I've not yet found a need for a full size workstation.
A few months ago I saw a bunch of drunken youths on a train heading to a party. Incredibly, they were editing MySpace profile layouts on their Sony Ericssons and exchanging CSS and HTML tips. So coding on handheld devices is possible, but we are not the target audience. Perhaps more yahoo pipes/YQL style coding.
I am blind. I work off a laptop for development, using an external keyboard at times to make things more comfortable. I say use a screen reader for output. Forget a screen, except for making visual layouts.
Am I missing something? The Winter 2010 application deadline is already over; so is the RFS just a source of potentially good business ideas or is there something I don't know?