Rob has been around a while, but was most recently working on Angular 2.0. However, he's had technical differences with the team and has returned to a framework that matches his approach better.
Kudos to Davis, I'm a big fan. Writing TempleOS: amazing. Writing TempleOS with the dog or bird background noise you hear around 5:00 in the video: more than amazing!
Nice app! Have you considered adding a voice command mode, so that when I'm out of reach of my phone or too lazy to read/touch the phone screen, I can still remote-control websites?
Is it not self-evident? How many people connect through social networks, which is an obvious benefit? How many people benefit from research papers about 3d model generation from photographs?
Now _there_ is a questionable assumption. Given that increasing numbers of people are _leaving_ FB in saturated markets, and peak membership seems to top out below 50% of the population, there seems to be a limit. And I could turn up studies showing negative effects of social networking / media saturation ranging from social isolation and depression to broken marriages and lost jobs to health and life-expectancy loss due to inactivity.
How many people benefit from research papers about 3d model generation from photographs?
First: a false equivalence and shifting goalposts. Your initial claim was "most of the academic research".
Secondly: academic research covers a huge range of areas, from improved health and diet to better machines and alternative energy sources to faster and more accurate computer algorithms.
Third: what you see as a useless toy has some pretty evident applications that I can consider. Attach this method to a 3d CAD/CAM or printing system and you have manufacturing or parts replacement from a 2D photograph (AutoDesk has demonstrated similar modeling/capture systems based on multiple images, though these can come from any camera). Art interpretation, archaeology, X-ray modeling, geological imaging, and astronomical applications come to mind. There might be applications in protein folding or other microscopic imaging applications.
And the beneficiaries of such technologies could extend far beyond just those who are currently plugged in.
Blindly claiming social media vastly exceeds the value of such research fails to pass the most casual of sniff tests.
I don't think it's reasonable to question that assumption. Humans are social creatures, and social networks make it easier to connect with people over arbitrary distances. To deny that social networking is beneficial is equivalent to arguing that telephones and postal services are not beneficial.
Your analysis focuses only on Facebook. Of course people are leaving Facebook. But is the total user population of all social networking apps decreasing? I doubt it.
> First: a false equivalence and shifting goalposts. Your initial claim was "most of the academic research".
Poor phrasing on my part. My original goalpost was "the academic research like this," which is admittedly vague. What I meant was research projects focused on image processing and interpretation.
> Third: what you see as a useless toy has some pretty evident applications that I can consider.
I don't see it as a useless toy. I just think it's far less useful than social networking services, which have a very practical obvious benefit.
> Blindly claiming social media vastly exceeds the value of such research fails to pass the most casual of sniff tests.
It's not a blind claim, it's what I feel is an extremely obvious claim.
I don't think it's reasonable to question that assumption
It's reasonable to question ALL assumptions.
Your analysis focuses only on Facebook.
No it doesn't. I pointed at FB as the largest of the present SNs, but referenced other SNs as well. FB is a leading exemplar of the field. My use of it isn't intended as exclusionary of other SNs.
My original goalpost was "the academic research like this,"
Which largely moots the rest of the argument. Though as I pointed out, "research such as this" actually does pose some reasonably interesting and useful applications. We can argue over those magnitudes, but I'll stick with my initial assessment that the net benefits of such research are likely to be high.
Also, by narrowly identifying what you feel is and isn't valuable research, you're sharply skewing the results in your favor. It's as if I said "but I meant by 'social media' 4Chan and HotOrNot".
it's what I feel is an extremely obvious claim.
And it's what I feel requires citation.
Which you've failed to provide, being rather more inclined to engage in rhetoric.
When I see Apple putting forward the number of bits in their processor as a key feature in a general public announcement... I really get the feeling the "new" product is not really new. In the past Apple was always laughing at PC ads boasting processor speed and memory capacity.
Apple laughs at numbers-based ads when they're behind, and plays up the numbers when they're ahead. Remember the Pentium snail ads? There's nothing surprising here. They have a spec advantage right now and they're playing it up. If and when they drop behind again, they'll be back to talking about more qualitative aspects.
Isn't that what EVERY company does when it comes to marketing? I have never taken a single marketing class, but I'd think emphasizing your strong points and glossing over your weak ones is common sense.
Not on Hacker News. On Hacker News your press conference should alternate new feature paragraphs with apologies.
"We have a new fingerprint sensor! But, you probably don't want to use it because privacy crazies online think Apple is a front for the NSA."
"We're moving to a 64-bit architecture! But, geeks with low reading comprehension think it's not that useful because we have tiny RAM, so you should just ignore this point too."
"We have the best mobile phone camera ever created! But, everything was already good enough, so we've probably just wasted two years developing this and wasting shareholder dollars instead of entering the virtual cow social abuse market."
In terms of raw sensor performance, certainly. The new camera moves the software stack forward in a way that Nokia didn't though - extremely high frame rate to "catch" the best moment, programmatic selection of said moments, merging of exposure information across multiple consecutive frames, etc.
As a photo enthusiast that part of the presentation was a lot more exciting than the (rather marginal) improvements to lens and sensor.
You don't need to mess around with these software hacks when the hardware is as good as the Lumia 1020's. Besides, these software features are already done on the HTC One (look up 'Zoe'). Apple's playing catch-up here.
Lumia 1020 has very slow shot-to-shot and start times. On iPhone 5 I can start the camera app and shoot 9 images in 10 seconds. Just did it. On the 1020 you might get 3 shots in the same time. But the key is the time to first shot. On iPhone 5, from off to first shot is roughly 2.5 seconds, and just half a second or so for the second. The 1020 takes 4-5 seconds for the first shot, another 2-3 seconds for the second (based on my experiments in the store).
Since most people use their cameras to shoot pictures of cute cats or children, and then upload them to FB, I think the vast majority of people would prefer the fast and very good quality of the iPhone over the slow but excellent quality of 1020.
Best is too vague. My issue with it is that it's a huge bulge on the back of the phone, so in my mind it can't be the "best mobile phone camera ever created".
There are a lot of interesting things you can do with a massive address space even if you don't have the RAM to back it. You can mmap massive files. False pointers are virtually nonexistent for conservative GCs in a 64-bit environment (I believe modern Objective-C uses compiler-supported refcounting though, so this doesn't really apply here). You can virtual alloc a 4GB array and just let it grow in physical memory on demand.
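To make that last point concrete, here's a minimal POSIX C sketch of reserving a big chunk of address space and letting physical memory get committed only as pages are touched (the 4 GB reservation, the flags, and the 64 MB write are my own illustrative choices, not anything specific to iOS):

    /* Reserve a large virtual range up front; physical pages are only
     * committed lazily as they are actually written. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t reserve = 4ULL << 30;   /* 4 GB of address space */
        unsigned char *buf = mmap(NULL, reserve, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* Only the pages we actually touch get backed by real memory. */
        memset(buf, 0xAB, 64 << 20);   /* commit ~64 MB of the 4 GB */

        printf("reserved %zu bytes, touched %d bytes\n", reserve, 64 << 20);
        munmap(buf, reserve);
        return 0;
    }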
There is also a new instruction set to go along with the bump to 64 bits, which improves things. However, I remember Herb Sutter saying that, in the case of x64, Microsoft generally found the instruction set's performance gains were a wash due to the increased cache misses caused by the doubling of the pointer width. I'm not sure how much the ARM 64-bit instruction set improves things.
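A rough illustration of the cache-footprint half of that tradeoff; the struct here is hypothetical, it just shows how doubling the pointer width grows a node and shrinks how many fit per cache line:

    #include <stdio.h>

    struct node {
        int          value;
        struct node *next;   /* 4 bytes on a 32-bit ABI, 8 (plus padding) on 64-bit */
    };

    int main(void) {
        printf("sizeof(void *)      = %zu\n", sizeof(void *));
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        printf("nodes per 64B line  = %zu\n", (size_t)64 / sizeof(struct node));
        return 0;
    }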
I'm definitely out of my depth here; I guess I just wasn't really convinced 32-bit was ever a ceiling on the iPhone. I'm sure Apple have their reasons though, and maybe massive files and 4GB arrays really do matter to iPhone users more than I thought. Presumably there's some reason beyond marketing, since I doubt the general consumer really cares.
> (I believe modern Objective-C uses compiler-supported refcounting though, so this doesn't really apply here)
Yeah, I don't think they ever supported GC in iOS. Now that you mention it, it's probably not a coincidence that it was added to the Mac shortly after the entire product line had switched to 64-bit. You could still do it in 32-bit, but I doubt they were expecting many developers to start writing new 32-bit apps at that point.
What does use a conservative garbage collector on iOS, however, is Safari's JS engine. But I assume that the conservative scan is only used on the stack (that's what FF does), since it would be kind of silly to do a conservative scan of the heap for a language that doesn't support pointers. So it doesn't seem likely under normal circumstances that you'd have many false hits even with 32-bits.
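To make the "false hit" idea concrete, here's a toy C sketch of what a conservative stack scan amounts to: treat every pointer-sized word in a stack region as a possible pointer and check whether it lands inside the managed heap. Nothing here mirrors JavaScriptCore's actual collector; the heap bounds and frame contents are invented:

    #include <stdint.h>
    #include <stdio.h>

    static int looks_like_heap_pointer(uintptr_t word,
                                       uintptr_t heap_start, uintptr_t heap_end) {
        return word >= heap_start && word < heap_end;
    }

    int main(void) {
        static unsigned char heap[16 << 20];          /* pretend 16 MB heap */
        uintptr_t heap_start = (uintptr_t)heap;
        uintptr_t heap_end   = heap_start + sizeof heap;

        /* Pretend stack frame: plain integers plus one genuine heap pointer.
         * In a crowded 32-bit address space a random integer is far more
         * likely to land inside the heap range than in a 64-bit one. */
        uintptr_t frame[] = { 42, 0xDEADBEEF, (uintptr_t)&heap[1024], 7 };

        for (size_t i = 0; i < sizeof frame / sizeof frame[0]; i++)
            if (looks_like_heap_pointer(frame[i], heap_start, heap_end))
                printf("word %zu would conservatively keep an object alive\n", i);
        return 0;
    }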
Memory bus bandwidth - moving large chunks of data around just got twice as fast. That means loading textures for games, or hauling photos up from flash memory will be substantially faster now.
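That's the kind of claim you can sanity-check directly on a device; a rough C timing loop like this (buffer size and iteration count are arbitrary, and the result depends on caches and the memory system, not just word width) gives a ballpark copy-throughput number:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void) {
        size_t len = 64 << 20;                 /* 64 MB buffers */
        unsigned char *src = malloc(len), *dst = malloc(len);
        if (!src || !dst) return 1;
        memset(src, 1, len);                   /* fault the pages in first */
        memset(dst, 0, len);

        clock_t t0 = clock();
        for (int i = 0; i < 16; i++)
            memcpy(dst, src, len);
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("copied %.1f GB in %.2fs (%.2f GB/s)\n",
               16 * len / 1e9, secs, 16 * len / 1e9 / secs);
        free(src); free(dst);
        return 0;
    }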
Yep, of course. Which is why I don't understand when people act like this is something weird. Talk up your strengths, talk down your weaknesses. If a point that was a weakness last year has become a strength this year, you emphasize it even though you de-emphasized it last year. That's just how it works.
Because no other company ever criticizes their competitors' marketing?
This discussion is frankly insane. This stuff is called "marketing". Virtually every company does it. Companies that don't do it are called "failures".
More than most brands, Apple uses 'meta' advertising, calls out competitors for being corporate and mainstream and focused on machines over humans ("1984", etc.) It helps them mint money selling nice-looking consumer electronics, and at the same time it makes them justly more susceptible to this kind of criticism. It's all part of the same package. I think they can weather a little criticism for the hypocrisy in their marketing, let them take their lumps as they take their money.
> they'll be back to talking about more qualitative aspects.
The problem is that they hardly have qualitative aspects left that they can claim over the competition. They are engaged in a race to the bottom.
When the highlight of the keynote is a feature that has been available on cheap Thinkpads for five years, you wonder if Apple will ever innovate again.
An increase in processor word size has been part of Apple's marketing materials in the past, as was the transition from PowerPC to Intel. Apple tends not to get hung up on speed and capacity, which increase steadily anyway, but step-wise jumps like word size are a different type of change, since they involve substantial technical hurdles and, as is the case here, can double the speed of certain applications in a single generation.
Apple are hoping that journalists are lazy or rushing to meet a deadline, so they get headlines like "New iPhone is 40x faster than old iPhone!".
They could have put "2x faster than iPhone 5" in their slides, but they deliberately chose to put 40x (referring to the original iPhone) and leave it slightly ambiguous.
It's the first smartphone with a 64-bit CPU; it's a significant achievement, and they are rightly proud of it. I thought the comparisons to the original iPhone were very silly though.
Non-techies that I know hear 64-bit and immediately think it's better. When friends show me a new laptop or whatever else, they say it's 64-bit. When they're shopping they ask if something is 64-bit. They don't understand why it's important, but they think it is.
My "is it new?" test works this way: how would I explain what's new to my father (who is 74 years old and knows not much about tech). A7, M7, iOS 7, better camera and flash: not much to say. Touch ID OK that's new, but that's not much.
It's not about your fingerprint though. It's just that cheap scanners are easy to trick and you leave fingerprints everywhere, including on the very surface you scan it on.
Scanners like the one used only work with live fingers, and read geometry deeper than the surface. You're not going to trigger it with an ordinary collected fingerprint.
It needs to be contextualized and explained for practical usage. They had the same problem with Siri, which was pretty awesome, but was presented in a very gimmicky way.
I suspect that slide was partially a message to ARM and their partners as well as Intel:
"Architecture transitions used to be a big deal. We have built a software and developer ecosystem that makes them worth a single slide. ARM: Don't get too comfortable. Intel: Show us what you can do."
Looking at Intel's roadmap and the success they've had with Haswell ULV, I think they have at least a fighting chance at ending up in an iOS flagship product within 2-3 years.
Smartphone hardware is in the 9th inning. These devices have mostly hit the natural limits of what's achievable for now. Bad for Apple since this effectively levels the playing field.
Flexible displays would certainly be neat, but their utility for a cellphone largely escapes me.
Wearable peripherals are a different category; they're not smartphones. Regardless, I don't have terribly high hopes for these future devices.
Mobile payment is mostly not a hardware issue. It's a software issue and a matter of coordination or market forces selecting a standard or two. It's things like THIS that are the next battleground in mobile: services and integration.
They are in rectangular boxes because that shape, for various reasons, happens to be extraordinarily efficient. TVs haven't changed shape, either.
I think smartphone hardware is pretty much dead. The major leaps - touch screens (which is so ridiculously underappreciated as an innovation), HD screens, HD cameras, CPU horsepower, nice OS's, voice recognition, blah blah - are behind us. There is a reason that almost all of the best selling smartphones look alike, feel alike, and generally have the exact same feature sets. The differences between each other, in the grand scheme of things, are lamentably minute.
These are great points. To add to your list - I'd love it if as part of mobile payments they do what's needed to completely replace a wallet.
If I'm at a restaurant paying for a drink with my phone, I should also be able to send them a verified copy of my ID with photo and age. Then I could stop carrying a wallet entirely.
Integration with low power peripherals will be nice, but don't forget about high powered peripherals like tablets and tvs. If I put down my phone right now and pick up a tablet, I should be able to finish typing this comment with no interruption.
Usually, but not always. In 2007, it turned out that the available technology allowed something way better than what was actually being sold. Apple realized this and used this fact to go from zero to smartphone dominance nearly overnight. Gaps do happen, they just don't last long before someone comes along and gets rich by exploiting them.
Except that Apple just doubled the processing power. That, with everyone else now needing (and, given a year, able) to catch up & exceed, is hardly "the natural limits of what's achievable for now". The processing power curve shows no sign of slowing down, and with wireless tech racing past LTE toward 100Mb territory meaning local storage capacity becomes a mere buffer instead of a limit, we're nowhere near "natural limits".
The only limit we face now is users finding sufficient aggregate need for all that power & bandwidth. Build AppleTV into a touchable monitor, drop a wireless keyboard on the desk, and eliminate that 4" bottleneck for most users - BAM, death blow to Windows etc.
The iPad 3 pushes 2,048 by 1,536 pixels. A single 30" monitor requires 2560 x 1600 (or maybe less, if it's crap). So ... I can see an iPhone being able to drive a single 30" monitor some time soon.
People will be using their phones as desktops (if not serious workstations) sometime soon. And once they use them as desktops, phones won't be fast enough until they have performance comparable to workstations (which, as you point out, won't happen).
Why not? If it doesn't melt while being maximally used in my hands, why should it be thermally impossible to put it on the table while connected to an external screen?
The processor is more powerful than many old computers, and the 30" screens need no more pixels than the iPhone screen already has.
I wonder if it would be practical to couple the CPU directly to an externally accessible thermal pad, and have some sort of docking station that includes additional cooling. There are probably better cooling options available (liquid, heatpipe?) if you could find a way to connect/disconnect them reliably without compromising much on the mobile aspects of the design.
Yes, it would be very useful to have a single OS image that could scale its user interface and capabilities to the hardware it finds attached at any given moment.
OSX already does this. Got a base model Macbook Air and a big powerful iMac or Mac Pro? Connect a thunderbolt or a firewire cable and boot in target disk mode, you'll boot from the Air's hard drive but get the full hardware capabilities of your iMac or Pro (better GPU, more CPU power, whatever).
Apple's advantage on craftsmanship is already well-established and doesn't really need to be sold (to people who haven't already decided they hate Apple, that is). 64-bit is not the primary differentiator of the iPhone the way a few hundred MHz is the only real difference between laptops in a Best Buy.
I don't see them talking about this any more than they've talked about every iterative processor improvement in the past. Being the first 64-bit smartphone is a slightly bigger deal than selling 2.6GHz instead of 2.4.
I think the "Touch ID" fingerprint matching that was just revealed is way more interesting than their performance improvements. I wonder why they chose to order the announcements like that.
https://twitter.com/iamdevloper/status/540481335362875392
"I think I've had milk last longer than some JavaScript frameworks."