normac's comments | Hacker News

Count me in on this as well. It's very strange because it's not a voice, but in some sense there is sound--I can "hear" when there's a rhyme, for instance.

I've heard it claimed that the reason some people think they dream in black and white is that there wasn't any color content at all (not even grayscale). They think back and can't remember any colors, and assume it must have been grayscale, because their waking consciousness can't imagine what it would be like to see something without color.

Similarly, it's hard to imagine how you could hear words with no voice speaking them (at least it's hard for me) even though we're experiencing it right now!


On the off chance you haven't heard of him, you might want to check out Daniel Tammet [1]. He's a high-functioning savant who has number-related synesthesia, can do calculations by juxtaposing images as you describe, and he's cultivated these abilities to the point that he can perform incredible feats of calculation and memorization.

[1] https://en.wikipedia.org/wiki/Daniel_Tammet


This is such a great demonstration of both how fast JS engines have become in the browser, and how much less efficient native software has become overall. Running on top of an x86 emulator in JavaScript, it's still faster to pop open a file browser and click around than it is on some modern smartphones.


Most modern smartphones run Android, which means they have a bastardized Java VM between your "native code" and actual metal... not unlike having a JS VM.

It's not that "native software" has become less efficient, but rather that it is slowly disappearing.


As of Android 5.0, the Android Runtime (ART) compiles Dex bytecode ahead of time to native code. That probably still isn't as efficient as C++ or the like, because there's still a garbage collector, and Java methods are virtual by default.


  how fast JS engines have become in the browser
I am not so sure. It is more like "JS barely runs software that has a recommended minimum of a 66 MHz CPU".

Maybe it's because it is emulating the whole system, but I was still expecting better.


I agree it is a very cool demo.

Is there any way I can find out the call flow of this JS program?

I'd like to see the actual dynamic call flow graph instead of just a simple static flow analysis output.


That's because the Win98 code is that much simpler. It has less to do with speed and more to do with complexity.

Your phone is doing all sorts of weird shit behind your back as you poke about. Stuff that it's not supposed to be doing if app-makers were actually respectful of their users...

On that note, why does that crap take so long on smartphones anyway?


One tweak that made my G2 feel much more responsive is simply turning down the UI animation length in Dev Settings. That's not to say that app developers don't love their non-native webview apps crammed to the brim with ads, though...


UI animations have a lot to do with it, but I'm referring more to UI jank over simple stuff like 'enumerating a list of sharing targets' or 'swiping to the next page of the launchboard' when the phone is otherwise idle. Or, my favorites: the multi-second pause after unlocking my phone during a phone/Skype call, or the actual eternity it takes just to answer said fucking Skype call, where half the time the caller has already hung up by the time my phone finally responds...


That's why I switched to Windows Phone. It's a whole new experience. I can actually answer my phone without the obligatory wait for it to catch up before I can swipe to answer.


Had a client who later wanted me to add all sorts of nasty things to an Android app template that I had made for him: locking out primary navigation buttons, partially replacing the home app, spying on the user, and the capability to send out very expensive SMS texts and auto-dial long distance phone calls. Unfortunately I was unavailable to do the add-ons he wanted, despite him asking repeatedly over the next 18 months. Never trust an Android app...


I think a big part of it is simply loading in resources and the like. Smartphones try to keep a _lot_ of apps in memory, so there's a lot of switching around.

Another thing is that Windows would put certain native widgets (like file selection) at a higher priority than other program code. Android, at least, tries to put as much stuff in userspace as possible, so you might be experiencing the reality of everything running at the same priority.


I miss the shitty machinery of the win9x days[1]. The weird part is that I was deep into CGI, and compositors made so much sense to allow for all kinds of graphical operations; yet... I miss the refreshed icons, the blinking cursor... as if I were one to one with that simple machine.

That's very subjective; I'm in a minimalist, passéist phase.

[1] Except for the non-isolated driver model.


> I'm in a minimalist passeist phase.

> ...one to one with that simple machine.

Recommendation: Forth (cf. colorForth)

Warning: Save snapshot of current mental state/perspective/worldview first, to guarantee sustained mental health :P


I regularly scan (ansi|gnu|color|*)Forth webpages. Thinking Forth has been on my mental shelf for so long; ML and Lisps keep delaying my reading it.


> I regularly scan (ansi|gnu|color|*)Forth webpages.

I occasionally do too :P

> Thinking Forth has been on my mental shelf for so long; ML and Lisps keep delaying my reading it.

I know the feeling... but I'm still at the "Lisp looks like parenthetical line noise, and what even is ML?" stage, so I haven't yet tackled those.

Forth, to me, seems to bring out the "mechanicalness" of the computer, in a weird sort of way. Of course it's just another programming language, but the philosophy and mentality behind it seem to lean in that direction. I like it for that, and for its minimalism. :D


I love Lisp and ML to bits; it's more that there's a whole cosmos to learn from there (type systems, macros, logic resolution, you name it).

I kinda understand the mechanicalness of Forth, if you mean that there are only a few principles and that even 'syntax' is built on them. The kind of less-is-more that frees your mind.


I suspect smartphones have crippled IO and memory subsystems. Marketing emphasizes "Hexa Core 128-bit Samsung silicon from the future" while the rest is subpar, leading to weird performance. JS engines are beautiful these days, especially given the ad hoc nature of JS, but a JVM should beat them hands down (based on blogs claiming it reaches 1-3x C performance).


I ran this successfully on my smartphone...


You might think it's a great demonstration if you try it in a desktop browser.

Meanwhile, on actual mobile hardware, the performance is so atrocious it's completely unusable.

No knock on the implementors, of course. But the idea that this demo offers some deep insight into the performance of native smartphone software is just inaccurate.

http://cl.ly/2O0L3t290s37


I didn't mean to compare native mobile apps to web apps (though I see now how it could be read that way). I'm talking about native software in general. I chose smartphones for the comparison because they're the most notorious for being slow, even as their hardware performance approaches what PCs were like just a few years ago. Although even some desktop apps manage to be as janky and dog slow as almost anything from the Win98 era, e.g. iTunes until a couple of years ago. By contrast, the cheapest Third World market smartphone would run Win98 apps blindingly fast.

In general there's an arms race between hardware getting faster and native developers getting more and more lazy about efficiency. On the desktop, the hardware is finally winning--it's just too damn fast. Hopefully that will happen with smartphones too.


> Although even some desktop apps manage to be as janky and dog slow as almost anything from the Win98 era, e.g. iTunes until a couple of years ago. By contrast, the cheapest Third World market smartphone would run Win98 apps blindingly fast.

It is easy to take potshots at e.g. iTunes, but the real reason software "got worse" is not developers getting lazy. What happened is that expectations for CPU-intensive features rose (memory protection, ASLR, NX, encryption, low-latency audio, high-efficiency codecs, ClearType) while willingness to pay vanished. In Win98 times, you bought your music player from the developer (remember Winamp?) whereas now it comes free with your OS, which is itself probably free. So of course it is all half-assed now, but it's not "lazy" to spend resources on software someone will pay for, and not on software nobody will pay for.

You could absolutely reverse this, but it has nothing to do with native or web technology, and everything to do with changing consumer attitudes about choosing software.

> On the desktop, the hardware is finally winning--it's just too damn fast. Hopefully that will happen with smartphones too.

From 1995 until today, power consumption in desktop processors grew about 5-15x, depending on how exactly you measure. 5-15x more power on mobile devices is simply not an option, unless we have a "new physics" kind of breakthrough in both battery and thermal technology.


Willingness to pay didn't vanish, it's just that how we pay has changed. No, we don't pay for iTunes, but iTunes is the entry point to the iTunes store, so there's plenty of revenue coming in through that. And while iTunes and OSX are free, the hardware that OSX runs on makes up for that by being more expensive.


It's getting better (particularly on high-end phones), but single-threaded performance is still nowhere near circa-2013 desktop x86_64 (3.5 GHz+ Haswell). And it'll probably never catch up, due to the thermal and battery life constraints.


"Never" is a very dangerous word to use considering that just 70 years ago, the ENIAC was created and that about 20 years ago, we were using Pentiums with 150-200MHz. The fact that we've come such a long way in such a short time means we probably have quite a ways to go, too.


Python occupies a niche that isn't going away any time soon: making it easy and natural to write readable, straightforward, more-or-less imperative, slightly boring code of the type you learned in CS 101.

This is still a very practical way to solve many problems and I'd wager for most programmers it's still the easiest way to do things. Maybe it will always be. It's hard to imagine there'll be a generation of programmers some day that finds it easier to compose dozens of tiny modules, chain callbacks with a variety of async abstractions, and implement as much as possible in tiny idempotent functions.

I feel like the worst case scenario for Python is that it will fade into the wallpaper of mature and unsexy languages like Java and C++ that nonetheless run the world and will probably be around for another 100 years at least. I'm guessing Guido would be cool with that.


At my work we recently needed to hire a developer. We gave all the candidates a very basic problem to solve and told them "use any language, use any libraries". The idea was to get a feel for their coding style - do they comment their code, is their logic something the rest of the team could follow, will they address unmentioned issues, will they press for clearer requirements, etc.

A few notable solutions:

1) The C guy. Damn if he didn't blow that problem out of the water. I never want to be responsible for anything he coded. Entirely too complex, no comments, lord knows what side effects he put in place.

2) The Java girl. Didn't finish. Didn't do any logging. Very logical separation of code. Lots of comments. Had to continually reference a text file in order to run the command to show output.

3) The .NET guy, who attacked the problem with Python. Imported a handful of libraries. Wrote 14 or 16 lines of code. Completely baffled that we would provide such an easy problem. When asked why he didn't use his strongest language, he laughed.

One of them got hired and won't have to write a single line of .NET anything for a very long time.


One of my favorite pieces of code I've written is a 150 line Python script I made to solve a really ugly text processing problem. I have tried to use it as a code sample when talking to potential employers, but it backfires because it makes the original problem look so simple that they wonder why I bothered to send it.


If it's any consolation, Peter Norvig's Sudoku solver [0] seems so readable on the surface that I've fallen into the trap of thinking it looks easy, or that I fully get it. :)

0: http://norvig.com/sudoku.html


Maybe you should try a reverse interview process. Send them the original problem, ask them to have one of their top developers solve it, and then compare solutions.


Is this a real thing?


It should be.


I once wrote a custom report generator that could pair basically arbitrary input formats (pluggable; I think by the time I was done I'd written importers for CSV, fixed-width, and JSON data) with custom PDF output, with fully user-definable formatting/elements - using XML "templates" just because I didn't want to write a custom parser. It included loops, if statements, etc., and some pretty fancy output features (e.g. output N records per page, with custom sorting, headers/footers/etc.). Used ReportLab for the PDF generation. The whole thing was under 1k lines of Python 2.


Why can't Julia fill that role?


Try to do machine learning in Julia; it is difficult. If I want to, say, fit a gradient boosted tree or an SVM, it's not supported in Julia, while it is available in Python/R libraries. Also, with Spark being more and more popular in the data science landscape, the lack of Julia bindings is a no for the data scientists I work with (Spark has Python/R bindings).


You will have to reach a little deeper for machine learning methods not supported by Julia. XGBoost has a Julia interface, and you can google "Julia SVM" for a myriad of alternatives. Packages like Mocha and MXNet are a few deep learning alternatives. PyCall is also an easy solution for interfacing with Python for things such as pyspark. Julia also has some of the most convenient-to-use parallel/distributed computing tools for numerical computing.

The point being that even if Julia is not there to replace Python, there is still a strong case for using it as a way to augment a Python workflow.


If Julia just wants to replace Python as the glue interface, it seems to have no chance of winning... What can it do, as a glue layer, that Python cannot do?


It's much easier and lower-overhead to call into C, Fortran, and soon even C++, from Julia than it is from Python. If there's a library in Python but not yet in Julia, it's really easy to call into Python from Julia.

What you can do in Julia that you can't do in Python is write high-performance library code in the high level language. If you need to write custom code that isn't just using stock numpy or scipy algorithms right out of the box, and needs to use custom data structures and user-defined types, Julia is a fantastic choice. You can try with Cython or Numba or PyPy, but you're either working with a limited subset of the language, or forgoing compatibility with most of the libraries that people use Python for.

Julia feels like writing Python but does not allow some of the semantically impossible-to-optimize behaviors that you can find in Python, and has a type system that you can use to your advantage in designing optimized data structures for the problem at hand.
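
To make that concrete, here is a minimal sketch (mine, not the parent's) of what the Numba route looks like on the Python side: a hand-written loop that isn't a stock numpy/scipy call, jit-compiled to native code. It assumes numba and numpy are installed, and the function and data are invented for illustration.

  # Minimal sketch of the Numba route: a custom loop that isn't a stock
  # numpy/scipy call, jit-compiled to machine code. Assumes numba and numpy
  # are installed; the function and data are invented for illustration.
  import numpy as np
  from numba import njit

  @njit
  def pairwise_min_dist(points):
      """Smallest Euclidean distance between any two rows of `points`."""
      n = points.shape[0]
      best = np.inf
      for i in range(n):
          for j in range(i + 1, n):
              d = 0.0
              for k in range(points.shape[1]):
                  diff = points[i, k] - points[j, k]
                  d += diff * diff
              if d < best:
                  best = d
      return best ** 0.5

  pts = np.random.rand(500, 3)
  print(pairwise_min_dist(pts))  # the explicit loops compile down to native code

The catch, as noted above, is that code inside @njit has to stick to the subset of Python and numpy that Numba understands, whereas Julia lets you use the whole language in the hot path.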


Thanks for explaining this.

As to my own experience dealing with data, the degree of freedom I have, as basically a programmer, is small. Specifically, I have to think about and keep the tools in mind from the start. That might not be ideal, but I can't avoid it anyway.


It's not meant to compete with Python as a glue language. The point is that you can start using Julia right now and be productive by calling other languages' libraries to fill in the holes.


The question is: why would I do that? Because Julia is new?


Well, what do you work on? Julia isn't for everyone.

I use it because it has quite good numerical primitives, and I can quickly make a slow, Python-like first pass at an algorithm, then profile and get C-like performance in the bottlenecks with minimal effort. And if I need a particular library, I can call Python's. Also: macros and multiple dispatch make a big expressiveness difference for my type of work.


Lifetime values, customer segmentation, lead scoring, customer life cycles, customer attrition and also quite a bit of reporting. Some text analysis. I use R because it offers superb speed of development, extensive documentation, commercial support and many partner opportunities with the likes of Oracle, Microsoft, Alteryx, Tableau, Tibco and pretty much every analytics vendor. In my experience, R's slowness has been greatly exaggerated.


Yeah, I would use Python for this kind of task, too. Vectorized operations are fast enough in a lot of cases, and the library advantage is important. At this point in time, Julia is a great C/Fortran replacement, but for Python/R/Matlab, it's a trade-off.


Because it's young and we still don't know how it will pan out.


Which role? A general purpose programming language with an emphasis on readability?


Yes... but one that is also very fast, portable, and has great support for generic programming.


I had never considered Julia as a general purpose language. I assumed it was targeted mainly at data science etc.


The core language is general purpose, and IMO really good for high-performance work, but the community is focused on numerical code, so there aren't a lot of libraries for non-numerical/scientific/financial work at this point in time.


Stuff like Numba greatly decreases the need for Julia.

C++17 also feels surprisingly dynamic, and together with Cython for easy Python-C++ interop also decreases the need for Julia.

And reports from the Julia world are not exactly encouraging - http://danluu.com/julialang/


That report is over a year old, which for a four-year-old language is a very long time. The top comment in [1] is from a week ago and highlights why it is now mostly invalid.

[1] https://news.ycombinator.com/item?id=11070764


Totally out of left field here, but I got some auditory synesthesia from watching this, especially on high speed. If any of you did as well and are interested why, it's probably the same phenomenon talked about here: https://www.newscientist.com/article/dn14459-screensaver-rev...


Interestingly, Tcl was parsed the same way all the way up until Tcl 8.0 introduced a bytecode system in 1997. Each line was parsed right before execution with no separate parsing stage--your code could crash in the middle of executing from a syntax error.

Tcl is now about as fast as a typical scripting language, and its implementation is regarded as a model of high-quality readable code. I wonder if John Ousterhout (the creator of Tcl) knew he was building a rough first system that he would need to rewrite later, or whether he really thought it was the right way to do things at the time.

(To be fair, Tcl is a whole lot simpler to parse than JavaScript and Ousterhout is a good coder, so even the pre-8.0 versions were fast enough to be practical for everyday scripting tasks.)
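
For illustration, here is a toy sketch (in Python, not Tcl) of that parse-right-before-execution model: each line is compiled only when the interpreter reaches it, so earlier lines have already run, side effects and all, by the time a syntax error is discovered. The script and function name are made up.

  # Toy sketch of the "parse each line right before executing it" model,
  # in the spirit of pre-8.0 Tcl. Earlier lines have already run (and had
  # their side effects) by the time the bad line is even parsed.
  def run_line_at_a_time(source):
      env = {}
      for lineno, line in enumerate(source.splitlines(), start=1):
          if not line.strip():
              continue
          try:
              code = compile(line, "<script>", "exec")  # parsing happens here
          except SyntaxError as err:
              print("syntax error on line %d: %s" % (lineno, err.msg))
              return env
          exec(code, env)
      return env

  script = "x = 1\nprint('already ran:', x)\ny = = 2\nprint('never reached')"
  run_line_at_a_time(script)

Running it prints the first lines' output and only then reports the syntax error on line 3, mirroring how a pre-8.0 Tcl script could crash partway through.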


Most shell interpreters follow this model; perhaps he started Tcl as an improved shell scripting system and only later improved it? It would explain some of the syntax choices, which are fairly reminiscent of csh with some Bourne shell thrown in.


That's pretty much what Tcl is, even to this day. Ousterhout saw scripting languages as extremely high level tools used to issue commands to the low level code that does the heavy lifting. It's very much the same philosophy as shell scripts, except you implement new commands as extensions to Tcl instead of command line programs, and it all runs in the same process.

Of course, the language has matured and now it's also usable for building rich and complex apps top to bottom, just like any modern scripting language.


> Interestingly, Tcl was parsed the same way all the way up until Tcl 8.0 introduced a bytecode system in 1997. Each line was parsed right before execution with no separate parsing stage--your code could crash in the middle of executing from a syntax error.

Fun fact: most modern JavaScript engines in browsers work this way today, though at a different level of granularity. Parsing slows down application startup, so many JS engines don't parse a function body until it's first called. This means you can have a syntax error in a function body and never find out about it if the function is never called.


That's pretty neat. Clearly some level of parsing needs to happen before run time, or else it couldn't even balance braces to know where the function body ends. So it must be that it parses the function body just enough to figure out where it starts and ends, then does a complete parse at runtime (probably building off the results from the first stage).


> or else it couldn't even balance braces to know where the function body ends

That's literally all it does. It tokenizes and counts braces.
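
A rough sketch of what that pre-scan might look like, assuming nothing about any particular engine: skip to the end of the function body by counting braces (ignoring braces inside string literals) and record only its extent, deferring the full parse until the first call. Real engines also have to deal with template literals, regexes, and comments, which this ignores.

  # Rough sketch of the pre-scan: find where a function body ends by counting
  # braces (skipping braces inside string literals), without parsing the body.
  def skip_function_body(src, open_brace):
      """Return the index just past the '}' matching src[open_brace]."""
      depth = 0
      in_string = None
      i = open_brace
      while i < len(src):
          c = src[i]
          if in_string:
              if c == "\\":
                  i += 2            # skip an escaped character inside a string
                  continue
              if c == in_string:
                  in_string = None
          elif c in "'\"":
              in_string = c
          elif c == "{":
              depth += 1
          elif c == "}":
              depth -= 1
              if depth == 0:
                  return i + 1      # body extent known; full parse deferred
          i += 1
      raise SyntaxError("unbalanced braces")

  js_src = 'function unused() { return {a: "}"}; } console.log("hi");'
  end = skip_function_body(js_src, js_src.index("{"))
  print(js_src[end:])               # -> ' console.log("hi");'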


I used to work with John and founded a company with him.

Whatever he believed, he would have convinced you he was right :-)


This kind of thing makes me wonder if there are dozens or hundreds of other cases where people were lost at sea and survived for many months and thousands of miles, but never made it to land so we'll never know.


This raises an interesting problem--could we find the word even faster by using something other than a standard binary search?

The first thing that comes to mind is skipping past letters that don't appear very often at that point in a word with the current prefix--or, to be more granular, you could change your counter (here g) to a float and somehow weight each letter by how rarely it occurs after the letters you've established so far. So if you've currently established that the first three letters are "pro", and "z" almost never occurs after those letters, "z" might be given a weight above 1 so the counter skips right past it.
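
For what it's worth, here is a rough sketch of that weighting idea, assuming you have a word list to estimate how often each letter follows the current prefix. The word list, helper names, and step formula are all invented for illustration: the most common continuation steps by 1, and letters that never follow the prefix get the largest step so the counter hurries past them.

  # Rough sketch of the weighting idea: per-letter step sizes for the counter,
  # estimated from a stand-in word list. The words, names, and formula are
  # all invented for illustration.
  from collections import Counter
  from string import ascii_lowercase

  WORDS = ["probe", "problem", "process", "program", "project",
           "promise", "protect", "proton", "proverb", "provide"]

  def step_sizes(prefix, words, max_step=4.0):
      """The most common continuation of `prefix` gets a step of 1.0;
      letters that never follow it get a step of `max_step`."""
      follows = Counter(w[len(prefix)] for w in words
                        if w.startswith(prefix) and len(w) > len(prefix))
      top = max(follows.values(), default=1)
      return {c: 1.0 + (max_step - 1.0) * (1.0 - follows[c] / top)
              for c in ascii_lowercase}

  steps = step_sizes("pro", WORDS)
  print(steps["t"], steps["z"])  # -> 1.0 4.0: 't' is common after "pro", 'z' never follows it

A fancier version could use letter statistics from a real dictionary, but the shape of the idea is the same: the counter's increment depends on how plausible the current letter is given the established prefix.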

