Everyone on this thread is so dismissive of the language that they forget to credit the accomplishments of these wonderful boards.
See, Javascript may be a 'bad' language according to many of you, but it has massive adoption, unlike other languages. These board creators just want to ease the path for most web developers to become hardware developers. It not only opens up a whole new industry to work with, but also creates a good 'filter' to filter out the bad ones. I will explain.
The thing about hardware products is that most people don't care about internals. Most of them care about the experience. I am NOT an Apple fanboy, but on this occasion I would like to cite the iPhone's sales as a good example. If you suck at programming in Javascript, it will show, easily and especially in the hardware world, and you/your product will be rejected.
Also, when you develop, say, a DSLR Quadcopter[1] with this board, people aren't going to ask you "What language is it running on?" or "How slow is your language?". People are going to ask about the footage you're going to film with it. Let's not devolve into hatred of a language. Instead, let's take the time to appreciate what these developers have achieved and what we can build with these boards.
> These board creators just want to ease the path for most web developers to become hardware developers. It not only opens up a whole new industry to work with, but also creates a good 'filter' to filter out the bad ones. I will explain.
What exactly happened to the programmers who, upon realizing that they had to use a new technology, learned the new technology instead of feature-creeping something until it reaches their familiar JS boat?
I'm not going to rant that this is never going to fly. Were it based on technical considerations, the current incarnation of Web 2.0 would have sunk like a rock, let alone tried to fly. But this is so horribly inefficient and backwards that encouraging it seems incredibly detrimental to our field.
Come on, people, it's not that hard. If you want to do embedded development, a reasonable subset of C is all you need to know, and it certainly takes less time to learn than it took to learn half a gazillion JS frameworks. Ask your boss to give you a one- or two-week leave and a break from your 70-hour-a-week streak, and learn something that's actually new for a change.
The key draw of running Node.js code in an embedded environment, for me, is that it works out of the box with all the web libraries I'm already familiar with. With Node.js I can easily leverage existing modules to do things like make a device that responds to social media: it accesses my Twitter account, and when it sees a new tweet mentioning my company, it makes a light flash or something like that.
I would hate to try to code such a program in C for connecting to the web. Sure, I could definitely do it if I took a couple of weeks off, but by the looks of it I'll be able to write that program in under an hour using the Node.js I already know and the libraries that the open source community has already developed.
Anyway, that's why I ordered a Tessel and am looking forward to developing with it. If it turns out that I really fall in love with embedded programming, then sure, I'll bust out the C compiler and learn the low-level coding. But in the meantime I welcome the chance to learn about hardware in a familiar environment where I can get stuff built quickly using a toolset I already know.
> I would hate to try to code such a program in C for connecting to the web.
Why would you want to connect a wall switch to the web in the first place? The Internet of Things doesn't necessarily have to mean the WWW of things.
For instance, you can always expose low-power devices through a low-overhead, low-power & short-range communication protocol to a <that-protocol-enabled> router. It also makes sense to have all of those devices configured from there (indeed, via a web interface exposed on the gateway) rather than having a web server on each of them.
For what it is worth, this is one of the upcoming stretch goals:
nRF24 – low power wireless communication with mesh capabilities (good for tying lots of Tessels together without WiFi)
So one Tessel can be the web server to connect a cluster of Tessels communicating via nRF24 to the web. Or you could use one Tessel as the router to connect a bunch of nRF24 Arduino devices to the web.
Come on, people, it's not hard: if you want to program computers, reasonably structured assembler is all you need to know, and it certainly takes less to learn than it took to learn half a gazillion C libraries.
That's a caricatured view, from which I gather you haven't done much embedded programming. A subset of the (famously thin) standard library is quite sufficient.
Of course it's caricatured. That's the whole point of a reductio ad absurdum argument. You're basically making the same argument as the old guy yelling at kids to get off his lawn.
Out of curiosity, what do you think about Lego Mindstorms? Especially the later versions which allow a large multitude of programming languages to be used?
Leveraging the great Node ecosystem comes at a significant cost in power consumption, manufacturing complexity, board size, and the complexity of the system itself.
Power consumption and, to some extent, board size have been very significant drags on the Internet of Everything. Mindlessly throwing libraries at these problems just to help people who are too lazy to learn a new programming technology only adds to that drag.
I certainly don't want to drag this into the mud. It's certainly a good learning platform which can provide exposure to a range of devices for people who would otherwise not even hear about them, or for whom the initial technological barriers would be too high to overcome in a single evening. But this is far, far from adding any kind of value to the struggle towards universal Internet connection.
I'm not disagreeing with you completely. I don't think I would be thrilled creating a consumer product with this kit. However, you have to admit that this would be ideal for rapid prototyping. Unlike you, I have never programmed hardware. I think that what I would do with this product is build a prototype or two or three, just to the point where I get a proof of concept nailed down. From there, I'll be extra motivated to reduce the size and improve power consumption, reliability, and responsiveness by learning C in the context of embedded programming.
If what you say is true, it's a bad sign for the future of hardware. If we fundamentally can't leverage existing libraries, we can't build the standing-on-the-shoulders-of-giants complexity pyramid that allows for such reliable, rapid development in other spaces. It will always remain a niche field.
Of course we can leverage existing libraries and standing-on-the-shoulders-of-giants complexity pyramids. Slapping a webserver on them isn't necessarily a good approach though.
I think this is a great idea for getting more people to experiment with hardware projects. Then they can move on to learning more things. But for a first try the barrier to entry needs to be as low as possible.
I quote: "leverage the great node ecosystem". That ecosystem is not written with embedded performance in mind, so it is useless here.
Embedded dev is all about performance, power consumption, etc. It is about focusing on the hardware, not about how many libraries you can throw at a problem; you usually don't throw any, because of limited memory and computational power...
I'm getting really tired of this "it has massive adoption" trope.
Let's be clear about it, the merits of the language have nothing to do with its adoption. It's widely adopted because Javascript has a stranglehold on the browsers and people don't have a choice. Nowhere in the software space would such a monopolistic position be acceptable, but hey on the web for some reason it's OK.
So yeah, it's the dominant language, but not because it's oh so good or because people think it's awesome (though some of them do, and I respect that); it's only because there are absolutely no alternatives.
> Let's be clear about it, the merits of the language have nothing to do with its adoption.
Having a platform that is already out there and which you can execute against without even an install might not prove much more about the merits of a language, but it is a merit in its own right.
Of course, in this particular case... JavaScript isn't exactly widely deployed in hardware, and this device is what provides the platform so... yeah.
> It's widely adopted because Javascript has a stranglehold on the browsers and people don't have a choice.
That's not entirely true. People have had lots of choices. There was Java. There was Flash. There was even VBScript in IE (I bet you didn't know that), not to mention all the Emscripten fun that is now available. Time and again, people choose JavaScript, if for no other reason than its lowest-common-denominator qualities.
> Nowhere in the software space would such a monopolistic position be acceptable, but hey on the web for some reason it's OK.
If you consider the list of widely used languages for which there is an approved standard by an independent standards body and multiple independent implementations, you end up with a surprisingly short list (and some qualify but by the skin of their teeth). I think, sadly, this is actually quite widely accepted.
Come on. VBScript and Flash? Really? What I meant, and you pretend not to understand, is that there is no way to use another language the way you use Javascript when scripting a web application. I can't open a console and go
@document.query_selector_all(".fancy").each do |element|
...
end
I find that annoying. That a few companies have tried to introduce proprietary extensions is another topic, it doesn't mean that there is a real choice for developers.
The point is that some people decided some time ago that scripting the web and accessing everything browser-ish (DOM, Canvas, SVG, WebSockets, Web Workers, etc.) meant Javascript, and that was it. The lack of choice and variety at hand is absolutely ridiculous. In no other software space would people tolerate being forced into a technology like that.
> That a few companies have tried to introduce proprietary extensions is another topic, it doesn't mean that there is a real choice for developers.
I spoke in the past tense because those were choices available to developers... JavaScript seems to have won out as the preferred choice. Emscripten opens up a lot of possibilities, though...
> In no other software space would people tolerate being forced into a technology like that.
Good arguments. We had Silverlight, and Adobe AIR / Flex, which may or may not have been trying to displace JS - but they were ways to program in the browser. Sure, they were put forward as solutions by companies that had an agenda.
Such a language is also dependent on HTML standards.
We should see interpreters for other languages written in JS - or is that far-fetched?
Acting like Java or Flash are, or ever were, alternatives to Javascript displays a very profound ignorance of what Javascript is for. How do I manipulate the DOM in Java? Oh, I don't? Then it is not an alternative option. Javascript absolutely is the only option, and people have no choice. That is why people invest so much time in writing LANGUAGE_X to javascript compilers, so they can write code in a less terrible language even though it has to be deployed as javascript.
Flash has similar (arguably much better) APIs for this as well. There are graveyards filled with other attempts.
> Javascript absolutely is the only option, and people have no choice.
Being the most broadly supported and most integrated solution doesn't mean developers have no choices. It just means JavaScript might be their best choice. On almost any platform there are going to be certain programming languages that are more integrated and better supported.
> That is why people invest so much time in writing LANGUAGE_X to javascript compilers, so they can write code in a less terrible language even though it has to be deployed as javascript.
And there you have it. JavaScript wins by virtue of being a better deployment platform in the browser space. At one time Java used to enjoy an even broader advantage. Acrobat, Flash, VBA, Bourne Shell, MSI, Unix DBM, SQL, sendmail, PHP, MySQL, Windows, Linux, C, POSIX, PostScript, XWindows, etc. have all ridden these kinds of waves. Some are more successful than others, and certainly the web browser is probably the most ubiquitously deployed runtime environment ever, but really, if anything this is normal and the broad diversity is not.
It's not unfair or unusual, so much as inevitable.
> So, write javascript to trigger a java applet that changes the dom is a reasonable alternative to writing javascript to change the dom?
Depends on what you mean. These days people often have Applets disabled. But if you have an Applet with the access to do so, you can manipulate the DOM as much from it as you could from JavaScript.
> Yes, it literally does. When presented with one option, you have no choice.
There are lots of options; it's just that one is better than the others.
>> And there you have it. JavaScript wins by virtue of being a better deployment platform in the browser space
> Are you serious?
Absolutely. People made a go of it with Applets. Once they realized they could get the job done with JavaScript, they dropped Applets like a hot potato, to the point where Applet support is now being removed entirely.
Most other in-browser programming environments are platforms in their own right that talk to the browser platform. JavaScript's platform is the browser, and that turns out to make a big difference.
> Just that javascript sucks and we're stuck with it against our will.
You really think pulling the "you are too dumb to understand my bullshit" card is effective? Javascript is used to make changes to the DOM in response to the user doing things. User clicks a button, stuff changes. Have you ever actually tried doing that with a Java applet? Have you noticed how 75% of the API doesn't actually work in any major browser?
...and there you go again about lack of support. Have you noticed that Java Applets just generally aren't supported anymore? We tried it (and yes, all the event handling worked). No one used it, and it exacerbated security problems.
People arguing about programming languages are like people who focus more on cameras than on the art of taking good photographs.
Those of us defending JS or PHP or VB (in discussions which aren't about programming languages) aren't suggesting we should take a point-and-shoot (or a leica) to an action game.
> People arguing about programming languages are like people who focus more on cameras than on the art of taking good photographs.
Hmm... just to play on that metaphor: there are certainly camera choices that can make the process of learning to take good photographs easier or harder. Isn't that a relevant point?
Your time is better spent learning a few things about composition and picking an interesting place to take your first pictures than it is making sure you have the best camera.
You don't need to have the best camera, it's true. But you do need to have an adequate camera, where adequacy relates to the purposes you have.
For example, I used to have a little digital compact. It had autofocus; the autofocus wasn't perfect, or even particularly great, and there was no way at all to focus manually. I have countless pictures that were ruined by being out of focus, and there was nothing I could do about it. I now have a camera which lets me focus manually, which means that a picture's being in focus or not is now entirely in my hands. The former camera was not adequate for the photographs I wanted to take; this one is.
An interesting point is that a 40-year-old film camera would also have been adequate, although much less helpful in other ways.
This feels like it could be a good metaphor. C is a 40-year-old SLR covered in dials and switches, enormously capable but a nightmare to work with for anyone but a master; JavaScript is a digital compact which automates everything whether you like it or not. Java is a modern dSLR, capable and more automated than C, but still clunky. Rust is a Leica M9, still manual but modern in other respects. Go is a bafflingly-horrendous-to-outsiders Lomo camera. PHP is a Fisher-Price toy camera. Clojure is a Lytro, weird but capable of amazing things (but weird). Scala fans think their language is an E-M5, but it's really an EOS M.
> The former camera was not adequate for the photographs I wanted to take; this one is.
Right, but that doesn't necessarily comment on its effectiveness as a tool to assist in learning how to take good photographs.
> An interesting point is that a 40-year-old film camera would also have been adequate, although much less helpful in other ways.
This is more in line with what I'm getting at. There are perfectly good (even great) cameras out there, but some cameras are better suited to facilitating the learning process, and others will hamper it. While perhaps not the most essential component of the learning process, they still matter and are a perfectly reasonable aspect for someone to discuss.
Language shapes the mind; it defines the boundaries and the kinds of solutions that can or can't be built in a reasonable time.
It looks like feigned enlightenment to say that languages don't matter, that only the person behind them matters. Well, if that is true, do the people behind the languages mean nothing? Does only the work created by those using the languages matter, and not the work that made THAT possible?
The tool matters. You can build a city with only a hammer, but that is stupid. Some languages ARE better than others. Some ARE faster. Some ARE more legible. Some ARE more performant. Some ARE safer. Some ARE more productive.
Maybe two languages that are close in their objectives yield only small returns, but surely order-of-magnitude improvements exist between different groups...
Agreed - these guys, like Raspberry Pi, are making a bet on Moore's Law. That's been a good bet for forty years and will keep being a good bet for a long while yet - certainly long enough to make the cost of interpreted JS insignificant.
And yet our operating systems are still written in C(++).
Javascript web code epitomizes the "long tail" of random ideas being articulated. The code is written quickly, needs to change a lot in response to user behavior and designer inspiration. It's usually < 10k lines, so its terrible medium- and long-term maintenance characteristics are manageable. It's written by millions of independent teams to bring to life millions of small ideas used by (typically) only a few users. And that's great, we really need languages for that.
But, humor me for a minute, and let's define "the C law": Anything important enough to be used broadly will eventually be replaced by something written in C(++). Why? Because solution X not written in C is always vulnerable to replacement by solution Y written in C with the equivalent feature set, and the demand is now high enough to provide the time and talent. See: every python module ever used by more than 1M people. See: every programming language implementation (incl javascript and luajit!) ever used by more than 100k people. See every operating system, every database used by more than 1 million. See every web server running top 1000 websites. See (almost) every tech startup that becomes a fortune XXXX company, and rolls through its infrastructure rewriting its ruby or its python or its perl in C/C++/Java. See why we're not all using jitted PyPy yet (hint: those pesky c modules make real world cpython often just as fast or faster and more memory efficient).
Languages that trade performance and correctness for productivity are fantastic when the project is young, small, and of dubious value (yet), or narrowly targeted and not generally interesting. Or the browser gives you no other choice.
Consumer hardware product development just doesn't align with this kind of thinking. It races right past the C law threshold. Physical stuff introduces serious economies of scale, design costs, significant difficulty of change, etc, where shipping a few hundred of something doesn't make a lot of sense (as a business; maybe as a hobby project).
So, if you're going to (aspire to) ship a million of something, investing in the software side to keep costs down, maximize battery life, mitigate risks related to a misapplied language runtime (not designed or heavily tested on embedded), guarantee performance and latency characteristics, etc, makes sense--the compromises that are appropriate for a website because you want to make a little gamble fast don't apply... you're making a pretty big gamble, and it will take awhile to get right anyway, and your capital needs are higher in general, so doing software "right" to save on COGS is just practical.
BTW, "C" here is usually C or C++, but it can very occasionally be Java [see: zookeeper] or some other JVM thing like Scala. Regardless, it basically represents the final state reached by the tool/language/platform/project race. If your project could be replaced by a version that says "like $project, but fast/battery efficient/cheap!", you are not yet at the end game, and the C law could always be invoked (if the demand was sufficient), and your project will probably ultimately lose the lion's share of the market (or open source mindshare, or some other version of "market").
So, maybe this project is betting on our lives transforming into everyone paying more for their hardware, and everyone using a lot more small/local/boutique-type stuff created by small hardware teams. This doesn't really jibe with the way technology products are currently marketed, hardened, and distributed, so I'm operating on the assumption that change doesn't happen anytime soon.
It could be useful/fun for hobbyists or prototyping, though.
You are right - for any given level of expected unit sales, there is a point above which using C from the start is sensible.
But ultimately there is always a market below that point - people are selling Arduinos now for surveillance or monitoring simply because the market is too small for anyone skilled enough in C to bother.
As the price/performance of the hardware drops, more of these markets will open up - today the "stuff it, use C" point is maybe a hundred units or a thousand; tomorrow, a million; then 10 million.
Forgive my hardware ignorance, but imagine a system on a chip with the CPU and memory and buses of, say, today's MacBook Air. Everything you want, on a thumbnail - just hook up electricity. If I could buy that for 2 cents, I would be foolish to write almost anything in C until I got real-time stats from my first 5 million users.
Twitter did not drop Ruby for Java until they were at the billions-of-messages level. There are an awful lot of markets and price points between 1,000 units and a billion units.
(Especially now, when we can realistically talk about every human being having a handheld in x years - which blows my mind, but for good reasons.)
PS - no, I do not think Moore's law will hold in terms of transistors on a chip forevermore, but we have barely scratched the surface of "everything on a chip".
(I would be interested in the feasibility of literally a PC on a chip. If we took a literal count of the transistors in, say, a model two years old and then looked at the price to fab that many transistors today, what would we see?)
Edit: rereading, it seems to indicate that there is hardware out there where C is the sensible option - I am just trying to say that there is a spectrum of 8-bit assembler / embedded C-like / DSLs / anything a PC might recognise, and that climbing that spectrum is inevitable given hardware price/performance.
Moore's law doesn't apply to batteries. There's nothing stopping you from putting the MBA chip in a phone... except energy consumption.
If your program in Javascript takes 14 times as long to run as the equivalent C version and yet still manages to be performant from a user-experience perspective, then you'd better be close to an outlet, because we're talking about a beefy and energy-hungry processor.
There's a sweet spot related to programming effort and power consumption. I don't think Javascript can hit that sweet spot yet for most devices.
I'd argue that the same problem, but to a lesser degree, is present in other non-mobile devices. I'll buy the one that costs $20 more but costs me $20 less per year on my energy bill.
I think we may even see a bit of a reversal in the current trend of programmer productivity over program efficiency. Clusters of cheap multi-core servers consume significant energy.
I remember mobile phones that had to be kept in the car because the batteries were like dumbbells. We lived with it because it was what we wanted.
There are going to be physical limits to battery technology, to processing power, to all of these things. But those limits are not upon us yet, and I have a suspicion the true physical limits lie in a place that will make us 20th-century simpletons look like slack-jawed savages.
So, your conjecture is that physics will continue, but our appetite for new features will plateau? I don't buy that, personally; in fact, I think it is exactly the opposite. I want a calendar program that is smarter than me. I want search that understands my tastes in restaurants, shoes, books. I want software that immediately translates, perfectly and idiomatically, any language. I want to talk to my phone. I want it to compose music for me. I want a pony (okay, that last one is a different list).
Now, we do know one limit - the human brain fits in an X-sized area and requires Y-sized plumbing and an energy store to accomplish what it does. My feature requests exceed what any one brain can do. I'd be astonished if we could shrink that to phone-sized, but maybe, just maybe.
At that time we can renew this conversation. Until then, burning batteries, and running data centers, matters. Until then, C (or a safer version thereof) matters.
> ultimately there is always a market below that point - people are selling Arduinos now for surveillance or monitoring simply because the market is too small for anyone skilled enough in C to bother.
I agree, and this could be interesting for some of those small volume domains--but their copy in the slide deck is broadly encompassing and far reaching, implying mass-market products like Nest. I'm just taking them at their word and addressing that application.
> And yet our operating systems are still written in C(++).
The last time a mainstream OS was developed from scratch was the late '80s/early '90s (1989 to 1993), with Windows NT. Linux is just a kernel for an OS from the early '80s (GNU). OS X is a derivative of NeXTSTEP, also developed in the second half of the '80s. C was the state of the art back then.
The investment required to build a competitive mainstream OS from scratch in a new language is huge - probably 10x to 100x the effort to build Windows NT, which took 250 people 5 years. I think there would be a lot of value in developing an OS in a programming language that deals intrinsically with security and multi-processing, but at this point it's too expensive to do that and match other OSes feature for feature.
Most of that effort would be spent writing drivers. Writing a scheduler (how many has Linux had so far, 3?), process manager, virtual memory system, simple filesystem and networking (TCP/IP) stack is not that much work. Supporting most existing hardware, however, takes years.
EDIT: for an example, compare linux/drivers/ to linux/kernel/ (core kernel code, including process management and scheduling) and linux/mm/ (memory management) in Linux. The former is huge.
> So, maybe this project is betting on our lives transforming into everyone paying more for their hardware, and everyone using a lot more small/local/boutique-type stuff created by small hardware teams. This doesn't really jibe with the way technology products are currently marketed, hardened, and distributed, so I'm operating on the assumption that change doesn't happen anytime soon.
So: it didn't, and it won't. At least not mass-market. Maybe for very specialized, low-unit-volume kinds of applications.
This concept is of course not going to amount to anything more than hobbyist or prototyping/POC use for quite a while, purely from a cost perspective. You can hire a master C programmer and pay him $100 an hour for a year, or an advanced JS programmer and pay him $40. Well after launch, you may have saved $150k, but your hardware costs $50 extra per unit to produce, requires more power to run, and has lower performance and timing precision (not real-time). Your break-even point is now 3,000 units ($150k saved / $50 extra per unit). This is a non-starter: other engineering, design, and marketing costs will likely land you in the red if you are only producing 3,000 units. Perhaps in some niche markets this concept might be viable (like $500k plug-in medical devices), but I doubt it.
There is no way to really get around learning hardware. Who is going to write the hardware interface drivers or the driver/JavaScript bridge?
Power consumption will rule out many portable devices; loss of performance will rule out many options, as will the latency caused by the garbage-collected (non-real-time) runtime.
This is the 'write everything in assembly' argument.
Productivity and prototyping gains that let you prove a concept, then manufacture and sell small runs of (expensive) toys, enable the large-scale investment needed to manufacture more mature products.
There's absolutely nothing wrong with this approach, and, honestly, the 'big bang' approach of writing it all in C and producing 500,000 units before actually getting them out to anyone is extremely risky.
That's why hardware doesn't get investment.
Just look at kickstarter; great ideas popping up, people wanting them, people getting them.
If you only ever end up making 5,000 units for your 5,000 backers, so what? Great idea. Prototyped a thing. People who wanted one got one. Not a mass-market thing? OK. We're not $4,000,000 in debt.
> That's why hardware doesn't get investment. Just look at kickstarter; great ideas popping up, people wanting them, people getting them.
What exactly leads you to believe that hardware doesn't get investment? Besides Kickstarter, I mean. Hardware development tends to be massively funded. You get less exposure for a lot more money, which is why Kickstarter is obviously not a good place to look, but in most projects I've seen it was software, not hardware that tended to be underfunded.
I've practically never heard of people getting funded with hardware ideas outside of Kickstarter.
If you've got some links to incubators/hardware startup scenes, please share.
I honestly can't think of any hardware startups which have been funded off the top of my head; maybe Nest? MakerBot (weren't they privately funded by the founders)?
> I've practically never heard of people getting funded with hardware ideas outside of Kickstarter.
You mean you've never heard of small startups getting funded with hardware ideas, which is unsurprising considering that it takes significantly more money to develop a working prototype than it takes to develop a web application.
Kickstarter is not the only place where people get funds, and startups with four people working 80-hour weeks are not the only places where innovation happens.
There are things like littleBits, of course. But the fact that new electronic gadgets keep hitting the shops is a clear enough indication that they get funding from somewhere.
It's not just about the money it costs to write the code. I mean, as a master C programmer I am not without sympathy for your suggested approach, but if you follow the philosophy of 'real men write the entire stack in hand-optimized C' and as a result it takes an extra six months before you're shipping, you might miss a key market window, and then all the other considerations might end up not mattering.
It's not like we haven't seen this play out before. Over and over again, flexibility, easy debugging, time-to-market and Moore's law have gone up against the 'real men' approach - and won hands down. History doesn't always repeat itself - but it's usually the way to bet.
>These board creators just want to ease the path for most web developers to become hardware developers
Most web developers don't know javascript, though. Even among the ones who write javascript, I'd estimate fewer than 1 in 10 know the language even at a basic level. Most people are just grabbing messes of jQuery-infested crap and copy-pasting it, then making random changes until it seems to work. That's why so much javascript out there is invalid according to the language spec and doesn't work in less popular browsers.
To me the big story here is that Tessel apparently has a working JS->Lua-bytecode compiler.
Lua has by far the smallest, most portable, easy-to-integrate runtime of any embeddable language I am aware of. JavaScript has good implementations (V8, etc) but they are orders of magnitude larger, more complex, and imposing if you're linking them into your binary.
If this is truly a robust JS implementation, this means that there is now a tiny, easy-to-embed implementation of the programming language that powers the web. This could enable JavaScript to start making inroads in the "embedded language" space, and really become the language of both client and server.
If it can compile to LuaJIT bytecode as well, it could possibly be competitive in speed with other JS implementations, though some of that would depend on how efficient the resulting bytecode can be.
I think this would actually be a cool trend. Having a code-base that can run either in the browser or "natively" is a powerful approach. Though Lua is a cleaner language, JavaScript is a totally decent language if you use it right -- much better than a lot of people give it credit for. Hint: if you think JS sucks because of browser incompatibilities, what you really hate is bad implementations of the language, not the language itself.
Of course another approach to achieve "one language" would be to have a Lua->JS compiler (or Lua->asm.js, or a Lua interpreter in asm.js). But Lua the language is a bit more of a moving target; to preserve cleanliness and orthogonality they sometimes break the language in non-backward-compatible ways.
> Hint: if you think JS sucks because of browser incompatibilities, what you really hate is bad implementations of the language, not the language itself.
At the risk of going slightly off topic, this is true only if you have purely academic interests.
A programming language, fundamentally, is just a document with a spec (for the sake of argument, let's ignore the languages that have a reference implementation as a "spec"). That spec is all there is to it for PL geeks.
We engineers, however, want to use a language. I don't care what's in a spec; I care about whether it works, and how. Whether it works is completely related to the implementations of the language that I need to consider, and completely unrelated to the language spec.
Besides bad JS implementations, Python is a fun example. It effectively has two mainstream implementations: CPython 2 and CPython 3. If you want to support both, you face all kinds of hurdles, much like cross-browser JS programming.
Does this mess badly reflect on the language itself? Hell yeah.
Do keep in mind though that writing JS for the web is one of the few instances today where an app is expected to run flawlessly on several completely independent implementations. Put any other language in JS's place and you'd probably have gotten a similar number of incompatibilities between browsers (especially since many/most of these incompatibilities are actually in the DOM -- an API). This is a symptom of software diversity, not of JS.
"Do keep in mind though that writing JS for the web is one of the few instances today where an app is expected to run flawlessly on several completely independent implementations."
This is indeed a hard problem, and is precisely why the ECMAScript specification exists. Whether it's precise enough in all instances is another question. In other realms, the same problem is faced by Java, and there the implementation is split into the compiler and the JVM. Breaking it into two pieces at these boundaries has advantages - compliance suites can test that the compiler generates valid bytecode (the Java equivalent to native machine code) and further test that the JVM behaves as expected with known bytecode.
Most of the inconsistencies in JS are with respect to the DOM / standard library exposed by the various browsers. That is surely not part of the PL.
As for Python 3, if they hadn't made breaking changes, people would complain instead about the various warts the language has acquired over the years that they refuse to fix for the sake of backward compatibility. There's no solution that makes everyone happy, but I'm glad that the python community is still willing to make high risk / high reward decisions about the language's evolution. That's a sign that it's not yet set in its ways.
Oh, sure enough. I agree with the Python 3 decisions. In fact, my beef is mostly with the (otherwise excellent) ecosystem not doing enough effort to keep up. You only need a few major frameworks and libraries to say "ok, next release is Py 3 and Py 3 only" and the whole ecosystem moves over within a year, I'll bet.
>CPython 2 and CPython 3. If you want to support both, you face all kinds of hurdles, much like cross-browser JS programming. Does this mess badly reflect on the language itself? Hell yeah.
I don't think that it's at all the same as the current JS problems. Most Python devs (myself included) are sticking to the newer 2.x releases because:
- things work perfectly well in 2.x
- 2.x is still actively developed
- we can pull down 3.x behavior from '__future__' when wanted
- no one is putting a gun to our heads to make us upgrade
- Python isn't client-side; if you can't make up your mind or ship correctly, that's on you, the dev
Which is to say, I don't see how this reflects poorly on the language. If a developer feels that he needs to support CPython 2 and 3, that's his own deal, because he could pick one and bundle his application appropriately.
Now, for JS, you don't really get a say as a front-end dev, because the client is in charge of what interpreter he brings to the table. You don't get a say unless you know what browsers you can get away with targeting (e.g. IE6 users aren't likely to be shopping for the latest Android device).
The latter, in particular, is important. If there is a runtime translating Javascript idiosyncrasies to Lua, performance may be really bad. For example, it may be a lot of work to map Javascript's ==, with all its quirks (http://www.rossgledhill.co.uk/2011/10/12/javascript-quirks-e...), where the integer 3 equals the string "3", to Lua's equality operator (http://www.lua.org/pil/3.2.html), which behaves like Javascript's ===: "If the values have different types, Lua considers them different values."
And no, it does not stop at "but no sane person should use == in Javascript". '<' is different, too: "To avoid inconsistent results, Lua raises an error when you mix strings and numbers in an order comparison, such as 2<"15"."
This is why the presentation keeps talking about web developers. There are more web developers than there are any other kind of programmer. If you want to make your programming platform accessible, make it accessible to web developers.
Is it really this hard for devs to learn proper languages for whatever they're doing? I almost think knowing multiple languages and having the ability to learn new ones quickly is what it means to be a programmer/developer, etc...
I'd be more interested in what embedded JavaScript would allow to be done that other existing tools can't currently do. After all, if you're a JavaScript person and you needed to do something on a device or platform, you wouldn't wait for a language runtime to become available, you'd just go learn the existing language/tools and build it?
> Is it really this hard for devs to learn proper languages for whatever they're doing?
It's not hard.
But given a choice between Platform A which says "write in a language that all of you already know", and Platform B which says "write in this language most of you don't already know and would have to put in time and effort to learn before you could do anything useful with this platform"... which one would you bet on?
Not sure what sort of crack everyone else is smoking, but I generally base my decisions on how painless it is to develop for a platform, not on whether I already know it. Other than that, if I don't know the language but the environment really kicks ass, I'm going to learn the damn language.
Unless I'm really rushed and have 2 days to do a vast amount of work.
I addressed this in my last paragraph. Lua is a nice language (and I use it myself), but it has downsides too. Notably, it is a moving target that breaks backward compatibility. And I'm not sure what the state of the art is in terms of running Lua in the browser.
Lua breaks backwards compatibility every 3-5 years in minor ways, and they support version n-1 in version n. And it is not very hard to migrate code bases between versions of the language.
Having written a fair amount of Lua code, I can tell you there aren't many libraries out there. Their npm equivalent is called 'luarocks'. To give you an idea of how old the libraries are: many of them mention the Perl package they were translated from.
The reason for that, of course, is that Lua is so trivial to embed in a C program that if you want a library, you can just expose a C library to Lua. LuaJIT even makes this absurdly easy with its FFI API.
Right, but libraries consist of many other things, including ORMs and testing suites. There aren't many great ORM libraries written in C, and you need the test suite to work in Lua because you're testing Lua code. In general, the world of Lua is pretty small - probably smaller than that of languages like Clojure or OCaml.
The fact that many libraries are "translated from" Perl packages does not mean they are old, it means a significant part of the Lua community has used Perl before. Actually, because Lua 5.2 has been released recently and broke compatibility, libraries that support it are by definition not "old" (or at least they are maintained).
That being said, the community is small and there are indeed way too few libraries available in LuaRocks (slightly over 300). We have been discussing that on the language's mailing list and I will probably talk about it at the upcoming Lua Workshop (http://www.lua.org/wshop13.html).
The parent isn't arguing for local by default, just against global by default. Insomuch as you buy into the arguments in that link, the answer is to do neither (i.e. you must always use "var"). Most of those arguments are actually even more damning to global by default.
> I would personally be happy to sacrifice that feature and add an explicit "global" keyword but not everybody agrees, and it would break a lot of code.
For sure. That ship has probably sailed. And, like you said, linters alleviate a lot of that pain. But that's a separate question from, "is this a good language feature?"
Anything by default is idiotic. Lexical scoping works when variables are declared. Lexical scoping becomes a burden when declaration is not required and variables fall back to some default scope.
Kind of reminds me of GWT compiling Java to Javascript.
In order to debug (not just write or copy-paste some demo code) any "real" project with that, you have to be an expert in Java, Javascript, and all the tiniest details of the GWT framework. Any crash involves stack traces through the Java framework, the Javascript stack, and all the wonderful translation, compilation, and VM layers.
The few programmers who can really do that are probably the folks who wrote GWT.
Having programmed in C, Linux kernel drivers, embedded systems, VB, Tcl, Python, Android/Java, JS, and SQL for the past 20+ years, one thing I still love is KISS - Keep It Simple, Stupid.
I think that was one of the big drivers of source maps in the first place (along with Dart and CoffeeScript). There are now pretty good implementations, but it's still not complete.
The traditional way of debugging GWT was in Eclipse, as Java emulating Javascript emulating Java. This actually worked surprisingly well most of the time, but sometimes you did have to step through the compiled code in the browser. Luckily there was a "pretty" compile option.
This is all fun for sparking creativity, but it always seems like a massive diss to the EE's in the crowd when web devs run around fronting that they are going to disrupt the embedded world with their transpiled bloatware. EEs are so stupid and use such crap tools!!! Put some damn Bootstrap on that circuit-to-PCB layout tool. It's so not even flat OR responsive! How the fuck am I supposed to drag-n-drop my codez?!?!?!
My favorite is when someone posts a vid of LED PWM or, worse, just basic blinking... You spent how much time and money on what? And you created the embedded version of the blink tag? Definitely a web developer.
Tester board acquired. Let's go find some problems!
Can't wait until craigslist is full of requests for bringing a dream device to fruition... It'll be an iPhone-killer that also sets the temperature of your house, blinks to let you know your dog bowl just tweeted you, and donates a bitcoin to the NSA because you forgot to put the induction recharger next to the eFaucet controller this morning. Equity only. NDA required.
One person converts an Arduino or whatever into a red-inked start-up that gets taken out for $1B, and it's on...
I agree with you, but I am thinking that your point about bolting on training wheels might be more valid than sarcastic.
See, getting into full EE is a serious amount of work (I get it; while I work on ML stuff, I have spent time in the embedded space, and the web crowd could learn a lot from the embedded folks). I do, however, wonder how many people would be willing to undertake the effort if they had a toy to hook their interest at the start.
Me, I will happily stay with my FPGAs and Verilog, but then I grew up on computers that were meant to be messed with (e.g. the Spectrum), in an era when my parents encouraged me to take apart, _understand_, and mend electronics.
Today it feels like this is no longer the case, and that bugs me.
History and Moore's law would tell us that at some point we won't have to fiddle bits for every single embedded project.
Is that time now? Maybe not. But it seems like a great tool to prototype with. I'm really confused how enabling a whole legion of programmers to get creative with hardware invokes such bitterness in you.
I'm sure what you do is very special, and this is no direct threat to that.
And the history of embedded systems tells us that, at all times, you will have to fiddle bits for nearly all embedded projects.
MCUs are getting faster at the expense of getting power-hungry. Plus, there are several other things that need the low-level optimization C provides. There is a reason why C has such an invincible death grip on the embedded domain. It would take trillions of dollars' worth of investment and nearly two decades of effort to dethrone C from there. Nearly every stakeholder is so deeply entrenched in C that it's not even pragmatic to think you are going to replace it anytime soon.
>>I'm really confused how enabling a whole legion of programmers to get creative with hardware invokes such bitterness in you.
All programming is creativity with hardware. The fact that the form factor got smaller doesn't mean a thing here.
This is the gripe I have with many Raspberry Pi users who call printing "hello, world" 10 times with a Python script 'hardware hacking'. The situation is so super hilarious.
Say you wrote the same Python script on a laptop with the cover removed, so that you can now see the electronics inside the laptop while running the script: did you just get magically creative with hardware? Why wasn't it magic when the electronics weren't visible?
And yes, for any real creative thing of production and mass-deployment value: best of luck trying to do it in anything apart from C.
When the target embedded circuit's needs (and energy needs) are far, far tidier, it is laughable to think that a bunch of wasted IC overhead will undercut any advantage of developing the prototype on a board like this. Simple things can and will be simple.
ATmegas are looking for problems. General devices don't compete with specialized ones. See CPUs vs. ASICs in bitcoin. Hi, bitcoin miners with GPUs... show me your hands. Here's your ass back.
And I think that's the comment's point. These prototyping platforms are nice for some narrow band of testing ideas, but they don't translate to "hacking the internet all the things!!!" when compared to what trained circuit designers do in their sleep and get mass-produced by morning coffee.
I can make drop-shadows in photoshop and you use CSS3? I'll just use the data layer plugin to make the bestest sites evar!
The point is not that EEs suck; it's that if the IoT market starts growing explosively, as many expect it to, there'll be a shortage of EE skills, and the market will belong to whoever makes it possible to circumvent that shortage.
Originally, web applications were best written by Unix developers; but PHP and the like made it possible to write them without growing a neckbeard, and took pretty much everything.
The thing is, hardware manufacturers might be all for something like this - if it means they can increase their market from x embedded developers to 100 times as many web developers. They are already trying to make embedded development drag-and-drop - an example is a tool by Infineon called 'DAVE'. I'm not sure how successful it is, but it's built around modular components that you drop into your project.
This sort of thing breaks the very moment you have to bring in your own customizations and changes. And that is super common in an embedded system.
Otherwise it wouldn't be much of an 'embedded system'. It would be a general computer.
Drag-and-drop code generation doesn't work in embedded programming for the very same reasons it doesn't work with higher-level languages: it's far too inefficient, the code is subject to total redesign for small changes in feature requests, and above all it's not economical.
Some day we may get there. But some day we may not even have the need to program anything. Forget embedded systems, but programming in general.
Long ago, embedded systems were a pain. You programmed EPROMs in ASM, and UV lights were involved. Someday the computorium will be so strong, cheap, and ubiquitous that you can program it with whatever you want. This isn't that day.
I applaud the spirit of what they've done, but that's a shrinkified version of a rack server, not a smartened version of a light-bulb. The Internet of things is more about swarms and emergent behaviors and less about turning everything into a tiny stand-alone version of our datacenter servers.
I get it. And it's a start. But it's so huge and expensive that it's like considering the rest of the car an API to a tire.
That's a crap-ton of computer (and the sysadmin that goes with it) just to flip a switch. I don't have a beef with programming the 'net of things in Java or Lua or even Pascal. What I'm saying is that this is way too much computing too far down the stack. The closer we get to the GPIO pin, the less computer, expense, and electrical demand there should be.
I'm just saying that the thing that should be hitched to the actual GPIO pin ought to look and function more like an RFID tag and less like a mini rack server.
I cannot agree (I mean, I do from a pure techie point of view), but last week I got my hands on a Raspberry Pi for the first time and knocked up a traffic-light demo for a school in Python, with LEDs shoved directly into the ribbon cable.
I have not been so childishly excited in years.
By throwing a wasteful amount of computing power at just turning on a light, we put this within the reach of more and more people.
We could have web servers in assembly language, and it would be a more efficient use of computing resources - but that's not been the optimal way, and it won't be here either.
Interesting perspective. In relation to your "swarms and emergent behaviors" vs. "tiny stand-alone ... servers": are these things mutually exclusive?
Everything involved in moving servers downstream is getting smaller and smaller and less and less expensive. Why not give the end node as much computing power as it needs, so that it can control and communicate with swarms, rather than having the swarm controlled by a centralized server?
It's less about the actual amount of computing power at the actuating node and more about how much like a computer we treat it. If we have to sysadmin it like it's a full-on server, that's probably already too much. I'm not picking on javascript in particular, but cramming a web-server-ish stack into a light switch feels like it might be heading in that direction.
It's not really a swarm being "controlled by a central server"; it's nodes discovering each other and creating a de facto, discoverable API amongst themselves without overt configuration. LivingRoom.Lights.Bright=5 will cause modules you've tossed in your living room to say "hey, I'm a light, I'm in the living room, I should be at brightness 5 now!"
The thing is, these lights are actually more complicated than our old-fashioned web servers, because while they too have to communicate with other systems, update their codebases, maintain state, and so on, they also have to do all of that in unforeseen environments, surrounded by unknown services they have to discover and use. Plus they have sensors and whatnot. The sysadmin problems are still there, but now they need to be self-serviced, or at least serviced through some kind of autonomous negotiation with the swarm. If it's not allowed the conceptual bulkiness of an overtly-configured server, I don't see how it could be a self-configured one, which seems much more heavyweight.
A bit more generally, I'm just not sure how a light can be all the things you want it to be without having a pretty complicated stack underneath it. This is a hard problem, and there's going to need to be a bunch of technology in there - even a common stack of technology. In fact, a web server seems like exactly what you'd want here, because it presents a simple, consistent interface for communication. Generalized interfaces between diverse systems are exactly what HTTP has been so successful at. What are the alternatives, and how would they make these devices less computery?
It sort of sounds like your objection is that many web servers aren't simple enough, in the sense that they have complicated configuration APIs used by their admins, who want to make really specific human-targeted applications out of generic but wide frameworks built specifically to make that possible: routing and same-origin policy and access control and extension points and caching and whatnot. Perhaps you're right about that, but it's not really a problem with the depth of the stack, and none of that is central to being a web server, or even part of NodeJS.
I think that once you factor in things like battery life, mass production price and complexity, they are.
You can have a swarm of lamp-toggling or temperature-showing things built with two reasonably unpretentious chips and a very simple production process, which lets you build them cheaply and reliably; they need little enough processing power that you can run them off a small battery.
Throw in the stuff you need to run Node.js (of all things...) and it's suddenly not such an unpretentious chip - and running on a battery is out of the question.
A lot of people think hardware design is all about milking the processing power. Part of it is, but there are a lot of things that don't get much exposure because most software developers take them for granted. Battery life (and its cost) is a prime example.
I can understand the motivation. But I'm not sure I want to be surrounded by hardware powered by software written by people who think that Javascript is actually a good programming language.
That's ridiculous and contributes nothing to the discussion.
Javascript is a good programming language, in that it's expressive, fast (much faster than Python), and, more importantly, it runs everywhere. No ecosystem can grow without developers, and the success of Node.js clearly shows that developers like a system they can dive into without much prior effort, using skills they already have.
Tessel allows web developers to use skills they already have to do new things. How could that possibly be a bad thing? Just because Javascript has '==' and '===' and both 'null' and 'undefined' and {} + [] !== [] + {}? I swear, everybody seems to have seen the 'Wat' talk and thinks they're now experts on Javascript development.
Javascript, like many languages (even PHP!) can be written in a 'good' way and a 'bad' way. Thankfully, it's not very hard to write in a good way and it lends itself very well to I/O bound applications like webservers. Embedded devices, depending on the application, could also be very I/O bound, waiting on sensors, cameras, wifi, etc.
Javascript is actually one of the fastest interpreted languages in existence and thousands of new programs are being written in it daily. It's easy enough for a total beginner to use yet powerful enough to port Unreal Engine 3 to it. Get over your biases and recognize that JS brings with it something that Haskell, Scala, etc., will never be able to match: large numbers of ready developers.
Javascript is fundamentally not suitable for embedded systems. Its strings are basically UCS-2 literals. To deal with byte streams you have to use this beast: http://nodejs.org/api/buffer.html
Yes, you can use Node Buffers (which are not that bad), or JS native Typed Arrays (which are also not that bad but a little clumsy). Yes, in embedded systems, that might be really important so it can get clumsy.
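For what it's worth, here's a minimal sketch of byte-level work with Node Buffers, using the Buffer API of the Node of this era (later versions replaced the constructor with Buffer.alloc):

    var buf = new Buffer(4);        // uninitialized memory in old Node, hence the fill
    buf.fill(0);
    buf.writeUInt8(0x2a, 0);        // a single byte
    buf.writeUInt16BE(0xbeef, 1);   // a big-endian 16-bit value
    console.log(buf);               // <Buffer 2a be ef 00>

Clumsy next to languages where byte strings are first-class, but entirely workable.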
It's not the best tool for the job. It's also not the worst tool for the job, and it's an extremely popular language. If it brings more people into the hardware space (which is its expressed goal), it will be a success.
Wait... when did we get to the point where things that were faster than Python were deemed "fast"?!
And of course, "fast" isn't the right term for embedded systems. The right term is "efficient". In embedded terms, Java barely qualifies as passable, let alone Python.
There are no "good" programming languages (excepting, perhaps, Prolog, Lisp, Smalltalk, or APL). There are merely many languages that are less bad.
Javascript only has so much room for the bad to hide in--it's a little quirky, but isn't half as open to WTFs as, say, C++ (any generation thereof).
Don't be hating just because these new kids don't have to bang their faces into machine code or shitty C macros--it's great to have a way of trying different programming techniques for embedded systems.
> Javascript only has so much room for the bad to hide in--it's a little quirky, but isn't half as open to WTFs as, say, C++ (any generation thereof).
You know, I only know JavaScript a fraction as well as I know C++, and I can say with a great deal of confidence here... no.
C++ has lots of WTF's in strange little corner cases (mostly around the C compatibility issue). C++11 has done a good job of cutting down the WTF's. In JavaScript you needn't go to the corner cases... they're right there in the normal cases.
We're talking about a language that still doesn't have a standard way of reading and writing bytes... for talking to hardware. Think about that for a moment.
> Don't be hating just because these new kids don't have to bang their faces into machine code or shitty C macros--it's great to have a way of trying different programming techniques for embedded systems.
Honestly, I'd not have blinked if the idea was to use Java (even with its lack of unsigned arithmetic), C#, Objective-C, Lua, or even say Go to do the job. I mean, you need to use some kind of a language and you want it to be something that a lot of programmers will find accessible.
But picking JavaScript for this job is kind of like picking Forth to build the new, more accessible database query language...
"We're talking about a language that still doesn't have a standard way of reading and writing bytes... for talking to hardware. Think about that for a moment."
Wrong. Javascript has typed arrays. When's the last time you actually looked at Javascript? 1998?
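For anyone who hasn't looked lately, this is roughly what typed arrays give you (the standard ArrayBuffer/DataView interfaces that grew out of the WebGL work):

    var buf  = new ArrayBuffer(4);       // four raw bytes
    var view = new DataView(buf);
    view.setUint16(0, 0xbeef);           // big-endian unless you pass true
    view.setUint8(2, 0x2a);
    var bytes = new Uint8Array(buf);     // a byte-wise view over the same memory
    console.log(bytes[0].toString(16));  // "be"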
"Honestly, I'd not have blinked if the idea was to use Java (even with its lack of unsigned arithmetic), C#, Objective-C, Lua, or even say Go to do the job."
They are using lua. They just have a front end to lua that uses javascript syntax, it would seem. Because languages without curly braces are scary, and the goal is to make this stuff accessible.
To the expert, making your niche accessible to plebes is scary. I understand.
As recently as a few months ago I had to base64 encode a binary protocol because the JavaScript guys couldn't handle decoding the raw bytes.
> They are using lua.
Yes, but it seems like lua is merely used as a target rather than a programming language. What language people actually code in is highly relevant if your goal is to make a domain more accessible.
> They just have a front end to lua that uses javascript syntax, it would seem. Because languages without curly braces are scary, and the goal is to make this stuff accessible.
There are a lot of languages with curly braces that aren't half as scary as JavaScript.
> To the expert, making your niche accessible to plebes is scary. I understand.
I appreciate your "understanding". Next time you are trying to make the big "I can read your mind" power play, you might want to read the context a bit more carefully.
I don't generally do programming that talks directly to hardware, so even by your rationale I have no reason to be threatened, and I'm all for making this niche more accessible. I've seen excellent jobs of making hardware accessible to novice programmers using Java, Smalltalk, and even things like SCRATCH and Alice. I think that stuff is great. As you put it, "making a niche accessible to the plebes" is a great endeavour and what I have spent most of my career striving for. I've learned a thing or two about that path along the way. This isn't the first time a choice like this has been made. I thought I'd share that this particular choice, if anything, undermines the goal.
"I thought I'd share that this particular choice, if anything, undermines the goal."
But that isn't what you shared. All you had to share were vague old-man gripes about a language, with the only concrete example being something that is only really half true.
For Christ's sake, you suggested Java and Objective-C for the task. How do you expect to have any credibility after that? And for a novice language, why are you expecting that a typical task would be decoding a binary protocol?
To claim that you are just sharing your wisdom here is intellectually dishonest.
> For christ's sake, you suggested Java and Objective-C for the task. How do you expect to have any credibility after that?
Because while they might not be considered "hip" languages, they have large developer communities and are comparatively straightforward to pick up for any web developer that isn't familiar with them, yet they still have good support for the task at hand: talking to hardware.
> For a novice language, why are you expecting a typical task would be decoding a binary protocol?
Because they're talking to hardware, and because working with binary is actually much simpler (if your language doesn't have some bizarre aversion to it).
If you've ever taught novices to program, it's actually easier to start with binary, as you avoid all the complexity of text encodings and parsing. You want an integer? Read an integer. No length-prefixed fields. No reserved characters. No escape sequences. No terminating characters. No character set encodings. No case sensitivity or symbolic equivalents to worry about. The worst you might have to deal with is endianness, and usually you can dodge that issue by starting with native endianness. C makes it harder with its "not sure what width that is" fixed-point integers, but that's part of why you don't pick C for that job either.
Working with binary only really becomes a pain once you have to start talking to humans.
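To make the "read an integer" point concrete (a sketch; the sensor and its two-byte framing are hypothetical):

    // Binary: one fixed-width read, no parsing
    var raw  = new Buffer([0x00, 0xfa]);   // a 16-bit big-endian reading from a device
    var temp = raw.readInt16BE(0);         // 250

    // Text: the "same" value sent as "250\r\n" drags in terminators, encodings,
    // partial reads, and malformed-input handling before you ever see a number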
> comparatively straightforward to pick up for any web developer that isn't familiar with them
compared to javascript.
really?
you think java is "easy to pick up" ?
I'm going to have to just say I disagree with you there. I don't like my chances of convincing you how silly that sounds. You seem to be operating on some very peculiar assumptions.
> compared to javascript. really? you think java is "easy to pick up" ?
Yes, compared to JavaScript, it absolutely is.
Really, the only PITA with learning Java is all the framework-itis (which is starting to become an issue with JavaScript as well, but that's another story...). For things like embedded systems, Java doesn't have any of that stuff and it magically returns to the comparatively simple language for programming network aware hardware it was originally intended to be.
> You seem to be operating on some very peculiar assumptions.
My assumptions are from teaching, and from watching others teach, both children and adults how to program (with adults it has usually been people who are already professional programmers). I'd absolutely agree that Java isn't the best teaching language, but particularly if you've got someone who already knows how to program at least at some basic level (which is kind of what I think of when I hear "web developer"), it tends to be pretty easy to pick up.
JavaScript on the other hand, is quite the opposite.
I've worked with "copy-and-paste" developers who couldn't really write their own program from scratch to save their lives. When you asked them to explain what the code they'd pasted in was doing, JavaScript was invariably the language where they had the hardest time deciphering what was going on, and more often than not you ended up sympathizing with their difficulty. (C++ and Perl can also be quite difficult to decipher, but they tend to have the advantage that anything much more complex than a one-liner generally won't work at all if just blindly copied and pasted, so copy-and-paste coders tend to be a rarer breed there.)
> Anybody can open up a browser and immediately start playing around with Javascript.
Tell me, if you were looking to teach people to write Java code for web applications (I know, perish the thought!), would you start them off writing applets?
> You simply can't say the same for Java, C, C++, or assembly--frameworks and the other nonsense notwithstanding.
Your statement about the ubiquity of JavaScript runtimes is really JavaScript's strongest feature IMHO, but I don't think it is terribly compelling in terms of developer accessibility. You could say something pretty similar in support of DOS shell, PowerShell, AppleScript, VBScript, VS Macros, or XSL, but no one would ever consider that a defensible argument for them. Sure you want accessibility, but I think it's okay to suggest that a one-click install not be outside the grasp or patience of a developer on their way to writing for distributed device systems... ;-) The difference between "open your browser and now start to learn this language" and "open your browser to download and install this app so you can start learning this language" shouldn't separate anyone who was going to make it in the first place.
Well, he can make his point any day now. Still waiting. For now, all I have is that he doesn't like javascript because it can't do binary. (Although it can, so that conflict should be resolved, right? Did he have another point?)
He doesn't care, probably. You're just some kid who can't/won't program in anything besides javascript (of all languages). Nobody owes you an explanation of why that's limiting. I'm surprised he indulged you for as long as he did.
For someone who has no clue what they are talking about, you are remarkably dismissive.
I get the point that he doesn't like javascript. Why should anyone care about his personal preferences?
I code in lots of languages besides javascript. I am not insisting on anything. If it were my choice I would have gone with lua, or some dialect of logo or lisp. Or even haskell. It is he who is insisting that javascript should not be used, with the only justification being something that is not true. Having thoroughly debunked that one piece of evidence he had, what's left? Why should anyone take his opinion seriously?
Resorting to calling me "some kid" is pretty classic though. Did you run out of real things to say, and decided to resort to just dismissing me with ad hominem?
> For someone who has no clue what they are talking about, you are remarkably dismissive.
You keep using that word. I do not think it means what you think it means.
> I get the point that he doesn't like javascript.
Actually, that wasn't one of my points.
> Why should anyone care about his personal preferences?
I have no idea.
> it is him insisting that javascript should not be used
Read again. I did not insist that JavaScript should not be used. I said I did not feel it was a good choice, and I pointed to some obvious signs of why it might not be. My opinion being what it is, that is no basis for insisting upon anything, but even if my opinion were law, the fact that something might not be a good choice is no reason to insist it not be done.
> Having thoroughly debunked that one piece of evidence he had, what's left?
You keep using that word. I do not think it means what you think it means.
> Resorting to calling me "some kid" is pretty classic though. Did you run out of real things to say, and decided to resort to just dismissing me with ad hominem?
Hey, one more dismissal! I think we have a record.
Seriously? You're going there? Did you forget that you bolstered your argument with "old man" just yesterday? One might be concerned about projecting so much...
Sorry about "old man".
What I wrote was "old man gripes", which was aimed at the "I'm afraid, uncertain, and I doubt this technology" attitude, not at you as a person.
Poorly chosen words. Mea culpa.
"Some kid" clearly was directed at me as a person. So... yeah, I'm going there.
No worries. I didn't take it personally, but it was kind of a "what kind of argument is that?" moment.
I generally don't think of FUD as an "old man" argument (more like a "man keeping you down" kind of thing ;-), and not really very applicable to a venerable and pervasive language like JavaScript, though I guess Node is newish (and ironically the JavaScript environment I'm most familiar with).
> "Some kid" clearly was directed at me as a person.
I think if you look carefully at the thread, you'll see that the "old man gripes" comment unfortunately presented you as a petulant child and provoked an anti-ageism response from jbooth, which ultimately ended in the "some kid" comment.
I don't think jbooth was trying to make a technical argument at that point; it was indeed personal, and an empty ad hominem, but you unintentionally opened the door for that. Something to keep in mind when processing his comments.
It was a personal argument, but it was aimed at a lack of perspective and at not knowing what he doesn't know (a chronic failing of youth, and I'm young myself), rather than an attempt to insult him for its own sake. As you're hinting at, rather than being upset at me calling him 'some kid', maybe he should wonder why his age is so easy to peg.
He didn't seem to grasp why working with streams of bytes is important in this context, and why extremely spotty/inconsistent support for it is not sufficient. If he had a broader base of experience, or if he was inclined to listen to those who have such experience, maybe he'd get it.
> They are using lua. They just have a front end to lua that uses javascript syntax, it would seem.
Regarding the limits of how far this Lua / javascript combo can or should take us, I refer you to the words of Roberto Ierusalimschy, the creator of Lua, himself (I agree with him):
> I must confess that I would be very reluctant to board a plane with flight control implemented in Lua or any other dynamic language.
I think if the goal is to make little gadgets that put large block letters over jpegs of cats and post those jpegs to social media sites, Tessel's software side is on the right track. It might not be so great for "internet of things" gadgets that interface with the real world, in real time, in novel ways.
(in fairness their hardware looks interesting, though currently overpriced)
I think the market this is aiming at is more Arduino-scale. Would you board an airplane powered by an Arduino? Or an Apple II, for that matter: another product aimed at making something more accessible to "software" type people.
Except I'm not sure how well that would work since it would then be compiled down to Lua...
And of course, C++ isn't exactly a great choice for making hardware programming more accessible. I hear tell that's how a lot of it is done already. ;-)
C++ isn't merely messy--it's a bloated festering gibbering idiot swimming in a fetid pool of infectious waste, the true depth and horror of which is hidden beneath a tapestry of warts and scabs. For the unwary, it seems sound enough, and then a few steps and templates later and the whole rotten structure has given way beneath their feet, plunging them in over their head in offal.
Again, Javascript is a relatively tiny language compared to C++.
I am concerned that many of these products will not reach a practical state until issues like power consumption and cost are addressed. I don't want my device to be wired, and I don't want my $1 light bulb to cost $40.
If these guys insist on using JS they should compile it to 8-bit code, so that we can better address the cost and power issues. Right now they have something that is larger than an Arduino. My gut reaction is that Python might be a better fit.
Well, they compile to Lua and then use a JIT on the device.
Using JS for hardware is an audacious, repulsive, brilliant, horrifying idea - maybe people don't want to call it web assembly language, but it's certainly the web's lingua franca. If you want to execute in the client, you have to use it. If you don't like it, everybody is writing frontends that compile to it (e.g. CoffeeScript), there's asm.js to compile down to, and there are books about the "good parts". So... every web developer knows javascript... and assuming it's they who will make the "web of things"... it makes sense to harness all that work and future work around JS (though I'd think the "internet of things" would be made by more hardcore folk, like the TCP/IP designers).
I don't know if this will pay off for them, but kudos for the chutzpah to actually do it (and for even considering it!) But it really could work out - whoa dude, awesome leap of faith!
That meme "Javascript is basically Scheme with a C based syntax" has been refuted so many times it's not even funny. Does it have a metacircular evaluator? Is it homoiconic? Does it have macros? Seriously, read SICP or some other Lisp book. You have no idea what you are talking about.
Javascript has first-class functions. It has closures. It has nice object/literal syntax. If you don't consider it cheating to use a library, it even has macros, via sweet.js. It has a metacircular evaluator, from the very beginning, called "narcissus", and if it had homoiconicity it would be even more direly hated than it is now. Having syntax, for better or worse, is a strength that means people actually use the language. For reals. Not just that one time at college.
The real question is: how are any of those features actually useful from anywhere but a theoretical-purity standpoint? We're not exactly writing AI systems here. At the end of the day the language is there to write programs, and the sophistication of the programs you need to write in javascript doesn't really require those things most of the time.
As a Javascript guy with some Lisp experience, I wouldn't say Javascript is particularly Lispy, but I would absolutely say that Javascript is closer to being Lispy than pretty much any other language in which I've ever worked.
Most of my recent web development work has been all frontend, with the server-side integration as lightweight and generic as possible and Javascript to do everything else, not so much because that's necessarily an ideal way to do it as because Javascript is so much more expressive, and less painful to work in, than any of the server-side options of which I'm permitted to avail myself -- PHP and Perl, basically, with a strong institutional preference for the former, because apparently being able to pick any of a hundred random idiots off the street and have them write code for you is a benefit? A low barrier to entry is not, in this context, a good thing. (Granted, the same can be said of Javascript, but there's a qualitative difference in that Javascript at least makes it possible to write good code.)
That's one. Not "so many it's not even funny". From the sound of it, you should have been able to rattle off 5 without hesitating. Maybe you meant "one. There's exactly one refutation"
As for your second remark, uh... Douglas Crockford?
Douglas Crockford just said that even though Javascript's syntax resembles an imperative language like C or Java, it's much more similar to a functional language like Lisp or Scheme. From that comment a whole myth began which states that Javascript == Scheme. That's like saying Java == C just because both are imperative. Handwavy and completely wrong.
What you should take from that assertion is "Javascript is more functional than imperative" (which is correct) and not "Javascript is lispy" (which is not).
Mark Jason Dominus says the same thing about Perl in his Higher Order Perl book [1][2] and goes on to show that Perl has six of the seven features that Norvig describes as making Lisp different [3].
[2] Perl is much more like Lisp than it is like C (from Preface)
[3] ... the book "Paradigms of Artificial Intelligence Programming", by Peter Norvig, includes a section titled "What Makes Lisp Different?" that describes seven features of Lisp. Perl shares six of these features; C shares none of them. These are big, important features, features like first-class functions, dynamic access to the symbol table, and automatic storage management. (also from Preface)
Remember, I was recruited to "do Scheme", which felt like bait and switch in light of the Java deal brewing by the time I joined Netscape. My interest in languages such as Self informed a subversive agenda re: the dumbed down mission to make "Java's kid brother", to have objects without classes. Likewise with first-class functions, which were inspired by Scheme but quite different in JS, especially JS 1.0.[1]
I presume you saw the word basically in my post? There's no doubt it is simplified, but that's a strength as well as a weakness.
No, it is not. Half the people using it do so because it is shipped in browsers; the other half are server-side masochists who are not smart enough for Scala or Clojure...
I think the problem is that the syntax looks too much like C, and most people assumed they already "knew" Javascript without bothering to learn it from scratch. "JavaScript: The Good Parts" should be mandatory reading before touching the language.
At the same time it's a double-edged sword: it most likely was so successful precisely because it looked so similar to C.
I'm talking historically as someone who was the typical C / Perl programmer back in the 90s when the web took off and made this exact mistake.
> Javascript is basically Scheme/Self with a C-based syntax[1][2],
You forgot to mention "and without a native way to read and write bytes", which is one of just a few highly relevant examples of why it might not qualify as "a good programming language" for the job. (Others might include having a probabilistic parser...)
"Typed arrays are available in WebKit as well. Chrome 7 includes support for ArrayBuffer, Float32Array, Int16Array, and Uint8Array. Chrome 9 and Firefox 15 add support for DataView objects. Internet Explorer 10 supports all types except Uint8ClampedArray and ArrayBuffer.prototype.slice."
They are in ES6 which will be standard any year now. If that's not good enough for you then I guess there's nothing stopping you from just writing anything you need in 6502 assembler.
How long do you think the list is of languages that didn't wait until revision 6 before adding support for working with bytes? Or are you seriously trying to suggest that it has to be either JavaScript or 6502 assembler?!
Do you think it is maybe possible that a language that had standard ways of working with bytes say within the first decade of popularizing it just might be better suited for talking to hardware?
Okay, reading another one of your posts I think I've gained a tiny insight into what you're trying to get across.
You have the assumption that "Talking to hardware" necessarily involves dealing directly with binary protocols. And javascript isn't good at that. (though it CAN do it, if pressed)
And you're right on both counts.
I think what you are missing though is that on a product like this, why wouldn't you do the binary protocol stuff in a C module and expose a nice easy api in the scripting language as is the usual practice? In the presentation it very much looks like that is exactly what they do.
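In sketch form, the usual split looks something like this (all names are hypothetical; I'm assuming, not asserting, that this is how Tessel structures it):

    // A native C module owns the byte-level protocol work...
    var climate = require('climate');   // hypothetical wrapper over a C driver

    // ...and script-level code only ever sees the friendly API on top
    climate.readTemperature(function (err, degrees) {
      if (err) throw err;
      console.log('current temp:', degrees);
    });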
>why wouldn't you do the binary protocol stuff in a C module and expose a nice easy api in the scripting language as is the usual practice
You guys have been going at it for a while on this thread. I thought what cbsmith was saying the entire time couldn't have been clearer. I also think your statement here completely concedes his/her point.
You went from calling him/her an "old man" (your words) for not accepting your contention that Javascript can do it all, to advocating the use of C to handle the stuff it doesn't do well.
Okay, so he/she was preaching to your choir. But s/he needs to work a little harder to persuade or teach someone who isn't already convinced.
I never said javascript could do it all. I said that it happens to not be true that javascript doesn't have a native way of dealing with binary. It turns out it does.
That's not everything. That's one thing.
>But s/he needs to work a little harder to persuade or teach someone who isn't already convinced
It was pretty clear. You just didn't seem to be in learning mode. Your "old man" comment was dismissive on its face and you made assumptions rather than ask thoughtful questions. You said yourself that you understood once you went back and re-read. There wasn't anything new there.
I honestly have no idea what you are talking about. He had nothing but dismissive comments about the language. There was nothing to learn or ask questions about. It only finally slipped out of him, almost by accident, what he was actually trying to get at, and even that isn't much of a relevant point.
And just when I thought you were finally getting it.
From cbsmith's very first post on the thread:
>We're talking about a language that still doesn't have a standard way of reading and writing bytes... for talking to hardware. Think about that for a moment.
You directly took exception to this and even quoted from that part of the post in doing so.
You don't have to "buy" my depiction. Just go back and read (for at least the third time, apparently).
What's your point?
You quoted a fact that he stated, which happens to be wrong, and I pointed out that it was wrong. That's evidence for my view of things, not yours. It's not a teaching moment, he had nothing useful to say. He still doesn't so far as I can see. Where we are now is let's just re-iterate the same wrong point over and over again, and pretend like it means something.
Maybe if you're so concerned about me getting it, you can point out what his actual point is, which you seem to think is so obvious? spell it out for me.
My apologies. You are correct that I thought it was obvious.
I'd started typing a reply that outlined what just happened, but it would essentially be a strange Cliff's notes recap of the thread. I'm not sure that I want to continue or that it would be sufficient in any case.
So, apparently I'm not as concerned about your getting it as I initially thought I was. Perhaps someone else can jump in here and "spell it out" for you but, as it is, I'm content to leave things where they are.
Well, you don't have to recap the whole thread. Just write something useful or insightful.
Keep in mind that he criticised javascript for not having a standard way of reading and writing binary bytes (even though it does) and then suggested lua (which does not have one, at least not any more than javascript does). My vague understanding of the point he was trying to make is that since javascript wasn't originally designed to deal with hardware, it's not a good fit. I don't find this a compelling argument though. It's more like a thesis, with no supporting evidence or premises, and the only point being the binary bytes thing, which is pretty well debunked. There's no other point. No detailed analysis of exactly what it is about javascript that he thinks is unsuitable. Just vague handwaving.
Do you have anything to add which might flesh this out? You do know the difference between an argument and a vague complaint right?
> Keep in mind that he criticised javascript for not
No, I didn't criticize JavaScript. I criticized the _choice of JavaScript_ for the task. There's a world of difference there.
> the point he was trying to make is that since javascript wasn't originally designed to deal with hardware, it's not a good fit.
Actually, as I stated, there were a number of factors that pointed to it not being a good fit, and that was just an example of one of them. I'm kind of shocked that a statement like that somehow led to so much controversy.
> I don't find this to be a compelling argument though.
Clearly.
> It's more of like, a thesis, with no supporting evidence or premises, and the only point being the binary bytes thing, which is pretty debunked.
Yeah, it really, really wasn't a thesis. It was a single point, and not presented as a conclusive point, but as an easily identifiable and understood indicator of a cause for concern.
And I really take issue with the "pretty debunked" characterization. I can't believe you'd call "standard" a feature that, over 15 years after the language was initially conceived and popularized, after a dozen revisions, nearly a half dozen major revisions, and lord knows how many reimplementations, wasn't even consistently implemented on the language's primary platforms; that was a byproduct not of revisions to the core language but of a separate effort to add a specific feature (WebGL); and that you yourself described as being standardized "any year now".
> There's no other point.
Actually, I made other points. I didn't go into them in much depth because you seemed unable to accept even the most basic points and didn't offer any counterpoints (despite me inviting you to).
> There's no other point. No detailed analysis about exactly what it is about javascript that he thinks is unsuitable. Just vague handwaving.
Perhaps I should feel bad about this, given the detailed analysis of precisely what it is about javascript that you think is so suitable. Just a lot of flaming.
> Do you have anything to add which might flesh this out? You do know the difference between an argument and a vague complaint right?
You might want to consider what possible motive I could have at this point for attempting to further explain my point.
I think we got off on the wrong foot here, and I find that I am pursuing this to absurdity. I'm gonna try and set this right.
In a sense, you are right, there is nothing about javascript that makes it particularly suitable for programming hardware. Nothing at all, for many reasons that you kind of hinted at but, as you say, never really went into in any depth.
The main plus for javascript is that it's accessible to people who already know javascript, and there are quite a lot of them. The other plus, potentially, is that existing software may be able to run on it. However, as I pointed out in a much, much older comment, it's truly not clear to me what the use cases for this device are, other than that it's a way to run javascript on hardware, and some people might want that. (As I point out in another, more recent comment, if it were my own project I would have chosen differently.)
My goal was, initially, to just try to turn the conversation away from language wars toward actually discussing the device and its merits. I have clearly failed.
What I've been trying to get you to do was, rather than just say "I don't think javascript is suitable", to actually go into some detail about that. Over time you revealed that you believe that being able to deal natively with binary protocols is a core feature of a language suited for hardware.
While I concede that this is a late-added feature to javascript, it is a core feature of node.js, which is the api this hardware device purports to emulate. node.js is a kind of de facto standard at this point, which is good enough. To my knowledge there are no standards documents for lua, python or ruby and people have no problem using those languages.
So what else is there that gives you the willies? You seem to believe you know something and are trying to impart it, but I can't get at what it is, because you may have assumed you said all kinds of things you haven't actually said (such as the assumption, which I apparently needed to intuit, about binary protocols).
> I think we got off on the wrong foot here, and I find that I am pursuing this to absurdity. I'm gonna try and set this right.
I was hopeful that if we rode it out, eventually we'd somehow uncross the Rubicon. Good on you for doing so. Thanks.
> The main plus for javascript is that it's accessible to people who already know javascript, and there are quite a lot of them.
For this particular context, I couldn't agree more. The one caveat I'd put on that is that a not-insignificant number of those people who already "know" JavaScript don't really "know" it; they could probably be fooled for a disturbing length of time if they were swapped over to Java (and that's not to say the languages are really that similar). Still, people with at least a passing familiarity with JavaScript are numerous.
> The other plus, potentially, is that existing software may be able to run on it.
Yeah, that one I'm not buying much. I can't think of any other top-10 development language that wouldn't have a richer software ecosystem for this problem domain. Even for more general server-side development, the Node software library is growing fast, but it is very, very thin and represents a tiny fraction of the JavaScript software world.
> it's not clear to me really what the use cases for this device are, other than that it's a way to run javascript on hardware, and some people might want that.
Yeah, and even from that context, if you went with a JVM runtime, you'd have JavaScript (admittedly not quite as nice as with V8, but still enough for most people who'd want to play around with an embedded JavaScript server) along with a plethora of other languages to choose from, which I'd have to think would do a much better job of getting a broader selection of web developers started in the "Internet of Things" paradigm.
> My goal was, initially, to just try to turn the conversation away from language wars toward actually discussing the device and its merits. I have clearly failed.
Hehe. Well, responding to questions about the language choice can do that. ;-)
That said, I will ask that question: there are lots of other efforts to package up Cortex-M microservers for sensornets and ad-hoc devicenets and bundle them with user-friendly API's. I haven't been able to discern what's particularly exciting about this approach. What do you think is uniquely interesting about this solution?
> over time you revealed that you believe that being able to deal natively with binary protocols is a core feature of a language suited for hardware.
"core feature" means different things to different people. I look at it as a building block that a lot of core functionality for working with devices tends to build on top of, and a very important tool to have in your back pocket when dealing with a legacy device with lord only knows what fun little protocol bugs^H^H^H^Hquirks. It's not that you have to have it, but not having it tends to discourage the rich development of that larger ecosystem, and in JavaScript's case I've had first hand experience with it.
> it is a core feature of node.js, which is the api this hardware device purports to emulate. node.js is a kind of de facto standard at this point, which is good enough.
Agreed. My main problem is that "good enough" is not exactly the kind of quality that makes me say, "oh yeah, we obviously should choose this one". If someone hands me a Node.js server and says, "talk to this air pressure sensor", I'm not going to say "it can't be done", because obviously it can be done, and without having to move mountains. But if I'd been handed nothing but the problem of talking to the air pressure sensor, then even if I were looking to bring on a ton of web developers with little or no familiarity working with hardware, Node.js wouldn't exactly have sprung to the top of my mind. ;-)
> To my knowledge there are no standards documents for lua, python or ruby and people have no problem using those languages.
There are no ECMA-like standards bodies for those languages, but there are documents defining the language (though Ruby in particular seems to have a bit of a "however the runtime works" mentality it is still shrugging off). That actually has been an impediment from time to time, though obviously not a huge one.
Lua's language is quite detailed about bindings to the native platform and even has a specific type, "userdata", for unmanaged blobs of memory, and Lua's deceptively named "string" type holds an 8-bit-clean arbitrary sequence of bytes, so not only are its string functions and IO libraries naturally capable of byte-oriented processing, there's little friction for even ancillary libraries supporting and exploiting it. Ruby's string implementation is similar.
Python has, as far back as I can recall, had support for binary IO and processing, though I'd qualify that by saying it has been somewhat hokey, though things like the struct module and Cython have helped mitigate that. Until 2.7.x and 3.x came around, I'd give Python demerits in this area, though not as bad as JavaScript.
> So what else is there that gives you the willies?
As I mentioned, there's the probabilistic parser. When you are learning, it is nice to have forgiveness, but it is much better to have each and every mistake pointed out to you. A probabilistic parser makes things less transparent. That can be worth it if you're going with a "do what you can" mentality, which makes perfect sense for web documents, but with hardware, imprecise communications are just as wrong as incorrect ones. This doesn't mean a language need be hard; it just means that you want to be painfully clear about what is and isn't correct from day one.
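The most familiar case of that forgiveness is automatic semicolon insertion, where the parser quietly guesses at your intent:

    // ASI inserts a semicolon right after "return"
    function getPoint() {
      return
        { x: 1, y: 2 };
    }
    console.log(getPoint());   // undefined, not an object -- and no error anywhere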
Then there's JavaScript's bizarre and limited approach to arithmetic. Java's lack of unsigned fixed point arithmetic has drawn some deserved criticism, but those concerns seem puny when compared to JavaScript's quantum numbers that exhibit double/int32 duality. Not a good can of worms to open up, particularly when talking to hardware that might (per Murphy: read will) have odd numerical representations of its own. Even if you assume numbers are passing back and forth as decimal arithmetic strings, that's a Pandora's Box of fun just waiting to be opened that isn't going to make the experience any easier even for programmers fairly familiar with JavaScript.
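A tiny sketch of that duality: plain numbers are IEEE doubles, but the bitwise operators quietly reinterpret them as signed 32-bit integers:

    var n = 0x80000000;       // 2147483648, stored as a double
    console.log(n | 0);       // -2147483648: coerced to int32 for the operator
    console.log(0.1 + 0.2);   // 0.30000000000000004: everything else is floating point

Not what you want in the back of your mind while decoding a device register.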
Then there's the whole async I/O, event driven callback model. While I personally love that model, and at first glance it seems like it'd be a perfect match for sensors and devicenets, it does come with some downsides. It's a less natural paradigm for a lot of more novice programmers, and the way it decouples logic and injects layers of indirection in to the code can lead to confused developers when tackling new paradigms. Coroutines seem to present an initially more accessible approach to that model, and the thread model is initially often much more approachable for developers (though the popularity of Java NIO is an obvious testament to advantages of getting comfortable with I/O state machines sooner rather than later).
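For anyone who hasn't worked in that style, the indirection looks something like this (a sketch; the device objects and their methods are hypothetical):

    // The logic for "read the sensor, then set the light" is spread across callbacks
    sensor.on('ready', function () {
      sensor.read(function (err, value) {
        if (err) return console.error(err);
        light.setBrightness(value, function () {
          console.log('done');   // runs last, yet nothing ever blocked
        });
      });
    });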
So there are some more straightforward concerns. Again, not of the "it can't do X" variety, but more of the "we're trying to get a fish to climb a tree" variety: is this really the best way to get to the coconuts?
> He had nothing but dismissive comments about the language.
The comments were not intended to be dismissive about the language; I'm kind of surprised that statements with qualifiers like "might" would be interpreted as being "dismissive". Perhaps you didn't notice them?
If you look at my comments, while I was pretty strident about the facts, I took great pains to extensively qualify my opinion. To illustrate, I'll quote from my initial comments to you with highlights to draw your attention to the qualifiers:
> ...it might not qualify as "a good programming language" for the job...
> ... Do you think it is maybe possible that a language that had standard... just might be better suited for talking to hardware?...
> ... Maybe JavaScript makes working with hardware more accessible than other choices, but I doubt it...
Now, no question I was expressing a difference of opinion, but I think if I had said any more times and ways that my critique wasn't of the language in general, but of its suitability for the task in particular, I'd have felt like a broken record. Not only wasn't I dismissive of the language in general, but I wasn't even dismissive of the application of the language for the problem in particular. I expressed that I didn't think it was a good idea, but I explained my basis for the opinion and qualified it extensively, to the point of stating that I could be wrong even if I didn't think so.
I honestly can't explain your characterization of my comments other than by speculating that perhaps you came into the discussion already wearing your language advocacy hat, and that framed & distorted your perception of the discussion.
This is kind of obvious, but given the context I think it might be necessary to say it anyway.
Perfectly good programming languages tend to have strengths, weaknesses, and areas of focus (it's an inevitable consequence of both language design and Darwinian forces that dominate the marketplace of ideas). I don't think any of the flaws I pointed towards are necessarily bad points about the language and in certain domains are actually great strengths (probabilistic parsers are a great idea for web content).
> it only finally slipped out of him almost by accident what he was actually trying to get at, and even that isn't much of a relevant point.
I think a better way of describing it is that it "finally slipped in to you". ;-) When something is said multiple times in different ways, that's not "slipping out".
Obviously, you didn't understand what I was communicating, and no doubt this is partly a reflection on my own failings to communicate effectively. I apologize for not doing better.
However, you might want to consider the possibility that particularly given the limitations of the medium, the nature of communication failures, and that my point was deemed clear as day by at least one other 3rd party... you may have missed something.
Maybe we can learn something from this experience.
I have edited comments in some cases immediately after posting them, as I have a habit of noticing grammatical errors, typos, errors in punctuation, etc. only after posting.
I didn't alter anything that substantively changed or added to the meaning of what I said (when I have done that in the past I have marked the relevant text as an UPDATE so that the change gets noticed by anyone who may have already replied, but that wasn't necessary in this case), and I didn't edit anything any time after (or shortly before) you posted a reply. I find that once a comment has been up long enough for people to read it, even in-place corrections of gibberish to English increase confusion more than they decrease it.
If you saw an earlier version of an edited post, you might have found a nonsensical phrase or two (e.g. I recall one post where I fixed an accidental "maybe maybe" to just "maybe", I remember my first post had something like "which just the many of few examlpes", and I'm sure there were one or two superfluous apostrophes in some of the comments which I later removed), so it's conceivable you read something that was later edited. But if it was fully legible (more like, as legible as it is now ;-), then definitely nothing was edited. I can assure you that the qualifiers were there in the original postings.
Truth is HackerNews locks down comments pretty fast, so by the time I might want to change/add/remove to what I've said in a way that even subtly changes the meaning, it's way, way too late for an in place edit.
UPDATE: Murphy's Law strikes again. I remember one edit I did do that might be deemed more significant than the others I mentioned. In the original post where I said, "..it is maybe possible that a language that had standard ways of working with bytes say within the first decade of popularizing it just might be better suited for talking to hardware", I had originally used underbars instead of asterisks to emphasize certain words. Once I hit post, my mistake became obvious and I went back and fixed it. I made the same mistake again when I wrote the comment where I was quoting myself with previous excerpts (so I changed "with italics" to "with highlights" and then promptly fixed the rest of it so that "with italics" was, if anything, finally the more accurate phrase ;-).
> You have the assumption that "Talking to hardware" necessarily involves dealing directly with binary protocols.
I wouldn't say that it necessarily involves dealing directly with binary protocols, but the issue tends to come up. Particularly when talking with hardware that can't just easily be upgraded, you often find that even "text" based protocols might require some manipulation at the byte level in order to decode them successfully.
But really, the binary protocol aspect was just a really obvious and simple example of the larger "square peg, round hole" aspect of using JavaScript for the task. Maybe JavaScript makes working with hardware more accessible than other choices, but I doubt it.
> why wouldn't you do the binary protocol stuff in a C module and expose a nice easy api in the scripting language as is the usual practice?
I'd argue once you've done that, you're done. The hard part about the "internet of things" is that devices generally don't have nice, clean APIs, and that's what needs to change.
Once you have a nice API, you can embed a JVM in there quite cheaply and then support almost any language web developers might want (including JavaScript), quite efficiently at that. Better still, just do a simple REST-ful interface and leave it to the developer to do whatever they want.
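The REST-ful version of that idea is language-agnostic; in Node syntax purely for familiarity's sake (readSensor() is hypothetical, and is where the real driver work would live), it's roughly:

    var http = require('http');
    http.createServer(function (req, res) {
      if (req.url === '/pressure') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ kPa: readSensor() }));
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);

At that point the client can be written in whatever the developer likes.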
Well, considering we're talking about a $90 product that already has an ARM Cortex-M processor in it, it seems hard to argue with the "quite cheaply" argument: http://www.st.com/web/en/catalog/mmc/FM141/SC1169 (particularly since you can get free samples).
No, not really. My point wasn't about a checkbox item on a feature chart.
The point is that the language has evolved for quite a long time in a direction very different from talking to hardware. Basic capabilities like the ability to work with bytes are a foundation that a lot of other constructs are built on top of, and adding them late in the game doesn't undo all that came before, or provide all that should have been built on top of them.
It's not that you can't do this kind of work with JavaScript (obviously you can). It's just that it seems like a very bad fit (even if you ignore its history with browsers), when there were a lot of other choices that seem like they'd have made things easier for programmers getting started in this area.
Well, there are JVMs and native compilers for the embedded market, able to target less powerful boards (hence cheaper per-unit designs) with much more performance out of the box.
Yeah, it has a few functional features, but calling it functional would be like saying PHP is a functional language because it has closures and first-class functions. When the rest is broken (bad design, no concept of expressions like Ruby or Scala have), that doesn't make the language magically good.
Javascript is not a choice for many web devs; for front-end devs, it's basically the only thing you can use in a browser. And even if one uses a transpiler, third-party scripts are still in JS. That's why the hate. Otherwise people would not give a damn about whether it is good or bad. When one is used to something better (like Scala), one doesn't want to spend one's time in this poor ecosystem.
DOM APIs are quite good; that's the only reason most people are using JS today. They are just still inconsistent between browsers.
What on God's green earth gave you the idea that PHP has either closures or first-class functions? PHP, in its current version, has a Closure class whose name is an outright lie, and which, along with some really nasty syntactic sugar, constitute the basis for PHP's equally dishonest claim to have finally gained first-class functions. It's not just that the rest is broken -- it's not even just that this stuff is broken too; it's that the people responsible for it either don't mind lying about it, or they genuinely believe they're telling the truth. I'm not actually sure which prospect disgusts me more.
I would rather be surrounded by these guys than by those who think Scala, Haskell, Clojure, etc. would have been commercially more viable for this project. Basically, common sense.
I shouldn't have said that, but it's not an inappropriate response to your comment.
Your point is moot. You switched it from commercially viable to good or bad. Javascript really is not a good language, no matter how you put it. He wasn't arguing whether it was a good business decision.
In the context of building things, a good language is one that gives the best bang for your time and improves your chances of success.
JS sure has its warts. But it isn't really that hard to know what to avoid. Once you do, it is expressive enough to write apps without sacrificing productivity. It has all the basics you'd need: higher-order functions, closures, decent performance, etc. The big win, though, is that it is the only truly cross-platform language available today.
The major difficulty today (on the server side) is that JS has no language support for handling callback-interspersed code. This will be mostly resolved in ES6, due next year. We use ES6 features today, via flags.
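For instance, generators (behind --harmony flags in the Node of this era) let you flatten callback-interspersed code. A minimal sketch, where run is a toy driver of the kind libraries like co provide and each yielded value is a thunk:

    function run(genFn) {
      var gen = genFn();
      (function step(err, value) {
        var next = err ? gen.throw(err) : gen.next(value);
        if (!next.done) next.value(step);   // each yielded value is fn(callback)
      })();
    }

    run(function* () {
      var data = yield function (cb) {
        setTimeout(function () { cb(null, 42); }, 100);
      };
      console.log(data);   // 42, written as if it were synchronous
    });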
Oh, don't get me wrong. I have no problem with the people who wrote Tessel programming my oven. Even in Javascript.
I also have no problem with a Javascript Kiddie programming my oven in Javascript. AS LONG AS HE/SHE COULD HAVE ALSO DONE IT IN SCALA/OCAML/CLOJURE/HASKELL.
> I also have no problem with a Javascript Kiddie programming my oven in Javascript. AS LONG AS HE/SHE COULD HAVE ALSO DONE IT IN SCALA/OCAML/CLOJURE/HASKELL.
Whether or not something achieves your lofty expectations for "greatness" on HN, it doesn't make it any less immature and disrespectful to post pithy, destructive, flamewar comments that are unrelated and unhelpful to the discussion and to the innovators.
You would be surprised if you saw what kind of code powers the hardware surrounding you. Javascript is actually beautiful compared to some of those atrocities.
I've formally verified hardware for a living. All of us doing the verification had PhDs. Average hardware has much higher quality than the average code that runs in your browser tab.
When I'm creating the flight control system for my moon rocket I'll use some more advanced hardware. For 99% of the things I might use this for, Javascript will be fine.
"Oh, my room lights lit up 4ms too late because an interrupt &c., &c."
No one is even suggesting that Node is a suitable substrate on which to build, for example, a flight control system, and reduction to absurdity does nothing to improve your argument.
OH YEAY! HN found a new excuse to wail on its least favourite language!
I don't want to come off as a spoil sport, but can you please edit your comment into something insightful and/or constructive?
As for myself, I will spin this into a question:
What existing javascript library or program would most likely benefit from being able to run on this device? Is it just appealing to web devs? The presentation doesn't really get into what compelling use cases this device enables.
Actually, knowledge of any of those is a proxy for the general quality of those developers. They could end up writing C++, but the fact that they went out and sought to learn a language they thought was beautiful, or that had a different paradigm, tells you something about them.
I get that it is possible to generalize public opinions and say that something is "objectively" beautiful, or ugly. But I don't get why people familiar with those three languages always praise their functional purity, as if imperative languages are barbaric. How come there aren't ugly functional languages?
I'll use Erlang as an example. Functional purity is objectively beautiful because it solves or minimizes a large number of typical problems: problems such as large mutable states, and modifications to shared mutable state under high levels of concurrency. A good functional language, using closures, can mimic other patterns while having a very small core set of rules.
Purity doesn't mean moral superiority (although it sounds like it). It means, for example, referential transparency: a function, no matter how many times it is called with the same argument, returns the same result. That is beautiful because it allows for interesting compiler optimizations and is good for testing. Other, looser notions of purity include confinement of mutable state (in Haskell, monads wrapping modifications to shared global state) and immutable data structures in Erlang or Clojure.
Now, that said, these are all tools. As I mentioned, in practice a lot of these people will end up using Java or C++ in their day job. But the fact that they decided to learn a new paradigm, and managed to, is the key.
Heck it could have been data-flow programming or logic programming (Prolog is awesome too, especially when mixed with constraint satisfaction).
Yet another way of putting it: familiarity with these things points to a level of passion that sets someone apart, and that is quite desirable. Now, yes, there is a self-referential quality to it all: the more we treat functional programming as a proxy for developer quality, the more people will do tutorials just to put it on their resume. Well, then the next fashion will be something else. Quantum algorithms, perhaps, who knows.
I am sure those high-caliber gurus will have no problem learning javascript overnight. That will be an excellent addition to their toolbox for that rare case when their internet-enabled fridge gets hijacked.
On the other hand, a quick straw poll through my contact list: out of all my friends/colleagues who I'd guess have used (or even heard of) those languages, I don't think a single one has been "on the job market" in the last 4 or 5 years (they're all either comfortably long-term employed at their present gig, or they've been successfully running or partnering in small consulting firms of their own).
That is where good headhunters come in. I don't have a LinkedIn account, Facebook, or Google+, and am quite happy at my current job, yet I get calls from Google and other companies periodically. I still don't know how they find me; I do go to conferences, so maybe they buy lists of attendees...?
1) I'm guessing (by reading between the lines) that you also haven't "been on the job market", and haven't been plausibly tempted to "jump ship" to any language-purist position?
2) <cynical mode> Just because you haven't set up LinkedIn/Facebook/Google+ profiles yourself, that doesn't mean they don't all have "shadow profiles" for you, based off the social graphs where your colleagues/friends have "leaked" information about your existence and enabled them to infer your skills/abilities… I don't _know_ that they sell data from "shadow profiles" to recruiters, but I do suspect there's enough money in tech recruitment to make it likely…
As funny as it might sound, you should try Node then.
Single-threaded. Concurrency via processes, so it's impossible to share state. And it's among the more efficient dynamic languages out there now, thanks to the browser performance race.
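A minimal sketch of that model using Node's built-in child_process module, split into the two files named in the comments -- state crosses process boundaries only as messages, never as shared memory:

    // worker.js -- owns its own state, talks only via messages
    process.on('message', function (n) {
      process.send(n * 2); // reply by message passing
    });

    // main.js
    var fork = require('child_process').fork;
    var worker = fork(__dirname + '/worker.js');
    worker.on('message', function (result) {
      console.log('worker replied:', result); // -> worker replied: 42
      worker.kill();
    });
    worker.send(21);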
There's a slippery word if ever I saw one. Does Java count? Python? C? You could make arguments for all, or none, of these depending on what "proper" is.
I said "properly" to avoid confusion, but it probably only increased it. Java is not a strictly compiled language, since it still uses JVM. Most modern dynamic languages aren't purely interpretive and they use various techniques (like JIT) which make them closer to compiled, but they are still not "properly" compiled in a sense. That's what I meant above.
There is already a strong effort in Ruby land - check out http://artoo.io/. Artoo was developed with robotics in mind but it is not limited to that domain.
You could - but V8 is actually much faster than Ruby or Python, and JS syntax is quite good for I/O. Better still, a ton of people know JS, and the more developers who find this accessible, the better.
Once upon a time JavaScript was this fragmented language, mostly associated with blinking effects, pop-ups and lame attempts to protect a web page from being viewed as pure HTML. After years of misery the community was tired of how things were and said, "No more!" And so HTML tables and JavaScript were sent to a zip drive (you probably won't remember those; think of it like an SD card), never to be seen on the web again.
But evil people who sold out to the dark forces were destined to bring JavaScript back. By making frameworks they masked the incompatibilities of the language, making it seem like a friend. And the young people who never fought the war welcomed this new technology with open arms.
But seriously: we took a markup language made for writing documents (just like Word) and turned it into a technology for making software. It turned into a total mess, and now you wonder why taking a language made for manipulating the DOM and using it for manipulating the stack is a bad idea? "When all you have is a hammer..."?
Twenty years ago I read exactly the same thing about using the abomination that was Perl/CGI instead of the so-called correct C. And history tells us that the prototypes made to test a concept were, most of the time, good enough to stop further development "the proper way". This product (or a similar one) can empower lots of people in ways that we cannot imagine yet.
Not all hardware has to be made by the hundreds of thousands, and being able to tinker with real-world devices will be a bigger asset than we think right now. The web we know now wasn't developed from tall towers but from the trenches. New ideas will flourish, and then great developers will be needed to optimize, refine and scale the projects that change our view. At least that's what I hope.
You didn't address any of my points. Perl may have been horrible -- line noise, a read-only language -- but 20 years ago it was the quickest way to write a visitor book or a mail form. You could do it in C, but you didn't, just because you didn't need to. If the Perl prototype was good enough, you could use it in production, and thousands of sites started to build whole e-commerce systems and found they worked.
That was a tipping point for the web, and the world is different now just because of that.
If you have to choose between power and ease of development, most people will choose the latter, and if I can try a new hardware board that doesn't force me to learn anything new, I'll probably try it.
I think JavaScript is an acquired taste. People usually start out hating it, but then they end up loving it. At least that was my case. Seriously, it's not such a bad language.
A lot of people are building awesome things with it. JS haters used to say that was because it was the only choice in the web browser, but now we see more awesome stuff outside the browser: on the server with node.js, in Windows apps, and now on embedded devices.
Seriously, a language that allows you to write awesome things can't be such a bad language. Ranting about people using a language that's not your favourite is a waste of energy.
Hell no. These are the same people who didn't want to be bothered to write syntactically correct HTML. They view specs as a nuisance and standards as cruft.
I code JavaScript only because I have to, but it's a miserable language.
Javascript sucks as a language, but there's a huge number of developers who know Javascript, and of the ones who don't, many use C-derived languages, which would make it easy for them to pick it up.
Another major advantage of Javascript is that it is continuously optimized in the ongoing race for performance in the browser. The node engine has been shown multiple times to be significantly faster than other interpreted languages.
Finally, while young, node has quite a nice ecosystem.
"Javascript sucks as a language but there's a huge numbers of developers that know javascript"
The same could be said of Visual Basic. I still wouldn't want my thermostat running VB.
"Another major advantage of Javascript is that it is continuously optimized in the ongoing race for performance in the browser"
In an embedded system like this, what you want is something continuously optimized to be bug-free and work without fail, not something optimized for a mostly ephemeral environment where, if you leak memory for 45 days, people rarely notice.
Browser performance is also not optimized for anything that matches the application workload of this kind of device.
Given that this is realistically a tool for hobbyists or people rapidly prototyping a concept, does it matter if you choose to build a thermostat in VB (assuming you liked VB enough to consider it) or hand tuned assembly?
Anyone who tries to bring a product to market with this will presumably be crushed by those who can use less expensive components thanks to better-optimized code. At scale, even a few pennies per unit can make a huge difference in your profitability. It seems quite unlikely, for at least the foreseeable future, that you will buy an off-the-shelf thermostat with one of these inside -- or anything else running Javascript, for that matter -- simply due to component costs, if nothing else.
But if you are a web developer who wants to hook up a light that flashes when someone visits your web page, something like this might be a reasonable choice. Being able to build something that works reasonably well with minimal effort is appealing. Who cares if it adds $30 to your BOM and maybe doesn't work every single time?
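A sketch of how small that kind of project could be in Node, using only the built-in http module -- the led object below is a made-up stand-in, not Tessel's actual interface:

    var http = require('http');

    // Stand-in for whatever LED driver the board exposes (hypothetical API).
    var led = { blink: function (ms) { console.log('blink for', ms, 'ms'); } };

    // Flash the light once per page visit.
    http.createServer(function (req, res) {
      led.blink(500);
      res.end('Thanks for visiting!\n');
    }).listen(8080);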
Personally, this is probably not a device I would be interested in. Playing with embedded systems at a low level is what makes it fun for me, but not everyone wants to learn C and assembly just to throw together a project in their spare time. I think this could be a pretty great product for that certain niche.
Devil's advocate, being 100% stable is somewhat less of a concern when you have a wifi link for easy update.
Besides, you can choose sensible things. If you need something stable, you stay on a known stable version of node (which has its own issues, of course).
"People who pigeonhole JavaScript as a 'bad' language are typically just too close- or feeble-minded to grasp or command things like Prototyped objects, functional scoping, closures, etc... they are better left in the insular, strongly-typed environments they came from and leave the real problem-solving of the web to JS devs." ... See how ridiculous you sound when you make these kinds of statements...
Every time I see a thing about the internet of things, I get a little excited, but then they don't really show exciting examples of what you should do with it. Outside of Lockitron and Nest, I am unimpressed with what's come out so far. But it's tough. The field seems so expansive that most people get overwhelmed with possibilities, but few bear any fruit. For my part, I've only had one decent idea in the space. I'd love to see a robot that cooks food for me while I'm commuting home. Possibly like a Zojirushi rice cooker that I can program to make stuff at a certain time, but with more flexibility and a web API.
Sure, but that's hardly the only problem. The simple fact is most people don't have systems in their home that they want/need to interface with through an app or something. Here's a close to comprehensive list of the electronic things in my home:
Several computers
Microwave
George Foreman Grill
Rice cooker
Fridge
Thermostat
Toothbrush
Lights
Dishwasher
Washer/dryer
Mechanical engineering doesn't seem to be the blocking point for any of those, except maybe the washer/dryer and the food-related stuff. I could definitely see the opportunity for a device that could portion out N grams of rice per person and have it ready to eat when I get home, but for the rest, I don't see it as the stumbling block.
Machine vision is also a problem. One of the biggest I think. You could have a tiny tractor that grows you delicious veggies, but machine vision can't deal with plants.
Yeah, but if we had that, we wouldn't be talking about the "internet of things"; we would be talking about "actual AI everywhere", which is a whole next level of difficulty up.
Injection molding is the cheapest process on the planet if you have a production run of >10k for a mass-produced product. It is hard to find anything on Earth cheaper than an injection-molded object.
I fail to see the connection between the end of web development and using javascript to write code for hardware (something that was already being done a long time ago).
Tessel is another good idea that will help DIY homebrewers apply their existing skill sets to prototyping fresh ideas for a growing market segment.
If you object to javascript being used, by all means go back to building your 555 timers by hand or feel free to construct a competing unit in whatever religion...I mean language...you feel is the one true answer.
Exactly! This is a good effort, and if people have a problem with JS, why not make something like this in their language of choice? I understand why people are hating on JS for this application, but I don't think "robust, mission-critical systems" are the aim here. This is just for hobbyists, and to let a larger number of people bring their ideas to life. Once those ideas are out in the open, some of you veterans could remake them in a "proper" language. Currently people don't program embedded systems because the task seems daunting. This project is just an attempt to remove that intimidating feel of hardware. Their motto is "Don't teach webdevs about hardware, teach hardware about webdevs!" and it's a pretty nice way to bring a massive amount of brainpower to the domain of embedded systems that interact with the internet.
Kudos to these guys for doing something different. I am sure a lot of web developers out there will be more than happy to try creating new things in a completely different field using their current skills. Seriously, thank you!
It is so easy for the people here to criticize and pass judgment based on a few slides and their lack of knowledge of how your product works. Don't listen to their prejudgments; these people are obviously not your product's target.
1. For most people, Arduino would be easier to use (being that many Arduino guys are non-programmers to begin with), with a great ecosystem and a variety of hardware.
2. On the other hand, no one would ever use this hardware for an actual product. Cost is too high, power consumption - I assume - is also up there.
3. As opposed to Arduino, this is not a "hard" real-time system. That severely limits the development of applications featuring, for example, motor control or orientation sensing (gyros/IMUs). And from what I'm seeing, even Arduino didn't make great strides in the commercial market; product development often requires very precise, low-level control over your hardware as well as real development tools (namely a JTAG debugger).
So we're left with an interesting experiment, the longevity of which mainly depends on community acceptance. It basically targets web developers who want to turn LEDs on and off. This is as far from the "internet of things" as one can imagine, unless these things are one-off hobby projects.
Yes, embedded development sucks, and product development can be an exercise in futility and despair - but this is not the answer.
I entirely agree. I'd much rather be tied into someone else's ecosystem with an electric imp or Xively, than have to work with hardware designed by people who think the software is the difficult bit.
I'm not sure. It seems like a big waste if you're going to run an interpreter for a decent language and use it exclusively as a compile target for an awful language.
I think the appeal is that web developers won't need to learn another language in order to do embedded development. That's another barrier to entry to embedded development lowered.
If you want to do new things, why not learn appropriate ways of doing those things? Is it really so hard to use more than one language when that makes technical sense?
As a member of a small startup (4-person technical team, all of us holding down day jobs as well), the _prime_ decision trigger for technology is "what 'way of doing things' is the person allocated to a particular task going to be most productive in right now?". That's driven us in some directions that're non-obvious to outsiders - our stack is basically Arch Linux, CherryPy, Python, Arduino, HTML5/JS - I can easily see people thinking "WTF? Why?", but there are some very good reasons behind those decisions - reasons which have probably allowed us to get to market 6 months earlier than if we'd chosen the stack purely on technical merits rather than considering the skills and competencies of the existing team.
If we'd had corporate funding behind us, it'd almost certainly be different. But as a small startup, I'm 100% sure our slightly odd choices are the right ones for us. (And I could easily see why a different startup might decide node.js/javascript on the hardware was the correct choice for _their_ technical/development team)
This is perfect logic for a small startup; it is less so for a hardware team that is going to pay dearly for the decisions they make today (more so than a software team). Hitting a big ecosystem is definitely a viable reason to do something when targeting developers, but what about the longer-term costs associated with it? Their goal is to be the bridge that lets web devs start bringing hardware online. I think it's fair to say that other, less painful languages with large developer bases were available, and some have the holy trinity of sucking less, running faster, and having a sufficiently large ecosystem around them.
FWIW, we are a "hardware team" as well as a software team - and we're fully aware of how much some of our decisions might cost us down the track if we don't get things right enough today… (details masquerading as shameless self-promotion: http://holiday.moorescloud.com and http://dev.moorescloud.com )
Admittedly my comment (as most comments on the internet) was a bit of armchair generalship. I honestly wish you guys the best because it's a cool concept. Would you guys consider building something for programming in Lua directly as an alternative to the JS to Lua bytecode conversion?
I just wish it hadn't been in JS, partially for the selfish reason that JS inexplicably makes me un-focus in a way that no other language can (it might be the formatting? honestly don't know why).
So I (and "we" in my posts upstream) are not the original posts JS hardware team.
FWIW, on _our_ hardware, flipping the "dev mode" switch to "on", ssh-ing into your xmas tree lights, uncommenting the ARCH/ARM repos in mirrorlist, and running "pacman -S lua" should just work. The API for our compositor (which abstracts the fine timing details of controlling the WS2812 LEDs) is available and documented on both our dev site and GitHub (or if it's not today, it is scheduled to be up there by next week, I think).
For some people it may be easy. Others may have awesome ideas but decide not to execute on them, because learning Arduino C++ or figuring out how to install Node on a Raspberry Pi seems like a large barrier to entry. Maybe some of those people are the ones who make those awesome websites, and maybe, with an embedded controller that feels more like the environment they're used to, they're more likely to bring their awesome ideas to fruition.
Disclaimer: I went to school with the Tessel folks.
From where I sit, it seems a lot of the UI for "internet of things" hardware is going to be your phone, so your end-user-facing code is likely to be HTML5/JavaScript web apps served from the webserver on the embedded hardware. There are then a lot of good reasons to run JavaScript code on the device as well. (This isn't the approach we're taking; we've got HTML5/JS for the UI and Python running most of the code on our hardware. For details and shameless self-promotion, see http://dev.moorescloud.com/ .)
I know, right? I wish there were a recording of the talk -- the slides gloss over something that was probably either in the Q&A or explained live.
Having written software for Arduino[1], I see this initiative succeeding for two main reasons:
1. JavaScript means developers don't have to understand manual memory management to program their hardware (and many developers simply _can't_ grok manual memory management these days)
2. node.js gives access to the npm package ecosystem, which opens up a huge potential "common lib" to hardware developers (potential, because incompatibilities such as node.js network support vs. luasocket will need to be handled)
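As a rough sketch of what point 2 buys you: `request` is a real, popular npm HTTP client, while the led object is a made-up stand-in for whatever hardware API a given board exposes:

    var request = require('request'); // from npm: npm install request

    // Hypothetical stand-in for a board-specific hardware API.
    var led = { toggle: function () { console.log('toggle LED'); } };

    // Poll a web endpoint every 10 seconds and react in hardware.
    setInterval(function () {
      request('http://api.example.com/status', function (err, res, body) {
        if (!err && res.statusCode === 200) {
          led.toggle();
        }
      });
    }, 10000);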
Hardware hackers are generally a very curious bunch - they're not afraid of learning a new language on the software side, so it's not really _JavaScript_ per se that is Tessel's advantage here.
Going straight to Lua could have been a second choice, but the package ecosystem in LuaRocks is hugely immature.[2] Because Lua execution in a LuaRocks-supporting environment is the exception, not the norm, most developers continue to bundle dependencies manually[3], which again doesn't encourage the package ecosystem to grow.
If hardware doesn't require a traditional web UI, then why even use JavaScript? Did we all suddenly forget the history of why we even have to use JavaScript on the web? Here's a hint: the web browser. I don't use it because I like it; I use it because that's what we were forced to standardize on. There are much better languages and frameworks out there we could use for this.
Familiarity. If you are someone who builds traditional web UIs, then Javascript is what you know best, and that is the market segment they are trying to attract with this product.
If people truly want the better languages and frameworks that you mention, then the products that come out with those choices should win out in the marketplace eventually.
Except no. Because players within the market entrench themselves, as JavaScript has, there's practically no way for change to happen.
The iterative change is constantly restricted by past decisions, abstractions and investments. It's very hard to come up with anything but "good enough, sort of" solutions in this kind of model.
I don't even know why I care. Psychology is a bitch, and I suffer. :(
Because they have identified a demand from people writing web UIs and wish to make money fulfilling that demand, I expect. There seems, from my anecdotal vantage point at least, to be a distinct trend of developers becoming more interested in hardware and it would appear that some of those developers would like to write software for that hardware using the tools they already know. Since the web has been the place where a significant portion of software has been written in the recent past, Javascript happens to be the tool that is familiar.
I think JS is very similar to the English language: both have lots of imperfections, but both are also widely used across many fields and countries.
By the way, the DOM part of JavaScript is what's tied to the browser. The rest of the language couldn't be more platform-agnostic.
This is awesome. I have always been interested in hardware programming, but always hated the languages I had to learn to do it properly (java, c++, c, assembly, etc.).
Technology is reaching a place where I can eventually start doing embedded programming with Ruby. I can't wait for that to happen - I am sure it will.
> Then you weren't interested in hardware programming; not one iota.
That is a bit of a leap. I read what (s)he meant as more "interested in programming hardware" - as in controlling hardware and making it do things. That doesn't mean you need to be interested in the low-level hardware side of things to be interested in making the hardware do stuff.
Failing to learn what it takes to solve a problem you have is not a viable business case. One doesn't simply decide:
"no this product i want to exist is impossible because I can't be bothered learning something new or because I have a non-technical and non-financial hipster-hued-bias against the existing commoditised technologies that lead me be able to solve the problem."
If you have something you need to get done, for work or otherwise, you use the most appropriate product.
If you "want to learn about hardware", you use the tools and resources that presently exist, or make your own. Otherwise you do not want to learn about anything; you just want to screw around so you can tell your boss you are a hardware expert in Ruby or some bollocks.
So many negative comments so let's say it loudly: this freaking rocks. Massive kudos to the team.
Maybe the performance is not yet on par with C. Maybe it will never be. But using JS means we can use this for very rapid prototyping and move on to C when we do need the performance.
The people complaining about performance are the same ones who build very fast and scalable products that nobody uses.
And about JS, let's remember there are two types of programming languages: the ones which everyone complains about, and the ones nobody uses.
Being in the hardware business myself, I can only laud their attempt at simplifying and easing access. However, their approach only solves a basic problem that has already been solved for quite a while (access -- look at the likes of Arduino, Contiki OS, Raspberry Pi). The much, much harder and real problem is developing a coherent system for "everything"/"the internet of things". Those and others, particularly Contiki, are much more advanced when it comes to knowledge, development state and commercialisation.
So in short: I want to see a product before I am impressed. Other than that, I see little novelty and a bad choice of programming language for embedded dev.
For the longest time I was wondering how they were fitting node.js in 32 MB RAM when V8 uses something like 256 MB of RAM. I'd never have expected cross-compiling to Lua bytecode.
Not that this thread needs any more opinions, but I think it'll work, because the market and the people who want to build stuff with it are there.
No matter how flawed their concept might be, or how shitty JS is as an embedded language, people are out there that want to run with this idea. And when they run with it (one of Tessel's major goals seems to be getting their users to be able to get up and running fast, to boot) -- things will get made. When things get made, lives change, markets get shaken up, simple as that.
The theme at this year's JavaOne was (once again) The "Internet of Things" (IOT) and the community keynote address was by Freescale's CEO who proclaimed that the price of an Internet connected (or connectable) uC would need to be about $0.30 (US). At that price point, I can see a world where everything is Internet enabled. Of course, the devices we were discussing were all running Java.
Tessel looks like another implementation that attempts to bring the same technology to another demographic. Their fancy new system (the developer doesn't touch the Lua parts) should have broad appeal, and running NodeJS provides a lot of room to extend the system in the directions needed.
I see a couple of problems with IoT, and another that's more specific to Tessel ... if these problems have been addressed, perhaps I haven't noticed!
Tessel is based on JavaScript, but there is no standard for interacting with hardware (performing I/O) in JavaScript. I think the ecosystem would be healthier with a whole slew of companies like Tessel, but is anyone working on this standard? (See the sketch below for what the divergence looks like in practice.)
The IoT problems are more social, since we already have uCs that connect to the Internet today. 1) Are we ever going to take the security of devices that cost pennies seriously enough? You might not care whether your water heater is rigorously protected when Internet-connected, but a few thousand of them programmed to turn on at the same time is an issue for the power company. 2) Do I want everything around me to be that "aware"? Privacy can be lost in little dribbles, and we've seen our increasing power to correlate this data into your identity.
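To make the fragmentation point concrete, here's how the same one-line task could look on two hypothetical boards -- neither API is a real product's interface:

    // Two imaginary vendors doing the same thing through incompatible APIs.
    var boardA = { led: [{ on: function () { console.log('A: LED on'); } }] };
    var boardB = { gpio: { write: function (pin, v) { console.log('B: pin', pin, '=', v); } } };

    boardA.led[0].on();       // vendor A's idea of "turn on the LED"
    boardB.gpio.write(13, 1); // vendor B's idea of the same operation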
So the whole point is to compile JS to Lua bytecode; the rest is pretty common in all those IoT-related Kickstarter projects these days.
I would expect four PhDs from MIT to make JS-to-Lua compilers. Even then, why not just use Lua directly? Every language has its shiny spots in the real world. In this case, why do I need to use JS at all? Is Lua that hard to use, or should we really use JS for everything?
To the Tessel team: please do not allow the negativity reflected in the above comments to deter you. You're onto something. Enabling 20,000,000 JavaScript developers to take the jump into physical computing is huge.
One suggestion: there are now several successful platforms for prototyping embedded systems: Arduino, Raspberry Pi, Beagle Board/Bone, etc. What has yet to emerge is a platform that is actually viable for low volume production. If you want to be truly disruptive, find a way to build the other components that are necessary to create real solutions: 12 volt interfaces that can be plugged into a car's accessory port and fire up when the car starts and can cleanly shut down when the power cuts; cases that not only mount the Tessel and its daughter-cards securely, but which pass standard RF emission tests; simple but secure methods for updating firmware on production systems. Make something that not only helps people get started, but helps them to build a business.
After seeing HN posts such as "Why hardware development is hard: Verilog is weird"[1] I can see the value of trying to make hardware development more approachable and less niche... And JavaScript is anything but niche.
And there I was thinking someone finally had created webdev tooling that made webdev more bearable and more like other software engineering disciplines.
Only to find out that it's about yet another hardware board with a bloated software layer so people don't have to deal with pointers.
Meh. Sorry for being grumpy but I haven't had my morning coffee yet.
No, they generally don't. They are just pretty hard to use, but once you get over the learning curve they are as easy as any other programming. Of course, you still need the electrical engineering knowledge to create something useful.
Actually, I think this is the end of hardware prototyping as we know it. And I feel fine about that. This seems like an excellent way to prototype wireless devices before creating a more optimized design.
That said, I don't think it's going to replace said optimized design. There's a big difference between using high level languages on general purpose computers, and using it on embedded devices, and that difference is marginal cost. The marginal cost of distributing a program for a general purpose computer is independent of the performance of the program. The marginal cost of distributing a program that runs on a device you are selling is strongly dependent on the performance of the program. The slower your program, the more expensive the hardware you have to put it on.
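A back-of-envelope illustration of that marginal-cost argument, with entirely made-up numbers:

    // Hypothetical figures, purely to illustrate the reasoning above.
    var units = 100000;           // production run
    var mcuForOptimizedC = 1.20;  // $ per unit: chip fast enough for tuned C
    var mcuForJsRuntime  = 4.50;  // $ per unit: headroom for a JS runtime

    var extraCost = (mcuForJsRuntime - mcuForOptimizedC) * units;
    console.log('Extra hardware cost at scale: $' + extraCost); // $330000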
"You don't. You teach hardware about web developers."
I mean I hate the limitations of C macros and the esolang that C++ can become, as well as facets of every language, but seriously, if language is a barrier then how did anyone learn javascript? It's been said that JS is one of the languages that people think they don't need to know in order to use. Every language is like this, honestly. You write hello world and abstract away from there.
Look at it, clearly, without surrounding distractions:
---Nobody, I mean nobody, thinks they need to know JS before they start using it, and lo and behold everyone, and I mean everyone, can learn to use it--- What a phoking coincidence!
To some degree isn't the community holding itself up by pretending JS is a security blanket instead of cough that thing that Node is written in that is apparently graduating to a real language? Is anyone going to get absolutely worse at JS when it becomes totally real?
Now, when it comes to setting some pin to a high state, it becomes very important to be able to execute some instruction that moves some register value to some memory address. Do it once. Put it in a C function. Bind the C function into Cython. Call Cython from Python. You never have to engineer your code at that level again once you've given the low-level part an API into a high-level language. Let me think about the barrier to entry into writing Cython. Seriously, it takes 5 min. Reading the documentation on calling the function that does that thing you need takes 5 min.
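The commenter's pattern is C + Cython + Python; the same "wrap the low-level poke exactly once" idea, sketched in Node terms. The gpio object here is a made-up stand-in for a compiled native addon:

    // Stand-in for a native addon that does the one register write.
    var gpio = {
      setPinHigh: function (pin) {              // in real life: one C function
        console.log('pin', pin, 'driven high'); // that moves a value to a register
      }
    };

    // Everything above is the only "hard" part; from here on,
    // all your logic stays in the high-level language.
    gpio.setPinHigh(13);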
An understanding of the device and its limitations becomes very helpful when it's breaking. That's something I don't want to figure out from the Phonegap API. I want to reduce my problem to hello world in C, the world where nothing happens if I don't do it, the clear void from which a single expression can be tested without a complicated data-model below it. VM's don't tell you about real devices. I'm not asking anyone to get their RAM emulator out, but can we get over this BS about JS and web being something for everyone but all that code architecture stuff being only for people who double-majored EE and CS? Study a line of brainfuck. Read a minimal program written in LLVM IR. Write a program that outputs '1'. You're programming in that hard stuff. I assure you, you didn't just give away four years of your life and 50k in student loans.
Meanwhile JS/HTML/CSS, while ubiquitous, suck pretty hard. JS is the least sucky. I hate HTML like I hate metastasizing pancreatic cancer. <hello><twice><everything/></hello></twice>. CSS, that language that accepts no math because expressions are hard and Photoshop is easy so only programmers end up using CSS and designers who would benefit from something easier can't be bothered to even learn that(!?!?!?!).
I maintain an application framework that uses data-binding in a terse format, Cython for fast/low-level things, and Python for the development API. AMA. =D
I do want to add on top of this criticism that I was impressed by the work and do applaud the pro-activeness going on. Openness and accessibility are very important. I just think it's very important to constantly, persistently, call "nonsense" at perceived barriers that prevent the community from bootstrapping itself in any way whatsoever. There is a way. It's okay to say JS and C++ are both crap. It's not personal. We can all learn tons of languages and have that one that we use to think in.
Probably faster boot-up time, and smaller too. Also, the RPi has no analog inputs; the Cortex-M3 has decent analog in.
I'm with the 1% of EEs who have no major issues with Tessel... except that I'll be facing 1000s of friends who want me to write device drivers for peripherals that aren't in the stock lineup. :)
The tone of the presentation is like this: "Hurr durr, look at the dumb EEs, using bits. They probably even design and validate their stuff before implementing it, how retarded and un-agile!"
Just backed it, and I think it's a great idea -- because of npm, and I actually develop in Lua (game scripting...).
I think the commenters here are missing the point: this is a great system for prototyping. Perhaps as software developers, we take prototyping for granted (too damn easy?) but prototyping in hardware is a huge business and this board is great idea.
Why go with WiFi instead of Bluetooth? WiFi is hard. If I were to give a Tessel to a friend pre-programmed, how does he enter his WiFi SSID/password?
Bluetooth avoids all that. Use a smartphone as a proxy to the web. If he doesn't have a smartphone, you could even use Twilio with MMS to send data to the 'cloud'.
Other than that, really cool idea/product. Heck even though I'm a compE I will always choose JavaScript over C/C++.
JavaScript is fine. We could do a lot worse. We could also do a lot better. The point is that regardless of arguments about hypothetically better languages, we are converging on a ubiquitous scripting language, and its name is JavaScript. If you don't like it, use ClojureScript or CoffeeScript or whatnot. Resisting the inevitable is only going to cost you time and mindshare.
Tools aren't the problem. I think most system-level developers in industry are too busy doing 'real things' to solve 'real problems' to care much about bringing about the Web of Things. They would rather build a satellite communication link than get paid 50% more to program a toaster. Arduinos are much easier to use than JS.
Why all the bashing? If you don't want JS-controlled hardware, use a Raspberry Pi or an Arduino. It's an alternative, with a different approach, directed towards people with a different skillset.
The successful funding shows that there are more than enough people interested in this.
The campaign was massively successful and a lot of devs I know are excited by it and have ordered 1 or 2 to play with. We've even ordered a couple for the office just for our devs to play with when they need distraction from their work.
This is certainly interesting, but the title couldn't be more hyperbolic link-bait if it tried, especially since the content has nothing to do with "the end of web development as we know it."
Everyone is talking about the Internet of Things. I wonder when we're going to start talking about how firewalls harm the Internet by breaking end-to-end connectivity.
[1]A sample DSLR quadcopter for reference: http://farm7.staticflickr.com/6225/6868828438_e6d798c68d_b.j...