Huh, I just downloaded the latest atom-shell and ran a hello-world that dumps the "node" version, and it says v1.0.0-pre. That sounds like io.js alright.
Why then would they still refer to "Node" in the release notes, all the way up to the most recent version??
"Fix initializing node integration in the webview when the page in it navigates."[1]
> Moore's Law should apply, and eventually processing power should catch up. I think we'll see the 2MB limit removed by 1997/1998.
That's a nice summary of the quality of web technologies.
2009: "Hey guys, we can now draw on a widget; we called it Canvas!"
2011: "Wow, look at this cool OpenGL wrapper that locks up your system in ways VoodooFX didn't manage to do in 1998! But you can use it on your web site, which is what people want, right, guys?"
Yeah, I find this stuff depressing. I see people jumping up and down over a demo of the Unreal-1 engine running at 0.04 fps in JavaScript and I think to myself, if my old Compaq POS from the late nineties could run that engine without trouble, why can't my brand new MacBook do better, or at least the same?
It feels like we've redefined "progress" to mean "the same thing as 20 years ago, but worse"
More like the same thing as 20 years ago, but the dominant language is a LIS.. I mean a Schem... I mean it's a bit like Self.. look, it has garbage collection, a JIT, supports objects and goes somewhat fast. Oh, and it's used to drive a GUI made up of stringly-typed ... things that kind of looks like SGML and has no chance in hell of achieving good draw performance. But you can calculate fibonacci in CSS and get even worse performance than doing it at compile-time with C++ templates!
WebGL puts heroic amounts of effort (and many performance compromises) into working around 3D driver bugs and trying to bring about a memory-safe 3D programming environment that still leverages 3D acceleration. Blame the system lockups on buggy drivers and systemic problems in GPU software stacks, not "web technologies".
Regular expressions are not the way to do syntax highlighting, yet many popular editors do it this way. Create a super huge string in Vim, TextMate or Sublime Text and watch it blow up. Then try it in Xcode or IntelliJ and see that there is no problem. The reason Xcode and IntelliJ highlight so fast is that they don't use regular expressions; they use a lexer. They examine the text one character at a time, with some amount of forward- and backward-looking state, and build the syntax highlighting and smart completion features that way. As long as your editor uses regular expressions for syntax highlighting, this will be a problem for you.
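To make that concrete, here's a toy sketch of the lexer approach (the function names and token set are my own invention, not lifted from any of those editors): one pass, one character at a time, a little state, and no regular expressions anywhere:

    // Toy single-pass highlighter: one character at a time, no regexes.
    // Names and token set are invented for illustration.
    function isWordChar(c) {
      return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
             (c >= '0' && c <= '9') || c === '_';
    }

    function highlight(text) {
      var tokens = [], i = 0;
      while (i < text.length) {
        var start = i, c = text[i];
        if (c === '"') {                               // string literal
          i++;
          while (i < text.length && text[i] !== '"') i++;
          if (i < text.length) i++;                    // closing quote
          tokens.push({ type: 'string', text: text.slice(start, i) });
        } else if (c === '/' && text[i + 1] === '/') { // line comment
          while (i < text.length && text[i] !== '\n') i++;
          tokens.push({ type: 'comment', text: text.slice(start, i) });
        } else if (isWordChar(c)) {                    // identifier/keyword
          while (i < text.length && isWordChar(text[i])) i++;
          tokens.push({ type: 'word', text: text.slice(start, i) });
        } else {                                       // everything else
          i++;
          tokens.push({ type: 'plain', text: c });
        }
      }
      return tokens;
    }

Every branch consumes at least one character, so the pass is linear in the amount of text no matter how the file is shaped - which is exactly the guarantee the pile-of-regexes approach can't make on pathological input.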
In addition to log files (covered in another reply)...
- Database dumps (if you're dumping a database to a text file, how often will it be under 2MB?)
- Data, in general. Tab/CSV text files are still the lingua franca of non-hierarchical data and that will probably never change.
- Code you don't control or haven't refactored yet. OK, we can all agree that a 2MB source code file is probably "doing it wrong." But maybe you don't control it. And even if you do control it... well, you need to edit it to refactor it, right?
That said, I don't think the 2MB limit is a total death knell for an editor. I keep TextWrangler (OS X) and UltraEdit (Win) around precisely because they're good with larger files. If my primary code editor is good at large text files, that's a bonus, but it's not a total necessity.
Crazy, but not out of the question. I use Sublime to edit conflicts in .xcodeproj files. These brutally detailed XML files - necessary for iOS/OS X development - can be huge. At my previous job one was 7MB.
Definitely situational but not out of the question. Maybe I want to use my favorite text editor to examine a log file. 2MB is nothing when it comes to production logs.
So the thing about Atom (and any "disruptive" software) is that it doesn't have to be amazing at everything. It only has to be 10x better for some niche of people who will be this product's early adopters and evangelists.
That will gain it enough traction to prove itself in the market, and over the next few years it will fix all the problems such as the 2MB limit, and whatever else is there. And then, laggards (well, you're probably more of an "early majority" type of person in Crossing the Chasm lingo) will see enough value to join.
And well... it isn't. The biggest thing I can tell it has going for it is that it's "hip". Meanwhile Sublime Text spins around it at 40 MPH doing donuts.
I tried. I really tried to use Atom. But the text editor is just so slow and unresponsive, and that is a flat-out dealbreaker for a text editor. Its extensibility isn't any better than Sublime's, and I feel like Python's a way better language than JavaScript, so I'm happier tooling against Sublime.
I too wish Sublime Text was open source, coming from an open source development background, but I understand exactly how hard it is to develop open source software and feed and clothe yourself - people only buy "support contracts" for the first year and rarely renew, and only for software they feel is technically complex enough that they might actually need help or customizations.
For the record, I also backed GNOME Builder (which I also already find superior to Atom) in the hopes of some day it becoming the text editor and development environment I lust after. But my greatest fear is that more time will be spent on the shiny features and not enough time on the things I really care about like a decent GUI debugger so I don't have to live with poking at GDB, ahead-of-build compiler warnings/errors, and code refactoring tools.
I was really interested in Atom but I found it to be a little sluggish (particularly when I switched it to the background) and sometimes unstable on my MacBook.
The trouble is that an editor shouldn't require a 2010+ MacBook Air in order to be blazing fast. As I recall, we were running syntax-highlighting editors on ~166 MHz Pentium machines in the mid 90s. Hell, maybe I even ran one on my old 80486 laptop.
I have a 2014 MacBook Pro and it's sluggish there. I have half a dozen Linux machines around me daily and it's entirely unusable on them. You can visibly tell when the GC is running, it's that terrible.
You just described a load of things that it's amazing at:
- it's hip (don't discount that, it's important. It excites people, and gives contributors a reason to contribute)
- it's in JS - maybe not right for you, but there are lots of people who have the opposite opinion.
- it's open source: for some, a closed-source editor is a dealbreaker. Imagine the people who say "I hate Emacs and Vim; I'd love to use Sublime but it's closed source" - they'll run to Atom.
- it's got corporate backing (GNOME Builder may die or wither; many believe Atom won't, because of GitHub's backing)
The biggest issue with Sublime is that it's been unchanged for over a year. I think that's the big problem that Atom is solving - Sublime + active development.
Being a single developer shouldn't be that big a factor in slow development and complete media silence for six months to a year on end; any half-decent developer can churn out an update every week, whether it's a new feature or just a bugfix. That's how you keep people excited about your program, and how you prevent it from seeming stale and dead.
I don't use it; I just know that this is the reason many of my coworkers are moving away from it. I've heard concerns about it breaking on a future OS upgrade, or just failing to keep up with others in features.
If you're looking to lose any and all good will you've established, charging all previous customers $70 to get updates is a great way to do it. ST2 dev is dead, which is what matters to the people who already paid for the product.
The 2MB/slowness isn't something that can just be "coded away". It's a direct result of the technology stack chosen. It is not a solvable problem. Therefore Atom cannot succeed beyond niche usage.
Why is there a 2MB limit anyway? Is it because the document is rendered to HTML? In this case, why don't they render only the part currently visible on screen?
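To spell out what I mean, here's a rough sketch (all names invented; obviously not Atom's actual code): keep the buffer as a plain array of lines and only build DOM for the slice that's on screen:

    // Rough sketch of rendering only the visible slice of a buffer.
    // All names are invented; this is not Atom's actual code.
    function renderVisible(lines, viewport, lineHeight) {
      var first = Math.floor(viewport.scrollTop / lineHeight);
      var count = Math.ceil(viewport.clientHeight / lineHeight) + 1;
      var last = Math.min(first + count, lines.length);
      var html = '';
      for (var i = first; i < last; i++) {
        html += '<div class="line">' + escapeHtml(lines[i]) + '</div>';
      }
      // A real implementation would also size a spacer element to the
      // full document height so the scrollbar stays proportional.
      viewport.innerHTML = html;
    }

    function escapeHtml(s) {
      return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
    }

With something like that the DOM cost is bounded by the screen height, not the file size, so I don't see where a hard 2MB limit would come from.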
They were probably tired of people complaining about problems with this corner case of opening humongous files in an editor meant for editing source code. Atom probably should be optimized for kilobyte-sized files, not megabyte.
Atom is essentially a web browser running a text editing web app. Given how much web pages tend to "expand" when loaded into memory in a browser, it's not so surprising.
I'm guessing that Zeus isn't based on a web browser but was designed from the beginning to be an editor.
Zeus, like a lot of text editors, is an in-memory editor, which basically means it loads the whole file into memory.
So it too is limited by the total memory available to the application.
For a 32-bit Windows application the available memory comes in at around 2 gigs, and for a 64-bit Windows application that explodes to 8 terabytes.
So even if Atom was 32-bit it should have 2 gigs of memory to play with.
I'm not sure why Atom does limit files to 2 megs, but if it is because of memory expansion then there must be some serious expanding happening there.
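If anyone wants to test the expansion theory, plain Node gives a rough number (my own throwaway script, nothing Atom-specific):

    // Rough measurement of how much a text file "expands" once it's
    // split into per-line objects, the way a DOM-backed editor might
    // store it. Throwaway script, nothing Atom-specific.
    // Usage: node expand.js some-file.txt
    var fs = require('fs');

    var before = process.memoryUsage().heapUsed;
    var text = fs.readFileSync(process.argv[2], 'utf8');
    var lines = text.split('\n').map(function (line, i) {
      return { number: i, text: line, tokens: [], marks: [] };
    });
    var after = process.memoryUsage().heapUsed;

    console.log('file:', text.length, 'chars in', lines.length, 'lines');
    console.log('heap growth:', ((after - before) / 1048576).toFixed(1), 'MB');

Between string overhead and the per-line objects, the growth can easily be a multiple of the file size before a single DOM node exists, and the DOM multiplies it again.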
Why do all threads about Atom seem to be full of people opening huge files with text editors? I've never had performance issues with Atom on a 2013 MBA.
The C++ project I'm working on at the moment has many dozens of 1MB header files. It's very frequent for source files to go past the two-megabyte mark. Splitting these files isn't an option either; there are over three thousand of them and compilation times are slow enough already (many hours without distributed build systems).
As for data files, we have XMLs (yeah, I know..) over 50MB in size.
Atom is a long way from supporting such projects. This probably doesn't apply to most web projects, but 2MB is far from a huge file :)
No, just data definitions used by the application. Most of it uses generated parsers and serializers but we still need to manually edit the files now and then.
Right. I think this is the point that people are really making.
Right now, Atom is just too slow when compared with other editors such as Sublime. Given enough time it shouldn't be an issue (maybe), but right now I think it's a huge pain point for a lot of devs.
I just switched back to Emacs from testing Atom for a few weeks, and my biggest issue with it was the speed of opening files from a closed client. Emacs deals with it by daemonizing emacs-server; Atom deals with it by trying to be performant, but failing. In this day and age, I'd rather sink some cheap memory into keeping the daemon running.
If Atom has a daemon feature that I never found, or one upcoming, I'd consider it again. Until then my workflow involves too many client closes for me not to notice the significant delay Atom has when opening a new file.
I feel like daemonizing Emacs should be the default. I can happily add random stuff to my init.el which causes it to take a little while to cold start, but I never see a cold start so from my perspective Emacs is instantaneous for editing a random file. I keep hundreds of files open at all times and never have an issue with performance. I just go through and prune my open buffers once a month or so, for organizational reasons more than resource usage.
It depends on your usage. Sure, source files are (hopefully) not going near that size, but many people regularly open multiple-MB log files or data files.
Also, for me at least, it's the principle of the thing -- we are talking about a measly few MB of text; if you can't even open that, IMO something has gone horribly wrong.
If you are looking for unusual conditions, scanning through visually is sometimes easier than it would be to try to grep for the particulars of an entry, especially if you don't know the particulars of the unusual condition yet.
Some editors with "minimap" functionality (Sublime Text and Emacs that I'm aware of, maybe others) can give you a fairly good visual overview of large swaths of a file at once. Sometimes strange entries are obvious just because of the shape of the line and in those cases the minimap makes it easy to see when something unusual is happening.
Never code, but data, all the time. I write scrapers and such that produce MBs worth of output text. For quick hacks, large text files are great. I even drop large binaries into my text editor just to search for a string sometimes (easier than opening up a CLI to run strings | grep). At my job I frequently work with XML files measuring 100+MB; I didn't make that design choice, but I gotta work with it now.
Why do all threads about Go seem to be full of people who cannot live without generics?
Snark aside, feature X is rarely absolutely needed (you can replace your generics by some boilerplate; you can use grep instead of your text editor to process huge files), but in some cases it can make your work significantly easier and that's why people want it.
In my own case, opening video files in Sublime and having the hex and ASCII side-by-side made my job significantly easier when I was writing a parser. I could easily select portions of the data (e.g. embedded metadata), annotate it with comments from the spec, then paste it in a unit test.
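If your editor of choice can't do that, the same view is a few lines of Node (my own quick sketch; nothing to do with how Sublime implements it):

    // Minimal hex + ASCII side-by-side dump. My own quick sketch,
    // unrelated to Sublime's implementation.
    // Usage: node hexdump.js file.bin
    var fs = require('fs');
    var buf = fs.readFileSync(process.argv[2]);

    for (var off = 0; off < buf.length; off += 16) {
      var hex = '', ascii = '';
      for (var i = off; i < Math.min(off + 16, buf.length); i++) {
        hex += ('0' + buf[i].toString(16)).slice(-2) + ' ';
        ascii += (buf[i] >= 0x20 && buf[i] < 0x7f)
          ? String.fromCharCode(buf[i]) : '.';
      }
      while (hex.length < 48) hex += ' ';
      console.log(('0000000' + off.toString(16)).slice(-8) + '  ' + hex + ' ' + ascii);
    }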
I have Atom installed, but see no reason to migrate since Sublime works well enough. My only concern is that the Sublime package ecosystem will likely start rotting now that Sublime itself is essentially abandonware, so at some point I might need e.g. autocompletion for TypeScript and the best package will be on Atom, not Sublime.
I see this limitation mentioned often when Atom is discussed.
On the constructive side of things, anybody have recommendations for editors that do handle large text files well?
Obviously vi and emacs do. (Correct?)
OS X - TextWrangler and its big brother BBEdit are the best I've found - but this is not something I've extensively researched.
Windows - UltraEdit boasts that it can edit files of arbitrarily large sizes and in my experience this is true. UltraEdit costs money but I have an old version I keep around for when I need to work with bigger files.
I haven't looked at it for long, but as I remember, it would help if there were a visualizer for those files. Often it's a PNG or a JAR; it would be less frustrating if it could open anyway, just in the right editor?
ES6 support is not a binary thing. io.js has a newer version of V8 that supports more ES6 features than Node.js's V8 version, but it's not even close to all of ES6. 6to5 shims the rest of the ES6 feature set. See here for more info: http://kangax.github.io/compat-table/es6/
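Concretely: an arrow function just runs on a new enough V8, while on older runtimes 6to5 rewrites it into plain ES5 - roughly like this (illustrative, not 6to5's exact output):

    // ES6 source - runs natively on a new enough V8:
    var add = (a, b) => a + b;

    // Roughly what a compiler like 6to5 emits for ES5-only runtimes
    // (illustrative, not the exact output):
    var add = function (a, b) {
      return a + b;
    };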
This doesn't explain it: why do you need to compile ES6 at run-time? Plugin authors should compile before distributing.
This would annoy me as a user, I don't care that you really want the await/async feature. ES5 is not that bad. It's not like the difference between Lua and Vimscript. Just use ES5 or compile your ES6 prior to distributing.
You compile ES6 at run-time in developer mode - ES6 compiles are cached and installing a plugin precompiles it on your machine automatically. However, if you're developing a plugin, ES6 will be compiled on-the-fly.
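In other words, something like this (my own sketch of the idea; to5.transform is 6to5's documented entry point, but the cache layout and paths here are invented):

    // Sketch of compile-once-and-cache for plugin source files.
    // to5.transform matches 6to5's documented API; the cache layout
    // and paths are invented for illustration.
    var fs = require('fs');
    var path = require('path');
    var crypto = require('crypto');
    var to5 = require('6to5');

    var CACHE_DIR = '/tmp/es6-cache'; // hypothetical location
    if (!fs.existsSync(CACHE_DIR)) fs.mkdirSync(CACHE_DIR);

    function compileWithCache(file) {
      var src = fs.readFileSync(file, 'utf8');
      var key = crypto.createHash('sha1').update(src).digest('hex');
      var cached = path.join(CACHE_DIR, key + '.js');
      if (fs.existsSync(cached)) {
        return fs.readFileSync(cached, 'utf8');  // hit: no compile
      }
      var out = to5.transform(src).code;         // miss: compile once
      fs.writeFileSync(cached, out);
      return out;
    }

With that scheme the on-the-fly cost is only paid the first time a given version of a source file is seen, which matches the behavior described above.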
However, the fact that io.js ships with V8 4.1 is quite a large step forward for dependents, not just for language features, but also under-the-hood stability and performance.
Of course, there is also everything that node 0.11.x has, plus a release schedule to match Atom's own release cadence. :)
I thought the whole point of io.js was that they would work on the project faster than Node, but eventually merge back into Node once Joyent became more transparent. The Node release cycle was pretty slow and they had some dispute with the contributors (the top 5 original Node contributors are now working on io.js).
They publicly stated they will merge back upstream eventually, all being well. They're doing it for the good of Node.js, not to create a rift and divide the community's efforts.
IIRC the Node Forward initiative was more explicitly focussed on merging back than io.js is. The subtext of io.js seems to be "we'll move forward with or without Node moving forward with us".
I'm curious as to how they will handle 6to5 versioning. 6to5 seems to be very much in a "move fast and break things" stage, and I'm not sure that's going to change any time soon.