deuill's comments (Hacker News)

I agree that the bickering is tedious and old at this point (and perhaps has been since the start), but there is a sense that the technical arguments miss the point of what the contention is about. My argument is based mostly on conjecture and observation, and I mainly use XMPP because Conversations is the only thing that runs on my BlackBerry Q10 (another thread entirely); I don't have much of a horse in the race in any other way.

Though XMPP and Matrix are diametrically opposed in terms of their protocol semantics, and may occupy different use-cases at their edges (say, message passing vs. eventually-consistent data stores), their core use-cases are much the same: one-to-one and many-to-many messaging for both public and private groups, in competition with proprietary applications occupying the same space (e.g. Slack, Signal, Telegram, Discord). Granted, there's a wide spectrum of difference even among those proprietary applications (the whole banquets-and-barbecues notion), but I hope it's not controversial to say that one can implement an application in the same UX vein as the proprietary alternatives using either XMPP or Matrix.

That, I feel, is the core of the contention -- that somehow Matrix has wrested focus/effort/velocity away from a still-viable project in terms of end-user goals, thus somehow dooming it (or open-source/community-owned messaging in general) in the process. At face value, this might very well be nonsense; if XMPP cannot compete, or is not viable, nothing anyone outside the project does will change that fact. Conversely, if Matrix didn't exist in its current form, it might not exist at all -- it's not a foregone conclusion that those efforts would've been poured into XMPP or anything else instead. In any case, competition is generally thought to be a good thing, insofar as it drives competitors to improve.

Emotionally, though, I think I understand the contention. I've seen it happen not with protocols but with things like Linux distributions, where people would lament the proliferation of disjoint efforts at a time when no clear, viable contender to the proprietary solutions existed -- and some might argue none exists still. Nevertheless, there's not much lamenting nowadays, when multiple viable distributions and desktop environments exist.

In some sense, community-owned messaging is still in a precarious state, and splitting up efforts, as it were -- since efforts are not just split in protocol implementation, but also in the ecosystem of applications etc. -- makes it feel even more precarious. Whether there's any rational basis to this, and whether either protocol has a technical advantage over the other, I don't know.


This isn't necessarily an endorsement of one protocol/ecosystem over the other, nor do I have direct experience with integrating Matrix or XMPP (though I run the latter on my home-server for family), but XMPP has seen a few large deployments, including in healthcare (in the UK)[0][1] and in Germany[2].

The consumer-facing client ecosystem for XMPP has indeed seen less rapid development than Matrix's (the latter probably benefits from a more cohesive approach), but the server ecosystem for XMPP is very mature, and servers such as Ejabberd are known to scale to hundreds of thousands of connections on a single, modest host[3]. Obviously, that's only one part of the puzzle, which is why Matrix was chosen here.

Still, it'd be interesting to see how the two evolve and compare down the line.

[0]: https://www.erlang-solutions.com/case-studies/pando-health-c... [1]: https://medium.com/miquido/successful-migration-to-a-custom-... [2]: https://twitter.com/iNPUTmice/status/1203611711967813633 [3]: https://www.process-one.net/blog/ejabberd-nintendo-switch-np...


I was part of a project that helped bring a usable Linux distribution to this line of palmtops -- JLime (you can find references on the net, but the main site has since gone dark). This includes the 620LX, 660LX, 680, and 690 (based on SH-3 processors), as well as the 710 and 720 (based on ARM processors).

I personally had the 690, which was surprisingly usable (and probably still is), even with its 133MHz CPU and 32MB of RAM. Finding and porting software was especially hard, even in ~2006, given the hardware constraints and unique screen aspect ratio (640x240 resolution).

I still have the page for the distribution up here[0], and a software repository set up here[1], in case the author (if they see this) or anyone else that has any of these machines stashed away is feeling adventurous...

[0]: https://deuill.org/code/jlime-vargtass/ [1]: https://repository.deuill.org/hp6xx/vargtass/


(Author here)

I did spend quite a bit of time on your site, linked sites, and webarchive. The files on your site did not work out of the box for me. I did eventually get a combination of kernel and configuration that kind of worked, but I think the CompactFlash card was too big. It stalled out on hard drive interrupts.


Ah, that's too bad! AFAIR zrafa's site[0] was the place to go for bootloader/kernel files specific to the 620LX/660LX, but the userland should work on any 6xx series.

[0]: https://web.archive.org/web/20150626030407/http://fz.hobby-s...


Thanks, I'll have a dig in the basement then I guess! Is the bootloader code open source?


Indeed it is/was, though I cannot seem to find an original copy on the internet, so I've uploaded a copy to the repository[0]. I'm not sure what the requisite toolchain is, though.

[0]: https://repository.deuill.org/hp6xx/misc/jshlo-1.1.0-CE-2.11...


Toolchain probably is Visual Studio 6 with the Windows CE SDK.


Hey, I have used JLime on my Jornada 728 back in the day, thank you for your work!


I always felt that rebasing as part of a workflow was partly a lack of discipline in creating commits, and partly a desire to hide work that is incomplete or "imperfect".

As with most things, there's no silver bullet that applies to every single case; different workflows work best for different teams and projects. Gitflow is both reasonable and fairly simple to understand and implement, but so is committing straight to master and tagging "when it's done" for a single-person hobby project.

There's also no free lunch: maintaining a clean history (whatever that means) takes more effort than just applying a set of rules (such as always rebasing and squashing commits) after the fact. Linus Torvalds has said it better than I ever could[0].

As far as the article itself goes, the author maintains that:

> A few common commits on my branches are: “WIP,” “Run Rubocop,” and “Fix typo.”

And then goes on to say that the solution to this is to rebase, rather than creating better commits from the get-go, or just deferring any commits until the end of the feature work and then staging interactively in order to create multiple thematic commits, or just committing the entire workspace.
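That "defer commits, then stage thematically" alternative might look like this -- a sketch in a hypothetical throwaway repository, with illustrative file names:

```shell
set -e
# Hypothetical throwaway repository for demonstration purposes.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Work accumulates in the working tree, with no intermediate "WIP" commits...
echo 'feature code' > feature.txt
echo 'feature docs' > feature.md

# ...and is carved into thematic commits only at the end. For mixed
# changes within one file, `git add -p` allows hunk-level selection;
# whole files suffice for this sketch.
git add feature.txt
git commit -q -m 'Add feature'
git add feature.md
git commit -q -m 'Document feature'

git log --oneline   # two clean commits, no history rewriting involved
```
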

[0]: http://www.mail-archive.com/dri-devel@lists.sourceforge.net/...


> that rebasing as part of a workflow was part lack of discipline when creating commits

Because we have rebase, there is less need for discipline when creating commits: you can change them later.

> and part desire to hide work that is incomplete or "imperfect"

When I do any kind of work, I usually submit the finished work to whomever needs it. It's not hiding, it's just giving people what they need.

When I use `rebase -i` in my workflow to alter my commit history, it's almost always for the purpose of helping code review. I want my commits to lead a reviewer logically down a path that helps them understand the role each change plays in a pull request.
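As a concrete sketch of that cleanup (a throwaway repository; `GIT_SEQUENCE_EDITOR` substitutes a sed script for the interactive editor so the example runs unattended):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'Initial commit'

# A typical messy branch: one real change plus two fix-up commits.
echo 'v1' >  f.txt; git add f.txt; git commit -q -m 'Add feature'
echo 'v2' >> f.txt; git add f.txt; git commit -q -m 'WIP'
echo 'v3' >> f.txt; git add f.txt; git commit -q -m 'Fix typo'

# Fold the fix-ups into 'Add feature'. The sed script rewrites the rebase
# todo list, turning every 'pick' after the first into 'fixup', which
# melds each commit into its predecessor and discards its message.
GIT_SEQUENCE_EDITOR="sed -i '2,\$s/^pick/fixup/'" git rebase -q -i HEAD~3

git log --oneline   # only 'Add feature' and 'Initial commit' remain
```

The file contents of all three commits survive; only the noisy history is collapsed before review.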


Awk is actually very intuitive once you learn the basics (should take about an hour). Awk is essentially a line-oriented pattern matching language -- that is, a collection of pattern -> action pairs applied to every line of input. The syntax is very similar to C, but feels much more lightweight.

To decompose the example above:

  NR > 1 {
     print $1, $4
  }
There are two parts to this, the pattern (the part outside the brackets), and the action (the part inside the brackets). Every line parsed is split into fields and stored in $1..$NR, where NR is the number of fields. The entire line is also available in the $0 variable. The default separator is the space character, though that can be changed.

So, knowing the above semantics, the meaning of the above example should be clear: If the number of fields for this line is larger than 1, print the first and fourth field of the line. It's a very powerful paradigm, and you can do crazy stuff with Awk. Examples are an x86 assembler [0] and a SASS-style CSS preprocessor [1] (plugging myself there a bit).

Unix coreutils are very powerful once you're familiar with them.

[0]: http://doc.cat-v.org/henry_spencer/amazing_awk_assembler/ [1]: https://github.com/deuill/fawkss


Not intuitive enough apparently ;)

$NR stores the number of records (or the current record number) not the number of fields (that's $NF). In the example the "NR > 1" is meant to exclude the header line from the output.


You are entirely correct -- I got tunnel vision/brain fart and mixed NF and NR up, newbie mistake!

NR is initialized to 1 on the first line, and is incremented for every subsequent line read. So, in this case, the above pattern will match any line after the first.
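The corrected semantics are quick to check at a shell prompt, with a small made-up input:

```shell
# Made-up input: a header row followed by two data rows.
# NR > 1 skips the header; the action prints fields one and four.
printf 'name id city score\nalice 1 berlin 42\nbob 2 paris 17\n' |
awk 'NR > 1 { print $1, $4 }'
# prints:
# alice 42
# bob 17
```
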


FWIW, I've had better mileage out of `dtrt-indent` by increasing the `dtrt-indent-min-quality` option to `90.0`, which makes the heuristics a tad stricter in choosing the correct indentation. However, there are still cases where the wrong choice is made, and fixing it is extremely annoying.
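For reference, the customization is a one-liner (a sketch, assuming `dtrt-indent` is already installed and enabled):

```elisp
;; Require higher confidence from dtrt-indent's heuristics before it
;; adopts a guessed indentation style for the buffer.
(setq dtrt-indent-min-quality 90.0)
```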

I've been using Spacemacs for about 3 months now, having moved from Sublime Text, and while the initial experience was mixed, there are things I just couldn't do without (project management felt weird at first but now feels seamless; the fact that you can edit over SSH transparently; etc.).

There are still many pain points, such as the sub-standard multiple-cursor implementation (with `evil-mc`; `iedit` isn't much better) or the fragmented and incomplete tagging implementation (GNU Global doesn't support everything and generating tags isn't transparent, though `dumb-jump` support is being implemented, which may help).

However, I feel that both Emacs and Spacemacs have very strong foundations and extremely vibrant communities, and I'm confident a lot of the pain points (at least the ones that aren't architecture-related) will be fixed given enough time. Spacemacs is well on its way toward a new major release, which should fix a few things.


Your `dtrt-indent-min-quality` tip has essentially solved my indentation detection issues, thanks!

I've never really been a fan of multiple cursors; Vim users tend to use macros to solve this. It's a highly valuable thing to learn and practise.


Have you tried using http://forecast.io/? You can get historical forecasts via the JSON API[0], which is, from what I can tell, free for low-volume use.

[0]: https://developer.forecast.io/docs/v2#time_call


I think he's trying to get historical data of actual weather, not forecasts.


> Please note that we only store the best data we have for a given location and time: in the past, this will usually be observations from weather stations (though we may fall back to forecasted data if we don't have any observations)


That looks really nice.


I used to help port Linux to various devices like the HP Jornada [0] or the Ben Nanonote [1], both of which have (up to) 32MB of RAM available.

Finding a suitable web browser with reasonable support for "modern" web standards (basically CSS2) and a lightweight footprint was terribly hard... the better ones, as I remember them, were:

1) Dillo[2], which was one of the most lightweight graphical browsers under active development, albeit a bit light on features as well... FLTK is a great toolkit, and runs well on resource-starved devices like the ones mentioned above.

2) Netsurf[3], a relative newcomer at the time, had one of the best rendering engines out there considering its lightweight footprint. It too is under active development, and is moving towards HTML5 and CSS3 compatibility. It has GTK2 and framebuffer backends, the latter of which is better in terms of memory footprint.

3) Konqueror-Embedded[4], which hasn't seen development since the mid-2000s, was actually the only browser with reasonable support for web standards and support for Javascript. Built against Qt 2 (which is a massive chore in itself), it runs fast and has a low memory footprint.

4) Links-hacked[5], which again hasn't seen any development in more than 10 years, worked pretty well in graphical mode. It's a mix of elinks code (before Javascript support was gutted) and links2 code (for the graphical parts).

Some failed experiments were:

Firefox version 1.0.8, the last release with GTK 1.2 support, was, unfortunately, too slow to run in any reasonable way. Startup time was around a minute and a half on the HP Jornada, and navigating to any web-page took more than 3 minutes.

Hv3[6], a browser and engine built in Tcl/Tk, looked promising but was a nightmare to compile and never worked correctly.

Finding modern software that can run well in such a memory-constrained environment was hard enough, let alone something as complex as a web rendering engine.

[0]: https://deuill.org/page/1/jlime-vargtass [1]: https://deuill.org/page/3/jlime-muffinman [2]: http://www.dillo.org/ [3]: http://www.netsurf-browser.org/ [4]: https://konqueror.org/embedded/ [5]: http://xray.sai.msu.ru/~karpov/links-hacked/ [6]: http://tkhtml.tcl.tk/hv3.html


The New Yorker also published an excellent piece on Perelman and the solving of the Poincaré conjecture back in 2006: http://www.newyorker.com/magazine/2006/08/28/manifold-destin...

It's interesting to see the political implications behind breakthroughs like this.

