
Run `emacs -nw --daemon` to start a background emacs server.

Now, you can do `emacsclient -nw <file>` to boot up with almost no startup time. I have vi aliased to this, and `emacs -nw --daemon` opened by launchctl.

A gotcha: If you change your emacs config and need to restart, be sure to `pkill emacs` to get rid of the daemon.


Really interested to play with dlisp if/when it is open sourced!

I worked for Cycorp for a few years recently. AMA, I guess? I obviously won't give away any secrets (e.g. business partners, finer grained details of how the inference engine works), but I can talk about the company culture, some high level technical things and the interpretation of the project that different people at the company have that makes it seem more viable than you might guess from the outside.

There were some big positives. Everyone there is very smart and depending on your tastes, it can be pretty fun to be in meetings where you try to explain Davidsonian ontology to perplexed business people. I suspect a decent fraction of the technical staff are reading this comment thread. There are also some genuine technical advances (which I wish were more publicly shared) in inference engine architecture or generally stemming from treating symbolic reasoning as a practical engineering project and giving up on things like completeness in favor of being able to get an answer most of the time.

There were also some big negatives, mostly structural ones. Within Cycorp different people have very different pictures of what the ultimate goals of the project are, what true AI is, and how (and whether) Cyc is going to make strides along the path to true AI. The company has been around for a long time and these disagreements never really resolve - they just sort of hang around and affect how different segments of the company work. There's also a very flat organizational structure which makes for a very anarchic and shifting map of who is responsible or accountable for what. And there's a huge disconnect between what the higher ups understand the company and technology to be doing, the projects they actually work on, and the low-level day-to-day work done by programmers and ontologists there.

I was initially pretty skeptical of the continued feasibility of symbolic AI when I went in to interview, but Doug Lenat gave me a pitch that essentially assured me that the project had found a way around many of the concerns I had. In particular, they were doing deep reasoning from common sense principles using heuristics and not just doing the thing Prolog often devolved into where you end up basically writing a logical system to emulate a procedural algorithm to solve problems.

It turns out there's a kind of reality distortion field around the management there, despite their best intentions - partially maintained by the management's own steadfast belief in the idea that what Cyc does is what it ought to be doing, but partially maintained by a layer of people that actively isolate the management from understanding the dirty work that goes into actually making projects work or appear to. So while a certain amount of "common sense" knowledge factors into the reasoning processes, a great amount of Cyc's output at the project level really comes from hand-crafted algorithms implemented either in the inference engine or the ontology.

Also the codebase is the biggest mess I have ever seen by an order of magnitude. I spent some entire days just scrolling through different versions of entire systems that duplicate massive chunks of functionality, written 20 years apart, with no indication of which (if any) still worked or were the preferred way to do things.


Compared to Inferno? It's simpler. I think plan9 is a great example of getting a lot done with a very simple design.

Author here. I stopped working on this a few years ago, so now's as good a time as any for an overly-long, rambly, unedited retrospective.

When I started this, I was working on GNOME's window manager full-time, and wanted to learn intricately how X11's drawing model worked, so over the course of a few weeks in a hotel room, I recreated large parts of X11's drawing model in a web browser, fixing artifacts as I went along, until I felt I had a really good grasp of it. My initial test scene was a traditional desktop-like approach with a taskbar and xeyes, both of which are still in the codebase today, but untested [0].

I didn't know what I wanted to do with it, until I settled upon using snippets of it to build a long-form article. I learned a lot about the difficulty of writing, of pedagogy, of that blurred line between being technically correct 100% of the time vs. telling a few small lies here and there to keep the flow consistent and help people see the broader picture.

At my day job, I had mostly moved on to Wayland, where some of the bits I picked up here really helped me design better protocols and systems. My goal with the series was to try to be as neutral as possible, and my original design was to have a giant caution sign around "Author Opinion Zones" where I would talk about how certain design features haven't held up well in practice. But quickly, people on Hacker News or Phoronix or Reddit seemed to skim the article, pick up a piece here or there, and go straight to bashing Wayland, gleefully unaware that I was one of the people making it.

So, the end result was that I basically stopped working on Xplain after the second article. The COMPOSITE article was one I made after a colleague was having trouble understanding COMPOSITE, and I figured it was easier to write with my framework than explain in a chatroom, and maybe some others would appreciate it.

I have a deep passion for sharing my knowledge, and Xplain was the format I first really used to do it widely, so I tried to keep it exciting for me by changing it from "Xplain" to "Explanations", and opening up the topics from X11 to just about anything, but at some point I was just unhappy working on it.

The last thing I was working on was a continuation of my Basic 2D rasterization article, where I had a fun code editor you could use to make your own graphics [1] [2], but as fun as the technology was, I couldn't find a satisfying flow to the article, so I stopped it. Parts of it were later recycled for an article on the histories of 2D and 3D graphics. [3]

Around mid-2016, I had stopped working on Linux and open-source graphics entirely, and by 2017 I had exited the open-source industry completely and jumped ship to professional game development. I still have a deep love for graphics and a passion to explain things. I just released a new side project a few days ago for it, even.

Here's some stuff I'm working on these days. It's much cooler than X11/Wayland flamewars, in my opinion.

https://noclip.website/

https://blog.mecheye.net/2018/03/deconstructing-the-water-ef...

https://www.youtube.com/watch?v=8rCRsOLiO7k

--

[0] https://github.com/magcius/xplain/blob/gh-pages/src/clients/...

[1] https://u.teknik.io/XdSbC.webm

[2] Sort of up here at https://magcius.github.io/xplain/article/rast2.html

[3] https://blog.mecheye.net/2019/05/why-is-2d-graphics-is-harde...


Very interesting indeed!

Off-topic: May I suggest that you switch to using imgur instead of the image host you chose? The ads on imgur are much less intrusive.


CL allows us to write code the way we think about a problem--and then bring life to that way of framing the problem. We can come up with an ideal way of describing a solution and then make a language work that way. I say a language because the target of our code might be C or JavaScript (these days that is more often the case for me than targeting CL itself, cf. 4500 recent lines of Lisp that turn into 8000 lines of terse JavaScript).

Our ability to reason correctly about systems is directly limited by how complex they are. And I posit that complexity in code is best measured in number of symbols (because lines can be arbitrarily long, and longer symbol names can actually be helpful). So a system that reduces the number of symbols necessary to express a solution increases the size of a solution about which we can successfully reason. Just as computers are a "bicycle for the mind", homoiconicity+macros (of which I posit CL is still the best practical implementation) is a "bicycle for the programmer's mind".

Lisp provides an optimal solution for thinking of programs as nested sequences of arbitrary symbols. Sequences that can be transformed (and must be, for a computer to evaluate them, unless we hand-write machine code!). Common Lisp provides an optimal set of built-in operators for composing operations and transformations on its fundamental data types (atoms/cells/lists/trees). Other languages might provide better implementations of particular paradigms or whatever, but CL is the best language for implementing macros. Other Lisps make macros "safer" and miss the point.
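To make that less abstract, here's a toy sketch (the WITH-TIMING macro and every name in it are invented for illustration, not taken from any real codebase): a macro that wraps a body of forms and reports how long they took. The macro receives its body as plain list data and returns new list data, so the new "syntax" costs a single symbol at each use site.

```lisp
;; Hypothetical example: WITH-TIMING wraps a body of forms, returns their
;; values, and prints the elapsed wall-clock time.
(defmacro with-timing (label &body body)
  "Evaluate BODY, returning its values; print LABEL and the elapsed time."
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (multiple-value-prog1
           (progn ,@body)
         (format t "~&~a took ~,3f s~%"
                 ,label
                 (/ (- (get-internal-real-time) ,start)
                    internal-time-units-per-second))))))

;; Usage:
;; (with-timing "sorting"
;;   (sort (copy-seq *data*) #'<))
```

(CL already ships a TIME macro, of course; the point is only how cheaply you can grow the language toward the shape of your problem.)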

As Vladimir Sedach wrote earlier on Hacker News[1]:

"The entire point of programming is automation. The question that immediately comes to mind after you learn this fact is - why not program a computer to program itself? Macros are a simple mechanism for generating code, in other words, automating programming. Unless your system includes a better mechanism for automating programming (so far, I have not seen any such mechanisms), _not_ having macros means that you basically don't understand _why_ you are writing code.

This is why it is not surprising that most software sucks - a lot of programmers only have a very shallow understanding of why they are programming. Even many hackers just hack because it's fun. So is masturbation.

This is also the reason why functional programming languages ignore macros. The people behind them are not interested in programming automation. Wadler created ML to help automate proofs. The Haskell gang is primarily interested in advancing applied type theory.

Which brings me to my last point: as you probably know, the reputation of the functional programming people as intelligent is not baseless. You don't need macros if you know what you are doing (your domain), and your system is already targeted at your domain. Adding macros to ML will have no impact on its usefulness for building theorem provers. You can't make APL or Matlab better languages for working with arrays by adding macros. But as soon as you need to express new domain concepts in a language that does not natively support them, macros become essential to maintaining good, concise code. This IMO is the largest missing piece in most projects based around domain-driven design."

[1] https://news.ycombinator.com/item?id=645338
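To make the quote's last point concrete, here's a hedged toy sketch (the DEFINE-RULE machinery and all of its names are invented for illustration): once the domain concept "when CONDITION holds, perform ACTION" gets its own operator, each business rule reads almost like the spec instead of like registration boilerplate.

```lisp
;; Hypothetical example: a three-definition rule engine. The macro is the
;; only "new syntax"; everything else is ordinary functions and data.
(defparameter *rules* '()
  "A list of (name test-fn action-fn) triples.")

(defmacro define-rule (name (&rest vars) condition &body action)
  "Register a named rule: when CONDITION over VARS holds, run ACTION."
  `(push (list ',name
               (lambda (,@vars) ,condition)
               (lambda (,@vars) ,@action))
         *rules*))

(defun run-rules (&rest args)
  "Apply every registered rule whose condition holds to ARGS."
  (dolist (rule (reverse *rules*))
    (destructuring-bind (name test action) rule
      (when (apply test args)
        (format t "~&firing ~a~%" name)
        (apply action args)))))

;; A domain statement then costs one form (INVOICE-DAYS-LATE and
;; SEND-REMINDER are stand-ins for whatever the domain provides):
;; (define-rule overdue-invoice (invoice)
;;     (> (invoice-days-late invoice) 30)
;;   (send-reminder invoice))
```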

