
Learning that some folks can produce so much value with crappy code.

I've seen entire teams burn so much money by overcomplicating projects. Bikeshedding about how to implement DDD, Hexagonal Architecture, design patterns, complex queues that might one day be required if the company scaled 1000x, unnecessary eventual consistency that required so much machinery and so many man-hours to keep data integrity under control. Some of these projects were so far past their deadlines that they had to be cancelled.

And then I've seen one-man projects, copy-pasting spaghetti code around like there's no tomorrow, that had a working system within 1/10th of the budget.

Now I admire those who can just produce value without worrying too much about what's under the hood. Very important mindset for most startups. And a very humbling realization.


Has anyone seen a convincing argument for why you would want a dedicated vector database in place of a normal database with a good, fast vector index implementation?

The existing DB + vector index option seems so obvious to me that I'm worried I'm missing something.
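
For concreteness, that option can be as plain as Postgres with the pgvector extension, queried through node-postgres. A minimal sketch (the docs table, column sizes, and connection details are all made up for illustration):

  import { Client } from "pg";

  // Assumes DATABASE_URL points at a Postgres instance with pgvector available.
  async function similarDocs(queryEmbedding: number[]) {
    const client = new Client({ connectionString: process.env.DATABASE_URL });
    await client.connect();

    // One-time setup: embeddings live right next to the relational data,
    // with an approximate-nearest-neighbor index for fast similarity search.
    await client.query(`CREATE EXTENSION IF NOT EXISTS vector`);
    await client.query(`CREATE TABLE IF NOT EXISTS docs (
      id bigserial PRIMARY KEY,
      body text,
      embedding vector(1536)
    )`);
    await client.query(`CREATE INDEX IF NOT EXISTS docs_embedding_idx
      ON docs USING ivfflat (embedding vector_cosine_ops)`);

    // The payoff: similarity search is just SQL, joinable with everything
    // else already in the database.
    const { rows } = await client.query(
      `SELECT id, body FROM docs ORDER BY embedding <=> $1::vector LIMIT 10`,
      [`[${queryEmbedding.join(",")}]`],
    );
    await client.end();
    return rows;
  }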


Concerning stephen's item (2). The stricter set of rules was laid out by Richard C. Waters in Optimization of Series Expressions: Part I: User's Manual for the Series Macro Package, page 46 (document page 48). See reference Waters(1989a).

The paper's language is a bit different from contemporary (2023) usage.

`map()` is called `map-fn`.

`reduce()` a.k.a. `fold` seems to be `collect-fn`, although `collecting-fn` also seems interesting.

sorting, uniqueness and permutation seem to be covered by `producing`.

Just think of McIlroy's famous pipeline in response to Donald Knuth's trie implementation[mcilroy-source]:

  tr -cs A-Za-z '\n' |   # squeeze everything but letters into newlines: one word per line
  tr A-Z a-z |           # downcase
  sort |                 # bring identical words together
  uniq -c |              # count occurrences of each word
  sort -rn |             # sort numerically, most frequent first
  sed ${1}q              # quit after the top $1 lines
As far as pipeline or stream processing diagrams are concerned, the diagram on page 13 (document page 15) of Waters(1989a) may also be worth a closer look.

What the SERIES compiler does is pipeline the loops. Think of a UNIX shell pipeline. Think of streaming results. Waters calls this pre-order processing. This also seems to be where Rich Hickey got the term "transducer" from. In short it means dropping unnecessary intermediate list or array allocations.
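
As a toy illustration of what those dropped allocations mean (TypeScript rather than Lisp, and only the idea, not SERIES itself):

  const input = [1, 2, 3, 4, 5];

  // Chained style: every step allocates a fresh intermediate array.
  const chained = input.map(x => x * 2).filter(x => x > 4).map(x => x + 1);

  // Pipelined/fused style: one pass, only the final result is allocated.
  // This is roughly the loop the SERIES compiler derives for you automatically.
  const fused: number[] = [];
  for (const x of input) {
    const doubled = x * 2;
    if (doubled > 4) fused.push(doubled + 1);
  }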

Shameless self-plug: Eliminated unnecessary allocations in my JavaScript code by adding support for SERIES to the PARENSCRIPT Common Lisp to JavaScript compiler. The trick was (1) to define (series-expand ...) on series expressions so that they can be passed into (parenscript:ps ...) and (2) the parenscript compiler was missing (tagbody ... (go ...) ...) support. The latter is surprisingly tricky to implement. See dapperdrake(2023). Apologies for the less than perfect blog post. Got busy actually using this tool. Suddenly stream processing is easy, and maintainable.

When adding a Hylang-style threading macro (-> ...) you get UNIX style pipelines without unnecessary allocations. It looks similar to this:

  (-> (it :let*-symbol series::let)    ; options for the -> macro: thread via IT, bind with series::let
    (scan-file in-path-name #'read)    ; stream forms from the file, one at a time
    (map-fn T #'some-function it)      ; apply some-function to each element of the series
    (collect 'list it))                ; collect the pipelined results into a list
Sadly, the SERIES compiler available on Quicklisp right now is a bit arcane to use. It seems like it would have been more user-friendly had it been integrated into the ANSI Common Lisp standard, so that it had access to compiler internals. The trick seems to be to use macros instead of (series::defun ...) and to use (series::let ...) instead of (cl:let ...). Note that the two crucial symbols 'defun and 'let are not exported by SERIES. So using the package is insufficient, and pipelining fails without a decent warning.

Am chewing on the SERIES source code. It is available on SourceForge [series-source]. If anybody is interested in porting it, then please reach out. It seems to be of similar importance to the relooper algorithm [relooper-reference]. Waters(1989b), page 27 (document page 29), even demonstrates an implementation for Pascal. So it is possible.

References:

dapperdrake(2023): Faster Loops in JavaScript https://dapperdrake.neocities.org/faster-loops-javascript

Waters(1989a) document page 48, paper page 46 https://dspace.mit.edu/bitstream/handle/1721.1/6035/AIM-1082...

Waters(1989b) document page 29, paper page 27 https://dspace.mit.edu/bitstream/handle/1721.1/6031/AIM-1083...

[relooper-reference] http://troubles.md/why-do-we-need-the-relooper-algorithm-aga...

[series-source] https://series.sourceforge.net/

[mcilroy-source] https://franklinchen.com/blog/2011/12/08/revisiting-knuth-an...


Hi! I saw your PR review of a community effort to add Bun to the Techempower benchmark. You had really great, exact feedback about "unnecessary allocation here", "unnecessary allocation there".

It was eye-opening, in terms of how often we JS programmers play fast & loose with "a little filter" here, "a little map" there, and end up with death by a thousand allocations.

Given that context, I'm curious:

1) How much you think Bun's (super-appreciated!) fanatical OCD about optimizing everything "up to & outside of JSC" will translate to the post-boot performance of everyday backend apps, and

2) If you're tempted/could be compelled :-D to create a "Mojo of TypeScript", where you do a collab with Anders Hejlsberg on some sort of "TypeScript-ish" language such that, if a TS programmer plays by a stricter set of rules and relies on the latest borrowing-inference magic, we could bring some of Bun's amazing-performance ethos to the current idiomatic FP/JS/TS style that is "lol allocations everywhere". :-)

Or, maybe with Bun bringing the right native/HTTP/etc libraries wrapped around the core runtime, doing "just the business logic" in JS really won't be that bad? Which iirc was the theory/assertion of just-js when its author was benchmark hacking Techempower, and got pretty far with that approach.

Anyway, thanks for Bun! We're not running on it yet, but it's on the todo list. :-)


It amazes me how many people in that thread keep saying that we shouldn't even want it, that this is not what nix is about, that no one should install historical versions etc.

BTW, another issue linked is from 2015: https://github.com/NixOS/nixpkgs/issues/9682

Given that my question never got an answer on HN (not until I had a discussion with the original article's author on Twitter and he posted this solution), it's clear that:

- this issue still exists

- how to do this is still not properly documented

- no one really knows how to do it in nix

- this extremely common functionality that is literally required for actual reproducible builds is entirely glossed over in all the articles praising nix


Recently a Chinese media outlet interviewed me and I talked about a few side projects I have done in the past. I talked about the Neuralrad Mammo Screening project and the Neuralrad multiple brain mets SRS platform. More public awareness of radiation therapy will greatly help the community, and we believe Stereotactic Radiosurgery (SRS) will eventually replace the majority of whole brain radiation therapy (WBRT) in the next five years.

Here is the link to the original article: https://www.toutiao.com/article/7094940100450107935/


I'm a small time angel investor (I write 8-10 $100K checks a year), so YC is really important to me. Almost all my checks are in YC companies. I don't have the time nor skills to do proper due diligence, but with YC I don't have to. Investing in a YC company provides me with already vetted founders who I know will have an incredible support network and are already well connected. Yes, I give up some upside because of how much YC already owns, but to me it's worth every penny for the service that YC provides to me as an investor and to the startup.

That's the investor side of the equation, which, if you believe this thread, is his main complaint -- that it's too investor-friendly.

But I also get to see the founder side of the equation. Since I'm writing such small checks, a lot of my value to the company is offering help at the early stage, often with architecture review or product reviews, and sometimes even interviewing engineering candidates for them. Because of the close relationship, sometimes I end up becoming friends with the founders I invest in. Which lets me see how YC helps them as founders.

Need help connecting with someone at company X? There's almost always another YC founder who can make the intro. Need more money? YC has got VCs on speed dial for you. Need media coverage? There are journalists from highly respected outlets asking YC for good stories every day. Even basic stuff, like, "I need an accountant" or "I need a lawyer who can handle this special type of law" can be much easier solved with a quick intro from YC.

Getting into YC just makes everything else about doing a startup so much easier. It doesn't make it easy, because startups are never easy. But it takes away a lot of the low level friction.


A teeny-tiny orientation for (prospective?) Nix newcomers:

Having spent years on Arch and Gentoo, imo the NixOS community is outstanding even among 'advanced' Linux distros in terms of expertise and kindness.

I'll never forget one time I was having problems with a Python package generation tool; when I mentioned it on IRC, one of its authors (Rok Garbas, who is quite active in the community) just hopped on a call to debug the issue I was having right then and there.

If you're on the fence about Nix and that kind of community is valuable to you, make sure to check out the forums [1] and realtime chat [2] (the latter of which is unfortunately still in the process [3] of migrating away from Freenode).

If personal assistance/mentorship of the kind I described at the beginning of this post is appealing to you, several generous and knowledgeable community members host Nix ‘office hours’ [4]. I'm not sure who all are still running them, but I know tomberek is for sure [5] and you can find a link in the footnotes.

There is one serious problem with NixOS and the wider ecosystem right now, namely that the best experience available depends on Nix features that are unreleased. Nix makes it very easy to run bleeding edge builds of Nix and enable these features, but the unofficial status of it all has slowed adoption, integration, and documentation of these tools in the wider community.

That aside— i.e., if you're willing to be a little bit of a pioneer— the community is on the whole pretty rich with good documentation of everything but the bleeding edge, unreleased bits at this point. Those pieces mostly impact the CLI and specifying inputs to the expressions you use to define your system. Everything in the NixOS and Nixpkgs manuals is still accurate. The unreleased components are also fairly mature despite their unofficial status: many folks in the community have been using them for a year or two now.

Finally: if you're interested in dipping your toes in without committing to a system fully managed by Nix and you're a macOS user, nix-darwin [6] provides a pretty NixOS-like experience using a module system and CLI based on NixOS'. There's nothing equally complete in terms of managing system services in a declarative fashion on non-NixOS Linux, but home-manager [7] provides some functionality for enabling user-mode services in a declarative style.

You can check whether your favorite software is packaged for Nix here [8], and additionally NixOS does support several other forms of cross-distro packaging/deployment, including Flatpak, AppImage, and Docker. Steam support is native and works without much fuss, too.

---------

1: https://discourse.nixos.org/

2: https://matrix.to/#/!MKvhXlSTLGJUXpYuWF:nixos.org

3: https://github.com/NixOS/rfcs/pull/94

4: https://discourse.nixos.org/search?q=office%20hours

5: https://discourse.nixos.org/t/nix-office-hours/11945

6: https://github.com/LnL7/nix-darwin

7: https://github.com/nix-community/home-manager

8: https://search.nixos.org/packages


FWIW...

1. Toshi https://github.com/toshi-search/Toshi (Rust, 3.1k stars)

2. Tantivy https://github.com/tantivy-search/tantivy (Rust, 4.5k stars)

3. PISA https://github.com/pisa-engine/pisa (C++, 486 stars)

4. Bleve https://github.com/blevesearch/bleve (Go, 7.4k stars)

5. Sonic https://github.com/valeriansaliou/sonic (Rust, 10.9k stars)

6. Partial comparison https://mosuka.github.io/search-benchmark-game/ (tantivy Vs lucene Vs pisa Vs bleve)

7. Bayard https://github.com/bayard-search/bayard (Rust, on top of Tantivy, 1.4k stars)

8. Blast https://github.com/mosuka/blast (Go, on top of Bleve, 930 stars)

Algolia alternatives with some compatibility

1. MeiliSearch https://github.com/meilisearch/MeiliSearch (Rust, 12.4k stars)

2. typesense https://github.com/typesense/typesense (C++, 5.1k stars)


I’m doing around $2k/month with https://divjoy.com, a tool for React developers. Not wildly successful, but it pays the bills and I’m optimistic I can grow that to $10k+.

If anything I’ve seen a slight bump from COVID. Lots of people with time on their hands who want to launch an MVP.


I run Nomad List and Remote OK by myself. Nomad List brings in ~$336,000/y with 972,480/mo pageviews and Remote OK ~$301,000/y with ~628,210/mo pageviews. I do everything myself from coding, designing, front end, back end, marketing, etc. I have one person on emergency call in case the server goes down when I sleep but that hasn't happened in years.

You can see live revenue/traffic here, as I share it all:

https://nomadlist.com/open

https://remoteok.io/open

I have no funding, no debt, no employees, just revenue, and profit margins are somewhere in the 80%-90% range.

Nomad List got started when I was traveling and working remotely ~2013/2014 and wanted to discover more cities that would fit the criteria of nice weather, affordability, and fast internet. Since then I've added hundreds more criteria and it's become a giant database, and also a community. The community is how it makes money, as people can pay to join the site and access a chat, a trip planner, a forum and many more features.

Remote OK is a much simpler business as it's just a job board. It got started because, after building Nomad List, a lot of people around me wanted to start working remotely and traveling but didn't have remote jobs. There was like one remote-specific job board back then and it was quite limited. I thought "why not aggregate remote jobs from traditional non-remote job boards". So I did that, and slowly started selling my own job posts on the site, which is how it makes money now.

The Coronavirus has substantially affected my business:

Nomad List especially has been affected, losing over 50% of its revenue. The site is made for people working remotely and actively traveling, so that is to be expected during this crisis. You can even see the complete disruption the Coronavirus brought to traveling members of my site; scroll down to "Trips Taken by Users" on https://nomadlist.com/open.

Remote OK is less affected and might even get a positive effect out of this crisis since remote work becomes more popularized and mainstream during and after this. There is in fact a rise in jobs posted, scroll down to "Job Posts Sold" on https://remoteok.io/open.

Personally I'm less affected financially since I don't have employees and I've saved most of my revenue over the last few years, hardly spending anything. Most people have repeatedly told me over the years to hire and spend more, but I did the opposite. That means I have a very solid cash buffer now, so I can weather this storm quite well. I feel sad/scared for other businesses with high costs that might not be so lucky, especially the employees involved.


McKinsey & Company | Engineers and Data Scientists | NYC, Bay Area, Atlanta, Waltham | Full-time | Remote for now

My team at McKinsey & Company builds software tools for some of the biggest financial services firms in the world. We also do digital business building in real estate and other industries, as well as machine learning to solve real world challenges.

We are hiring folks who want to work on interesting problems, in a professional environment.

Your skills: application development, data pipelines, testing / software quality, agile / working with stakeholders

What we offer: excellent benefits, competitive pay, flexible working environment, interesting problems

Apply here:

Front end: https://www.mckinsey.com/careers/search-jobs/jobs/front-endd...

Full stack: https://www.mckinsey.com/careers/search-jobs/jobs/fullstackd...

Data science: https://www.mckinsey.com/careers/search-jobs/jobs/seniordata...

Email me with questions at Chris_Anderson at mckinsey.com


Gruntwork | 100% Remote | Full-time | Full-stack engineers, Frontend Engineers | https://gruntwork.io

Gruntwork co-founder here. We aim to improve humanity's most important invention: Software. Our product enables software teams to launch and maintain production-grade cloud infrastructure in days, not months.

We create the building blocks that devs and DevOps engineers can use to make launching in the cloud 10x better / faster / easier. We think of our work as creating a new paradigm for how DevOps can be done, one that leverages the insight that so many companies re-invent so much of the foundations that software engineers need to build and launch their apps. We work primarily with AWS, K8s, Terraform, Go, Typescript, and React, and introduce new tech as needed. We’re a small team (~15 people), but our clients include the United Nations, Adobe, TicketMaster, Verizon, and lots of startups.

We are profitable, self-funded (no investors, no debt), pay salaries, equity, and bonuses according to transparent formulas, and are very focused on building a company we're proud of. We are 100% remote, with half our team in the USA and half in Europe/Africa. We have company-wide in-person meetups every few months. We welcome applicants from all backgrounds.

Our measure of a successful Grunt is (1) make impact, (2) think like an owner, (3) communicate effectively, and (4) be a good person. If this sounds like you, we're hiring:

- Software Engineer (full-stack)

- Senior Software Engineer (full-stack)

- Principal Software Engineer (full-stack)

- Frontend Engineer (bonus points for some design skills)


Tuune | London, Singapore and REMOTE global | CTO, Engineering Manager

We're going to help the world cope with the explosion in remote working caused by Covid-19. No more lifeless, Windows XP-esque videoconferences — our product will use technology as it should be used: to enhance the human experience.

To achieve our mission, we need to build an enterprise-grade videoconferencing platform that is 10X better than other tools out there, and we need to do it fast.

We started 3 weeks ago and have secured an initial round of funding. Now looking for a world-class CTO and engineering manager. Particular interest in those with solid experience in building WebRTC, high scalability platforms, voice and conversation ML/AI, or signal processing.

Contact work@tuune.com. I'm one of the co-founders, Aaron.


Keep in mind that probably the most important spec when considering a new laptop is one that is often not directly stated: the processor series.

I'm not talking about i3/i5/i7, but rather U/Y/H. This letter determines the TDP (thermal design power/point) the machine is designed to run at. The TDP will govern the setting for the base clock speed, and, just as importantly, the throttling behavior under load.

Processor series TDPs are Y: 4.5W, U: 15W, H: 45W.

The new MacBook Air appears to have a Y series processor, like the MacBook, which means it will be aggressively throttled to keep power consumption and heat generation low.

Practically, that means that the new Air will not be capable of running sustained workloads much above its base clock speed, which makes it unsuitable for many programming-related tasks.

The Pro is still a much better choice for programmers. The 13 is suitable for many things, but the 16, with the H series processor, is really preferred for computationally intensive work.

You can get away with this machine if your workflow primarily involves a text editor and remote servers, but otherwise I would still opt for the Pro.


What a simplistic way to put things.

You need to do a lot of things you do not love before you have the skills, the vision and the attitude to actually do what you love, and accomplish the goals you want to accomplish...

As a millennial, I think this kind of attitude is the problem with our generation. I mean both simplifying things into instagram-quote-sized sentences like "always do what you love" and also convincing ourselves that if you are not doing what you love, you should be doing something else.

Maybe we should focus more on how to still motivate oneself to do things you do not love so much, but that are necessary.

Often, things that I really do not want to do are necessary for me to achieve things I really want to achieve.

Articles like this, and the beliefs they seem to want to propagate, are probably a big source of depression and failure in young adults (and not so young adults) these days.


For the record, I do agree with your general sentiment that many of these abstraction layers are unnecessary, and in many ways harmful. Especially when it comes to data integrity, trying to enforce constraints and triggers at the application level is always going to be clunky and error prone, and monolithic frameworks like Django and Rails actively encourage you to do everything at their level and give you many ways to shoot yourself in the foot.

But I also don't think Hasura and Postgraphile and related projects are the silver bullet. Why can't we have something in the middle? How about a web framework that leaves data integrity up to the DB and just gives you an easy way to "glue" together bits of logic and queries, and do migrations, etc.? Or even just use Django/Rails/whatever but don't use the footgun features? At least then, if/when you do need to scale (in any sense of the word) past what a relational DB can offer you, you can easily do that instead of trying to shoehorn everything into the DB.

Aside: I'm working on a GraphQL library for Python that I hope will fulfill these goals, but development on it has stalled due to lack of time. Hopefully I can pick it up again soon.

FUD may have been strong, but in general I detest when people decry entire categories of technology as being "wrong" or unfit in an absolute sense (or, conversely, tout a technology as a silver bullet), when in fact everything has its tradeoffs and there are good reasons to use almost anything. Understanding those tradeoffs is vastly more important than searching for the universal answer to everything.

PS: Hope you're doing well! :) Your contact info isn't on HN, but email me if you want to get coffee/lunch sometime!


I have this idea for a new framework/language. I'm not sure if it already exists or if it's a dumb idea in practice, but anyway.

You build a monolithic application. Everyone works on the same code base. Things are broken up into modules/classes/packages. From the programmer's point of view it's just like working on a standard Java project or something similar.

The magic happens at the method and module boundaries. When the application is first started everything works normally. Methods call other methods using addresses. As the application runs, some parts of it become hotter than other parts. At some trigger point an included process starts that spins up 1+ cloud instances. Only the hot code is deployed to the instances. If necessary, the instance is load-balanced across multiple nodes. You configure the triggers and whatnot as part of the application's config. The framework/language would either come with support for popular cloud services or allow you to create whatever system you need to create the instances.

My hypothetical language/framework would proxy all method calls and remap object instances to the new instance(s). If the extracted code cools down enough, it is integrated back into the main monolith. At that point proxying is turned off and the methods use addresses again.
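
A toy TypeScript sketch of just the call-proxying piece (every name here is hypothetical; the real thing would need codegen or runtime support):

  type RemoteInvoker = (method: string, args: unknown[]) => Promise<unknown>;

  // Wrap an object so each method call is counted; past a threshold,
  // calls are forwarded to the extracted remote instance instead.
  function hotSwappable<T extends object>(
    target: T,
    threshold: number,
    invokeRemote: RemoteInvoker, // e.g. an RPC to the freshly spun-up node
  ): T {
    const counts = new Map<string, number>();
    return new Proxy(target, {
      get(obj, prop, receiver) {
        const value = Reflect.get(obj, prop, receiver);
        if (typeof prop !== "string" || typeof value !== "function") {
          return value;
        }
        return (...args: unknown[]) => {
          const n = (counts.get(prop) ?? 0) + 1;
          counts.set(prop, n);
          if (n > threshold) {
            return invokeRemote(prop, args); // hot: call the remote copy
          }
          return value.apply(obj, args);     // cold: plain local call
        };
      },
    });
  }

One wrinkle the sketch makes visible: the moment a method goes remote its result becomes a Promise, so the framework would have to make every module-boundary call async from day one for the illusion to hold.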

Using this approach you get all the advantages of a monolith (interface compatibility checked by the compiler, no need for EVERY service to write its own HTTP code, etc). Of course you can't optimize latency as easily, and merging is harder with monoliths. There's undoubtedly a hundred other reasons why this is a terrible idea.


> These are the olives of technology. Olives aren’t candy, and tasting them the first time isn’t always pleasant, but soon you will develop a taste for them. They have been here since ancient times, and they will remain for centuries to come. They are good for you, solid, reliable, nutritious. Eat less candy and more olives.

For what it's worth, olives are basically inedible from the tree and require a considerable amount of processing before they are actually edible.


Your class will not work without a language-level if/else construct or something equivalent. In Smalltalk, if/else is implemented purely through message passing. There is no "real" if/else.

IIRC, Smalltalk has a Boolean class and two subclasses of Boolean: True and False. There is a single method taking two arguments (ifTrue:ifFalse:). The method is overridden in each subclass: True evaluates the ifTrue argument, False evaluates the ifFalse argument. This happens dynamically, at runtime. Again, the mechanism is generic enough to fully replace all use cases of "traditional" if/else constructs.
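
A rough TypeScript rendering of that mechanism (the names only approximate the Smalltalk selectors): two classes respond to the same message differently, so dispatch itself does the deciding and no if/else statement appears anywhere:

  abstract class Bool {
    abstract ifTrueIfFalse<T>(onTrue: () => T, onFalse: () => T): T;
  }

  class True extends Bool {
    // True evaluates only the first block...
    ifTrueIfFalse<T>(onTrue: () => T, _onFalse: () => T): T {
      return onTrue();
    }
  }

  class False extends Bool {
    // ...and False evaluates only the second.
    ifTrueIfFalse<T>(_onTrue: () => T, onFalse: () => T): T {
      return onFalse();
    }
  }

  const flag: Bool = new True();
  console.log(flag.ifTrueIfFalse(() => "yes", () => "no")); // "yes"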

Clearly, you haven't thought this through.

Edit: People here would do well to read this: https://pozorvlak.livejournal.com/94558.html


>`ifTrue` and `else` are extension methods added to the `Boolean` type.

You don't seem to understand what this discussion is about. Extension functions in Kotlin are statically dispatched, so while they are a nice feature, they are completely irrelevant here.

It's not about what your invocation code looks like. The important part is that at some point the code needs to make a decision whether to invoke the "if" case or the "else" case. Smalltalk achieves this by having two objects/classes (True and False) that handle the same message differently. The implementation of those objects does not have a hidden control flow statement. Your code would.


[Chicken Scheme](https://call-cc.org) is fast, makes native binaries, and has a giant library of "eggs" covering most of the SRFIs. It's R5RS working its way towards R7RS. I've been using it for my "Python but fast" code for the last year or so, and it's one of the best production languages I've ever had.

[Chez Scheme](https://scheme.com) is super fast, and has the best REPL I've ever seen, but can't easily make binaries, and has limited external libraries. It's R6RS, which I prefer, but in the event you find other Schemers to work with about half of them are going to be annoyed it's not R7RS.

I found Racket to be substantially slower to compile and at runtime, the library is weird and not what I expect of a Scheme, and DrRacket IDE has some annoying quirks (it destroys your REPL environment every time you edit & run source, which is just monstrous). It's really heavily designed around educational uses, not so much production, and with the "Racket 2" changes it's likely to fragment and chase off any serious users.

Learning one Scheme (with SICP, TSPL, etc.) gets you 95% of the way with any Scheme; not so much with the three major LISPs (Common Lisp, Clojure, Arc). You'll still spend half your time reading library docs and SRFIs, which is where they all differ.

With any Scheme or LISP, you're going to face opposition from soi-disant "programmers" who don't like to learn anything about programming, and managers who don't want to support anything that isn't in the last 5 buzzwords they've heard, but if you own your own project, it's pretty great.


Former professional Zope guy here! (I made a living building things with it in the early-mid 2000s).

What happened to Zope? Well, Chris McDonough (creator of Pyramid, and Zope veteran) blogged about this in 2011 and from my perspective he got it exactly right. I still think this history is fascinating and I wish more people knew it.

https://web.archive.org/web/20161019153348/http://plope.com/...

After the history lesson, some of his "lessons learned" bullet points seem very apropos in the context of Dark, particularly the first two:

* "Most developers are very, very risk-averse. They like taking small steps, or no steps at all. You have to allow them to consume familiar technologies and allow them to disuse things that get in their way."

* "The allure of a completely integrated, monolithic system that effectively prevents the use of alternate development techniques and technologies eventually wears off. And when it does, it wears off with a vengeance."


First I’ve heard of Dark. The blog post is cool, but reading it I felt two main things: (1) no pull-request reviews would be terrible for a team, and (2) everything being instantly live in prod and automatically versioned etc. sounds like a nightmare once you move beyond a simple program.

Reading on Darklang.com I see they kind of address this. The ecosystem they’re building is training wheels for developers, a way to make “coding 100x easier.”

I can see that working in a sense, and being an entry point that a lot of people use to make neat toys. I could even see it being a gateway that people use to get interested in and learn about programming. But I can’t imagine wanting to build a business with more than one programmer, or any kind of scale on a completely black box system like that.

I will also say, there’s a problem when it comes to entry-level systems, trying to teach people to code:

The struggle with the complexity is actually important. Dark isn’t actually making all the work of building and running a web application go away, it’s abstracting it all into a platform such that you can’t see it.

Suppose a person gets into coding self-taught and learns to work this way. The knowledge isn’t going to be very transferable - i.e. when they look at other languages or systems they’re likely to struggle with problems like “what does prod versus dev mean, I just want my program to run for the world...”

You usually move knowledge between ecosystems by translating “I did it this way in (toolset A), so what’s the parallel in (toolset b)?” The more you have an idea of the underlying principles the easier this is to figure out.

That said, Dark certainly looks neat, and I imagine the implementation is quite cool.

The only other nit I’d pick is the name. To me, it’s not really a language, but maybe a development environment, or framework. I suppose it has a language in it, which I’d probably call DarkScript or something.


“it's too unnatural to take one from the array, clone it and change it. It seems more natural to have something class-like: An "empty" prototype and a constructor function.”

That may only feel more natural to you because you are used to a Platonic/Aristotelian view of the (programming) world (https://en.m.wikipedia.org/wiki/Aristotle#Epistemology), where a class is defined as “that which all instances of it share”, as opposed to a Wittgensteinian view; Wittgenstein argued that there often isn’t anything that all of the instances of a class share. Games, for example, may or may not be played on a board, may or may not have turns, and may not even have players (https://en.wikipedia.org/wiki/Conway's_Game_of_Life)

Class-based modeling also breaks down when, for example, modeling mammals that may have two heads, four or five legs, or when defining ‘object of art’ (which should, for example, be flexible enough to include some, but definitely not all, urinals), etc.

Also, with unica (which happen a lot in UI programming, where most dialogs are unique, as https://cdn.preterhuman.net/texts/computing/apple/newton/Pro... argues), why split them into two, the class and its sole instantiation?

Prototypical inheritance allows you to implement class-like behavior in the way you mention, but also is more flexible. One can implement the equivalent of static methods in the prototype, for example, create “once per class” state in intermediate objects, etc.
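
A small sketch of those last two points in JavaScript-flavored TypeScript (all names made up): the prototype is itself a usable object, and "once per class" state simply lives in it:

  const dialogPrototype = {
    instanceCount: 0, // "once per class" state, kept in the shared prototype
    title: "Untitled",
    open() {
      console.log(`Opening dialog: ${this.title}`);
    },
  };

  // A unicum: for a one-off dialog, use the object directly; no class,
  // no sole-instantiation boilerplate.
  dialogPrototype.title = "About";
  dialogPrototype.open(); // "Opening dialog: About"

  // When more instances are wanted, clone and tweak.
  const saveDialog = Object.create(dialogPrototype);
  saveDialog.title = "Save As"; // own property, shadows the prototype's
  dialogPrototype.instanceCount++;
  saveDialog.open();            // "Opening dialog: Save As"
  console.log(saveDialog.instanceCount); // 1, read through the prototype chain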


I have a small, CPU-intensive benchmark which shows the performance of QuickJS to be comparable to other interpreters written in C. It's on par with MicroPython and recent versions of Ruby, and a little faster than CPython and Lua.

However, it's still 2-3x slower than the optimized, CPU-specific interpreters used in LuaJIT and V8 (with their JITs disabled), and 20-100x slower than the LuaJIT, V8 and PyPy JIT compilers.


Here are the steps to reproduce the attack from what I gather:

1. Find a vulnerable site. The author picked Revolut, a 3-year old, well-funded fintech startup. Others might be found at https://www.openbugbounty.org.

2. Inject the script. The author did so by tacking a URL parameter containing script content onto a link he obtained from the Revolut site.

3. Preview the attack with Google's Web Rendering Service, which apparently uses the same version of Chrome used by Googlebot.

4. Submit the link to Googlebot for crawling.

5. View the cached page from the Google results page.

> I reported this to Google in November 2018, but after 5 months they had made no headway on the issue (citing internal communication difficulties), and therefore I’m publishing details such that site owners and companies can defend their own sites from this sort of attack. Google have now told me they do not have immediate plans to remedy this.

Translation: Google declares open season on this attack.


The self-definition comment is a reference to the common LISP exercise of writing a "meta-circular evaluator"[1] - that is, writing a LISP interpreter in LISP. This is famously done[3] in "Structure and Interpretation of Computer Programs", which is a very popular textbook and MIT course which uses LISP as a teaching language[2]. This can be found for example, in lecture 7 of the SICP lecture series on Youtube[4].

Dijkstra's comment on this is quite acerbic, isn't it? Here is the full quote for easy reference:

> [LISP's] promotion of the idea that a programming language should be able to formulate its own interpreter (which then could be used as the language’s definition) has caused a lot of confusion because the incestuous idea of self-definition was fundamentally flawed.

I agree to the extent that the exercise of writing a LISP interpreter in LISP is indeed often a very confusing exercise for beginners: partly because writing any interpreter (or compiler) is fairly arcane for a beginner, partly because keeping track of which parts are in the implementation language and which are in the target language can be difficult, and partly because it's easy to "cheat" and rely on the outside language in inappropriate ways. I would recommend using a language you already know well and implementing a moderately simple language as a first exercise for a student. But on the whole the exercise of writing an interpreter in its own language - or alternatively writing a self-hosting compiler[5] - is rewarding and does teach you a great deal about how programs work.

I also can't agree that the so-called "incestuous idea of self-definition" is "fundamentally flawed": it is in fact fairly interesting and related to important meta-mathematical concepts like Gödel numbering[6], reductions[7], and the Church-Turing thesis[8]. Writing a meta-circular evaluator seems like a reasonably concrete way to start building intuition towards these important ideas. Perhaps Dijkstra had in mind such problems as Curry's Paradox[9] or Russell's Paradox[10], both of which are paradoxes arising from self-reference? In my view a meta-circular evaluator is much closer in spirit to Gödel numbering (which is valid and useful) than to these paradoxes.
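
For a taste of the exercise without the self-reference (TypeScript rather than Lisp, so explicitly not meta-circular; everything here is made up for illustration), the core of such an evaluator is just a recursive case analysis:

  type Expr = number | string | Expr[];          // atoms and nested lists
  type Value = number | ((arg: Value) => Value);
  type Env = Map<string, Value>;

  function evaluate(expr: Expr, env: Env): Value {
    if (typeof expr === "number") return expr;   // self-evaluating
    if (typeof expr === "string") {              // variable lookup
      const v = env.get(expr);
      if (v === undefined) throw new Error(`unbound variable: ${expr}`);
      return v;
    }
    const [op, a, b] = expr;
    if (op === "lambda") {
      // (lambda param body): close over the defining environment.
      return (arg: Value) => evaluate(b, new Map(env).set(a as string, arg));
    }
    if (op === "+") {
      return (evaluate(a, env) as number) + (evaluate(b, env) as number);
    }
    // Anything else is an application: (f x).
    const fn = evaluate(op, env) as (arg: Value) => Value;
    return fn(evaluate(a, env));
  }

  // ((lambda x (+ x 1)) 41) evaluates to 42
  console.log(evaluate([["lambda", "x", ["+", "x", 1]], 41], new Map()));

Writing the same thing in the language being interpreted is where the bookkeeping described above gets genuinely confusing - and also where it starts teaching you something.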

[1]: https://en.wikipedia.org/wiki/Meta-circular_evaluator

[2]: https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...

[3]: http://www.sicpdistilled.com/section/4.1/

[4]: https://www.youtube.com/watch?v=0m6hoOelZH8

[5]: https://en.wikipedia.org/wiki/Self-hosting_(compilers)

[6]: https://en.wikipedia.org/wiki/G%C3%B6del_numbering

[7]: https://en.wikipedia.org/wiki/Reduction_(complexity)

[8]: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis

[9]: https://en.wikipedia.org/wiki/Curry%27s_paradox

[10]: https://en.wikipedia.org/wiki/Russell%27s_paradox


So many frameworks do this, and I can’t understand why [1]:

    onClick: "checkTodoItem()"
Why? What compels people to go ahead and say: yes, we’re writing everything in JS/TS, which has perfectly fine support for actual functions. That’s why we’re going to arbitrarily use strings that evaluate to function calls in some parts of our framework.

[1] See views https://typescene.dev/docs/introduction/overview
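
For contrast, the non-string version costs nothing and gets checked (checkTodoItem is hypothetical here):

  function checkTodoItem(): void {
    console.log("todo item checked");
  }

  // String wiring: a typo like "checkTodoItm()" only fails at runtime,
  // and refactoring tools can't see the reference.
  const stringWired = { onClick: "checkTodoItem()" };

  // Function wiring: typos and signature mismatches fail to compile,
  // and "rename symbol" just works.
  const functionWired = { onClick: checkTodoItem };

  functionWired.onClick();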


Short answer: nothing. (Even Bret Victor's work doesn't begin to touch it. http://worrydream.com/ Although he is well cool along other dimensions!)

For the flavor, imagine a type-safe LISP that one edits through something like Brad Templeton's Alice Pascal https://www.templetons.com/brad/alice.html or Emacs ParEdit https://www.emacswiki.org/emacs/ParEdit so that an incorrect program cannot be described in the system. (You can make a program that doesn't do what you intended, but whatever it does it will do so correctly.)

For the underlying technology, I would recommend reading up on Conal Elliott's "Compiling to categories" http://conal.net/papers/compiling-to-categories/ The connection won't be immediately clear. The connective is the Joy programming language:

https://www.latrobe.edu.au/humanities/research/research-proj...

https://en.wikipedia.org/wiki/Joy_(programming_language)

http://www.kevinalbrecht.com/code/joy-mirror/joy.html

http://joypy.osdn.io/ (full disclosure: this last one is my own project.)

The key is to see Joy (as opposed to, say, LISP) as the sort-of-AST that HOS is manipulating. If you restrict Joy to its type-safe subset and squint a little you have HOS. Then as a kind of bonus, Joy code turns out to be perfect for the "Compiling to categories" stuff (which is beyond HOS.)

For the UI, AFAIK Jonathan Edwards' Subtext http://www.subtext-lang.org/ is the closest anyone's gotten.

