Unlike most minimalist Lisps, with a community of individual hackers who each maintain their own macros and packages, this one is batteries-included while still being the most flexible, thanks to the #lang header.
Seriously, it comes with awesome data structures (actually more than Python).
The only thing that prevents me from using it as much as Python or C++ is the lack of tools for major editors.
DrRacket is NOT ok, I would like a langserver and vscode extension so badly!
When I have a bit of free time and have finished other projects with a higher priority, I'll definitely give it a try.
(yes, I'm aware there are some language servers, but none of them is really production-ready)
Hey, author here! The extension is only a syntax highlighting one atm (albeit a pretty good one, I hope), but I have bigger plans for it. The problem is, the development of Racket LSP has slowed a lot as of late, and I'm not skilled enough to make one myself.
And yet I'm still too stupid to understand the macro system. Oh well.
I also didn't know there was a Chez version. Racket already has a native AOT compiler, right? What would the advantage of running on Chez be: faster or smaller compiled code?
For Racket-specific macro tooling, syntax-parse [0] in combination with syntax classes is usually the suggested starting point. It's hygienic, with pattern matching and a bunch of other features for parsing s-expression grammars. As mentioned down-thread, Greg Hendershott's Fear of Macros [1] is a good starting point for a number of folks. The curriculum for Racket School 2018 [2] also has exercises that help introduce the syntax-parse tooling.
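To give a flavour (a toy example of my own, not taken from those materials), here's a `my-let` written with syntax-parse; the syntax classes both document the grammar and produce decent error messages:

#lang racket
(require (for-syntax racket/base syntax/parse))

;; Toy example of syntax-parse: `x:id` and `e:expr` are syntax
;; classes, so a malformed use reports "expected identifier" etc.
(define-syntax (my-let stx)
  (syntax-parse stx
    [(_ ([x:id e:expr] ...) body:expr ...+)
     #'((lambda (x ...) body ...) e ...)]))

(my-let ([a 1] [b 2]) (+ a b))  ; => 3
;; (my-let ([1 2]) 3)           ; error: expected identifier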
As another comment said, you have to find a good opportunity to use a simple macro.
(In this case, it is better to use the built-in `for`. Actually, `for` is itself defined as a macro; it is not part of the internal "secret" low-level language.)
Once you are confident and you grok the difference between a macro and a function, you can try the other ways to define macros. These other ways are more flexible, can express weirder macros, and give better error reporting when the macro is used incorrectly. There are good links in the sibling comments.
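A toy `while` loop is the kind of first macro that usually gets suggested; here's a quick sketch with syntax-rules, just to illustrate (in real code you'd use `for`):

#lang racket

;; A toy `while` loop: expands into a named let that re-runs the
;; body as long as the test is true. Only for learning purposes.
(define-syntax while
  (syntax-rules ()
    [(_ test body ...)
     (let loop ()
       (when test
         body ...
         (loop)))]))

(define i 0)
(while (< i 3)
  (displayln i)
  (set! i (+ i 1)))   ; prints 0, 1, 2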
> And yet I'm still too stupid to understand the macro system. Oh well.
This talk[0] by Robby Findler helped me a ton when I was first learning about macros in Racket. I've found that the best way to learn, though, is by doing: when you find a use case for a macro, figure out how to do that particular thing and, slowly but surely, things will start to click.
Racket's macro transformers are about as easy as hygienic macros[0] get, unfortunately. I can occasionally manage to make an er-macro-transformer-based macro in CHICKEN work how I want.
defmacro was a lot easier, but I understand the aversion to it, even if it was rare that it was/is a problem.
[0] syntax-rules-based macros are cake, naturally, but they're also incredibly limited, although I have seen people implement massive, complex OO systems using a sort of ad-hoc state machine built in pages upon pages of syntax-rules rules.
I really wish there was a standard Scheme that compiled to native code for all desktop and mobile OSes and also supported multi-threading. Does anyone know of one?
* it has support for OS-level threads via places[0] (rough sketch at the end of this comment)
* it can produce binary distributions via `raco distribute`[1] by packing together the interpreter and your compiled code into a single executable
* while the current implementation is based on a bytecode compiler and a VM, I believe the Chez implementation actually compiles to native code (although the whole process is transparent to the user)
* it is possible to run Racket on ARM devices, in fact there was a recent thread[2] about that on the mailing list
I realize this isn't 100% what you're looking for, but Racket does come with many nice features. Alternatively, you might also want to take a look at CHICKEN Scheme[3].
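Regarding places, a program using them is roughly shaped like this (my own minimal sketch, closely following the pattern from the docs):

#lang racket
(require racket/place)  ; also re-exported by #lang racket

;; The place body runs in a separate OS-level thread (effectively its
;; own Racket instance) and communicates over a place channel.
(define (start-squarer)
  (place ch
    (define n (place-channel-get ch))
    (place-channel-put ch (* n n))))

(module+ main
  (define worker (start-squarer))
  (place-channel-put worker 7)
  (displayln (place-channel-get worker))  ; prints 49
  (place-wait worker))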
LambdaNative is a cross-platform development environment written in Scheme, supporting Android, iOS, BlackBerry 10, OS X, Linux, Windows, OpenBSD, NetBSD, FreeBSD and OpenWrt.
Well, that was my best shot :(. I'm sorry I don't have an answer then.
You were asking for threads, and Gambit has lightweight threads, so I assumed there was a way to use them via LambdaNative. Multi-core threading is a different story, though.
Yes, it's a bit of a pity that great functional languages like Scheme and even the veteran Common Lisp are losing out to modern languages like Kotlin because of the lack of support in these critical areas. Especially when REPL-based development is so amazingly productive.
Common Lisp has real threading with a standardized interface via Bordeaux-threads and, with some limitations, runs on most of the desktop and mobile operating systems one would expect (the limitation being that most workflows have you write the GUI in a different language and then call into CL, although EQL is an exception to this rule: https://www.cliki.net/EQL).
Does EQL still depend on smoke like common-qt does? Because smoke is so deprecated now that I simply gave up trying to get common-qt to build on a recent distro.
But do all applications need multi-core threading? Languages like Python get around this using the multiprocessing module, which is not the same as multi-core threading, but it is probably good enough for the majority of applications.
REPL-based, interactive development is also a feature. Multi-core probably isn't the topmost thing on everybody's list.
Typed Racket isn't slow (as far as scripting languages go). Maybe you're thinking of Racket's "contract" system?
AFAIK Typed Racket does type-checking during compilation (via raco); the resulting code is no slower than normal Racket, although I'm not sure if it's faster either.
The contract system is different; it works on normal, untyped Racket code and performs checks at runtime; similar to using Python decorators to check the input and output of a function. This slows things down, so it's recommended to only check things at module boundaries. For example:
#lang racket
(require racket/contract)
(require racket/match)

(provide factorial)

(define/contract (factorial n)
  (-> exact-nonnegative-integer?
      exact-nonnegative-integer?)
  (fact n))

(define (fact n)
  (match n
    [0 1]
    [_ (* n (fact (- n 1)))]))
This module exposes the `factorial` function, which checks (at runtime) whether its argument and return value are non-negative integers (i.e. 0, 1, 2, ...). The actual calculation is done by the `fact` function, which is private and doesn't do any checks. If we don't separate the checks from the calculation, we would end up running the checks on every recursive call, which would slow things down a lot.
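(An equivalent and arguably more idiomatic way to keep the check at the module boundary is `contract-out`: the contract then applies only to uses from outside the module, so internal calls, including the recursion, skip it and no separate helper is needed. Roughly:)

#lang racket

;; Same module, but the contract lives only on the provide: calls
;; from inside this module (including the recursion) are unchecked.
(provide (contract-out
          [factorial (-> exact-nonnegative-integer?
                         exact-nonnegative-integer?)]))

(define (factorial n)
  (match n
    [0 1]
    [_ (* n (factorial (- n 1)))]))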
As far as I'm aware, Racket's gradual typing is done by contracts at the interface between typed and untyped code. Hence it's not that Typed Racket is slow; it's that passing untyped values into typed functions can be slow, if we want accurate blame information for errors.
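For example (my own sketch, with the untyped code stood in by a submodule), a typed module that imports an untyped function must declare a type for it, and that declared type is what gets turned into the boundary contract:

#lang typed/racket

;; An untyped submodule standing in for some existing untyped code.
(module helpers racket
  (provide parse-row)
  (define (parse-row s)
    (map string->number (string-split s ","))))

;; The declared type becomes a contract that is checked every time a
;; value crosses the typed/untyped boundary.
(require/typed 'helpers
               [parse-row (-> String (Listof Integer))])

(apply + (parse-row "1,2,3"))  ; => 6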
I'm referring to Takikawa et al (2016), who reported performance numbers 50-100 times worse when mixing typed and untyped Racket code together, in some configurations 100 times slower execution. They applied type annotations at the module level. It would be interesting to know whether the Racket developers have since proven them wrong and, if so, how.
Just to make it clear, the authors of that paper (including myself) are all to some extent Racket developers. Some big improvements have been made but there are still pathological cases. For the latest published on this see this paper: http://users.cs.northwestern.edu/~robby/pubs/papers/oopsla20...
But also it's important to note that it's not Typed Racket or Racket in isolation that are slow, but the inter-mingling of the two due to contract overhead.
Yes, that fits with my own experience. I've not used Typed Racket itself, but I've used contracts in untyped Racket. I found them so slow that I ended up using a macro which discarded them unless it was a run of the test suite.
Figure 3 in that paper is enlightening: the fully typed version takes 0.7x as long as the untyped version, so Typed Racket is slightly faster than normal Racket. Most of the partially-typed versions take 50x to 100x as long, as you say, showing that it is indeed the contracts that slow everything down.
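(For reference, one way to write the kind of contract-discarding macro I mentioned is to decide at expansion time whether to attach the contract; this is only a sketch, and the environment-variable name is made up:)

#lang racket
(require racket/contract
         (for-syntax racket/base))

;; Expansion-time switch: with the (arbitrarily named) RUN_CONTRACTS
;; variable set, this behaves like define/contract; otherwise it
;; expands to a plain define, so production code pays no overhead.
(define-syntax (define/checked stx)
  (syntax-case stx ()
    [(_ (name . args) ctc body ...)
     (if (getenv "RUN_CONTRACTS")
         #'(define/contract (name . args) ctc body ...)
         #'(define (name . args) body ...))]))

(define/checked (double n)
  (-> exact-integer? exact-integer?)
  (* 2 n))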
It depends upon what you mean by slow. Typed Racket code compiles more slowly than normal Racket code, but should run about as fast or faster (for a small set of specifically-typed code[0]).
For reasons I find difficult to understand, the text is between <pre> tags and uses a monospace font (Inconsolata). It is curious that a project that cites Matthew Butterick as a contributor should make such poor decisions about typography, especially when the rest of the documentation is above average in that area.
It's wrapped in <pre> tags because it is preformatted text of the type that would be used in email announcements, etc. (probably because it's the exact text from those media, without transformation), so not using <pre> would lose significant formatting.
Of course, the formatting is broken anyway, because two of the continuation lines of bullet-pointed lines are underindented by one space.