I recall someone creating a crypto system and then forgetting to protect the constructor of the initial object, so others could change the constructor and do whatever they wanted with that crypto system. In the end, the creators were just web developers with a little training in crypto.
In those circumstances those millions of coins flying in or out are not a tragedy (at least for me) but a very plausible outcome.
That's a completely different and unrelated type of vulnerability, though.
Implementation mistakes leading to mass coin theft would certainly be cryptocurrency news, but they would not be crypto(graphy) news. Breaking an actual peer-reviewed zero knowledge proof scheme would be.
I am not even a newbie in Rust, and this could be just nitpicking, but it seems that `match` is comparing strings and not characters. If that is the case, then I think Common Lisp can optimize more, since CL has a specialized comparison for characters.
Edited: In the optimized version the author uses bytes and generators and avoids using strings. I don't know if Rust generators are optimized for speed or memory; ideally you could choose the length of the buffer according to the available cache memory.
Edited: I find it strange to use `input = read_input_file()?` and then `eval(&input)`. What happens when there is an error reading the file? Rust is supposed to be a high-security language. In CL there are keywords like `:if-does-not-exist` to decide what to do, and `read` accepts additional parameters for end-of-file handling and for indicating that the read happens recursively inside another `read`.
I should stop comparing Rust to CL and learn Rust first. I consider this kind of article a very good way of learning Rust for those interested in parsing and optimization. Rust seems to be a very nice language when you can afford the time to develop your program.
Specifically, the `?` operator is currently implemented via the operator trait `Try`, as `Try::branch()`, which gets you a `ControlFlow`.
If `Try::branch` gives us a `ControlFlow::Break`, we're done here: return immediately with the value wrapped by `Break` [if any] inside an `Err`. Otherwise we have a `ControlFlow::Continue` wrapping a value we can use to continue execution of this function.
This is type checked, so if the function says it returns `Result<Goose, Dalek>`, then the value wrapped in a `ControlFlow::Break` had better be `Err(Dalek)`, or else we can't use our `?` operator here.
Reifying ControlFlow here separates concerns properly - if we want to stop early successfully then control flow can represent that idea just fine whereas an Exception model ties early exit to failure.
Thanks for the info. I imagine that in this case, since the error is not captured, it would end up producing something like a panic. So a question mark is used when the expected result is of type Result (or Option). Also, https://doc.rust-lang.org/rust-by-example/error/result.html describes the Result type as Ok(T) or Err(E), and indicates that it is a richer version of Option.
Yeah, if `main` returns an error I think it exits with an error code and prints it out, so quite similar to a panic.
I think the blog post is not focusing on error handling too much, but in any case this is 'safe'; it just could likely be handled better in a real-world case.
Likely, comparing on `char` ('+') would be slower, as it requires decoding the `&str` as a `char`, which comes with some significant overhead (I've seen 9% on a fairly optimized parser). Ideally, when your grammar is 7-bit ASCII (or any 8-bit UTF-8 values are opaque to your grammar), you instead parse on `&[u8]` and do `u8` comparisons, rather than `char` or `&str` comparisons.
Thanks for all the information you provided. I will read Rust by Example and stop posting in this thread to avoid deviating from the OP. Anyway, perhaps other readers are learning Rust and have the same questions in mind, so your answers are welcome for them too.
Edited: I will eliminate my catfacts username (changing the password to a random one). I don't like being downvoted, and I know I should not mention it, but things are what they are. Goodbye catfacts!
I remember Clasp, a Common Lisp implementation in C++ using LLVM. Clasp was promising, but progress has been very slow. Since Clojure is similar to CL, one wonders whether jank will run into similar problems. Might I ask the author of jank whether he knows about Clasp and, if so, how this project will try to avoid stagnating?
In that post and its comments we read that Clasp was 100x slower than SBCL, and the author of Clasp claimed: "LLVM is a great library for implementing C and C++ but more work needs to be done to support Lisp features like closures and first-class functions. We are working on that now".
I hope the work Clasp's author has done over the last 11 years can help today's efforts. Surely the LLVM of today is not that of 11 years ago. Anyway, IMHO, sharing some knowledge could be productive for any project involving C++, Lisp, or Clojure on LLVM.
If I recall correctly, compiling Clasp takes a full day, which does not give a good vibe.
On the happy path, I think Julia compiles through LLVM, but Julia is the result of many people working on it for years. Honestly, I don't think a single programmer able to create a project as big as a performant Clojure in C++ will also have the ability to make it compile code quickly. Getting SBCL's runtime speed and compilation speed would be an extraordinary feat!
In Go there were great sacrifices to get fast compilation, and there were problems including generics while trying to avoid blowing up compile times, since some forms of type checking are NP-complete.
Also, perhaps ECL, a Lisp implemented in C, can give us some hints about how to get better performance and compilation speed.
Perhaps I am just too old to be open to new dreams; anyway, I wish this project the best, and I thank Clojurists Together for supporting it. It must be very intellectually rewarding to work on a project whose aim is to extend and improve your favorite computer language. But the journey will not be an easy one, that's for sure.
> Might I ask the author of jank whether he knows about Clasp and, if so, how this project will try to avoid stagnating?
I'm aware of Clasp and have spoken with drmeister about it in the early days of jank. Ultimately, jank and Clasp differ greatly, not only in that jank is Clojure and Clasp is Common Lisp, but also in their approach to C++ interop.
> If I recall correctly, compiling Clasp takes a full day, which does not give a good vibe.
I'm not sure about Clasp's compile times, but C++ is slow to compile, in general. The jank compiler itself builds from nothing in about 1 minute on my machine. We've yet to see how the jank compiler will handle large Clojure projects, but I do expect it to be slower than Clojure JVM.
> In that post and its comments we read that Clasp was 100x slower than SBCL
That's an old post, so I'd expect that Clasp is faster now. I can say that jank is not 100x slower than Clojure JVM, in my benchmarks.
> Perhaps I am just too old to be open to new dreams; anyway, I wish this project the best, and I thank Clojurists Together for supporting it. It must be very intellectually rewarding to work on a project whose aim is to extend and improve your favorite computer language. But the journey will not be an easy one, that's for sure.
Thanks for the interest and kind words. It's not easy, but it's doable!
Cognitive load in LLMs: When LLMs are faced with syntactic complexity (Lisp/J parentheses/RL-NOP), distractors (cat facts), or unfamiliar paradigms (right-to-left evaluation), the model’s performance degrades because its "attention bandwidth" is split or overwhelmed. This mirrors human cognitive overload.
My question: is there a way to reduce cognitive load in LLMs? One solution seems to be to preprocess the input and output formats so that the LLM can use a more common format. I don't know if there is a more general solution.
LLMs use tokens, with 1d positions and rich complex fuzzy meanings, as their native "syntax", so for them LISP is alien and hard to process.
That's like reading binary for humans. 1s and 0s may be the simplest possible representation of information, but not the one your wet neural network recognizes.
Already over two years ago, using GPT4, I experimented with code generation using a relatively unknown dialect of Lisp for which there are few online materials or discussions. Yet, the results were good. The LLM slightly hallucinated between that dialect and Scheme and Common Lisp, but corrected itself when instructed clearly. When given a verbal description of a macro that is available in the dialect, it was able to refactor the code to take advantage of it.
Agreed, Gleam as a language has very few, generalized syntactic constructs compared to most procedural languages. There's enough of a signal in the data to be able to answer queries about the language; but when writing, LLMs universally trip over themselves. The signal from other nearby languages is too strong and it ends up trying to do early returns, if statements, even loops on occasion.
Yes, the concept of "syntactic complexity" applied to LLMs can be very different from what we think, and I think it depends on the tokenizer. Perhaps LLMs could be fine-tuned using a grammar for computer languages and special tokens for that grammar, in order to reduce syntactic complexity. For example in Lisp, a left or right parenthesis could be tokenized in a special way (as left-lisp-parenthesis or right-lisp-parenthesis); that way the LLM could learn faster and make fewer syntactic errors.
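A toy sketch of that idea (the `<LP>`/`<RP>` token names are made up, and real tokenizer integration would of course be more involved):

```python
# Pre-tokenize Lisp source so each paren becomes its own dedicated
# token, instead of letting a BPE tokenizer merge it into neighbors.
def pretokenize_lisp(src):
    spaced = src.replace("(", " <LP> ").replace(")", " <RP> ")
    return spaced.split()

print(pretokenize_lisp("(+ 1 (f 2))"))
# → ['<LP>', '+', '1', '<LP>', 'f', '2', '<RP>', '<RP>']
```
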
I usually use DeepSeek (the free tier) for code, and when using defun and let it usually lacks one (or more) closing parentheses. So either the way to mark the end is not well understood by this LLM, or perhaps the AST is usually deeper than in Python.
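That failure mode can at least be detected mechanically; a small sketch (it ignores the complication of parens inside strings and comments):

```python
# Count how many closing parens are missing from LLM-generated Lisp,
# so they could be appended automatically.
def missing_closers(src):
    depth = 0
    for ch in src:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth = max(depth - 1, 0)
    return depth

print(missing_closers("(defun f (x) (let ((y 1)) (+ x y)"))  # → 2
```
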
I think a translation layer to a lower-density language might be a good solution; e.g. Iverson's divisible-by-11 check, 0=11|-/d, can be done verbosely in Python.
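A sketch of that verbose Python version (J's -/ folds right to left over the digit list d, giving the alternating sum; 11| takes the remainder and 0= tests it against zero):

```python
# Verbose Python rendering of 0=11|-/d for the digits d of n.
def divisible_by_11(n):
    digits = [int(c) for c in str(n)]
    alt = 0
    for d in reversed(digits):  # right-to-left, like J's reduce
        alt = d - alt           # builds d0 - d1 + d2 - ...
    return alt % 11 == 0

print(divisible_by_11(121), divisible_by_11(122))  # → True False
```
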