From doing some cursory research, it appears the software in question is called MOSS (Measure of Software Similarity) and is currently being provided as a service [0].
Since it is intended to be used by instructors and staff, the source is restricted (though "anyone may create a MOSS account"). According to the paper describing how it's used [1], "False positives have never been reported, and all false negatives were quickly traced back to the source, which was either an implementation or a user misunderstanding."
There are a couple of reasons why creating a self-hosting compiler can be a good idea:
1. It shows others that your language can handle a project of moderate complexity, and demonstrates what "idiomatic" code in the language looks like
2. It removes dependencies on parts you can't control (once you rewrite it in Zig, rather than C++, you don't need to worry about new C++ features or deprecations between versions)
3. Writing code in the language helps catch bugs in the language specification and in the compiler's implementation of the language.
These are just the ones off the top of my head, but people with more PL-Design experience may be able to elaborate.
Contributions are another one I think. Both the authors and potential collaborators might find it more attractive to use their language of choice. Compilers also tend to pose a variety of challenges that force one to really dig into a language.
I know it's at the entirely opposite end of the language spectrum, but I think that's why TypeScript succeeded while Flow did not: TypeScript is written in TypeScript, while Flow is written in OCaml. And nobody knows OCaml.
OTOH, esbuild is making waves now and it's in golang, so. Who knows.
I'm not a golang fan, but one thing I've noticed about go is that it is possibly the most approachable yet real programming language for beginners I've ever seen. I think this is part of why go is doing so well in the devops space. Sys admins who are not really well versed in programming but know enough to write some scripts here and there are able to bring their domain specific knowledge of admin tasks and write code to contribute.
Like many people here, I tried GnuCash. I enjoyed it, but when I would import data from my bank, it would crash. That, and not having any real way of categorizing my expenditures, was annoying. The other thing that bothered me was not being able to select multiple items.
I recently moved to KMyMoney and everything just clicked. I found the interface to be nicer, especially combining the filter functionality with the ability to select multiple things and tag them appropriately.
I appreciate the article trying to explain Double-Entry bookkeeping, and I would suggest people look into KMM as an alternative to GnuCash.
I think I may have misspoken. What I meant to say was that, at least for me, KMM is a little easier to view both accounts and categories, and search within them.
A more accurate phrasing would have been "Filter all the payers that contain this string, and allow me to select them and see how much money was transferred to/from them."
On this topic, another add-on I have found to be fairly helpful is called "Pluckeye" [1]. At its base level, it simply blocks all images and video in your browser. You can customize it by adding/removing websites from the blacklist, and it is fairly robust against occasional cravings, since changes that unblock a website only take effect after a delay. The only downside is that most websites now look broken, but I've found that I don't really miss being bombarded by the colorful graphics on websites.
I think there is significant work to be done on making these tools more widely known, but I'm happy with the progress being made. Good job, again!
* Lisp does actually work a lot like stuff you are used to: evaluation of argument expressions to argument values, which are passed by value: much like C or Java. Functions return a value or multiple values.
It has familiar features like mutable lexical and global variables, control constructs for selection and iteration and so on. There is even a form of goto.
Aggregate objects like structures, class instances, lists, vectors and so on are actually referential values, like in many languages.
It is said that Javascript is a dialect of Lisp. If you understand how Javascript evaluates expressions, that goes a long way toward Lisp. Ruby is sometimes called MatzLisp, after the surname of its creator, for very good reasons. Lisp has inspired many features found in other languages. The comma, ?:, && and || operators in C appear to be Lisp inspired, as is the very idea of "expression statements": for instance when we call a function in C as a statement, it is an expression with a value, which is discarded, just like in Lisp.
Lisp lists lack encapsulation; they are not opaque bags with which you do things like (add list item). That takes getting used to: always capturing the result value of a list construction. It doesn't take that much getting used to for programmers coming from C, who understand a bunch of ways of representing lists, including representations in which a null pointer represents an empty list.
A container-like list data type is easily written in Lisp, either as a function-based ADT with a couple of functions around a small state struct (or perhaps a cons cell or vector), or as full-blown OOP.
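As a rough sketch of both points above (the names here are made up for illustration, not from any library), in Common Lisp:

```lisp
;; Plain lists expose their cons structure; you capture the result
;; of a construction rather than mutating in place:
(defvar *xs* (list 2 3))
(defvar *ys* (cons 1 *xs*))   ; *ys* is (1 2 3); *xs* is still (2 3)

;; A small container-style ADT: a struct holding the list as state,
;; with an add operation that mutates the container in place.
(defstruct bag (items '()))

(defun bag-add (bag item)
  (push item (bag-items bag))
  bag)
```

With this, `(bag-add b 42)` behaves like the `(add list item)` interface the comment above contrasts with plain lists.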
> Lisp lists lack encapsulation; they are not opaque bags with which you do things like (add list item). That takes getting used to: always capturing the result value of a list construction.
You're conflating two properties: lack of encapsulation and functional update. It's true that lists expose their internal structure as conses, and it's also true that they're usually updated functionally (requiring, as you say, capturing the result), but these properties don't have to go together. It's entirely possible, and arguably desirable, to have functional collections -- where instances are immutable, but which provide operations for creating new instances from existing ones -- which are also opaque data types; see for example FSet [0]. Conversely, it's possible to have fully-mutable lists that expose their internal structure; you just have to wrap each list in a mutable cell. It would be ugly, and I don't know why you'd do it, but it's entirely possible.
You're right; encapsulation doesn't mean mutable state; it means combining code and data (making a capsule).
Lisp lists aren't ... whatever you call those stateful collection things that you can mutate with list.add(42)-style code, which people are used to in a lot of scripting languages nowadays. That will trip up people who are used to that sort of thing.
I have to admit, the slightly misleading title got me. I'm currently going through SICP myself (albeit at a snail's pace), but it seems like the article's author and I both had the same initial objections to LISP, only to be blown away by its simplicity and expressiveness with a few keywords and lines of code. I'm still partial to other programming languages, but LISP holds a special place in my hard drive.
Honest question: I tried using LaTeX for a homework assignment. I had already done all the work on some loose paper, but it was all over the place and thought LaTeX-ing it would help readability. It took me about three hours and I wasn't even halfway done (there were four questions and I had barely done the second one)...
Is taking this long normal for LaTeX? Or is it something you get better at with practice?
I'm doubtful that using Latex for ephemeral stuff like math-heavy homework is a good use case. It's excellent for documents like journal papers and tech reports that you will be revising and distributing, and perhaps coming back to months later. It's also great for collaboration. I could see it working for a lab report.
With mathematical homework, aren't you spending significant time ensuring that what you typed into Latex rendered correctly? (I.e., the edit-compile-look loop?) I sometimes omit parentheses or put braces in the wrong place, which causes the display to be in error. Introducing another step in the process seems troublesome, and would take me out of the "zone" of problem-solving. (I.e., handwritten copy -> Latex -> rendering vs. just handwritten copy.)
I switched from troff to Latex around 1991. The explanatory tables for the sprinkler system and the electrical panel for my house are in Latex. So, I'm a Latex-phile, just skeptical about this case.
On the other hand, doing homework with latex means you'll have less cognitive load when using it for 'real' work later on. I would say it's more a good investment than a necessity.
>With mathematical homework, aren't you spending significant time ensuring that what you typed into Latex rendered correctly?
>[..]
>I could see it working for a lab report.
I think here is your answer: much depends on how much your homework is like a lab report. I have occasionally had courses where there were only a few homework assignments and the lecturer expected written answers typeset in LaTeX (or similar).
But I wouldn't bother either if I was the only person who would read my written notes.
I use it for homework when I often need to edit my previous work, or so that I can omit the proofs of "obviously true" lemmas the first time round, make sure the whole proof works, then go back and fill them in.
During undergrad, I wrote up all applicable college assignments using LaTeX for ~3 years. I recommend it if you want the ability to skillfully typeset math or if you plan to attend graduate school.
Pros:
- Transcribing from paper often revealed problems with my solutions
- Easy to modify / improve solutions once typeset
- Easier for me and the graders to read (I have bad handwriting)
Cons:
- Steep learning curve (first assignment took me many hours to complete, but provided the template for future assignments)
- Painful to edit sequences of equations if you are explicitly showing your work
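For anyone facing that first-assignment learning curve, a minimal homework template along these lines can serve as a starting point (the package choices are just one common setup, not anything specific from this comment):

```latex
\documentclass[11pt]{article}
\usepackage{amsmath, amssymb, amsthm}  % standard AMS math packages
\usepackage[margin=1in]{geometry}

\title{Homework 1}
\author{Your Name}

\begin{document}
\maketitle

\section*{Problem 1}
Show that $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$.

\begin{proof}
Pair the terms $k$ and $n+1-k$; each pair sums to $n+1$, and there are
$n/2$ such pairs, giving $n(n+1)/2$.
\end{proof}

\end{document}
```

Once something like this compiles, future assignments are mostly a matter of swapping out the problem bodies.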
I would find someone's homework template and just copy it. Then when you go to copy your work to LaTeX just replace their content without touching the formatting if that makes sense. I too tried to learn LaTeX in college but decided the learning curve wasn't worth overcoming and what I wanted really wasn't that unique.
Eventually you're going to come up with your own formatting ideas that you can tweak over time but it's much less stressful than drinking out of the firehose when your homework is due in four hours.
I used LaTeX for problem sets throughout grad school, although when I was done with coursework I switched over to Markdown for the dissertation. As long as you only plan to convert your Markdown to LaTeX, you can drop into LaTeX whenever you need more control. This requires knowing LaTeX really well however, which is why it was handy that I'd used it for my problem sets. The end product is really much better, and you do get much better at it with practice, particularly if you also learn a real text editor.
Start out simple. Get Learning LaTeX by Griffiths and Higham. It's a short book that gives you the basics to get started and enough experience to start understanding how to do more advanced things.
Definitely a case of practice as others have said. When I first started using LaTeX I was looking things up on tex.stackexchange.com every two minutes. Once I'd gotten used to it and created a lot of documents, I was able to use it to take notes in real time faster than I could write. I now use LaTeX to take all of my lecture notes and it does a great job. I've defined a couple of helpful environments and created a documentclass along the way to set things up how I like, but it's so much faster than e.g. Word or pen and paper.
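As an illustration of the kind of helper definitions mentioned (the names here are made up, not the commenter's actual setup):

```latex
% A numbered environment for worked examples, reset per section.
\newtheorem{example}{Example}[section]

% Shorthand for a frequently typed symbol (requires amssymb).
\newcommand{\R}{\mathbb{R}}

% Usage in notes:
% \begin{example}
%   The map $f\colon \R \to \R$, $f(x) = x^2$, is convex.
% \end{example}
```

A handful of such definitions, accumulated over a term, is typically what makes real-time note-taking in LaTeX feasible.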
Many years ago when I did my math homework in LaTeX I never wrote raw LaTeX, instead I used LyX (www.lyx.org). It's basically your standard document editor with a GUI equation editor. If you're not obsessed with all the LaTeX layout stuff it's great for just typing up stuff with equations and making it look nice.
I don't do my calculations/scratchwork in latex, but I know people who do, and take notes with it, etc. It's definitely got a learning curve, but once you've got it down it can be faster than writing by hand for a lot of things in mathematics (at least, according to some people I know).
I wonder: did you spend much of this time on math formulas, or on "regular" typography like lists & headings?
TeX math notation is pretty much the only game in town, and worth learning, but there are many ways to skip/ease the rest, which might help your learning curve:
- WYSIWYG with TeX math: Dropbox Paper lets you press $$, type a formula, and press Enter; and I shudder to suggest it, but I hear modern Word more or less lets you type TeX math [https://superuser.com/a/509805/33415].
- Markdown with TeX math: there is alas no single standard syntax but tons of tools do support it: https://github.com/cben/mathdown/wiki/math-in-markdown
For conversion Pandoc is king, infinitely flexible, and can render through LaTeX, HTML or many other ways.
It takes a little bit of time and effort to get everything formatted perfectly. But it sounds like you were having an especially tough time. It just takes a lot of practice and repetition.
Try typing up notes from a class. Great way to review the material and to learn LaTeX without a time crunch. Plus then you will have figured out all the formatting you need for the next homework.
On that note, the sizes of the courses are not even comparable. If you look at "Explore Courses", a website that gives information on Stanford's course offerings per quarter, you see a massive disparity between CS 106A (the intro course almost all CS majors take if they have no programming experience) and CS 106J (the new course). The "A" course, taught in Java, currently has about 560 people in it, while the JavaScript course has about 40. (Source: https://explorecourses.stanford.edu/search?view=catalog&filt... )
I agree that Java may not be the best language to teach to beginning programmers, but to suggest there's been a wide move away from Java (as this article does) sort of oversells the situation, in my opinion.
You know, I'm sort of split on what to think of the click-bait-y article title.
The guy's a PhD at Princeton; he clearly knows the Fourier Transform is more than just a trick - he even mentions at the end that he wants to bring science/math to a wider audience. From what I can tell, this article's aimed more at the nerdy middle/high schooler who likes math than at engineers and computer scientists (the main target audience of this site). So while the article's title sort of tricks you into learning about this "trick", as long as people who normally can't be bothered to look at math (because for some reason it's popular to be "bad at math", i.e. to not want to do math) are learning about the applications of mathematics in the real world, I don't see anything particularly wrong with trying to reach a wider audience.
... and I wish people would stop this witch hunt for "clickbait". It seems any headline that includes trace amounts of creativity, imagination, or suspense is seen as illegitimate, usually with an allusion to some mythical past where every book was apparently called "It was the Gardener: A Murder-Mystery".
But I kind of feel like it might give the wrong impression that these things are "tricks" (whatever that even means). I do agree with you though, the guy probably knows what he's talking about.
Yes, I was a bit surprised at the way the title hinted that this was a new 'trick' thing, when I have been familiar with Fast Fourier Transforms since forever when it comes to audio processing and reverb convolution etc., but then I realised that this is domain specific knowledge that is entirely due to me working in this capacity for so long. To someone in another industry or engineering capacity, FFTs may be an unknown entity, and indeed be an 'interesting discovery'.
I've recently begun to think about the availability of housing and how that impacts the middle/lower stratum of the US, and it seems to me that there have been several short-sighted policies put in place that have put us in a much worse position for the future for short term gain (the article explains how subsidies for single-family homes are a reason for the shift away from small apartment buildings). Has this always been happening, or has it appeared to become more pronounced with the recent focus on gentrification? What can be done to improve affordable housing? I'm not all too familiar with this field, so anything helps!
A lot of it seems to be exacerbated by limited access to quality public transport. As jobs become more consolidated in city centers, the middle and lower classes are more likely to get priced out of living near their workplaces. High insurance and other private-vehicle costs (tickets, maintenance, traffic) make car ownership impossible for mid/lower-class citizens.
So you end up with a bunch of people, living in poor housing conditions, with poor access to quality transportation, which means the amount of time they are spending in transit, or the jobs that they can reasonably have access to, is constrained.
America should have spent more time trying to shift demand away from private car ownership and towards effective public transportation. Doing so would make areas outside of major metropolitan areas available to middle/lower-class people and allow smaller/mid-sized cities to benefit from the tax revenue.
Redlining[0] was going on through the 80s. Much of many Americans' net worth is tied up in their houses, and house prices have risen somewhat faster than inflation in American cities. Consider the magnitude to which black Americans as a population were disenfranchised by this practice until quite recently.
Sources:
[0]: https://theory.stanford.edu/~aiken/moss
[1]: http://theory.stanford.edu/~aiken/publications/papers/sigmod...