Hacker News
Frege: A Haskell-Like Language for the JVM (infoq.com)
111 points by reirob on Aug 13, 2015 | 114 comments



One reason I would love to see this succeed which has almost nothing to do with the particulars of the Frege project itself is the effect it would have on the Scala community.

There is a significant portion of the Scala community which desperately wants Scala to be the Haskell they can get their bosses buy in for. IMO the result is unsatisfactory, both for those people and the Scala community as a whole.

If a straight Haskell variant in the JVM ecosystem were to "take off" then Scala could be its own thing.


So that's the reason most Scala code reads like badly written Haskell fanfic.


Out of interest - are you talking about the use of symbolic operators? Or something else (like the prevalence of the Typeclass pattern)?


Probably that Scala is extremely verbose and boilerplate-heavy compared to Haskell as soon as you start playing with higher kinds.

And when you use just regular Scala syntax, it's just pretty ugly code overall IMO (and one of the slowest compilers of the 21st century too).


I think most Scala code reads like badly written Java code, mostly because people use it as a "better Java". FP fanatics try to use it as Haskell for the JVM, but they are definitely not the majority.

Most scala users have no f-ing clue what a monad is :)


To be fair, most programmers have no clue what a monad is. To understand what a monad is you first have to understand what a monad is. (No, that wasn't a typo.)


I don't see why that should be the case though (apart from the proliferation of bad, overwrought explanations).

Monads are not that exotic. People present them as "deep math", but for a mathematician, for example, it's kid's stuff at the level you need to understand them for Haskell et al.

It's like saying differential equations are some "deep voodoo math" (only monads are even simpler).


I think the bad, overwrought explanations are the whole reason.

Yes, they're much simpler than differential equations. But the difference is, differential equations are the simplest way to solve some difficult problems -- monads are often the gratuitously complex way to solve simple problems.


Monads are simple ways to solve simple problems.

Consider the Maybe monad - it encodes optional values in a simple way. The approach before this is to have something called "null" that would wreck your programs. Maybe monad is one of the best improvements to day-to-day programming tasks that I've personally experienced in my lifetime.
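For JVM folks, Java 8's Optional gives a decent feel for what the Maybe monad buys you over null. A rough sketch (the parsePort helper and the port-range check are made up for illustration):

```java
import java.util.Optional;

public class MaybeDemo {
    // A lookup that may fail returns Optional instead of null.
    static Optional<Integer> parsePort(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // flatMap plays the role of monadic bind: the failure case
        // short-circuits the whole chain without any null checks.
        Optional<Integer> ok = parsePort("8080")
                .flatMap(p -> (p > 0 && p < 65536) ? Optional.of(p) : Optional.<Integer>empty());
        Optional<Integer> bad = parsePort("not-a-number")
                .flatMap(p -> Optional.of(p + 1));
        System.out.println(ok);   // Optional[8080]
        System.out.println(bad);  // Optional.empty
    }
}
```

The point is that "no value" is part of the type, so the compiler forces every caller to deal with it, instead of a null wrecking the program at some distant call site.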


I agree, the concept of monads is very simple, actually trivial. It's just a generalised form of function composition.

But thinking monadically and abstracting monadically is extremely different from what programmers normally learn, for a start because important monads like state and exceptions are built-in features of most programming languages. Seeing that these things have a common pattern, and seeing that it may be worthwhile to abstract this common pattern takes a lot of time.

The mismatch between the utter simplicity of the concept of monads, and the complicated explanations one comes across doesn't help.


To be fair, (I think) you don't need to know category theory to successfully apply FP principles and/or use any functional language. I had great (GREAT) success using Erlang not only without knowing what a monad is, but without even knowing category theory existed, let alone monads, monoids, functors, etc. You also don't need category theory to understand what a monad is [0]. This video is all you, as a practicing programmer, need to know about monads, and it's just an hour long.

[0] https://www.youtube.com/watch?v=ZhuHCtR3xq8


I think a lot of the problems with monads is that they're way too low level to really get a grip on what they're for --- it's like trying to explain algebra by talking about manipulating pixels on a screen.

Once I finally saw some of the things you can do with a monad rather than those useless examples based around Maybe, I had an 'aha!' moment as everything went click.

No sodding idea what a monoid is, though.


A monoid is a type where we know how to combine two elements together. There are two parts: a function that combines and an identity for it. (The function is usually written as an infix operator to make it easier to read.)

For example, we can combine two integers by adding (+) with 0 as an identity. Or we could combine them by (*) with 1 as an identity.

Lists are a monoid because we can append two lists and the empty list is the identity.

The combining operator has to be associative and the identity has to be an identity for it. (Combining something with the identity gives you that original thing back.)

And that's all. There's just not much structure to them. In fact, there's so little structure that mathematicians don't care much about them. It's an exotic-sounding name for a fairly pedestrian concept.

But they're useful in programming. The main reason is that they're so ubiquitous: so many different things form monoids, often in different ways. This means we have a single interface applicable to almost every domain you can think of which is useful even if the interface doesn't tell us much.

They're also a good fit for parallelism. Because the combining function is associative, we can perform a bunch of combinations in any order we like, making it easy to spread them out over multiple threads. It very naturally captures the reduce in map-reduce.

But mostly it's a convenient abstraction that's minimal and simple enough to pop up everywhere while still being useful.
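The description above boils down to a tiny interface. A sketch in Java (the Monoid interface and fold helper are my own names for illustration, not standard library types):

```java
import java.util.Arrays;
import java.util.List;

// The two parts described above: an associative combine and an identity.
interface Monoid<T> {
    T identity();
    T combine(T a, T b);
}

public class MonoidDemo {
    // Integers form a monoid under (+) with identity 0...
    static final Monoid<Integer> SUM = new Monoid<Integer>() {
        public Integer identity() { return 0; }
        public Integer combine(Integer a, Integer b) { return a + b; }
    };
    // ...and a different monoid under (*) with identity 1.
    static final Monoid<Integer> PRODUCT = new Monoid<Integer>() {
        public Integer identity() { return 1; }
        public Integer combine(Integer a, Integer b) { return a * b; }
    };

    // One fold works for every monoid, whatever the domain.
    static <T> T fold(List<T> xs, Monoid<T> m) {
        T acc = m.identity();
        for (T x : xs) acc = m.combine(acc, x);
        return acc;
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3, 4);
        System.out.println(fold(xs, SUM));      // 10
        System.out.println(fold(xs, PRODUCT));  // 24
    }
}
```

Because combine is associative, that fold could just as well be split across threads and the partial results combined at the end, which is the map-reduce point above.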


Monoids are much easier. You just need a binary associative operator |+| (+, *, min, and max all work for ints; ++ for lists/vectors, for instance) and some notion of an identity for the type, s.t. forall a: A, a |+| identity === identity |+| a === a

They are HUGELY useful. One of the most useful is a "union" monoid definition for Maps. In Scala, it's written like this:

implicit def mapUnionMonoid[K, V: Semigroup]: Monoid[Map[K,V]]

So any map with Semigroup values is a monoid (Semigroups are monoids without the zero value). |+| under this definition will combine the Maps, but in the case of collisions, |+| the colliding values together. This Monoid is the ABSOLUTE KING of aggregation. You can foldMap over lists and produce singleton Maps of the shape you want, and let the monoid instance aggregate for you.
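For those who don't read Scala: a rough Java translation of the same idea, using Map.merge as the "colliding values get |+|'d together" step. The union helper is illustrative, and integer addition stands in for the value semigroup:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MapUnionDemo {
    // Combine two maps; on key collisions, combine the values with the
    // value semigroup (here: integer addition).
    static Map<String, Integer> union(Map<String, Integer> a, Map<String, Integer> b) {
        Map<String, Integer> out = new HashMap<>(a);
        b.forEach((k, v) -> out.merge(k, v, Integer::sum));
        return out;
    }

    public static void main(String[] args) {
        // foldMap-style aggregation: turn each record into a singleton
        // map, then let the monoid do all the aggregation work.
        List<String> words = List.of("a", "b", "a", "c", "b", "a");
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) counts = union(counts, Map.of(w, 1));
        System.out.println(counts); // a -> 3, b -> 2, c -> 1 (iteration order may vary)
    }
}
```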


    a lot of the problems with monads is that they're way too low level
Monads are neither low- nor high-level. Monads are orthogonal to this. A monad (in the context of programming languages) is a generalised form of function composition. Instead of composing f : A --> B with g : B --> C yielding a function g;f : A --> C, monads compose functions f : A --> FB with g : B --> FC, yielding g;;f : A --> FC. Here F is some transformation on types. Monads connect the types FB and B in a canonical way. That's all.
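That generalised composition can be written down concretely. A sketch in Java, taking F = Optional, so flatMap plays the role of connecting FB and B (the compose helper is hypothetical, not a library method):

```java
import java.util.Optional;
import java.util.function.Function;

public class KleisliDemo {
    // Ordinary composition: f : A -> B with g : B -> C gives A -> C.
    // Monadic (Kleisli) composition: f : A -> F<B> with g : B -> F<C>
    // gives A -> F<C>, using bind (flatMap) to connect F<B> and B.
    static <A, B, C> Function<A, Optional<C>> compose(
            Function<A, Optional<B>> f, Function<B, Optional<C>> g) {
        return a -> f.apply(a).flatMap(g);
    }

    public static void main(String[] args) {
        Function<String, Optional<Integer>> parse = s -> {
            try { return Optional.of(Integer.parseInt(s)); }
            catch (NumberFormatException e) { return Optional.empty(); }
        };
        Function<Integer, Optional<Double>> recip =
                n -> n == 0 ? Optional.<Double>empty() : Optional.of(1.0 / n);

        // Neither function composes with ordinary (.), but they compose
        // monadically, and failure propagates automatically.
        Function<String, Optional<Double>> parseThenRecip = compose(parse, recip);
        System.out.println(parseThenRecip.apply("4"));   // Optional[0.25]
        System.out.println(parseThenRecip.apply("0"));   // Optional.empty
        System.out.println(parseThenRecip.apply("x"));   // Optional.empty
    }
}
```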


Ooo! Something I think I can answer! A monoid is just a thing that knows how to add to itself!


My understanding of a monad is that it isn't currying or recursion though.

E:

I shouldn't even call it an understanding of monads. I've read at least 15+ explanations of what a monad is and still don't grok it.


Apologies in advance for the unsolicited monad tutorial (it's my turn to be that asshole)...

Firstly, saying "`X` is a monad" is the same as saying "class `X` implements the monad interface".

The monad interface is usually defined with `return` and `bind`, but I think it's more instructive to borrow the terminology from JavaScript's `Promise`...

* `Promise#resolve(value)` is the same as `return`, also known as `pure`. It just wraps the given value in a promise.

* `Promise#then(function)` combines both `bind` (when a new Promise is constructed and returned) and the functor method `map` (when a plain value is returned).

To provide a unified monad/functor interface for both `Promise` and `Array` (using `wrap` instead of `resolve`):

    // `then` provides both `bind` and `map` depending on the return type of the given function:
    Promise.prototype.map = Promise.prototype.then 

    Promise.wrap = Promise.resolve

    // f maps elements to arrays of elements:
    Array.prototype.then = function (f) {
        return [].concat.apply ([], this.map(f))
    }

    Array.wrap = function (x) {
        return [x]
    }
There's no intrinsic value beyond that shared interface; it just allows you to write abstractions without knowing specifically what kind of monad you're dealing with (like the "do" syntax).

EDIT: BTW, I've purposely skipped a few things in this description, it's meant to be illustrative, not definitive...


Not quite ;) Semigroups are defined by their associative, binary operator of type a -> a -> a. Monoids are that plus an identity value of a.


I think using Haskell's type syntax is misleading in this context (and doesn't really imply the required associativity (much less commutativity if you wanted to consider Abelian monoids)).

The operation of a monoid maps from pairs of things to things. So in terms of types:

    <a,a> -> a
Or for some `twin` type constructor:

    twin a -> a
This is a bit more suggestive also in terms of F-algebras, where the operation has the following signature (f is a functor, or mappable container):

    f a -> a


There Haskell people go again =) You try to define something in simple normal terms, and off they go adding highly specific and technically correct words to what was supposed to be a simple explanation.

(I'm not a Haskell expert; I dabble, did CIS194 and half of RWH, and I have no idea what you said -- which is a common problem I run into in the Haskell world: lots of super helpful people who have forgotten what it's like to not speak their language)


Curious what you think of my monad tutorial! http://blog.reverberate.org/2015/08/monads-demystified.html


I'd sort of like to see 'return' explained earlier, rather than used without explanation in the IO monad example. Also it's weird to hear that a monad is a design pattern, and then hear about the monad deciding things; that implies something more concrete than just a design pattern. It's only vaguely clear that "the monad" in that sense is the specific choice of bind/return functions (if that's even correct!). But while it's a bit rough around the edges, I can see this becoming a really good explanation. It explains how the goals of monads relate to the implementation, which seems to be the tricky part, in a way that makes sense to me.


Monads, Monoids, Applicatives: all very simple things. Most people make them seem difficult just to show off and feel part of an elite. Which is a shame, as it prevents wider usage.

James is one of the few people who don't make Monads into a mythical being.

http://james-iry.blogspot.de/2007/09/monads-are-elephants-pa...


Ha true :)

I was making a jest to say that most scala programmers don't know anything about FP and just use it for the type inference and stick to OO practices, vars, etc.


>don't know anything about FP

Depends on how we define FP. Until 2010, when Haskell started to emerge as an HN fad, people were OK with defining FP as what LISP programmers do, and didn't demand FP programmers know what a monad is, know type theory, or even use "immutability" everywhere...

If you read discussions from 2000-2005 for example, very few people define FP as "what Haskell does".


Has Haskell really been a fad here for five years? It feels like just a year or two ...


No I think almost all Scala programmers understand the basics of FP.

They just don't understand / have much interest in the intermediate/advanced parts. I've seen lots of beginners take to for comprehensions, maps, filters etc pretty quickly.

The problem with Scala is that at an advanced level the code can become pretty unreadable.


I'm honestly curious why Kotlin hasn't gathered more steam amongst this (gigantic) group of people. It seems like there's definitely room for a "Java, but a bit nicer", but that's not what Scala is intended to be. Did Scala just start winning before Kotlin was around?


Did Scala just start winning before Kotlin was around?

I think so. By the time I heard of Kotlin, I mentally lumped it in the basket of "just another JVM language". I mean, there are so many now, and you already had Groovy, Clojure, Scala, JRuby, and Jython pretty well established, along with a couple of dozen other niche players. Then Kotlin comes along... I don't know about most people, but I never heard anything about Kotlin that was so compelling that I felt the need to go mess with it. I mean, why that instead of Nice, or Fantom, or Gosu, or BeanShell, or Mirah, etc., etc...

Same thing for Ceylon. It looks like just another "also ran" to me. If the people behind these languages want people to use them, they are going to have to work hard to get the word out about whatever advantages (purported or real) they have.


I'll tell you why. Because right now Kotlin is the JVM language made by the largest company outside Java and Oracle. Well, technically, Ceylon is a Red Hat project, but Red Hat is a company that's basically built on lots of very loosely-related projects (Red Hat also funds JRuby, I believe), and one cannot say "Red Hat has Ceylon's back", while Kotlin certainly has JetBrains' back.

As a result, it is also the non-Java JVM language with the best IDE support, best Android support and best build-tool support.


Because right now Kotlin is the JVM language made by the largest company outside Java and Oracle

Truth be told, I don't really find that very compelling. I mean, do programming languages always need "big company" support to be successful? I don't know. So far Scala, Clojure and Groovy have done pretty well, with varying degrees of commercial support.

As a result, it is also the non-Java JVM language with the best IDE support, best Android support and best build-tool support.

I guess, but here's the thing... the languages I use now (mainly Groovy and Java) have at least good enough IDE support, Android support (ok, I don't really care about that) and build-tool support. So even if Kotlin is incrementally better, or hell, even dramatically better, that stuff doesn't do a lot to sway me to Kotlin.

That said, like most geeks, I like dabbling with new languages, and I am sure I'll try out Kotlin (and Ceylon) at some point. And maybe then one or the other will "wow" me. But for now, it isn't a high priority. And I expect a lot of other people are approaching it with a similar mindset.


> do programming languages always need "big company" support to be successful?

History shows that the answer to that is a resounding yes. Aside from a couple of scripting languages (BASIC and Python), there have been exactly zero successful mainstream languages without a large company behind them. Scala, Clojure and Groovy are doing fine, but their adoption is one or two orders of magnitude less than the mainstream languages (Java, C, C++, C#).


I think you're basically right, but the waters are murkier than you give them credit for. I don't really buy any definition of "successful mainstream languages" that doesn't include Ruby and PHP, and neither of those started with any sort of corporate backing, and still aren't tightly associated with any single "big company".


I think Ruby is too small to be considered an industry-wide success. PHP is indeed extremely successful, and should be added to my small list of scripting languages (in fact, it has been far more successful than Python): BASIC, PHP, Python. However, those exceptions are still only for scripting languages (even if you count Ruby), and none of the languages mentioned by the comment I responded to -- except Groovy -- falls in that category.


I think Ruby is too small to be considered an industry-wide success.

I'm not necessarily the biggest fan of the TIOBE index, but in this case I'll cite it, as it is "close enough" I think. Ruby is #13 right now, which isn't bad. And while I think TIOBE has some warts, I think it's safe to say that anything in the top 20 is fairly successful, and anything in the top 50 is "successful" to a certain degree.

http://www.tiobe.com/index.php/content/paperinfo/tpci/index....


This thread is a bit old for me to expect a response, but I'm always curious when I see people using this terminology: what purpose does the term "scripting language" serve? It doesn't seem to tell me anything about what a language is useful for, its runtime characteristics, or the style or philosophy it encourages.


> a large company behind them. Scala, Clojure and Groovy are doing fine

You do know VMware/Pivotal pulled their support for Groovy in March this year and no-one else stepped in, don't you?


You do know VMware/Pivotal pulled their support for Groovy in March this year and no-one else stepped in, don't you?

I do, but I also know that Groovy is becoming an ASF project[1]. So we'll see how it goes with volunteer support, and perhaps a few paid people here and there from companies that use Groovy.

[1]: http://incubator.apache.org/projects/groovy.html


> Groovy is becoming an ASF project

Groovy has lingering problems in migrating to builds.apache.org, see [1]. Some of the Groovy despots from Codehaus times (I wouldn't know which ones specifically) are keeping some computing machinery physically separated from the Apache infrastructure to run their own build processes, going against Apache guidelines. Some of them also have control over the groovy-lang.org domain (again, I don't know who). I suspect Groovy won't ever become an ASF project, but instead just sit it out in the incubation system until the former Codehaus Groovy despots get a better deal where they don't have to share their control democratically with the ASF. They managed to keep Groovy in the Java Community Process for 9 years before being booted out, all the while using the JSR-241 to promote themselves, so they'd think nothing of leeching on Apache's incubator for just as long before stirring up conflict later on to get booted out when it suits them.

BTW, that link [1] to Nabble's Groovy mailing list archive was recently redirected to an embedded view within the groovy-lang.org website where they can collect IP addresses, and all that implies -- another indictment of the Groovy despots' longstanding intention to control instead of share.

[1] http://groovy.329449.n5.nabble.com/Jenkins-Groovy-Apache-etc...


Yes. None of them has a large company behind it, and none of them is within two orders-of-magnitude from the leading batch of languages (the leading pack has 5-10M developers each). Maybe Scala is, but even Scala certainly isn't within one order-of-magnitude.


None of them has a large company behind it, and none of them is within two orders-of-magnitude from the leading batch of languages (the leading pack has 5-10M developers each).

So? I'm not making a claim that a language needs adoption on that scale to be considered "successful". To me, Ruby, PHP, Python, Groovy, Scala, Clojure, etc. are all very much "successful". The languages I think of as being less so, would be things like Nice, Fantom, Frege, Forth, Modula-3, Rebol, etc.


I don't see Kotlin as a "wow" language, I see it as a "this is sort of boring but pretty nice" language like Java and Go. Maybe Ceylon is similar, I dunno. It just strikes me that Go could use a stronger competitor than Java that runs on the JVM, but doesn't seem overly fancy (Scala, Clojure) or too dynamic (Groovy, JRuby, Clojure). Kotlin is the closest I've seen to that.


I completely agree. Kotlin is designed to be a modern Java, i.e. a blue-collar language that only adopts tried-and-true ideas.


Kotlin is still in development, and the documentation is patchy. I couldn't recommend it for production usage just yet. However, having used a bit of it in an Android app, I think in a couple years it has a chance to be the canonical "java but better". At least I hope so.


> Did Scala just start winning before Kotlin was around?

Scala has about 2% mind share on the JVM after ten years in existence, I would say that not only is it not winning but its time has passed.

Ceylon is one year old, Kotlin is not even out yet, there is plenty of time for either of these two to gain some solid mind share.

Or maybe not. Maybe Java is going to reign supreme for quite a while. Whatever the replacement of Java will eventually be, I'm pretty sure it won't be Scala.


The thing is that while Scala's main focus is not on being "a slightly nicer Java", it does that job fairly well.

There just doesn't seem to be a good point in using Kotlin, as you can tell from how fast the Kotlin proponent in this thread is changing the frame of reference.

- Higher-kinded types? Better compare with Haskell!

- Compilation speed? Let's pick Java, the language with the least useful typesystem!

- Popularity? Let's compare Scala with Java, but compare Kotlin to Scala!

- Commercial backing? Let's conveniently ignore that the language lives off the cross-subsidization of JetBrains' IDE business, instead of relying on the success of their language.

Point is, there is nothing that Kotlin does substantially better that would give it a niche between Java and Scala.

- Non-incremental compilation speed is not that much faster compared to Scala, and substantially slower than Java.

- Most of the things regularly used in Scala cannot be expressed in Kotlin, it only makes Java idioms slightly nicer to write down.

- Like in Scala, IDE support is ongoing work.

- Compared to both Java and Scala, the ecosystem isn't there.

Compared to Scala, Kotlin makes you pay 80% of the cost for 20% of the benefits.

I predict that future Java releases will keep cannibalizing Kotlin from the bottom, while Kotlin will fail to attract developers from more expressive languages.


To be fair, I'd hardly say that not knowing what a monad is = not knowing anything about FP.


There is nothing wrong with using Scala as a better Java. Indeed that's what I recommend to Scala beginners with a background in OO. That's the beauty of multi-paradigm languages: pick and choose the language subset that works for you.

Indeed I often program in a way that could be called "locally stateful, globally pure-functional". It's a good approach!


"locally stateful, globally pure-functional"

Can you expand on that, please? Because it seems to me that statefulness propagates up through the system. I can't imagine a system that was pure in the large but side-effecting in the small. I'd be interested to hear how that works...


One way is to use local mutable state within functions that are implemented so as not to rely on external state, which are then indistinguishable from pure functions to calling code.
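A trivial Java sketch of the idea: the method below mutates freely inside, but from the caller's point of view it is indistinguishable from a pure function (the sortedCopy name is just for illustration):

```java
import java.util.Arrays;

public class LocallyStateful {
    // Externally pure: same input contents in, same result out, and no
    // observable side effects. Internally it mutates a local copy freely.
    static int[] sortedCopy(int[] xs) {
        int[] local = Arrays.copyOf(xs, xs.length); // never mutate the caller's array
        Arrays.sort(local);                         // local, in-place mutation
        return local;
    }

    public static void main(String[] args) {
        int[] input = {3, 1, 2};
        int[] out = sortedCopy(input);
        System.out.println(Arrays.toString(out));   // [1, 2, 3]
        System.out.println(Arrays.toString(input)); // [3, 1, 2] -- caller's data untouched
    }
}
```

The mutation never escapes the function's scope, which is exactly why the statefulness doesn't propagate up through the system.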


Yes, that's what I had in mind.

If only there was an effect system that could guarantee purity and at the same time not be in the way (i.e. allow full or Scala-like type inference). Then purity would be guaranteed by types, like it is in Haskell. (N.B. I'm not asking for Haskell's purity-by-default. I advocate impure as the default, with a type guaranteeing purity.)


The Haskelly parts of Scala are some of its best parts. It's always been a hybrid language, and it's always had a compromised design as part of that. But the result is wonderful, and either half alone would be diminished. And frankly any "boss-friendly Haskell" would have to have Scala-level Java interop, i.e. full support for inheritance, existentials and so on, at which point you end up going down the same path as Scala.

If you want Java you know where to find it. Likewise with Haskell. Scala draws from both and that's its greatest strength.


This is true - I love Scala for its hybrid nature. I'm just saying that I wish prominent members of its community would too. For example, if you go on to the #scala IRC channel, half the regulars on there will be complaining about how terrible Scala is... for a variety of reasons that can be summed up as: it's not Haskell.

If you spend a bit of time there it fades to background noise and #scala is actually a really friendly and helpful place - but "x is hard to do in Scala because Scala [sucks]" is an odd impression to give a newcomer.


Hopefully any future languages that prioritize java interop will do things like implement java.util.List rather than insist on renaming "add" to "+=" along with ":+", "+:", "+", "++", "++:" so that I have to constantly call .asScala, .asJava on collections while passing them around.


Disagree. The collections are different and behave differently (e.g. the Scala ones are immutable), they should look different. Explicit .asScala and .asJava make the intent clear. They should only be necessary at the boundary.


Why not just import the implicit Java-Scala list conversions?


This obviously comes down to personal preference but I personally prefer the explicit conversions for the added clarity.


There's an excellent implementation of OCaml for the JVM, which is, sadly, not well advertised (its author is a bit on the shy side): http://www.ocamljava.org/


One particular "annoying" feature of Scala is the Java interoperability. It's obviously great because you have a large quantity of tools usable from the Java world, but it leads to large quantities of time spent wrapping them into clean Scala interfaces (or dealing with unmelodious APIs, pick your curse).

I would love it if I could avoid this with Frege. But, in the end, I would probably be better off using Haskell directly.


This is about 50% of the pain of Scala for me. I want to like scala, and some of the concepts behind it are neat, but in reality the act of writing it with the java interop, opaque unrepeatable errors, and compiler slowness makes it a fantastically unpleasant experience.


I'm sharing the same thoughts. I'm relieved I'm not alone in this world :)


> I would probably be better off using Haskell directly.

This is certainly true. You can see Frege as a subset of Haskell 2010 plus some GHC extensions plus the native interface (i.e. the Java FFI).

Unless you really need the JVM, Haskell (i.e. GHC) is probably the better choice.

Frege is just an offer for the minority that wants pure FP specifically on the JVM. Those people had no real choice until recently, given that CAL is dead, and E. Kmett's "ermine" not yet there.


I agree that when, in the process of learning Scala, you get beyond the "better Java" stage, the interop can feel frustrating. On the other hand, Scala gains so much from it. It's a huge net-positive IMHO.

As long as you are only bringing things in from Java-land, rather than going the other way though, the wrapping can be fairly minimal.


This is pretty cool and interesting, but I'm curious how they can claim purity if they still permit evaluation of code written in Java or other JVM languages, even if the types have to be redeclared. In other words, I'm curious if they are permitting execution of "impure" functions outside of a monad or unsafe function if the type has been redeclared. May have to pull it down and have a look.


On the related Reddit thread [0] a commenter links to a very nice blog that evaluates Frege [1] in a very succinct way. To answer your question by quoting this blog post:

"Wrapping mutable Java stuff in Frege means that everything will end up in an ST monad, which is almost always IO. That means that using Java’s HashSet will force anything that uses that to be in the IO monad itself. So there goes purity and referential transparency, which are some of the best parts of Haskell in the first place."

[0] https://www.reddit.com/r/haskell/comments/3gr7y6/infoq_frege...

[1] http://taylor.fausak.me/2015/06/25/frege-a-jvm-haskell/

EDIT: formatting


Thanks for this.

This is a bit misleading though, I think, as you can actually unwrap an ST, unlike an IO. My most common use for ST is to use a mutable object in a case where it's inefficient to keep copying an immutable one (e.g. large vector or matrix manipulations) - then, when you're done, the ST monad can be unwrapped and you get the final structure. The entire function remains pure, but inside of it there are calculations that rely on state - no state enters or leaves the function per se. This would mean using a hash set would be out of the question _for passing around outside of an ST monad_ but you could actually use it internally within functions.


> This would mean using a hash set would be out of the question _for passing around outside of an ST monad_ but you could actually use it internally within functions.

Yes, but this is what the author was saying: you end up writing your code in ST. Remember, there is no (general) way to freeze a mutable collection that was passed in and then use it as if it were immutable. There could be a dozen different threads busy modifying it.

However, as immutable objects are becoming more mainstream even in Java, there will still be plenty of opportunities to make polyglot JVM programs employing pure code. Just not with the Java collection classes.


Allowing foreign code doesn’t change the purity of the language. By default in Haskell, an FFI declaration must be in IO, e.g:

    -- int frob(int, char*)
    foreign import ccall "frob"
      frob :: Int -> CString -> IO Int
If you know such a function to be pure, you can use unsafePerformIO to tell the type system:

    refrob :: Int -> CString -> Int
    refrob x y = unsafePerformIO (frob x y)
If you lie to the type system and tell it that an impure function is pure, then all bets are off.


> If you know such a function to be pure, you can use unsafePerformIO to tell the type system

Better yet, if you know it to be pure, you can give it an appropriate type and just use it in pure code, like:

     pure native cos java.lang.Math.cos :: Double -> Double
which should be roughly equivalent to:

     foreign import jvm "java.lang.Math.cos" cos :: Double -> Double
except that no Haskell compiler that I know implements the "jvm" calling convention, for obvious reasons.

In fact, all primitive operations, types and so on are defined this way in the Frege Prelude.


Worth pointing out that this approach isn't particularly helpful when wrapping mutable Java objects, because Java land still has a reference and can/will mutate the object right underneath us. So we still need to write our Frege code in ST.


Yes, that is true.

But it is also not as relevant as one might think.

Consider how many Haskell programs actually use mutable C data, or export functions that take a foreign ptr to some mutable stuff.

Why should this be generally different in Frege? Useful Java APIs will be wrapped and sanitized through Frege libraries, and that is it then.

You can go directly to Java, just like you can directly go to C in Haskell, but it turns out one rarely actually does this.


> You can go directly to Java, just like you can directly go to C in Haskell, but it turns out one rarely actually does this.

This might be the reason that haskell has not enjoyed nearly the adoption of Scala and Clojure.


I guess that was my question - if they are making it use FFI as in Haskell, since it's Haskell-like not actual Haskell, or if it's something else that's specific to their JVM version.


It is an FFI, though with different syntax, and much richer: not only functions, but also foreign (in Frege they're called "native") types. Which is indispensable IMHO when you interface with an OO language where everything is classes and interfaces.

The different syntax should not be a big issue, as Haskell code that uses FFI targeting C is per se not portable.


The compiler isn't very 'user-friendly', but if you're literally looking for "Haskell on the JVM" and don't care about speed, Frege is a lot of fun once you get used to it. Impure functions/variables must be wrapped in a monad (ST) (take a look at [0]). Frege is almost identical to Haskell, but with a few differences [1].

[0] https://github.com/Frege/frege-native-gen/issues/14 [1] https://github.com/Frege/frege/wiki/Differences-between-Freg...
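For anyone who hasn't met ST: in plain Haskell, confining mutation to ST looks roughly like this minimal sketch (Frege's ST differs in some details):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Sum a list using a mutable accumulator; the mutation never
-- escapes runST, so sumST is observably a pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 10])  -- prints 55
```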


I would imagine you could have some combination of unsafePerformIO + an exception handler.
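A rough sketch of that combination in plain Haskell (`tryPure` is a hypothetical name, and this is essentially what the `spoon` package on Hackage wraps up; unsafePerformIO like this is very much a last resort):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, evaluate, try)
import System.IO.Unsafe (unsafePerformIO)

-- Catch an exception thrown by a pure computation and
-- recover it as a Maybe.
tryPure :: a -> Maybe a
tryPure x = case unsafePerformIO (try (evaluate x)) of
  Left (_ :: SomeException) -> Nothing
  Right v                   -> Just v

main :: IO ()
main = do
  print (tryPure (div 1 0 :: Int))  -- Nothing (division by zero)
  print (tryPure (2 + 2 :: Int))    -- Just 4
```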


Here's a write-up from 2013, Frege: Hello Java:

http://mmhelloworld.github.io/blog/2013/07/10/frege-hello-ja...


I know it's a minor thing, but why do people keep copying this annoying Haskell/ML pattern where I have to type the name of a function twice if I want to explicitly define its type?

    current :: IO String
    current = ...
(yeah, it's a minor issue compared to the uber-annoying problem of 'name clashes in record fields' https://wiki.haskell.org/Name_clashes_in_record_fields ...but still)


I personally find it incredibly clean. You separate the type signature from the implementation (mostly because a lot of the times it is unnecessary from the compiler's pov, but useful as documentation). To that end, I don't see it as any different from a javadoc or any kind of code documentation.

In addition, since Haskell supports multiple case definitions at the top level, inline type declarations would be redundant. For example -

    current a 0 :: IO Int Int -> Int
    current 0 b :: IO Int Int -> Int
    current 0 0 :: IO Int Int -> Int

vs.

    current :: IO Int Int -> Int
    current 0 a = ...
    current a 0 = ...
    current 0 0 = ...
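The signature sketched in the comment isn't well-formed Haskell; here is a minimal valid rendition of the same idea (`classify` is a hypothetical name) with one signature and several pattern-matching equations:

```haskell
-- One type signature, multiple equations matched top to bottom.
classify :: Int -> Int -> String
classify 0 0 = "both zero"
classify 0 _ = "first zero"
classify _ 0 = "second zero"
classify _ _ = "neither"

main :: IO ()
main = putStrLn (classify 0 3)  -- prints "first zero"
```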


What syntax do you propose?


This is the most obvious that comes to mind:

    current :: IO String : = do
        d <- Date.new ()  -- reads system timer, hence IO
        d.toString
...or for something that works better with a really huge type signature or list of arguments, you could extend it like this

    myFunctionFoo :: Int -> Int -> Int -> Int
    : a b c =
        2*a + 3*b + 4*c


Thanks for the proposals.

How would you make your first variant, work with pattern matching?

I like the second variant better, because it seems to work with pattern matching. It would also have to work without type declarations, letting Haskell infer the types.

But all in all I prefer the way function definition is implemented now. The record issue is more of a problem - for that one Frege actually has some improvements.


Thanks for examining the proposals. All in all I don't think improving this bit of syntax would actually mean this much, it's only worth considering when you start a new language from scratch.

But the records names / TDNR issue, this makes a real difference, I'll definitely take a look at Frege if I'll need functional programming on the JVM ...as Scala just seems too scary for me.

Now, about what I proposed above, I meant the two examples as different cases of the same syntax, newlines should not really make a difference until after the "=". So you'd use the second variant for pattern matching:

    myFunctionFoo :: Int -> Int -> Int -> Int
    : 1 b c =
        1984 - c
    : a b c =
        2*a + 3*b + 4*c
But it's probably better to direct your effort elsewhere. I see that even Typed Clojure prefers to repeat the name of a symbol instead of bothering to rearrange everything else just to avoid this repetition.

...and it's probably not worth spoiling the beauty of the ML-style-syntax for this. I admit it, I find it 10x easier to read either Lisps or C-syntax-like languages than MLs, but there is a beauty in the ML way, and math folks seem to love it, so better keep it that way :)


Actually, you can write:

    foo = (\a -> \b ->  (a+b)*(a-b)) :: Num z => z -> z -> z
However, it is quite un-idiomatic, of course.


What helps most beginners with Monads etc. is seeing it from a problem perspective: You have two things and want to combine them to get a third thing. Depending on how these things are 'wrapped' you need different concepts:

  Monoid: A + B (|+|)
  Functor: M[A] => (A => B) => M[B]
  Monad: M[A] => (A => M[B]) => M[B]
  Applicative: M[A] => M[A=>B] => M[B]
  Kleisli: (A => M[B]) => (B => M[C]) => (A => M[C])
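In concrete Haskell terms, with Maybe playing the role of `M` (a minimal illustration, not an exhaustive one):

```haskell
import Control.Monad ((>=>))

main :: IO ()
main = do
  -- Monoid: combine two things of the same type
  print ([1, 2] <> [3 :: Int])                  -- [1,2,3]
  -- Functor: apply a plain function inside a context
  print (fmap (+ 1) (Just (1 :: Int)))          -- Just 2
  -- Applicative: apply a wrapped function to a wrapped value
  print (Just (+ 1) <*> Just (1 :: Int))        -- Just 2
  -- Monad: sequence a computation that itself returns a context
  print (Just 1 >>= \x -> Just (x + 1 :: Int))  -- Just 2
  -- Kleisli composition: chain two monadic functions
  let f x = if x > 0 then Just (x * 2) else Nothing
      g y = Just (y + 1 :: Int)
  print ((f >=> g) 3)                           -- Just 7
```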


http://mmhelloworld.github.io/blog/2013/07/10/frege-hello-ja...

Am I correct in thinking that Frege cannot leverage third-party Java libraries? Clojure, another functional language built on the JVM, has great support for this (possibly) missing feature.


No, incorrect. You can use any existing bytecode, whether it was produced by Java or anything else.


Calling Java from Frege isn't hard, but there isn't currently a way to directly implement interfaces or inherit abstract classes in just Frege. You can create a sort of proxy in Java (or whatever) and implement methods in Frege, but it's not pretty. (I think; I haven't quite comprehended the examples.)

The article at https://mmhelloworld.github.io/blog/2013/07/10/frege-hello-j... talks about this, but doesn't quite explain the proxy mechanism, it just points out an example.


> but there isn't currently a way to directly implement interfaces or inherit abstract classes in just Frege.

Not quite true anymore. To be sure, some (inline) Java will still be needed.

Here is an example https://github.com/Frege/frege/blob/master/tests/comp/Issue2... which compiles to a class that implements java.util.Comparator. Instances thereof can be created from Frege by passing a custom comparison function and could be passed to Java.

The example would be useful in cases when you have Frege data in a Java collection and want to sort them. But it is merely there to show the possibilities.


I don't know about Frege, but why wouldn't it? Being able to call out over FFI or the Java equivalent is a pretty standard feature.


The Frege page says that Frege features

    Higher rank types
but says nothing about higher-kinded types (HKTs). The two are different. The rank of a type describes the depth at which universal quantifiers appear contravariantly. This is quite different from higher-kinded types, which allow type-level computation. For good monad support one uses HKTs. I wonder if Frege has HKTs and the description on the original page made a mistake?
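For the record, the distinction in GHC Haskell terms (a minimal sketch; whether Frege accepts identical syntax is a separate question):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Higher-KINDED: the parameter f is itself a type constructor
-- (kind * -> *); this is what Functor/Monad abstractions need.
data Wrap f a = Wrap (f a)

-- Higher-RANK: the forall sits to the left of an arrow, so the
-- caller must supply a genuinely polymorphic function.
applyToBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyToBoth f (x, y) = (f x, f y)

main :: IO ()
main = print (applyToBoth id (1, True))  -- prints (1,True)
```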


It has both.

Somewhere it is said that it has all language features of Haskell 2010. This implies higher kinded types.

But in addition to Haskell 2010, Frege also has higher rank types.


Thanks.

Type inference for types of rank > 2 is undecidable, so how does this square with the claim that Frege has type inference? I'm not a Haskell expert, but I think the

    {-# LANGUAGE RankNTypes #-} 
extension enables HRTs in Haskell too.


Type inference for higher ranks is in fact undecidable, but not type checking. Hence, exactly like in Haskell with RankNTypes, you need to annotate your higher rank functions.

Actually, the Frege compiler employs an algorithm described in Simon Peyton Jones's paper "Practical type inference for arbitrary-rank types". Ordinary HM types are inferred, and higher-rank types are checked.


Thanks. It's great to have basically Haskell on the JVM. I'll give it a try.


I think the name was a poor choice. It would have been a better fit for a Prolog-style language, and most people will probably mispronounce it.


Given that Gottlob Frege invented higher order functions and currying, I am of a slightly different opinion.

Regarding the pronunciation, who cares? For example, in Germany, half of the people say "Ay-Bee-Em", the other half pronounce the letters IBM in the German way.


In what sense did Frege invent higher-order functions?


Well, probably the word "discovered" would fit better. :)

Here is a paragraph from "Funktion und Begriff" (1891):

> Wie nun Funktionen von Gegenständen grundverschieden sind, so sind auch Funktionen, deren Argumente Funktionen sind und sein müssen, grundverschieden von Funktionen, deren Argumente Gegenstände sind und nichts anderes sein können. Diese nenne ich Funktionen erster, jene Funktionen zweiter Stufe.

For non-German speakers: Frege makes a distinction between functions that take things as arguments and functions whose arguments are and must be functions. He calls the former ones "first order functions" and the latter ones "second order functions".

Today we call functions whose order is greater than one "higher order".


Thanks, that's a nice quote. I had read that text as a student, but overlooked this morsel.

I wonder if Frege's older Begriffsschrift (1879) doesn't already discuss, or at least mention, higher-order functions. After all, in this text Frege explains his new conception of function.

I also wonder if Cantor would have been aware that this is possible.


I haven't read the "Begriffsschrift", but you are right. It is probable that he had developed his formal apparatus already then.


You can try Frege using your web browser:

http://try.frege-lang.org/


Thanks for sharing!

With :java it shows the produced Java code.


How is the Frege compiler written? Is it a new compiler, built from scratch, or does it borrow heavily from a Haskell compiler?


In the InfoQ article this topic points to, I've said something about this.

The short answer is that it is not derived from existing Haskell compilers.


I should have read the InfoQ article more carefully.

If you had a Frege -> Haskell translator, which shouldn't be hard to do, except in edge cases, you could use GHC as a testing oracle for the Frege compiler.


Yes, I have often considered how I could employ GHC, for example. But it turns out, as always, that the devil is in the details. Ideally, one would think you could get away with just writing another backend and implementing another FFI calling convention for the JVM. Yet, projects like LambdaVM that pursue this approach are stalled or have been given up completely.

The difficulties are clearly stated in the Haskell wiki here https://wiki.haskell.org/GHC:FAQ#Why_isn.27t_GHC_available_f...


I mean something simpler that doesn't involve retargeting GHC.

Write a Frege-to-Haskell compiler F2H, and then for each test T you simply compare the output of running Frege( T ) with the output of running GHC( F2H( T ) ). Maybe you have to transform the outputs into a universal format such as ASCII strings, but that should be straightforward. Now you have an oracle for random Frege programs.


The examples reminded me this artwork by Manuel Simoni: http://2.bp.blogspot.com/-nPo8up-CfXc/TmjmkzfY5NI/AAAAAAAAA2...


    main = putStrLn "\n"
Yes, that was very difficult. I see where you're coming from.


Actually putStrLn "" is enough, since it prints a newline after the string :-)


It's called humor. You should get some.


I hope this argument fades away into eternity.



