Hacker News | past | comments | ask | show | jobs | submit | jawher's comments

May I suggest you give https://github.com/jawher/mow.cli a look/try?

Disclaimer: I'm the author of mow.cli ;)


Thanks for citing mow.cli [1] and glad you're liking it!

And you are spot-on regarding closures: it was a design choice made expressly for the purpose of being able to:

- scope command-specific flags and args, instead of having one giant catch-all context map

- declare real and typed Go variables instead of something like context.String("--option")

- use the same pattern as the flag std package (flag.Bool, etc.) which I liked a lot

Disclaimer: I am the author of mow.cli

[1] https://github.com/jawher/mow.cli


No problem, and thanks for creating it! I also enjoyed how easy it is to extend it to support, say, enums: it was just a matter of implementing the flag interface on my type and calling app.VarOpt to set it up. Very straightforward.


docopt() does a few things and does them well:

* parse the call arguments and split them into options and arguments

* perform a very basic validation, i.e. handle the options that take a value and those that don't

Granted, mow.cli is much more complex. But it also does much more:

* Generates contextual help messages

* Enforces arbitrarily complex validation rules (see the cp example) in your stead, meaning less code (and fewer bugs) for you to write

* Handles the routing, i.e. you don't need multi-level switch/case and if/else blocks to select and execute the correct code path based on the input arguments.

I talked about this in a previous article [1].

And finally, the library user doesn't have to deal with all this complexity: this is an implementation detail which I thought would interest technical readers, hence the blog post.

As a user, you only deal with the much easier-to-grasp exposed API [2]

[1] https://jawher.me/2015/01/13/parsing-command-line-arguments-...

[2] http://godoc.org/github.com/jawher/mow.cli


Parser combinators would certainly have been easier for me, the library author, but not necessarily for the end user.

Contrast how it is done now:

  [-R [-H | -L | -P]] [-fi | -n] [-apvX] SRC... DST
With a parser combinator based approach:

  and(
    optional(and('-R', or('-H', '-L', '-P'))),
    optional(or(
                '-fi', 
                '-n'
             )
    ),
    optional('-apvX'),
    repeatable('SRC'),
    'DST')
And this doesn't even handle the fact that option order is not significant.
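To make the contrast concrete, here's a minimal, hypothetical combinator sketch in Python (this is an illustration, not mow.cli's actual internals): each combinator consumes a token list and returns the matched tokens plus the remainder, or None on failure.

```python
# Hypothetical parser-combinator sketch: each combinator takes a list
# of tokens and returns (matched, rest), or None when it fails to match.

def lit(tok):
    def parse(tokens):
        if tokens and tokens[0] == tok:
            return [tok], tokens[1:]
        return None
    return parse

def and_(*parsers):
    # all parsers must match, in order
    def parse(tokens):
        matched = []
        for p in parsers:
            res = p(tokens)
            if res is None:
                return None
            m, tokens = res
            matched += m
        return matched, tokens
    return parse

def or_(*parsers):
    # first parser that matches wins
    def parse(tokens):
        for p in parsers:
            res = p(tokens)
            if res is not None:
                return res
        return None
    return parse

def optional(p):
    # match p if possible, otherwise consume nothing
    def parse(tokens):
        res = p(tokens)
        return res if res is not None else ([], tokens)
    return parse

# a heavily simplified spec: [-n] SRC DST
spec = and_(optional(lit("-n")), lit("SRC"), lit("DST"))
```

Composing or_/optional/and_ this way is pleasant for the library author, but the usage-string form stays much closer to what end users already read in man pages.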


For that I think you can do something similar to what mpc for C (https://github.com/orangeduck/mpc) and LPeg for Lua (http://www.inf.puc-rio.br/~roberto/lpeg/) do, which is to provide the parsing machinery and build a small DSL for users on top of that same machinery.


> The JVM, at its core, is not working with static types. The bytecode itself is free of static types, except for when you want to invoke a method, in which case you need a concrete name for the type for which you invoke that method ...

Not sure what gave you this impression, as the majority of Java bytecode instructions are typed. For example, the add instruction comes in typed variants: iadd (for ints), ladd (for longs), fadd (for floats), dadd (for doubles), etc.

The same is true for most other instructions: the other arithmetic instructions (div, sub, etc.), the comparison instructions (*cmp), pushing constants on the stack, setting and loading local variables, returning values from methods, etc.

http://en.wikipedia.org/wiki/Java_bytecode_instruction_listi...

invokedynamic, as you point out, was added to make implementing dynamic languages on the JVM easier, because the JVM was too statically typed at its core.


Arithmetic operations on numbers are not polymorphic, but polymorphism has nothing to do with static typing per se. You're being confused here by the special treatment the JVM gives to primitives, treatment that was needed to avoid boxing/unboxing costs. That's a separate discussion, and note that hidden boxing/unboxing costs can also happen in Scala, which treats numbers as Objects.

Disregarding primitives, the JVM doesn't give a crap about what types you push onto or pop off the stack, or what values you return.

invokedynamic is nothing more than an invokevirtual or maybe an invokeinterface, with the difference that the actual method lookup logic (specific to the Java language) is overridden by your own logic; otherwise it's subject to the same optimizations the JVM performs on virtual method calls, like code inlining:

http://cr.openjdk.java.net/~jrose/pres/200910-VMIL.pdf

> ... because the JVM was too statically typed at its core

Nice hand-waving of an argument, by throwing a useless Wikipedia link in there as some kind of appeal to authority.

I can do that too ... the core of the JVM (the HotSpot introduced in 1.4) is actually based on Strongtalk, a Smalltalk implementation that used optional typing for type-safety, not for performance:

http://strongtalk.org/ + http://en.wikipedia.org/wiki/HotSpot#History


> Nice hand-waving of an argument, by throwing a useless Wikipedia link in there as some kind of appeal to authority

No need to get aggressive over this :) I disagreed with your first comment regarding the dynamic nature of the JVM, and replied trying to explain why.

I posted the Wikipedia link not as an "appeal to authority", but to give readers a full listing of the bytecode instructions, so that they can check what I was saying for themselves.

> Disregarding primitives, the JVM doesn't give a crap about what types you push/pop the stack or what values you return.

It depends on how you see things: the JVM can't possibly provide instructions for every possible user type, so apart from primitives, object types are passed around as pointers or references. But whenever you try to do anything other than storing/loading them on the stack, type checking kicks in, ensuring that the reference being manipulated has the right type.

For instance, the putfield instruction doesn't just take the field name where the top of the stack is going to get stored. It also takes the type of the field as a parameter, to ensure that the types are compatible.

Contrast this to Python's bytecode, where the equivalent STORE_NAME (or the other variants) doesn't ask you to provide type information.
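The Python side of the contrast is easy to check with the stdlib dis module; disassembling a trivial assignment shows a STORE_NAME with no type attached:

```python
import dis
import io

# CPython bytecode attaches no type to a store: the same STORE_NAME
# instruction is emitted whether x ends up holding an int, a string,
# or anything else -- unlike the JVM's typed istore/putfield family.
buf = io.StringIO()
dis.dis(compile("x = 42", "<demo>", "exec"), file=buf)
listing = buf.getvalue()
print(listing)  # the listing contains e.g. "STORE_NAME  0 (x)"
```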

But then again, we might be splitting hairs here: since this type checking happens at runtime (when the JVM is running your code), you could indeed question calling it "static typing", which is usually performed at compile time (and is partially performed by the Java compiler, for example).


> BUT that is happy to let you write a Windows program or a Mac program or a Linux program if that's what you want.

Care to explain the last bit? Because Java also lets you invoke native (and hence platform-specific) code if you want to.


I'm told it's gotten a lot better in Java land, but back in the early days invoking native code was quite a pain. There were whole books available on how to deal with this dark art.

The main thing, though, was simply attitude. Sun's attitude was that Java was a separate platform. You write for Java. It is an inconvenient truth that any given instance of Java happens to be running on Windows, or Linux, or Mac, or whatever--don't worry about that; just stick to the Java platform. If you wanted a native interface on top of your Java code, you were thinking heretical thoughts. So Java came with AWT and Swing. It would have been inconceivable for Sun to include a GUI meant just for Windows, and one meant just for Mac, and so on.

You want to do that in Mono? They are fine with that. They include a GUI toolkit for Mac that only works on Mac. They include a WinForms interface for developing native Windows stuff (although they do somewhat support it on non-Windows so people can use it to port Windows code).

The Mono developers' attitude seems to be more toward providing useful tools to programmers, so we can do what we want, rather than trying to provide a platform that we should leave our native platforms for.

To put it succinctly, if you said "I want to write a Windows program" or "I want to write a Mac program", Sun would have said "Write a Java program instead!". The Mono people say "Cool. We've got some neat tools you can use for that!".


Perhaps the distinction is that the Mono devs favor wrapping native UI kits instead of inventing a platform independent abstraction.


Also the difference between using .NET's P/Invoke (DllImport) interface versus Java's JNI is about as big as can be in terms of developer pain and overhead.


Have a look at JNA. It's about as easy to use as P/Invoke.

Of course Java's days on the desktop are kind of numbered with the various browser and OS vendors getting visibly concerned about the security of the JRE, but that's a different matter.


Unfortunately it's still a huge pain to use JNA with stuff that returns unions (or a pointer to a different type depending on the usage context). That makes things like X library wrappers really difficult to write in a clean way.

P/Invoke is still much more flexible.


Is it just me, or does anybody else think that polluting your code with debugging and logging information is not such a good idea?

I understand the value of this information, but changing your method signatures to carry it goes too far in my opinion, especially since there are less obtrusive ways to achieve the same functionality: stack traces. I'm a Java developer, so I don't know the equivalent in C#, but in Java you can get the stack trace (or call stack) of the current thread:

    Thread.currentThread().getStackTrace()
This returns an array containing all the info you'd like: method names, line numbers and file names.


In .NET, you can just do "new StackTrace()" to capture the current stack of the calling thread. One minor risk with this is that an optimiser may have elided function calls (e.g. converted them to tail calls). I'm not sure how realistic this is in the C#/.NET case, but having the information injected by the compiler eliminates the possibility.

Also, .NET stack traces only contain line numbers if the debugging symbols are deployed. With the attributes, the C# compiler can inject line numbers at compile time, obviating the need for symbols. Similarly, if a vendor uses an obfuscator, the stack trace will include obfuscated names but the compiler attributes will have injected the real names, making it much easier for developers to read the log files.

Finally, for scenarios like INotifyPropertyChanged, having the caller name injected at compile time is much more efficient at run-time than capturing an entire stack trace just to figure out which property setter is running. This may not be a big consideration for logging, but for property setters it is definitely worth bearing in mind.


> Maven is a bad dependency/package manager because of the complicated nuance

What nuance?

>XML

Fair point. I don't know what they were thinking when they decided not to use attributes, but come on, XML is not that bad.

> Java 8 is even planning on using Project Jigsaw to get rid of it (IIRC).

Wrong: Jigsaw is a modularity tool; Maven is a build tool. Jigsaw is intended as a competitor to OSGi, not Maven. Jigsaw won't compile your project, resolve your dependencies, or deploy your product.

> Something like Grape or Buildr would be a better model.

I beg to differ. I'm no big fan of Maven either, but the idea behind it is pure genius, i.e. an implicit source layout and build lifecycle. With imperative tools like Ant and co., you'd repeat the same snippets again and again: compile all the files in the src dir into a bin dir, copy a bunch of resource files to the bin dir, package the whole thing into a jar or a war, place it in a target dir. Dependency management also used to be a tedious task, with you hunting for the correct versions of jars, placing them in a lib directory, going as far as versioning them yourself. Most tools now tend to emulate Maven's way of dependency management.

Nothing like this with Maven: you just place your code and resources in standard locations, create a pom.xml file with just the info about your project (name, version and type), possibly the dependencies, and with standard commands (mvn install for example), it will fetch the dependencies, build your project, package it and install it.
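For illustration, such a pom.xml can be as small as this (the group/artifact ids below are placeholders; real projects usually add a dependencies section):

```
  <project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>demo</artifactId>
    <version>1.0.0</version>
  </project>
```

With sources under src/main/java, mvn package is enough to compile and jar the project.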

Maven has its warts: verbose XML, unexpected behaviour in some cases, hard to customize (mostly when you try hard not to follow the standards), etc. But these are minor and fixable issues compared to the value it provides to countless developers who could fetch a project source and build it with one standard command.


Can't you take the same approach ClojureScript took, i.e. generate a straightforward translation to JavaScript, and then let Google's Closure Compiler take care of the optimization?

For instance, for this Roy generated JS:

    var True = function(){};
    var False = function(){};
    var getBool = function(b, ifTrue, ifFalse) {
        return (function() {
            if(b instanceof True) {
                return ifTrue;
            } else if(b instanceof False) {
                return ifFalse;
            }
        })();
    }

The Closure Compiler generates:

    var True = function() {
    }, False = function() {
    }, getBool = function(b, c, d) {
      var a;
      b instanceof True ? a = c : b instanceof False && (a = d);
      return a
    };


Roy has the same goal as CoffeeScript - create output that would be similar to how a JS dev would write it. If you use Roy but then decide you don't like it, just take the output and keep going.

I could rely on Closure for optimal performance but what I really want is optimal readability.


While I share your enthusiasm, I'd like to point out some problems with "Sending rovers to all the planets": four of them are gas giants, with literally no solid ground to land on. Venus is a no-go too, with its crushing atmospheric pressure, sulfuric acid clouds and 400+ °C temperatures.


Where there's a problem, there's at least one solution. I have no idea how to build a bot that can run science experiments in 400+ °C sulfuric acid clouds, but I bet it's possible.

