Hacker News | ipkn's comments

I agree that allowing long polling to be implemented with Crow is important; I just didn't know a good way to do that. Your suggestion is a big help.

I think supporting both ways is better, as long as there is enough explanation. I don't want to drop the simpler way to do the same job.

  CROW_ROUTE(app, "/about")
    ([](){
        return "About Crow example";
    });

  CROW_ROUTE(app, "/about")
    ([](Response& res){
        res.send("About Crow example");
    });


^ This is exactly how Twisted implements its web resources.


I'm the author of this project, and it's not complete yet. I planned to publish it after finishing the basic features and documentation.


It looks very convenient for building relatively simple web services (based on my limited knowledge of C++; others would probably get more mileage out of it). It looks like the kind of moderately small C++ project implementing something I'm already very familiar with, so I think it will be useful for learning C++.

I have a question: what is the reason for having only .h files, with no .cpp files, in the main part of the microframework? Did something force you to do it that way, or do you simply prefer that code structure?


Header-only libraries have several advantages (see http://en.wikipedia.org/wiki/Header-only). Most notably, they are dead simple to include in a project, as opposed to having to separately build and link a shared library.


Yeah, but usually "header-only" implies templates, of which there seem to be few in this case. Without templates there's little point in putting everything in headers, because all the code becomes inline.

Inlining everything is bad because:

  - it makes the binary much bigger.
  - the smallest code change forces library users (applications) to be recompiled.
For libraries it's usually better to go to the other extreme: hide as much code and data as possible in the source files. See, for example, https://en.wikipedia.org/wiki/Opaque_pointer. It greatly reduces the need for applications to be recompiled when the library is updated.

It's actually very unusual to put all function implementations inline (edit: as in, inside the class declaration) in C++. I wonder whether the author is a heavy Java user.


I'd even argue that, with an LTO-optimizing compiler (e.g. clang -flto), writing functions inline is basically obsolete. In my experience clang/llvm is very much able to inline code across object file boundaries, and is even capable of analyzing function-pointer assignments and inlining the respective functions.


> because all the code becomes inline.

The language says that's just a suggestion to the compiler, right? Does any compiler actually inline for non-trivial methods?


`inline` has a different meaning in C++. The compiler is free to inline the call if it wishes to, but `inline` means that the same function can be defined in multiple translation units without breaking the one definition rule. Example:

    // header.hpp
    inline int f(int x) { return x + 1; }

    // a.cpp
    #include "header.hpp"

    // b.cpp
    #include "header.hpp"
If f was not marked inline, linking a.cpp and b.cpp together would find conflicting definitions of f, and the link would fail. `inline` lets the linker ignore this: it simply picks one of the multiple definitions as the 'real' one and moves on.


What you are describing sounds a lot like 'static' in C, which marks a function to not export its name, so it can't be seen from outside the file.

'inline' in C is a hint to the compiler that you'd like the function inlined.

You can combine the two, and, in fact, it seems like a good idea IME to also use static if you're using inline.

Are you sure C++ is that different?


`static` will result in a copy of f for every translation unit (without LTO, at least). `inline` will not. `static inline` is effectively the same as `static`, with a slight hint to the compiler to inline the call.

`inline` is used extensively in C++ to make header-only libraries possible; otherwise you'd get constant symbol clashes during linking. With `static` you would get enormous size blowup. It has little to do with the actual inlining of the call, which is mostly up to the compiler.

In C, the situation is complicated. `inline` does not exist in C89. GCC has an interpretation of it for C89 (-std=gnu89), which differs from the C99 interpretation. The only safe way to use inline in C is usually to couple it with `static`, unless you know what you're doing. The C99 interpretation of inline is similar to C++, but once again not exactly. For example:

    // header.h
    // int f(int x);
    inline int f(int x) { return x + 1; }
    // a.h
    int a(int x);
    // a.c
    #include "header.h"
    #include "a.h"
    int a(int x) { return f(x); }

    // b.h
    int b(int x);
    // b.c
    #include "header.h"
    #include "b.h"
    int b(int x) { return f(x); }

    // main.c
    #include "a.h"
    #include "b.h"
    int main(int argc, char **argv) {
      return a(argc) + b(argc);
    }
This is code that compiles perfectly fine in C++, but is invalid C: when the compiler decides not to inline the calls to f, there is no external definition of f to link against. But when one declares f to have external linkage (by uncommenting that line in header.h), we get 'multiple definition' errors instead.


Thanks for that. I always hoped that static C functions would at least not be generated if they are never called, though I can't see anything that guarantees it.

It sounds like you've confirmed my intuition about inline in C, and I find inline to be only marginally useful at best. inline functions are syntactically prettier than macros, but they lose the other major benefit of macros, which is increased flexibility about typing and being able to interact with syntax in ways that functions can't. I get the impression that inline probably didn't need to be included in the standard, or, at least, that they blew the opportunity to add something more useful.

C's situation still seems less complicated than C++'s. I can't grasp exactly what C++ 'inline' actually tells the compiler to do, based on your description. It sounds like 'inline' in C++ is just a smarter 'static'. Why can't those smarts be implanted into 'static'?


`inline` indicates to the compiler: "this function has external linkage, and no matter how many times it's defined it is to be defined only once in the final linked output". It's the same as if there were no inline, but when the linker finds multiple definitions of the same function it is allowed to ignore them instead of failing. It also serves as an inlining hint to the compiler in its free time.

Note that you don't necessarily have to type `inline` to have inline functions. Methods defined in the declaration of a class are implicitly inline; so are template functions (but not explicit specializations).

The reason it's called `inline` instead of something else probably has something to do with the committee's aversion to new keywords, and commitment to backwards compatibility. Changing `static` would probably break a lot of code: think what would happen to static variables inside static functions.


'static' is old. It must have meant something to Kernighan and Ritchie.

I see no connection to the word inline in the C++ meaning. In C, at least, inline means inline.

My guess is that the C++ inline got its meaning from the winding path of C++ history, and only makes sense in the context of that history.


My guess: in C++, methods implemented inside the class declaration are implicitly "inline". It could have been done that way to avoid the problem outlined above.


Note that in C++, if you really just want duplicate definitions in multiple translation units, you should use an anonymous namespace...


No, not everything will be inlined, but there will still be much more inlining than there should be.

And whether or not the compiler inlines the code, applications will still have to be recompiled whenever there's a code change in the library.


It may seem unusual because you haven't seen it, but header-only libraries are perfectly valid for small, focused libraries.


How do you justify mass recompilations for every minor version bump or bugfix to your users?


This is only a problem if the user structured their code horribly. The library handles HTTP requests, it should be on the edge of the architecture.

Side note: C++ is fantastic in this regard because it makes you suffer every time for excessive coupling. The compile/link times act as a recognizable metric that devs have an interest in minimizing, and the process of doing so produces better code. I love that it is ruthless in punishing poor design.


> This is only a problem if the user structured their code horribly. The library handles HTTP requests, it should be on the edge of the architecture.

Even then you will be rebuilding and redeploying the edge of your architecture every time this library gets even a minor version number bump.

I would choose not to have to do that, every time.


Just a couple of points.

1. It is an error to couple this library closely enough to the rest of a project's code to cause the condition you note to exist. This kind of library is best used in a small project or as part of the implementation of a user-defined abstraction interface (an abstraction specific to his project's use cases that would not make sense being included in the library code). A small project will compile quickly anyway, and the second kind of project will only need to be compiled if the abstraction's interface changes.

2. C++ compile times aren't that bad in small code bases. I won't pretend they don't become atrocious when the codebase grows large and spreads out over a large number of compilation units (or when coupling is excessive).


> 1. It is an error to couple this library closely enough to the rest of a project's code to cause the condition you note to exist.

Any compilation unit (source file/module) which calls a function in this library will have to be recompiled if any of the called functions change which means that your application will also have to be re-linked. There's just no way of getting around that.


C++ makes maintaining ABI compatibility quite difficult, so in practice libraries with a C++ interface tend to default to requiring that anyway.


Yes, it's difficult by default, but the solution is https://en.wikipedia.org/wiki/Opaque_pointer and it is quite well-known in the C++ world.


Looks really nice! To placate the people who will complain about the lack of an example in the readme, I'd suggest you just copy the example.cpp file into the readme; that'll do until you have more time.


This looks nice. I would suggest avoiding macros in the final release; it should be possible to implement CROW_ROUTE() using template meta-programming instead of #define's.


I also want to remove CROW_ROUTE, but with the current C++ standard it cannot be avoided.

To check at compile time whether a handler is valid for a given URL, the `url' string literal is needed both at compile time and at run time. A const char* value is not a valid template argument, and an argument to a non-constexpr function cannot be a constexpr value. So I used a macro to provide the `url' argument twice: as a template argument through a constexpr function, and as an ordinary run-time argument.


You can still do metaprogramming on single character constants and with a bunch of really ugly hackery make it somewhat pretty.

You might be interested in metaparse [1] which can greatly simplify compile time parsing of strings but has a very steep learning curve.

[1]: http://abel.web.elte.hu/mpllibs/metaparse/


I already considered using a template with single-character constants, and I thought the technique didn't have much benefit over the macro version. Maybe compile-time routing function generation would be possible with it (and would be faster), but it requires HUGE work, I think. I will try and benchmark it later.


I'd hope that in the end, the efficiency of the routing matters more than templates vs macros. You will never finish anything if you pay too much attention to all the purists.

Useful strings at compile time are a desirable feature beyond C++, though. A perfect hash could be a nice solution, I thought, but when I got around to trying out gperf, it was much slower than I expected. Probably too slow to use in ordinary situations. I guess gperf is for when (runtime) performance is incredibly important.

Another possible approach to strings at compile time is something like flex, or re2c. I haven't tested them in this type of scenario. But, apparently Zed Shaw used ragel to parse http in Mongrel to excellent effect. My problem with ragel is its complicated syntax.


Took a look at the source code. Noticed

  crow::black_magic::is_equ_p(...)
I haven't yet tried the framework, but I already like you.


So, you are alive ;)


very cool


Love the black magic!


the power of `constexpr'!


If you don't want the code generation step and C++ is the only language to use, Dumpable[1] can be useful.

[1] https://github.com/ipkn/dumpable

