With Musk upstaging Trump in White House interviews, you'd expect him to have been ejected long ago. Hence the leverage must exist - something on the level of helping Trump rig the election. The sort of leverage that would completely destroy Trump if it ever went public.
It's more about overall look and feel, which in large part comes from the glyph proportions (the x-height), spacing and weights - https://i.imgur.com/mygqn3H.png
Glyph proportions between Whitney and Nebula are almost identical. As are their weights. Source Sans is substantially heavier and more dense looking.
While individual glyphs may be closer between Nebula and Source Sans, the overall feel of Nebula is that of Whitney.
With more complex structures, you need to specify how it should behave. 'More complex' here essentially means the type must have no virtual functions and no virtual base classes, be trivially copyable and trivially constructible, and satisfy a few other constraints.
Basically, if it seems like memcpying the structure might be a reasonable thing to do, it'll work. This is why types like std::array will work but std::vector and std::string won't. It can handle those types when inserted individually but not in aggregate since there's no reflection.
The compiler barf does tell the user why it was rejected but... average C++ errors, even with concepts. Not the greatest.
main.cpp:136:52: note: the expression ‘is_trivial_v [with T = UserPacket]’ evaluated to ‘false’
136 | concept pod = std::is_standard_layout_v<T> && std::is_trivial_v<T>;
Apparently Tesla got caught cooking their books to the tune of 1.4B [1], so the numbers reported on the earnings call might not be as precise as one would expect.
For legacy codebases, switching the compiler is certainly out of the question. For everyone else, why do you think it would be an issue to use the C++ compiler?
I think this is the general style for the Godot framework. They use a limited subset of C++, avoiding the STL and some modern features, and make only limited use of templates.
> I personally quite enjoy programming in “C with methods, templates, namespaces and overloading”, avoiding the more complicated elements of C++ (like move semantics)
Don't we all.
Except for the committee, of course, and its entourage.
that's a fair point and you're correct. we will have the SLAs for latency documented and published soon. in the meantime, please try it out and give us your feedback :)
The site is very snappy, which matches your pitch well.
However, your principal selling point - the nanosecond-level speed - falls flat, because it's a property that only matters in self-hosted scenarios. Once you put your super speedy stuff behind a web-based API, that selling point becomes completely meaningless. The fact that our data is handled really quickly once it hits your servers doesn't mean much. I am sure you are perfectly aware of that.
That is, your pitch is disconnected from your actual offering. If you are selling speed, it needs to be a product, not a service. It doesn't need to be open source though - just look at something like kdb+.
our main target for the "performance" value proposition is companies and businesses that will set up HPKV either locally (Enterprise plan) for nanosecond performance, or in the cloud provider of their choosing and work via the RIOC API (Business plan), getting latency in the ~15 microsecond range over the network. however, you're totally right that this doesn't matter much if you're using it over REST or WebSocket. for the Pro tier, our value proposition is still the fastest managed KV store (you still get <80 ms for writes with a ~30 ms ping to our servers), plus features such as bi-directional WS, atomic operations and range scans on top of basic operations.
but given your comment, I think we should perhaps rethink how we're presenting the product. thanks for the feedback again :)