Hacker News | F-0X's comments

array.includes(x) is really a function includes(array, x), and this is O(n,1).


I've never seen Landau's notation with two variables; I don't think it exists. At best you could say it is O(n)*O(1), which is formally equivalent to O(n).


You can easily extend Landau's notation to multiple variables (or rather functions of multiple arguments). It's even pretty commonly done, eg a common statement is something like "Getting the k largest elements out of an unsorted input of n elements, takes O(k log n) time, if you use a heap."
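For what it's worth, a minimal Rust sketch of the heap approach mentioned above (the function name is mine, just for illustration):

```rust
use std::collections::BinaryHeap;

// k largest of n elements: heapifying is O(n), each pop is O(log n),
// so k pops give O(n + k log n) total -- both variables appear in the bound.
fn k_largest(items: Vec<i32>, k: usize) -> Vec<i32> {
    let mut heap = BinaryHeap::from(items); // max-heap, built in O(n)
    (0..k).filter_map(|_| heap.pop()).collect() // O(k log n)
}

fn main() {
    assert_eq!(k_largest(vec![3, 1, 4, 1, 5, 9, 2, 6], 3), vec![9, 6, 5]);
}
```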

> At best you could say it is O(n)*O(1) which is formally equivalent to O(n).

Sorry, that's not how you would use or define multi-variable Big-O notation, if you want it to make any sense.

See eg https://en.wikipedia.org/wiki/Big_O_notation#Multiple_variab... for more details.


>You can easily extend Landau's notation to multiple variables [...] O(k log n)

Yes, but the parent wrote O(n, 1), not O(n * 1). Does O(k, log n) exist?

> Sorry, that's not how you would use or define multi-variable Big-O notation, if you want it to make any sense.

What do you mean? If I have f: n -> n and g: n -> 1, can I not say O(f) * O(g) = O(f*g)? See https://math.stackexchange.com/a/2317054 for a demonstration.


First, the usual way of just writing O(1) or O(n) or O(n^2) is kind of an abuse of notation.

Big-O is a mapping from a function, like O(n -> 1) or (n -> n) or (n -> n^2) or (x -> sin x) to a set of functions.

Functions don't care about renaming of arguments. (n -> 2n) is the same function as (x -> 2x).

When you write O(k log n) what you really mean is O((k, n) -> k log n)

And you can indeed take it apart:

O((k, n) -> k) * O((k, n) -> log n) == O((k, n) -> k log n)

(For some suitable definition of what multiplication of sets of functions means.)

That's very different from functions with single arguments like in your example.

(And, yes, your example works. It's just a different thing.)


> And it hit me that it's actually O(n), because we've got an array and have to iterate over every element to see if the element matches char.

No, it's actually O(1). n refers to the _input_, which is always a single character. The iteration over an array (fixed at 5 elements) means a maximum of 5 comparisons. O(isVowel) = 5.
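A quick sketch of that point in Rust (is_vowel here is my own stand-in, not the original poster's code): the array length is a compile-time constant, so the cost doesn't grow with anything.

```rust
// Membership test against a fixed 5-element array: at most 5 comparisons
// no matter what the input character is, hence O(1).
fn is_vowel(c: char) -> bool {
    ['a', 'e', 'i', 'o', 'u'].contains(&c)
}

fn main() {
    assert!(is_vowel('e'));
    assert!(!is_vowel('x'));
}
```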


> And the old "I have suffered therefore the others have to suffer".

Actually, it's an extremely important point. Tools which optimise for the professional are better than those which do so for the noob. Whenever you can achieve both, do so, but never side with the beginner otherwise. People should learn their tools, and they shouldn't be beginners for long, so making things "friendly" at the cost of rewarding expertise is poor design.


This is absolutely true. However, if you can make a tool more learnable for new users without sacrificing its optimization for power users, you should. One way to do that is to not gratuitously invent new terms, and to use terms people are familiar with.

Using conventions like "M-x" throughout the documentation, even with a note at the front of the manual that "We refer to Alt as Meta for historical reasons.", is needlessly baroque. Worse yet, maintaining a distinction that those aren't effectively the same thing for almost all users is needlessly unhelpful. (Yes, it's possible to make Alt and Meta different keys under X with use of modifier maps. That explanation belongs in an "advanced keybindings" chapter late in the documentation.)

It's certainly possible to learn that, and a hundred other gratuitous weirdnesses, but they don't actually add value that justifies imposing that weirdness on every user.


I think maybe you're over-emphasizing the problem with this terminology. When I first started learning Emacs it took me all of 5 minutes to get used to the new terms, it isn't that hard. Maybe there's an argument for the lack of value of using older terminology, but again, I think this problem is overblown.


I'm not trying to say this is the most critical problem; I'm saying it's one example of many, and it spends a lot of "weirdness budget" without necessarily providing value in exchange. "kill/yank" is another example, and there are many more where those come from; the volume of them creates a "thousand papercuts" problem.

In the Rust language design, we're careful about what we spend our "weirdness budget" on. We've already spent a fair bit of it on terms related to ownership and borrowing, because those are fundamental and central to the language. We spent some on having a one-character '?' operator for error-checking, because error-checking occurs so often. But we're not going to gratuitously introduce new vocabulary for existing concepts that already have a name people would be more familiar with, and we're extremely hesitant to introduce gratuitous syntax abbreviations just so people can type a little less, because they'd be harder to read and understand.
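For readers unfamiliar with it, roughly what the one-character `?` buys (a minimal sketch of my own, not taken from the thread):

```rust
use std::num::ParseIntError;

// `?` propagates the error to the caller in one character, replacing an
// explicit match on every fallible call.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // on Err, returns early with the error
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("oops").is_err());
}
```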


I agree that a lot of things are weird and could be modernized.

That said, as someone who got into Emacs very slowly and is now fully into it, all this weirdness gradually became an "oh, this actually makes a lot of sense", and, you know what, I might like it better.

I still find it weird for a frame to be a window and a window a frame. But logically, I think the Emacs names make more sense. The thing with a frame is a frame, and the sections within it are the windows. I wouldn't mind if they were renamed from frame to window and from window to pane, though.

Similarly, C-c and M-m used to confuse me a lot. But now I find them way nicer; why spell out all of Ctrl and Alt? Also, on a Mac the Alt key is labelled Option, so having Meta as a more generic name for the key kind of works.

Kill was weird to me, until I realized kill and delete both exist, but behave differently. There's a kill-ring: text that is killed goes in it; text that is deleted doesn't. When you write extensions this is a very important distinction: you don't want programmatic edits to all go in the kill ring and pollute it.

Would it be nicer if the one users more commonly reach for, kill, were named delete, which is more familiar to people? Maybe.

Like I said, I wouldn't mind someone making a big refactor of it all and renaming everything to be less weird to modern times, but I think as you learn those "weirdness" they stop being weird.

Basically, I mean there's a big difference between a quirk, and just something you're not familiar with. I think Emacs names are mostly unfamiliar.

Emacs also has real quirks though, and I think those are more important to address. There's a lot of legacy cruft, like having to still support working on defunct terminals, and all kinds of stuff. Like ESC being a weird Meta key because of terminals that don't support Meta. Or the entire UX, which is crap by default.


Meta is not always Alt. Some people use ESC for the Meta keybindings.


So? Alt is not always Alt either, and apparently for a lot of people Ctrl is actually Caps Lock (to help with RSI?).


I think you misunderstood. Control, alt, meta, hyper, and super are (at least on my current OS) distinct modifiers. There are also keys bearing some of those names on most keyboards which makes things confusing to talk about.

Each modifier can be mapped to any physical key. For example, I'm one of those control as caps lock people you mentioned. I then have hyper on the left control key, alt and meta (yes both) on the left alt key, level 3 on the right alt key, and compose on the right control key.


I've never seen a keyboard with hyper or super. I doubt many people have, since they were only present on hardware obsoleted decades ago. Meta, aka the Windows key on most PC keyboards, is usually present, but most people wouldn't know its name. On Windows, it's not even usefully bindable, since Windows claims various shortcuts for it by default. On Linux, it's not bound to Meta by default by most distributions. So for the vast majority of people, it's just Alt.

I don't think Emacs has done itself any favours by using obscure and non-standard terminology based upon machines from the '70s which few people have heard of, let alone experienced. For the vast, vast majority of us, we all have bog-standard PC or Mac keyboards, and have done for the past 30+ years. It would have been in everyone's interest to standardise on terminology and keybindings which were immediately understandable and usable by all.

Given that every other application uses the standard terminology and keybindings, and that I don't see much in the way of compelling advantage to keeping the non-standard bindings other than habit, I think preserving backward compatibility for four decades was laudable but misguided.


Neither have I outside of photographs! The trouble is that they did exist previously and X11 settled on an abstracted model that supports 5 modifiers (in addition to control, shift, and lock). You have to bear in mind that Emacs can't just stop supporting certain (now defunct to the mainstream) modifiers as many users have setups which depend on them.

I suppose that cosmetically they could update the documentation by changing M- to A- or Alt- or something. Would it really make a difference though?

Aside: Not meaning to be pedantic, but at least under X11 Super is the "Windows Key" and Meta doesn't exist by default. I just checked and (on my machine) the keycap with the Windows logo maps to X11 keycode 133 (hardware specific) which produces keysym Super_L at both levels 1 and 3 which in turn maps to mod4.


Could you explain your rationale for this configuration?


Unfortunately I didn't notice your reply earlier (the consequence of being productive).

Rereading my previous comment I've realized there are some slight inaccuracies - keybindings are conceptually a bit complicated (at least under the historical X11 model). My Alt keycap actually just produces the corresponding Alt keysym, which maps to mod1. My confusion was due to the Meta keysym also mapping to mod1 (this is the default configuration) even though no key on my keyboard is currently configured to emit it. I set everything up quite a while ago and then forgot some of the details.

* Caps Lock -> Control: This is simply more comfortable for frequent use, particularly in combination with the Vi directional keybindings (hjkl).

* Right Alt -> Level 3: AKA AltGr, this is useful for entering common Unicode characters.

* Right Control -> Compose: Useful for a number of other Unicode operations. I don't seem to make much use of it in practice though.

* Left Control -> Hyper (-> mod3): I had a free key. This gives me an extra modifier for use with things like my window manager that's pretty much guaranteed not to conflict.

* Shift + Space -> Underscore: Makes C programming _way_ nicer.

* AltGr + Space -> Nonbreaking Space: I don't actually remember why I configured this one. I never use it.

* Shift + Shift (ie left & right) -> Caps Lock: I don't actually use it, but this way it's still available.


Thanks! I like the underscore idea. First time I've seen it. (As you can see I'm not the fastest replier myself. ;-)


"M = Alt", "C = Control" is such a trivial thing to remember, yet it is brought up as a critical emacs deficiency every time. Why isn't this argument brought up in every Mac/PC discussion, or Xbox versus playstation?


There's some bad stuff in emacs but to concern yourself with Meta as opposed to M- is plain silly.

> if you can make a tool more learnable for new users without sacrificing its optimization for power users, you should

Yes, but it's incredibly difficult, and frankly the emacs devs have enough to do (and they do it well), and UI design is a very different skillset from programming. Frankly, the learning curve for emacs is not going to improve, much as we both might wish it.

The biggest problem is people want the benefits of freely downloadable software but mainly aren't prepared to give anything back. Go and assist with emacs or some other project.


> The biggest problem is people want the benefits of freely downloadable software but mainly aren't prepared to give anything back. Go and assist with emacs or some other project.

I don't really buy this line of argument in general, but Josh is a particularly poor candidate to pull rank on for not contributing to open source. I don't know Josh, but I recognize his name from his open source contributions.

If you don't recognize Josh's name, you can get an idea of some of what he's done from his website, which is linked in his bio: https://joshtriplett.org/

> I work on Linux, primarily on the RCU subsystem and on Sparse-related code. I maintain the rcutorture test module.

> I co-maintain the X C Binding (XCB). I developed the XML-XCB format to describe the X Window System protocol. I also work on other Xorg projects on Freedesktop.org.

> I maintained the Sparse semantic parser and static analysis tool for C for several years, before passing it on to Christopher Li.

> I maintain several packages in the Debian project.


Consider me very regretful indeed. Most people who moan seem to have nothing to contribute; well, I got it very wrong this time. Apologies to @JoshTriplett if you're reading this, and I'll check out your stuff tomorrow.

Ah shit, I recognise your name too. I've just been spanked by Dan Luu. This has not been a good night.


I appreciate the sentiment. I would gently suggest focusing the regret on the message rather than the recipient; it wouldn't have been better if written to a novice user.


Heh. My website is drastically outdated (by a decade), but thank you nonetheless.


> it's incredibly difficult

Right. I see this point ignored very frequently (sometimes because it's obvious and sometimes because people are being dumb).

A lot of the things that would make emacs more like other editors are extremely difficult to retrofit. I expect there is still SOME low-hanging fruit, but a lot of the low-hanging fruit has already been picked, and a lot of the remaining changes that people would like are a lot of work.


Let's discuss that. Emacs is so flexible it should be possible to do anything. Binding cut/copy/paste functionality to the conventional keys would be trivial. A few rebound key bindings, not a problem, so you're getting at something larger; what is it?

BTW the current key bindings are so good because I can do a lot without my hands leaving the keyboard, or even moving off the home keys. That was the very point of choosing them originally AFAIK. I recall learning these new keys many years ago, it was surprisingly fast and when I'd learnt them, amazingly quick to sink into muscle memory.


Unfortunately the rest of the software world settled on keybindings for cut and copy that generate almost the greatest possible pain for Emacs to migrate to them. C-c and C-x are used for dozens of the most important commands. It's technically feasible, but switching would be hard and existing users would be really upset.


I assure you I'd never propose to change emacs bindings at all, merely have a switchable alternate set. I think that would be possible? But even if you did so, there's too much else, far deeper that couldn't be amended. I think we agree it's not a credible proposition. I suspect emacs' enormous toolkit can't be exposed consistently without making it inconsistent. I can live with emacs as it is, very happily.


I misread what you were saying, sorry. Too late to delete other post, please ignore it.


> Go and assist with emacs

I have a freaking book to write...


You can absolutely do both. In VSCode, if I don't know how to perform an action, I just hit Ctrl+Shift+P and the fuzzy search panel with all the possible actions shows up, with the associated keyboard shortcut next to each so I can remember it for next time.

In emacs, I have absolutely no idea how to discover features, and if I find something I still have to understand what the M- and C- mean


> In emacs, I have absolutely no idea how to discover features

The built-in "help" functionality[1] is really great. C-h a will find useful documentation for what you need 90% of the time, and the manual is there for most of the rest. It's particularly useful for keybindings - C-h w for "what's the keyboard shortcut for this command" and C-h k for "what command does this keyboard shortcut run?"

[1] https://www.gnu.org/software/emacs/manual/html_node/emacs/He...


> You can ease the cognitive load by making certain programming structures first-class citizens in the language.

Which, in every s-expression based language I'm aware of, can be achieved using macros.


Isn't that the point - s-expressions aren't good enough so people work around them by writing more conventional languages and parsers (the reader macros) to avoid having to write in them.


I don't think that comment was about reader macros. I don't see reader macros used much, while normal macros are frequently used to create new control structures.
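For comparison, Rust's macro_rules! can do the same job of adding a control structure; here's a hypothetical `unless` (my own example, not from the thread):

```rust
// A hypothetical `unless` control structure defined as an ordinary macro --
// the same trick Lisp macros make routine.
macro_rules! unless {
    ($cond:expr, $body:block) => {
        if !$cond $body
    };
}

fn main() {
    let x = 3;
    let mut ran = false;
    unless!(x > 5, { ran = true; }); // body runs because the condition is false
    assert!(ran);
}
```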


One thing that really annoyed me about D is that its documentation lists basically every function with an 'auto' return type. Auto should be completely banned from documentation, it's a complete hindrance. It's the single biggest contributor to why I stopped using D for personal projects - I was so tired of having to hunt down what functions are going to return, sometimes having to resort to just using it and looking at what the compiler complains about.

And that's a huge shame. Because in general I really liked using D.


The use of auto is required in some places because the standard library returns types that cannot be named in the context of the calling function. This happens, for example, with algorithms that return a custom Range implementation declared within the scope of the function implementing the algorithm.

I am not sure what to make of this pattern. At least the documentation should be more explicit about these Voldemort types. Documentation has other issues as well. The standard documentation generator doesn't cope well with version statements (conditional compilation), potentially skipping docs for stuff that wouldn't be compiled into a particular build variant.


I'm glad that Rust has no `auto`. I find this:

    fn map<U, T, I>(it: I) -> impl Iterator<Item = U>
        where I: Iterator<Item = T>, U: From<T>
    {
        it.map(|t| From::from(t))
    }
infinitely more readable than

    fn map<U, T, I>(it: I) -> auto
        where I: Iterator<Item = T>, U: From<T>
    {
        it.map(|t| From::from(t))
    }
The type signature of the first one clearly tells me that the return type is an `Iterator<U>`, even though the actual type cannot be named because of the anonymous closure.

The second one leaves me guessing what the return type is.

If the actual type cannot be named, it is rarely the case that this is all there is to it. Usually, users are expected to use that type "somehow" (is it a `Range`?), and that means that there are some interfaces that these types implement.


This wouldn't work for D. D doesn't constrain return types to something less than what they are. A Range is not just an Iterator, it has optional pieces that depend completely on the given type.

For example, the return of map could provide indexing, or it could provide forward and backward iteration, or it might have methods that are completely unrelated to the type.

There is no good reasonable and non-confusing way to describe all the things map could return depending on the input. It's much better to just describe it conceptually in the human-readable docs, and let the person understand the result.

I'll note that just above the function map in D's source is the documentation. You just need a little more context, and it will describe what map returns in a much more (IMO) useful fashion than a return type that might be several lines long and consist of various static conditionals:

"The call map!(fun)(range) returns a range of which elements are obtained by applying fun(a) left to right for all elements a in range."

This is the difference between duck typing and generics.


> For example, the return of map could provide indexing,

You can provide more interfaces in Rust:

    fn map(...) -> impl Iterator<Item = U> + Index<usize, Output = U>
but you can't provide "conditional" interfaces (for most interfaces at least), e.g., this won't work:

    fn map(...) -> impl Iterator<Item = U> + ?Index<usize, Output = U>
where `?Index` reads as "maybe implements Index".

To allow that you would essentially need to say that "if the input implements `Index`, the output implements `Index`":

    fn map<U, T, F, I, O>(it: I) -> O
        where I: Iterator<Item=T>, U: From<T>,
              O: impl Iterator<Item=U>,
              I: ?Index<usize, Output = T> -> O: ?Index<usize, Output = U>
    {
        it.map(|t| From::from(t))
    }
The type system implementation already supports these types of constraints, but there isn't a language extension that exposes that. I don't see any fundamental reasons that make this impossible, but there are many trade-offs involved.

Notice that, for example, the output Range does not implement the same interfaces as the input range, e.g., the input Range implements an `Index` interface over a range of `T`s, but the output Range implements an `Index` interface over a range of `U`s. In D this is super implicit in the implementation details (body) of an equivalent `map` function, but in Rust it needs to be part of the type signature to avoid changes to a function body to silently cause API breaking changes. In D, you could change the body of map to map only from Range(T) -> Range(T), without changing its interface, and that would break all code using it to map a Range(T)->Range(U).


Though it doesn't work for D, it could work for the documentation of D (what is being discussed), in some usefully hand-wavy way.

If I'm working in a typed language and am dealing with functions max(a, b, c) and list(a, b, c), I would expect the documentation to say that one returns T and the other a list(T). If it says auto, then I'm guessing from the names.

Maybe the target audience is programmers familiar with dynamic languages, who don't care so much and are used to reading the descriptions of functions about what is returned.


In rust this would be expressed as multiple impl blocks with different generic parameters which show up as such in the documentation.

https://doc.rust-lang.org/std/vec/struct.Vec.html#implementa...
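A tiny sketch of that pattern (Wrapper is a made-up type, just to show the shape): the impl block's bounds decide when the interface exists, and rustdoc lists each block separately.

```rust
// Wrapper<T> implements Clone only when T does: the constraint lives on the
// impl block, and documentation renders each such block on its own.
struct Wrapper<T>(T);

impl<T: Clone> Clone for Wrapper<T> {
    fn clone(&self) -> Self {
        Wrapper(self.0.clone())
    }
}

fn main() {
    let w = Wrapper(5);
    assert_eq!(w.clone().0, 5);
}
```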


What am I looking at there? Are those all traits on Vec that I then have to parse mentally so I can understand what I can do with it? Are all those pages basically to say "Vec works like an array of T"?

I've dealt with generics in other languages such as Swift and C#, and they were substandard compared to D's templates IMO. I remember in C# I could not get a simple generic function that accepted both a string and an int to work, so I just gave up and wrote multiple functions without generics.

I'm sure some people find this documentation helpful, but it doesn't look as useful to me as map's simple one-liner.


> What am I looking at there? Are those all traits on Vec that I then have to parse mentally so I can understand what I can do with it? Are all those pages basically to say "Vec works like an array of T"?

No. The type which tells you that Vec works like a slice of T is https://doc.rust-lang.org/std/vec/struct.Vec.html#impl-Deref

The others are separate abstract operations which are available (implemented) on vecs e.g. AsRef/AsMut denote that you can trivially get a (mutable) reference to the parameter from a vec. The implementations are similarly trivial (https://doc.rust-lang.org/src/alloc/vec.rs.html#2348-2374).

> I'm sure some people find this documentation helpful, but it doesn't look as useful to me as map's simple one-liner.

Do you mean this one?

    auto auto map(Range) (Range r)


I’ve not looked much into D, but I’ve really been enjoying Rust.

I think the main takeaway is that there are very different ways of approaching language design. In Rust there was a decision to make the function signature the single place which defines the guaranteed input and output types of a function, but that is a trade-off. It encourages a more complex type system, as the flexibility of functions is in a sense constrained by the type system. Personally I like that explicitness, since there is only one place to look. In the future, features like const generics and GATs will make that more powerful.

But on the other hand, D appears to be able to support much more complex types (possibly dependent types?) by not requiring that the type system can express them directly. In a sense the whole language can be used to define types. That’s a cool thing to be able to do, even if it means having to inspect documentation and method bodies to work out what they do.


On the "auto auto", I'm pretty sure I filed a bug report on that. There are alternate D docs that don't produce the "auto auto" (and it goes without saying that it isn't that way in the code itself).


No this one:

"Returns: A range with each fun applied to all the elements. If there is more than one fun, the element type will be Tuple containing one element for each fun."


To get that degree of flexibility in Rust you’d have to turn to macros, then the return type will depend on the generated code. That could be pretty much anything, so you’d need to read the docs anyway. Perhaps the languages are not that different after all.

D’s syntax for the template body looks much more similar to normal D code, in Rust the pattern macro syntax is like a language to itself and procedural macros need a fair bit of boilerplate, including explicit “quote” blocks.


This looks insane, mostly because there must be some repeated code in all these impls.

The D way of solving this is to statically query the properties of the passed in type at compile time whenever a part of the template needs to be specialized. It can make for very concise code, but you can't name the exact input type with this approach.


I don't think that's what the OP is talking about.

They want to have a generic function that returns opaque types implementing different interfaces depending on the inputs. I've replied to that above.


Rust does have auto; "let" and function literal parameter type inference, for a start. It just doesn't let you use it in the return type position.


That's the salient point of this thread though; Rust doesn't have type inference in any position that shows up in API documentation. (The closest thing Rust has is return types of `impl Trait`, but even that imposes a contract that the callee must adhere to and that the caller cannot see through.)
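To illustrate that contract with a toy example of my own: the caller can only use what the trait promises, so the body can change freely behind it.

```rust
// Callers see only `impl Iterator<Item = u32>`; the concrete adapter type
// (a RangeFrom chained through step_by) never leaks into the API.
fn evens() -> impl Iterator<Item = u32> {
    (0..).step_by(2)
}

fn main() {
    let first: Vec<u32> = evens().take(3).collect();
    assert_eq!(first, vec![0, 2, 4]);
}
```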


Which helps out the documentation side, but destroys code readability. Particularly since Rust users appear to really like creating long method call chains, frequently with a closure or two sprinkled in. Take this "example": https://github.com/diwic/dbus-rs#server. For a beginning user of the library that is nearly impenetrable without breaking each of those calls apart. Even if you're pretty familiar with Rust, you still have to break it apart and explicitly place the types in the `let` statements to know the intermediate types in order to look up the method documentation.

This style of coding is so bad, that it turns out the example has a syntax error. Good luck finding it without the compiler or a quality editor though. Worse, the example doesn't actually work due to further bugs.

Anyway, Rust by itself may be OK. Some of the core concepts are good, but the way people are using it is leading to impenetrable messes of code. Code like the above, combined with what seems like excessive/unnecessary use of generics, creates problems for more advanced usage when it comes to learning and modifying a piece of code. Some people have blamed this on the language's learning curve, but I'm not sure that is appropriate. By itself the language is fairly straightforward; the difficulties occur when people work around the language and pile in masses of write-only code.

That particular code block IMHO is why rust is going to have a hard time truly gaining widespread usage. Even as someone somewhat familiar with rust, moving the example into a program, and modifying it in fairly trivial ways took me the better part of a day.


> Even if your pretty familiar with rust you still have to break it apart and explicitly place the types in the "Let" statements to know the intermediate types in order to look up the method documentation.

Maybe this is just me misreading your phrasing, but why would you actually have to break it apart into `let` statements? You can look up the types without modifying the program. Or are you talking about asking the compiler for the types with the `let _: () = ...` (or similar) trick? At that point you can just ask an IDE, also without modifying the program.


What’s the syntax error? I’m not a Linux user and there’s two different examples, but I am curious!

Most code samples get automatically tested, but READMEs currently do not.


`auto` is just a keyword, that's used in D and C++ to implement many many different language features.

Rust does not have

    fn foo() -> let { ... }
where

    fn foo() -> let { 0_i32 }
    let x: i32 = foo();
    fn bar() -> let { 0_f32 }
    let y: f32 = bar();
That is, you can't have a function return type that is opaque, yet simultaneously lets the user name it and use all of the interfaces it provides.

If you change the implementation of `bar` with such a feature to return `i32` instead, all calling code of `bar` would break. And that's precisely why Rust doesn't have D/C++'s `auto` in return position.


OP's point was about auto being littered in documentation and not in the code itself.

Having auto is a boon for certain design aspects. As a systems-level programming language, D offers everything even in betterC mode. Of course it could offer more, but a small community can only do so much.


It's not "littered" in the documentation for anyone who understands the basics of D's ranges. auto is the correct choice here. Range functions are lazy, meaning they are almost always used in a chain of function calls that ends in something like `.array` to get a concrete type -- an array in this case. At no point in that process do you care about the actual return type of any of those functions.


? both are the same.


Is Range a kind of interface? If yes, then wouldn't that be the appropriate return type?

edit: Looking at other answers, I think Range is probably not an interface like they exist in Java, but rather a pattern of behavior per templates in C++. Concepts are supposed to solve this problem in C++, but I don't know how well they actually do.


A Range is something that implements one or more interfaces depending on its properties and guarantees. So in order to name a Range that way you'd first have to create interfaces for all possible guarantees. That doesn't sound practical. It's analogous to C++ containers implementing common concepts without deriving from corresponding interfaces.

See also https://tour.dlang.org/tour/en/basics/ranges


Yes, this is what C++ concepts are supposed to solve: https://en.cppreference.com/w/cpp/language/constraints


In D, you don't declare that a type X should have operations A, B, C. Instead, at the moment of template instantiation, you can verify if the provided type has operations A, B and C.


That makes for terrible docs and discoverability, which is the problem here.

Maybe D should allow the user to name the return type (an existential variable) and static assert stuff on it:

`SomeVar f(…) with isRange!SomeVar` or whatever. `auto` just means "you have to read the implementation because it can be literally anything"


Well, it is visible in documentation, but even those can be hard to parse:

    uint startsWith(alias pred = (a, b) => a == b, Range, Needles...)
        (Range doesThisStart, Needles withOneOfThese)
    if (isInputRange!Range && (Needles.length > 1)
        && is(typeof(.startsWith!pred(doesThisStart, withOneOfThese[0])) : bool)
        && is(typeof(.startsWith!pred(doesThisStart, withOneOfThese[1..$])) : uint));


You did pick a particularly nasty example for that one. I do agree that it is not so easy to read these constraints. It can also be a bit frustrating when you need to chase down why exactly a particular line of code doesn't meet such a constraint. Tests like isInputRange are themselves fairly involved expressions and in the worst case, you end up staring at those after a template instantiation failed.


D's version of concepts have optional operations, it has been found to decrease the need for names


>so tired of having to hunt down what functions are going to return, sometimes having to resort to just using it

With highly generic functions, it's often not possible to know what they'll return without knowing what you'll call them with, especially given that D functions like "map" and "reduce" tend to return special iterator types so that the compiler is able to smartly fuse them where possible. If D had concepts like C++20, you could probably describe them with something like:

    template<class R>
    concept __SimpleView =  // exposition only
      ranges::view<R> && ranges::range<const R> &&
      std::same_as<std::ranges::iterator_t<R>, std::ranges::iterator_t<const R>> &&
      std::same_as<std::ranges::sentinel_t<R>, std::ranges::sentinel_t<const R>>;
But at least for me that doesn't seem like it would be much more helpful than just reading the documentation, which states what the function returns, if not necessarily the type.


You can still see that it depends on some input. That is hugely more useful than auto.


>With highly generic functions, it's often not possible to know what they'll return without knowing what you'll call them with.

Of course it is. Map's type is "(a -> b) -> [a] -> [b]". D just absolutely and completely failed here, despite this being a solved problem 40 years ago.


Not in D, it isn't, for performance reasons. It takes an input range and returns a type that iterates through that range applying the callable (doesn't have to be a function!) to each element as requested.


>It takes an input range

Which should have a type.

>and returns a type that iterates through that range

Which should have a type. The entire point is that this is a solved problem, there is no excuse to simply throw up our hands and say "screw documentation we'll just say this function is a mystery".

Functor f => (a -> b) -> f a -> f b

And please don't miss the point and tell me D doesn't have Functor. The entire point is that D has something, and it doesn't tell us what that something is. It should. Documentation is good.


D has functors, but it doesn't have 'Functor'.

Or rather, D doesn't have concepts (of which 'Functor' is a special case); that is, the notion of a type that is characterized by having the ability to execute operations is not expressible in its typesystem.

Or rather, it is, but only with classes. You want something like "a return type; fulfilling the condition of being able to be used in this way." This is not something you can specify as a function attribute in D. Instead, ranges use a form of duck typing. The next step in the call chain can tell whether the previous step gave it something it can use using template inconditions, i.e. `isInputRange!T`. But the previous step can't assert that it is returning a type that fulfills a constraint. In other words, there's type inconditions but not type outconditions.


What you're describing has names - structural types, refined types (a.k.a. contracts a.k.a. pre- and post-conditions)...

It's simply a failure of D the language/compiler (and a huge anti-pattern) to not expose internal types in a way that can be displayed to the programmer.


No such internal type exists. The range interface is purely a library feature. The problem is that D has no way to include a type constraint as a part of the function type, at present. Something like out template contracts would do it probably.


Java can't express its types in its own syntax either. Nor Scala, nor C#.

Ceylon could, but they are barely readable anyway.


> > It takes an input range

> Which should have a type.

It does. That'd be the `Range` here: https://dlang.org/phobos/std_algorithm_iteration.html#.map.m...

> > and returns a type that iterates through that range

> Which should have a type.

It does. But the name of that type depends on the type of the range and on the callable.

> Functor f => (a -> b) -> f a -> f b

This doesn't work because the return type isn't `f b`, it's `g b` where g depends on what f is. It also depends on the callable, because the first parameter isn't necessarily a function. The closest is

Callable c, Range r0 => c a b -> r0 a -> r1 b

Where `r1` isn't even a concrete type but a type that depends on both `c` and `r0` and is made up on-the-fly (per instantiation).

> Documentation is good.

I agree. How would you suggest improving the signature of map given that D doesn't have typeclasses? Or with types that depend on other types in the template?


    class Range g => FunctorTo f g | f -> g where
      fmapto :: Callable c => c a b -> f a -> g b

is how you would define that class of types in Haskell. It says "If g is a range, then a pair of types f and g satisfy the FunctorTo interface[1] when knowing f determines g, and there's an implementation of fmapto with this type".

Maybe D needs typeclasses. This thread has certainly put me off of D, because being able to write down types is really quite important to me.

[1] The Haskell class keyword defines something closer to an interface than an OO class.


> because being able to write down types is really quite important to me.

The issue arises only with heavily templated generic functions. Nothing in D forbids you from writing your programs with all types explicitly written down. That's btw how I mostly write my code.


>But the name of that type depends on the type of the range and on the callable.

That should not be a problem. It is a problem in D because of a lacking in D.

>How would you suggest improving the signature of map given that D doesn't have typeclasses?

The language needs fixed so it can express its own types.


> That should not be a problem. It is a problem in D because of a lacking in D.

It is not a problem; the compiler copes with it. The problem is that how such a type is written is simply not interesting to know. The unmangled type is unreadable.


Yes, the type is very interesting to know. That's why other languages make it known. Ignoring the entire discussion to reply twice with "nu uh!" is not very productive.


There's usually no reason to know the return type of any range-based function in D. It's not like auto is applied as a return type willy nilly. And anyone who understands D's ranges and how they are used should have no problem seeing a function declaration that returns auto.


Right, the entire world is wrong because you can't admit a fault in a random piece of software you are emotionally invested in. Makes sense.


Not quite. I'm saying that in this case it doesn't matter because of the way the API is used. Is it confusing for people who don't understand D ranges? Yes, it certainly can be. It was for me when ranges first came along. But once you understand how D ranges are intended to be used, then you realize you rarely ever care what the return type is. D is not Rust, or C++, or (insert language here).

When the return type actually matters, auto should be avoided unless there's no way around it. But that's why we have "Returns:" in Ddoc. The function signature itself is not the complete documentation. I mean, you're acting like all D functions are documented to return auto. They aren't. It's used where it needs to be.


> The language needs fixed so it can express its own types.

The language expresses its types just fine (it's in the mangled name in the object file). The issue is that the human-readable form of these types is of no use.


It doesn't have to be specialized to []


Of course not. But it still has a type.

Functor f => (a -> b) -> f a -> f b


That still means "it returns something".


It gives us much more information than "something". It tells us it is a list of the type of the second argument to the function we passed it. Or in the generic version a "something you can iterate over" of things of the type of the second argument to the function we passed.


no ? there are plenty of maps which don't return something which looks like [a] -> [b].


So? They still have a type, it doesn't have to be "screw you figure it out yourself".

Functor f => (a -> b) -> f a -> f b


I have seen a few videos by Walter Bright and Andrei Alexandrescu on ranges, and I have no problem understanding D's documentation. Maybe you should learn the language before using it.


Repeating the same thing over and over again doesn't make it any clearer for those trying to follow your line of argumentation.

Seems like you have had some deep exposure to Haskell, ML, or Hindley-Milner in general, which, when excessively consumed, detaches from reality.

For one reason or another, you take this discussion seriously and personally.


If something is unclear, you can ask for clarification. I repeated the statement to three people because three people repeated the same argument to me. This is how conversations work. I have very limited exposure to haskell, am not detached from reality, and am taking nothing personal. Of course the discussion is serious, why would I spend time engaging in a frivolous and meaningless discussion?


could you provide the type of this map function ?

    // C++20; needs <type_traits> for std::is_same_v
    auto map(auto x) {
      if constexpr (std::is_same_v<decltype(x), int>) {
        struct T1 { int getStuff() { return 0; } };
        return T1{};
      } else {
        struct T2 { void doStuff() {} };
        return T2{};
      }
    }


Read the rest of the discussion.


> being a solved problem 40 years ago

Ahh... welcome to computer "science", the ever-repeating cycle of 'inventions'.


You don't have to wait for the compiler to complain. Using:

    pragma(msg, T);
where T is any type will print the type to the screen during compilation. pragma(msg) will print all kinds of things, making it a very handy tool to visualize what is happening while compiling. I use it all the time.
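For comparison, a runtime Python analogue of that workflow (illustrative only; mypy's `reveal_type` is the static cousin): printing the type of an intermediate value to see what a chain actually produced.

```python
# Inspect what type an expression produced, instead of guessing from docs.
pipeline = map(str, [1, 2, 3])
print(type(pipeline).__name__)        # 'map' -- the opaque intermediate type
print(type(list(pipeline)).__name__)  # 'list' after materializing
```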


Having to print-debug your program before it's even compiled, or even has a bug, is a bad idea.


Why so?


You shouldn't need to reverse-engineer your own program just to understand the code you wrote yourself.


I salute those who never have to debug their own code :-)


Apparently it is a common Go pattern to discover which interfaces a type implements.


It's often because these functions have unnamed types. Chains of lazy computations in D often return unnamed types (so-called "Voldemort" types) because finding good names for those inner structs is a challenge; they have a single use, which is to have a particular signature.
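A loose Python sketch of the "Voldemort type" idea: the class is local to the function, so callers can use the returned object but can't (normally) name its type. The names here are purely illustrative.

```python
# The class _Counter is hidden inside make_counter; its type has no
# accessible name at the call site.
def make_counter(start):
    class _Counter:
        def __init__(self):
            self.value = start
        def bump(self):
            self.value += 1
    return _Counter()

c = make_counter(5)
c.bump()
print(c.value)                  # 6
print(type(c).__qualname__)     # the name betrays its local origin
```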


> Chain of lazy computations in D often return unnamed types (so called "Voldemort" types) because finding good names for those inner structs is a challenge

There is something so absurd about having "unnamed types" as an antipattern!


Not really - the concrete type isn't important, but what you can do with it is. One could argue that instead we'd use a concept in place of `auto`, and Bjarne has argued exactly that for C++.


> Not really - the concrete type isn't important, but what you can do with it is.

Then surely that's what should be shown? Rust uses `impl <Trait>` for that, the actual return type is opaque but you know it implements the specified trait.
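A Python sketch of the same idea using a structural annotation (illustrative, not Rust or D): the signature promises only "an Iterator[int]" without naming the concrete type, which stays opaque.

```python
from typing import Iterator

# The annotation exposes the capability, not the concrete generator type.
def evens(limit: int) -> Iterator[int]:
    return (x for x in range(limit) if x % 2 == 0)

print(type(evens(7)).__name__)  # 'generator' -- the opaque concrete type
print(list(evens(7)))           # [0, 2, 4, 6]
```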


Not everything is worth naming. If there isn't an obvious good name for something (like, say, a Java anonymous class), then why not allow it to have no name?


Java's objects aren't types even though they're mixed up with its type system. Objects are supposed to represent units of computation so anonymous classes aren't absurd.

But the reason why it seems that types without names are absurd is that types are only real for the interpreter or compiler. At runtime they aren't used anymore. So it's absurd that a construct made for humans to understand and describe code starts to become something opaque to human understanding because they're impossible to be named.


I've felt this as well. I've been using D for a couple of years now, and this is the kind of thing that makes me context switch more than I'd like. With the current implementation of the language it's hard to avoid, and a function's return type can be quite complex, so writing it down can be hard.

Another reason is the (ironically) dynamic nature of a return type. E.g.

    auto whatDoesItReturn(int i)() {
        static if (i == 0) { return int.init; }
        else { return string.init; }
    }

Template code can do that quite easily and then you don't have a choice but to write auto as the return value.

What would be fantastic is if the documentation could be given access to the compiler's return type inference, so that it could document the auto returns with a little more info.

Another useful approach would be to implement protocols like in Swift, or traits like in Scala/Rust/others, signatures in ML, etc. Then you would be able to define the interface of what a function returns.
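A loose Python analogue of a return type that depends on a parameter (compile-time in D, runtime here; names illustrative): `typing.overload` tells the checker which call returns which type, while a single implementation handles both at runtime.

```python
from typing import Literal, Union, overload

# The overloads document the two "instantiations": i == 0 yields an int,
# anything else yields a str.
@overload
def what_does_it_return(i: Literal[0]) -> int: ...
@overload
def what_does_it_return(i: int) -> str: ...
def what_does_it_return(i: int) -> Union[int, str]:
    return 0 if i == 0 else ""

print(what_does_it_return(0))         # 0
print(repr(what_does_it_return(42)))  # ''
```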


> auto whatDoesItReturn(int i)() { static if (i == 0) { return int.init; } else { return string.init; } }

I agree with your point, but for the sake of the audience who doesn't know D I think this example is misleading, as one could take the "int i" parameter as a runtime one, while it's actually a compile time one (the equivalent of C++ non-type template parameter). If you instantiate the function with 0 as a compile-time parameter, it is a function that returns int; otherwise it's a function that returns string. It is never a function that can return int or string.


Yeah I realise I could've explained a bit more about template arguments indeed! The point was that it can be hard to specify the return type.


Yes, this is the problem for returns, but it's going to be difficult for the compiler to put something useful. The best it can do is point you at the code that returns, and let you figure it out.

I still think the best option is let the author describe it in ddoc, as the semantic meaning can be much easier to convey that way.


I haven't looked at D yet, but... yuck! `var` for return types in a method signature is seriously unhelpful!

If the docs are filled with this, then D is certainly coming off my list of langs to look at.

For functions that can return different types, I think interfaces or union types would be more helpful (not sure if D supports either though).


Yes, it supports interfaces and unions. One thing not being stated enough in this thread also is that the docs are not just a regurgitated version of the prototypes -- there's actual hand-written text that tells you what the things return.


In the standard library, it's primarily in the range-based functions that you see this, where the type doesn't generally need to be known. And where you do see it, the documentation will describe what the function returns more accurately than any convoluted made-up type the function signature could show you.


What would be the return type of that? Does D have an Either type?


Depends on the value of the template parameter. If it's 0 the return type is an int, if it's not 0 the return type is a string.

There's a wanting implementation of a sumtype in the standard library (https://dlang.org/phobos/std_variant.html#.Algebraic), and a much better one as a package: https://code.dlang.org/packages/sumtype


Interesting, so the return type isn't known until runtime.


No. It is known at compile time.

Template declarations in D take two parameter lists: the first is the template parameters, the second the runtime parameters. In

    auto whatDoesItReturn(int i)() {
        static if (i == 0) { return int.init; }
        else { return string.init; }
    }

we have (int i) as the template parameter list and () as an empty runtime parameter list. In C++ syntax: whatDoesItReturn<int i>()

At instantiation the syntax is different:

whatDoesItReturn!0() will instantiate a function returning an int

whatDoesItReturn!42() will instantiate a function returning a string.


When it comes to templates, it's not until instantiation time - when the compiler sees the code being used. So this is just an issue during compilation.

The docs on static if may shed some more light: https://dlang.org/spec/version.html#staticif


So the previous code wouldn't compile unless the compiler knew what values were going to be passed into whatDoesItReturn?


Aye correct! I should've been more clear about that, sorry.

Template arguments need to be known at compile time, and the extra set of parens is how template parameters are declared in D.


I missed this the first time, but there are two sets of parentheses in the example. Apparently the first set are like template parameters, and the second are the actual arguments to the function.


Personally I dislike var/auto in languages because I like having types explicitly written. But in case of languages like Java or Kotlin you can move the cursor over the variable name and you will see the type, also you can right-click and select "replace with explicit type" and it will work. In D, IDEs struggle with templates and can rarely index templated code (no wonder, because most of the code doesn't exist until build time).

Most people will tell you, "oh just use auto, it makes the code more generic". That's sweet, except as soon as I want to pass it to another function, I need to have the concrete type. Like you, I usually just copy-paste the full type from the error message and move on.


In Java, `var` comes in useful at times. For example:

    for (Map.Entry<SomeLongType, AnotherLongType> x : someMap) {
        final SomeLongType key = x.getKey();
        final AnotherLongType value = x.getValue();
        ...
    }
In the above code snippet, `var x` would have been very useful because the actual type just repeats information that can be found in the next two lines. Also, usually, I'll use more descriptive names instead of `key` and `value`.

But if the body of the loop just refers to `x.getKey()` and `x.getValue()`, without extracting them into local variables, then it makes sense to put the exact `Map.Entry` type into the loop header.


You can just use map.forEach and avoid the types in the next two lines too.


In java 10+

    for (Map.Entry<SomeLongType, AnotherLongType> x : someMap)    {
        final var key = x.getKey();
        final var value = x.getValue();
        ...
    }
Is valid.


I prefer to put "var" into the loop header because the combination of two types is hard to read. It's easier to read (for me!) when the type of the key and the type of the value are separated, like they are on the first two lines of the loop body.

    for (var x : someMap) {
        final SomeLongType key = x.getKey();
        final AnotherLongType value = x.getValue();
        ...
    }


> Personally I dislike var/auto in languages because I like having types explicitly written. But in case of languages like Java or Kotlin you can move the cursor over the variable name and you will see the type, also you can right-click and select "replace with explicit type" and it will work. In D, IDEs struggle with templates and can rarely index templated code (no wonder, because most of the code doesn't exist until build time).

It's 2020. Why couldn't things work like this, where one can open a window for a concrete type using templates, and it shows the code?


A well designed language should be usable from a text editor. Even in 2020.


I agree. I will paraphrase this as - a well designed language should be usable, at minimum, from a pure text editor, and should not put unreasonable burden on an IDE.


The cynic in me wants to say that such a language doesn’t lend itself to static analysis. As soon as you can do great things with static analysis you can build those features into an IDE, therefore causing the “pure text editor” to feel crippled giving rise to the idea that this language is “unusable from a pure text editor”.


A good dev uses an IDE. In 2020.


This is a really tired argument that we don't need to get into right now. Different things appeal to different people.


Two things can be true. You shouldn't need an IDE to grok the code.


A good dev shouldn't require crutches


Interactivity is not a crutch.


Interactivity is a base attribute of an IDE, and not the first reason people use one; the context here is that using an IDE doesn't define a good programmer. So being able to clicky click is what makes good programmers to you?


They said that good devs use IDEs, not that IDE use makes you a good dev. Those are very different statements.


They said "a good dev uses an IDE", with a snappy 'in 2020' retort. It's plainly clear their intention was to imply that only good devs use IDEs in the modern era.


No, that's not how simple if statements work. I don't know what else I can say.

If I say "A good CPU has more than two cores in 2020." then I'm just saying it's a requirement, not sufficient all by itself. I'm not calling a twelve-year-old phenom X3 a good CPU. The "in 2020" is just to emphasize that anyone failing this standard is falling behind the times.


Templates don't exist until compile time; until you build the code, the IDE plugin doesn't have the full data on exactly what types are there. Java/C# generics are more limited in functionality, but it's a tradeoff in exchange for better ahead-of-time knowledge of types.


Visual Studio can do that, you just provide an example type and the IDE shows what the result would be.

https://devblogs.microsoft.com/cppblog/template-intellisense...

Now try that on vim.


Hard disagree.

    MyClass myVar = new MyClass()

is not DRY. Auto also makes it practical to use complicated structures built from generics/templates without killing the developer with With<Deeply<Nested,Template>, Declarations>.


Hard disagree. If you are assigning a value that's the result of an expression you might have somewhat complicated logic. Being able to say what you expect returned is very useful.

    MyClass myVar = something ? SomeFunction() : somethingThatMightBeASubClassOfMyClass;


Anything wrong with

   var myClass = ...
(or some other descriptive name of the variable)?


Personally I prefer this, since it also carries on to every following line.

Like, in C#

    List<Account> accountsToDelete = accountService.GetAccountsToDelete();    
    repository.Delete(accountsToDelete);
becomes

    var accountsToDelete = accountService.GetAccountsToDelete();
    repository.Delete(accountsToDelete);
So as much type information is already encoded into names so that all references to this object are clear in what we're handling, and so the type declarations at the point where the variable is declared is just redundant noise.

If your variables aren't informative when I'm reading the code, I'll be confused 5 lines later anyways. So make them informative at the start. And given that, doesn't that mean the List<Account> is a bit redundant?


Beware the m_pszSlipperySlope [0].

[0] https://en.wikipedia.org/wiki/Hungarian_notation


Even if you have var/auto, you don't have to use it every time.


var/auto is a trade-off which requires good tooling support, as you said. Once you're able to see the types as you see fit in your IDE, I feel that you're mostly better off, for all the reasons already mentioned in the thread.

Of course good tooling is still a big requirement, but I still think it's the best decision in the long-term: it's way easier to improve and change tools like IDEs (especially with LSP?), rather than the language itself.

Regarding explicit types in functions, you also don't always need them in languages such as OCaml. I feel that the answer to your criticism could be to just have "auto" also for function arguments, especially when you're just prototyping.


In places where you need to know the exact type auto probably isn't the right choice.

There are many cases where the type is clear however or irrelevant or in generic code is hard to express (thus depending on documentation/comments unless obvious) which would also be hard to read.


I've been using D since 2009 both personally and professionally. 'auto' return did bother me in a few places but it never came close to being a deal breaker.


It's helpful to bring up issues you are encountering in the D forums. This is the first I've heard of this particular one.


It's been raised many times before. By me and numerous others. The standard library is generic, which isn't the end of the world, but you can't tell from the documentation how you can work with the output. It's common for someone to ask a question and be told "add .array to the output". They'd never know that after reading the documentation.


Reading the example code in the documentation is very helpful with this.


And the "Returns:" section!


I had this issue last year when I started learning D. I eventually got used to it, but it was a stumbling block at first. The #d IRC channel helped me out.


I've never programmed in D so I don't know, but out of curiosity I wanted to check if what you write is true. However, I can't find any function that is declared as auto.

Could you please paste some example of a function that has a return value which is declared as auto?


Here's one example:

https://dlang.org/phobos/std_algorithm_sorting.html#partitio...

The important line is

auto pieces = partition3(a, 4);

So, what's the type of pieces? The D standard library is written to be generic. And sure enough, that line of code will run. Where it turns into a problem is when you try to do something with it. If pieces is a range, there are certain things you can't do with it. Or maybe you can. Who knows. You'll never learn it from reading the documentation. I've been using D since 2013 and I still struggle with this at times. It's a valid complaint. (D's a great language, but is short on manpower to fix rough edges like this.)


> You'll never learn it from reading the documentation.

Did you not see the Returns section from that link?

"Returns: A std.typecons.Tuple of the three resulting ranges. These ranges are slices of the original range."

Further note: If you just saw `std.typecons.Tuple!(typeof(Range.init[0 .. $]), typeof(Range.init[0 .. $]), typeof(Range.init[0 .. $]))` which is what would have to be written there instead of auto, would that make you feel better? Do you not have to read the documentation to figure out what the function does or what actually goes into those tuples?
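For readers who don't know D, here's a hypothetical Python sketch of a partition3-shaped result (D's actual partition3 returns a Tuple of slices of the original range; this sketch builds three new lists instead, just to show the shape of what comes back).

```python
# Three-way partition around a pivot: (less-than, equal, greater-than).
def partition3(xs, pivot):
    return ([x for x in xs if x < pivot],
            [x for x in xs if x == pivot],
            [x for x in xs if x > pivot])

pieces = partition3([4, 1, 4, 7, 3, 4, 9], 4)
print(pieces)  # ([1, 3], [4, 4, 4], [7, 9])
```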


The major examples are parts of the stdlib which offer higher-level pipelines (some answers here indicate the need for such things to be very flexible in their return values to allow this, but this is not a priori obvious - Java manages similar functionality with just the Stream<T> class after all).

In this (and the sibling pages in std.algorithm), nearly every entry is listed as either "template" or "auto": https://dlang.org/phobos/std_algorithm_iteration.html


D's classes and interfaces are much like Java's, so it's possible and easy to write functions that express return types like that. It's also quite restrictive for generic programming. D's metaprogramming features are much more powerful. That means a template can return different types that do not conform to a single, easily-expressed interface. The std.algorithm package is built on that concept.



> Returns: A range with each fun applied to all the elements. If there is more than one fun, the element type will be Tuple containing one element for each fun.


Yes, D has prosaic descriptions. So does Python. Or Ruby.

Maybe read and try to understand the complaint instead of pointing out something unrelated?


Maybe it was fixed? I would imagine that saying in the documentation that the return type is auto is equivalent to not mentioning the return type at all, so it feels like something unintentional.


> Maybe it was fixed?

It wasn't. They probably didn't look into the more "generic" functions e.g. hofs and algorithms.

The vast majority of functions in https://dlang.org/phobos/std_algorithm_iteration.html return "auto".


I checked your link; each function has a "Returns:" section that describes what to expect. Isn't that what you want from documentation?


A type is much denser than a textual description, and in most cases sufficient.


Having each of the generic range functions return a <FN>Result type that all implement some variations of a random access range is absurd. Having "auto map(); Returns an input range" is much denser and helpful than "MapResult map(); struct MapResult { @property bool isEmpty(); ...}".

A more explicit "impl(InputRange) map()" may be better until you consider that map is generic on the kind of range you give it, so that just turns into "impl(MapResult!R) map(R)()". A more roundabout, pointless way of saying "auto".


> A type is much denser than a textual description, and in most cases sufficient.

In feeble languages with simpleton type systems, maybe, but in a highly generic, templated language like D it is not the case. The type is not dense at all.

What's funny is that in general people complain that compiler errors in D are unreadable. You know why they are unreadable?

Because they print out the types of the functions in which the error occurs and that is nothing more than word salad for generic functions.

Types with hundreds of characters are very common.


Depends on the type; it can be quite long and complicated. I always have a lot of issues trying to read return types from templated functions in C++ because they look really messy.


Perhaps then it would make sense to shorten types in a smart way.


Sometimes I feel like auto is definitely overused. In a recent fix, I changed something that returned a boolean (no templates involved) from returning auto to returning bool.

But sometimes auto is the best tool for the job, especially when writing wrapping types. In that case, yes, you have to read the documentation (and I mean what is written in the ddoc comments). But in many cases, you don't have to, because you recognize the pattern, or it's simply a wrap of the underlying type's function.


+1. I cringed when Herb Sutter released the 'Almost Always Auto' C++ presentation on YouTube. Sure, auto has its place, and I personally use it, but I just knew that less experienced devs would go nuts with it, and it'd only make their lives easier for a short time.


Plus, seeing the word 'auto' all over the place leaves a weird impression.


It lets C++ almost return to C-style duck typing.

Whether that's really what you want and whether that is the best approach to solve the problem at hand is a matter of preference and the problem space.

It also allows for a "gradual typing" approach like Dart 1 had.


I think it’s really interesting that as dynamically-typed languages increasingly encourage explicit type hints, statically-typed languages are recommending “almost always auto”.


In dynamically typed languages, adding types can stop things blowing up at runtime. In compiled languages, all the type inference is still done at compile time, so if it compiles then you're not going to get a crash from accidentally adding a string to an integer at runtime.


The template language is weakly typed and code written in it, too, can “crash” when it runs (which is the compilation time) for the same reasons.


Vim vs emacs, auto inference vs explicit, I don't think it will ever end. :)


I don't know D, but after reading more comments it looks like auto is also compensating for a poor type system. For example, functions that accept arguments of many types apparently need to be declared as returning auto.

At least that's what I understood.


Scala has a strong type system, and when working with it on a daily basis I was not programming, I was thinking about types and fighting with the compiler. And that is a language which has a pretty decent IDE plugin. When I moved to D, it felt like a breath of fresh air to me. Therefore, the whole talk above about auto and types looks like subjective nitpicking.

In practice though, D codes fast and runs fast, as promised on the official site.


Aren't there any type synthesizers which modify your code based on inferred types?



No. I mean rewriting code into explicit strong typing using type inference, so that auto becomes int, or MyClass*.


It already ended. VSCode and auto inference.


I vehemently disagree. Auto/var is a tool that may be used judiciously by your userbase. This new philosophy of blocking your users from using dangerous tools because you know better than them just invites workarounds and kludges. Give the user the tools and warn them of the ways it can go wrong. There's a reason Rust is losing the war to C++.

Anyways, var/auto is critical in some cases. C#'s LINQ, for example, would be very difficult to develop with if you had to manually figure out the type you were returning with long queries every time you wanted to restructure your query.


But they were talking about _docs_.


I disagree; you don't seem to be addressing what the parent is talking about (documentation). Whatever reason Rust is losing a war against C++ (it's really not), this isn't it.


More a failure of documentation/tools? We've been content for a decade to just name the arguments and return without any context. Like the old joke "Where am I?" answered by "In a hot air balloon!". Correct but useless.

I wonder if the document could describe (in some regular way) how those auto types are constructed...from what input, with what operations?


They already do.


I'm annoyed that you have to spell out auto all over the place. It should be implicit. The compiler should automatically add auto if you have not specified something else.


I really hate the auto keyword but as I like D so much, I kinda get used to it.


Ugh, that is even more ill-advised than using auto without good reason in C++.


In the interests of perpetuating this endless flamewar, here's Herb Sutter saying C++ programmers should use auto 'by default'.

(I see my snarky comment there got no reply.)

https://softwareengineering.stackexchange.com/a/180616/


Herb Sutter is biased. He works on C++ language lawyering, he works on new features, he works in the STL, etc.

People like him tend to be biased about using auto because they write mostly libraries and generic ones at that (data structures, for instance).

In most code out there you actively avoid templates if possible, so that code is concrete, compiles faster and is easier to debug.


Well, there's two ways to write an application, analogous to two styles in mathematics as described by A. Grothendieck.

http://www.landsburg.com/grothendieck/mclarty1.pdf


Bingo! Writing a template library is totally different to writing an application.


Another reason for C/C++ is that they are very permissive with implicit lossy conversions. If you specified an explicit type, chances are the compiler helpfully made a lossy conversion for you.


Sutter made that point under Maintainability and robustness.

Like Sutter's answer, this point doesn't answer the complaint. People on the anti-auto side say it seriously harms readability, as locals' types are no longer clear at a glance. They aren't asking for a list of reasons why some people favour auto, they're asking for an answer to their readability problem.

Perhaps IDEs could infer types and display them as a superscript. That would keep just about everyone happy. (Perhaps not Vim users.)


Herb didn't say it well enough. It's a tradeoff between correctness and readability. If it wasn't for C++'s weak type system you wouldn't be required to make this tradeoff and could improve readability while still staying correct.


In IDE-friendly languages like Kotlin, you can enable showing of inferred types, e.g. https://i.stack.imgur.com/tiqjc.png


Perfect, just what I was thinking. Does something like that exist for C++? Visual Studio 2019 doesn't seem to have it, but it's able to show a local's type when I hover over the local's identifier.


If you use only explicit constructors and ban cast operators except for the most basic of types, then you have a sane language and explicitness in your code.


Sounds sensible. Google seems to agree with you. https://google.github.io/styleguide/cppguide.html#Implicit_C...

LLVM's coding standard though doesn't seem to have anything to say about implicit conversions: https://llvm.org/docs/CodingStandards.html


It's sane on C++ scale, but on absolute scale that sanity is still wanting, Sutter mentions this.


In the context of C++, using auto avoids unintended type conversions during initialization.


Well, not everyone likes automatic type deduction -- some people just like to torment themselves by repeating information that is unneeded. Why?


Type inference systems generally have crummy error messages, and are harder to reason about all around. The best type inference systems are those that allow inference within a function definition, while the function signature requires explicit annotation. This keeps the inference local, which is easier for humans to reason about. Good tooling can make either system easier to work with, but tooling is much less important in the type annotation case because the cost of adding annotations is trivial (contrary to your "torment" vocabulary) compared to reasoning about (non-local) type inference.

In general, this reduces to the principle that concrete is easier to reason about than abstract. The type signature of an unannotated function in a type inference system is maximally abstract, while (especially in practice) the signatures for functions that are manually annotated are more concrete if not fully concrete. There are still problems in non-type-inferred systems with programmers who try to be egregiously abstract, but these are fewer and farther between.


Sometimes when reading a long passage in a novel you need a reminder who "he" is.


My personal anecdotal argument against "hard to parse for humans" is... I actually really like generics residing within <>. It's actually harder for me to read generics denoted otherwise (D uses parentheses).

As for the last paragraph, I'd implore language implementors to continue using < > lest we get that awfully ambiguous nonsense.


Scala uses brackets for generics and parens for array accesses. It works nicely, I think.

The primary argument against < and > as generic brackets is that the ambiguity can make for some confusing error messages. It also prevents you from pre-matching your braces before the parsing phase (a technique that enhances error recovery).


Maybe it's my bias of having been introduced to parameterized types via C++'s template mechanism, but I find that <> looks natural to me and Scala's use of square brackets feels off. In general, I find myself preferring C-derived syntax.


Within my first year of professional development, I encountered several fixed-width files I needed to read and write. I suppose exposure depends a lot on the specific industry.


Also big mainframe users (banks, insurance) often send fixed width data to us.


> I had to parse 9million lines.

Awk would chew through that no problem.

> Some of which contain "quoted records", others, same column, are unquoted.

In which case, there is the FPAT variable, which can be used to define what a field is. FPAT="\"[^\"]*\"|[^,]*", which means "stuff between quotes, or things that are not commas", would probably have worked for you.

> Some contain comma's, in the fields, most don't. CSV is like that: more like a guideline than actual sense.

Well, I would say that's absolutely false. You can't just put the delimiter wherever you fancy and call it a well-formed file. Quoting exists for the unfortunate cases where your data includes the delimiting character (ideally the author would have the sense to use a more suitable character, like a tab).

This is just a retort to prevent your post from dissuading readers from awk, which is a fantastic tool. If you actually sit for half an hour and learn it, rather than googling to cobble together code that works, it is wonderful. I also don't think it is valid to base your judgement of a tool on what was apparently garbage data.


Garbage and poorly specified csv files are a fact of life and people have to deal with them all the time.

But if you want to be in a world where people only deal with well specified files like RFC 4180 (for some definition of well specified), your quick field pattern doesn’t conform. It doesn’t handle escaped double quotes or quoted line breaks. If you’re using your quick awk command to transform an RFC 4180 file into another RFC 4180 file you’ve just puked out the sort of garbage you were railing against.

While awk is a great tool if you're dealing with a csv format with a predictable specification, and probably could be made to bend to the GP's will with a little more knowledge, it gets trickier if you're dealing with some of the garbage that comes up in the real world. What's worse is the programming model leads you down the path of never validating your assumptions and silently failing.

I love awk for interactive sessions when I can manually sanity check the output. But if I’m writing something mildly complex that has to work in a batch on input I’ve never seen, I too would reach for ruby.


Just want to throw D into the ring as a candidate here. It's a smashing language. I think it satisfies the very compromise you are seeking from your evaluations of other languages.


Thank you for reading me so well :) It's a very interesting language for sure, checks a lot of boxes. How is the market trending for it job wise? What I'm finding online so far seems pretty mixed.


I think I am going to side with VW on this. I have always been skeptical of fully autonomous vehicles, and I do not believe they will _ever_ exist on the roads that currently stand. Driving safely in all conditions without aid from a human is simply too complex a task for code that can be audited and verified. If some AI model that's been trained on a billion years of driving experience shows promise, but it is some incomprehensible black box of weights, I won't be getting in that car.

Autonomous vehicles will only ever truly exist upon infrastructure literally designed to aid them, greatly simplifying how they need to interact with the environment, thus making the problem tractable with code we can prove works. I really think it will take more than putting markings on existing roads. It is going to take new roads full stop, probably with various wireless checkpoints built into them.


You may not be getting in that car, but I certainly will.

After all, every driver on the road today is an incomprehensible black box where not only do we not know the parameters, we don't even know the function they're parameterizing. Every instance functions differently, and our testing procedures have woefully low coverage.


When one of those black boxes malfunctions it gets taken off the road. When the AI malfunctions, are we going to shut down entire classes of vehicles until the problem is confirmed fixed?

Not to mention that most software fixes cause other bugs...


In an extreme example I expect that's precisely what would happen. Consider what's currently unfolding around the 737 Max. In the automotive space there's a long history of serious flaws that resulted in loss of life, ranging from faulty airbag deployment systems to flawed designs for ignition systems.

We have precedent for how we qualify and evaluate things for safety: test them across a variety of conditions, accumulate driver-miles or operator-hours and incident frequencies. Then, using that data establish a bar for what constitutes an acceptable level of risk given the utility something provides. If we wanted to ensure nobody ever died in a car accident, we would ensure there were no cars, but collectively we've made a different choice.


We've made a choice to allow people to kill each other in cars from time to time, but that's different from choosing to allow automated cars to kill anybody. Knowing human nature, I don't think the general public will accept double digit automated deaths without an outcry.

Shutting down a plane is completely different from taking an entire class of publicly owned vehicles off the road. People will be furious.

Yes, they will be furious about the deaths and the shutdown, both. Don't forget that people are made up of individuals.


> When the AI malfunctions, are we going to shut down entire classes of vehicles until the problem is confirmed fixed?

Yes, a malfunctioning AI would have to be grounded, just like, for example, the Boeing 737 MAX is now.


Or a car model with severe problems. This rarely (if ever) happens because with that many cars, severe problems tend to be noticed fairly quickly. That shouldn't change with self driving cars. With several million miles driven each day for more popular models, even rare edge cases should appear within days.


> are we going to shut down entire classes of vehicles until the problem is confirmed fixed?

Yes, of course we will. What is the problem with that approach? That is exactly the logical thing to do, and it will be done.


At least the software is fixable. Other humans are not.


At least other humans fear death as a consequence of driving incorrectly. Computers do not.


Some humans, when seeking to end their own lives, end the lives of others: https://en.wikipedia.org/wiki/Germanwings_Flight_9525

That's an extreme example, but automotive suicides that kill other passengers, drivers, or pedestrians fall into the same category. Consider also deaths from accidents involving drunk driving or fatigue -- thousands of motorists take to the roads every day modified in one manner or another that reduces their driving aptitude.

Also, while it may be correct to say that computers don't "fear death", there's no reason that "risk to self" can't be part of the criteria for decision making by an autonomous system.


Too many people still driving around drunk sadly counters that point.


Humans are fixable in that they are held accountable. One person can be taken off the road if they are unfit to drive. (Revoke license, imprison, etc). Then the incident has become Someone's Fault, and society can move on.


I’d personally rather have less death and mayhem than be able to blame someone for it...


That's a fine opinion, but it's the minority. Maybe not in the abstract, but as soon as you have unexplainable deaths (meaning there's nobody to blame), people freak the fuck out.


how do you see insurance working? why would drivers be responsible to have insurance when the car is completely controlled by bigcorp programmers?


I have to have insurance now, even though a lot of the functions of my car are controlled by their software.

Remember when Toyota had that problem of the accelerator "getting stuck" because the software didn't disengage? Initially the owners' insurers were paying out, until it happened often enough that they were able to prove it was Toyota's fault, and then Toyota had to pay them back.

I imagine in a self driving world it would work the same way. You get insurance, the car has a crash, your insurance and the manufacturer fight out whose fault it is.


Sometimes it seems that critics of something like self-driving cars want so badly for the project to fail that they themselves fail to see obvious solutions.

The insurance will work much better than it does today, because insurance at its core is about spreading the risks and calculating the exact costs of those risks; it's about calculating statistics of negative events and predicting the total costs of such events for entire fleets.

ALL parts of that equation are better calculated if all cars are automatic: you can better calculate the number of accidents, you can see the details of every accident because there is black-box data including video, you can compare cars to each other because a Tesla with the same hardware drives in exactly the same way as another one (which cannot be said for human drivers), insurers don't have to account for risky human behaviour such as drinking or driving tired, they can run simulations of the same situation on the same software, etc.

Insurance is not going to have any problems; insurance is going to love it and make a lot of money on self-driving cars -- they are a perfect fit for each other. Insurance companies don't even care whom they have to pay; they just care that the failure statistics are correctly represented and that manufacturers don't lie about those statistics. That is all they care about; they calculate a simple equation, and that's all insurance is about.


Insurers like standards and features that can be easily verified and that improve the predictability of the crowd. The issue with high-tech solutions, especially mono-cultures, is malicious hacking or outages of central services that result in simultaneous failure. An insurer can't handle 50% of cars crashing in the same year.


Insurance would be a nightmare for a manufacturer. Every accident will initially be pegged to the auto maker (as it should be... it's their code!). The auto maker will always try to weasel out and blame the passenger-owners of the car (they didn't maintain it, the paint was dirty and messed with the sensors, the tire pressure was 2 PSI lower than average).

And if you go with the “nobody will own cars, you’ll just summon one” model... well the fleet owner will just sue the manufacturer instead.


Just like Tesla blames dead drivers for using "autopilot." "They should have kept their hands on the wheel and been paying attention." No you can't have a copy of the data.


Seems like it would mostly be the manufacturers that would have to insure the cars, at least for the expensive part (liability)

For me? I'm a self-driving skeptic, but... if the manufacturer was willing to properly insure it, (I mean, a reasonable amount of insurance, at least a statistical life worth) I'd ride in the thing. I think that's an honest signal.


>... if the manufacturer was willing to properly insure it,

It's not just the manufacturers: who is underwriting all that insurance?

Ford sells approximately 2.3M vehicles per year. Imagine if 50% were self-driving... over a 5-year period that's about 5.75M cars... if each one needs to carry a potential $1M policy, that's an incredible amount of liability on someone's balance sheet. (Even if you say the policy is only $100K, that's still about $575B in liability.)

That's only for Ford; add in all vehicle manufacturers and extend that 10-15 years into the future, and that's an incredible amount.

However, there is nothing to say that new laws won't be passed to allow manufacturers to escape liability. Most likely this is what will happen (see vaccine courts, etc.)


But all of those cars are insured (and that insurance is underwritten) today. So the liability already lives on the balance sheet of insurance companies. Maybe the specific companies change...


eh, right now most people are massively underinsured; minimum coverage in California is like $35k, and most insurance companies won't sell you a plan with more than a half million of liability (at least not without an umbrella policy) - if we stop subsidizing driving through pushing costs on to victims of accidents, the cost of driving will go up. But yeah, it should be about the cost of a good umbrella policy+auto policy is now, modulo any savings if the self-driving car gets in fewer accidents.


but note, we're paying most of that already in the form of people who are killed and under-compensated by under-insured drivers. Increasing liability insurance minimums would roll that cost, currently borne entirely by the victim, into the cost of operating a car, which is where it ought to be.


> but it is some incomprehensible black box of weights, I won't be getting in that car.

You don't understand the complex weighted probabilities in your doctor's head either, but you trust them to diagnose cancer (which incidentally machine learning is beating humans at). None of the algorithms in doctors' heads can be formally proved to work in all circumstances, nor can the code that runs medical equipment.

A full understanding of complex systems (machine or human controlled) is not possible today in many domains, that's why we measure results. If the data shows that self-driving cars are safer, we will switch. At present, that's what it shows.

As to special roads/markers, these would make the technology less effective at dealing with the unexpected (crash ahead, moose on road, cyclist in the lane etc), and many of the leading companies don't think they are necessary. I can see cars forming networks which report danger, or adding more sensors, but don't think our roads will have to change for self driving, which will be prevalent within the decade IMO without infrastructure changes.


5g networks could greatly help bridge the technical gap. A good example of unforeseen consequences similar to smartphones and 4g transforming society.


I don't understand how a vehicle or any other real time system can rely on 5G. For example, electrical utilities have such a critical responsibility for matching supply to demand and maintaining exactly 50/60Hz that the landline phone network is not good enough, they have to maintain a private signaling network. Cellular networks are notoriously unreliable with dead zones, dropped calls, congestion, power failure, etc. Millimeter wave 5G is even worse with line of sight coverage zones measured in meters.


There is no one solution that is meant to not fail.

It is layers of redundancy. If one fails, the car continues to operate normally. If all are operating at peak, the car is near perfect. If multiple fail, it operates with somewhat degraded performance, but still markedly better than a human.

* Digital maps
* P2P networks
* Human-reported obstructions and changes (Waze)
* Machine-focused traffic markings
* LIDAR
* Cameras


"I'm going to agree with the side I already agreed with."

... ok.

