
Go is the only language I've ever felt highly productive working in. Oftentimes in other stacks I find myself in analysis paralysis on meta things that don't matter:

- what design patterns/language features make sense to use

- what is the best lib to accomplish X

- how do you keep things up to date

With Go, the language is so simple that it's pretty difficult to over engineer or write terse code. Everything you need is in stdlib. The tooling makes dependency management and upgrades trivial because of strong backwards compatibility.



Sure, if everything one does is CLI stuff, UNIX daemons, containers, ....

Because in the realm of graphics, GUI, GPGPU, HPC, HFT, ML, game engines, numeric analysis, ... there is hardly any library that really stands out.


Or servers, and just about anything that really benefits from concurrency.

Even a lot of games could be made with Go. The GC wouldn't really kill the frame rate of a game unless you really push it.


That is what "UNIX daemons, containers" means in my comment.

Gamedev in Go is only for those who would rather spend their time building engines from scratch.

Additionally, Go's lack of support for dynamic linking is a no-Go (pun intended) for AAA game studios.


My Go is a little rusty by now, but I thought they supported some type of dynamic linking (although, if I recall correctly, it comes with a number of free footguns).


It does a very crude one, where one is bound to expose C ABI types, all shared objects have to be linked with the same runtime, and there are still issues making even this rather basic support work on Windows, the land of game developers.


Gamedev in Go will require cgo (and not just the easy parts), which ups the complexity quite a bit, unless you're already very familiar with C.

I think it's pretty viable nonetheless, but more for the experienced developer with specific goals outside of the nice parts of common engines, or for a hobbyist who knows the language and wants to tinker and learn.


Sorry, this comment is so incorrect that I have to ask, what are you basing it on?

You can create games today using Go without cgo, and there are numerous examples of shipped games of varying complexity and quality. I do this to ship the bgammon.org client to Windows, Linux and WebAssembly users, all compiled using a Linux system without any cgo.

https://ebitengine.org

https://github.com/sedyh/awesome-ebitengine#games


https://ebitengine.org/en/documents/install.html

For anything other than Windows:

> Installing a C compiler

> A C compiler is required as Ebitengine uses not only Go but also C.

I mean, even on platforms without cgo, is it working magically?

No; it's using https://github.com/ebitengine/purego, which is:

> A library for calling C functions from Go without Cgo.

Like... I mean.... okaaaay, it's not cgo, but it's basically cgo? ...but it's not cgo so you can say 'no cgo' on your banner page?

If you're calling C functions, it's not pure Go.

If it calls some C library and doesn't work on any other platform, it's like 'pure Go, single platform'.

hmm.

Seems kind of like... this is maybe not the right hammer for gamedev; or, perhaps, maybe not quite mature yet...

Certainly for someone on the 'solo dev, pick your tools carefully' team, like the OP, I don't think this would be a good pick, even if they were deeply familiar with Go.


Ebitengine author here.

PureGo is not fully used in Ebitengine for Linux, macOS, and so on yet. You still need Cgo for such environments.


It was based on my own experience (with e.g. sdl2) and, clearly, some ignorance.

I didn't mean to imply that cgo was an insurmountable barrier. But apparently it was a big enough deal for the authors of this engine that they copied large parts of the major API surface over to Go to avoid it. Impressive.

However, AFAICT avoiding cgo means using unsafe tricks and trusting that struct layout will stay compatible. Nevertheless, it's a proven solution and as you say used by many already.


Note that Go has very different GC behavior from what the .NET GC and likely the Unreal GC do. At low allocation rates the pauseless-like behavior might be desirable, but it scales poorly with allocation rate and core count, and as the object graph and allocation patterns become more complex it will start pausing and throttling allocations, producing worse performance[0].

It also has a weaker compiler, which prevents or complicates efficient implementation of performance-sensitive code paths in the way C# allows. It is unlikely game studios would be willing to compensate for that with Go's custom ASM syntax.

Almost every game is also FFI-heavy by virtue of actively interacting with user input and calling out to graphics APIs. Since the very beginning, .NET was designed for fast FFI and had runtime augmentations to make it work well with its type system and GC implementations. An FFI call that does not require marshalling (which is the norm for rendering calls, as you directly use C structs) costs ~0.5-2ns in .NET, sometimes as cheap as a direct C call + branch. In Go (via cgo) it costs 50ns or more. This is a huge difference that will dominate a flamegraph for anything that takes, for example, 30ns to execute and is called in a loop.

It is also more difficult to do optimal threading with Go in an FFI context, as it has no mechanism to deal with runtime worker threads being blocked or spending a lot of time in FFI'd code. The .NET thread pool, by contrast, has a hill-climbing algorithm that scales the active worker thread count (from 1 to hundreds) and blocked-worker detection, which make this a non-issue.

An important mention goes to .NET having a rich ecosystem of media API bindings: https://github.com/dotnet/Silk.NET and https://github.com/terrafx cover practically everything you would ever need to call in a game or a game engine (unless you are on macOS), and do so with great attention paid to making the bindings efficient and idiomatic.

For less intensive 2D games none of these are a dealbreaker. It will work, but unless the implementation details change and Go evolves in these areas, it will remain a language that is poorly suited for implementing games or game engines.

[0]: https://gist.github.com/neon-sunset/72e6aa57c6a4c5eb0e2711e1...


Both Unity and Unreal have GC these days.


And dynamic linking, plugins....

Which Go doesn't do really well, and there is no interest in improving.


Yeah, but their GCs are tuned for the specific task of being a game engine. (Idk the specifics, but I doubt they are stop-the-world GCs, for example.)


wails, raylib, ebiten


> it's pretty difficult to over engineer

I don't know about that. Every programmer's first Go program seems to like to go to channel city. Perhaps more accurately: Over-engineering your Go program is going to quickly lead to pain. It doesn't have the escape hatches that help you paper over bad design decisions like some other languages do.


Also: interface-itis. Someone saw "accept interfaces, return structs" somewhere and now EVERYTHING accepts an interface, whether it makes sense or not. Many (sometimes even all) of these interfaces have just one implementation.


Doing this allows you to mock out that implementation in unit tests.


A lot of times you want to be able to cmd+click on something and see what the hell the code actually does, and not get dead-ended at an interface declaration.


What are you using that can cmd+click to take you to a definition, but can't also take you to an interface implementation? I develop Go in Emacs with the built-in eglot + gopls, and M-. takes me to the definition, C-M-. takes me to the implementation(s). It's a native feature of gopls. Sure, it's one extra button, but hardly impossible.


Sounds like a UI bug more than anything.

The compiler certainly knows how to determine if there is only one implementation of an interface and remove the interface indirection in that case. There is nothing really stopping the cmd+click tooling from doing the same.


Does the compiler do that? That sounds extremely unlikely, especially because an interface with only one implementation can store the nil type tag or a tagged pointer to an instance of that implementation.


The nil interface is another implementation. I mean, unless it is being used as the sole implementation, but I think we can assume that isn't the implementation being talked about given that it isn't a practical implementation. We're talking about where there is one implementation.


Right. Can you cite anything that says that the go compiler does this sort of whole-program analysis to try to prove that a certain argument to a function is always non-nil, so that it can change the signature of that function and the types of variables declared in other functions?


Uh. No. Why would I ever waste my time proving something I said? If I'm right, I'm right. If I'm wrong, you'll be sure to tell me. No reason for my involvement.


If a nil is another implementation then interfaces with a single implementation don't exist.


Given the following, where is the nil implementation found?

    package main

    type FooInterface interface {
        Baz()
    }

    func bar(fizz FooInterface) {
        fizz.Baz()
    }

    type MyFoo struct{}

    func (*MyFoo) Baz() {}

    func main() {
        var foo FooInterface = &MyFoo{}
        bar(foo)
    }


Nil is built-in. You just have to write the code to instantiate it and the compiler gives you one. The coder does not need to create an implementation, it's there for free.

I would not have called it a "second implementation" myself, but that's your claim to defend, not mine.


map is also built-in. Where do you find the hash map in the given program?

By your logic some nebulous package in a random GitHub repository that happens to satisfy an interface is also another implementation, but you would have to be completely out to lunch to think that fits with the topic of discussion.


> map is also built-in. Where do you find the hash map in the given program?

If you told me a type can be optimized because the compiler knows it can only have non-hash-map uses, but I could put that type into a hash map with a single line, I think I would be right to be skeptical.

> By your logic some nebulous package in a GitHub repository that happens to satisfy an interface is also another implementation, but you would have to be completely out to lunch to think that fits with the topic of discussion.

I expect the compiler to have a list of implementations somewhere. I don't know if I can expect it to track if nil is ever used with an interface. I could believe the optimization exists with the right analysis setup but you called the idea of finding a citation a "waste of time" so that's not very convincing.


> but you called the idea of finding a citation a "waste of time" so that's not very convincing.

Not only a waste of time, but straight up illogical. If one wants to have a discussion with someone else, they can go to that someone else. There is no logical reason for me to be a pointless middleman even if time were infinite.

Now, as fun as that tangent was, where is the nil implementation and hash map found in the given program?


You can head over to godbolt.org and see for yourself that changing the value to nil doesn't change the implementation of `bar`, though it does cause `main` to gain a body rather than returning immediately.


The implementation is preexisting. Even if it was directly used, there would not be an implementation in the snippet. So it not being implemented in the snippet proves nothing.

And what do you mean "someone else"? You're the one that said the compiler "certainly knows" how to do that.


> So it not being implemented in the snippet proves nothing.

It doesn't prove anything, but is what we've been talking about. Indeed, there is nothing to prove. Never was. What is it with this weird obsession you have with being convinced by something? Nobody was ever trying to convince you of anything, nor would there be any reason to ever try to. That would be a pointless endeavour.

> And what do you mean "someone else"?

He who wrote the "citation".


> there is nothing to prove. Never was.

What was the point of your question, if not to prove something?

If you were trying to imply that the implementation doesn't exist, that implication was fatally flawed.

If you were asking to waste time, then it worked.

If you had another motive, what was it?

Are we having a 5d chess game? I thought it was a normal conversation.

> He who wrote the "citation".

Nobody? Nobody wrote a citation.

Do you mean the person that asked for a citation? If so, you're wrong. Finding evidence for your own claims would not make you a middleman. They didn't want to have a discussion with someone else, they wanted a discussion with you, and for that discussion to have evidence. Citing evidence is not passing the buck to someone else, it's an important part of making claims.


> What was the point of your question, if not to prove something?

My enjoyment. For what other reason would you spend your free time doing something?

> If you were trying to imply that the implementation doesn't exist, that implication was fatally flawed.

And if I weren't trying?

> If you were asking to waste time, then it worked.

I ask nothing, but if you feel you wasted your time, why? Why would you do such a thing?

> If you had another motive, what was it?

As before, my enjoyment. Same as you, I'm sure. What other possible reason could there be?

> Nobody? Nobody wrote a citation.

There was a request for me to refer another party who was willing to talk about the subject that was at hand – one that you made reference to ('you called the idea of finding a citation a "waste of time"'). Short memory?

> Finding evidence for your own claims would not make you a middleman.

There wasn't a request for evidence, there was a request for a citation. Those are very different things. A citation might provide some kind of pointer to help find evidence, which is what I suspect you mean, but, again, if that's what you seek then you're back to wanting to talk to someone else. If you want to talk to someone else, just go talk to them. There is no reason for me to serve as the middleman.

> it's an important part of making claims.

Nonsense. If my claim does not hold merit on its own, it doesn't merit further discovery. It's just not valuable at all. It can be left at that, or, if still enjoyable, can be talked about to the extent that remains enjoyable.

Perhaps you are under the mistaken impression that we are writing an academic research paper here? I can assure you that is not the case.


It's great that in your reply upthread you actually understood that it was a request for any kind of evidence, including evidence you just created on the spot, but now you pretend not to understand that.


What ever do you mean? There was no change in understanding. You spoke to seeking a proof in addition to a citation, the parent did not originally speak to the proof bit, only to a citation. Entirely different contexts.

In fact, you would have noticed, if you read it, that the "upstream" comment doesn't even touch on the citation at all. It is focused entirely on the proof aspect. While the parent wanted to talk about citations exclusively, at least at the onset. Very different things, very different topics.


Okay so confirmed you're going meta to dodge any actual points being made, and grind any possible progress of the conversation to a halt. Bye!


Confused by academic paper writing confirmed. What led you down that path?


Normal conversations have people back up their claims, dude...

You're getting way more academic with this meta waste.


You could just go to godbolt.org, as others have already said, and as any normal person would do. Evidence is neither here nor there, though. We're talking about citations, which nobody of sound mind does. Why on earth would you have a conversation using someone else's words? That's the stupidest thing I have ever heard of.


Right click > view implementations


yeah gotta use an IDE for that


I agree with your point. OP wrote:

    > Many (sometimes even all) of these interfaces have just one implementation.
They are missing that mocks are the second implementation. (It took me years to see this point.) I would say that in most of my code at work, 95+% of my interfaces only have a single implementation for the production code, but any/all of them can have a second implementation when mocking for unit tests.


> Many (sometimes even all) of these interfaces have just one implementation.


The point of using a mockable interface, even if there's only one real implementation, is to test the behavior of the caller in isolation without also testing the behavior of the callee.

This can be overdone of course, not everything needs this level of separation, but if it makes testing one or both sides easier, then it's usually worth it. It's especially useful for testing nontrivial interactions with other people's code, such as libraries for services that connect to real infrastructure.
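
To make that concrete, here is a minimal sketch of the pattern (all names are invented for illustration):

    // A caller that depends only on a small interface.
    type Notifier interface {
        Notify(userID string) error
    }

    // Register is the caller under test; it never needs to know which Notifier it got.
    func Register(n Notifier, userID string) error {
        // ... real registration work elided ...
        return n.Notify(userID)
    }

    // In the _test.go file, a hand-rolled fake is the "second implementation".
    type fakeNotifier struct{ notified []string }

    func (f *fakeNotifier) Notify(userID string) error {
        f.notified = append(f.notified, userID)
        return nil
    }

A test can then call Register(&fakeNotifier{}, "u1") and assert on the notified slice, with no real notification service involved.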


Did you miss "just one implementation"? A mock is literally defined by being another implementation. If the 'mock' is your sole implementation, we don't call it a mock, that's just a plain old regular implementation.


I think my comment was clear on the distinction between real and mock implementations. If the code was testable with no need for mocks then certainly remove the interface and devirtualize the method calls.


Your comment was clear about mocks, but not why mocks are relevant to the topic at hand. The original comment was equally clear that it was in reference to where there is only one implementation. In fact, just to make sure you didn't overlook that bit amid the other words, the author extracted that segment out into a secondary comment about that and that alone.

Mocks, by definition, are always a supplemental implementation – in other words, they exist where there are two or more implementations. What you failed to make clear is why you would bring up mocks at all. Where is the relevance in a discussion about single implementations the other commenter has observed? I wondered if you had missed (twice!) the "one implementation" part, but it seems you deny that, making this ordeal even stranger.


It is easy to generate mock implementation code (GoMock has mockgen, testify has mockery, etc.) The lack of a hand-rolled mock implementation doesn't mean that much. For example, many people do not like to put generated code under source control. So, just because you don't see a mock implementation right away doesn't mean one isn't meant to be there. Also, the original author of the function that consumed the apparently unnecessary interface type may have intended to test it, but not had the time to write the tests or generate the mocks. If we are going to be this pedantic, I did say "mockable" interface, implying the usefulness and possibility, but not necessarily existence, of a mock implementation.

Since we are examining code we can't see, we can only speak about it in the abstract. That means the discussion may be broader than just what one person contributes to it. If this offends you or the OP, that was not the intent, but in the spirit of constructive discussion, if you find my response so unhelpful, it is better to disregard it and move along than to repeat the same point over and over again.


> It is easy to generate mock implementation code

Not in any reusable way. Take a look at mockgen and testify: All they do is provide a mechanism to push implementation into being defined at runtime by user code. So, if they, or something like it, is in use the implementation is still necessarily there for all to see.

> Also, the original author of the function that consumed the apparently unnecessary interface type may have intended to test it

Okay, sure, but this is exactly what the commenter replied to was talking about initially. What is a repetition of what he said meant to convey?

> That means the discussion may be broader than just what one person contributes to it.

Hence why we're asking where the relevance is. There very well may be something broader to consider here, but what that is remains unclear. Mocking in and of itself is not in any way interesting. Especially when you could say all the very same things about stubs, spies, fakes, etc. yet nobody is talking about those, and for good reason.

> If this offends you

For what logical reason would an internet comment offend?


Agreed. It's not as 'traditional Go' but I find there is way less interface boilerplate if you just pass functions around.

ie instead of

```

type ThingDoer interface { DoThing() }

func someFunction(thingDoer ThingDoer) { thingDoer.DoThing() }

```

just have

```

func someFunction(doThing func()) { doThing() }

```

Then when testing you can just pass a test implementation of the 'doThing' function that just verifies it was called with the expected arguments.
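
In this no-argument case, the test can be as small as (a sketch, in a _test.go file using the standard testing package):

    func TestSomeFunction(t *testing.T) {
        called := false
        someFunction(func() { called = true })
        if !called {
            t.Error("expected doThing to be called")
        }
    }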


Can't the Go compiler statically prove that such single-implementation interfaces are indeed that, and devirtualize the call sites referring to them?

Either way, the problem seems to happen in most languages of today, if they (or their community) ever happen to accidentally encourage passing an opaque type abstraction over a concrete one.


I think it actually does that, but in local contexts, where this analysis is somewhat easy.

I also believe you don't actually have to prove it statically: PGO can collect enough data to e.g. add a check that a certain type is usually X, and follow a slow path otherwise


I understand that it does so when the exact type is observed - a direct call on a concrete type. But I was wondering if it performs whole-program-view optimization for interface calls. E.g. given a simple AOT-compiled C# program:

    using System.Runtime.CompilerServices;

    var bar = new Bar();
    var number = CallFoo(bar);

    Console.WriteLine(number);

    // Do not inline to prevent observing exact type
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int CallFoo(Foo foo) {
        return foo.Number();
    }

    interface Foo {
        int Number();
    }

    class Bar: Foo {
        public int Number() => 42;
    }
On x86_64, 'CallFoo' compiles to:

    CMP byte ptr [RDI],DIL ;; null-check foo[0]
    MOV EAX,0x2a ;; set 42 to return value register
    RET
There is no interface call. In the above case, the linker reasons that throughout the whole program only `Bar` implements `Foo`, therefore all calls on `Foo` can be replaced with direct calls on `Bar`, which are then subject to other optimizations like inlining.

In fact, if we add and reference a second implementation of `Foo` - `Baz` which returns 8, `CallFoo` becomes

    ;; calculate the addr. of Bar's methodtable pointer
    LEA    RAX,[DevirtExample_Bar::vtable]
    MOV    ECX,0x8 ;; set ECX to 8
    MOV    EDX,0x2a ;; set EDX to 42
    ;; compare methodtable pointer of foo instance with Bar's
    CMP    qword ptr [RDI],RAX
    ;; set return register EAX to value of EDX, containing 42
    MOV    EAX,EDX
    ;; if comparison is false, set EAX to value of ECX containing 8 instead
    CMOVNZ EAX,ECX
    RET
Which is effectively 'return foo is Bar ? 42 : 8;'.

Despite my criticism of Go's capabilities, I am interested in how its implementation is evolving. I know it has the feature to manually gather a static PGO profile and then apply it to compilation which will insert guarded devirtualization fast-paths on interface calls, like what OpenJDK's HotSpot and .NET's JIT do automatically. But I was wondering whether it was doing any whole-program view or inter-procedural optimizations that can be very effective with "frozen world single static module" which both Go and .NET AOT compilations are.

EDIT: To answer my own question, I verified the same for Go. Given simple Go program:

    package main

    import (
        "fmt"
    )

    func main() {
        bar := &Bar{}
        num1 := callFoo(bar)

        fmt.Println(num1)
    }

    //go:noinline
    func callFoo(foo Foo) int {
        return foo.Number()
    }

    type Foo interface {
        Number() int
    }

    type Bar struct{}

    func (b *Bar) Number() int {
        return 42
    }
'callFoo' compiles to

    CMP        RSP,qword ptr [R14 + 0x10]
    JBE        LAB_0108ca68
    PUSH       RBP
    MOV        RBP,RSP
    SUB        RSP,0x8
    MOV        qword ptr [RSP + foo_spill.tab],RAX
    MOV        qword ptr [RSP + foo_spill.data],RBX
    MOV        RCX,qword ptr [RAX + 0x18] ;; load vtable slot?
    MOV        RAX,RBX
    NOP
    CALL       RCX ;; call the address loaded from the vtable?
    ADD        RSP,0x8
    POP        RBP
    RET
    LAB_0108ca68                                    XREF[1]:
    MOV        qword ptr [RSP + foo_spill.tab],RAX
    MOV        qword ptr [RSP + foo_spill.data],RBX
    CALL       runtime.morestack_noctxt                 
    MOV        RAX,qword ptr [RSP + foo_spill.tab]
    MOV        RBX,qword ptr [RSP + foo_spill.data]
    JMP        main.callFoo
It appears that no devirtualization of this kind takes place. Writing about this, it makes for an interesting thought experiment: what would it take to introduce a CIL back-end for Go (including proper export of types, and what about structurally matched interfaces?) and AOT-compile it with .NET.

[0]: VMs like OpenJDK and .NET make hardware exception-based null-checks. That is, a SIGSEGV handler is registered and then pointers that need to throw NRE or NPE either do so via induced loads from memory like above or just by virtue of dereferencing a field out of an object reference. If a pointer is null, this causes SIGSEGV, where then a handler looks if the address of the invalid pointer is within first, say, 64KiB of address space. If it is, the VM logic kicks in that recovers the execution state and performs managed exception handling such as running `finally` blocks and resuming the execution from the corresponding `catch` handler.


Yeah too much concurrency and too many channels definitely hit home hard...


I do programming interviews, and I've found candidates struggle a lot with making an HTTP request and parsing the response JSON in Go, while in Python it's a breeze. What makes it particularly hard: the lack of generics, or of a dict data type?


I think it depends on what kind of data you're dealing with. If you know the shape of your data, it's pretty trivial to create a struct with json tags and serialize/deserialize into that struct. But if you're dealing with data of an unknown shape it can be tricky to work with that. In Python because of dynamic typing and dicts it's a little easier to deserialize arbitrary data.

Go's net/http is also slightly lower level. You have to concern yourself directly with some of the plumbing and complexity of making an http request and how to handle failures that can occur. Whereas in Python you can use the requests lib and fire off a request and a lot of that extra plumbing just happens for free and you don't have to deal with any of the extra complexity if you don't want to.
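
For reference, the known-shape case is only a handful of lines; the endpoint and field names below are made up for illustration:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // User mirrors the expected response shape (hypothetical fields).
    type User struct {
        ID   int    `json:"id"`
        Name string `json:"name"`
    }

    func main() {
        resp, err := http.Get("https://api.example.com/users/1")
        if err != nil {
            panic(err) // in real code, handle the error and any non-200 status
        }
        defer resp.Body.Close()

        var u User
        if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", u)
    }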

I find Go to be bad for interviewing in a lot of cases because you get bogged down with minutiae instead of working directly towards solving the exact problem presented in the interview. But that minutiae is also what makes Go nice to work with on real projects because you're often forced into writing safer code


It comes down to how the standard library makes you do things. I don't think there's any reason why a more stringly-typed way of handling JSON (or, indeed, a more high-level way of using HTTP) is outside of the realm of possibility for Go. It's just that the standard library authors saw fit not to pursue that avenue.

This variability is honestly one of the reasons why I dislike interviews that require me to synthesize solutions to toy problems in a very tightly constrained window of time, particularly if the recruiter makes me commit at the outset to one language over another as part of the overall interview process. It's frustrating for me, and, having been on the other side, it's noisy for the interviewer.

(In fact, my favorite interview loop of all time required that I use gdb to dig into a diabolical system that was crashing, along with some serious code spelunking. The rationale was that, if I'm good with a debugger and adept at reading the source that's in front of me, the final third of synthesizing code solutions to problems and conforming to institutional practice can be dealt with once I'm in the door.)


My favourite tech interview (so far) was similar: "here's the FOSS code base we're working on. This issue looks like about the size we can tackle in the time we have. Let's work on this together and solve it".

I got to show how I could grok a code base and work out where the problem was quickly, and work out a solution to the problem, and how I understood how to contribute a PR. Way better than random Leetcode bullshit, and actually useful: the issue was actually solved and the PR accepted.


I'm not a fan of this approach because candidates may see it as a "cheap" way to do actual work without being paid.


I like your story about debugging during an interview. I can say from experience, you always have one teammate that can just debug any problem. I am always impressed to watch and learn new techniques from them.


This has also been my experience, yeah. My interviewers were very interested in watching me rifle through that core dump. (:

Ultimately, it feels to me like selecting for people who both can navigate existing code and interrogate a running system (or, minimally, one that had gone off the rails and left clues as to why) is the right way to go. It has interesting knock-on effects throughout the organization (in, like, say, product support and quality assurance) that are direly understated.


In our case we give some high-level description beforehand (which mentions working with REST apis) and allow candidates to use any language of their choice.

Also in our case the API has typing in form of generated documentation and example responses. I even saw one Go-candidate copying a response into some web tool to generate Go code to parse that form of json.

I can also say that people who chose Java usually have even more problems, they start by creating 3-4 classes just to follow Spring patterns.


I think other languages cause folks to understand JSON responses as a big bag of keys and values, which have many convenient ways of being represented in those languages. When you get to Go and you want to parse a JSON response, it has to be a well-defined thing that you understand ahead of time, but I also think you adapt when doing this more than once in Go.


If I had one complaint, it's the use of 'tags' to configure how JSON is handled on a struct, such that it basically becomes part of the struct's type. It can lead to a fair bit of duplication of structs whose only difference is the JSON handling, or otherwise a lot of boilerplate code with custom marshal/unmarshal methods. In some cases the advice is even to parse the JSON into a map, do the conversion, and then serialise it again!

The case I ran into is where one API returned camelCase json but we wanted snake_case instead. Had to basically create another struct type with different json tags, rather than having something like decoders and encoders that can configure the output.

I like Go and a lot of the decisions it makes, but it has its fair share of pain points because of early design decisions that result in repetitive and overly imperative code, and while that does help create code that is clear and comprehensible (mostly), it can distract attention away from the intended behaviour of the code.


As an aside, you may be interested in some of the ongoing work to improve the Go JSON serializer/deserializer:

https://pkg.go.dev/github.com/go-json-experiment/json


That’s some good news that will hopefully smooth json handling out.


You could wrap it in another struct and use a custom MarshalJSON implementation.
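
Roughly like this (a sketch with invented names, assuming encoding/json is imported):

    // User is the original struct with camelCase tags.
    type User struct {
        FirstName string `json:"firstName"`
    }

    // SnakeUser exists only to re-encode a User with snake_case field names.
    type SnakeUser struct{ User }

    func (s SnakeUser) MarshalJSON() ([]byte, error) {
        // Marshal an anonymous struct carrying the alternate tags.
        return json.Marshal(struct {
            FirstName string `json:"first_name"`
        }{FirstName: s.FirstName})
    }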


    var res map[string]any
    err := json.Unmarshal(data, &res) // data is the raw JSON []byte


Uh huh....and what comes next?

Trying to descend more than a couple of layers into a GoLang JSON object is a mess of casts.


Well, one used to have https://github.com/mitchellh/mapstructure which assisted here, but the lib then got abandoned.



The same thing happens in JS or Python.

If you get a key wrong it throws (panics in Go).


I wasn't talking about getting the keys wrong, but rather the insane verbosity of GoLang - `myVariable := retrievedObject.(map[string]interface{})["firstLevelKey"].(map[string]interface{})["secondLevelKey"].(string)` vs. `myVariable = retrievedObject["firstLevelKey"]["secondLevelKey"]`

"Oh, but that's just how it is in strongly-typed languages" - that may well be true, but we're comparing "JS or python" with GoLang here.


Especially when you're not certain of the type used for numbers.


> I do programming interviews and I found candidates struggling a lot in doing http request and parsing response json in Go while in Python its a breeze, what makes it particularly hard, is it lack of generics or dict data type?

Have you considered that your interview process is actually the problem? Focus on the candidate’s projects, or their past work experience, rather than forcing them to jump through arbitrary leet code challenges.


Making an HTTP request and dealing with JSON data is a weed-out question at best. Not sure if you are interpreting the grandparent comment as actually having them write a JSON parser, but I don't think that's what they meant.


I either had that come up in an interview recently myself, OR it wasn't clear to me that I was allowed to use encoding/json to parse the JSON and then deal with that. I happened to bomb that part of the interview spectacularly because I haven't written a complex structure parser in years, given every language I've used for such tasks ships with proper and optimized libraries to do that.


Well these are not arbitrary, we work with a number of json apis on a weekly basis, supporting the ones we have and integrating new ones as well. This is a basic skill we are looking for, and I don't see it as a "leet code challenge".

Candidates might have great deal of experience debugging assembly code or generating 3d models, but we just don't have tasks like that.


There is a dict-equivalent data type in Go for JSON (it's `map[string]any`), it's just rather counter-intuitive.

However, as a Go developer, I'm one of the people who consider that JSON support in Go should be burnt down and rebuilt from scratch. It's limited, annoying, full of nasty surprises, hard to debug, and slow.


There was a detailed proposal to introduce encoding/json/v2 last year but I don't know how far it's progressed since then (which you probably already know about but mentioning it here for others):

https://github.com/golang/go/discussions/63397


I've done, literally, hundreds and hundreds of coding interviews and an http api was a part of lots of them. Exported vs non-exported fields and json tags are about the only issues I've seen candidates hit in Go and I would just help in those kinds of cases. Python is marginally easier for many.

The problem was Java devs. Out of dozens upon dozens of Java devs asked to hit a JSON API concurrently and combine the results, nearly ZERO, including former Google employees, could do this. JSON handling and error handling especially confounded them.
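
For reference, the Go version of that exercise is roughly the following sketch (Result and the URL list are placeholders; encoding/json and net/http assumed imported):

    // fetchAll hits each URL concurrently and combines the decoded results.
    func fetchAll(urls []string) ([]Result, error) {
        type outcome struct {
            res Result
            err error
        }
        ch := make(chan outcome, len(urls))
        for _, u := range urls {
            go func(u string) {
                resp, err := http.Get(u)
                if err != nil {
                    ch <- outcome{err: err}
                    return
                }
                defer resp.Body.Close()
                var r Result
                err = json.NewDecoder(resp.Body).Decode(&r)
                ch <- outcome{res: r, err: err}
            }(u)
        }
        var all []Result
        for range urls {
            o := <-ch
            if o.err != nil {
                return nil, o.err
            }
            all = append(all, o.res)
        }
        return all, nil
    }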


Hold on, did you just say Go doesn't have a Dictionary data type?

I'm a Javascript, Lua, Python, and C# guy and Dict is my whole world.


It does. https://go.dev/blog/maps

What the poster was alluding to is that you usually prefer to deserialize into a struct rather than a record/dict/map.


Not a programmer, so this is every programmer's chance to hammer me on correctness.

No, Go doesn't have a type named Dict, or Hash (my Perl is leaking), or whatever.

It does have a map type[1], where you can define your keys as one type and your values as another type, and that pretty closely approximates Dicts in other languages, I think.

[1]: https://go.dev/blog/maps


So, these types (and many more) are hash tables.

https://en.wikipedia.org/wiki/Hash_table

They're a very common and useful idea from Computer Science so you will find them in pretty much any modern language, there are a lot of variations on this idea, but the core idea recurs everywhere.


I have a quibble here. A hash table, the basic CS data structure, is not a two-dimensional data structure like a map, it is a one-dimensional data structure like a list. You can use a hash table to implement a Map/Dictionary, and essentially everyone does that. Sets are also often implemented using a hash table.

The basic operations of a hash table are adding a new item to the hash table, and checking if an item is present (potentially removing it as well). A hash table doesn't naturally have a `V get(key K)` function, it only naturally has a `bool isPresent(K item)` function.

This is all relevant because not all maps use hash tables (e.g. Java has TreeMap as well, which uses a red-black tree to store the keys). And there are uses of hash tables besides maps, such as a HashSet.

Edit: the term "table" in the name refers to the internal two-dimensional structure: it stores a hash, and for each hash, a (list of) key(s) corresponding to that hash. Storing a value alongside the key is a third "dimension".


I think I'd want to try to decode into map[string]interface{} (offhand), since string keys can be coerced to that in any event (they're strings in the stream, quoted or otherwise), and a value can hold any valid JSON scalar, array, or object (another JSON sub-string).


That of course works, but the problem is then using this. Take a simple JSON like `{"list": [{"field": 8}]}`. To retrieve that value of 8, your Go code will look sort of like this:

  var v map[string]any
  json.Unmarshal(myjson, &v)
  lst := v["list"].([]any)
  firstItem := lst[0].(map[string]any)
  field := firstItem["field"].(float64)
And this is without any error checking (this code will panic if myjson isn't a json byte array, if the keys and types don't match, or if the list is empty). If you want to add error checking to avoid panics, it gets much longer [0].

Here is the equivalent Python with full error checking:

  try:
    v = json.loads(myjson)
    field = v["list"][0]["field"]
  except Exception as e:
    print(f"Failed parsing json: {e}")
[0] https://go.dev/play/p/xkspENB80JZ


And, if you hate strong typing, there's always map[string]any.


Really, the mismatch is at the JSON side; arbitrary JSON is the opposite of strongly typed. How a language lets you handle the (easily fallible) process of "JSON -> arbitrarily typed -> the actual type you wanted" is what matters.


    > arbitrary JSON is the opposite of strongly typed
On the surface, I agree. In practice, many big enterprise systems use highly dynamic JSON payloads where new fields are added and changed all the time.


Go has had a dict-like data type from the jump; they're called "maps" in Go.

Some of early Go's design decisions were kinda stupid, but they didn't screw that one up.


Go has maps, json parsing and http built in. I'm not exactly sure what this person is referring to. Perhaps they are mostly interviewing beginners?


Go maps have a defined type (like map[string]string), so you can only put values of that type in them. A JSON object with (e.g) numbers in it will fail if you try and parse that into a map of strings.

As others have said, the issue with Go parsing JSON is that Go doesn't handle unstructured data at all well, and most other languages consider JSON to be unstructured data. Go expects the JSON to be strongly typed and rigidly defined, mirroring a struct in the Go code that it can use as a receiver for the values.

There are techniques for handling this, but they're not obvious and usually learned by painful experience. This is not all Go's fault - there are too many endpoints out there that return wildly variable JSON depending on context.


I feel like good JSON handling is sort of table stakes for any language for me these days.

The pain of dealing with JSON in Go is one of the primary reasons I stick mostly with nodejs for my api servers.


> The pain of dealing with JSON in Go is one of the primary reasons I stick mostly with nodejs for my api servers.

Unless you're dealing with JSON input that has missing fields, or unexpected fields, there is no pain. Go can natively turn a JSON payload into a struct as long as the payload's fields recursively match the struct's fields!

If, in any language, you're consuming or generating JSON that doesn't match a specific predetermined structure, you're yolo'ing it and all bets are off. Go makes this particular footgun hard to do, while JS, Python, etc. make it the default.

Default footguns are a bad idea, not a good idea.


This.

In $other_language you'll parse the JSON fine, but then smack into problems when the field you're expecting to be there isn't, or is in the wrong format, or the wrong type, etc.

In Go, as always, this is up front and explicit. You hit that problem when you parse the JSON, not later when you try to use the resulting data.


Go's JSON decoder only cares if the fields that match have the expected JSON type (as in, list, object, floating point number, integer, or string). Anything else is ignored, and you'll just get bizarre data when you work with it later.

For example, this will parse just fine [0]:

  type myvalue struct {
    First int `json:"first"`
  }

  type myobj struct {
    List []myvalue `json:"list"`
  }
  js := "{\"list\": [{\"second\": \"cde\"}]}"
  var obj myobj
  err := json.Unmarshal([]byte(js), &obj)
  if err != nil {
    return fmt.Errorf("Error unmarshalling: %+v", err)
  }
  fmt.Printf("The expected value was %+v", obj) //prints {List:[{First:0}]}
This is arguably worse than what you'd get in Python if you tried to access the key "first".

[0] https://go.dev/play/p/m0J2wVyMRkd


It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)

One of the techniques for dealing with JSON in Go is to not try to parse the entire JSON in one go, but to parse it using smaller structs that only partially match the JSON. e.g. if your endpoint returns either an int or a string, depending on the result, a single struct won't match. But two structs, one with an int and one with a string - that will parse the value and then you can work out which one it was.


> It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)

To me it looks like a footgun: if the parsing failed then an error should have been signalled. In this case, there is no error and you silently get the wrong value.


Yeah, I presented that wrong. It's not actually a failure as such.


> It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)

I do agree that there are good reasons on why this behaves the way it does, but I don't think the reason you cite is good. The implementation detail of generating a 0 value is not a good reason for why you'd implement JSON decoding like this.

Instead, the reason this is not a completely inane choice is that it is sometimes useful to simply not include keys that are meant to have a default value. This is a common practice in web APIs, to avoid excessive verbosity; and it is explicitly encoded in standards like OpenAPI (where you can specify whether a field of an object is required or not).

On the implementation side, I can then get away with always decoding to a single struct, I don't have to define specific structs for each field or combination of fields.

Ideally, this would have been an optional feature, where you could specify in the struct definition whether a field is required or not (e.g. something like `json:"fieldName;required"` or `json:"fieldName;optional"`). Parsing would fail if any required field was not present in the JSON. However, this would have been more work for the Go team, and they generally prefer to implement something that works and be done with it, rather than something that covers every important case.
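
The workaround available today is pointer fields plus a manual check after decoding; it can't distinguish a missing key from an explicit null, but it covers the common case. A sketch (names invented, encoding/json and errors assumed imported):

    type Request struct {
        Name *string `json:"name"` // nil after decoding => key was absent (or null)
        Age  *int    `json:"age"`
    }

    func decodeRequest(data []byte) (Request, error) {
        var r Request
        if err := json.Unmarshal(data, &r); err != nil {
            return Request{}, err
        }
        if r.Name == nil {
            return Request{}, errors.New(`required field "name" missing`)
        }
        return r, nil
    }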

Separately, ignoring extra fields in the JSON that don't match any fields in the struct is pretty useful for maintaining backwards compatibility. Adding extra fields should not generally break backwards compatibility.

> One of the techniques for dealing with JSON in Go is to not try to parse the entire JSON in one go, but to parse it using smaller structs that only partially match the JSON. e.g. if you endpoint returns either an int or a string, depending on the result, a single struct won't match. But two structs, one with an int and one with a string - that will parse the value and then you can work out which one it was.

I have no idea what you mean here. json.Unmarshal() is an all-or-nothing operation. Are you saying it's common practice to use json.Decoder instead?


> I have no idea what you mean here. json.Unmarshal() is an all-or-nothing operation. Are you saying it's common practice to use json.Decoder instead?

No, I mean you create a struct that deals with only a part of the JSON, and do multiple calls to Unmarshal. Each struct gets either populated or left at its zero-value depending on what the json looks like. It's useful for parsing json data that has a variable schema depending on what the result was.
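
Sketched out (a fragment with invented field names, assuming encoding/json and fmt are imported and data holds the response body):

    // The endpoint returns either {"value": 42} or {"value": "forty-two"}.
    type intResult struct {
        Value int `json:"value"`
    }
    type stringResult struct {
        Value string `json:"value"`
    }

    var i intResult
    if err := json.Unmarshal(data, &i); err == nil {
        // numeric variant matched
        fmt.Println("numeric result:", i.Value)
    } else {
        var s stringResult
        if err := json.Unmarshal(data, &s); err == nil {
            // string variant matched
            fmt.Println("string result:", s.Value)
        }
    }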


Umm, you can unmarshal into a map[string]any, you know?

    dataMap := make(map[string]any)
    err = json.Unmarshal(bytes, &dataMap)


You can, but then it's a lot of work to actually traverse that map, especially if you want error handling. Here is how it looks for a pretty basic JSON string: https://go.dev/play/p/xkspENB80JZ. It's ~50 lines of code to access a single key in a three-layer-deep JSON.


It's more like 30 lines of code without the prints. However, one should generally write generic helper functions for this. The k8s apimachinery module has helper functions that are useful for this sort of stuff. Ex: `NestedFieldNoCopy` and its wrapper functions.

https://github.com/kubernetes/apimachinery/blob/95b78024e3fe...

Ideally, such `nested` map helper functions should be part of the stdlib.


Sure, in production you'd definitely want something like that, but the context was an interview exercise, I don't think you should go coding generic wrappers in that context.


It does (it's called a map), and Go does have generics; the previous poster clearly doesn't know what they're talking about.


Go has ruined all other languages for me. I really fell in love with Gleam recently and was trying to implement a fun side project in it. The problem is I really don’t have enough time to learn the intricacies of it, with a startup, two kids, etc. As soon as I have to look at some syntax and really _think_ about what it’s doing every time I look at it, I lose interest. I kept trying and eventually implemented it in Go much faster. And while doing it in Go I kept wishing I could just use actors and whatever to make it simpler but, is it really simpler?


I haven't looked too deeply into it but I came across https://github.com/ergo-services/ergo not too long ago and thought it could be pretty interesting to try using OTP in Golang

Packaging a Go service in Docker and dumping it into k8s is probably the easier/better understood path but also deploying Go services onto an Erlang node just sounds more fun


Yep.. say you wanted to make a simple http service that needs to

* request a json.gz file from another HTTP service

* decompress it

* deserialize the json, transform it a bit

That's net/http (and maybe crypto/tls), compress/gzip, encoding/json. I need to make zero decisions to get the thing off the ground. Are they the best libraries in the world for those things? No, but they will work just fine for almost every use case.
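
A sketch of the whole thing with just the stdlib (the URL and payload shape are invented):

    package main

    import (
        "compress/gzip"
        "encoding/json"
        "log"
        "net/http"
    )

    type Payload struct {
        Items []string `json:"items"`
    }

    func main() {
        resp, err := http.Get("https://example.com/data.json.gz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        // Decompress the body stream, then decode JSON straight from it.
        gz, err := gzip.NewReader(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        defer gz.Close()

        var p Payload
        if err := json.NewDecoder(gz).Decode(&p); err != nil {
            log.Fatal(err)
        }
        log.Printf("decoded %d items", len(p.Items))
    }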


Sounds like

    curl ... | jq ...
to me!

Not saying you shouldn't use Go for that problem, in a particular context, but it does drive home how much of programming is glue ... there are combinatorial amounts of glue, which is why JSON, HTTP, compression, etc. end up being part of so many problems.


There’s a big difference between building something on curl and jq and building something using a language’s standard library.

Everything is just bits at the end of the day. Just about anything can do anything.


I feel the same.

Especially in the UI/UX world, when you want to just start building a demo, you're paralyzed by dozens of build toolchains in between.

Wanna get started with a starter template? Tough luck, it hasn't been updated for a year, so it takes even longer.

In Go, everything is opinionated and unified upstream. Conventions matter, because they allow efficiency and reuse of patterns and architectures.



