I arrived at a similar conclusion. I come from Java, where you have exceptions with try/catch clauses and declare them in function signatures. That works fairly well, but it's very hard to replicate and not idiomatic in Go.
Therefore, I created a simple rule: if you don't yet know what the error means to the user, leave it as a fmt.Errorf("xx: %w", err). If you do, wrap it in your own custom ServerError struct and return that type from then on. Don't change the meaning of a ServerError even if you wrap it in another ServerError.
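Roughly what I mean, as a minimal sketch (the ServerError fields and the function names here are just illustrative, not a prescribed design):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // ServerError is a custom error type that carries a user-facing meaning.
    type ServerError struct {
        Code string // e.g. "config_unavailable"
        Err  error  // the wrapped lower-level error
    }

    func (e *ServerError) Error() string { return e.Code + ": " + e.Err.Error() }
    func (e *ServerError) Unwrap() error { return e.Err }

    // readConfig doesn't know yet what the failure means to the user,
    // so it just annotates and wraps with %w.
    func readConfig(path string) error {
        if _, err := os.ReadFile(path); err != nil {
            return fmt.Errorf("read config: %w", err)
        }
        return nil
    }

    // loadSettings does know what the failure means to the user, so it
    // promotes the error to a ServerError, and that type is returned from here on.
    func loadSettings(path string) error {
        if err := readConfig(path); err != nil {
            return &ServerError{Code: "config_unavailable", Err: err}
        }
        return nil
    }

    func main() {
        err := loadSettings("/nonexistent/app.conf")
        var se *ServerError
        if errors.As(err, &se) {
            fmt.Println("user-facing code:", se.Code)
        }
    }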
This test isn't realistic.
These languages work differently: one is garbage collected, the other allocates and deallocates manually. If you don't configure a garbage-collected language correctly, its memory consumption will spin out of control because it simply won't collect. If you had configured the garbage collector aggressively for Java and Go, Go would look like Rust.
Similarly, the power of concurrent programming in Go is that you write non-blocking code the same way you write normal code. You don't have to wrap it in callbacks and pollute the code, and, more importantly, not every coder on the planet knows how to handle blocking code properly; that is the main advantage. Most programming languages can do anything the other languages can do. The problem is that not all coders can make use of it. This is why I see languages like Go as an advantage.
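For illustration, a tiny sketch of what I mean (the URLs are made up): each call is plain blocking code, and it becomes concurrent just by putting "go" in front of it and collecting results on a channel.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // check does an ordinary, blocking HTTP request; nothing about it is
    // written in a special "async" style.
    func check(url string) string {
        client := &http.Client{Timeout: 3 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return url + ": error: " + err.Error()
        }
        defer resp.Body.Close()
        return url + ": " + resp.Status
    }

    func main() {
        urls := []string{"https://example.com", "https://example.org"}
        results := make(chan string)

        // Each call runs concurrently simply by prefixing it with `go`.
        for _, u := range urls {
            go func(u string) { results <- check(u) }(u)
        }

        for range urls {
            fmt.Println(<-results)
        }
    }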
Kotlin embraced the same thing via coroutines, which are conceptually similar to goroutines. It adds a few useful concepts on top, though, mainly the coroutine context, which encapsulates the fact that a tree of coroutine calls needs some notion of failure handling and cancellation. Additionally, coroutines are dispatched to a dispatcher. A dispatcher can run on the same thread or use an actual thread pool, or, as of recent Java versions, a virtual thread pool. There's actually very little point in using virtual threads in Kotlin; they are basically a slightly more heavyweight way of doing coroutines. The main benefit is dealing with legacy blocking Java libraries.
But the bottom line with virtual threads, goroutines, or Kotlin's coroutines is that they indeed allow an imperative style of code that is easy to read and understand. Of course, you still need to understand all the pitfalls of concurrency bugs and all the weird and wonderful ways things can fail to work as you expect. And while Java's virtual threads are designed to work like magic pixie dust, they do have some nasty failure modes where a single virtual thread can end up blocking all your virtual threads. Having a lot of synchronized blocks in legacy code could cause that.
Kotlin is not a language I learned so I will avoid commenting.
However, for me Java is for admin backends or heavyweight services for the enterprises or startups I've coded for, so for my taste I can't use it without Spring or JBoss, etc., and in that world I think simplicity went out the window a long, long time ago :) It took me years to learn all the quirks of these frameworks... and the worst thing about them is that they keep changing every few months...
Kotlin makes a lot of that stuff easier to deal with and there is also a growing number of things that work without Java libraries. Or even the JVM. I use it with Spring Boot. But we also have a lot of kotlin-js code running in a browser. And I use quite a few multiplatform libraries for Kotlin that work pretty much anywhere. I've even written a few myself. It's pretty easy to write portable code in Kotlin these days.
For example, Ktor works on the JVM, but you can also build native applications with it. And I use the Ktor client in the browser. When running in the browser it uses the browser fetch API. When running on the JVM you can configure it to use any of a wide range of Java HTTP clients. On native it uses curl.
While I generally agree with your take that it's a regression in PL design, there's no need to be inflammatory. There's lots of good software written in it.
> pretending Erlang does not exist
For better or worse, it doesn't for most programmers. The syntax is not nearly as approachable as Go's. Luckily Elixir exists.
Probably a waste of time to answer given the long thread here, but the short answer: you can store tokens in a server session, which will manage them for you. If you need to refresh a token, you are redirected to the IdP, get a refreshed token, and it is again stored inside the session. So you can handle any "microservice" scenario, as it was called here; not sure why "micro" is important...

Also, it is a misconception that tokens are not stored on the OIDC provider's side. How are you going to log someone out, invalidate tokens, or simply track devices? They are going to be stored somewhere, and there is nothing wrong with that. It is a matter of scale: if you are not Facebook, the overhead is minuscule, especially with a distributed cache. Again, it is a misconception that this is not already done; e.g., with Keycloak, if you want HA you have to enable the distributed cache. So it is really naive to think that sessions are bad or JWTs are bad. They are simply tools used by protocols, and the only real question is what you prefer, unless you get into edge cases of performance, and unless you are Facebook I would look doubtful if you raised that argument to begin with.
When I started on this codebase, we needed to implement some custom exchange logic that maps very neatly to fanout exchanges and non-durable queues in RabbitMQ and which we hadn't built out in our PostgreSQL layer yet. This was a bootstrapping problem. As I mentioned in the comment, we'd like to switch to a pub/sub pattern that lets us distribute our engine over multiple geographies. Listen/notify could be the answer once we migrate to PG 16, though there are some concerns around connection poolers like pg_bouncer having limited support for listen/notify. There's a GitHub discussion on this if you're curious: https://github.com/hatchet-dev/hatchet/discussions/224.
I use HAProxy with Postgres listen/notify from Go via one of the libraries. It works as long as the connection is up. I.e., I have a 30-minute timeout configured in HAProxy; when it fires, you have to assume you lost sync and recheck. Doing that every 30 minutes isn't that bad... at least for me. You can also configure it to never close the connection...
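A rough sketch of the loop I mean, here using the pgx library (the channel name, DSN, and reconnect handling are just illustrative, not the exact code I run):

    package main

    import (
        "context"
        "log"
        "time"

        "github.com/jackc/pgx/v5"
    )

    func main() {
        ctx := context.Background()

        for {
            // Reconnect loop: if the proxy (e.g. HAProxy) drops the idle
            // connection, assume we lost sync, re-sync state, and re-listen.
            conn, err := pgx.Connect(ctx, "postgres://user:pass@localhost:5432/app")
            if err != nil {
                log.Println("connect failed, retrying:", err)
                time.Sleep(5 * time.Second)
                continue
            }

            if _, err := conn.Exec(ctx, "LISTEN events"); err != nil {
                log.Println("listen failed:", err)
                conn.Close(ctx)
                continue
            }

            for {
                n, err := conn.WaitForNotification(ctx)
                if err != nil {
                    // Connection likely timed out or broke; fall through and reconnect.
                    log.Println("lost connection, will resync:", err)
                    break
                }
                log.Printf("channel=%s payload=%s", n.Channel, n.Payload)
            }
            conn.Close(ctx)
        }
    }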
I have been using Keycloak for quite a while. The main problem I have with it is that you can't get a password-reset link; you have to call an API that sends it for you. In fact, that is how most of the product works. It is very opinionated.
Making it a cluster is also not easy, though I did it and it works ok.
Another issue is that the number of realms has a practical limit. You can spin up a new instance for every ~200 realms, but that is not for me... Instead, I just use it for login and manage roles internally in my app for every tenant<>user<>role mapping, but then I get back to thinking this was overkill... A properly secured login system is difficult, though, and I don't want to deal with that...
I am complaining but what is the alternative? Do it all yourself? It is either too risky or too difficult...
> Forgot Password: If you enable it, users are able to reset their credentials if they forget their password or lose their OTP generator. Go to the Realm Settings left menu item, and click on the Login tab. Switch on the Forgot Password switch.
As I said, very opinionated, meaning you have to do it their way. So, for example, if I want to add Turnstile bot protection to the reset-password screen so my AWS SMTP won't be abused, I have to write a plugin instead of just getting the URL and sending it myself.
Not specifically the topic, but I looked for a Go library and it is not that common; the one I found has <20 stars, too experimental for me.
Also, I'm not sure the PostgreSQL extension is in the main distribution; I couldn't find it if it is. For example, GCP only supports this one IIUC: https://www.postgresql.org/docs/current/uuid-ossp.html
Java has something, but again it's not really clear how well tested it is.
So using this is a bit iffy...
c) UUIDv7 internal (which also encodes the time, badly), UUIDv4/whatever short ID external.
How exactly does this help if you need external IDs (which you usually do today)? It doesn't even give you a short ID.
Even if there is a corner case, are we just saving a few bytes while adding more complication?
A clustered index is a myth in PostgreSQL; it's not practical since you have to run a special command (CLUSTER) to reorder the table. So a regular index might suffer, but not really. Why? Because I am not ordering by the ID most of the time; I am ordering by created date/updated date, or name, or whatever. Who cares about ordering IDs?
WAIT!!! But what about next tokens? OK, these are painful, but easily solved: the next token can be (>= created date, > ID). Same result. Pagination stays the same since it is sorted by created date.
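A sketch of that next-token query from Go, using PostgreSQL's row-value comparison to express the (>= created date, > ID) condition (the table, columns, and DSN are made up for illustration):

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "time"

        _ "github.com/lib/pq"
    )

    // nextPage fetches one page ordered by (created_at, id); the last row of a
    // page becomes the next token, so results stay stable even when many rows
    // share the same created_at.
    func nextPage(db *sql.DB, lastCreated time.Time, lastID string, size int) error {
        rows, err := db.Query(`
            SELECT id, created_at, name
            FROM items
            WHERE (created_at, id) > ($1, $2)
            ORDER BY created_at, id
            LIMIT $3`, lastCreated, lastID, size)
        if err != nil {
            return err
        }
        defer rows.Close()

        for rows.Next() {
            var id, name string
            var createdAt time.Time
            if err := rows.Scan(&id, &createdAt, &name); err != nil {
                return err
            }
            fmt.Println(id, createdAt, name)
        }
        return rows.Err()
    }

    func main() {
        db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // First page: use the zero time and an empty id as the starting token.
        if err := nextPage(db, time.Time{}, "", 50); err != nil {
            log.Fatal(err)
        }
    }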
I understood it as c) only UUID7, no secondary external UUID.
The external ID is used instead of a bigint because you don't want your external users to query 1, then 2, then 3 (IDOR)... but the random part of the UUIDv7 makes this impossible.
UUIDv7 isn't a substitute for created/updated; it's a substitute for the dual UUIDv4 + bigint fields.
I built a port check way back to determine whether services were up. It crashed half the company simply by opening a few TCP connections to the machines.
Ridiculous days :)
I also remember SMB vulnerabilities that stayed unpatched for years on some machines. That was already when Metasploit existed, so you could inject VNC into most Windows hosts on a local network with just a few commands. These days, at least, the patching is super fast.
Even into the late '90s and early 2000s, modems (including ADSL) didn't come with a router; you had to establish a PPPoE connection from your computer, which also meant your home machine was directly on the WAN with no firewall protection.
I can't remember which version of Windows it was, but it must have been 98 or ME: you had to rush to download and install a patch when you first connected it to the internet, before one of these exploits would make it crash.
I never encountered any of this, except that one roommate liked to brag about his expensive win 9x box, and me and another roommate would take turns using our junky linux and nt desktops to “pause” his machine with “ping -f”, usually in the middle of a lecture about how amazingly fast it was.
Later, we had an openbsd router running on an old 386 that we jammed a few old 10MBit 3com cards into (later, Linux, plus $20 ne2000’s).
Those things had 100% uptime other than power outages, ne2000 swaps, and the time I unplugged it after 50 gallons of water ran through it (stayed up, worked fine after I made a new copy of the soggy boot floppy).
Later we ended up with some shitty belkin router, etc. “Unplug it and plug it back in? Really?”
Eventually, I got a WRT54GL (emphasis on the L) which worked for a few years.
Now I’m back on OpenBSD. The only software downtime is due to PG&E power cycling it 100 times, and fsck expecting me to send a “y” over the serial port one of those times. Now it is double battery backed.
It works, but I’m living in fear of the day my PC Engines APU board finally gives up the ghost.
Also, sometimes our starlink’s linux cpu hangs. You’d think they could get that right. It’s not like it’s as hard as building a car, launching rockets, or operating a network that’s used for public safety announcements.
> Even into the late '90s and early 2000s, modems (including ADSL) didn't come with a router; you had to establish a PPPoE connection from your computer, which also meant your home machine was directly on the WAN with no firewall protection.
Even today, modems don't always come with a router. In fact, I like them that way :).
IIRC, the problem in the late '90s/early 2000s was that routers were thought of as only necessary to get multiple computers online, and it was pretty common for households to own only a single desktop. There wasn't enough security consciousness earned through repeated failure, so it "made sense" to connect consumer machines directly to the internet.
We actually had a LAN years before we had broadband, and I set up a PC running Linux as a router to share our 33.6 modem with the household. But before that? The PC dialed directly into the ISP and got a publicly routable IP.
IIRC Windows XP up to SP2 was vulnerable to this. Basically if you ran the install with the DSL modem attached, your PC was compromised even before the end of setup.
When W32/Blaster[0] came out I worked at a small ISP doing tech support and computer repair. A tech and I imaged an old box we had in the corner with a clean XP, assigned it a static IP in our /24, plugged it in and started a stopwatch. It didn’t even make it two minutes before it was infected.
I was working for a small ISP in that time frame, and that's when we started blocking incoming Windows ports. And yeah, it was annoying for the few techie types who tried to run SMB and could actually protect their stuff.
For the other 99.9% of users, it protected both them and us.
Yeah Blaster is one of the few worms I've ever (knowingly) been infected with. As you say, it was literally less than a minute or two between connecting an unpatched box and getting it.
That is really unpleasant... Engineers, companies, and volunteers all worked to build the modern Internet, and then selfish, clever, thieving, control-oriented, militaristic jerks from the WINDOWS world filled it with WINDOWS virus activity to play cheap stealing tricks on unsuspecting people. And you call it "the Internet"... it had nothing to do with "the Internet" and everything to do with the cheap and aggressive culture of BS around WINDOWS at that time.
It would be more fair to criticize the corporate culture at Microsoft in the 90s that led to this situation.
They simply didn't really care. If another OS had been dominant, it is easy to argue that fundamental security issues could have been addressed in a better fashion, had management wanted it to be so.
To wit, this is the same era of computing that spawned OpenBSD. You can't say with a straight face that OpenBSD would have been brought down by oversized ping packets, or would have accepted traffic out of the box the way Windows did.
> I can't remember which version of Windows it was, but it must have been 98 or ME: you had to rush to download and install a patch when you first connected it to the internet, before one of these exploits would make it crash.
Much more than that. With Windows 95, you could send an illegal ICMP with a simple "ping.exe -l 65510 victim.host.ip.address". Your Windows 95 might crash/misbehave after that, but not always.
The receiving end, the destination IP, on the other hand... These would panic, crash, dump, hang or reboot: Windows, MacOS, Netware, AIX, Linux, DEC Unix, Nextstep, OpenVMS, SCO Unix, HP-UX, Convex OS, Solaris.
It was very funny in the very first hours, the little toy Win95 machines obliterating all those big, expensive Unix servers on the network.
That was the precise moment when we started filtering ICMP echo on the routers. Hardly anyone did this before.
Earlier versions of Windows (98? 95?) also used to share things like drives (C$, D$) and printers over the dial-up connection by default.
I remember connecting to a classmate's printer over the internet and printing a page, to his surprise. All you needed was the IP, which was trivial to get from ICQ back in the day.
There was a time when you could SMB mount shares from servers at MS over the public internet (and e.g. do things like download alphas and betas that were not visible on the ftp server).
I remember early Bitcoin exchanges that had everything stolen because they left all of their unencrypted private keys on SMB shares that were left visible to the Internet. IIRC this is what finally took down MtGox, almost 20 years after the release of Windows 95. Some people never learn.
What was more fun was when the spammers figured out 'net send'. I showed it to one guy I worked with, and that thing had a nasty bug: if you got one of the parameters wrong, it would send a message to every computer on the domain. He had to explain to the top guys why they had funny messages on their screens.
When I was at university, and without a fully developed frontal lobe, I thought it was a great idea to test this in the lab.
Ended up creating a "battleship"-like game. Two people, each trying to crash the other's machine. Since the IPs were randomly assigned by DHCP and for some inexplicable reason changed frequently (every day or so), we would be trying to guess what the other machine's IP was.
Given how they were physically arranged, we were able to see the machines blue screening (but not always fully crashing).
Of course, there was a lot of collateral damage, as some machines were in use by people who weren't part of the 'game'. Thankfully, most of the time they didn't fully crash. Most of the time.
Ah, the good ole days of "hey what's your IP address?" followed by typing those four magical numbers into Winnuke and then watching a person just drop off ICQ. Still makes me chuckle. That worked for years.
Oh, I remember those times. There was a guy in high school, two years younger, who one day showed me that he had written a C implementation of WinNuke on the school Unix server, and he was then crashing Windows PCs in the lab for fun.
He was a really smart guy and AFAIK he's been working at Google for a few years now (maybe he's on HN even?)
I remember when cDc released Back Orifice to remotely control Windows machines, ejecting the CD-ROM tray and such [1]. We really have come a long way; 0-days now go for 20 million dollars [2].
I was managing a few labs full of machines used for training on NT4, which meant I was frequently re-imaging them and could use an effective remote-control capability. Back Orifice was, at the time, the absolute best remote admin solution available for free. I could deploy it in the image and then use it to kick off the reimage process, reboot, log out a student, or monitor their screen from the teacher's desk to share on the attached TV. It really was a handy tool for remote admin tasks.
It could also disable certain keys on the victim's keyboard. I did that in the office a bunch; it was hilarious watching co-workers who had no idea what was going on. Perfect for a Monday morning.
I may or may not have known someone who wrote a shell script using the Linux BO client to reset the home pages of Windows machines, in IP ranges belonging to specific foreign countries, to a porn site that paid a dollar for every unique clickthrough.
This person might have earned several hundred dollars a month for several months afterward. But opening their CD-ROM trays could have been fun too. He probably wishes he had thought of that.
My favorite 'thing' in historical Windows was that accessing "C:\con\con" was an instant BSOD (even over file sharing, or even via an image URL pointing to "file://C:/con/con").
I had my Mac exposed on the public Internet around 2021/22, and I expected to be hacked instantly, but nothing actually happened. Times really have changed.
The feeling of being able to chat with friends over nc was pretty powerful, though.