I like TST. We use it in our code (FP'ish Kotlin).
But I also like stack traces, as they show me the call stack up to the point where the error happened. That is something TST does not always show: different call stacks can result in the same (or a very similar) TST-stack.
I must say this take seems a bit... grandiose... to me.
People have been banging on about type systems and similarly "better" capabilities for at least 2 decades [0], and Enterprise continues with a stark preference for language "practicality" and a low barrier to entry.
IMO it's because it's best to keep engineers superficially interchangeable rather than having a highly stable system (perhaps stable over spec) and a costly workforce with lots of negotiation leverage. But I digress.
Your exercise can't:
1. Tell me who other people's friends are.
2. Change its output if a spammer stops pinging after 1 hour.
3. Count the number of times a person has pinged in their lifetime.
4. Build a pyramid
5. Output Pi
6. Make me lasagna
For some “can't” statements, you don't need to write any error.
The “can't” statements in CDD are for actions and input that you could take, but the software deliberately prevents you from doing them.
There are no actions or inputs requested by the exercise that would make the items on your list possible, so it does not make sense to create errors for them.
TDD and DDD ultimately tell you which inputs are of interest. CDD doesn't, so it's apples to oranges.
People who think that specifically TDD is a programming technique didn't get the entire memo. It can drive you to design and re-design your app's inputs. But that part of TDD is tacit - hard to teach.
I guess while you're at it, you are also going to tell us how you solved the halting problem then? Why not prove whether P = NP while you're at it. I mean, after all, we just couldn't think of any other reason why P should not equal NP, so it must be equal, mustn't it?
Not being able to quickly think of more test cases for whatever you're trying to test at any given moment does not prove anything. You also seem to be very hung up on something about stack traces that I'm really not able to figure out, even though you keep mentioning it.
I put rose-coloured glasses on the past because back then companies didn't require developers to know more than one programming language. They just used fancy words for simple tasks.
Yes they did. My dad worked in IBM 360 assembler. And in COBOL. And in Natural (yes, that's actually a programming language - https://documentation.softwareag.com/natural/nat426mf/pg/pg-...). And JCL. And I am probably forgetting some. And they had to learn their "Hibernate" and JPA etc. too (CICS comes to mind).
Also, in a similar regard, your article mentions other things, like microservices being the root of all evil, so to speak, because you need to know multiple services.
When in fact microservices are meant to do the opposite. You are supposed to be able to focus on just your service and what it needs to do and make it do it well. Instead of having to wade through a huge monolithic and probably very interwoven code base. We can of course talk about whether microservices achieved that goal.
They are not fundamentally different from having worked in large enterprises 25 years ago where you would've encountered many many different services making up your application landscape. It was probably using SOA and the services were deployed to a bunch of application servers like an IBM WebSphere and those services might be talking to other services incidentally deployed on some JBoss app servers and they'd talk via message passing via a queue (JMS comes to mind but other ways existed).
All four at the same time? Probably not. He might have worked on some feature in COBOL that then required a new channel program (or a change to one) that would have been written in assembler, and at the end of the day write some JCL to get a job to start. Yes.
I constantly write software in Typescript, Java and Kotlin. Some days it's all of them, other days it's only a subset. I don't see an issue with that at all. If you don't have a standard set of languages I would agree with you, but a stack of one frontend and one backend language should not be a problem for anyone that calls themselves full stack.
The SOA point was not about languages any longer. It was about having multiple services that make up an overall feature and app landscape. Also, you seem to have missed the part where I said that it was not in fact HTTP calls in many if not most cases ;)
Completely fine and the kids are growing (up fast).
I fail to see how one has anything to do with the other.
Knowing and applying both Java and Typescript is not an issue at all. What is a pain is CSS, which is why I tell everyone I'm a BE dev. But I happily build vertical slices of functionality where I have full control over FE and BE code and can adjust as I see fit, instead of having to create or follow a rigid interface definition where the BE isn't user-testable when it gets written. What is also a pain is the libraries on the FE, but that's not an issue with knowing multiple programming languages; that's the web dev FE ecosystem in general. A non-full-stack FE developer would have the same pain.
While we did learn a programming language at university to code our exercises in, we didn't learn that language specifically as "this is the language we teach and this is what you will use in your job until you retire". On the contrary we were told and taught about programming concepts, data structures and algorithms and it was expected that you can program in any language. The (mandatory) operating systems course for example would just assume that you learned enough C by yourself to do the programming exercises. We had to write a working memory compactification algorithm for Minix. In the scripting language course (non mandatory) you had to choose any scripting language you wanted and implement a project in it. In the mandatory hardware course we were expected to pick up enough VHDL to implement whatever we learned in the course.
So you're telling me that you don't need to study in your free time? Is that correct?
You can't program in any language, don't fool yourself.
Take, for example, SQL, which is used in all companies. Can you write SQL functions with the same ease as Java? You may ask why you need functions in SQL. Because it is faster to run the algorithm in SQL rather than in Java through an ORM.
I'm not saying I can instantly program at expert level in any language I've never heard of. But yes, I can read most programming languages even without ever having seen them. Take Lua. I had never used Lua, and when I started out I was reading code examples and writing Lua code without first "learning" it at all. I just went and did it. Did I probably do some "stupid things" that an expert would do differently and/or faster? Of course. But I got stuff done and learned as I went along. Same when we introduced Kotlin, for example. I was instantly productive, but someone using Kotlin for a longer time (such as today's me) would "cringe" at how I was writing Kotlin as if it were Java.
Notable exceptions would be languages that have such a weird syntax as Brainfuck ;)
SQL is a great example. I have not been using SQL directly for quite a long time now. But about 20 years ago my bread and butter was SQL and I was writing stored procedures every single working day. I was not writing any Java at that point in time, but I was using two languages other than SQL on a daily basis as well (Perl and Bash).
Look, you are very lucky to have stuck with Java and used your OOP knowledge to move to other programming languages.
But an unexpected change will come and you will need to go back to studying in your free time.
This is always happening in our industry; you were just lucky not to have lived it because you stuck with Java.
You are making assumptions there, even though in this very conversation you have evidence to the contrary.
In a professional capacity over the years I have used Perl, PHP, Bash, SQL (various dialects), Java, Groovy, Kotlin, Scala, Javascript and Typescript. I did not move from Java and OOP knowledge to other programming languages. I started with languages that don't (have to) use OOP. I still work with Typescript (and until recently Javascript) while also working with Java and Kotlin. And in Java and Kotlin there is very little traditional OOP going on.
I guess the point there is to always keep learning and evolving. I learned Kotlin at work, while doing it. I learned Typescript at work, while doing it. I learned about various libraries at work, while using them. Until a few years ago I had never used React, and I have only ever used React at work.
Talk about it. Prolifically. Say that you have no problem working on "that thing" but that it's going to take more time, because you're unfamiliar with that language / library / service or whatever it is that would slow you down.
Then adjust your estimates. If you are working on something completely new that you know nothing about, put in "Spike" tickets to prototype things. Afterwards, put larger estimates on the tasks themselves to cover the "unknown unknowns". When you discover an unknown unknown that can be extracted as a new ticket from what you are currently working on, create a ticket for it to make that fact visible, and adjust your current ticket to say it got extracted so nobody still expects your ticket to do it.
Tickets are not evil, tickets are protection for developers - create them and comment on them, explaining what is going on and what needs to be done. Most companies aren't really "evil" in that way. They just don't know better. If you do happen to be in one of the 'actually evil' ones, go find another company to work for. E.g. if developers are not allowed by process, or are actually prevented by the ticketing software, from just creating tickets and moving things around as necessary: run. Don't walk. Run from that company.
The “while doing it” does not work if you don't have experience with a similar language.
If you don't believe me, try learning Rust while writing a server for your work.
Tickets are useless. Real tickets are unit tests that need to pass for the software to be ready for production.
Tickets cannot protect you, only Odin's switch statement on stack trace can protect developers from bad changes.
I will give you an example: you worked for a company, and on a ticket, you wrote “this code works this specific way and should never be changed”. After 6 months, the company fires you and another senior developer takes your place. The new developer never reads previous tickets, because no one reads old completed tickets, so no one will read the message you left, and they will accidentally change the code.
However, in Odin with unit tests and locking stack traces with switch statements, the code is really protected from these kinds of accidents.
The “while doing it” does not work if you don't have experience with a similar language. If you don't believe me, try learning Rust while writing a server for your work.
I can't try that unfortunately. I have tried to push trying Rust "for whatever next service we need" but nobody took me up on it :shrug:. That said, yes, many moons ago, probably about 25 years actually, I did have a job that paid me to learn how to write a TCP server in Perl that would take commands in a custom plain text protocol to do certain things. It was a drop-in replacement for the same thing written in C by some developer that long since had left the company and nobody knew how it worked or knew C. And no I did not know any C either, nor had I written any servers in Perl at that point (though to be fair I had used Perl to do some quick regex magic).
But I think I'm barking up the wrong tree here anyway, because you seem to be completely set in your thinking that nobody should need to know more than exactly one programming language, that it should probably be Odin, and that anyone thinking or saying anything else is just wrong. Well, good luck to you getting paid and having fun. I definitely know that someone with your attitude towards learning and expanding one's horizons would not last long at my place.
only Odin's switch statement on stack trace can protect developers from bad changes.
I mean, until here it was all "fun and games", but either you've also had fun here leading me on with your little crusade, or you actually believe this. But then I can't help you.
on a ticket, you wrote “this code works this specific way and should never be changed”
Ah I see now. You misunderstood what I was saying about tickets and what they're good for. Ticket != (unit) tests! Tests are what describes how your software should behave.
What I was saying about tickets being protection is that they're protection against "the company" and process and being rushed all the time "because agile". Embrace them. Use them to your advantage. Someone gives you one ticket to do X and do it quickly but you don't know the library or service involved? Convert it into an Epic and extract 10 tickets including some prototyping in spikes.
Odin with unit tests
s/Odin/Any compiled language with good typing/g ;)
You expand your knowledge on useless things like design patterns and frameworks. Odin will make this type of knowledge useless.
Turning simple tickets into epics shows that the company does not have a software architect. It means that you create a monster application full of technical debt left by juniors. If CDD were used in your company, nobody would ever be able to replace a ticket with an epic, because the architect would show you where to start and what steps to take.
This really is something else. Now you are also a psychic. That's it. I'm out of here but feel free to keep living your dream with Odin. I'll stay here in the real world.
What problem do you have with companies requesting that developers know more than one language? If they want a master of multiple, then they are fools. But a web developer, for example, will need proficiency in a few.
First, in the article I wrote about how detrimental it is to a developer's social life to know multiple languages.
Secondly, companies need to realize that they request multiple programming languages because the current programming languages can't parse stack traces, and they push everything to microservices so they can parse their errors from DevOps services. Which means they don't only require you to know multiple languages, but DevOps services as well.
in the article I wrote about how detrimental it is to a developer's social life to know multiple languages.
That is an opinion and not fact. One based on your personal experience that is not universal in any way. Knowing multiple programming languages is quite normal and I would expect any good software developer to pick up and be able to program in (almost) any language. I guess https://en.wikipedia.org/wiki/Brainfuck might be an exception. JCL comes close to it.
Do I expect everyone to be an instant expert in any language? Or able to create the busiest beaver? No of course not! But I do expect any decent programmer to pick up any programming language and work in it.
Do we all have preferences? Heck yes! I absolutely dislike Python and would not take a job where I have to program in it if I wasn't forced to by circumstances. I've never done C# or Lua professionally but it was super easy to pick both up when I dabbled with them for game development in private (each respective game engine used these languages for scripting).
Secondly, companies need to realize that they request multiple programming languages because the current programming languages can't parse stack traces
I read something like that in the article and it still makes no sense whatsoever when I read it here either. Parsing a stack trace in various languages is absolutely possible. But what's that got to do with anything?
FWIW, yes, I've actually looked at previous stack frames (e.g. `e.getCause()` in Java) for error handling. It always felt very very dirty but it was my only choice under the circumstances.
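A minimal sketch of what I mean by that (the helper name and the target exception type are mine, purely illustrative): walk the `getCause()` chain and look for a specific cause type instead of parsing strings.

    // Walk an exception's cause chain looking for a specific cause type.
    // Felt very dirty then, still feels dirty now.
    final class CauseChains {
        static boolean causedBy(Throwable t, Class<? extends Throwable> target) {
            for (Throwable cause = t; cause != null; cause = cause.getCause()) {
                if (target.isInstance(cause)) {
                    return true;
                }
            }
            return false;
        }
    }

Usage would be something like `causedBy(e, java.net.SocketException.class)` right before deciding how to react.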
they push everything to microservices
We have to distinguish the why there.
Some companies (meaning the people within those companies) do it because they read about how Netflix has x-many of them and it's the thing to do today. Stay away from those companies. They will also make you do other things just because someone read an article on forbes.com.
And then there's companies that are actually trying to solve similar problems as the ones that Netflix was trying to solve with microservices and it actually makes sense to model their internal application service landscape that way! We're in that space for example. We have one big legacy "macroservice" aka very monolithic core and then we have a bunch of newer microservices (some larger, some smaller) that actually solve problems. All of these microservices are written in exactly two languages: Java and Kotlin. Newer ones in Kotlin, older ones in Java (like the legacy monolith). The FE is written in exactly one language by now: Typescript (converted from quite a large javascript code base over multiple years).
so they can parse their errors from DevOps services
What does that even mean? Makes no sense to me whatsoever. If you're talking about having a central log analytics service like Splunk, Datadog or Graylog et al., then that has nothing to do with microservices or multiple languages. At a previous place we had a bunch of monoliths making up the overall application, plus Splunk, so that you'd actually be able to reason about what the system overall was doing for any given user transaction / session, and it was awesome to have. And if you have any sort of actual need to run software that has uptime requirements, then you are going to run at least two replicas of whatever service you created, monolith or not, and you want a central log aggregation service.
I'm really sorry to say this, but it seems like your articles are really just ranting about things that you personally don't like for one reason or another but there's no actual real reasoning or universal truth to it.
When you try to implement it, everything I said will make sense to you.
Also, we don't need log systems when there is a programming language like Odin to parse stack traces with type checking (not just strings, like the Java example you gave me).
In microservices you are the error handlers. For example, if in the logs you see a stack trace, then you will go to the code and fix the error.
In Odin I don't need to go to that trouble, because ALL the stack traces can be handled. There will be no undefined behavior in the software or unexpected input that causes an unexpected stack trace, so there is no reason to have logs.
I looked at that and I find it pretty funny. As in, why would I ever build error handling that cares about the specific call stack to handle the error? That makes no sense to me. I'll show you why.
Let's say you have the call stack as this (from your example):
    f4()->
    f3()->
    f1()->
    ErrInvestmentLost
Great, I'll handle this based on the fact that f1 was called by f3 (in the Java example, you'd just inspect e.getCause() until you reach the desired point in the trace - basically do what `printStackTrace()` does, but instead of printing it, do your error handling based on it).
But nobody would ever want to do this because it's super brittle. I change f4 so that it first calls f17, which then calls f3, which then calls f17 again, which calls f1, and your error handling based on the call path is suddenly broken.
What is it that you are trying to even achieve by doing this? Proper error handling doesn't depend on the call path. Proper error handling depends on the type of error that occurred and whether you can actually handle it at all or if you just have to give up and throw the error all the way to where it will get logged for a programmer to take a look at why it happened and why we couldn't handle it.
Your claim about being able to handle "all stack traces" makes no sense to me. You don't handle stack traces. You handle error types.
A real-world example of the above (taking Java as an example again) might be a REST resource. My error handling should not depend on, nor suddenly break just because, someone configured a new filter in the filter chain that sits above the actual resource method. Say someone added an `AuthenticationFilter` that checks if some auth token is present and valid, and that didn't use to be the case. Now any error handling in my resource method that was based on the exact stack trace combination that existed before that filter was added will break horribly.
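To make the filter point concrete, here's a rough sketch (names are hypothetical; javax vs. jakarta and the default init/destroy depend on your servlet version) of the kind of filter I mean. It runs before the resource method and can short-circuit the request entirely, which is exactly why my resource's error handling shouldn't know or care whether it's on the stack.

    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.io.IOException;

    // Sits above every resource method in the filter chain.
    public class AuthenticationFilter implements Filter {
        @Override public void init(FilterConfig cfg) {}
        @Override public void destroy() {}

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            String token = ((HttpServletRequest) req).getHeader("Authorization");
            if (token == null || !isValid(token)) {
                // Short-circuit: the resource method below this filter is never called.
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
                return;
            }
            chain.doFilter(req, res); // otherwise continue down the chain to the resource
        }

        private boolean isValid(String token) { /* hypothetical token check */ return true; }
    }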
Your system with Java will break if someone else adds an AuthenticationFilter, but my system in Odin will not even compile until I have handled all the stack trace paths that include AuthenticationFilter.
Do you see the differences between handling stack traces with union types rather than string?
I don't see how it improves or gives me anything, no.
See, the `AuthenticationFilter` sits outside of my REST resource. I could not care less that someone configured it and that at runtime, based on some configuration that can change without even needing a recompilation, this filter will either be there on the stack or it won't.
My resource does not interact with this filter in any way and when an error happens somewhere down in another method I call, then I don't care that I was called with or without the auth token having been checked by said filter. I might care whether the method I called threw a `SQLException` or a `JSONParseException` but very probably I don't even care about that at all because I can't do anything specific in either case and will just throw it further (i.e. not handle it, other than potentially logging it).
Java actually tried the whole "specify all error situations with checked exceptions and otherwise the code won't even compile" and it failed miserably and you are hard pressed to see anything new derive from `Exception`. Everyone uses `RuntimeException`. It does come at a cost, because now I no longer have the hassle of explicitly knowing and deciding what to do with these exceptions and I may only figure out that a particular type of error can happen once I "see it in the wild" (e.g. in my logs, coz something failed) or I'm lucky enough to have actually read the documentation and handled all the exceptions I wanted to handle.
But that happened precisely because it was just too much to have all your code specify these exceptions when everyone figured out that 99% of all code just threw them further up the stack. You call one new library method that specifies an exception and you suddenly have to adjust 127 other files and the only thing you do is to declare all those methods will also just throw the exception further up the stack.
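For anyone who didn't live through it, a tiny illustration of that ripple (the names are made up):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class CheckedExceptionRipple {
        // One low-level method declares a checked exception...
        static byte[] loadTemplate(String name) throws IOException {
            return Files.readAllBytes(Path.of(name));
        }

        // ...and every caller that has nothing useful to do with it must now
        // either catch it or re-declare it, all the way up the stack.
        static String render(String name) throws IOException {
            return new String(loadTemplate(name));
        }

        static String page() throws IOException { // and so on, file after file
            return render("index.html");
        }
    }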
Odin's way does not mean that you have to do something for every Exception that is thrown to you.
Also, in Odin SQLException cannot exist; it is too generic. But SQLClosedConnException, which is more specific, can give a different story to a stack trace, which you can handle.
For example, in the Authentication_Filter_Error union you will have another union called SQL_Verify_Account_Error, which will contain an SQL_Error enum with the Closed_Conn value.
Imagine your stack trace like this: Authentication_Filter_Error -> SQL_Verify_Account_Error -> SQL_Error.Closed_Conn
Now that you know this can happen (through CDD), you can create a switch statement to catch that specific stack trace and call the system administrator in the middle of the night to check what is happening.
This is how software should handle its errors and there is not even a need to log it.
In your scenario, you wake up, you go to work, everyone is screaming at the office, you check the logs, you see the problem, and then you call the system administrator for the problem.
I agree with half of that. The half that says that most exceptions thrown from libraries today and in much of application code as well are way too generic and hide the details that might allow handling them in a `message` String.
However, that's still about the exceptions thrown from further down the stack, not from the call path part of the "stack trace".
This stack / call path is impossible, because when the AuthenticationFilter notices that the token is invalid, it returns a 401 or 403 or whatever is appropriate and my REST resource is never actually called. There's no SQL being run and very definitely no "connection closed" error occurred.
But let's say there was a distinction made with proper exception types and instead of `SQLException("Connection closed")` and `SQLException("Statement timeout")`, I actually received `SQLException(ConnectionClosedException())` vs. `SQLStatementTimeoutException`. Now, without string parsing, I can know that either the connection just closed or that the statement was aborted due to timeout. If these are checked exceptions, I have to declare that I'm aware these can happen and what I want to do with them: Handle or rethrow.
However, a myriad of such exceptions can happen. I would probably have to declare 20-50 exceptions way up in a REST resource layer. Not only can these two happen, but many other situations on the network or database side and on the JSON parsing side for the payload I receive, some exceptions from my business logic etc.
And for most of these, what can I do? If the connection to the database closed, all I can do is to log the error and return a `500 Internal Server Error` to my caller. Guess what I can do when a statement timeout occurs? I log the error and return a `500 Internal Server Error` to my caller. For a statement timeout I can't even return a `400 Bad Request`, because it's not knowable if the statement timeout occurred because the database was simply overloaded in that moment or if the request itself was created with such parameters as to always cause a statement timeout. Until we see the logs and through investigation figure out that it wasn't a bad request after all anyway. We were missing an index and the table finally grew large enough for that to matter.
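Roughly what that ends up looking like (the exception and helper names here are hypothetical, mirroring the ones above):

    // Hypothetical, more specific exception types instead of a generic
    // SQLException carrying the detail only in its message string.
    class ConnectionClosedException extends Exception {}
    class SQLStatementTimeoutException extends Exception {}

    class AccountResource {
        Object getAccount(String id) {
            try {
                return loadFromDatabase(id);
            } catch (ConnectionClosedException e) {
                // Nothing useful to do here: log it and fail the request.
                return internalServerError(e); // 500
            } catch (SQLStatementTimeoutException e) {
                // Can't tell "overloaded database" from "pathological request"
                // at this point, so the handling is exactly the same.
                return internalServerError(e); // 500
            }
        }

        private Object loadFromDatabase(String id)
                throws ConnectionClosedException, SQLStatementTimeoutException {
            /* run the query */ return null;
        }

        private Object internalServerError(Exception e) {
            /* log e, build a 500 Internal Server Error response */ return null;
        }
    }

Two different, nicely typed errors, and the distinction buys me nothing at this layer.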
So yeah, I'm good with `RuntimeException` and handling only very few specific ones ever.
Also nobody will be screaming when the token is invalid and I definitely don't call any system administrator. That's something you as a developer look into. Same with the statement timeout.
No, you don't need to just log and return 500; you can make the software handle these kinds of errors.
You could make the software call the system administrator and return a message to the user "Try again in an hour, the system administrator is fixing it now"
Or if it is a timeout, the software will call amazon to buy a new machine to scale the database and send a message to the user "Try again in an hour until we scale the system".
Developer's job is to automate error handlers and not be the error handlers.
If you can see the stack trace tree, then you can plan far ahead, but to do that we need to destroy this "agile" mindset, which is always in a hurry and doesn't let you think that far ahead.
you can make the software handle these kinds of errors.
No, you cannot do this in all cases, and I already described multiple cases in my comment in which you can't reasonably do anything automatic. Let me explain.
You could make the software call the system administrator
But why would I do that for every `StatementTimeoutException` at every moment of the night and why do I need to bake that into my error handling? That isn't actually handling the error.
"Try again in an hour, the system administrator is fixing it now"
Please never ever do this to either your users, who may believe it, or your "system administrators", who do value their sleep or have other more pressing matters to attend to.
Or if it is a timeout, the software will call amazon to buy a new machine to scale the database
I described a case in which automatically adding resources would actually be wrong, in that it would completely mask the actual problem: the developer did not think enough about the access patterns of the software they wrote and did not add the right index to the database. If you keep scaling your database automagically, you'll pay AWS until you run out of money and still not have solved anything. Believe me, a missing index can eat up a lot of resources before anything gets better. And until then your software will just keep failing and keep adding resources. And in this case it won't help at all, because your new machine will not even be used by the query. It'll still be only one node handling your read, and that node is constrained by actually reading data from disk, which is super slow because you forgot the index!
Developer's job is to automate error handlers and not be the error handlers.
I've yet to see anything that the developer should do here based on an individual error in the software. One case where something should happen automatically on the database side would be the database running out of space. You should have monitoring in place that takes care of that. And no, it should not be your software doing that scaling because it received a `SQLError -> DatabaseOutOfDiskSpace` error. If it gets that far, all of your calls to the database will fail. Which of your error handlers should be the one handling it, and why should we let things get that far in the first place? Have monitoring set up outside of your actual software that adds disk space automatically before you run out of space, and then make it scream very loudly to your system administrators and developers about it, so that they can take a look and determine whether this was a legitimate "well, I guess we got a lot more paying customers now, this is OK", or whether the last update that went out has a bug that keeps filling up the database with BS data and you need to make an emergency bug fix, or maybe you're deliberately being DDoS'd and it got past your DDoS protection and you'd better do something about that or the DDoS'er is gonna make your AWS bill go crazy.
If you can see the stack trace tree
Again, nothing and nobody cares about up-stack. I care about down-stack when handling such errors.
we need to destroy this "agile" mindset, that is always in a hurry and doesn't let you to think that far ahead.
Nothing to do with agile at all. You should think about error handling every time you code anything. For example, as we've seen above, think ahead about your database running out of disk space. But don't make every developer think about it for every single piece of code they ever write; it makes no sense to have them try to handle these situations everywhere. It'd actually make things worse, and these people would never get any actual work done either.
System administrators are paid to wake up at night to fix things.
Ask your boss: "Do you want your users to wait until the next morning for the system administrator to wake up and fix the issue? Or do you want the software to inform the user within seconds how long it will take to fix the issue, and to start calling the system administrator to wake up and fix it?"
Because you answer based on your preferences as a worker who wants to avoid the extra work of making the system perfect, and not on your boss's preferences.
I guess that settles it. You are conveniently ignoring the cases I described in which it makes absolutely zero sense whatsoever for your software to do anything about this.
And yes, there's an on-call person that does get paged when something happens that likely needs immediate attention. A page for every single time there's any error? Not bloody likely mate.
To pick up your last point: My boss is not in the business of paying for someone who will spend countless extra days building useless error "handling" for stuff that has already been handled, and who is trying to get out of the responsibility of writing resilient software by paging someone else to "pick up the tab".
A "boss" never wants to pay for a perfect system. That would take way too long and nobody has figured out how to actually build that anyway (no, Odin is not the answer). They want to pay for the "slightly less than good enough" system, because that's cheaper and still gets the job done. And especially when I hear you talk here, I'm with them: Perfect is the enemy of good enough. We just have to ensure that it really is good enough and not less (coz many a boss will happily take way less than good enough if it gets them to market faster.
The entire stack, down to the silicon, exploded (multiplied by 3 or 4) in that time.
Vectorization, more cache levels, branch prediction, multiple cores and processors, GPUs, virtual machines (operating systems too, but they didn't change much, so I'll just leave a placeholder for that layer), containers, and all of that in the cloud, distributed databases, frameworks written in a language that needs to be compiled to a language that is actually implemented by the web browser on the client (we are far better on compatibility between browsers, but many libraries and practices remain stuck in the past).
I haven't even touched the shift in practices, which makes it far worse.
Meanwhile in 2024 you could barely begin to even describe any individual layer of the stack with so few words. You'd be out of breath before even beginning to finish describing React.
From what I understood, it seemed like you listed a collection of requirements for TDD failure cases, named them "can't" statements, and then did TDD on them. I believe there is more to it, but I couldn't see it.
Could you list the types of test cases you would have created if you had done TDD on this same problem? And how they'd be different?
In TDD, I would have to iterate over the hello_from procedure 900 times to create the software design.
In CDD, I first created the software design by translating "can't" statements into errors and then iterating over the errors, following specific rules, to create a tree of stack traces; then I did TDD over that design to complete the application.
That is the beauty of Odin: you don't need a book.
If you know Golang then you can understand Odin as well.
Vlang adds a new feature every year and I hate that because it keeps me on my toes (and I am already too old). Also, Odin's memory management is clearer to me.
Jai is not even open source. How can I learn the language if I cannot read its source code?
We are getting fat because we stopped eating legumes and whole grains like we used to.
100 grams of uncooked lentils is only 300 calories and will make you feel full for 12 hours.
Those are not investments because you do it out of obligation and necessity.
You are not allowed to work if you don’t have a bank account.
And you also lose value with these. Cash loses value thanks to inflation and money spent on the NSA.
Banks not only don't pay you for the money you have in the bank, but they also charge you a lot for digital transactions, which they spy on and use.