The way I like to think about it is that with immutable data as the default and pure functions, you get to treat the pure functions as black boxes. You don't need to know what's going on inside, and the function doesn't need to know what's going on in the outside world. The data shape becomes the contract.
As such, "localized context, everywhere" is perhaps the best way to explain it from the point of view of a mutable world. At no point do you ever need to know about the state of the entire program; you just need to know the data and the function. I don't need the entire program up and running in order to test or debug this function. I just need the data that was sent in, which CANNOT be changed by any other part of the program.
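To make that concrete, here's a minimal Haskell sketch (all names made up): the function can be tested with nothing but a value, no running program required.

    -- A pure function: the data shapes in and out are the whole contract.
    data Order = Order { quantity :: Int, unitPrice :: Double }

    orderTotal :: Order -> Double
    orderTotal o = fromIntegral (quantity o) * unitPrice o

    -- Testing or debugging needs only a value, never a running program:
    -- orderTotal (Order 3 10.5)  ==>  31.5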
Sure, modularity, encapsulation, etc. are great tools for making components understandable and maintainable.
However, don't you still need to understand the entire program, as ultimately that's what you are trying to build?
And if the state of the entire program doesn't change - then nothing has happened. I.e. there still has to be mutable state somewhere - so where is it moved to?
In functional programs, you very explicitly _do not_ need to understand an entire program. You just need to know that a function does a thing. When you're implementing a function -- sure, you need to know what it does. But you're defining it in such a way that the user should not need to know _how_ it works, only _what_ it does. This is a major distinction between programs written with mutable state and those written without. The latter is _much_ easier to think about.
I often hear from programmers that "oh, functional programming must be hard." It's actually the opposite. Imperative programming is hard. I choose to be a functional programmer because I am dumb, and the language gives me superpowers.
I think you missed the point. I understand that if you're writing a simple function with an expected interface/behaviour, then that's all you need to understand. Note this isn't something unique to a functional approach.
However, somebody needs to know how the entire program works - so my question was: where does that application state live in a purely functional world of immutables?
It didn't disappear; there's just less of it. Only the stateful things need to remain stateful. Everything else becomes single-use.
Declaring something as a constant gives you license to only need to understand it once. You don't have to trace through the rest of the code finding out new ways it was reassigned. This frees up your mind to move on to the next thing.
> Only the stateful things need to remain stateful.
And I think it is worth noting that there is effectively no difference between “stateful” and “not stateful” in a purely functional programming environment. You are mostly talking about what a thing is and how you would like to transform it. Eg, this variable stores a set of A and I would like to compute a set of B and then C is their set difference. And so on.
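As a rough sketch of how literally that reads in Haskell (types and values invented for illustration):

    import qualified Data.Set as Set

    -- "this variable stores a set of A, I'd like to compute a set of B,
    --  and C is their set difference" -- stated directly, no state to track:
    a :: Set.Set Int
    a = Set.fromList [1, 2, 3, 4]

    b :: Set.Set Int
    b = Set.map (* 2) a           -- derive B from A

    c :: Set.Set Int
    c = Set.difference a b        -- C = A \ B  ==>  fromList [1,3]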
Unless you have hybrid applications with mutable state (which is admittedly not uncommon, especially when using high performance libraries) you really don’t have to think about state, even at a global application level. A functional program is simply a sequence of transformations of data, often a recursive sequence of transformations. But even when working with mutable state, you can find ways to abstract away some of the mutable statefulness. Eg, a good, high performance dynamic programming solution or graph algorithm often needs to be stateful; but at some point you can “package it up” as a function and then the caller does not need to think about that part at all.
And what about the state that needs to exist - like application state (for example, this text box has state in terms of the text entered, cursor position, etc.)?
Where does that go?
Are you creating a new immutable object at every keystroke that represents the addition of the latest event to the current state?
Even then you need to store a pointer to that current state somewhere right?
It's moved toward the edges of your program. In a lot of functional languages, places that can perform these effects are marked explicitly.
For example, in Haskell, any function that can perform IO has "IO" in the return type, so the "printLine" equivalent is "putStrLn :: String -> IO ()" (I'm simplifying a bit here). The result is that you know that a function like "getUserComments :: User -> [CommentId]" is only going to do what it says on the tin - it won't go fetch data from a database, print anything to a log, spawn new threads, etc.
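A small, hypothetical sketch of how that split looks in practice (none of these functions are real library code):

    -- Hypothetical types and functions, just to show the split:
    newtype User = User String
    newtype CommentId = CommentId Int

    -- Pure: the type promises "no effects whatsoever".
    defaultComments :: User -> [CommentId]
    defaultComments _ = [CommentId 1, CommentId 2]

    -- Effectful: the IO in the type is the permission slip.
    fetchComments :: User -> IO [CommentId]
    fetchComments u = do
      putStrLn "pretend this hits a database"
      pure (defaultComments u)

    -- Calling fetchComments from inside defaultComments would be
    -- a type error, so effects can't sneak into pure code unnoticed.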
It gives similar organizational/clarity benefits as something like "hexagonal architecture," or a capabilities system. By limiting the scope of what it's possible for a given unit of code to do, it's faster to understand the system and you can iterate more confidently with code you can trust.
You are very right in that things need to change. If they don't, nothing interesting happens and we as programmers don't get paid :p. State changes are typically moved to the edges of a program. Functional Core, Imperative Shell is the name for that particular architecture style.
FCIS can be summed up as: R->L->W where R are all your reads, L is where all the logic happens and is done in the FP paradigm, and W are all your writes. Do all the Reads at the start, handle the Logic in the middle, Write at the end when all the results have been computed. Teasing these things apart can be a real pain to do, but the payoff can be quite significant. You can test all your logic without needing database or other services up and running. The logic in the middle becomes less brittle and allows for easier refactoring as there is a clear separation between R, L and W.
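A minimal Haskell sketch of the R->L->W shape, with invented file names and functions:

    -- L: pure logic, testable without any services running
    parseOrders :: String -> [Int]
    parseOrders = map read . lines

    summarize :: [Int] -> String
    summarize xs = "total: " ++ show (sum xs)

    main :: IO ()
    main = do
      input <- readFile "orders.txt"                -- R: all reads up front
      let report = summarize (parseOrders input)    -- L: pure middle
      writeFile "report.txt" report                 -- W: all writes at the end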
For your first question: yes, and I might misunderstand the question, so give me some rope to hang myself with, will ya ;). I would argue that what you really need to care about is the data that you are working with. That's the real program. Data comes in, you do some type of transformation of that data, and you write it somewhere in order to produce an effect (the interesting part). Where FP becomes really powerful is when you have data that always has a certain shape, and all your functions understand and can work with that shape. When that happens, the functions start to behave more like lego blocks. The data shape is the contract between the functions, and as long as they keep to that contract, you can switch out functions as needed.

So, to answer the question: yes, you do need to understand the entire program, but only as the programmer. The function doesn't, and that's the point. When the code inside a function doesn't need to worry about the state of the rest of the program, you can reason about the logic inside it without worrying that some other part of the program will do something at the same time that messes it up.
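A toy Haskell illustration of the lego-block idea (the Doc type and functions are made up): every step shares the same shape, so steps can be swapped, reordered, or dropped without breaking the contract.

    import Data.Char (toUpper)

    -- The shape is the contract: anything of type Doc -> Doc clicks in.
    data Doc = Doc { title :: String, body :: String }

    upcaseTitle :: Doc -> Doc
    upcaseTitle d = d { title = map toUpper (title d) }

    trimBody :: Doc -> Doc
    trimBody d = d { body = take 100 (body d) }

    -- Swap, reorder, or drop steps; the contract never changes:
    pipeline :: Doc -> Doc
    pipeline = trimBody . upcaseTitle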
Debugging in FP typically involves knowing the data and the function that was called. You rarely need to know the entire state of the program.
I'm trying to work out in my head if it helps with the true challenge of programming - not writing the program in the first place, but maintaining it as requirements evolve.
The examples for functional programming benefits always seem to boil down to composable functions operating on lists of stuff where the shape has to be the same or you convert between shapes as you go.
It's very useful, but it's not a whole program - unless you have some simple server-side data processing pipeline - and I'd argue those aren't difficult programs.
Programming gets difficult when you have to manage state - so I accept that the parts that don't have to do that are much simpler. However, you have just moved the problem, not solved it.
And you say you've moved it to the edge of the program - that's fine with a simple in -> function -> out, but in the case of a GUI, isn't state at the core of the program?
In that case isn't something with a central model that receives and emits events, easier to reason over and mutate?
Even the GUI can follow the FCIS architecture. It helps immensely with testing and moving things around.
For a bigger program that handles lots of things, you can still build it around the FCIS architecture, you just end up with more in -> chains of functions -> out. The things at the edges might grow, but at a much slower pace than the core.
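For instance, here's a rough Elm-architecture-style sketch in Haskell (Model, Event, and everything else invented): the answer to "a new immutable object per keystroke" is yes, and the "pointer" to the current state is simply the argument of the recursive event loop.

    data Model = Model { contents :: String, cursor :: Int }

    data Event = KeyPressed Char

    -- The core: pure and trivially testable.
    update :: Event -> Model -> Model
    update (KeyPressed c) m =
      Model { contents = contents m ++ [c], cursor = cursor m + 1 }

    -- The shell: the only place that touches the outside world.
    -- The "current state" is just the argument threaded through the loop.
    loop :: Model -> IO ()
    loop m = do
      c <- getChar
      let m' = update (KeyPressed c) m   -- new immutable Model per keystroke
      putStrLn (contents m')             -- render
      loop m'

    main :: IO ()
    main = loop (Model "" 0)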
My experience with both sides is what's driven me to FP+immutability.
For your last question: I believe it's a false belief. I believed the same when I started with FP+immutability; I just did not understand where I should put my changes, because I was so used to mutating a variable. It turned out that I only really need to mutate when I store something in a db of some sort (frontend or backend), send it over the wire (socket, websocket, http response, gRPC, pub/sub, etc), or have an object hiding inherent complexity (hardware state like push buttons, mouse, keyboard, etc). Graphics would also qualify, but that's one area where I think FP+immutability is ill suited.
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.
Of course not; that's impossible. Modern programs are way too large to keep in your head and reason about.
So you need to be able to isolate certain parts of the program and just reason about those pieces while you debug or modify the code.
Once you identify the part of the program that needs to change, you don't have to worry about all the other parts of the program while you're making that change as long as you keep the contracts of all the functions in place.
> Once you identify the part of the program that needs to change,
And how do you do that without understanding how the program works at a high level?
I understand the value of clean interfaces and encapsulation - that's not unique to functional approaches - I'm just wondering in the world of pure immutability where the application state goes.
What happens if the change you need to make is at a level higher than a single function?
Yes, obviously a program with no mutability only heats up the CPU.
The point is to determine the points in your program where mutation happens, and the rest is immutable data and pure functions.
In the case of interacting services, for example, mutation should happen in some kind of persistent store like a database. Think of POST and PUT vs GET calls. Then a higher level service can orchestrate the component services.
Other times you can go a long way with piping the output of one function or process into another.
In a GUI application, the contents of text fields and other controls can go through a function and the output used to update another text field.
The point is to think carefully about where to place mutability into your architecture and not arbitrarily scatter it everywhere.
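As a small illustration of that piping idea (hypothetical functions, in Haskell):

    import Data.Char (toUpper)

    normalize :: String -> String
    normalize = unwords . words     -- collapse runs of whitespace

    shout :: String -> String
    shout = map toUpper

    render :: String -> String
    render s = "<p>" ++ s ++ "</p>"

    -- Each step is pure; mutation never enters the chain:
    display :: String -> String
    display = render . shout . normalize
    -- display "  hello   world "  ==>  "<p>HELLO WORLD</p>"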
A pretty basic example: I write a lot of data pipelines in Julia. Most of the functions don't mutate their arguments, they receive some data and return some data. There are a handful of exceptions, e.g. the functions that write data to a db or file somewhere, or a few performance-sensitive functions that mutate their inputs to avoid allocations. These functions are clearly marked.
That means that 90% of the time, there's a big class of behavior I just don't need to look for when reading/debugging code. And if it's a bug related to state, I can pretty quickly zoom in on a few possible places where it might have happened.
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.
Depends on what I'm trying to do. If what I'm trying to handle is local to the code, then possibly not. If the issue is what's going into the function, or what the return value is doing, then I likely do need that wider context.
What pure functions do allow is certainty that the only things that can change the behaviour of a function are its inputs.
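In code, that certainty looks like this (a trivial, made-up example):

    -- Same inputs, same output, always: nothing else can reach in.
    area :: Double -> Double -> Double
    area w h = w * h
    -- 'area 2 3' is 6.0 wherever it appears; no hidden clock,
    -- config, or global state can change that.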
'vi'
'emacs'
'jove' -- use whatever editor floats your boat
but learn the hell out of it
you should know EVERY command in your editor
Those who picked emacs from that list never got to the point of writing any code for the MUD. They greatly contributed to OS development all over the world, however.
There is a functional core in js, so it might be worth taking a look at common OOP design patterns and how they translate to an FP approach.
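For example, the classic Strategy pattern collapses to "pass a function" in FP; a made-up sketch (in Haskell rather than js, to match the rest of the thread):

    -- Strategy pattern, FP style: the "strategy object" is just a function.
    type Discount = Int -> Int      -- price in, price out

    flat :: Discount
    flat price = price - 5

    percentOff :: Int -> Discount
    percentOff pct price = price * (100 - pct) `div` 100

    checkout :: Discount -> Int -> Int
    checkout strategy price = strategy price

    -- checkout (percentOff 10) 100  ==>  90
    -- checkout flat 100             ==>  95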
Yes, so in that sense it's sharded. In another sense it's all one server, as you can warp between different systems and meet all the players there in the game. Everything is run on one giant server.
Not sure how it is now, but back in 2014-2016 you could inform them of big battles ahead of time for a specific system, at which point they would, in their own words, reinforce the node (moving that particular system to its own allocation of resources). This often was the difference between being able to duke it out in an epic space battle or wait for the system to load while everyone lagged to death.
Used to play EVE a long time ago in a known alliance. The game mechanics were not that good. What made the game shine was that it's an extremely complex sandbox game where almost anything goes. This leads to a game that is almost entirely governed by how the other players react to your interactions with them. The drama, dreams, hopes, and despair that come with playing the game are the real reward.
It used to be that the game was really harsh when you screwed up, which made the adrenaline pump when you did something you knew was dangerous. For me as a hardcore gamer at the time, that was the initial kick. Nowadays I believe they have lessened the harshness a bit, but it should still be one of the few games where you can lose literally hundreds of hours of investment by losing everything you own in the game.
This is a really important part of figuring out if you want to get into EVE. CCP are terrible game designers. They're a "throw shit at the wall and see what sticks" kind of game design studio. The market is a graveyard of items and associated mechanics that are design failures and are rarely, if ever, relevant. They keep adding new mechanics that are broken in both balance and intended purpose. Their ship balancing team is two people.
But somehow this pile of crap mechanics, by the sheer mass of them, makes for an emergent sandbox. It doesn't feel designed the way so many other video games do. Systems that are not meant to interact with each other do so in unexpected ways. Don't go in expecting every career path in EVE to have thought behind it; you have to figure that out for yourself.
I have to imagine that the social element is what makes it addicting, since I found it to be a rather constricted slog particularly if you aren't sociable. DF/Rimworld sort of accomplishes the complex-sandbox thing for solo players, minus the drama.
A combination of bad PR from the Sega Saturn, Nintendo fanboyism over Sega, and Sony absolutely killing it with the PlayStation 1 and hyping the PlayStation 2 to pieces led to few game studios making games for the Dreamcast, which led to fewer sales, etc.
I owned one, and absolutely loved it. But the number of games you could buy compared to the alternatives was a massive drawback, and Sega didn't have enough household names the way Nintendo did to pull through.
There were cheaper players by LG and Philips at the time, but more importantly the PS3 was the fastest to boot up and play a Blu-ray disc. Boot to playback times on 2006-era standalone Blu-ray players were abysmal, anywhere between 2 to 3 minutes.
What really hurt adoption in the early days was the Blu-ray versus HD DVD format war, not the players' startup performance. Consumers just sat it out and stuck with DVD, which still looked pretty good with upscaling players and anamorphic movies having become the norm.
Blu-ray has been a successful format, not at all like what happened to LaserDisc. Given the increase in the speed, reliability, and availability of broadband over the last decade (hence streaming) plus the aforementioned acceptability of DVD, Blu-ray's window of opportunity and overall potential were considerably narrower and more limited than DVD's.
Yes, but not right away. It took a while for the knowledge and processes to become established. Piracy wasn't what sank the Dreamcast. By the time piracy became commonplace, roughly mid-late 2000, the DC was already pretty obviously going to lose in the market to both the PS1 and upcoming PS2.
Casual piracy was significantly easier on the DC, but not because GD-ROMs were easy to copy. What was easy to do was to convince the system to boot from a standard CD, and it turns out some games didn't take up a whole GD-ROM or could have their textures easily replaced with lower-res or better-compressed versions, allowing them to fit on a self-booting CD.
Kind of? The 1ST_READ.BIN had to be re-scrambled to run off of CD-ROM, but this was trivial after the Utopia leak.
However, if this were the reason for the demise of the console, we would expect large-volume sales of the (loss-leader? or close to it?) console, and limited game sales.
Instead, unfortunately for Sega and ultimately, everyone, we saw limited sales of both the Dreamcast console and its games - a sign that the console itself was simply defeated by the PS2, rather than piracy.
Another argument here is that the PS2 suffered from a similarly trivial "swap magic" exploit just after release, where, as long as the disc drive never registered a disc ejection, running code could simply be switched out for another piece of running code.
By the time Dreamcast piracy was commonplace, the console was already cancelled.
It may be a minor quibble, but I found the DC controller to be rather hard on the hands. The edges were just sharp enough to be uncomfortable after a good round of Marvel vs. Capcom.
I quite liked the DC controller. Never felt the comfort issues you’ve reported on (but everyone’s hands are different).
That said, I do agree with your point about the fighting games. They’re definitely played better with arcade sticks (same is true for beat em ups on most systems though).
The Capcom fighting games on Dreamcast were arcade quality at the time. Absolutely stunning. My favorite was Marvel vs Capcom. I wasn't very good at it, but I was mesmerized by the graphics and great overall UX.
IIRC the Dreamcast and the Naomi arcade system are very similar. The arcade systems have about twice the RAM and can run from ROM cartridges as well as optical media (read once on boot into a dedicated disc cache), and there are some variants with interesting I/O, but there wasn't a difference in compute or GPU capabilities.
Notably Soul Reaver 2 was aiming for a Dreamcast release but then moved to be a PS2 exclusive, and there was a fully playable port of Half-Life that never saw an official release.
I've played the Half-Life port; it's playable but had some fairly major issues. The framerate, controls, and load times were all pretty bad. The save files would become larger and larger (and took longer) as you progressed through each level to the point that in some sections a single save would occupy well over half of the VMU's capacity.
> The save files would become larger and larger (and took longer) as you progressed through each level to the point that in some sections a single save would occupy well over half of the VMU's capacity.
Windows CE as an OS has a lot of peculiarities. For an essentially single-tasking game console, most of them are probably irrelevant, but on the other hand these peculiarities would probably make a straight port of a typical DirectX game from desktop Windows an interesting endeavor.
That is not true. It has been debunked so many times.
Sega and their poor decisions at the end of the Genesis era (add-ons that cost consumers hundreds of dollars, with support dropped soon after) and the Saturn days (a surprise early launch that angered developers, consumers, and retailers enough that some never carried the Saturn) led to the death of the Dreamcast.
Statistical data showed that people were not buying the Dreamcast even though the games were easily pirated. Therefore piracy had no real bearing on the console's fate.
The Dreamcast needed sales, and Sony's hype machine plus SEGA's past reputation led consumers to avoid the Dreamcast.
That was the Saturn you're thinking about. The Saturn had two sprite-based processors and did 3D by skewing those tiles. This also presented other problems, like with transparency (morphing squares into triangles causes problems with alpha blending). Sega's arcade boards also worked that way at the time, so Sega engineers were well versed in writing 3D engines like that, but the rest of the development community had settled on the now-standard approach of triangles. Couple that with the lack of an SDK and a dual-processor system in an era before developers were used to writing for such hardware, and you had a very problematic console.
The Dreamcast, however, ran a PowerVR2 chip which was much more familiar for anyone with prior dev experience.
Perversely, the PlayStation 2 was more complex due to custom hardware like the "Emotion Engine". But Sony already had enough momentum with developers and consumers that any such difficulties never became game-changing.
As many have pointed out it was very easy to develop for.
The developers of Dead or Alive 2 (Tecmo), one of the best-looking 3D fighters on the Dreamcast, stated that developing on the Dreamcast was like writing a sentence with a pen, whereas on the PlayStation 2 it was like writing a sentence with a brush.
People are still developing games for it using open source libraries. There was recently a 3D racing game released.
Was it? That’s not something I’d heard before (unlike, say, the PS2/3, Saturn, or N64, where complaints like that are common). What made the DC hard to wrangle?
Welcome to the world of being a parent. It can be frustrating, perplexing and mind-bending. Few things compare to those little arms snug around your neck, the head leaning on your shoulder, all in complete trust that you love them and wish them the best.