Dr. Valter Longo and his lab have done extensive research on similar topics, and it has made its way into an easily digestible book called The Longevity Diet. It changed my perspective on food in a big way.
Highly recommend this book for folks who are interested in longevity.
Dr. Longo has also done a few podcasts with Dr. Rhonda Patrick (who might be better known to the audience here) that you can check out.
The longevity diet resembles a modified vegan diet in which certain seafood and small amounts of meat and dairy are allowed. It is as much a lifestyle as an eating plan and can be followed for an indefinite amount of time. The guidelines include following a five-day fasting-mimicking diet periodically throughout the year.
The fasting-mimicking concept is there because this method was exclusively tested on cancer patients who were also going through chemotherapy. Getting some nutrients is important for these patients [1]
Is pure water fasting superior? Yes. I have done one-week water fasts, back to back for 4 weeks, and lost ~30 lbs.
However, I think fasting mimicking may help adherence for some people.
I only did a DEXA scan at the end, so I don’t know the before numbers accurately. I have some measurements from my scale on muscle mass, which show a drop of about 2 kg. Overall I lost about 30-35 lbs. The scale’s measurements should be taken with a grain of salt, though, because they are based on impedance and formulas tied to BMI.
Fasting-mimicking diets are diets where you eat but supposedly get the same results as if you were fasting. "Fake fasting gimmicks" is the real name for these "diets," because you can't fake fasting with food.
When we're talking about longevity, I often think about an old comic. It showed two very old women, crumpled in seats in a nursing home, and one turns to the other saying, "Just think, if we hadn't quit smoking and drinking, we might have missed this!"
Facetious of course, but the underlying point has merit. Just what part of your life are you extending? Are you extending the healthy, active period, or just dragging out that bit at the end when someone else has to wipe your bottom and feed you apple sauce? Especially with the current topic, it seems very likely to be the latter, and I'd rather eat the other half of my sandwich, and die a few years earlier.
> Are you extending the healthy, active period, or just dragging out that bit at the end when someone else has to wipe your bottom and feed you apple sauce?
It's always both. Mortality isn't an independent counter, it's linked to a bunch of predictors that are also predictors of health and autonomy.
In other words, if you "eat the other half", you won't just die younger, you'll also start eating applesauce sooner.
I read a comment on HN to the effect that to feel better in your 60s, you must do some things in your 50s. The things you need to do in your 50s also make you feel better in your 50s. And being able to do those things in your 50s requires you to do some things in your 40s, which also make you feel better in your 40s.
"Feel better" meant being more active and more social. My recollection has obviously lost all the "literary" qualities the original comment had, but I like its message.
This is one of those things where the interests of the consumer are at odds with those of the business. Businesses need money; customers have money. Customers do not always want to pay businesses directly. The issue is most obvious with news media: we don't have a good way to pay $0.01-$0.10 or whatever small denomination. The other question is, would we even want to?
I don’t think Freemium or whatever subscription model solves these problems beyond the scope of a single website. I am interested to hear how this can be solved for consumers for whole swaths of websites that have ads.
It’s pretty easy to say ads are evil, but I personally don’t know a good solution.
I subscribe to YouTube Premium (from back when it was called YouTube Red). I pay a fixed amount each month, and that gets given to people who made videos I watch. Why can't we have more things like this?
Because you are an active consumer of YouTube to the point where you think it’s worth it to pay the monthly fee; as do I.
However, I seldom read news articles, maybe 1-2 here and there. In that case I wouldn’t want to pay for a subscription. I likely also don’t wanna pay, like, five bucks to read a single article.
Maybe because I haven’t used languages like these in the past, but I hardly think this is elegant, much less readable. I would hate my life trying to parse code like this in a 10k LOC codebase.
It's actually very readable once you get the hang of it. The transition from imperative paradigms to Haskell can be tough, but once you've overcome this barrier, the code reads effortlessly. In your example case: split the string into a list of words, that is tokenize based on spaces. Then map the read function onto this, which will parse each of the "words" in the list into some type (a type annotation would likely be needed here). Then sum this list.
I much prefer this over 10 levels of class indirections or procedural style.
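For anyone who doesn't read Haskell, the same three-step pipeline can be sketched in Python (my own illustration; the `str_sum` name is made up, not from the thread):

```python
# A rough Python equivalent of Haskell's `sum . map read . words`:
# split on whitespace, parse each token, then sum the results.
def str_sum(s: str) -> int:
    return sum(int(w) for w in s.split())

print(str_sum("1 2 3"))  # 6
```

Each step maps one-to-one: `s.split()` plays the role of `words`, `int` of `read`, and `sum` of `sum`.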
Isn't it the case that anything is very readable once you get the hang of it?
I think the problem is exactly in the "get the hang of it" part. For some languages, that's really difficult, while for others it's almost trivial. From my own experience, Haskell is very very difficult to "get the hang of" despite my multiple attempts, while something like Javascript is trivial for me (interestingly enough, except when it uses too much of the new FP patterns!) even when I do not program on it daily.
This is like complaining that Spanish is so hard to learn while English is actually quite intuitive. Of course it’s easier to understand what you already know.
Not at all. I tried to make the point of distinction clear by saying I do not use JS, nor Haskell, daily, but JS is more readable, without a doubt. So it's more like saying "English is more readable than French to a Spanish speaker" (that analogy makes much less sense, but I'm trying to correct yours). I think we can agree that everyone's a priori is what they know of the world, which is common language... and everyone is familiar with "recipes", or step-by-step instructions... which is the same as imperative code, not functional.
Don't get me wrong, I like FP and have been trying to get into it for a long time. But currently I strongly believe FP as commonly done in Haskell is just too far from what we expect even before we start writing code. Combining functions and chaining Monads just seems to me to be extremely hard to do and understand, and I don't need to do any of that in "lesser" languages. However, I am finally "getting it" with newer languages like Flix and Unison - they let me just use `let` and stuff like that which makes the code trivial again, while being basically purely functional.
in JS would be: const strSum = str => str.split(' ').map(Number).reduce((a, b) => a + b, 0);
for a person who's not already a JS programmer, the first one would be more readable (without a doubt), it literally reads like plain English: "sum of mapped read of words".
Haskell's version is more "mathematical" and straightforward. Each function has one clear purpose. The composition operator clearly shows data transformation flow. No hidden behavior or type coercion surprises.
Whereas JS version requires knowledge of:
- How Number works as a function vs constructor
- Implicit type coercion rules
- Method chaining syntax
- reduce()'s callback syntax and initial value
- How split() handles edge cases
So while the JS code might look familiar, it actually requires more background knowledge and consideration of implementation details to fully understand its behavior. JavaScript is a far more complex language than most programmers realize. BTW, I don't write Haskell myself, but I deal with JavaScript almost daily, and I just can't agree that JS is "more readable" than many other PLs. With TypeScript it gets even more "eye-hurting".
> in JS would be: const strSum = str => str.split(' ').map(Number).reduce((a, b) => a + b, 0);
It's funny to me that you quote the FP-like version of that in JS.
The more traditional version would be more like this:
function strSum(str) {
  let words = str.split(' ');
  let sum = 0;
  for (word of words) {
    sum += new Number(word);
  }
  return sum;
}
I do sincerely think this is more readable, no matter your background. It splits the steps more clearly. Doesn't require you to keep almost anything in your head as you read. It looks stupid, which is great! Anyone no matter how stupid can read this as long as they've had any programming experience, in any language. I would bet someone who only ever learned Haskell would understand this without ever seeing a procedural language before.
I don't even know where to start, your biases here are so explicit.
- The assumption that "verbose = readable" and "explicit loops = clearer"? Seriously?
- The suggestion that "looking stupid" is somehow a virtue in code? "Simple" I can still buy, but "stupid"... really?
- You're using new Number() - which is actually wrong - it creates a Number object, not a primitive;
- Your `sum +=` is doing not a single op but multiple things implicitly: addition, assignment, potential type coercion, and mutation of state;
- for loops are another layer of complexity - iterator protocol implementation, mutable loop counter management, scoping issues, potential off-by-one errors, break/continue possibilities, possible loop var shadowing, etc. Even though for..of is Javascript's attempt at a more FP-style iteration pattern and is safer than the indexed loop.
You clearly underestimate how natural functional concepts can be - composition is a fundamental concept we use daily (like "wash then dry" vs "first get a towel, then turn on water, then...").
Your "simple" imperative version actually requires understanding more concepts and implicit behaviors than the functional version! The irony is that while you're trying to argue for simplicity, you're choosing an approach with more hidden complexity.
Again, I'm not huge fan of Haskell, yet, the Haskell version has:
- No hidden operations
- No mutation
- Clear, single-purpose functions
- Explicit data flow
You have just demonstrated several key benefits of functional programming and why anyone who writes code should try learning languages like Haskell, Clojure, Elixir, etc., even though practical benefits may not be obvious at first.
I spent years writing JavaScript, PHP, and Ruby. I thought Haskell was weird and hard, and probably not practical in the real world.
As it turned out, I was just being a fool. Once you actually learn it, you realise how silly the opinions are that you had of it before you learned it.
Advent of Code is running right now. Why don't you just try learning the language?
I learned the language more than 10 years ago. No, it's not for me. Please don't assume that because somebody doesn't find Haskell readable the person must be ignorant.
Maybe "French is more readable than Korean" would be a better analogy. Sure, if you’re already familiar with the Latin alphabet, but Hangul is clearly a better writing system.
> I do not use JS, nor Haskell, daily, but JS is more readable
I’m guessing you do use languages that are very similar to JS. Like a Spanish speaker saying “I don’t speak Italian or Chinese but Italian is way easier.” If you wrote F# every day you would probably find Haskell syntax quite intuitive
I was trying to make the point that no, it's not that at all. But I guess it's a very hard point to make and even though I am convinced that I'm right and this has nothing to do with familiarity, I can't find any serious research showing either way.
I know a dozen languages well. Everyone here thinks it's just ignorance, but that's not the case. There's just no way that, for me, Haskell and similar languages are readable in any sense just because they're more concise. If that were the case, Haskell still wouldn't be close to the most readable; something like APL or Forth would be.
I've tried for more than 10 years to be like you guys and read a bunch of function compositions without any variable names to be seen, a few monadic operators and think "wow so easy to read"... but no, it's still completely unreadable to me. I guess I am much more a Go person than a Haskell person, and I am happy about that.
It is a point of familiarity. Just because you've been coding in multiple languages before doesn't necessarily make you "a better programmer" (in the sense that you've developed good instincts to quickly mentally parse different pieces of code) - you could have been using programming languages of similar paradigms. It took me a few months of writing Clojure (niche language) to start seeing things in a different light - I also, just like you, used to think that imperative constructs are more readable and easier to reason about. I was wrong.
There's no such thing as a "Go person" or a "Haskell person"; all programming languages are made up. Nobody has "more natural inclination" for coding one way than another. Just get your ass out of the comfort zone, try learning a new (to you) language - give it a heartfelt attempt to use it - that may change your life.
Just to be clear - I'm not saying Haskell is easy, oh no, not at all. I'm just saying that it stops being so intimidating and weird after a while.
> and everyone is familiar with "recipes", or step-by-step instructions... which is the same as imperative code, not functional.
everyone is familiar with "I don't know how exactly, but generally it would be this way..., we can discuss specifics later" which is the same as reading the above pointfree notation (sum . map read . words) verbatim instead of imperatively inside-out: something is a sum of all parsed values of space-separated words.
> In your example case: split the string into a list of words, that is tokenize based on spaces.
You've made a common mistake. You're wiring your listener's thinking with the imperative inside-out approach that you're used to. Instead, it should be explained as this: "strSum = sum . map read . words" is "a sum of all parsed values of the original input of space-separated words". The reason you should avoid inside-out explanations is because in Haskell you're allowed to move from general ideas to specifics, and you can sprinkle `undefined` and `_` for specific details whilst thinking about general ideas and interfaces.
I really like the FP paradigm, but could you all stop using weird abbreviations and random characters as substitutes for operations?
You don't do programming with chalk on a blackboard, for crying out loud. Ideally, you are using a good IDE with syntax completion. Therefore, readability matters more than the ability to bang out commands in as few keystrokes as possible.
It's about phase transitions. When you understand the system, shorter symbols are easier/faster to reason with. If your primitives are well thought out for the domain, this notation will be the optimal way of understanding it!
On the other hand, longer names help onboard new people. Theoretically, you could avoid this issue by transforming back and forth. Uiua, e.g., lets you enter symbols by typing out their names: typing "sum = reduce add" becomes "sum ← /+". If you transform it back...
Imagine if you could encode the std lib with aliases!
I originally studied social sciences, so my training uses maths, but its core is words.
I dislike symbol-only notation. Real words trigger different parts of my brain. I am very good at memorizing the content and flow of texts but bad at keeping symbols in my head. I have the same issue with language: e.g., I have no problem with pinyin, but written Chinese characters are taxing for me.
I second this. Most programmers seem fixated on the idea that all code should show, at every moment, how data types and their values are being passed and tossed around, and they simply ignore or refuse to realise that you can omit it and think in terms of functions fitting the slots.
If you're familiar with Haskell, this is something you can just look at and parse without thinking. It's all basic Haskell syntax and concepts (function composition and partial application). I haven't touched Haskell for a few years and I didn't have any trouble interpreting it as "strSum is a function that takes a single string argument, splits it by whitespace, interprets each chunk as a number, and returns the sum".
I guess the more succinct the code, the more the reliance on understanding what a function actually does - either through experience, or by reading the docs. The words function is simply:
    words :: String -> [String]
So that
    words "foo bar baz"
    -- Produces: ["foo","bar","baz"]
In my experience, both the blessing and the curse of Haskell's incredibly succinct expressiveness is that, like other specialised languages, you need a strong understanding of the language and its standard library to participate meaningfully - much as legal circles use Latin for succinctly expressed terms of art.
Haskell, and languages like Go (which anybody with a bit of programming experience can easily follow) have very different goals.
Like many in this discussion, I too have developed a love/hate thing with Haskell. But boy, are the good parts good ...
I recently learned that things like goroutines aren’t naturally written with buffers and channels. Granted, anyone who reads the original documentation would likely do it correctly, but apparently that’s not how they are intuitively written. So while Go may be easy to read, it might be harder to write than I was assuming.
So maybe there’s a difference where Haskell has an advantage? As I mentioned in my previous comment, I don’t know Haskell at all, but if this is “the way” to do splits by word, then you’ll know both how to read and how to write it. That would be a strength on its own, since I imagine it would be hard to do wrong, given you’d need that Haskell understanding in the first place.
It all comes down to knowing the FP vocabulary. Most FP languages share the names of the most widely used functions, and if you're well versed in Haskell you'll have an 80/20 ratio of understanding them all, where the 20% is language-specific libraries that expand and build upon the 80% of shared FP vocabulary.
As for "words"... yes, possibly not the best name. But it's also so common that everyone who has ever written any Haskell code knows it - much like Java's System.out.println.
Yeah, this language probably has a lot of Stack Overflow questions. This is basically like taking someone’s personal dotfile and trying to reason about it.
> Maybe because I haven’t used languages like these in the past [...]
Yes, that's definitely the case.
If you know what each function above does, including the function composition dot (.), then this is like reading English — assuming you know how to read English.
Parentheses are not really optional; they're just used differently than in other languages. Other languages use parentheses for function application and grouping; in Haskell they're just for grouping.
Funnily enough, parentheses actually are optional in Elixir, although using pipe syntax without them triggers a warning. The following is valid in both Haskell and Elixir:
In our codebase we enforced usage of `>>>` instead, which composes forward instead of backwards:
strSum = words >>> map read >>> sum
For most people this then becomes "Apply `words` to the input argument, pass the result to `map read` and then `sum` the results of that".
I don't think `.` is super complex to read and parse, but we had people new to Haskell so I thought it prudent to start them off just with `>>>` and keep it that way. Most things are read left-to-right and top-to-bottom in a codebase otherwise so I don't see why not.
Edit:
I also told everyone it's fine to just spell out your arguments:
stringSum sentence =
  sentence
    & words
    & map read
    & sum
In the example above `&` is just your average pipe-operator. Not currying when you don't have to is also fine, and will actually improve performance in certain scenarios.
Edit 2:
The truth is that there are way more important things to talk about in a production code base than currying, and people not using currying very much wouldn't be an issue; but they'll have to understand different pointer/reference types, the `ReaderT` monad (transformer), etc., and how a `ReaderT env IO` stack works and why it's going to be better than whatever nonsense theoretical stack with many layers and transformers that can be thought up. Once you've taught them `ReaderT env IO` and pointer types (maybe including `TVar`s) you're up and running and can write pretty sophisticated multi-threaded, safe production code.
In a production application you generally don't write code like that. I find it tends to be the opposite problem where you often see giant `do` blocks performing all sorts of monadic effects.
Yes, but the dot notation, plus such function names, plus optional parens does make it read like English. That’s great, but it’ll be a nightmare when you’re also dealing with strings that have similar English in them.
Hey folks -- I was recently on the subreddit for my hometown (/r/toronto) and saw the stark difference between our city's metro system and those of other places in the world, some of which started their projects much later than Toronto did. I was curious and wanted to see other places around the world. Thought it might be interesting for folks here.
> The title of this post claims that pytz is the fastest footgun in the west, by which I mean that pytz is a quite well-optimized library, and historically it has been faster than dateutil.tz
Wat. The author has no idea what a footgun is.
Pytz is a must, because doing timezone manipulation with just the standard library often leads to bugs. There are nuances around “naive” datetimes where one can easily attach a fake timezone value without realizing the time didn’t actually get localized to that timezone.
There are other libraries, such as Arrow and Pendulum, worth checking out, but pytz is probably enough.
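A minimal sketch of that naive-datetime pitfall, using only the standard library and a hypothetical fixed UTC-4 offset: attaching a timezone does not convert the wall-clock time.

```python
from datetime import datetime, timezone, timedelta

naive = datetime(2024, 6, 1, 12, 0)          # naive wall-clock time
utc_minus_4 = timezone(timedelta(hours=-4))  # a fixed UTC-4 offset

# replace() merely attaches the offset; the wall time stays 12:00,
# so nothing was actually "localized".
stamped = naive.replace(tzinfo=utc_minus_4)

# astimezone() is what actually converts an aware datetime between zones.
as_utc = stamped.astimezone(timezone.utc)

print(stamped.isoformat())  # 2024-06-01T12:00:00-04:00
print(as_utc.isoformat())   # 2024-06-01T16:00:00+00:00
```

Whether `replace(tzinfo=...)` or `astimezone(...)` is the right call depends on whether the naive value was meant as local wall time or as an instant to convert.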
Eh? They're pointing out that it's an extremely well-optimised and fast library, and then pointing out a number of ways in which it's a footgun because it doesn't behave as you'd expect, e.g.
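The usual demonstration of that surprise (a sketch, assuming pytz is installed): passing a pytz zone via `tzinfo=` silently picks the zone's first historical entry - Local Mean Time for US/Eastern - instead of EST/EDT; `localize()` is the intended API.

```python
from datetime import datetime, timedelta
import pytz

tz = pytz.timezone("US/Eastern")

# Footgun: tzinfo= attaches the zone's first (LMT) offset, -4:56.
wrong = datetime(2018, 2, 14, tzinfo=tz)

# Intended pytz API: localize() resolves the correct offset, -5:00 (EST).
right = tz.localize(datetime(2018, 2, 14))

print(wrong.utcoffset())  # -1 day, 19:04:00  (LMT, -4:56)
print(right.utcoffset())  # -1 day, 19:00:00  (EST, -5:00)
```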
Did you read the article? The author explains exactly why they think Pytz is not enough, and why they think dateutil.tz (not the standard library) is a better alternative.
To be clear, when I wrote this article, I had not yet created the `zoneinfo` module in the standard library. All the recommendations for `dateutil.tz` apply to `zoneinfo` as well, except that `zoneinfo` is also faster than `pytz`.
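A quick sketch of what that looks like with `zoneinfo` (Python 3.9+; on platforms without a system timezone database you may also need the third-party `tzdata` package):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# With zoneinfo, attaching tzinfo= directly is the correct usage:
# the right UTC offset (including DST) is resolved per datetime.
ny = ZoneInfo("America/New_York")
winter = datetime(2024, 1, 15, 12, 0, tzinfo=ny)
summer = datetime(2024, 7, 15, 12, 0, tzinfo=ny)

print(winter.utcoffset())  # -1 day, 19:00:00  (EST, UTC-5)
print(summer.utcoffset())  # -1 day, 20:00:00  (EDT, UTC-4)
```

No `localize()` step is needed, which removes the pytz footgun entirely.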
Thank you so much for your work on zoneinfo. I spend my days working on a legacy Django app and can't wait to switch from pytz to zoneinfo. In my experience, timezones are always a struggle, so your efforts to make them easier to deal with are hugely appreciated.
Was looking through your projects and saw you made a chess bot [1]. While the idea is nice, I wish you had just played it against strong bots rather than real people. It's kind of ironic that you then hid your username [2] in the video, lol.