> 1) Happiness is not as subjective as people think. Research suggests that what people state about their own happiness correlates very well with how friends and colleagues would describe that person.
This just means that how happy a person feels correlates well with how happy others perceive them to be. This says nothing about whether their subjective feelings reflect objective reality.
Research actually seems to suggest that happiness is very subjective, and may in fact have a strong genetic component[0]. I.e. whether a particular person is happy is dictated less by their circumstances and more by their outlook/personality/genetics. Of course, one's circumstances also play a role (e.g. a person will be less happy immediately after the death of a loved one), but subjective factors seem to win out overall.
>happiness is very subjective, and may in fact have a strong genetic component.
I agree that happiness is subjective and personality trait expression may very well have a genetic component, but I wouldn't dismiss societal and environmental factors entirely.
For example, inequality and air pollution are also associated with unhappiness.
> Most researchers probably can't afford to pay an outside lab to duplicate their research.
Even if they could, we probably don't want the researchers paying for their results to be duplicated. This would create perverse incentives, similar to what happened with investment banks and credit rating agencies. If the original researchers must get their results confirmed in order to get published, and it is them who are paying for the confirmation, they will naturally tend to choose confirmatory labs that are more likely to confirm their findings. Since the labs would then rely on the researchers for funding, that would create pressure on the confirmatory labs to adapt their methodologies in ways that make it more likely that results get confirmed (even when the original study may not warrant it).
We want confirmatory labs to have no special interest in confirming or disproving any particular study, only an interest in improving the overall quality of research.
Since a journal's reputation depends (at least in part) on the quality of the research it publishes, journals would seem to be the natural candidates to fund confirmatory labs. Whether they'd actually be willing to do it is another matter...
Any confirmatory lab would have to be licensed in order to receive grant money for confirmation work, just like the CPAs who perform audits. Sure, there is some corruption and drift toward hiring more lenient firms, but it basically works.
Side note: it is weird to me that everyone talks about whether researchers can afford to pay for confirmation, when researchers never pay for anything themselves; grants pay for everything. The granting institutions might even be excited to try a confirmation process.
>have already seen improvements in concentration and a decrease in anxiety.
Interesting to hear. Meditation is something I have considered looking into for a similar purpose. Do you have any recommendations on a good place to start?
First, I did a few random 'guided meditations' off YouTube (Sam Harris has a popular one) and discovered that I enjoyed it.
After that, I've been enjoying The Mind Illuminated, by John Yates, and I think others on Hacker News will as well. It doesn't bother with a lot of the eastern spirituality aspects, but instead focuses on how to become good at meditating and get the most benefit from it.
>>You can trade off future maintenance by spending more initially, or vice versa
> Umm is that really true ?
Maybe not universally, but often, yes. E.g., you can build using more expensive materials that will last longer/are more resistant to environmental decay.
And when concrete roads fall into disrepair, they get to be really nasty to drive on. I feel like I'm going to shake my car apart driving on some roads in Iowa.
I can't say that asphalt roads in disrepair are much better, but potholes tend to be more quickly fixed than slightly mis-aligned sections of concrete roads.
That project is a symptom of manual pages not having good “EXAMPLES” sections. The examples on that web page should be contributed upstream to the manual pages of the software they are for.
The issue isn't just the lack of EXAMPLES, but also how man pages tend to be structured. They tend to be very "encyclopedic": there is a set ordering for sections, many of them are very verbose, and examples, when present, come near the end. Options are often listed in alphabetical order, which doesn't usually correspond to how often they are used or how useful they are.
Man pages are OK when you're first learning how to use something; but if you're already familiar with a command and just need to remind yourself of the specific sequence of options to achieve a desired result, they're not the most convenient.
I think it's useful to have a tool that fulfills the latter purpose without worrying about the former.
Microsoft documentation was mentioned earlier in this discussion. One of the things that MSDN and TechNet doco does is have both "X reference" and "using X" sections. Manual pages are reference doco, in this way of organizing things.
The FreeBSD, TrueOS, and related worlds put the "using" doco into what are often called "handbooks" or "guides".
The Linux Documentation Project was supposed to contain a wealth of this stuff, but large parts of it are seemingly moribund, incomplete after decades, or woefully outdated. Wikibooks tried to take up the slack with an "anyone can edit" Guide to Unix and a Linux Guide:
If you want examples and doco that works from the basis of what you usually want to do, then these handbooks and guides are the places to go, not reference manuals.
Whenever the discussion comes up about man pages and how documentation should be organized, I like to quote this section from the GNU coding standards about how Info documentation is structured:
----
Programmers tend to carry over the structure of the program as the structure for its documentation. But this structure is not necessarily good for explaining how to use the program; it may be irrelevant and confusing for a user.
Instead, the right way to structure documentation is according to the concepts and questions that a user will have in mind when reading it. This principle applies at every level, from the lowest (ordering sentences in a paragraph) to the highest (ordering of chapter topics within the manual). Sometimes this structure of ideas matches the structure of the implementation of the software being documented--but often they are different. An important part of learning to write good documentation is to learn to notice when you have unthinkingly structured the documentation like the implementation, stop yourself, and look for better alternatives.
[…]
In general, a GNU manual should serve both as tutorial and reference. It should be set up for convenient access to each topic through Info, and for reading straight through (appendixes aside). A GNU manual should give a good introduction to a beginner reading through from the start, and should also provide all the details that hackers want. […]
That is not as hard as it first sounds. Arrange each chapter as a logical breakdown of its topic, but order the sections, and write their text, so that reading the chapter straight through makes sense. Do likewise when structuring the book into chapters, and when structuring a section into paragraphs. The watchword is, at each point, address the most fundamental and important issue raised by the preceding text.
What would your example achieve? You're making the format more verbose and error-prone (someone might easily forget to match a paren), without imposing any additional structure over what is already implied by line breaks.
Though I do agree with your overarching point that some of the formats/outputs could do with a more consistent structure. Perhaps something like YAML would strike a good balance between structure and conciseness/readability...
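A purely illustrative sketch of the trade-off being suggested here (the field names are invented for the example, not taken from any real format): a passwd-style record in YAML stays about as readable as a colon-delimited line, while leaving room for nesting that the flat format can't express cleanly.

```yaml
# Hypothetical YAML rendering of a passwd-style entry (names invented for illustration)
alice:
  uid: 1000
  gid: 1000
  home: /home/alice
  shell: /bin/bash
  groups: [wheel, audio]   # an extra level of structure a flat colon format can't add cleanly
```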
The example is not really interesting, because as you said there is already a simple structure. But programming with text becomes really tiresome after a while.
For example:
... where git blame returns a sequence of "blame" data from which I can retrieve the author easily (if you prefer pipes over function composition syntax, use threading macros). Then I don't have to worry about strange characters randomly crashing my scripts. Suppose I forgot to add the "^" symbol in my regexp (I can assume this, since you assume people forget parentheses); there could be situations where I would match too many lines.
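The commenter's elided example appears to have been a Lisp-style pipeline; here is a rough Python sketch of the same idea (illustrative only, not the original code). It leans on git's machine-readable `--line-porcelain` output rather than regex-scraping the human-readable format, so odd characters in code lines can't break the extraction.

```python
# Sketch: extract per-line authors from `git blame --line-porcelain` output
# instead of regex-matching the default human-readable format.
import subprocess

def blame_authors(porcelain: str) -> list[str]:
    """Return the author of each blamed line, in order.

    In --line-porcelain output, each blamed line is preceded by a header
    block containing a literal "author <name>" line; content lines are
    tab-prefixed, so they can never be confused with header keys.
    """
    return [line[len("author "):]
            for line in porcelain.splitlines()
            if line.startswith("author ")]

def authors_of(path: str) -> list[str]:
    """Run git blame on a tracked file and return the authors of its lines."""
    out = subprocess.run(
        ["git", "blame", "--line-porcelain", path],
        capture_output=True, text=True, check=True).stdout
    return blame_authors(out)
```

Because the format is structured (keyed header lines, tab-prefixed content), there is no anchoring regex to forget, which is the point being made about matching too many lines.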
> You're making the format more verbose and error-prone (someone might easily forget to match a paren), without imposing any additional structure over what is already implied by line breaks.
Ultimately, structured data (which is pretty much all data) should be edited with structure editors. Good text formats make it easy to write such structure editors.
> What would your example achieve? You're making the format more verbose and error-prone (someone might easily forget to match a paren), without imposing any additional structure over what is already implied by line breaks.
That particular example is fairly straightforward (at a simple level, passwd files aren't complex), but being able to express arbitrary nested structure would make various things a lot simpler. Line breaks and some sort of tab/colon/what-have-you delimiter work fine as long as everything has at most two levels of hierarchy, but it starts being painful after that.
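A minimal sketch of that two-level ceiling (illustrative; the field layout follows the standard seven-field passwd format, while the nested record is invented for the example): one `split` handles a flat colon-delimited record, but as soon as a field needs its own internal structure, you end up reaching for a self-describing format.

```python
# Sketch: flat delimiter-based parsing vs. nested structure.
import json

def parse_passwd_line(line: str) -> dict:
    """Split one /etc/passwd record into its seven colon-separated fields."""
    name, _pw, uid, gid, gecos, home, shell = line.rstrip("\n").split(":")
    return {"name": name, "uid": int(uid), "gid": int(gid),
            "gecos": gecos, "home": home, "shell": shell}

# Two levels (records on lines, fields between colons): trivial.
user = parse_passwd_line("alice:x:1000:1000:Alice:/home/alice:/bin/bash")

# Add one more level (say, per-group attributes -- hypothetical) and the
# delimiters stop scaling; a nested format expresses it directly:
nested = json.loads('{"name": "alice", "groups": {"wheel": {"sudo": true}}}')
```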
Missing matched parens are a bit of a specious argument, since many of the random formats for files are fairly strict about what they parse, and the ones that matter (e.g. passwd, sudoers, crontab) are conventionally edited through tools that check the syntax before committing.
@grandparent: In other words, no, they can't read the texts you've sent up to this point. But they may be able to a month or year from now. Your texts up to the point the law passed are safe, but you can't be sure about the messages you send from now on.
(well, as safe as they were prior to the Snooper's Charter).
Essentially, yes -- it will be down to whether the company is willing to risk having to withdraw from the UK market. And if a company does choose to cave in to the UK government's demands for backdoors, you can't be sure whether or not they would inform you of this in a timely manner.
The answers to your questions will come down to which a particular company values more -- their UK revenue or the privacy and safety of their UK customers.
- Can easily deal with large volumes of email -- quickly tab through unread messages
- Highlights text from previous emails in a thread, making it easy to see who replied to what
- You can also set up highlighting for diffs, making it easy to review attached patches and change snippets
- Powerful search via external indexers (mu, notmuch)
- Keyboard navigation for everything (if you're using vim and tmux, then you're already spending most of your time with your hands on the keyboard, and having to fish for the mouse just to quickly reply to an email starts to feel really slow).
For handling HTML email, I use a two-pronged approach: for most email I use the links browser to automatically render HTML as ASCII so I can view it directly in mutt; for email that has images or otherwise cannot be rendered sensibly with links, I have mapped a key to open it in a graphical browser (dwb for fast launching, or Firefox+vimperator if I need to deal with login forms, etc.).
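For reference, the links-in-mutt half of that setup is typically wired together roughly like this (a sketch; file paths and exact option choices will vary with your configuration):

```
# ~/.mailcap -- render HTML to text with links for inline viewing in mutt
text/html; links -dump %s; nametemplate=%s.html; copiousoutput

# ~/.muttrc -- prefer plain text when available, auto-render HTML via mailcap
auto_view text/html
alternative_order text/plain text/html
```

`copiousoutput` tells mutt the command produces pageable text rather than launching an interactive viewer, which is what lets the rendered HTML appear inline in the pager.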
[0] e.g. https://www.psychologytoday.com/blog/media-spotlight/201302/...