Hacker News: jrapdx3's comments

Chicken indeed interoperates with C quite easily and productively. You're right that the generated C code is mostly incomprehensible to humans, but compiles without difficulty.

The Chicken C API has functions/macros that return values and those that don't. The former include the fabulous embedding API (crunch is an altogether different beast), which I've used in "mixed language" programming to good effect. In such cases Scheme is rather like the essential "glue" that lets the parts written in other languages work as a whole.

Of course becoming proficient in Scheme programming takes time and effort. I believe it's true that some brains have an affinity for Lispy languages while others don't. Fortunately, there are many ways to write programs to accomplish a given task.


Perhaps a bit tangential to the main topic, but it is of course true that UTIs can adversely affect cognition in the elderly, even precipitate delirium, etc., depending on the type and severity of infection. Naturally that also occurs with other sources of infection, and with factors including intoxication due to drugs (prescribed or otherwise) and a host of others. Vulnerability to such decompensation is greater among those already functioning marginally. As such, accurate diagnosis can be hard to establish, particularly when multiple factors are implicated, hardly a rare circumstance. (At least in my physician practice that's frequently been the case.)

I appreciate your comment pointing to the importance of carefully evaluating individuals manifesting new-onset delusional ideation or other "mental" disturbance. It might be associated with an obscure condition, but likely enough it's the result of common maladies. The worst error is thinking one knows what's going on before thoroughly investigating the possibilities, or without investigating them at all.


Having been an "investigator" in a few phase 3 and 4 trials, I can attest that all actions involving subjects must strictly follow the protocols governing conduct of the trial. It is extremely intricate and labor-intensive work. But the smallest violations of the rules can invalidate part of the trial, or even the entire trial.

Most trials have long lists of excluded conditions. As you say, one reason is reducing variability among subjects so effects of the treatment can be determined.

This is especially true when the effects of a new treatment are subtle but still quite important. If subjects with serious comorbidities are included, treatment effects can be obscured by those conditions. For example, if a subject is hospitalized, was that because of the treatment, another condition, or some interaction of the condition and treatment?

Initial phase 3 studies necessarily have to strive for as "pure" a study population as possible. Later phase 3/4 studies could in principle cautiously add more severe cases and those with specific comorbidities. However there's a sharp limit to how many variations can be systematically studied due to intrinsic cost and complexity.

The reality is that the burden of sorting out the use of treatments in real-world patients falls to clinicians. It's worth noting that the level of support for clinicians reporting their observations has, if anything, declined over the decades. IOW, valuable information is lost in the increasingly bureaucratic and compartmentalized healthcare systems that now dominate delivery of services.


This could at least be done after release, but I don't think the incentives are there, and collecting the data is incredibly difficult.


It is done; in many countries there are legal requirements to report adverse events whenever they are observed in use.

https://en.wikipedia.org/wiki/Pharmacovigilance#Adverse_even...


That data goes into VAERS and FAERS. You can query it in MedWatch.


This topic provokes a question: what exactly is "winning" anyway? As others point out, how could there be absolute winning, or complete dominance of the whole gamut of software used for every purpose? Of course, no one ever proposed such a definition of open-source success.

Since the 1990s I've been thoroughly committed to using and developing open-source programs. I strongly prefer open-source products even when they've been less robust than proprietary options. In recent years that's changed in favor of open-source: a number of open-source programs have become best-in-class. To name a few: Blender, PostgreSQL, Firefox, and most developer tools. Still, proprietary products dominate areas like OSes, enterprise programs, etc., and will probably continue to do so.

But even if they're not as widely used, the fact that quality alternatives exist for a significant share of proprietary offerings speaks to open-source success. It's noteworthy that giants like Microsoft have open-sourced some of their products, a practice unheard of a couple of decades ago that shows the influence of the open-source movement.

A winner-take-all philosophy is bound to be as deleterious to open-source advocacy as in any other endeavor. Realistically, producing excellent, bug-free, well-documented open-source software is what it takes to find an appreciative user base. Perhaps not the majority of users of that category of software, but is that necessary to call a project successful? Saying it is seems a prelude to enduring a constant sense of failure and missing out on authentic victories.


The goal of the Free Software movement is to build a usable computing environment for which all software (i.e., "code") is free. If you include things like cell phones, tablets, web services, firmware, or basically anything other than core OS components in the computing environment, that goal is very far off.


Sure, the FSF is as idealistic as it has been influential. Can't fault the FSF for its unrelenting commitment to stated purposes. While a totally free OS was a goal that never quite materialized, a large proportion of modern open-source systems is composed of free (in the FSF sense) software. What the FSF advocates has indeed mattered.

I think the question is this: is having totally free cell phones, etc., the essential criterion of success? Or is something less than embodying FSF-style ideology acceptable? To be sure, there's no definitive answer to such a question. But ideological purity is a luxury in the real world, as even the FSF acknowledges: compromises sometimes have to be made, and pragmatic considerations have to be taken into account.

Nothing wrong with keeping lofty goals, but as practical necessity frequently dictates, graciously accepting less than total victory more often than not best serves our interests.

(Edited re: grammar.)


> a large proportion of modern open-source systems is composed of free (in the FSF sense) software.

The critical parts aren't though, and that's where it matters the most, IMHO intentionally so.

An HP printer being 99% based on free components isn't a tangible improvement if the last 1% vehemently prevents its free use. Open source being the core of the OS doesn't help if nothing can replace iOS on an iPhone.

We're in a world where free software has massively grown, while the day to day impacts are IMHO comparatively small. It feels like we're more free than ever, inside our new confinement cells.


The victory situation for free software is that it becomes socially unacceptable, and rare, for individuals and for organizations to claim IP rights over software, to restrict its dissemination, to hide its source code, etc. When it is clear that software is shared commons, and nothing else.


Why would that ever happen? Software is too important for people not to sell outside of communism, and free software people aren't as good at making consumer products as capitalists are.


Software is too important for people not to _share_. And too important for people to have to waste endless resources in re-developing in multiple closed contexts.

As for "communism" - if by whiskey, I mean if by Communism you mean soviet-union-style social arrangements, then I'm pretty sure they had closed-source software which the government controlled and people could not use and alter freely; but if you mean "communism" as in software being a "commons", then, yeah, free software will win when that is again the case.


I love firefox, but it is NOT best in class. What kind of copium are you huffing? I want some!


Can't disagree that FF has its deficits; IME it's not the best browser for videoconferencing. OTOH FF provides top-quality developer tools. And AFAIK it preserves user privacy better than certain other browsers, Chrome being the poster child for the issue. FWIW FF remains unique and influential.

Ultimately judging what's "best in class" depends on exactly what criteria are applied. How old is the saying "one man's meat is another's poison"?

Anyway, the class of browsers is perhaps the most volatile in the software world; the top of the heap changes constantly. But within it there are good examples of quality open-source programs, and some that are not free at all. We each decide, on our own terms, which among them is "best".


Yes indeed, holding up over time is not characteristic of digital technologies. To be sure I owned all of the "ancient" storage formats you mention, and down in the dark corner of my basement some of the old denizens still abide if not actually used. From time to time it's painfully evident that optical drives have all but disappeared too. We're all writing in electronic sand, endurance of our messages, ideas and creativity seems hardly a high priority.


In studies of monozygotic twins (shared genetic predisposition), typically the twins were raised in different environments (adoption, etc.). If behavior among the twins is divergent, then environmental factors are likely predominant. OTOH, if concordance of traits is strongly evident, behavior is attributable to genetic factors.


My understanding is that separating children from their biological parents has wide-reaching consequences, even if done in a non-traumatic way, and even if they are ultimately raised by a different set of parents. I would imagine the trauma originating from having to be adopted could be a uniquely triggering factor for genetic predisposition in the case of only one of the twins. How would twin studies be able to account for that?


I agree. Also, the prenatal environment (9 months of development!) and circumstances of birth, which both twins share, is not accounted for at all. Or rather, it is accounted for as "heritability" by twin studies, which is plainly wrong.

https://williamjbarry.substack.com/p/the-first-1000-days


No need for family separation. You simply compare the correlation between monozygotic ("identical") twins vs dizygotic ("non-identical") twins.

For example, monozygotic twins will always have the same eye color (99+% correlation), while dizygotic twins do not. Thus we can conclude eye color is genetic. Both twins are raised by their respective parents, so it's unlikely parenting is causing this difference in eye-color-correlation.
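To make the MZ/DZ comparison concrete, here's a toy sketch of Falconer's classic approximation, which estimates heritability from how much more similar identical twins are than fraternal twins. The function name and the correlation values are invented for illustration, not taken from any real study.

```python
# Falconer's approximation: heritability is estimated from how much
# more correlated identical (MZ) twins are than fraternal (DZ) twins.
# All numbers below are illustrative, not real study data.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """H^2 ~= 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Trait driven almost entirely by genes (eye color is close to this):
print(falconer_heritability(0.99, 0.50))  # ~0.98

# Trait where shared environment explains most of the similarity:
print(falconer_heritability(0.60, 0.45))  # ~0.30
```

The intuition: both twin types share their rearing environment with their co-twin, so any *extra* similarity of MZ pairs is attributed to their extra genetic overlap.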


After reading the article and the comments on this page, I also think it's unclear what exclusion criteria were used to select the cannabis-using and control cohorts. Controlling for likely and unlikely confounding variables is essential in this kind of study. Obesity is a notoriously heterogeneous condition with a great number of heritable and environmental contributors, which makes the task especially difficult.

However, a connection of particular interest concerns ADHD, a disorder identified as having a strong link to obesity, including common genetic predisposition [0]. Furthermore, individuals with ADHD are also more likely than non-ADHD peers to develop drug dependence, including cannabis-use disorder [1,2]. If ADHD was not among direct or indirect exclusion criteria, the results of the recent study could be misleading or at least incompletely characterized.

    [0] https://pmc.ncbi.nlm.nih.gov/articles/PMC6097237/ 
    [1] https://pmc.ncbi.nlm.nih.gov/articles/PMC5568505/  
    [2] https://pmc.ncbi.nlm.nih.gov/articles/PMC8025199/


I think the issue I have is simpler. My understanding is those that were obese and have lost weight still have a higher chance of getting type 2 diabetes than those that were slim throughout their lives. If your chance of getting diabetes is 70% while obese, and 2.2% (or 15% or whatever it actually is) after losing weight, how is that not a win?


It is indeed a win. It's long been established that for obese individuals even 5% weight loss reduces obesity-related comorbidities. Of course greater weight loss, 10-15%, gives better outcomes. Typically the difficulty is maintaining the lower weight for the long haul. For those who can do it, the payoff is substantial.
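Using the placeholder numbers from the question above (70% risk while obese, 15% after weight loss; both figures are hypotheticals from the thread, not real epidemiology), the size of the "win" works out as:

```python
# Risk reduction with the thread's placeholder numbers (illustrative only).
risk_before = 0.70   # hypothetical diabetes risk while obese
risk_after = 0.15    # hypothetical risk after sustained weight loss

absolute_reduction = risk_before - risk_after            # ~0.55
relative_reduction = absolute_reduction / risk_before    # ~0.79

print(f"absolute: {absolute_reduction:.0%}, relative: {relative_reduction:.0%}")
```

That is, even if the post-loss risk never falls to the never-obese baseline, roughly four-fifths of the excess risk would be gone under these assumed figures.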


Interesting comment. I found the Lisp/sexpr form instantly understandable. While the others weren't hard to grasp, it took a moment to consciously parse them before their meaning was as clear. Perhaps the functional arrow notation is least appreciated because it seems more abstract, or maybe the arrows are just confusing.

More likely than not it's a matter of what a person gets used to. I've enjoyed working in Lisp/Scheme and C, but not so much in primarily functional languages. No doubt programmers have varied histories that explain their preferences.

As you imply, in C one could write nested functions as f (g (h (x))) if examining return values is unnecessary. OTOH in Lisp return values are also often needed, prompting use of (let ...) forms, etc., which can make function nesting unclear. In reality programming languages are all guilty of potential obscurity. We just develop a taste for what flavor of obscurity we prefer to work with.
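The tradeoff between deep nesting and named intermediate values reads roughly like this (Python used as a neutral stand-in; the functions are toys invented for the example):

```python
# Toy functions so the example runs; in real code these would do work.
def h(x): return x + 1
def g(x): return x * 2
def f(x): return x - 3

x = 5

# Deep nesting: concise, but intermediate values are invisible.
nested = f(g(h(x)))

# Named intermediates (the spirit of Lisp's (let ...) bindings):
# more verbose, but each step can be inspected or logged.
h_val = h(x)          # 6
g_val = g(h_val)      # 12
stepwise = f(g_val)   # 9

assert nested == stepwise
```

Either form computes the same thing; the choice is about which kind of obscurity one prefers to live with.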


Absolutely. Years ago writing programs for *BSD/Linux, PS was the natural, most direct way to implement printing to printers equipped with a PS interpreter.

Fortunately the PS language was very well documented. That made writing PS pretty straightforward, at least for the purposes I was using it for. Curiously, other concatenative languages have been harder for me to grasp. Maybe that's because I regarded PS as a specific-use tool vs. a general purpose language.

If nothing else PS showed the value of excellent documentation. Lack of it probably accounts for many software project failures, particularly in the open-source world.


You can also run PostScript programs on the computer; you do not need a PostScript printer.

> Maybe that's because I regarded PS as a specific-use tool vs. a general purpose language.

In my opinion, it is both. Many of the programs I write in PostScript do not involve a printer at all.

> If nothing else PS showed the value of excellent documentation. Lack of it probably accounts for many software project failures, particularly in the open-source world.

I also find a problem with many programs that do not have good documentation. When I write my own, I try to provide documentation.


Well, Tcl is kind of a mixture of scoping rules. In some respects, e.g., within a proc, it's mainly a lexical environment, but of course dynamic scoping is introduced by commands like upvar/uplevel. FWIW, Tcl programmers don't concern themselves very much with sorting out dynamic vs. lexical. In any case, careful Tcl programmers maximize static scoping. No doubt that's necessary to create reliable larger programs, many of which have been written in Tcl.
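For readers who haven't used upvar: a rough Python analogy using frame introspection, to show what "reaching into the caller's scope by name" means. This illustrates the dynamic-scoping idea only; it is not how Tcl implements upvar, and the helper names are invented.

```python
import inspect

def upvar_read(name, level=1):
    """Rough analogue of Tcl's [upvar $level]: fetch a variable
    from a calling frame's locals, by name, at run time."""
    frame = inspect.currentframe().f_back  # the frame that called us
    for _ in range(level):
        frame = frame.f_back               # walk further up the call stack
    return frame.f_locals[name]

def helper():
    # 'count' is not lexically visible here; it is found dynamically
    # in whichever function happened to call helper().
    return upvar_read("count", level=1) + 1

def main():
    count = 41
    return helper()

print(main())  # 42
```

As in Tcl, the danger is that helper() silently depends on its caller defining a particular name, which is exactly why keeping such tricks rare aids reliability.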


