
Because a language being NP-hard means it is at least as hard as the hardest problems in NP. For any NP-hard language, if one had a Turing machine that decided it, then for any language in NP one could construct a polynomial-time transformation of its instances into instances of the NP-hard language and decide them using that Turing machine.
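As a sketch of that idea, with SUBSET-SUM standing in as the NP-hard language and PARTITION as the NP language being reduced (the function names and brute-force decider are purely illustrative):

```python
from itertools import combinations

def decide_subset_sum(nums, target):
    # Brute-force decider for SUBSET-SUM (an NP-hard language).
    # Exponential time; the point is only that it decides the language.
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

def reduce_partition_to_subset_sum(nums):
    # Polynomial-time reduction: PARTITION instance -> SUBSET-SUM instance.
    total = sum(nums)
    # An odd total can never split evenly; map to an unsatisfiable instance
    # (no subset can exceed the total sum).
    if total % 2:
        return nums, total + 1
    return nums, total // 2

def decide_partition(nums):
    # Deciding PARTITION = apply the reduction, then ask the SUBSET-SUM decider.
    return decide_subset_sum(*reduce_partition_to_subset_sum(nums))

print(decide_partition([1, 5, 11, 5]))  # True: {1, 5, 5} vs {11}
print(decide_partition([1, 2, 5]))      # False
```

The reduction runs in polynomial time; only the decider for the NP-hard language is expensive, which is exactly the structure the argument above relies on.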


I thought using multiple therapies for cancer was fairly common, or are you referring specifically to immunotherapies?


Sequentially? Yes. Simultaneously? Rare. Doxorubicin and cisplatin have been done, but Keytruda was quite interesting when it started clinical trials in combination with various chemos so that the FDA would approve it. My late wife's oncologist was hesitant about combining anything, but I kept pushing for pembro plus anything.

But this was also systemically applied; these new ADCs are targeted and much stronger, with combination payloads. Sadly, they're not as effective for harsher cancers like triple-negative BC.


This doesn't track with my experience. I have Ewing's Sarcoma, and my first regimen involved 5 agents: Doxorubicin, Vincristine, Cytoxan, Ifosfamide, and Etoposide. This [1] appears to be the same plan I used. When that failed, I had a second regimen with Vincristine, Irinotecan, and Temozolomide. After that failed, I had Irinotecan and Trabectedin, followed by Doxil, followed by Cytoxan and Topotecan, followed by high-dose Ifosfamide. So only two of the treatments I received were single agent, but maybe it depends on the cancer.

[1] https://www.cancerresearchuk.org/about-cancer/treatment/drug...


According to this [1] wikipedia article, the only feature Sapphire Rapids doesn't support is VP2INTERSECT.

[1]: https://en.wikipedia.org/wiki/Advanced_Vector_Extensions



Or Zen 5. :-p


Note that the article mentions using both outputs of the instruction, whereas the emulation is only able to compute one output efficiently.


Saying COVID-19 leaked from a lab with zero evidence is different than waiting for evidence and then saying it leaked from a lab.


There were very good arguments that Covid (probably) leaked from a lab as early as April 2020 at the latest (and in January if you were a virologist included in top-level NIAID emails). HN largely went along with the shunning of debate, which helped give everyone the impression there was "zero evidence" of a lab leak compared to solid evidence of zoonotic origin, which was simply never true.

https://yurideigin.medium.com/lab-made-cov2-genealogy-throug...


Censoring views that it came from a lab leak with zero evidence isn't any better. In fact, I remember being lied to by major news outlets at the time, saying that the evidence pointed to a non-lab origin.


It did though. Animal origin only stopped being the most favored explanation because we haven't found the link in 5 years.


There was a nucleotide sequence in the covid strain that did not show up in any of the proposed hosts or progenitor viral sequences, and leaked documents showed NIH officials (Fauci included, I believe) discussing the non-natural origin of that nucleotide sequence. It's possible to search for articles about the Fauci NIH emails, and whether they mean anything scandalous.

Here's a technical article at NIH discussing the theory of no known natural origin for a nucleotide subsequence

https://pmc.ncbi.nlm.nih.gov/articles/PMC8209872/


This is an open access journal by two people whose publication history you can look up if you want to draw your own conclusions. Read the disclaimer at the top of the link. Don't bootstrap its credibility by linking it to being at NIH (which does mean something) any more than you'd say something found on Google is from the company itself.


Don’t imagine any bootstrapping of credibility is stated. It’s a citation to one article of many with no assertion otherwise. That’s how science discussions work.


This is interesting, thanks for sharing.


I'm confused, do you mean the animal origin had no evidence either, but was favoured? But not having evidence for 5 years suddenly makes the other theory favoured instead?

So basically neither had real evidence, but one was favoured?


False equivalence. Zoonotic diseases have precedent (SARS, MERS) and SARS-CoV-2 most closely resembles BANAL-52, a bat Coronavirus.

Animal origin still seems more likely to me, but less so than 5 years ago, since we are missing a link one would expect to have found by now.


Animal origin does not contradict a lab leak however. Especially if you have a biolab studying coronaviruses in bats in the city identified as ground zero.

It does favor an accidental lab leak over a targeted weaponization and release, but it doesn't contradict a lab origin.


Coronavirus lab leaks in China also had precedent. What's your point?

There was no evidence for a zoonotic origin other than it was possible.

There was little evidence for a lab leak other than it was possible, but at least there was some.


That is not true at all. Some scientists at the time suspected a lab leak, talk of which was deliberately shut down.

"Dr Robert Redfield, who led the Centers for Disease Control and Prevention during the Trump administration, told Vanity Fair that he received death threats from fellow scientists when he backed the Wuhan lab leak theory last spring. "I was threatened and ostracised because I proposed another hypothesis," Dr Redfield said. "I expected it from politicians. I didn't expect it from science."[1]"

The US State Department was told not to explore claims of gain-of-function research:

"According to an investigation in Vanity Fair magazine published on Thursday, Department of State officials discussed the origins of coronavirus at a meeting on 9 December 2020. They were told not to explore claims about gain-of-function experiments at the Wuhan lab to avoid attracting unwelcome attention to US government funding of such research, reports Vanity Fair.[2]"

We may never know the truth, but it's clear that politics has been played since the beginning of the pandemic to obscure the truth, and not just by China.

[1] https://www.bbc.com/news/world-us-canada-57352992

[2] https://www.vanityfair.com/news/2021/06/the-lab-leak-theory-...


A lot of careers are tied up in research that isn't gain of function officially, but sure looks like it.


Yes, but the rush to the wet market theory was no better.


Wasn't it? Most of the earliest cases had a link to the market, many of whom were vendors including the very first known case. The early cases which had no known link lived/stayed clustered around the market. The market sold live wild animals which were known reservoirs for the previous coronavirus break (SARS).

How can following a trail possibly be no better?


There was also no evidence against it. If there is neither solid evidence for nor against something I find it perfectly reasonable to apply the balance of probabilities. At least as long as you qualify your statement with a "probably".

And with the main competing theory (covid spreading from a wet market in a city that contains a biolab) also being consistent with the hypothesis that it was an accidental lab leak, to me the balance of probabilities always seemed to favor the lab leak hypothesis.

Yet saying that Covid probably originated from a lab leak was once branded as dangerous misinformation, with seemingly no evidence to support that claim.


At the time, there was essentially a 50/50 chance it was a lab leak or from a wet market. The issue with saying it was a lab leak at that time is that you are essentially gambling the US's relationship with China should it come out that it was a from a wet market. Also, a lot of the discussion regarding the lab leak theory early on seemed to me like it wouldn't be sated even if the US presented sufficient evidence that it was from a wet market.


But we can say it leaked from wet market without evidence!


The lab leak theory was started by Chinese netizens. It was mentioned on Chinese media. Hell the name Wu Flu came from Chinese media.


The issue there is that the listens from people who listen to less music would be worth more than the listens from people who listen to more music.


That's not an issue, that's accurately reflecting reality. If I'm paying the same $10/month just to listen to $OBSCURE_ARTIST for 10 plays per month, then each play of that _is_ worth more to Spotify than each play from a 10-year-old listening to the same track of $SUPERSTAR one thousand times in a month.

In one case, 10 plays brought in $10 of revenue to Spotify, and those 10 plays should get $PERCENT of that $10.

In the other case, 1000 plays brought in $10 of revenue to Spotify, and those 1000 plays should also get the same $PERCENT of that $10.
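That arithmetic can be sketched as a user-centric payout: each subscriber's fee, minus the platform's cut, is split only among the artists that subscriber actually played (the function name, the 30% cut, and the artist names are all illustrative assumptions, not Spotify's actual model):

```python
def user_centric_payouts(subscription, plays_by_artist, platform_cut=0.30):
    # Split one subscriber's fee among only the artists they played,
    # proportionally to their share of that subscriber's plays.
    pool = subscription * (1 - platform_cut)
    total_plays = sum(plays_by_artist.values())
    return {artist: pool * plays / total_plays
            for artist, plays in plays_by_artist.items()}

# 10 plays of one obscure artist: the whole $7 pool, so $0.70 per play.
print(user_centric_payouts(10.0, {"obscure_artist": 10}))
# 1000 plays of one superstar: the same $7 pool, so $0.007 per play.
print(user_centric_payouts(10.0, {"superstar": 1000}))
```

Both artists receive the same pool in total; only the per-play value differs, which is the point being made above.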


A fixed monthly subscription amount with unlimited usage will always carry this deficiency. A solution that addresses this would be usage-based pricing.


I don't understand how this is a deficiency.


That's not an issue. That's the entire point. You track listens per account and if you're only listening to a single niche musician, all your money (not someone else's) goes to that musician.

The real mystery is why it should work any differently, because the cross subsidy seemingly creates a perverse profit incentive for bots to scalp off some of that cross subsidy. The economics are broken. This is socialism for the rich and popular.



I haven't gone through the whole article, but it seems to be conflating chroma and saturation. If the lightness of a color is scaled by a factor c, then chroma needs to be scaled by that same factor, or saturation won't be preserved, and the color will appear more vibrant than it should.
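A minimal sketch of that point, treating saturation as chroma relative to lightness in an LCh-style representation (the function name and the sample values are illustrative):

```python
def scale_lightness_preserving_saturation(L, C, h, factor):
    # Saturation is (roughly) chroma relative to lightness: S = C / L.
    # Scaling L by `factor` without scaling C would change S,
    # so scale both by the same factor; hue is left unchanged.
    return L * factor, C * factor, h

L, C, h = 60.0, 30.0, 250.0   # made-up LCh-style values
L2, C2, h2 = scale_lightness_preserving_saturation(L, C, h, 0.5)
assert C2 / L2 == C / L  # saturation preserved
```

Scaling only L while keeping C fixed would double the C/L ratio here, which is exactly the "more vibrant than it should be" effect described above.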


Well, no, it's not straight up scaling.

(Not directed at you) Color science is a real field, and CAM16 addresses all of the ideas and complaints that anyone could have. And yet, because it's 400 lines of code, we are robbed of principled, grounded color. Instead, people reach for a grab bag of simple algorithmic tricks.


> CAM16 addresses all of the ideas and complaints that anyone could have...

Here's some complaints that better color scientists than me have had about CAM16:

> Bad numerical behavior, it is not scale invariant and blending does not behave well because of its compression of chroma. Hue uniformity is decent, but other models predict it more accurately.

https://bottosson.github.io/posts/oklab/

Here's more:

> Although CAM16-UCS offers good overall perceptual uniformity it does not preserve hue linearity, particularly in the blue hue region, and is computationally expensive compared with almost all other available models. In addition, none of the above mentioned color spaces were explicitly developed for high dynamic range applications.

https://opg.optica.org/oe/fulltext.cfm?uri=oe-25-13-15131

Color is hard.


You've discovered my White Whale.

It spells out a CAM16 approximation via two matmuls, and you are using it as an example of how CAM16 could be improved.

The article, and Oklab, is not by a color scientist. He is/was a video game developer taking some time between jobs to do something on a lark.

He makes several category errors in that article, such as swapping in "CAM16-UCS" for "CAM16", and most importantly, he blends polar opposite hues in cartesian coordinates (blue and yellow), and uses the fact this ends up in the center (gray) as the core evidence for not liking CAM16 so much.

> better color scientists than me

Are you a color scientist?!


> The article, and Oklab, is not by a color scientist. He is/was a video game developer taking some time between jobs to do something on a lark.

As a non-color scientist sometimes dealing with color, it would probably be nice if the color scientists came out sometimes and wrote articles that are as readable as what Ottosson produces. You can say CIECAM16 is the solution as much as you want, but just looking at the CIECAM02 page on Wikipedia makes my brain hurt (how do I use any of this for anything? The correlate for chroma is t^0.9 sqrt(J/100) (1.64 - 0.29^n)^0.73, where J comes from some Cthulhu formula?). It's hard enough to try to explain gamma to people writing image scaling code; there's no way ordinary developers can understand all of this until it becomes more easily available somehow. :-) Oklab, OTOH, I can actually relate to and understand, so guess which one I'd pick.


Mark Fairchild, one of the authors of CIECAM02, recently published a paper that heavily simplified that equation: https://markfairchild.org/PDFs/PAP45.pdf

If the link doesn't work, the paper is called: Brightness, lightness, colorfulness, and chroma in CIECAM02 and CAM16.

Also, if you want a readable introduction to color science, you can check out his book Color Appearance Models.


Thanks for the link! To anyone looking for a summary: Fairchild's paper explains the origin and nature of various arbitrary, empirically/theoretically unjustified, and computationally expensive complications of CAM16 (inherited from Hunt's models from the 80s–90s via CIECAM02 and CIECAM97s), which apparently originated as duct-taped workarounds that are no longer relevant but were kept out of inertia. And it proposes alternatives that match empirical data better.

Mark Fairchild is great in general, and anyone wanting to learn about color science should go skim through his book and papers: he does the serious empirical work needed to justify his conclusions, and writes clearly. It was nice to drop by his office and shake his hand a few years ago.

In an email a couple years ago he explained that he had nothing to do with CAM16 specifically because the CIE wouldn't let him on their committees anymore (even as a volunteer advisor) without signing some kind of IP release giving them exclusive rights to any ideas discussed.


J is the lightness channel, similar to the lightness formulas in other colorspaces for SDR. I.e., the usual idea is to take a lightness formula and just arrange hues/chromas for each value of J.

Yea, Jab instead of Lab in ciecam, haha. Btw, ciecam is pretty bad at predicting highlights; it was designed for SDR to begin with. The lightness formula in ICtCp is more interesting (and there it is "I").

But yeah, the difficulty of ciecam02 comes from the fact that it tries to work for different conditions: i.e., where usual colorspaces just need to worry about how everything works at one color temperature (usually 5500 or 6500K), ciecam02 tries to predict how the colorspace would look for different temperatures and for different viewing conditions (viewing conditions do not contribute much difference, though).

Oh, and of course, ciecam02 defines 3 colorspaces, because it is impossible to arrange the ab channels in a Euclidean space :) TL;DR, there is a metric, DE2000, to compare 2 colors, but this metric defines a non-Euclidean space, while any colorspace tries to bend that metric to fit into a Euclidean space. So we have a lot of spaces that attempt it with varying degrees of success.

Cam02 is over-engineered, but it is pretty easy to use if you just care about the CAM-UCS colorspace (one of those three) and standard viewing conditions.

If you just wanna see the difference between colorspaces, good papers comparing colorspaces have nice visual graphs. If you want to compare them for color editing, I've implemented a color-grading plugin for Photoshop: colorplane (ah, kinda an ad ;)).

As for the most interesting spaces, I'd say colorspaces optimized using machine learning are pretty interesting (papers from 2023/2024). But yeah, this means they work using TensorFlow, so you need to use batching when converting from/to RGB. What they did: they took CIELAB (yes, that old one), used L from it, and stretched the AB channels to better fit the DE2000 metric prediction. Basically like many other colorspaces are designed, just with machine learning to minimize errors in a half-automatic way. Heh, someday I should write a looong comparison of colorspaces in easy language with examples :)


> Are you a color scientist?!

I would say yes, but if you're going to argue Björn Ottosson isn't, then no.


They called me a color scientist at work and I didn't like it much :( until I started doing it. But I don't think I could again.

I was just asking because I'm used to engineers mistaking the Oklab blog for color science, but not color scientists. It's legitimately nothing you want to rely on, at all; the clearest example I can give is the blue-to-yellow gradient having grey in the middle. It's a mind-numbing feeling to have someone tell you that someone saying that, then making a color space "fixing" it, is something to read up on.


> CAM16 addresses all of the ideas and complaints that anyone could have

A statement this emphatic and absolute can't possibly be true.

Here's a concrete complaint that I have with CAM16: the unique hues and hue spacing it defines for its concept of hue quadrature and hue composition are nontrivially different than the ones in CIECAM02 or CIECAM97s, but those changes are not justified or explained anywhere, because the change was just an accidental oversight. (The previous models' unique hues were chosen carefully based on psychometric data.)

> because it's 400 lines of code, we are robbed

It's not really surprising that people reach for math which is computationally cheap when they need to do something to every pixel which appears in a large video file or is sent to a computer's display.


Then give us both: fast_decent_colormap() and slow_better_colormap(), and hide away all your fancy maths.

Give me some images and the kinds of transforms each color space is good at, and let me pick one, already implemented in a library in a couple different languages.

What's the best color space if I want pretty gradients between 2 arbitrary colors?

What's the best color space if I want 16 maximally perceptually unique colors?

What color space do I use for a heat map image?

What color space do I use if I want to pick k-means colors from an image or downsample an image to 6 colors?


This is a category-error question, and that's what makes these hard to answer. There are very good and clear answers, but you see the reaction to them in this thread. I wish I could share the tools I made that made these problems visible at BigCo, and thus easy to resolve, but alas.

TL;DR: A color space just tells you where a color is in relation to others. From there, focus on your domain (e.g. mine was "~all software", 2D phone apps) and apply. E.g. the gentleman above talking about specularity and CAM16 is wildly off-topic for my use case, but might be crucially important for 3D (idk). In general, it's bizarre to be using something besides CAM16, and if that's hard to wrap your mind around, fall back to Lab (HCL) and make sure you're accounting for gamut mapping if you're changing one of the components.


Is it a category error? I can see that if I blend linearly in one color space vs another space I'll get a different result. And if a try to cluster colors using one color space vs another I'll get different results. Surely the color space is relevant and my questions aren't completely non-sensical?

CAM16 can't be the best answer to all of them, can it? It's possible but I'd think some color spaces are better suited for some tasks than others.

Which CAM16 are we even talking about? A quick Google reveals CAM16 UCS, SCD and LCD.

I've heard good things about CIELAB, but then Oklab became all the rage, and now I don't know what's better.


While CAM16 helps, it doesn't address all the ideas and complaints. The field that brought you CAM 16 has many more advanced models to address shortcomings, and there's papers published nearly daily addressing flaws in those models.

It's by no means a solved problem or field.


Like most people, I think, I'm just using Oklab for interpolation between colors on my screen, and in color pickers that feel a little easier to use than the usual HSV one. As you mentioned, it's easy to throw in anywhere.

Is there a reason why it would be more appropriate to use CAM16 for those use cases?

I think an even simpler approximation than Oklab might be appropriate for these cases - it'd be nice if the sRGB gamut was convex in Oklab, or at least didn't have that slice cut out of it in the blue region.


Do you live in Canada?


>Do you live in Canada?

I'm guessing they don't. As a US person we hear a lot of, presumably insurance company sponsored, anti-Canadian-healthcare propaganda and then dumb people repeat it online.


While one commenter had it somewhat right that HDR has to do with how bright/dark an image can be, the main thing HDR images specify is how far ABOVE reference white you can display. With sRGB, 100 percent on all channels is 100 percent white (the brightness of a perfect Lambertian reflector). Rec. 2100 together with Rec. 2408 specifies modern HDR encoding, where 203 nits is 100 percent white, and anything above that is brighter content (light sources, specular reflections, etc). So if a white image encoded in SDR looks dimmer than HDR for non-specular detail, that is probably an encoding or decoding error.
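To make the "headroom above reference white" idea concrete, here is a small sketch of the PQ (SMPTE ST 2084) transfer function used by Rec. 2100; the function name is mine, but the constants are the standard's published values:

```python
def pq_encode(nits):
    # SMPTE ST 2084 (PQ) opto-electrical transfer function:
    # maps absolute luminance in cd/m^2 (nits) to a [0, 1] signal value.
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

# Rec. 2408 reference white (203 nits) lands at roughly 58% of the PQ
# signal range, leaving the code values above it for highlights.
print(pq_encode(203))
# The full 10,000-nit peak maps to 1.0.
print(pq_encode(10000))
```

This is what distinguishes the HDR encoding from sRGB: in sRGB the maximum code value is reference white, while in PQ reference white sits well below the top of the range.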


CAM16 (as opposed to CAM16-UCS) is perception based. It calculates chroma, lightness, and hue, and is based on the Munsell color system. Hellwig and Fairchild recently simplified the model mathematically, improving its chroma accuracy (http://markfairchild.org/PDFs/PAP45.pdf). Another, simpler model is CIELAB, which outputs parameters L, a, and b, where L is lightness, hypot(a,b) is chroma, and arctan2(b,a) is the hue.
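A quick sketch of those CIELAB polar correlates (the Lab values here are made up for illustration):

```python
from math import hypot, atan2, degrees

def lab_to_lch(L, a, b):
    # Convert CIELAB's rectangular a/b axes into the polar correlates
    # described above: chroma = hypot(a, b), hue angle = arctan2(b, a).
    return L, hypot(a, b), degrees(atan2(b, a)) % 360

print(lab_to_lch(50.0, 30.0, 40.0))  # chroma 50.0, hue ~53.13 degrees
```

Lightness passes through unchanged; only the a/b pair is reinterpreted as a chroma magnitude and a hue angle.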


Thanks! IIRC (fuzzily - it's been a while), I chose -UCS for a more Euclidean color difference metric - I should review that. My even fuzzier recollection is that CIELAB's visible gamut shape is very artifacty[1], perhaps misleadingly representing the volume outside sRGB/P3, for instance.

The pedagogical objectives of playing well with full visible 3D gamut, and spectral locus, and of avoiding shape artifacts (concavities, excursions), are... non-traditional. Characteristics which could be happily traded away in traditional uses of color spaces, for characteristics like model math and simplicity which here have near-zero value (lookup tables satisficing). And were - most spaces have "oh my, that's a hard downselect" bizarre visual hulls, and topologies outside of P3 or even sRGB can get quite strange. Thus the need to untwist CAM16's curving hue lines - they're not bad within sRGB, but by the time they hit visible hull, yipes, I recall some as near parallel to hull.

Having a color space to play with as a realistic 3D whole, seems not the kind of thing we collectively incentivize. A lot of science education content difficulty seems like that.

[1] https://commons.wikimedia.org/wiki/File:Visible_gamut_within...


CAM16's hue lines are curved by design. Hue is not linear with regards to xy chromaticity, as evidenced by the Abney effect[1].

[1] https://en.wikipedia.org/wiki/Abney_effect


But maybe not this[1] non-linear? Fun if real. But perhaps fitting was done within a gamut folks care more about, and model math then induced artifacts at the margins of the full visible gamut? I'd really love to know if that blue tail represents real perception.

[1] https://www.researchgate.net/profile/Volodymyr-Pyliavskyi/pu... [png] from https://ojs.suitt.edu.ua/index.php/digitech/article/download... [PDF dl] (Curiously, bing image search has this figure, but google doesn't.)

