ThatGeoGuy's comments | Hacker News

You might be thinking: "But, if I see a result/error value that I didn't expect while running my program, the stack trace will help me track down the issue!" Yeah, no kidding. So, let's also start adding stack traces to our successful values, too! After all, if I call my division function and get back a `Result::Ok` with a weird number that I didn't expect, I might want to trace that back, too, right? (This suggestion is sarcastic to prove a point. It should, hopefully, sound ridiculous to add stack traces to every return value from every function.)

I don't think I disagree with the ends you're proposing (don't add stack traces to every value, don't add stack traces specifically to Result::Err(E) variants); however, this is a bad way to justify it. Tools like dtrace / bpftrace do exactly this kind of stack tracing for both success and error cases across entire systems. This is a good thing™, and is actually very useful for debugging, performance profiling, and understanding what your code is really doing on the hardware.

So I guess I disagree with how you're framing it. I would argue that adding stack traces to every value in Rust would be bad because it is a lot of overhead for something your kernel can and will do better.

The issue is that Rust's Result (and Java's checked exceptions) require a different paradigm. A Result is in the type signature because it's part of your domain's API design. It's just values. It's not *for* debugging. You use a debugger for that, or programmatically panic when something is truly unexpected and get the stack trace from that.

This really is the gist of it. However, I will say that in my experience the reason Result types are nice (over e.g. exceptions) is that putting the error cases in the type contract means the compiler can check when someone hasn't handled an error case (`?` and `unwrap` count as "handling" it, even if they aren't always appropriate), as well as statically verify which variants go unused. One very frustrating thing I've encountered in C++ is finding a whole list of different errors duplicated as multiple opaque exceptions (e.g. hidden behind a `unique_ptr<std::exception>` or some such) across the codebase.

Being able to know what variants of error can come out of an API is great! It just happens that working with a rich type system like Rust's makes it possible to do all manner of things that languages-with-only-exceptions cannot.
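To make that concrete, here is a minimal sketch of what "errors in the type contract" buys you. The names here (`DivError`, `checked_div`) are hypothetical, not from any codebase mentioned above:

```rust
// Every failure mode is a named variant in the type contract.
#[derive(Debug)]
enum DivError {
    DivideByZero,
    Overflow,
}

// The failure modes are part of the signature, not an invisible
// side channel the caller has to discover at runtime.
fn checked_div(a: i32, b: i32) -> Result<i32, DivError> {
    if b == 0 {
        return Err(DivError::DivideByZero);
    }
    // i32::MIN / -1 overflows, so i32::checked_div returns None there.
    a.checked_div(b).ok_or(DivError::Overflow)
}

fn main() {
    // The match must be exhaustive: delete an Err arm and this stops
    // compiling. That's the static check opaque exceptions can't give you.
    match checked_div(i32::MIN, -1) {
        Ok(q) => println!("quotient: {q}"),
        Err(DivError::DivideByZero) => eprintln!("divide by zero"),
        Err(DivError::Overflow) => eprintln!("overflow"),
    }
}
```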


Yeah, fair point about dtrace et al., but I think my statement is still fine in context, since we're specifically talking about these Rust libraries that collect stack traces for error types.

And I agree and love having statically checked failure modes! So, if you're choosing to panic in Rust, it had better be because of something that really cannot be handled at all (caveat: the top-level event loop or whatever could catch panics/exceptions, print an "Oops! Something went wrong!" message to the user, and then either die or try to keep going, etc., but no handling panics/exceptions in "middle" layers).


Quick nit/question. For the fold method, I don't think I've seen it called sentinel value. Usually it is seed or initial?

I think in Scheme it is common to call it knil, mirroring how lists use nil as the "sentinel" value which marks the end of a proper list. I opted to name it sentinel in that article (and in the docs) for two reasons:

1. Sentinel values are a common topic in many languages https://en.wikipedia.org/wiki/Sentinel_value

2. Transducers necessarily abstract across a lot more than loops / lists. Lisps do a lot of really cool (and optimized) stuff with lists alone, and Scheme is no different in this regard. However, because (scheme base) primarily exports list operations, it is really easy to run into a situation where lists are used in an ad-hoc way where another data structure is more appropriate. This includes vectors, sets, mappings, hashmaps, etc. Transducers-the-library is meant to be general across operations that work on all of these types, so I chose language that intentionally departs from thinking in a list-focused way.
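Whatever you call it, the role of that argument is the same: it's the value the reduction starts from, and the value you get back when the input is empty. A trivial illustration (in Rust rather than Scheme, purely for familiarity):

```rust
fn main() {
    let xs = [1, 2, 3, 4];
    // The first argument to fold is the seed / initial / knil /
    // sentinel, depending on whose docs you're reading.
    let sum = xs.iter().fold(0, |acc, x| acc + x);
    assert_eq!(sum, 10);

    // With an empty input, the fold just returns that value untouched.
    let empty: [i32; 0] = [];
    assert_eq!(empty.iter().fold(0, |acc, x| acc + x), 0);
}
```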

Now, my main question. Transducer? I'm curious on the etymology of that word. By itself, I don't think I could ever guess what it was referencing. :(

This is from Rich Hickey's presentation: https://www.youtube.com/watch?v=6mTbuzafcII

It's not a reducer, because transducers serve as higher-order functions that operate on reducers. Instead, the values they accept are transient through the function(s), so they transduce. You should watch the video; I think Rich explains the origins of his language very well.
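If it helps to see the shape of it, here's a minimal sketch in Rust (the names `Reducer` and `mapping` are mine, not from any library discussed here). A transducer is just a function from reducer to reducer, with no knowledge of whatever source or sink the reduction is driving:

```rust
// A reducer folds an accumulator and an item into a new accumulator.
type Reducer<Acc, T> = Box<dyn Fn(Acc, T) -> Acc>;

// A "map" transducer: it takes a reducer and returns a new reducer
// that transforms each item on the way through. It never touches the
// source collection, only the reduction step itself.
fn mapping<Acc: 'static, A: 'static, B: 'static>(
    f: impl Fn(A) -> B + Copy + 'static,
) -> impl Fn(Reducer<Acc, B>) -> Reducer<Acc, A> {
    move |rf| Box::new(move |acc, item| rf(acc, f(item)))
}

fn main() {
    // Base reducer: sum two numbers.
    let sum: Reducer<i64, i64> = Box::new(|acc, x| acc + x);
    // Transduce: wrap the reducer, then fold as usual.
    let rf = mapping(|x: i64| x * 2)(sum);
    let total = (1..=3).fold(0, |acc, x| rf(acc, x));
    assert_eq!(total, 12); // (1 + 2 + 3), doubled item by item
}
```

The fold at the end could just as easily be driving a channel, a port, or a vector collector; that context-independence is the "values are transient through the function" part.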


Author of the post here, hi! Funny to see this resurface again, I have made a number of changes to the transducers library since this blog post (see: https://wiki.call-cc.org/eggref/5/transducers).

A vector-reduce form would be trivial but icky, and I chose not to do it to not have to have the continuation safety discussion.

I am not sure what "continuation safety" refers to in this context, but I wanted a library that would give me a good out-of-the-box experience and have support for commonly used Scheme types. I have not yet added any folders/collectors/transducers specific to some types (like anything related to streams or SRFI-69), but I think a broad swath of types and patterns is currently covered.

I think in particular my gripes regarding vectors were that collectors such as `collect-vector`, `collect-u8vector`, etc. were not implemented. There is a chance to break out of these collectors using continuations, but that's not really a good argument for not having them (I hope this is not what you're referring to!).

Anyway, if I read things correctly the complaint that srfi-171 has delete dupes and delete neighbor dupes forgets that transducers are not always used to or from a data structure. They are oblivious to context. That is why both are necessary.

I think this is exactly my argument: they are oblivious to context and actually do the wrong thing by default. I've seen this happen in Rust with users preferring `dedup` or `dedup_by` (from the Itertools crate) rather than just constructing a HashSet or BTreeSet. It almost always is used as a shortcut to save on a data structure, and time and again I've seen it break workflows because it requires that the chain of items is first sorted.
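A quick sketch of that failure mode (assuming the Itertools crate; the data is made up):

```rust
use itertools::Itertools;
use std::collections::BTreeSet;

fn main() {
    let items = vec![1, 2, 1, 1, 3];

    // dedup only collapses *adjacent* duplicates, so unless the input
    // happens to be sorted first, duplicates survive.
    let deduped: Vec<_> = items.iter().copied().dedup().collect();
    assert_eq!(deduped, vec![1, 2, 1, 3]);

    // Collecting into a set deduplicates regardless of ordering.
    let unique: BTreeSet<_> = items.iter().copied().collect();
    assert_eq!(unique.into_iter().collect::<Vec<_>>(), vec![1, 2, 3]);
}
```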

I think this is particularly damning for a library that means to be general purpose. If users want to implement this themselves and maintain it within their own code-bases, they're certainly welcome to; however, I don't personally think making this kind of deduping "easy" helps folks in the general sense. You'd be better off collecting into a set or bag of some kind, and then transducing a second time.

From what I can see the only differences are ordering of clauses to make the transduce form generic and naming conventions. His library shadows a bunch of bindings in a non-compatible way. The transduce form is still not generic but moves the list-, vector-, generator- part of transduce into a "folder". Which is fine. But a generic dispatch would be nicer.

Shadowing bindings in a "non-compatible" way can be bad, but it also helps to make programs more clean. If you're using transducers across your codebase, you almost certainly aren't also using e.g. SRFI-1's filter.

As for generic dispatch: I agree wholeheartedly. I wish we had something like Clojure protocols that didn't suck. I've looked into ways to (ab)use variant records for this sort of thing, but you run into an open/closed problem on extending the API. This is really something that needs to be solved at the language level and something like COOPS / GOOPS incurs a degree of both conceptual and actual performance overhead that makes them somewhat unsatisfying :(

And also: thank you for SRFI-171. I disagree with some of the design decisions but had it not been written I probably wouldn't have even considered transducers as something worth having.


Small unrelated bug report: in your "Book review: Bernoulli's Fallacy" article, there is a link to: https://www.thatgeoguy.ca/blog/%7B%7B%20site.baseurl%20%7D%7...

I don't have much time to read through all this now but I'll check later, looks like great write-ups!


Thanks for the heads up! Seems my excerpt urls had some errors!


> It almost always is used as a shortcut to save on a data structure, and time and again I've seen it break workflows because it requires that the chain of items is first sorted.

I am not sure I understand. I almost never use transducers to create data structures. I use them as a way to create general processing steps. The standard example is how they are used in clojure's channels. In such a context you need both dedup and dedup-neighbors.

To be frank, I don't really care much for the *-transduce functions. I think a general purpose looping facility is almost always a better choice. For those things I use https://git.sr.ht/~bjoli/goof-loop which is always going to be faster than transducers unless you have a very, very smart compiler (or perhaps a tracing JIT).

I think that transducers should be integrated into the standard library to make sense, so that you can, for example, pass them to a port constructor.

Anyway, your library looks much more complete, and pretty similar to the SRFI. The differences are mostly cosmetic.


This is more or less what Sean Baxter was trying to do with https://www.circle-lang.org/.

Of course, this requires buying into a set of tooling and learning a lot of specific idioms. I can't say I've used it, but from reading the docs it seems sound enough.


Hey all, senior engineer on the Tangram team here — we're really excited to be launching HiFi today!

Through past work with similar sensors [0][1], we've heard a lot of feedback about what people want from a depth camera! With our expertise in calibration & shipping sensor SDKs, we saw this as an amazing opportunity to ship what we think is a big leap forward in sensing: 1.6MP cameras, AI capability (up to 8 TOPS / 8GiB onboard memory driven via TIDL), and, most relevant to me, self-calibration and rock-solid software support.

I've spent the better part of my career building and helping develop multi-sensor systems! We hope you'll pick HiFi for your project (and even if you don't, none of our software is locked to any specific hardware vendor). HiFi is a great chance for us to flagship our software on a first-class piece of hardware, and we want to share that superpower with all of you!

[0]: https://gitlab.com/tangram-vision-oss/realsense-rust

[1]: Myself, as well as various members of the team used to work at what is now https://structure.io/


I'm not super informed on the space but I do try to keep up with different 3D sensing tech. What makes this a big leap forward over what we already have? I mean, doesn't the iPhone and most flagships already do 3D sensing?


Hi - I'm one of the founders of Tangram Vision here. It's a good question. This sensor in particular is focused on robotics, where the capabilities of 3D sensors are fairly different from what you'd find on an iPhone. In the case of HiFi, the leaps are in resolution (much higher than other depth sensors for robots), AI compute (about 5x the amount of the next competitor), and ease of integration with a robotic platform.


It would perhaps be more accurate to say that this is a big leap forward compared to most existing off-the-shelf depth cameras for robotics. To address the iPhone specifically: you probably aren't going to mount iPhones on a bunch of production robots in the field.

Comparing to other alternatives in the robotics space (I've listed RealSense and Structure above, but there are others), there is somewhat of a laundry list of potential pitfalls and issues that we've seen folks trip over again and again.

Calibration is a big one, and a large part of what we're doing with HiFi is launching it with its own automatic self-calibration process (no fiducials). There are some device failures that a process like this wouldn't be able to handle, but the vast majority of calibration problems in the field result from difficult tooling or requirements, a need to supply one's own calibration software, or a combination of hardware and software that makes the process difficult. If I had a nickel for every time someone had to train a part-time operator to fix calibration in the field, I'd own Amazon.

Depth quality and precision are another big pitfall — there are folks out there today using RealSense for their robot, but we've talked to a number of folks who just don't rely on the on-board depth. It's too noisy, it warps flat surfaces, etc. Lots of little details that you might not think about on the surface when just looking at a list of cameras! Putting our edge AI capabilities aside, the improved optics and compute available on the HiFi allow us to build a sensor that always provides good depth. That sounds like a baseline for this kind of tech, but there are plenty of examples otherwise on the market today!

Software is probably the last big thing that we really want to leap forward on. We don't have too much to say about our SDK today, but when we launch it we hope to make working with these sensors a lot easier. I work with RealSense quite a bit (I am the maintainer of realsense-rust), and quite honestly, what has been a solid overall hardware package for many years (until HiFi, I hope) is let down by how confusing it is to use librealsense2 in any meaningful project.

Needless to say, I think HiFi stands on some solid merits and I'm not sure it can be directly compared to other 3D sensors in e.g. iPhones, mostly because the expected use-case is so utterly different.


Appreciate the detailed response! Definitely seems like we've come a long way from when I heard about people using Kinect cameras, and I look forward to all the future advancements you'll contribute!


Question: Luxonis seems comparable (I’ve used their products). Are you pursuing the same use cases? How do you plan to differentiate?


I think in terms of use-cases there is bound to be a lot of overlap. Of course, in terms of comparable products, I'd have to mention that Luxonis sells a good number of different products, so understand that there's a necessary disclaimer here: HiFi isn't going to replace every possible option that Luxonis offers (especially their modules / full robot offerings; HiFi is just the sensor!).

In terms of how we differentiate ourselves, I think the main focus is going to be depth quality and software. Our expertise in providing robust calibrations, combined with the improved optics on the HiFi, allows us to produce much higher quality depth frames, more consistently, than what we see on the market today.

In fact, the whole purpose of building this sensor was to bring to market a 3D depth camera that provides good quality data, always, to better enable long-term autonomy. AI tools and capabilities enhance that data in a way that customers have told us existing market offerings are currently lacking.

As for software: We're a software company at heart, which helps, and HiFi is meant to be a flagship representation of what our software can power. We've used a lot of sensors, sensor APIs, SDKs, etc. and the number one thing we find is that these systems are complex, opaque, and difficult to debug or understand. A big part of designing software for me personally is producing software that is legible; not in the sense that one can literally read it (because reading code isn't easy), but in the sense that the software itself can be understood in the abstract. We're hoping that when we ship HiFi and the corresponding SDK that folks will appreciate the steps we've taken to make working with it understandable and obvious.


Awesome! Thanks for the detailed write up and I hope to have an excuse in the next few months to check you guys out :)


I think the core difference is that as teens we see people reject tech and assume that it's the tech itself being rejected, that there is some underlying progress being shunned.

As an adult I realize that the tech is a layer of gloss and glamour on actively making the world worse than I knew it could be. I didn't have that perspective as a teen because as a teen I hadn't known the world.


Ex: "Sonny, I'm not against people communicating easily with their friends and sharing pictures, I'm against what I see as a wave of addictive gamified narcissistic codependency."


An alternative, R7RS implementation that I also maintain for (CHICKEN) Scheme: <https://wiki.call-cc.org/eggref/5/transducers>

I believe I've submitted my own blog post on transducers in the past. See <https://www.thatgeoguy.ca/blog/2023/01/04/reflections-on-tra...>

To bjoli: Have you seen my library? Any intentions to update the SRFI and incorporate more types?


I have had some plans on updating the SRFI to add some reducers I did not include because I never actually used transducers before writing the initial implementation.

I don't really understand what you mean by types (my implementation stays monomorphic, so new types are easily introduced by TYPE-transduce), but I have thought about generalising things like numerical ranges by having something like unfold-transduce.


> but I have thought about generalising things like numerical ranges by having something like unfold-transduce.

This is more or less what I was wondering about. Numerics, ports, SRFI-41 streams, etc. There's a lot of stuff that isn't in e.g. r7rs-small but is more or less expected in most Scheme implementations.


Well. Without generic functions it is impossible to specify a "complete" API. I mention it in "Scope considerations" in the SRFI document.


> That's also an outlier, but less than 30k isn't which makes this post somewhat misleading (at best).

I don't disagree that this _is_ misleading in a way, but it's also important to consider that most EVs are subject to what you said right after:

> Ford's cars are probably too expensive because they ignored the EV market for a decade and it takes time to scale up production and reduce costs.

Notably, Tesla still isn't at a scale of production where walking off the lot with a < $30k vehicle is possible! There usually isn't a lot to walk off of, but it's rare that the base model will be available at all!

Most of the cars they are producing are not the base package, and likely never will be. The thing about supply chains here is that the "base model" is a much smaller percentage of actual manufacturing. Most of the time, if a manufacturer were to produce 1M units a year, less than 30% would be the very base model. I'm not sure there are clear numbers published anywhere, but you can bet that while the "base model" might seem affordable, you're unlikely to find it due to the same supply chain constraints — Tesla would much rather up-sell you to some package above the base model, and availability is restricted as-is, so which customers do you think they'll prioritize? Tesla isn't even producing millions of units yet (I think last year was just over 600k?), and this is just a drop in the bucket in the number of cars purchased each year.

> I'd guess this is because Ford can't profitably make an affordable sedan (they can't do so without losing a lot of money) so they need to make a more expensive vehicle with better margins, Tesla did this too of course - they just had a decade head start while the legacy manufacturers did nothing.

They don't "need" to make larger vehicles with better margins, they just prefer to have better margins. A lot of this is a downstream effect of CAFE exemptions on "light trucks," which apply to both SUVs and modern pick-up trucks. I guess my disagreement here isn't that they "can't" make sedans, unless you're defining "can't" in terms of "what the shareholders said." We can absolutely regulate these things and it would probably be beneficial to do so.

As for why Tesla isn't making massive trucks? EV physics (weight of batteries vs. amount of energy to move said weight) somewhat preclude this, but counterpoint there is that Tesla is making a truck and wants to be on top of that market. Sedans are in a weird space with EVs since the added weight kind of puts you in a range<->weight arms race. While most people probably don't need as much range as they think they do, you end up picking between the atrocious Hummer EV or a Model 3 (which is still very large for most of the trips you'd take with a sedan!!!).


> There usually isn't a lot to walk off of, but it's rare that the base model will be available at all!

That isn't true for much of the US. The base model 3 is readily available, even in inventory. At the end of the quarter, it is available with discounts as well.

Not only that, but because of the way the battery supply chain works, there are advantages to Tesla for you to buy the base model. The base uses LFP cells, whereas higher trims use 2170s from either NV or imports. Selling the base model in its current config frees up higher energy density cells for higher trims.


You can't walk off the lot with any Tesla, so Tesla has much less influence on what percentage of its cars are the base model than other brands. You order online so there's no salesman standing over you talking you into unneeded upgrades.

For high-demand products Tesla will offer a better expected delivery date for more expensive vehicles, but Tesla's supply and demand are in balance now and you can get base model deliveries within a couple of weeks.


There are no unneeded upgrades to be had (aside from, I guess, Autopilot?).

The experience of buying a Tesla was wonderful. We just couldn't believe how great everything was. Zero pressure. Clear pricing. Simple website. They even registered the car.


> but counterpoint there is that Tesla is making a truck and wants to be on top of that market.

Is this actually true? I know Tesla say they're trying to compete in the truck market, but they also said that full self-driving was less than a year away in 2014, and then again in basically every year since.

Meanwhile, the Cybertruck was slated for release in 2021. Ditto the trucks, IIRC.


> Notably, Tesla still isn't at a scale of production where walking off the lot with a < $30k vehicle is possible!

It's estimated Tesla has a 30% profit margin on the Model 3, whereas automakers traditionally were at 5%.

There's a lot of overpricing there.


Well, prices aside, there currently is no electric Ford SUV. I've been shopping for an EV but the current offerings still don't replace my ancient Ford Escape. Still waiting for the electric version...


From what I understand (and have seen, around here), the new electric "Mustang" is actually a crossover SUV. Note the URL hierarchy.

https://www.ford.com/suvs/mach-e/


Crossover, yes, but it hardly qualifies as an SUV. Its overall profile is closer to a medium-size sedan with a hatchback, what was once called a 5-door. If the hatch were more vertical it would be closer to traditional station wagons.

Yes, Ford markets it as an SUV because that term appeals to a wider market and most people don't know what a crossover is.


Yes, as are most other big-brand EVs like the Ioniq, Kia's line, etc. But they are still smaller than the Escape; not sure if it's a design trend or what. It doesn't seem like there is any inherent constraint on the shape unless it's for air resistance reasons.


A website hierarchy means little; that could be a hack to reuse the existing layouts.


System76 is based in Denver, Colorado. Presumably, this is where COSMIC is being developed.


> Yeah, every time the discussion comes up about, for example, reparations for the descendants of slaves, I start out thinking: it's been 150 years!

Well, that might be a bit disingenuous. The "last chattel slave" was only freed around September 1942. I've seen this referenced in several places, but the most direct one is a footnote on a Wikipedia page [0].

Regardless, it is probably not worth putting a time limit on suffering. The children and grandchildren of enslaved black people are still alive today! Waving it away with "time has passed" seems more an attempt to bury the issue than to approach it with some semblance of acknowledging the wrong done.

[0] https://en.wikipedia.org/wiki/Beeville,_Texas#/media/File:Be...


Historically, many slaves were not permitted or able to reproduce; this is one thing that distinguishes the slave trade in the United States. Trying to make amends for those slaves whose line ended with them is probably impossible.

On the other hand, a great many people today all around the world, and of many skin colours, are descended from slaves. I am mostly familiar with this history in Europe and Africa, though I have no doubt it went on to a greater or lesser extent elsewhere. Supposing that the average reader here, who does not consider themselves to be a "minority," is a "white" American, how confident are you that your ancestors do not include many slaves? Slavery in Europe still exists, but in the traditional sense, with open buying and selling and large-scale enslavement, it was openly and widely practiced in England and Germany and Poland and wherever you trace your ancestry no more than a thousand years ago.

You may consider it inappropriate to put a time limit on suffering, but in practice it's implicitly done all the time. The US is exceptional in having so many people who bear clear marks of historically nearby enslavement. Other parts of the world have been more successful in forgetting.

If I proposed to some Ivy League admissions panel that the descendants of biblical Jews should be favoured over those of Egyptians on account of enslavement would anyone listen?


Population-wise, there are more slaves today than 100 years ago, so the world has not quite moved on.


> it is probably not worth putting a time limit on suffering

I'm not a historian, but if you believe this, how do you propose to make things right for all the suffering of the past? You would need to examine history for winners and losers, every battle and atrocity and societal structure, and then assign blame to modern people who look like the bad guys, and victimhood to modern people who look like the victims. How do you deal with the (probably very common) case when a group of people that looks one way has been both oppressor and victim? How do you deal with issues like pedophilia, incest, domestic violence, or torture, all of which have had very different moral weight historically?

To me, that's the tragedy of this ideology. The problem isn't the desire for making past wrongs right - that's a very good urge, and one I share. It's that the method for making past wrongs right is based on a very simplistic reading of history and a simplistic, and deeply unfair, idea that you can assign blame and victimhood based on similarity of appearance. There ARE cases when you can address great wrongs, but there is a kind of natural "statute of limitations" where it becomes actually impossible to do anything. Should the Jews still be angry with Egyptians? Or does the Israeli treatment of Palestinians wipe that debt out? What about the Jews who weren't involved? What about the blood libel, the assertion that Jews killed Jesus (never mind that he was a Jew), and so it is right to hold all modern Jews responsible? What about all the tribal massacres in Africa, where the victims and oppressors a) look exactly the same, and b) would do exactly the same thing if their positions were reversed? How do you deal with the Aztecs, who were slaughtered by Europeans, but who themselves practiced human sacrifice and slavery, and who eventually interbred with the Europeans? Same for the Russians and Mongolians. (There are probably a hundred other examples of this - Vikings and the Anglo-Saxons? The French and the Celts? Etc.)

What we can do, we should do. Japanese internment at Manzanar was wrong, and they deserved all the reparations and apologies they (eventually) got, and more. Harvey Weinstein's female victims deserved to see him in prison (at least). Black neighborhoods deserve to have freeways rerouted to not split them and make them terrible, and money to rebuild. But do all white people deserve to be hated, and to hate themselves, because they look like a group of wrongdoers? No. Heck, some of them are recent immigrants. Ditto for black people. And the whole idea we can assign blame based on a person's appearance is a CORE racist belief, and yet now the zeitgeist holds that if you don't do it, you're the racist. The world is upside down, and this ideology is utterly unjust. In my view, it's not anti-racist, it's a new racism that doesn't seek to end racism, but rather to turn the tables and swap the roles of victim and oppressor. This will not, cannot, end well, and it's not the world I want for myself or my children, and I don't think it's the world any right-thinking person wants.


> tribal massacres in Africa, where the victims and oppressors a) look exactly the same

To you. There's almost certainly more genetic difference between two people randomly selected from two African tribes than two people randomly selected from different self-identified racial groupings in a Western country. And a much longer history of conflict between tribes vs races. I'd note the fact this is true goes some way towards explaining why Africa suffers the levels of violence and poverty today that it still does. As for the rest of your post, while AA clearly is a strong form of racial discrimination that does little to help us achieve an ideal world where "race" is no longer a thing, it's also a policy with an underlying philosophy of "let's provide help to other people of different appearances/ethnic backgrounds", which is rather obviously a massive improvement on "let's actively discriminate and/or commit violence against such people". And hopefully a step towards a policy of "let's help other people when they need help, regardless of their appearance or ethnic background".


> To you.

No, to them. I was thinking specifically of the Rwandan Genocide[0], where there was and is no visible difference between the Hutu and Tutsi. The difference was via a field on their national id card [1].

0 - https://en.wikipedia.org/wiki/Rwandan_genocide

1 - http://www.preventgenocide.org/edu/pastgenocides/rwanda/inda...


Accepted, the Tutsi/Hutu division isn't one where difference in genetics/appearance seems to be a major factor, though I'd still assume the average Tutsi or Hutu could easily distinguish one from the other in a way outsiders mightn't be able to.


The main division of Tutsi/Hutu was primarily done by Europeans, and the criterion was based on "those who owned cattle became known as the Tutsi and those who did not own cattle became known as the Hutu", with taller persons also assigned as Tutsi.

Taller men tend to earn more money on average, so in those terms the average Tutsi, the average Hutu, and outsiders should all be able to make a better-than-random guess about who is Tutsi or Hutu.


Ah you have it backwards - there was already the Tutsi ethnic group, but the Belgians found it easier to identify them as Tutsi by number of cattle etc.

"Prior to the arrival of colonists, Rwanda had been ruled by a Tutsi-dominated monarchy since the 15th century."

"Rwanda was ruled as a colony by Germany (from 1897 to 1916) and by Belgium (from 1922 to 1961). Both the Tutsi and Hutu had been the traditional governing elite, but both colonial powers allowed only the Tutsi to be educated and to participate in the colonial government. Such discriminatory policies engendered resentment."

"When the Belgians took over, they believed it could be better governed if they continued to identify the different populations. In the 1920s, they required people to identify with a particular ethnic group and classified them accordingly in censuses."


I was taught that there was no difference and it was the Dutch that measured nose length and made the classification ‘arbitrarily’. But isn’t that false? In the time since I’ve seen side by side pictures and it seems trivial to tell them apart. So now I don’t know what to think.


> ...how do you propose to make things right for all the suffering of the past?

Yes and: What is justice?

> You would need to examine history for winners and losers...

That'd be a good start.

Until something better comes along, I support the "truth & reconciliation" strategy. With a splash of sociology. https://en.wikipedia.org/wiki/Truth_commission https://en.wikipedia.org/wiki/Critical_theory

Another good step would be to enfranchise people. Like giving the all the people impacted by a new freeway some say in the planning process.


Your society would be doomed to forever look back at historical grievances and never make progress.

As Ibram X. Kendi says: "The only remedy to past discrimination is present discrimination. The only remedy to present discrimination is future discrimination." Under your and his vision, there will never come a day when people aren't discriminated for things they had no control over.


I think it is probably unwise to pre-suppose an extreme here (that society will never "progress").

The default action today is "do nothing and don't acknowledge the problem." Suggesting any action be taken against that status quo does not in any way suggest that it is a permanent, inviolable law that society must continuously optimize for, nor does it suggest that it can't be done in tandem with other "progress" society may achieve.


> The default action today is "do nothing and don't acknowledge the problem."

Ambivalence is the human default, and logic requires it must be so. The world presently has 10e9 people. Historically, something like 10e12 people have ever existed (I'm estimating). If you were to somehow feel the sum total of human suffering in just one instant, I daresay it would destroy you. We ALL pick and choose what suffering to acknowledge, for the simple reason that to do otherwise is impossible (and deadly if it was possible). Heck, we ignore entire categories of suffering in every discussion, like that caused by disease, heart-break, ostracism, bullying, or old age.

You loudly proclaim your aversion to all human suffering, past and present, and claim to know how to fix it. This is absurd. It is vain virtue signaling. Your position smacks of an ignorant pride, wrapped in a claim of impossible compassion. And this sin of pride extends to your "solutions" - you assert that you can accurately assess the suffering of all humans throughout history and take just action to make it right. That's even more absurd.

We can't address ALL suffering. That doesn't mean that we can't address ANY suffering. It means we must (must!) be highly selective. We must let (almost) everything go. We deal with what's in front of us. We must acknowledge how human life is twisted: Rape and plunder...that yields good kids. Civilizations collapse...to make way for the next one. Rampant exploitation...that yields just and fair societies. Cultural appropriation...that yields great ideas and art. Slavery and dehumanization...that ultimately leaves the descendants in a better position than the descendants of those that weren't taken. It's twisted, messed up, and that's life. (btw the most twisted thing I know of in nature is the life cycle of this slime mold/amoeba: https://www.youtube.com/watch?v=vlANF-v9lb0).

Yeah, there are plenty of structures that need to be dismantled in the US. The police are out-of-control and there is no meaningful separation of powers at the local level; the health-care system is plundering us all for profit; wealth inequality continues to get worse; money in politics has ossified our power structures. And yeah, America has a profound and unique history of racist dehumanization rooted in southern slavery that continues to this day and negatively impacts many American black people in profound ways. But the solution to the KKK (the original recipe anti-black version) is not to invent a ~KKK (the crispy anti-white version) and tell whites that if they don't join ~KKK then they are in the KKK. That's just fucked up.


> The world presently has 10e9 people. Historically, something like 10e12 people have ever existed (I'm estimating).

Tangential, and doesn't detract much from your well-defended point, but the percentage of people alive today is probably much higher than your estimate. The population has gone up so fast in recent years that the total number of people who have ever lived is closer to 10x current population than 1000x:

> Given a current global population of about 8 billion, the estimated 117 billion total births means that those alive in 2022 represent nearly 7% of the total number of people who have ever lived.

https://www.prb.org/articles/how-many-people-have-ever-lived...


Look I'm not exactly engaged enough to dismantle this piece by piece so this will probably be my last comment but:

> Ambivalence is the human default, and logic requires it must be so.

You'd do well to do more than assert it. This is ideology.

> You loudly proclaim your aversion to all human suffering, past and present, and claim to know how to fix it.

I said no such thing, and the remainder of your statements likewise assert claims I never made. Making efforts to fix wrongs is not itself a moral failure, nor is it some kind of foolish pride.

> We can't address ALL suffering. That doesn't mean that we can't address ANY suffering.

What is odd to me is that this is exactly my point. If you somehow think that racism isn't still "in front of us" as you so boldly claim, I encourage you to prove that substantially and convince the people who to this day still feel victimized by it.

> But the solution to the KKK (the original recipe anti-black version) is not to invent a ~KKK (the crispy anti-white version) and tell whites that if they don't join ~KKK then they are in the KKK.

I haven't claimed this at all. For what it's worth, though: you are in some form invoking the paradox of tolerance here. I'm not sure why you felt the need to write this screed; it is entirely separate from anything I've said and completely off the rails.


You may be right - I suppose that apart from my first point about ambivalence being the default, it doesn't necessarily apply to you personally. But it does apply to the general ideology this thread is addressing. I'm sorry if I grouped you in with views that you don't share.


> Under your and his vision,

You know me so well.

> ...there will never come a day when people aren't discriminated for things they had no control over.

Um, what?

While I'm ambivalent towards Kendi, I have zero doubt you've got him wrong.

Maybe you're thinking of McWhorter?


I think you may have misread the comment you're responding to.


The part you quoted was a rhetorical device, hence "start out". The poster went on to explain Jim Crow laws and other systemic discrimination against Black people up to at least 1971.

