maxilevi's comments | Hacker News

> Canada wouldn't be able to defend from an invasion


An invasion from who? I agree with you that Canada couldn't defend against an invasion from the US, but I also don't think that any country without nuclear weapons could defend against an invasion from the US. But I think that Canada could probably defend itself from an invasion from most other countries—the Canadian military is generally competent, and NATO and NORAD would almost certainly offer assistance.

Even then, who would want to invade Canada? Despite the recent political blustering, it seems incredibly unlikely that the US would invade Canada, and the only other plausible invader that I can think of right now is Russia, but their military isn't doing very well at all right now.


If the threat model is "the US goes rogue and does crazy stuff", Canada is a prime risk of suffering from such madness, so moving your resources there doesn't really change anything.


Agreed, but I'd argue that there's a big difference between the US making it difficult to access gold reserves stored there and invading/blockading Canada to the point where gold reserves stored there are unusable. The first seems unlikely but possible, while the second seems almost unimaginable, and even if the second does happen, I'd be more concerned about access to food/medicine than access to gold reserves.

(Although I'm Canadian, so this may perhaps just be wishful thinking on my part)


And don't give Trump any more reasons.


LLMs are just really good search. Ask it to create something and it's searching within the pretrained weights. Ask it to find something and it's semantically searching within your codebase. Ask it to modify something and it will do both. Once you understand it's just search, you can get really good results.
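A toy sketch of what the "semantically searching" half of that means mechanically (embed() is a hypothetical stand-in for a real embedding model; the word-hashing is only there to keep the sketch runnable):

  import numpy as np

  def embed(text: str) -> np.ndarray:
      # Hypothetical stand-in: a real system would call an embedding model here.
      vec = np.zeros(64)
      for word in text.lower().split():
          vec[hash(word) % 64] += 1.0
      return vec / (np.linalg.norm(vec) + 1e-9)

  snippets = ["read the config file from disk",
              "render the html template",
              "connect to the database"]
  query = "where do we read the config file"

  # Rank snippets by cosine similarity to the query embedding.
  scores = [float(embed(s) @ embed(query)) for s in snippets]
  print(max(zip(scores, snippets)))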


I agree somewhat, but more when it comes to its use of logic - it only gleans logic from human language, which as we know is a fucking mess.

I've commented before on my belief that the majority of human activity is derivative. If you ask someone to think of a new kind of animal, alien or random object they will always base it off things that they have seen before. Truly original thoughts and things in this world are an absolute rarity and the majority of supposed original thought riffs on what we see others make, and those people look to nature and the natural world for inspiration.

We're very good at taking thing a and thing b and slapping them together and announcing we've made something new. Someone please reply with a wholly original concept. I had the same issue recently when trying to build a magic based physics system for a game I was thinking of prototyping.


  it only gleans logic from human language
This isn’t really true, at least as I interpret the statement: little if any of the “logic”, or the appearance of it, is learned from language alone. It’s trained in with reinforcement learning as pattern recognition.

Point being it’s deliberate training, not just some emergent property of language modeling. Not sure if the above post meant this, but it does seem a common misconception.


LLMs lack agency in the sense that they have no goals, preferences, or commitments. Humans do, even when our ideas are derivative. We can decide that this is the right choice and move forward, subjectively and imperfectly. That capacity to commit under uncertainty is part of what agency actually is.


But they do have utility functions, which one can interpret as nearly equivalent.
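In a narrow sense, at least: at decode time the model scores every continuation by log-probability, and generation searches for high-scoring ones. A toy illustration (the per-token probabilities are made up):

  import math

  # An LLM's "utility" over a continuation is its log-probability; decoding
  # (greedy, beam, sampling) searches for high-utility sequences.
  token_probs = [0.9, 0.7, 0.95]  # made-up per-token probabilities
  print(sum(math.log(p) for p in token_probs))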


better mental model: it's a lossy compression of human knowledge that can decompress and recombine in novel (sometimes useful, sometimes sloppy) ways.

classical search simply retrieves, llms can synthesize as well.


Corporate wants you to find the difference...

Point being, in broad enough scope, search and compression and learning are the same thing. Learning can be phrased as efficient compression of input knowledge. Compression can be phrased as search through the space of possible representation structures. And search through the space of possible x for the x such that F(x) is minimized is a way to represent any optimization problem.
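The learning-compression leg of that triangle has a precise form: a model's cross-entropy on data is the number of bits an ideal arithmetic coder would need using that model. A toy sketch with an empirical character model (nothing LLM-specific assumed):

  import math

  data = "abracadabra"
  # "Learned" model: empirical character frequencies of the data itself.
  freq = {c: data.count(c) / len(data) for c in set(data)}

  # Average code length under the model = cross-entropy in bits per character.
  bits = -sum(freq[c] * math.log2(freq[c]) for c in freq)
  print(f"{bits:.2f} bits/char vs 8 bits/char raw")
  # A better predictor (lower cross-entropy) is, by construction, a better
  # compressor: learning and compression coincide here.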


This isn't strictly better to me. It captures some intuitions about how a neural network ends up encoding its inputs over time in a 'lossy' way (doesn't store previous input states in an explicit form). Maybe saying 'probabilistic compression/decompression' makes it a bit more accurate? I do not really think it connects to your 'synthesize' claim at the very end to call it compression/decompression, but I am curious if you had a specific reason to use the term.


It's really way more interesting than that.

The act of compression builds up behaviors/concepts of greater and greater abstraction. Another way you could think about it is that the model learns to extract commonality, hence the compression. What this means is that because it is learning higher-level abstractions AND the relationships between these higher-level abstractions, it can ABSOLUTELY learn to infer or apply things way outside its training distribution.


ya, exactly... i'd also say that when you compress large amounts of content into weights and then decompress via a novel prompt, you're also forcing interpolation between learned abstractions that may never have co-occurred in training.

that interpolation is where synthesis happens. whether it is coherent or not depends.
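a toy picture of that interpolation, with hypothetical 3-d "concept" vectors standing in for what real models do across thousands of dimensions:

  import numpy as np

  # hypothetical learned directions for two styles that never co-occurred
  legal_prose = np.array([0.9, 0.1, 0.0])
  pirate_slang = np.array([0.0, 0.2, 0.9])

  # a prompt like "write a contract in pirate-speak" asks the decoder to
  # work from points between the two learned regions
  for t in (0.0, 0.5, 1.0):
      blend = (1 - t) * legal_prose + t * pirate_slang
      print(t, blend.round(2))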


Maybe the base model is just a compression of the training data?

There is also an RLHF training step on top of that


yep the base model is the compression, but RLHF (and other types of post training) doesn't really change this picture, it's still working within that same compressed knowledge.

nathan lambert (who wrote the RLHF book @ https://rlhfbook.com/ ) describes this as the "elicitation theory of post training", the idea is that RLHF is extracting and reshaping what's already latent in the base model, not adding new knowledge. as he puts it: when you use preferences to change model behavior "it doesn't mean that the model believes these things. it's just trained to prioritize these things."

so like when you RLHF a model to not give virus production info, you're not necessarily erasing those weights, the theory is that you're just making it harder for that information to surface. the knowledge is still in the compression, RLHF just changes what gets prioritized during decompression.
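a cartoon of that "reprioritizing, not erasing" idea (the logits here are made up, not any real model's internals):

  import numpy as np

  def softmax(x):
      e = np.exp(x - x.max())
      return e / e.sum()

  options = ["helpful_answer", "refusal", "harmful_answer"]
  base_logits = np.array([2.0, 0.0, 1.5])  # base model: everything is latent

  # post-training acts like a learned bias over which continuation surfaces;
  # the underlying knowledge is still encoded in the weights
  rlhf_bias = np.array([0.5, 3.0, -4.0])

  print(dict(zip(options, softmax(base_logits).round(3))))
  print(dict(zip(options, softmax(base_logits + rlhf_bias).round(3))))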


No, this describes the common understanding of LLMs and adds little to just calling it AI. Search is the more accurate model when considering their actual capabilities and understanding their weaknesses. “Lossy compression of human knowledge” is marketing.


It is fundamentally and provably different from search because it captures things on two dimensions that can be used combinatorially to infer desired behavior for unobserved examples.

1. Conceptual Distillation - Research has shown that we can find weights that capture/influence outputs that align with higher-level concepts.

2. Conceptual Relations - The internal relationships capture how these concepts are related to each other.

This is how the model can perform tasks and infer information way outside of its training data. Because if the details map to concepts, then the conceptual relations can be used to infer desirable output.

(The conceptual distillation also appears to include meta-cognitive behavior, as evidenced by Anthropic's research. Which makes sense to me: what is the most efficient way to be able to replicate irony and humor for an arbitrary subject? Compressing some spectrum of meta-cognitive behavior...)
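Point 1 has an experimental face in the interpretability literature: "steering", where adding a concept direction to a hidden activation shifts outputs toward that concept. A toy sketch, with random vectors standing in for real activations and a real learned direction:

  import numpy as np

  rng = np.random.default_rng(0)
  hidden = rng.normal(size=16)  # stand-in for a residual-stream activation

  # A "concept direction", e.g. found by contrasting activations on inputs
  # that do vs don't express the concept (as in activation-steering work).
  concept = rng.normal(size=16)
  concept /= np.linalg.norm(concept)

  steered = hidden + 4.0 * concept  # push the activation toward the concept
  print(float(hidden @ concept), float(steered @ concept))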


Aren't the conceptual relations you describe still, at their core, just search (even if that's extremely reductive)? We know models can interpolate well, but it's still the same probabilistic pattern matching. They identify conceptual relationships based on associations seen in vast training data. It's my understanding that models are still not at all good at extrapolation, handling data "way outside" of their training set.

Also, I was under the impression LLMs can replicate irony and humor simply because that text has specific stylistic properties, and they've been trained on it.


I don't know honestly; I think the only really big hole the current models have is if you have tokens that never get exposed enough to have a good learned embedding value. Those can blow the system out of the water because they cause activation problems in the low layers.

Other than that the model should be able to learn in context for most things based on the component concepts. Similar to how you learn in context.

There aren't a lot of limits in my experience. Occasionally you'll hit patterns that are too powerful, where it is hard for context to alter behavior, but those are pretty rare.

The models can mix and match concepts quite deeply. Certainly, if it is a completely novel concept that can't be described by a union or subtraction of similar concepts, then the model probably wouldn't handle it. In practice, a completely isolated concept is pretty rare.


Information Retrieval followed by Summarization is how I view it.


“Novel” to the person who has not consumed the training data. Otherwise, just training data combined in highly probable ways.

Not quite autocomplete but not intelligence either.


What is the difference between "novel" and "novel to someone who hasn't consumed the entire corpus of training data, which is several orders of magnitude greater than any human being could consume?"


The difference is that when you do not know how a problem can be solved, but you know that this kind of problem has been solved countless times earlier by various programmers, you know that it is likely that if you ask an AI coding assistant to provide a solution, you will get an acceptable solution.

On the other hand, if the problem you have to solve has never been solved before at a quality satisfactory for your purpose, then it is futile to ask an AI coding assistant to provide a solution, because it is pretty certain that the proposed solution will be unacceptable (unless the AI succeeds in duplicating the performance of a monkey that types a Shakespearean text by typing randomly).


Are you reviewer 2?

Joking aside, I think you have too strict of a definition of novel. Unfortunately "novel" is a pretty vague word and is definitely not a binary one.

ALL models can produce "novel" data. I don't just mean ML (AI) models, but any mathematical model. The point of models is to make predictions about results that aren't in the training data. Doing interpolation between two datapoints does produce "novel" things. Thinking about the parent's comment, is "a blue tiger" novel? Probably? Are there any blue tigers in the training data? (there definitely is now thanks to K-Pop Demon Hunters) If not, then producing that fits the definition of novel. BUT I also agree that that result is not that novel. It is entirely unimpressive.

I'm saying this not because I disagree with what I believe you intend to say but because I think a major problem with these types of conversations is that many people are going to interpret you more literally and dismiss you because "it clearly produces novel things." It isn't just things being novel to the user, though that is also incredibly common and quite telling that people make such claims without also checking Google...

Speaking of that, I'm just going to leave this here... I'm still surprised this is a real and serious presentation... https://www.youtube.com/watch?v=E3Yo7PULlPs&t=616s


Citation needed that grokked capabilities in a sufficiently advanced model cannot combinatorially lead to contextually novel output distributions, especially with a skilled guiding hand.


Pretty sure burden of proof is on you, here.


It's not, because I haven't ruled out the possibility. I could share anecdata about how my discussions with LLMs have led to novel insights, but it's not necessary. I'm keeping my mind open, but you're asserting an unproven claim that is currently not community consensus. Therefore, the burden of proof is on you.


I agree that after discussions with a LLM you may be led to novel insights.

However, such novel insights are not novel due to the LLM, but due to you.

The "novel" insights are either novel only to you, because they belong to something that you have not studied before, or they are novel ideas that were generated by yourself as a consequence of your attempts to explain what you want to the LLM.

It is very frequent for someone to be led to novel insights about something that he/she believed to already understand well, only after trying to explain it to another ignorant human, when one may discover that the previous supposed understanding was actually incorrect or incomplete.


The point is that the combined knowledge/process of the LLM and a user (which could be another LLM!) led to it walking the manifold in a way that produced a novel distribution for a given domain.

I talk with LLMs for hours out of the day, every single day. I'm deeply familiar with their strengths and shortcomings on both a technical and intuitive level. I push them to their limits and have definitely witnessed novel output. The question remains, just how novel can this output be? Synthesis is a valid way to produce novel data.

And beyond that, we are teaching these models general problem-solving skills through RL, and it's not absurd to consider the possibility that a good enough training regimen could impart deduction/induction skills into a model that are powerful enough to produce novel information even via means other than direct synthesis of existing information. Especially when given affordances such as the ability to take notes and browse the web.


> I push them to their limits and have definitely witnessed novel output.

I’m quite curious what these novel outputs are. I imagine the entire world would like to know of an LLM producing completely, never-before-created outputs which no human has ever thought before.

Here is where I get completely hung up. Take 2+2. An LLM has never had 2 groups of two items and reached the enlightenment of 2+2=4.

It only knows that because it was told that. If enough people start putting 2+2=3 on the internet, who knows what the LLM will spit out. There was that example a ways back where an LLM would happily suggest all humans should eat 1 rock a day. Amusingly, even _that_ wasn't a novel idea for the LLM; it simply regurgitated what it scraped from a website about humans eating rocks. Which leads to the crux: how much patently false information have LLMs scraped?


This is not a correct approximation of what happens inside an LLM. They form probabilistic logical circuits which approximate the world they have learned through training. They are not simply recalling stored facts. They are exploiting organically-produced circuitry, walking a manifold, which leads to the ability to predict the next state in a staggering variety of contexts.

As an example: https://arxiv.org/abs/2301.05217

It's not hard to imagine that a sufficiently developed manifold could theoretically allow LLMs to interpolate or even extrapolate information that was missing from the training data, but is logically or experimentally valid.


So you do agree that an LLM cannot derive math from first principles, or no? If an LLM had only ever seen 1+1=2 and that was the only math they were ever exposed to, along with the numbers 0-10, could an LLM figure out that 2+2=4?

I argue absolutely not. That would be a fascinating experiment.

Hell, train it on every 2-number addition combination of m+n where m and n can be any number between 1-100 (or 0-100 would be better) BUT 2, and have it figure out what 2+2 is.

I would probably change my opinion about “circuits”, which by the way really stretches the idea of a circuit. The “circuit” is just the statistically most likely series of tokens that you’re drawing pretend lines between. Sure, technically connect-the-dots is a circuit, but not in the way you’re implying, or that paper.


> If an LLM had only ever seen 1+1=2 and that was the only math they were ever exposed to, along with the numbers 0-10, could an LLM figure out that 2+2=4?

What? Of course not? Could you? Do you understand just how much work has gone into proving that 1 + 1 = 2? Centuries upon centuries of work, reformulating all of mathematics several times in the process.

> Hell, train it on every 2-number addition combination of m+n where m and n can be any number between 1-100 (or 0-100 would be better) BUT 2, and have it figure out what 2+2 is.

If you read the paper I linked, it shows how a constrained modular addition is grokked by the model. Give it a read.

> The “circuit” is just the statistically most likely series of tokens that you’re drawing pretend lines between.

That is not what ML researchers mean when they say circuit, no. Circuits are features within the weights. It's understandable that you'd be confused if you do not have the right prior knowledge. Your inquiries are good, but they should stay inquiries.

If you wish to push them to claims, you first need to understand the space better, understand what modern research does and doesn't show, and turn your hypotheses into testable experiments, collect and publish the results. Or wait for someone else to do it. But the scientific community doesn't accept unfounded conjecture, especially from someone who is not caught up with the literature.
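For what it's worth, the held-out-arithmetic experiment proposed above is close to what that paper actually runs. A minimal sketch of just the data split (the paper trains a small transformer on modular addition; grokking shows up as held-out accuracy jumping long after training accuracy saturates):

  import itertools, random

  MOD = 113  # modulus used for the paper's modular-addition task
  pairs = [(a, b) for a, b in itertools.product(range(MOD), repeat=2)]
  random.seed(0)
  random.shuffle(pairs)

  # Train on a fraction of all (a, b) -> (a + b) % MOD examples,
  # then test generalization on the pairs the model has never seen.
  split = int(0.3 * len(pairs))
  train, held_out = pairs[:split], pairs[split:]
  print(len(train), "train pairs,", len(held_out), "held-out pairs")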


My 4-year-old kid was able to figure out 2+2=4 after I taught them 1+1=2. All 3 of them actually, all at 4-5 years old.

Turns out counting 2 sets of two objects (1… 2… 3… 4…) isn’t actually hard to do if you teach the kid how to count to 10 and that 1+1=2

I guess when we get to toddler stage of LLMs I’ll be more interested.


That's wonderful, but you are ignoring that your kid comes built in with a massive range of biological priors, built by millions of years of evolution, which make counting natural and easy out of the box. Machine learning models have to learn all of these things from scratch.

And does your child's understanding of mathematics scale? I'm sure your 4-year-old would fail at harder arithmetic. Can they also tell me why 1+1=2? Like actually why we believe that? LLMs can do that. Modern LLMs are actually insanely good at not just basic algebra, but abstract, symbolic mathematics.

You're comparing apples and oranges, and seem to lack foundational knowledge in mathematics and computer science. It's no wonder this makes no sense to you. I was more patient about it before, but now this conversation is just getting tiresome. I'd rather spend my energy elsewhere. Take care, have a good day.


I hope you restore your energy, I had no idea this was so exhausting! Truly, I'll stop inflicting my projected lack of knowledge on you, sorry I tired you out!


Ah man, I was curious to read your response about priors.

> If an LLM had only ever seen 1+1=2 and that was the only math they were ever exposed to, along with the numbers 0-10, could an LLM figure out that 2+2=4?

Unless you locked your kid in a room since birth with just this information, it is not the same kind of setup, is it?


You compared an LLM blob of numbers to a child.


Everyone else compared them to college interns, I was being generous.


No, you were being arrogant and presumptuous, providing flawed analogies and using them as evidence for unfounded and ill-formed claims about the capabilities of frontier models.

Lack of knowledge is one thing, arrogance is another.


You could find a pre-print on Arxiv to validate practically any belief. Why should we care about this particular piece of research? Is this established science, or are you cherry-picking low-quality papers?


I don't need to reach far to find preliminary evidence of circuits forming in machine learning models. Here's some research from OpenAI researchers exploring circuits in vision models: https://distill.pub/2020/circuits/ Are these enough to meet your arbitrary quality bar?

Circuits are the basis for features. There is still a ton of open research on this subject. I don't care what you care about, the research is still being done and it's not a new concept.


I really don’t think search captures the thing’s ability to understand complex relationships. Finding real bugs in 2000-line PRs isn’t search.


This is not true.


I'm not sure how anyone can say this. It is really good search, but it's also able to combine ideas, reason about them, and do fairly complex logic on tasks that surely no one has asked before.


It's a very useful model but not a complete one. You just gotta acknowledge that if you're making something new it's gonna take all day and require a lot of guard rails, but then you can search for that concept later (add the repo to the workspace and prompt at it) and the agent will apply it elsewhere as if it were a pattern in widespread use. "Just search" doesn't quite fit. I've never wondered how best to use a search engine to make something in a way that will be easily searchable later.


Calling it "just search" is like calling a compiler "just string manipulation". Not false, but aggressively missing the point.


No, “just search” is correct. Boosters desperately want it to be something more, but it really is just a tool.


Yes, it is a tool. No, it is not "just search".

Is your CPU running arbitrary code "just search over transistor states"?

Calling LLMs "just search" is the kind of reductive take that sounds clever while explaining nothing. By that logic, your brain is "just electrochemical gradients".


I mean, actually not a bad metaphor, but it does depend on the software you are running as to how much of a 'search' you could say the CPU is doing among its transistor states. If you are running an LLM then the metaphor seems very apt indeed.


What would you add?

To me it's "search" like a missile does "flight". It's got a target and closed-loop guidance, and is mostly fire-and-forget (for search). At that, it excels.

I think the closed loop+great summary is the key to all the magic.
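That closed loop is just propose-observe-correct; here's a toy version, with bisection standing in for an agent iterating against test or compiler feedback:

  def closed_loop_search(target, lo=0, hi=100):
      """Propose a guess, observe the error signal, correct: the same
      shape as missile guidance or an agent iterating on feedback."""
      while lo <= hi:
          guess = (lo + hi) // 2  # propose
          if guess == target:     # observe: on target
              return guess
          if guess < target:      # correct and repeat
              lo = guess + 1
          else:
              hi = guess - 1
      return None

  print(closed_loop_search(42))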


Which is kind of funny because my standard quip is that AI research, beginning in the 1950s/1960s, and indeed much of late 20th century computer tech especially along the Boston/SV axis, was funded by the government so that "the missile could know where it is". The DoD wanted smarter ICBMs that could autonomously identify and steer toward enemy targets, and smarter defense networks that could discern a genuine missile strike from, say, 99 red balloons going by.


It's a prediction algorithm that walks a high-dimensional manifold; in that sense, all application of knowledge is just "search". So yes, you're fundamentally correct, but still fundamentally wrong, since you think this foundational truth is the beginning and end of what LLMs do, and thus your mental model does not adequately describe what these tools are capable of.


Me? My mental model? I gave an analogy for Claude, not an explanation of LLMs.

But you know what? I was mentally thinking of both deep think / research and Claude code, both of which are literally closed loop. I see this is slightly off topic b/c others are talking about the LLM only.


Sorry, I should have said "analogy" and not "mental model", that was presumptuous. Maybe I also should have replied to the GP comment instead.

Anyway, since we're here, I personally think giving LLMs agency helps unlock this latent knowledge, as it provides the agent more mobility when walking the manifold. It has a better chance at avoiding or leaving local minima/maxima, among other things. So I don't know if agentic loops are entirely off-topic when discussing the latent power of LLMs.


i don't disagree, but i also don't think that's an exciting result. every problem can be described as a search for the right SOP, followed by execution of that SOP.

an LLM to do the search, and an agent to execute the instructions, can do everything under the sun


I don't mean search in the reductionist way, but rather that it's much better at translating, finding, and mapping concepts when everything is provided vs creating from scratch. If it could truly think it would be able to bootstrap creations from basic principles like we do, but it really can't. Doesn't mean it's not a great, powerful tool.


> If it could truly think it would be able to bootstrap creations from basic principles like we do, but it really can't.

alphazero?


I just said LLMs


You are right that LLMs and AlphaZero are different models, but given that AlphaZero demonstrated the ability to bootstrap creations, can we easily rule out that LLMs also have this ability?


This doesn’t make sense. They are fundamentally different things, so an observation made about AlphaZero does not help you learn anything about LLMs.


I am not sure; self-play with LLM-generated synthetic data is becoming a trendy topic in LLM research.


  > Once you understand it's just search, you can get really good results.
I think this is understating the issue, ignoring context. It reminds me of how easy people claim searching is with search engines. But there's so many variables that can make results change dramatically. Just like Google search, two people can type in the exact same query and get very different results. But probably the bigger difference is in what people are searching for.

What's problematic with these types of claims is that they just come off as calling anyone who thinks differently dumb. It's as disconnected as saying "It's intuitive" in one breath and "You're holding it wrong" in another. It's a bad mindset to be in as an engineer, because someone presents a problem and instead of trying to address it you dismiss it. If someone is holding it wrong, it probably isn't intuitive[0]. Even if they can't explain the problem correctly, they are telling you a problem exists[1]. That's like 80% of the job of an engineer: figuring out what the actual problem is.

As maybe an illustrative example people joke that a lot of programming is "copy pasting from stack overflow". We all know the memes. There's definitely times where I've found this to be a close approximation to writing an acceptable program. But there's many other times where I've found that to be far from possible. There's definitely a strong correlation to what type of programming I'm doing, as in what kind of program I'm writing. Honestly, I find this categorical distinction not being discussed enough with things like LLMs. Yet, we should expect there to be a major difference. Frankly, there are just different amounts of information on different topics. Just like how LLMs seem to be better with more common languages like Python than less common languages (and also worse at just more complicated languages like C or Rust).

[0] You cannot make something that's intuitive to all people. But you can make it intuitive for most people. We're going to ignore the former case because the size should be very small. If 10% of your users are "holding it wrong" then the answer is not "10% of your users are absolute morons" it is "your product is not as intuitive as you think." If 0.1% of your users are "holding it wrong" then well... they might be absolute morons.

[1] I think I'm not alone in being frustrated with the LLM discourse as it often feels like people trying to gaslight me into believing the problems I experience do not exist. Why is it so surprising that people have vastly differing experiences? *How can we even go about solving problems if we're unwilling to acknowledge their existence?*


AED is pegged to the USD


most of them are non-binding letters of intent, i don't think it's as trite as you put it


The government bailout part doesn't even kick in until they sink enough to need trillions of annual revenue.

Skepticism is easy.


Try it and let us know


That only applies to industries that are growing. And most of the “gambling” happens in options markets, which are perfectly zero-sum before fees.


You moved the goalposts; the thread started with stocks. Derivatives are completely different.


Shrinking industries can still generate profit, and can still give out dividends.


Similar to the plot of The Library of Babel (https://en.wikipedia.org/wiki/The_Library_of_Babel)


And the actual website: https://libraryofbabel.info/

As well as their slideshow: https://babelia.libraryofbabel.info/


Half the features are not available


As if the government could do better capital allocation in the economy than Buffett


Why couldn’t they, for the citizens of the USA? Buffett likely would not allocate it to optimize the outcome for US citizens.


> As if the government could do better capital allocation in the economy than Buffett

That seems to be what Buffett is suggesting.


temu and shein are cooked


Less cheap junk flowing into the US sounds like a win to me. Maybe clothes should be more expensive and better quality.


I already buy a lot of clothes at least partially made in OECD states. Even with that “partially” doing a lot of work and my avoiding paying extra for “fancy” brand names… I don’t think Americans earning closer to median household income are gonna be happy about paying the kind of prices I pay.


At least we can rest assured that they will be more expensive.


Now I need to pay 10x for a USB cable, a charger etc.


Monoprice has low cost cables.


Monoprice cables are made in China. They'll also cost 10x.


I'm not certain, but I suspect this'll work out better for Monoprice than it does for individuals.

I _think_ Monoprice will pay the couple of hundred dollars "per package" fees for the pallet or container full of cables they bring in, amortising that cost over thousands of items. They'll still get charged the 145% tariff though, but they'll probably cost something like 3x rather than 10x. (Until they work out that they no longer compete with individuals buying from AliExpress/Temu/Shein, or probably even with side hustle Fulfilment by Amazon micro-importers).
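Rough arithmetic behind that 3x-vs-10x guess (the $3 unit cost and flat $200 fee are assumptions from this thread, not actual fee schedules):

  def landed_unit_cost(unit_cost, tariff_rate, package_fee, units_per_package):
      return unit_cost * (1 + tariff_rate) + package_fee / units_per_package

  # Individual importing one $3 cable, flat ~$200 per-package fee:
  print(landed_unit_cost(3.00, 1.45, 200, 1))       # ~$207: far worse than 10x
  # Importer bringing in 10,000 cables on one pallet, same fee amortised:
  print(landed_unit_cost(3.00, 1.45, 200, 10_000))  # ~$7.37: roughly 2-3x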


...for now.


Also less affordable electronics


Tangent. I got into building hi-fi tube amplifiers some years back. Part of it was a kind of nostalgia for the days of Heathkit, whose sunsetting years I am only just old enough to remember.

It was a fun few years deep-diving into the various amplifier topologies, buying NOS vacuum tubes on eBay, looking through electronics flea markets for parts. I made several amps, tried different tubes, topologies.... Eventually I settled on a small stereo amp and designed a PCB for it, created a small kit even.

Using a drill press in the garage, a table saw to cut aluminum sheet stock down, even learning to powder-coat parts in a toaster-oven I picked up from Walmart, I made increasingly nicer looking amps. With two large output transformers and an even larger power transformer, they were fairly heavy beasts.

Nonetheless, though I built them a decade or more ago, every one of the amplifiers is still in use today. The music I am listening to at this moment is coming from one. Another is down in my "lab". I have given several away to friends and co-workers in the past.

I guess the reason for the tangent was to say that I did indeed find that when you have (or make) a thing of real quality it can last … perhaps a lifetime?

And thinking again a little nostalgically, I like that too about electronics just up to the post-modern era: a new electronics purchase might have cost you a paycheck or two, but I think you got more mileage out of that device.

EDIT: come to think of it, the heavy iron transformers are from the U.S., the tubes NOS from U.S. WWII bombers. I didn't build them of course with tariffs in mind, but surprisingly they are not so cost-dependent on overseas suppliers.

And here's a photo of the finished amp (from when I once considered selling the kits): https://imgur.com/PBKOQMk


Thanks for sharing, that’s really cool and something I wish I had the time/skill/patience for. The amp looks great and love the name - might have to dust off the tools for a “Now and Then” model.


Even more important is creating a more closed-loop system with less waste. Some Android phones are e-waste before they hit a year or two.


There is also an argument that we should have fewer electronics too…


I don't think so. There is an argument for -individuals- buying too much electronics, and they should revisit that, but it's not anyone's business other than those people's. Tanking the economy and destroying lives "just becuz consumers" is a really really bad way to run the country. Just giving up and going back to horse and buggy while China eats your lunch is not a good thing, because soon you will be making "cheap trinkets" for them


Too bad that's not the argument being made by the people pushing the current policies; they're instead pushing the idea that this will magically lead to us having more and better things.


It is very interesting: if there were a genuine attempt at degrowth, this would potentially be a good start.


It sure is funny how the party that has spent 2 decades screaming at anyone they could that "climate change isn't real" and "the people saying we should output less carbon are REALLY just degrowth cultists and we can consume everything forever with no issues" are outright, willfully, destroying the American economy in such a way that average americans WILL have to consume less


the thing is that life is about freedom of choice. I didn't buy the cheap junk, I'm fairly normal. I might buy the occasional hobby board off AliExpress a couple times a year. Choice is good, not bad.


Maybe the law should impose quality and environmental standards instead of tariffs. But no, that would hurt domestic businesses.


The market does what people want. Fast fashion is exactly what people want because fashion has always been changing fast and about the "new thing" and people like to be able to buy new stuff all the time.


Here you go: enjoy your $120 American jeans: https://originusa.com/collections/jeans (Oh look, it's on sale, $20 off... yay :/ )

The sale discount is the entire amount I was able to buy my non-American jeans for. :/

I guess I can make do with one pair for the week... or wash them each day (oh wait, that's gotten more expensive as well).


Proper jeans aren't really washed more than once a month, if at all, and certainly not every day. They also last for years, so buying 1-3 pairs a year means your wardrobe will have plenty.

Maybe local production will get cheaper once more people start keeping their money in local communities. Sending it to China is just awful for your country/region and kills local businesses.

Disclaimer: from Europe so I don't care about USA at all. It's still having the same effect here


> Maybe local production will get cheaper once more people start keeping their money in local communities.

Is the thinking here that increased scale would allow production to get cheaper? How would this account for the fact that production was scaled here, but was not cost-competitive when it was operating at scale? What's different now?


Let's say currently it costs $15 per unit to make 1m units of X in China and $50 to make 10k of them in the USA. US production could be scaled to make 50k of them for $40 each, and 100k for $30. 1m could cost like $25. There are people who are ready to pay more for local products, so the current production volume makes sense, but the majority of people will go for the cheaper option when given the chance, so it doesn't make sense to scale up the production currently. If the import cost of the item goes above the local production cost and there is still enough demand for the item, it can make sense to scale up that production, even if you cannot compete with the China-made things internationally.

Of course that assumes your own costs (like raw materials) do not increase at least on the same scale and that you can rely on the situation being long-term thing (i.e. will last years rather than weeks) as costs include your CapEx on things like new machines.
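Plugging this comment's own illustrative numbers in (all hypothetical, as above), you can see where the viability threshold moves:

  us_unit_cost = {10_000: 50, 50_000: 40, 100_000: 30, 1_000_000: 25}
  china_unit_cost = 15

  for tariff in (0.0, 0.5, 1.0, 1.45):
      imported = china_unit_cost * (1 + tariff)
      viable = [v for v, c in us_unit_cost.items() if c <= imported]
      print(f"tariff {tariff:.0%}: imported ${imported:.2f}, "
            f"viable US volumes: {viable or 'none'}")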


> say currently it costs $15 to make thing 1m units of X in China and $50 to make 10k of them in the USA. USA could be scaled to make 50k of them for $40 and 100k for $30. 1m could cost like $25.

So here we are assuming that we could get a 50% reduction in cost by scaling to 1m units of a thing. The problem with this logic is that many product categories currently made overseas were produced domestically at scale until relatively recently.

This assumption also appears to imply that the goods in question either have a very low labor input or are produced using automation that is not available to Chinese manufacturers.

Reframing my initial question, what advantages would a US manufacturer have today that they didn't have in e.g. 1990 that would allow them to manufacture for only 66% more than the same manufacturing in China?


> Proper jeans aren't really washed more than once a month or at all.

I've had a lot of people say this to me. I've known their policy on washing jeans without them ever having to tell me, though.

People become noseblind to their own stench. Unfortunately, it's not easy to ignore the stench of someone else wearing pants with a month of sweat and fecal bacteria soaked into them. I know lots of people also only wash their coats once a year, and trust me, being more resistant to stinking isn't the same as being completely immune to stinking.

Wash your clothes. The idea of not washing them is a meme and it's incredible how many people have fallen for it lately.


I don't know about you, but it feels icky to me to wear dirty pants. I could probably get by wearing them two days in a row. If I have one pair of jeans, I'm washing them at least every other day. If I have 7 pairs of jeans, I'm washing at least half of them once a week. I'd rather have 7 pairs of jeans. They last long enough for me (a few years). Maybe it's just because I don't have to take them to the laverie (as the French would say), but clean clothes just feel better.


> kills local businesses

The business-to-consumer businesses, which take the largest markup, employ the most people and pay the highest wages in the supply chain, have thrived under this system.

It's not the customers that demand products be made in China, it's these "local" businesses.


If the de minimis rule is in fact suspended on May 2nd, yes. Hasn't happened yet, so who knows.

Amazon and other US selling platforms are also in trouble, given how much of their income is from drop shippers.


Well, given how many of their products come from China, right? How many of the products on sale on Amazon are partly or entirely produced in China? Those will have 125% (145%? How much is it today?) import duty on them, unless they're electronics.


too little too late for Forever 21 and its 350 locations, which once employed 43,000 people at its peak: https://www.cnbc.com/2025/03/17/forever-21-files-for-second-...

The temu/shein loophole should have been closed ages ago.


I'm surprised the Chinese sellers are able to compete for fast fashion. Clothes are the one thing I don't really buy online because getting sizing right is already hard even when you're not dealing with Temu-style "well actually we said there's a +- 25 tolerance in the fine print and this is within tolerance" bullshit.

AliExpress is indispensable for small technical items. If they're available locally at all, shipping included they'd often cost 10-20x as much.


No idea about Shein, but I was shocked how easy/good Temu return policy was. My wife bought some rugs and some prints and they were not as described/pictured.

Took a minute in the app to generate a QR code, then I took it to the post shop the same day and they refunded within 3 days.

I wouldn't (personally) buy clothes to wear normally from them, but something like beach shoes or a poncho for a festival I'd maybe get there.


TIL Temu has a return policy. I thought the return policy was "throw it in the trash and be out the money (albeit 1/10th of what you would have paid in a regular store)".


It's not fast-fashion they are competing with — they invented ultra-fast-fashion. Their platforms (Shein and Temu) are fully geared towards allowing manufacturers to jump on board the latest hypes and trends and have a saleable product on there within a week or so, to sell for a few weeks until it is no longer trending.

You want a 'My tariffs did that' T-shirt? Temu.

https://www.temu.com/search_result.html?search_key=tariffs%2...

Local store chains can't match that velocity.


People are happy to just try stuff on at home then deal with returns or accept the loss if it doesn't fit or look good.


They tried closing the loophole a month ago. It was such a burden trying to track and collect tariffs on small shipments they gave up.


It is pretty crazy how worker unfriendly US trade policy has been for so long.


They need to get their priorities straight - stop directing trade policy towards tech companies employing 1000's of workers on $250,000 a year and start building factories employing 100's of people on 25c an hour.


> The temu/shein loophole should have been closed ages ago.

Or the US should figure out how to get domestic shipping rates to be as cheap as the rates that Chinese shippers pay to ship to the US.


International shipping from China to the US is subsidized by USPS under the Universal Postal Union rules since China is classified as a developing country. Terminal dues to the US have been increasing over the last 5 years to compensate for this.

https://www.ecomcrew.com/why-china-post-and-usps-are-killing...


It's still crazy to me that we classify the second largest economy as a developing country. Especially when said "developing" country is trying to flex its muscles on the world stage and attack its neighbors.

China can either remain a developing country subject to rules imposed by developed countries. Or it can join the developed countries and shape those rules. It can't do both.


That would probably require that they receive federal funding to subsidize postage rates, which is unfortunately not going to happen (especially not under DeJoy).


Yes, perhaps the government should subsidize (or allow states to subsidize) the fixed costs of mail, just as we subsidize the fixed costs of roads, so that our business can be competitive. Is this what China does domestically, or what the US does when we charge for international shipping? Point is, we should at least list all the ways that we can be more competitive, rather than cheering isolationism that makes us all worse off.


He resigned last month.


Ah, yeah, just in time to be replaced by another Trump appointee to similar or worse results.

