> TSMC’s leadership dismissed Altman as a “podcasting bro” and scoffed at his proposed $7 trillion plan to build 36 new chip manufacturing plants and AI data centers.
I thought it was ridiculous when I read it. I'm glad the fabs think he's crazy too. If he wants this then he can give them the money up front. But of course he doesn't have it.
After the dot com collapse my company's fabs were running at 50% capacity for a few years and losing money. In 2014 IBM paid Global Foundries $1.5 billion to take the fabs away. They didn't sell the fabs, they paid someone to take them away. The people who run TSMC are smart and don't want to invest $20-100 billion in new fabs that come online in 3-5 years just as the AI bubble bursts and demand collapses.
I started working during the dot com boom. I was getting 3 phone calls a week from recruiters on my work telephone number. Then I watched the bubble burst starting in mid-2000. In 2001 zero recruiters called me. I hated my job after the reorg and it took me 10 months to find a new one.
I know a lot of people in the 45+ age range including many working on AI accelerators. We all think this is a bubble. The AI companies are not profitable right now for the prices they charge. There are a bunch of articles on this. If they raise prices too quickly to become profitable then demand will collapse. Eventually investors will want a return on their investment. I made a joke that we haven't reached the Pets.com phase of the bubble yet.
I started off using twm / olwm / vtwm in 1991. Then FVWM and Afterstep / WindowMaker. I've been using XFCE since around 2007. As long as it functions similarly I'll be happy.
Back in 1991 the older students showed me how to telnet to port 25 and make my "From:" email address be anything. It was funny when the person sitting next to me received an email from satan@hell.gov
My very political grandpa definitely received a few emails from the "president", along with a few emails from a "government agency" following up on the earlier ones.
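For anyone who never saw it done, the port-25 trick was literally just typing the protocol by hand. The same thing as a little C program would look roughly like this (hostnames and addresses are invented, and it cheats by not waiting for the server's reply after each command, which a real session should do):

```c
/* Rough sketch of the 1991-era prank: connect to an SMTP server on port 25
 * and claim any "From:" you like. Hostnames/addresses here are made up,
 * and a proper client would read the server's response after each command. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("mailhost.example.edu", "25", &hints, &res) != 0)
        return 1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    const char *session =
        "HELO prankbox.example.edu\r\n"
        "MAIL FROM:<satan@hell.gov>\r\n"       /* envelope sender: anything */
        "RCPT TO:<victim@example.edu>\r\n"
        "DATA\r\n"
        "From: satan@hell.gov\r\n"             /* the header the victim sees */
        "Subject: about your soul\r\n"
        "\r\n"
        "See you soon.\r\n"
        ".\r\n"
        "QUIT\r\n";
    write(fd, session, strlen(session));

    char buf[512];                             /* print whatever comes back */
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fd);
    return 0;
}
```

Modern mail servers check the sending host and reject this, which is exactly why it only worked back then.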
15-30 years ago I managed a lot of commercial chip design EDA software that ran on Solaris and Linux. We had wrapper shell scripts for so many programs; the wrappers used LD_LIBRARY_PATH and LD_PRELOAD to point at the specific versions of the various libraries each program needed. I used "ldd", which prints out the shared libraries a program uses, to figure out what those were.
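A minimal sketch of that kind of wrapper, written here as a tiny C launcher instead of a shell script (the tool name, versions, and paths are all hypothetical):

```c
/* Hypothetical wrapper for an EDA tool: pin LD_LIBRARY_PATH (and optionally
 * LD_PRELOAD) to the library versions this particular tool was qualified
 * against, then exec the real binary. Paths and tool name are made up. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
    /* Libraries this tool needs, originally found by running ldd on it. */
    setenv("LD_LIBRARY_PATH",
           "/tools/vendor/synthtool-2004.09/lib:/tools/compat/libstdc++-old",
           1 /* overwrite whatever the user had */);
    /* Some tools also needed a shim forced in ahead of everything else. */
    setenv("LD_PRELOAD", "/tools/compat/libcompat_shim.so", 1);

    /* Hand over to the real executable with the user's arguments intact. */
    argv[0] = "/tools/vendor/synthtool-2004.09/bin/synthtool";
    execv(argv[0], argv);
    perror("execv");   /* only reached if exec failed */
    return 127;
}
```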
I've probably worked on 70 chips over the last 30 years.
Tape out time always sucks. I'm in physical design, which means fixing all the timing violations, DRC violations, and LVS errors, and dealing with late design changes.
Working 80 to 100 hours a week for a month really sucks and makes you wonder why you didn't go into software.
When you combine it with a fixed shuttle date like in the article it is even worse, because if you miss that date it might be another 1-2 months until the next shuttle, instead of just a day-for-day slip when you control all the masks.
Don’t worry, we have those 80 hour weeks in software too. I can think of a few examples. For instance, mobile App Store review time used to be kind of like that: you submitted your app, waited a few business days, and prayed there wasn’t an obscure rejection that led to an appeal, which could take even longer. Very stressful when you are queuing up a launch and press releases for a certain date. You had to make sure you were done a few weeks in advance to account for everything.
I don’t work much on apps anymore but I hear it’s somewhat better now.
Another big area is compliance; those processes can take forever.
Can I ask how often you guys end up doing gate-level netlist ECOs, instead of re-running synthesis when you're close to a deadline? Also, post-fabrication, if a mistake is found, have you been able to fix it just with a new M1 or M2 mask, instead of paying for a full new mask set?
If the change is under 1000 logic cells and no new flip flops then we do it as an ECO. If there are tons of new flip flops we resynthesize and start over.
Lots of chips have metal spins to fix errors. The blank areas of the chips are filled with filler cells but most of them are special "ECOFILLER" cells that are basically generic pairs of N/P transistors like a gate array. These can then be turned into any kind of cell just by using metal. They are a little slower but work fine.
I've worked at one huge company where they planned 3 full base layer mask sets and 1-2 metal spins for each full base layer set. This was when doing a chip on a brand-new process node, where you couldn't always trust the models the fab gave you, so you wanted more post-silicon characterization to recalibrate the models.
> The blank areas of the chips are filled with filler cells but most of them are special "ECOFILLER" cells that are basically generic pairs of N/P transistors like a gate array. These can then be turned into any kind of cell just by using metal. They are a little slower but work fine.
The other alternative is that you sprinkle spare gates around the chip. If the chip is 10mm x 10mm then every 100 microns you put a group of cells that just have their inputs tied to 0 and the outputs going nowhere. You put in a good mix of flip flops and combinational logic cells. Then when you need to do a metal ECO the RTL team says "We need 2 AND gates, 1 OR gate, 1 mux, and they are connected to these 5 cells." So you highlight those 5 cells, find the closest spare logic group, and use those.
The ECOFILLER gate array style cells are easier to use.
Then during the DRC check process in Calibre we run a check to make sure that the base layers stayed the same and only the metal layers changed. Since we have 18 metal layers in a leading edge node, hopefully only metal layers 1 to 3 changed for the metal ECO, so you only have to pay for new versions of those.
A full mask set in 3nm can be over $30 million. Just a new set of metal masks is around $20 million.
A full mask run takes about 4 months in the fab. Normally you tell the fab to keep a few wafers after the base layers and don't manufacture the metal layers. Then when you do a metal respin they get those out of storage and save a month.
Blocks are never 100% full. If they were, you would never be able to route the design. High utilization may be 70%, but I've worked on IO-heavy blocks that were only 25% utilized. For various manufacturing and yield purposes the empty spaces need filler cells.
Sometimes we put in decoupling cap cells. But the ecofiller cells go in everywhere else.
About 25 years ago we were using spare gates that we had preplaced on the die.
About 5 years ago we started using spare gates preplaced and ALSO the ecofiller cells. The reason I was told was to save money, because ECOs that use the ecofiller cells require some other mask layer to change as well. I think that was in the $500K range but it's still money.
In general I hate doing ECOs with the preplaced spare gates, as it is manual and time-consuming to find the best cells to use.
Wow, awesome, thanks for the details! Once or twice I've added extra gates as fillers in some 28nm mixed-signal designs for metal-layer rework, but I had no idea that larger digital teams also had the practice of adding these types of individual transistor arrays. Super clever!
Bots have ruined reddit but that is what the owners wanted.
The API protest in 2023 took away tools from moderators. I noticed increased bot activity after that.
The IPO in 2024 means that they need to increase revenue to justify the stock price. So they allow even more bots to increase traffic which drives up ad revenue. I think they purposely make the search engine bad to encourage people to make more posts which increases page views and ad revenue. If it was easy to find an answer then they would get less money.
At this point I think reddit themselves are creating the bots. The posts and questions are so repetitive. I've unsubscribed from a bunch of subs because of this.
It's been really sad to see reddit go like this because it was pretty much the last bastion of the human internet. I hated reddit back in the day but later got into it for that reason. It's why all our web searches turned into "cake recipe reddit." But boy did they throw it in the garbage fast.
One of their new features is you can read AI generated questions with AI generated answers. What could the purpose of that possibly be?
We still have the old posts... for the most part (a lot of answers were purged during the protest), but what's left is also slipping away fast for various reasons. Maybe I'll try to get back into the Gemini protocol or something.
I see a retreat to the boutique internet. I recently went back to a gaming-focused website, founded in the late 90s, after a decade. No bots there, as most people have a reputation of some kind
I really want to see people who ruin functional services made into pariahs
I don't care how aggressive this sounds; name and shame.
Huffman should never be allowed to work in the industry again after what he and others did to Reddit (as you say, last bastion of the internet)
Zuckerberg should never be allowed after trapping people in his service and then selectively hiding posts (just for starters. He's never been a particularly nice guy)
Youtube and also Google - because I suspect they might share a censorship architecture... oh, boy. (But we have to remove + from searches! Our social network is called Google+! What do you mean "ruining the internet"?)
Given the timing, it has definitely been done to obscure bot activity. But the side effect of denying the usual suspects the opportunity to comb through ten years of your comments to find a wrongthink they can use to dismiss everything you've just said, regardless of how irrelevant it is, is unironically a good thing. I've seen many instances of their impotent rage about it since it's been implemented, and each time it brings a smile to my face.
The wrongthink issue was always secondary, and generally easy to avoid by not mixing certain topics with your account (don't comment on political threads with your furry porn gooner account, etc). At a certain point, the person calling out a mostly benign profile is the one who will look ridiculous, and if not, the sub is probably not worth participating in anyway.
But recently it seems everything is more overrun than usual with bot activity, and half of the accounts are hidden which isn't helping matters. Utterly useless, and other platforms don't seem any better in this regard.
Yes registering fake views is fraud against ad networks. Ad networks love it though because they need those fake clicks to defraud advertisers in turn.
Paying to have ads viewed by bots is just paying to have electricity and compute resources burned for no reason. Eventually the wrong person will find out about this and I think that's why Google's been acting like there's no tomorrow.
I doubt it's true though. Everyone has something they can track besides total ad views. A reddit bot has no reason to click ads and do things on the destination website. It's there to make posts.
> So they allow even more bots to increase traffic which drives up ad revenue
When are people who buy ads going to realize that the majority of their online ad spend is going towards bots rather than human eyeballs who will actually buy their product? I'm very surprised there hasn't been a massive lawsuit against Google, Facebook, Reddit, etc. for misleading and essentially scamming ad buyers.
Is this really true though? Don't they have ways of tracking the returns on advertising investment? I would have thought that after a certain amount of time these ad buys would show themselves as worthless if they actually were.
No, it's not really true. Media companies have a whole host of KPIs and tracking methods to evaluate the success/failure of their campaigns/single ads; here's a small summary of some of the KPIs and methods: https://www.themediaant.com/blog/methods-of-measuring-advert...
Steve Huffman is an awful CEO. With that being said, I've always been curious how the rest of the industry (for example, the web-wide practice of autoplaying videos) was built to catch up with Facebook's fraudulent metrics. Their IPO was possibly fraud (and Zuckerberg is certainly known to lie about things), and we know they lied about their own video metrics, to the point that it's suspected CollegeHumor shut down because of it.
The biggest change reddit made was ignoring subscriptions and just showing anything the algorithm thinks you will like, resulting in complete no-name subreddits showing up on your front page. It means moderators no longer control content for quality, which is both a good and a bad thing, but it also means more garbage makes it to your front page.
I can't remember the last time I was on the Reddit front page and I use the site pretty much daily. I only look at specific subreddit pages (barely a fraction of what I'm subscribed to).
These are some pretty niche communities with only a few dozen comments per day at most. If Reddit becomes inhospitable to them then I'll abandon the site entirely.
This is my current Reddit use case. I unsubscribed from everything other than a dozen or so niche communities. I’ve turned off all outside recommendations so my homepage is just that content (though there is still a feed algorithm there). It’s quick enough to sign in every day or two, view almost all the content, and move on.
> why would you look at the "front page" if you only wanted to see things you subscribed to?
"Latest" ignores score and only sorts by submission time, which means you see a lot of junk if you follow any large subreddits.
The default home-page algorithm used to sort by a composite of score, recency, and a modifier for subreddit size, so that posts from smaller subreddits don't get drowned out. It worked pretty well, and users could manage what showed up by following/unfollowing subreddits.
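A toy version of that kind of composite, just to show the shape (this is not Reddit's actual formula; the constants and the small-subreddit boost here are made up for illustration):

```c
#include <math.h>

/* Toy ranking score in the spirit described above: net votes, decayed by
 * post age, with a boost for smaller subreddits so they aren't drowned out. */
double front_page_score(double net_votes, double age_hours,
                        double subreddit_subscribers) {
    double decay = pow(age_hours + 2.0, 1.5);           /* older posts sink */
    double small_sub_boost = 1.0 +
        1.0 / log10(subreddit_subscribers + 10.0);      /* smaller subs rise */
    return (net_votes / decay) * small_sub_boost;
}
```

Follow or unfollow a subreddit and its posts simply enter or leave the candidate set; the score itself never needs to know what "you will like."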
At the moment I am on a personal finance kick. Once in a while I find myself in the bogleheads subreddit. If you don’t know, bogleheads have a cult-like worship of the founder of Vanguard, whose advice, shockingly, is to buy index funds and never sell.
Most of it is people arguing about VOO vs VTI vs VT. (lol) But people come in with their crazy scenarios, which are all too varied to be from a bot, although the answers could easily be given by one!
There were some dedicated submissions about ad fraud on HN. In the long term companies tend to reach an equilibrium where both reducing and increasing the ad budget decrease profit. It is a classic prisoner's dilemma: giving up on ads with poor ROI would be an obvious decision if it weren't for competitors, who will benefit both from some actual conversions and from lower bids for domain-specific keywords.
The alternative - not buying ads - is worse though. No one knows about you, and you sell nothing. So it ends up being seen as a cost of doing business that is passed on to paying customers.
I'm really starting to wonder how much of the "ground level" inflation is actually caused by "passing on" the cost of anti-social behaviors to paying customers, as opposed to monetary policy shenanigans.
They did not have the original Unix vision, and it is a lot easier to design an interface as a programming interface than to shoehorn it into a filesystem interface.
I think having a filesystem interface is pretty great, and Plan 9 showed it could be done. But having to describe all your I/O in the [database_key, open(), close(), read(), write(), seek()] interface can be tricky and limiting for the developer. It is pretty great for the end user, however. Having a single API for all I/O is a superpower for adaptive access patterns.
I think the thing that bothers me the most about the BSD socket interface is how close it is to a fs interface. connect()/bind() instead of open(), recv()/send() instead of read()/write(), but it still uses file descriptors so that stuff tends to work the same. We almost had it.
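Roughly the two shapes side by side, as a sketch with error handling omitted: the socket calls line up almost one-to-one with the file calls, and once you have the descriptor the same read()/write() work on either.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* File flavor: name -> open() -> read()/write() on a descriptor. */
int open_log(void) {
    return open("/var/log/app.log", O_WRONLY | O_APPEND);
}

/* Socket flavor: the "open" step is socket()+connect(), but what you get
 * back is still a file descriptor, so read()/write() work from then on. */
int open_conn(const char *ip, int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);
    connect(fd, (struct sockaddr *)&sa, sizeof sa);
    return fd;
}
```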
As much as I like BSD, and as great an achievement as the socket interface was, I still think this was their big failure.
> Plan 9 does not have specialised system calls or ioctls for accessing the networking stack or networking hardware. Instead, the /net file system is used. Network connections are controlled by reading and writing control messages to control files. Sub-directories such as /net/tcp and /net/udp are used as an interface to their respective protocols.
> Combining the design concepts
> Though interesting on their own, the design concepts of Plan 9 were supposed to be most useful when combined. For example, to implement a network address translation (NAT) server, a union directory can be created, overlaying the router's /net directory tree with its own /net. Similarly, a virtual private network (VPN) can be implemented by overlaying in a union directory a /net hierarchy from a remote gateway, using secured 9P over the public Internet. A union directory with the /net hierarchy and filters can be used to sandbox an untrusted application or to implement a firewall.[43] In the same manner, a distributed computing network can be composed with a union directory of /proc hierarchies from remote hosts, which allows interacting with them as if they are local.
> When used together, these features allow for assembling a complex distributed computing environment by reusing the existing hierarchical name system
I remember first setting up NAT or IP masquerading around 1998. It seemed like an ugly hack and some custom protocols did not work.
I use a bunch of VPNs now and it still seems like a hack.
The Plan 9 way just seems very clean although you now have to secure the server more strongly because you are exporting filesystems from it and others are mounting it.
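For anyone who hasn't seen it, the /net convention looks roughly like this from C. This is a sketch of the documented clone/ctl/data pattern, written with POSIX-style headers rather than literal Plan 9 code, and with error handling omitted:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch of Plan 9's /net convention: open the clone file, read back the
 * connection number, write a plain-text "connect" control message, then
 * talk through the per-connection data file. */
int dial_tcp(const char *addr_bang_port) {       /* e.g. "10.0.0.1!80" */
    char num[16], ctlmsg[64], datapath[64];

    int ctl = open("/net/tcp/clone", O_RDWR);    /* allocates a new conn dir */
    int n = read(ctl, num, sizeof num - 1);      /* e.g. "4" -> /net/tcp/4/ */
    num[n > 0 ? n : 0] = '\0';

    snprintf(ctlmsg, sizeof ctlmsg, "connect %s", addr_bang_port);
    write(ctl, ctlmsg, strlen(ctlmsg));          /* control message, plain text */

    snprintf(datapath, sizeof datapath, "/net/tcp/%s/data", num);
    return open(datapath, O_RDWR);               /* read()/write() the stream */
    /* (the ctl fd is left open here so the connection stays up; a real
       program would keep it around and close both when finished) */
}
```

Since it's all just files, binding a remote machine's /net over your own is what gives you the VPN/NAT tricks from the quote above.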
> The Plan 9 way just seems very clean although you now have to secure the server more strongly because you are exporting filesystems from it and others are mounting it.
With that in mind I wish (the standard Unix gripe!) 9P had a more complex permissions model... 9P's flexibility and per-process namespaces get you a long way, but they're not a natural way to express permissions.
> The Plan 9 way just seems very clean although you now have to secure the server more strongly because you are exporting filesystems from it and others are mounting it.
Aye, this was my first thought too. I seem to recall older Windows doing something like the same thing -- e.g. internet controls tied to the same system as the files -- and that's how we got the 90s-2000s malware 'asplosion.
Clean doesn't mean easy to use. I've worked with a system before that had a very clean, elegant design (message-passing/mailboxes), easy to implement, easy to apply security measures to, small, efficient, everything you could ask for, and pretty much the first thing anyone who used it did was write a set of wrappers for it to make it look and feel more natural.
Where the elegance starts to fade for me is when you see all the ad hoc syntaxes for specifying what to connect to and what to mount. I have no love for tcp!10.0.0.1!80 or #c or #I. I want to move away from parsing strings in trusted code, especially when that code is C.
I also have no love for "read a magic file to have a new connection added to your process".
9P is neat but utterly unprepared for modern day use where caching is crucial for performance.
> The best apis are those that are hated by the developer and loved by the end users.
No, just those loved by the API consumer. Negative emotions on one end don't do anything positive.
In the case of plan9, not everything can be described elegantly in the filesystem paradigm and a lot of things end up having really awkward "ctl" files which you write command strings to that the fileserver needs to parse. It also handicaps performance due to the number of filesystem operation roundtrips you usually end up making.
Maybe if combined with something io_uring-esque, but the complexity of that wouldn't be very plan9-esque.
> a lot of things end up having really awkward "ctl" files which you write command strings to that the fileserver needs to parse.
These are no different in principle than ioctl calls in *ix systems. The `ctl` approach is at least a proper generalization. Being able to use simple read/write primitives for everything else is nonetheless a significant gain.
They are very different. ioctls on a file take an operation and arguments that are often userspace pointers, since the kernel can freely access any process's memory space. ctl files, on the other hand, are merely human-readable strings that get parsed.
Say, imagine an API where you need to provide a 1 KiB string. The plan9 version would have to process the input byte by byte to sort out what the command was, then read the string into a dynamic buffer while unescaping it, until it finds, say, the newline character.
The ioctl would just have an integer for the operation, and if it wanted to it could set the source page up for CoW so it didn't even have to read or copy the data at all.
Then there is the cost of context switches. The traditional ioctl approach is just calling process, kernel, and back. Under plan9 you must switch from the calling process, to the kernel, to the fileserver process, back to the kernel, to the fileserver process again (repeated for however many read calls it takes), to the kernel, and finally back to the calling process to complete the write. And if you need a result, you have to read a file, so you get to repeat the entire process for the read operation!
Under Linux we're upset with the cost of the ioctl approach, and for some APIs plan to let io_uring batch up ioctls - the plan9 approach would be considered unfathomably expensive.
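To make the contrast concrete, here is a sketch of the two call shapes; the device, the request code, and the ctl grammar are all hypothetical:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical request code and argument struct for the ioctl flavor. */
#define MYDEV_SET_RATE 0x4004AB01
struct rate_arg { uint32_t hz; };

void ioctl_style(int fd) {
    /* One syscall; the kernel gets an integer op and a pointer it can
       dereference directly. No text parsing anywhere. */
    struct rate_arg a = { .hz = 48000 };
    ioctl(fd, MYDEV_SET_RATE, &a);
}

void ctl_file_style(int ctlfd) {
    /* One write of a human-readable string; the (possibly user-space)
       file server on the other end has to tokenize and parse it, and any
       result needs a separate read, each hop costing a context switch. */
    char msg[64];
    snprintf(msg, sizeof msg, "rate %u\n", 48000u);
    write(ctlfd, msg, strlen(msg));
}
```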
> The `ctl` approach is at least a proper generalization.
ioctl is already a proper generalization of "call operation on file with arguments", but because it was frowned upon originally it never really got the beauty-treatment it needed to not just be a lot of header file defines.
However, ioctl'ing a magic define is no different than writing a magic string.
It's perfectly possible to provide binary interfaces that don't need byte-wise parsing, or that work more like io_uring, as part of a Plan9 approach; it's just not idiomatic. Providing zero-copy communication of any "source" range of pages across processes is also a facility that could be provided by any plan9-like kernel via segment(3) and segattach(2), though the details would of course be somewhat hardware-dependent and making this "sharing" available across the network might be a bit harder.
Indeed, you can disregard plan9 common practice and adopt the ioctl pattern, but then you just created ioctl under a different name, having gained nothing over it.
You will still have the significant context switching overhead, and you will still need distinct write-then-read phases for any return value. Manual buffer sharing is also notably more cumbersome than having a kernel just look directly at the value, and the neat part of being able to operate these fileservers by hand from a shell is lost.
So while I don't disagree with you on a technical level, taking that approach seems like it misses the point of the plan9 paradigm entirely and converts it to a worse form of the ioctl-based approach that it is seen as a cleaner alternative to.
Being able to do everything in user space looks like it might be a worthwhile gain in some scenarios. You're right that there can be some context switching overhead to deal with, though even that might possibly be mitigated; the rendezvous(2) mechanism (which works in combination with segattach(2) in plan9) is relevant, depending on how exactly it's implemented under the hood.
I must admit that the ability to randomly bind on top of your "drivers" to arbitrarily overwrite functionality, whether to VPN somewhere by binding a target machine's network files, or how rio's windows were merely /dev/draw proxies and you could forward windows by just binding your own /dev/draw on the target, holds a special place in my heart. 9front, if nothing else, is fun to play with. I just don't necessarily consider it the most optimal or most performant design.
(I also have an entirely irrational love for the idea the singular /bin folder with no concept of $PATH, simply having everything you need bound on top... I hate $PATH and the gross profile scripts that go with it with a passion.)
> I also have an entirely irrational love for the idea the singular /bin folder with no concept of $PATH, simply having everything you need bound on top
That's really an easy special case of what's called containerization or namespacing on Linux-like systems. It's just how the system works natively in plan9.
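A rough sketch of that Linux special case (paths are hypothetical, it needs root or a user namespace, and note that a Linux bind mount replaces /bin rather than unioning on top the way Plan 9's bind does):

```c
/* Give this process its own mount namespace, then bind a tool directory
 * over /bin, so only this process and its children see the overlay. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void) {
    if (unshare(CLONE_NEWNS) != 0) { perror("unshare"); return 1; }
    /* Keep our mounts from propagating back into the parent namespace. */
    mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL);
    if (mount("/opt/mytools/bin", "/bin", NULL, MS_BIND, NULL) != 0) {
        perror("bind mount"); return 1;
    }
    execl("/bin/sh", "sh", (char *)NULL);  /* this shell sees the overlaid /bin */
    return 1;
}
```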
can you paper over the *ixian abstraction using transformer based metamodeling language oriented programming and the individual process namespace Lincos style message note passing hierarchy lets the Minsky society of mind idea fall out?
I first learned Unix in high school in 1991. I wanted a Sun workstation so bad but they were over $10,000 in 1991 money. Got a PC in 1994 just to install Linux and I've been happy ever since. Completely skipped over windows 95 and all the stuff after and continue to ignore it.
https://www.macrobusiness.com.au/2021/05/the-great-semicondu...
Here is a long article from last year about Sam Altman.
https://www.nytimes.com/2024/09/25/business/openai-plan-elec...
https://finance.yahoo.com/news/tsmc-rejects-podcasting-bro-s...
https://gf.com/gf-press-release/globalfoundries-acquire-ibms...