WarCraft II was among the first games that I ever played "online" (direct connection via dial-up modem with my friends). I can't overstate how absolutely incredible that felt to me at the time - pure magic.
The other games we were playing at that time were Doom II and various other first-person shooters (Rise of the Triad, Hexen, etc) - which were also pretty incredible. But the WarCraft II experience really took things to the next level with far richer gameplay.
This is different than the July megaquake prophecy, which was indeed dumb. With a strong quake like this there will be aftershocks. Most will be small, but there is a risk (about 5% according to the USGS) of an even stronger quake than the first within the next week or so [1].
I agree the parent will likely be fine, but it can be stressful in the aftermath of a large quake. And if they want to leave the area and have the opportunity to do so calmly and safely, I think that’s justified.
This isn’t something that I personally wish to debate, but I’ll leave a link to the Wikipedia page for the July 2025 prophecy [1] for anyone who may not know what we are talking about.
And also point out that last night’s earthquake in Northern Japan was not a “prophecy”. Just a regular, large earthquake - the kind that occurs pretty frequently in Japan. And I say "large" not just because of the magnitude, but because parts of Aomori experienced 6+ shaking on the shindo scale [2], which is categorized as "brutal" [3].
The July megaquake prophecy scare was dumb because it originated in a work of fiction, not intended to be taken seriously by its author and not based on any scientific evidence. If the "prophecy" had come true, it'd be by luck alone. fwiw, I'd say it didn't come true; the 8.8 magnitude earthquake was near Kamchatka and didn't actually damage Japan, though a tsunami seemed plausible enough that there was a precautionary evacuation.
This "strong quake" is a thing that happened, not a "smart prophecy" [1]. Talk of aftershocks is not a prophecy either; it's a common-sense prediction consistent with observations from many previous earthquakes.
[1] smart prophecy is an oxymoron. A prediction is either based on scientific evidence (not a prophecy) or a (dumb) prophecy.
You are certainly reading something into my question that isn't there. I'm genuinely ignorant. I thought you were saying that predictions of a strong aftershock following an M8.8 were dumb, but the same thing following an M7.6 were smart. Is that not the case?
Again, sorry if this seemed antagonistic or something, I really am just unsure of what you were saying.
A manga book published in 1999 randomly predicted a disaster in March 2011, which seemed to come true with Fukushima. The manga was re-published in 2021 predicting an M8.8 in July 2025, but nothing happened. This is the dumb prophecy part: it was not based on seismology studies, just a shot in the dark to try to seem prophetic again. Countless works of fiction are published every year which predict some future disaster at an arbitrary date. Every once in a while, one of those thousands of random predictions can be interpreted as coming true when something bad happens on that day, which retroactively drives interest in that work of fiction and leads less scientific minds to believe the author has actual future-predicting power beyond the abilities of science.
A relatively major (but not M8.8) quake has now hit in December 2025. It is intelligent to expect there may be aftershocks in the days after a significant earthquake actually happens, and these can sometimes be larger than the initial quake. This is a well-accepted scientific fact borne out by large amounts of data and statistical patterns, not whimsical doomsdayism.
Fukushima's M9.0-9.1 was around a 1-in-1000-year-scale event. The last time Japan saw such a powerful earthquake was in 869 AD. It would be reasonable to expect one of that scale not to happen again for another 1000 years.
The math nazi in me really wants to point out that an event with a 1:1000 annual probability would be expected to be seen (> 50% probability) in about 700 years, not 1000.
Heh, hence why I said 1-in-1000-year, rather than just 1-in-1000. Indeed, a 1:1000 event would happen within 693 years with 50% probability, and a 1:1443 event would happen within 1000 years with 50% probability.
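For anyone who wants to check the arithmetic: assuming independent years with a fixed annual probability p, the chance of at least one event within n years is 1 - (1 - p)^n, so the 50% point falls at n = ln(0.5)/ln(1 - p), roughly 0.693/p for small p. A quick sketch in Python, just plugging in the numbers from this thread:

    import math

    def years_to_50_percent(p):
        # Smallest horizon n (in years) with 1 - (1 - p)^n >= 0.5
        return math.log(0.5) / math.log(1 - p)

    def annual_prob_for_50_percent_in(n_years):
        # Annual probability p such that the event has a 50% chance within n_years
        return 1 - 0.5 ** (1 / n_years)

    print(years_to_50_percent(1 / 1000))            # ~693 years
    print(1 / annual_prob_for_50_percent_in(1000))  # ~1443, i.e. a 1:1443 annual chance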
Great response and very informative - no clue how I totally missed the references and stories about this manga. That’s pretty cool - I’ll have to look it up!
You asked what I would have asked. In a sentence, my understanding is: it was LITERALLY a prophecy, i.e. an unscientific statement out of thin air, that in July there would be an earthquake followed by a larger one. Here, we have reality - an earthquake - ergo the first prong of a megaquake was satisfied, as opposed to prophesied.
Crucial was always a brand that I associated with quality, and I used their memory to upgrade several MacBooks back when it was still possible to upgrade the memory on MacBooks.
That being said, the only SSD I’ve ever had fail on me was from Crucial.
In recent builds I have been using less expensive memory from other companies with varying degrees of brand recognizability, and never had a problem. And the days of being able to easily swap memory modules seem numbered, anyway.
I've long (very, very long) been a storage snob - originally with the IBM Ultrastar drives, and continuing with the Intel SSDs. Even with good backups, a drive failure is often a pain in the ass. Slightly less so with RAID.
IBM really locked me in on the Ultrastar back in the mid '90s. Sure, it has proven itself to be a great product. But some of the first ones I bought, one of the drives arrived failed. I called the vendor I bought it from and they said they wouldn't replace it, I'd have to get a refurb from IBM. So I called IBM, when I told them I had just bought it they said I needed to call the place I bought it from because otherwise I'd get a refurb. I explained I had already called them. "Oh, who did you buy it from?" I told them. "Can you hold on a minute?" ... "Hi, I've got [NAME] on the line from [VENDOR] and they'll be happy to send you a replacement."
My most memorable RAM upgrade was adding 512KB to an Atari ST in 1988. Had to suck the solder out of 16x(16+2) factory flow-soldered through-holes, then solder in the 16 individual RAM chips and their decoupling capacitors. I was a teenager and hadn’t soldered before. I had no one to show me how, so I got a book from the library with pictures.
Was a huge relief that the machine came up successfully. But then it would lock up when it got warm, until I found the dodgy joint.
Was a very stressful afternoon, but a confidence builder!
I bet there are many people whose sole experience inside a computer is popping in some DIMMs. I’ll be kinda sad if those days are also gone. On the other hand, super integrated packages like Apple’s M-series make for really well-performing computers.
And before that, I duct-taped the insanely large 16KB RAM extension (up from 1KB) onto my Sinclair ZX81 - which I had also assembled and soldered from a kit - so it wouldn't reset with the slightest movement :)
That's because the SSD business was little more than a carbon copy of most other non-Samsung, non-SK/Solidigm consumer brands. They've been Phison controllers with some cheap NAND flash and a different coat of paint for generations now - or, in the case of the portable/external ones, that plus a third-party enclosure and IO module that they'd contracted out. In terms of hardware, this sub-business-unit was no more "Micron" than Corsair is (support may be a different story). Enterprise SSDs and consumer ones diverged years ago, and today are about as different from one another as GPUs are from CPUs.
The only real difference between Crucial RAM and Micron's unbuffered RAM was which brand's sticker they put on it, with some binning and QA plus a heatsink on the higher-end enthusiast SKUs. This sub-business-unit was almost entirely redundant to Micron.
> And the days of being able to easily swap memory modules seem numbered, anyway.
I keep seeing people say this in threads across platforms discussing this news, and it baffles me. Why?
All the higher-margin non-consumer markets are moving away from socketed RAM for integrated RAM, for performance and manufacturing cost reasons. It's hard to see what the motivation is for spending some of their limited foundry time on products that are only of interest to lower-margin direct consumers if this keeps up.
> All the higher-margin non-consumer markets are moving away from socketed RAM for integrated RAM
Absolutely, positively, wildly untrue. Just because there is a boom in memory-on-package designs doesn't mean the market is moving away from expandable/socketable memory. The opposite is true. It's supplementing it because we're trying to cram as much RAM as possible into things, not because we're trying to reduce it.
There has never been more demand for RAM. Many of the memory-on-compute/memory-on-package designs are going into systems with socketable RAM. Those systems, by the way, have never had more memory channels/slots available. Intel just cancelled the 8-channel SKUs for their upcoming Xeon parts because their partners pretty much universally told them they'd be buying the 16-channel variants instead, because demand is so high, and that's not unique to Intel. AMD and Ampere are seeing and responding to similar demand by continuing to increase their supported memory channels and memory capacities.
> and manufacturing cost reasons.
This generally increases price, even when using things like LPDDR, especially as the capacity of the packaged RAM goes up (the fact that it can't be replaced makes yield issues a big concern, whereas with socketable RAM they're effectively a non-issue). There are ways it can be used for cost effectiveness, but those applications are generally not "high margin", nor are the cost-sensitive applications deploying a lot of SKUs to cater to the wide variety of demand in type/speed/capacity (e.g. (LP)DDR vs GDDR vs HBM and all the variations therein, not to mention buffered vs unbuffered, load-reduced, computational, storage-class, etc.), because even with the chiplet/modular production of CPUs, that is not a linear scale-up of cost to manufacture (or engineer) as complexity goes up. This isn't like cores on a CPU, where you can just disable some if there's a manufacturing defect; you need to swap memory controllers, scale the quantity of those controllers, and use different kinds of DMA interlinks depending on the RAM type (you can't just swap HBM for DDR and expect everything to work).
For most performance-oriented products, the memory-on-package thing is a new layer of RAM that sits between the cache of the compute unit (CPU/DPU/whatever) and traditional socketable DRAM, not a replacement for it. There are very real thermal and packaging limits, though. For example, how are you going to install 2TB of DDR directly onto a CPU package? How are you going to cool it when the available surface area is a fraction of what it is with socketable RAM, and you're theoretically hitting it even harder by leveraging the lower latency, while placing it so close to the compute that's using it that, even if the RAM is idle, it's still subject to far more heat soak than equivalent socketable RAM?
This is further substantiated by the demand for things like CXL, which lets you expand RAM further by installing it on the PCIe bus (and thus, through things like RDMA/RoCE, across the network fabric) like you would any other PCIe add-in card. That is leading to an explosion in something called Storage Class Memory (SCM), so that we can deploy more socketable/expandable/replaceable RAM to systems/clusters/fabrics than ever before.
I could go on and on, actually, but I'm not trying to attack you, and this post is long enough. If interested, I could continue to expand on this. But the point is, memory-on-package designs aren't replacing socketable memory in high-margin markets; they're supplementing it, as a direct result of demand for RAM being so astronomical and there being multiple hard limits on how much RAM you can cram into a single package effectively. The last thing people want is less RAM or less choice in RAM. The RAM itself may evolve (e.g. SOCAMM, computational memory, MCR, SCM, etc.), but socketable/replaceable/expandable memory is not going away.
EDIT:
> It’s hard to see what the motivation is for spending some of their limited foundry time on products that are only of interest to lower-margin direct consumers if this keeps up
This is a fair concern, but entirely independent of the first part of your comment. Worth noting that just because Samsung is the only game in town left selling consumer DIMMs (at least in the US) doesn't mean that the consumer market isn't getting supplied. Micron, Samsung, and SK are all still selling DRAM components to consumer-facing brands (like Corsair, Gskill, Geil). It's entirely possible they may reconsider that, but consumers aren't the only ones with volume demand for DDR4/DDR5 DRAM UDIMMs. OEMs like Dell, HP, etc. and various SIs all have considerable volume demand for them as well, which, combined with consumer demand, places considerable pressure on those companies not to fully exit supplying the market - even if they choose to only do it indirectly going forward.
I also had a Crucial SSD fail. I believe it was either 256GB or 512GB SATA, around 2013-2014. Right around the same time OCZ released a batch of SSDs that were so bad they went out of business, despite being a leader in performance. It was a fairly large story about defective silicon. Good lesson in not being too loyal to brand names.
I think I actually enjoyed Netflix more when they used to send DVDs in the mail. The library of movies felt much more complete, and getting stuff in the mail is fun. And it’s probably a healthier way to watch than binging 3 seasons of some mediocre show in a weekend just to see how it ends.
Yeah, I think younger people don't realize how limiting Netflix's library is today. They used to have absolutely everything. If it was released on DVD, they had it.
Now if I want to watch a movie or TV show, I need to consult an aggregator [0] to figure out where it's streaming.
As a nerd, I was so excited about the early days of streaming, as it felt like the inevitable path for a company like Netflix.
Now though, I'm way past subscription fatigue. I've seen movies I "purchased" on iTunes be silently swapped out for other versions as the licensing changed.
So I recently bought a Blu-ray player for the first time, and I'm assembling a library of plastic discs. This might be the last physical media format for video, so I want to support it and grab some of these titles while they're available.
It's long, but I listened to this podcast a while back with Peter Attia and Trenna Sutcliffe discussing Autism, ADHD, and Anxiety, and found that it really reduced the stigma I associated with medication for treatment of ADHD. In particular, understanding the risks of not effectively treating ADHD in comparison with the potential risks/benefits of the medication. That's not to say that we should only rely on medication - behavioral therapy (with parents involved too) should also play a part.
You could draw a parallel with GLP-1 agonists: people like to grandstand about how you shouldn't need it and how it's somehow cheating. As if it's not addressing a condition that people are suffering from right now, today.
The stigma also seems to accidentally admit that things like executive function and food noise aren't equally distributed, thus some people could benefit from intervention.
For example, if you've never been fat, or you never binge eat, or you've never procrastinated 15 minutes of homework until 2am, then you're missing the irony when your solution for people who deal with these things is to try harder and to jump through hoops that you don't need to.
Behavioral therapy is only needed to make people feel better about taking amphetamines. It takes only a very cursory review of published reputable papers to realize there's nothing behavioral therapy can do to improve ADHD because, as Russell Barkley says, ADHD is a disability of doing, not of knowing what to do.
If medication alone has worked for you, that's great! But I don't think your opinion matches the medical consensus.
> For children with ADHD younger than 6 years of age, the American Academy of Pediatrics (AAP) recommends parent training in behavior management as the first line of treatment, before medication is tried.
> For children 6 years of age and older, the recommendations include medication and behavior therapy together—parent training in behavior management for children up to age 12 and other types of behavior therapy and training for adolescents. Schools can be part of the treatment as well. AAP recommendations also include adding behavioral classroom intervention and school supports. [1]
This makes sense for very young children, for various reasons - mainly that it’s hard or impossible to diagnose ADHD reliably at such a young age, and that medication is hard to dose properly for a rapidly growing child. But these recommendations are honestly more about helping the parents cope than about treating the child’s ADHD. Behavioral therapy is more about learning how to fit in than addressing the actual problems (which are often exacerbated by the inevitable failure of such behavioral treatment and its corresponding expectations in folks with genuine ADHD).
CBT is effective in treating people whose problems mostly stem from an inaccurate view of themselves or the world around them, because CBT trains people to take a step back and reassess what they're seeing. If you're suffering from some forms of OCD, for example, it can be incredibly effective; it helps to reframe things.
It is not effective in, and I would argue actively worsens, situations where you're feeling bad about your accurate view of things, such as when you're depressed because you're unable to ever get any of the things you need to do done despite knowing they need to be done. CBT is unable to help in that situation because most people can't simply go "oh, well, it's OK, it's a mental health condition" - employers, while sometimes supportive, aren't going to continue employing someone who doesn't do the work they're being paid for, and no amount of reframing changes the fact that they'd eventually lose their job.
> Using a random effects model, we found that CBTs had medium-to-large effects from pre- to posttreatment (self-reported ADHD symptoms: g = 1.00; 95% confidence interval [CI: 0.84, 1.16]; self-reported functioning g = .73; 95% CI [0.46, 1.00]) and small-to-medium effects versus control (g = .65; 95% CI [0.44, 0.86] for symptoms, .51; 95% CI [0.23, 0.79] for functioning). Effect sizes were heterogeneous for most outcome measures. Studies with active control groups showed smaller effect sizes. Neither participant medication status nor treatment format moderated pre-to-post treatment effects, and longer treatments were not associated with better outcomes. Conclusions: Current CBTs for adult ADHD show comparable effect sizes to behavioral treatments for children with ADHD, which are considered well-established treatments. Future treatment development could focus on identifying empirically supported principles of treatment-related change for adults with ADHD. We encourage researchers to report future findings in a way that is amenable to meta-analytic review.
Yeah, it's better at making people feel better. Not great but certainly OK at improving behaviour.
As I said, the evidence seems to suggest medication is extremely effective, which I guess is why people are quoting the first thing I wrote and acting like they disagree with me (do they get mad at the suggestion that CBT works a bit because they feel judged for using the arguably superior treatment?).
I want to share my counter-example -- no amount of therapy could help me not almost lose it every time I drove my daughter in heavy traffic or dealt with her just being a toddler.
But 2 weeks on Concerta made lasting changes, even months after going off the drug.
It was the best type of therapy -- you just do the thing that triggers you, minus the bad part, and learn that it's not so bad and that you can do it. That has profound implications.
No, it doesn't. It _barely_ works and mostly consists of teaching people some coping mechanisms. Medication works _much_ better, especially when used in addition to CBT.
Therapy and, most of all, understanding how our brains work make all the difference in the world.
It's like realizing that the reason you've been getting stuck in the mud is not that you're a bad driver.
It's just that people who don't are driving 4x4 trucks, and you've had a Nissan Z series sports car.
Turns out, farms and off-road are simply not the right environment for your vehicle, and when that environment has some accommodations, like the paved surface of a highway or a race track, you're literally running circles around people in the most common vehicles.
One profound effect of taking Adderall was feeling the clarity to understand that difference, and seeing the road instead of the endless mud fields in front of me.
It does help to get things done, but around 30% of ADHD'ers aren't responsive to it.
Understanding that you're getting stuck because your brain wasn't meant for that kind of driving, however, is universally useful.
That's why I made that ADHD wiki [1], and keep posting links to it.
It's a compilation of information that has helped me tremendously to understand the above; and I know this resource was helpful to others too in their journeys.
My perspective is that of a late-diagnosed adult who was completely unaware of what ADHD is, and thought they couldn't possibly have an attention deficit because, to get anything done, they have to hyperfocus on it.
Again, learning that hyperfocus is a symptom of ADHD and understanding how it works had a profound impact. And medication helped with that too: it's easier to not get stuck hyperfocused on the wrong thing with Adderall.
Getting Adderall was like spraying WD-40 into rusty steering components. The immediate effect is that I can go where I want to go to, not the random direction my vehicle happens to face.
The long-term effect though was understanding what makes it difficult to steer, and how to maintain it better.
And even if I don't have power steering all the time like everyone else, I'm still better off with that experience.
My point here is that it's never about medication VERSUS therapy and knowledge.
Medication is not an alternative, it's a BOOSTER.
When it works, it's just dropping the difficulty from Nightmare to Medium/Hard. It doesn't play the game for you.
That said, I'm very much happy the Nightmare-mode days are behind me, and I'm very sad that the only reason I've been living my life that way is stigma and lack of information.
When I took Adderall, I unexpectedly had to grieve the future I'll never get to have after being held back by all the pain I've been needlessly subjected to over the preceding three decades.
That grief, too, is a common experience among late-diagnosed adults with ADHD.
Thank you for sharing that link, and contributing to the discussion and awareness <3
In your personal experience, therapy did nothing for your ADHD. That doesn't mean it is universally unhelpful.
While it is true that therapy is not a replacement for how stimulants reinforce executive function, therapy can help folks with ADHD understand their behavior patterns and better manage some of the challenges that go along with the disorder.
In my experience, therapy helped me get over my lifetime guilt around my lack of achievements, better understanding my triggers, and prioritizing self care. I couldn't have gotten through COVID lockdown with my wife and two kids in our tiny house without it.
There may be multiple types of “therapy” being mixed up here. I think it’s important to accept that therapy is never going to “fix” the differences in brain function that folks with ADHD experience. Any attempt at behavioral therapy to “fix” an ADHD brain will fail.
But talk therapy can help some folks come to accept the differences that their ADHD means in terms of how they relate to other people, or to better understand how ADHD impacts their own behavior and self-perception.
I myself have found it’s much easier and happier to shape my life around my particular ADHD than trying to change my behavior (something that’s destined to fail and only compound the negative emotions associated with ADHD).
I'm sorry that has been your experience, but I have had very different experiences - I'd encourage you to give it another shot; there is a lot left on the table for you.
> I'm sorry but therapy does NOTHING for ADHD. I wish it did, it would be very useful to me, but it's just not the case.
The therapist that worked for me practiced ACT, and was closer to coaching when it came to ADHD.
Therapists are living databases of solutions to certain kinds of problems that people have.
Problems caused by ADHD certainly belong to that category, and if your therapist is specializing in that area, they can save you a lot of time and effort by suggesting approaches that you'd otherwise have to figure out on your own.
Finding such a therapist is, unfortunately, a bit like winning a lottery that most people in the US wouldn't have the resources to play.
The article also says that the effects persist after adjusting for BMI:
> After adjusting for age, body mass index (BMI), disease severity and other health factors, GLP-1 users still showed significantly lower odds of death, suggesting a strong and independent protective effect.
The observed reduction in mortality is also quite large:
> ... researchers found that those taking glucagon-like peptide-1 (GLP-1) medications were less than half as likely to die within five years compared to those who weren’t on the drugs (15.5% vs. 37.1%).
More research is needed, but if I were diagnosed with colon cancer I would definitely be asking my doctor about the risks vs. potential benefits of getting on GLP-1 meds based on this study alone.
Is medical tourism a potential option for uninsured people to decrease costs in the event of major illness like cancer? How about for a chronic condition?
It is, and many immigrant families frequent their origin countries for this purpose (my own as well as many others I know), but it’s hard for someone born and raised in the United States to conceptualize what this looks like in a country like Portugal, Armenia, Russia, Turkey, Korea, etc. There are issues of trust, risk, learning new and unusual systems, and travel discomfort, and the incumbent American system benefits from this. It’s a massive thing for someone in Nebraska to say “let me check treatment options for my chronic back pain in Seoul and run the numbers”.
There’s nothing, absolutely nothing, that as an immigrant I loathe more in the United States than the healthcare system. It is disgusting. The mediocrity of the average doctor combined with how much they charge for that mediocrity blows my mind every time life forces me into their wretched offices.
I get 1,700 Mbps on Ookla with my iPhone 17 Pro. This is on 6 GHz with line of sight to the AP, with MLO turned off.
I haven't experienced any issues with 6 GHz enabled, although honestly there isn't much noticeable benefit on an iPhone either in real-world usage. MLO was causing some issues for my non-WiFi 7 Apple devices - since WiFi credentials are synced in iCloud, I found that my laptop was joining the MLO network even though I never explicitly told it to - so I have disabled MLO.
It's not something I do often, but I've done a number of puzzles with my family. It's nice to collaborate toward a common goal, and it's fun to watch the picture come together. I find that working on a puzzle puts me in a flow state, and slotting pieces into the correct place is very satisfying. It also really gets me focused on small details of the image like nothing else - small color gradations in the sky, for example - which can bring new perspective and appreciation to a painting or photograph.
We always glue and frame our puzzles when we are finished (using standard off-the-shelf framing kits from Amazon - nothing like the scale of this article) and display them proudly in a common area of the house for a while after we finish.
But I can see why it's not for everybody, which is totally fine. More open-ended projects are also great.
Yeah I understand. The end result is a blocky model made of plastic and that holds no appeal, it kind of ruins the journey as well. I just don’t respect plastic as much as wood or metal.
I would probably call it 2 billion fps* (with an asterisk), with a footnote to explain how it’s different than video is typically captured. Especially the fact that the resulting video is almost a million discrete events composited together as opposed to a single event. All of which the video is transparent about, and the methodology actually makes the result more interesting IMO.
I would say that everyone - you, other commenters disagreeing with you, and the video - are all technically correct here, and it really comes down to semantics and how we want to define fps. Not really necessary to debate in my opinion since the video clearly describes their methodology, but useful to call out the differences on HN where people frequently go straight to the comments before watching the video.