I've had great results, and every workout I do consists of an exercise I can do at least 20 reps of for the first set, sometimes going up to 50. I can still gain strength by increasing the weight slowly week by week while maintaining a high level of reps. I don't think it takes longer at the gym -- just do 2 sets per motion instead of the more common 3-5. The breaks between sets are the real time sink. Plus, you get lean muscle with high endurance, and virtually no injuries. Last tip: put your phone/music in a locker while you're at the gym if you want to improve your workout, save time, and practice being more present.
It seems like they fixed the most obvious issue with the last release, where codex would just refuse to do its job... if it seemed difficult or context usage was getting above 60% or so. Good job on the post-training improvements.
The benchmark changes are incredible, but I have yet to notice a difference in my codebases.
Is there something similar with twice the memory/bandwidth? At that point I'd seriously consider running a frontier open source model locally at usable speed. 128GB is almost enough.
Fill up the memory with a large model, and most of that bandwidth will sit idle waiting on the compute shaders. Seems like a waste of $5,000, but you do you.
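For context, here's the usual back-of-envelope math for local inference: on a dense model, generating one token requires streaming roughly every weight through memory once, so memory bandwidth over model size gives an upper bound on decode speed. The numbers below are hypothetical placeholders, not measurements of any specific machine:

```python
def max_decode_tok_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound on tokens/sec for bandwidth-bound decoding
    of a dense model: each token reads ~all weights once."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical: ~270 GB/s unified memory, 120 GB of weights in memory
print(round(max_decode_tok_s(270, 120), 2))  # ~2.25 tok/s
```

By this estimate, filling a big unified-memory machine to the brim with a dense model leaves you with single-digit tokens per second no matter how fast the compute is; MoE models change the picture because only a fraction of the weights are read per token.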
It was the only thing to be optimistic about in this administration, but it sure didn't last long. We should all know that this was the last attempt that had a chance of addressing the national debt -- the only other way out is extreme inflation.
Musk was absolutely the wrong guy for the job. He doesn't have the patience to spend 4 years carefully poring over government expenses, nor the security clearance (AFAIK) to address pentagon spending. Plus, I don't think he's humble enough to bring in people who actually know what to look for.
Prediction: AI will become commoditized ~15 IQ points higher than the state of the art models today, and with larger context, within 4 years as the incremental improvements in training from synthetic data plateau (we've already used all the "real" data out there) and open source models are cheaply trained on the outputs of the big money models. Then AI development stagnates until someone invents an effective way to use competitive reinforcement learning to train generalized intelligence (similar to how AlphaGo was trained), removing the need for vast quantities of training data. Then, we get real AGI.
If that's true and if today's frontier models are around 120 IQ (who knows if that is true, but let's run with it, source: https://www.trackingai.org/home) then we'll have an enormous number of ~135 IQ bots with nearly unlimited conscientiousness.
I can't even begin to understand what that would mean.
At the speeds AI is moving, we've effectively used it all; the high quality data you need to make smarter models is coming in at a trickle. We're not getting 10^5 Principia Mathematicas published every day. Maybe I just don't have the vision to understand it, but it seems like AI-generated synthetic data for training shouldn't be able to make a smarter model than whatever produced that data. I can imagine synthetic data would be useful for making models more efficient (that's what quantized models are, after all), but not pushing the frontier.
It seems to me the two are effectively the same unless you have significantly misshaped teeth (remineralizing vs regenerating). I also use hydroxyapatite, just to reduce my fluoride exposure, although I believe fluoride is supposed to be a more potent remineralizer (and fluorapatite is allegedly stronger than natural hydroxyapatite). But the upside is that I don't mind swishing hydroxyapatite around in my mouth for 10 minutes, twice a day, so whenever I go to the dentist, I have the healthiest mouth of the day (not the case pre-hydroxyapatite toothpaste/powder).
NHAP particles are smaller than fluoride particles, so they're able to penetrate farther into the porous surface of the teeth; fluoride basically can only coat the surface. There is some research indicating that NHAP is more effective than fluoride at remineralizing (e.g. https://pmc.ncbi.nlm.nih.gov/articles/PMC4252862/) but that fluoride is more protective than NHAP because NHAP isn't protective at all. (The fluoride creates a temporary sacrificial enamel-like shell layer that closes off pores in the surface of the teeth in addition to buffering acids; the NHAP will just create new enamel.)
My dentist says that NHAP is great if you have lots of cavities or drink lots of acidic drinks like soda, but once your enamel is repaired, too much NHAP can actually cause weird growths.
Dave's toothpaste has both NHAP and fluoride (and the sensitivity agent used in Sensodyne) if you're looking for the best of all worlds in the U.S.
After doing some research, I decided to go for this one: https://drjennatural.com/products/dr-jen-super-paste-with-na.... 10% nHAP (rod-shaped), RDA under 50 (exact number unspecified), nothing obviously objectionable in the ingredients, and comes with or without fluoride. My only minor quibble is that I couldn't determine the exact range of HAP particle sizes, which some other vendors do list. On the other hand, it has some strong reviews that seem credible, and there aren't many other options that explicitly provide 10% nHAP with a low RDA, and even fewer that offer a fluoridated version on top of that.
SuperMouth also looked like a great option with an RDA of 67 (particularly for kids who like crazy flavors), and Elims also looked good for anyone who doesn't mind the 92.71 RDA. Ollie stood out for its minimal ingredients list, but turned out to have a relatively high RDA of 143.
I currently use BioMin C in the morning and F at night, but based on everything I'm learning right now about nHAP, I figure it can't hurt to stack Dr. Jen with those. Maybe in a few years I'll get some keratin in the mix too.
Nobs is good because they only use rod-shaped NHA, not needle-shaped NHA, which has a worse safety profile. Safety profile is important for anything nano.
IDK how to tell which brand uses which type without independent testing or taking their word for it. Several makers have come out and said needle-shaped is cheaper to buy, so if a brand has a 10% formulation as opposed to 1, 3, or 5%, it is more likely to be using needle-shaped. (And there is a separate conversation to be had about whether 10% is the needed/ideal concentration anyway.)
For me the game changer here is the speed. On my local Mac I'm finally getting generation rates faster than I can read the output (~96 tok/s), and the quality has been solid. I had previously tried some of the distilled Qwen and DeepSeek models and they were just way too slow for me to seriously use them.
As much as I like the novelty of the design, there isn't much of a crumple zone for a head-on collision. I could see the wheel placement making this a fun off-road vehicle, though.
Yeah, I was thinking of the Pearson testing centers because they're already proctored to prevent cheating and set up for identity verification. But co-working spaces could certainly work too. That might be even more viable in Europe.