It certainly seems like someone would've invented a kid-friendly phone by now that's completely safe and doesn't allow access to the "real" internet at all, but only the ability to send texts without images, make voice calls, etc. Now that we have AI it would be easier, and you could potentially give "Google" access that's censored into a "child friendly" output by the AI. You could have a texting app where friends can talk, but only to kids in their own school, for example, or at least limited by geographical area, to foster friendships IRL, rather than some Chinese bot being able to trick your kid into eating Tide Pods or whatever their latest Attack on America happens to be.
But TBH making kids continually solve math problems seems a bit mean to me. Like making a kid do pushups for food if they're overweight. Too militaristic and authoritarian for my liking, but I can respect your creativity here. It's good to try new ideas.
“Child friendly output” is not a solution. It is the problem. I trust my 9-yo to avoid porn or violence; I don’t trust him to be able to resist the hours of inane content on YouTube Kids &c. Using AI to facilitate access to more of that, while censoring reality, is the opposite of what’s needed.
So make a phone without all the things that make it so profitable? Limit what can be sold to them? You'd have to sell it at a premium for less functionality.
There are ways of locking down phones and apps, I think. I am pretty sure there are apps that will do most of what you want, but they do not have critical mass.
I did set up a Jitsi server for my daughter and her friends at one point when another parent was not keen on allowing kids access to chat and video apps.
You can give kids a basic phone instead of a smartphone.
Right, I didn't mean it necessarily had to be on its own hardware. I don't have any Android development experience, but it seems like Android could have a version that's as locked down as this.
If I had kids I wouldn't even allow use of a smartphone. I think hardly any Big Tech execs let their own kids use these dumpster fires called smartphones and social media. They know there's almost zero benefit to it. It just leads to brain damage, laziness, ADHD, psychological disorders like depression, life-threatening risk-taking, and even su*cide.
Depends on age, individual, usage and circumstances. My kids had phones as teens, and they were useful to some extent. It also depends what they do - social media + doomscrolling is the worst thing.
There is Android support for locking things down for kids, but I do not know how effective it is - mine are adults or close to being adults now.
It's also hard to do without. I would have to pay a lot more for my daughter's bus tickets to get to school if she did not use the bus company app (because that would mean daily tickets instead of monthly ones, which are a lot cheaper). It's where a lot of kids not only discuss things and socialise, but organise things too (although I encourage doing that at a desktop rather than on a phone when possible), so kids without one get left out.
I agree with all that. Nowadays kids are so addicted to phones, the phone is like a toy (even like a baby pacifier) that they simply never outgrow even into adulthood. They can't sit at a stoplight without needing a "fix" like a junkie. It's so sad.
I somewhat agree, and it is very harmful, but adults who did not have phones as kids can be just as bad. I have even seen someone posting on social media (with a photo of what was happening) to complain about a child not putting their device down!
It is not just sad, it is harmful. "What is life if full of care, we have no time to stand and stare". It is the opposite of mindfulness.
I dunno. My formative years were the 1970s, and I don't think anyone my age will have a genuine panic attack if denied access to their pacifier like today's kids (and adults) do.
But would you make your kid do CAPTCHAs every time they need to earn some privilege? We're talking about what's appropriate for kids, not what's easy or hard for adults or kids. I mean why not make them earn dessert by doing push-ups? Because it's mean, that's why.
If it were up to me, I'd make them do 5 push ups and 5 crunches instead. Or put the devices down altogether. It's not mean to make your kids do physical activity. In fact, if you are not making them do physical activity, I'd say you are negligent as a parent. I guarantee you that if you had your kids start doing pushups and crunches, they would get to the point of it being a nothing burger to do. There will be a bunch of moaning and complaining at the start, but that goes away. It's just as much conditioning as the kid crying and being rewarded with a device.
If you wanna have successful kids just make 'em solve a coding challenge on a white board for food and/or medical and dental care, whereby noncompliance or failure earns them a night out in the tent in the back yard, especially in winter. You wanna be a disciplinarian, then let's get it right, amirite?
no, now you're just being obstinate because you think it's cool on the internet or something. if you think doing 5 push ups and 5 crunches is punishment, then we're just on different planets. fine, if you're so against physical exercise, then make the ankle biters clean their room, take out the trash, walk the dog (oops, physical exercise again), or any number of other things. unless you're one of those parents who thinks chores are too taxing for their sweetwiddleones
I think you took my post a bit too seriously. I have no kids, but I just know the topic of how to discipline children is hotly debated among the "experts" (if there are any).
I'd be willing to bet a more clear prompt would've given a good answer. People generally tend to overlook the fact that AIs aren't like "google". They're not really doing pure "word search" similar to Google. They expect a sensible sentence structure in order to work their best.
Maybe, but this sort of prompt structure doesn't bamboozle the better models at all. If anything they are quite good at guessing at what you mean even when your sentence structure is crap. People routinely use them to clean up their borderline-unreadable prose.
I'm all about clear prompting, but even using the verbatim prompt from the OP "ffmpeg command to convert movie.mov into a reasonably sized mp4", the smallest current models from Google and OpenAI (gemini-2.5-flash-lite and gpt-4.1-nano) both produced a working output for me, with explanations for what each CLI arg does.
Hell, the Q4 quantized Mistral Small 3.1 model that runs on my 16GB desktop GPU did perfectly as well. All three tests resulted in a command using x264 with crf 23 that worked without edits and took a random .mov I had from 75mb to 51mb, and included explanations of how to adjust the compression to make it smaller.
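For reference, the command all three models converged on looked roughly like this (a representative sketch, not any model's verbatim output; exact preset and audio flags varied between responses):

```shell
# Convert movie.mov to a reasonably sized MP4 with H.264 video.
# -crf 23 is the x264 default quality level: higher values (e.g. 26-28)
# give smaller files, lower values give higher quality.
ffmpeg -i movie.mov -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k movie.mp4
```

Adjusting the `-crf` value is the knob the models all suggested for trading size against quality.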
There's as much variability in LLM AI as there is in human intelligence. What I'm saying is that I bet if that guy wrote a better prompt his "failing LLM" is much more likely to stop failing, unless it's just completely incompetent.
What I always find hilarious too is when the AI skeptics try to parlay these kinds of "failures" into evidence that LLMs cannot reason. Of course they can reason.
Less clarity in a prompt _never_ results in better outputs. If the LLM has to "figure out" what your prompt likely even means, it's already wasted a lot of computation going down trillions of irrelevant neural branches that could've been spent solving the actual problem.
Sure you can get creative interesting results from something like "dog park game run fun time", which is totally unclear, but if you're actually solving an actual problem that has an actual optimal answer, then clarity is _always_ better. The more info you supply about what you're doing, how, and even why, the better results you'll get.
I disagree. Less clarity gives them more freedom to choose and utilize the practices they are better trained on instead of being artificially restricted to something that might not be a necessary limit.
The more info you give the AI the more likely it is to utilize the practices it was trained on as applied to _your_ situation, as opposed to random stereotypical situations that don't apply.
LLMs are like humans in this regard. You never get a human to follow instructions better by omitting parts of the instructions. Even if you're just wanting the LLM to be creative and explore random ideas, you're _still_ better off to _tell_ it that. lol.
Not true and the trick for you to get better results is to let go of this incorrect assumption you have. If a human is an expert in JavaScript and you tell them to use Rust for a task that can be done in JavaScript, the results will be worse than if you just let them use what they know.
The only way that analogy remotely maps onto reality in the world of LLMs would be in a `Mixture of Experts` system, where small LLMs have been trained on a specific area like math or chemistry and a sort of router selects, before inference, which model to send the prompt to. If there were a bug in a MoE system and it routed to the wrong 'Expert', then quality would suffer.
However _even_ in a MoE system you _still_ always get better outputs when your prompting is clear with as much relevant detail as you have. They never do better because of being unconstrained as you mistakenly believe.
I love Copilot in VSCode. I always select the "Claude Sonnet 3.7" model, since Copilot lets me choose the LLM. What I love about Copilot is the tight integration with VSCode. I can just ask it to do something and it relies on the intelligence of Claude to get the right code generated; all Copilot is really doing is editing my code for me, reading whatever code Claude tells it to in order to build context, etc.
That's why I said "in VSCode", because I have no idea what this guy is running, but it's almost a certainty the problem isn't Copilot; it's a bad LLM and/or his bad prompt.
The Copilot integrated with Microsoft 365 doesn't have a model switcher; it just is what it is. You are talking about a completely different product that Microsoft calls by the same name.
imo, any VSCode user needs both extensions: "GitHub Copilot" for inline completions, and "GitHub Copilot Chat" for interactive, multi-turn coding chat/agent.
I haven't tried GPT-4.1 yet in VSCode Copilot. I was using 'Claude Sonnet 4' until it was struggling on something yesterday which 3.7 seemed to easily do. So I reverted back to 3.7. I'm not so sure Sonnet 4 was a step forward in coding. It might be a step back.
Every time you locate something in space and/or time, it means a wave has collapsed. So that statement is as trivial as saying "constraints are about positions of things in space time." It's about as enlightening as saying "clocks tick" or "rulers have numbers on them."
They say "On each step...[yadda yadda] we have a completely observed state, the wave function has collapsed."
So they're trying to justify calling a "state" a "collapse". That's a bad metaphor to start with, but then they try to use that metaphor to justify calling lots of other stuff "waves" that are unrelated to waves, and continue to shove that square peg thru a round hole. Hilarious.
I know. It's hard to tell if they're trying to be jokingly "cringe" about all the "wave" stuff, or simply that non-conversant about wave theory and QM.
I mean in VIM you can't even easily exit. I've always had to literally reboot my computer to get out of VIM. One time even that didn't work, so I had to pull the main circuit breaker in my house to get it to quit.
You can tell those feet had toes that were much longer and stronger than modern toes are. Makes sense, since these creatures were closer in time to when we were like monkeys climbing thru trees.
However, I do notice the pronounced gaps between the toes. My parents' generation grew up in Melanesia starting around the 1950s, and many of them have commented on the distinctly different footprint profiles of the local people who had never worn shoes versus the western newcomers. If you've never worn shoes your toes are far more splayed. I don't know about length of toe.
So really I don’t think your observation is related to their genetic proximity and more to do with bodily adaptation. Perhaps an anthropological podiatrist can comment.
For context, how old are the oldest Egyptian pyramids?
Well I regularly walk barefoot in wet sand and I don’t see any noticeable difference between my footprints and those in the article. I can assure you I am not simian.
Sure, there's all kinds of reasons the fossil footprints might have long looking toe marks. Heck it could've been toenails. Not sure what they used for nail clippers back then. haha.
Women's feet have grown 30% since 1960. Look it up. Doesn't mean the trend will continue, it just means evolution can indeed happen very rapidly under certain circumstances, and for primates to keep long toes for a very long time even after coming down from the trees makes some sense. Probably much more efficient to run thru mud, etc.
Wouldn't nutrition, diet, and lifestyle changes be a far more likely explanation than evolution? What mechanism within the last 80 years could possibly be the driving factor behind evolutionary changes in people's foot size? It's not like people in the 1930s were dying due to overly large feet, nor has foot size been a significant factor in mating success. People are taller today too, but that isn't because tall people used to die more often or were once considered unattractive; it is mostly because of better nutrition thanks to far more varied and reliable diets.
It's very likely that there's a simple switch (i.e. not much more than a couple of mutations required) which governs finger/toe length in primates. For example, did you know all the DNA for growing a "Lizard Leg" (the ENTIRE leg) is still in all snakes, but just not activated, because one other mutation is blocking it?
There are many known genetic conditions, seen even in modern times, caused by one or two mutations that can produce very long fingers/toes, and people in this thread are arguing that even in 23,000 years nature can't land on that mutation and stick with it, _especially_ when we _know_ the DNA for it is likely still there, because we're all apes, 98% identical to monkeys for example, which have the long toe thing.
I don't know why you're on about this, but our foot shape has been essentially static across the entirety of genus Homo. The difference in time between us and them is an imperceptible rounding error compared to the many millions of years since bipedalism evolved. These people looked like us, wore clothes, spoke languages, etc. If you teleported one of their infants forward and raised it, it would be virtually indistinguishable from a modern person until you did genetic testing.
If feet can change 30% in 50 years then toes can certainly change that much in thousands. And I'm not even saying it was worldwide, just the people who made those mud footprints. And that 30% isn't even all humans either, it's bizarrely only women.
Evolution can happen rapidly sometimes. Look up "island rule" or "Foster's rule", which is also about this. Changing environmental conditions can rapidly increase evolution rates, specifically for the "size" attribute.
A similar topic, though not this exact link[0], was mentioned here on HN a while back. An aboriginal man had a lock of hair which had been passed down over many generations, some of which was allowed to be carbon dated. It was old; I don't remember the specific age, but it was at least BC old. It showed his people had been in Australia for a very long time, and that they predate the first humans in North America.
Just looking at this article, Aborigines had been in Australia for 20k-30k years before the White Sands footprints were made. I'm sure there are footprints of similar "vintage" there. It would be interesting to compare them.
We're 98% genetically identical to all other primates, so a gene combination controlling lengths of appendages is buried probably in just a handful of mutations embedded in that 2%.
Maybe ultra modern humans have toes stunted from non-stop shoe wearing. You definitely splay your toes more if you're accustomed to walking barefoot than if you're accustomed to walking in shoes.
Sure it is. Especially when talking about relative sizes of existing anatomy rather than completely new anatomy.
For example: Women's feet have gotten considerably larger over the past several decades. For example, in the 1960s, the average size was around a 6.5, in the 1970s it was 7.5, and today it's often cited as between 8.5 and 9. That's a whopping 30% (according to Gemini) increase in them whoppers, in my lifetime alone.
I think it's also well known that when certain environmental conditions put new stressors on a population, evolution can happen in only a few generations. Look it up. There are countless examples of rapid evolution that are well known to happen.
Yes we live in a society, and 50-100 years is not significant in an evolutionary scale (unless you are breeding artificially en-masse which would be funny in case of women with giant feet).
Rapid Evolution _does_ happen also. Foster's Rule can happen in under 100 years.
"Evolutionary Scale" time ranges is normally what we associate with the ability to evolve entirely new body morphologies. However, for simple changes in length of existing structures often only several generations is required (a few decades).
We are probably talking about entirely different things. Growing feet (if there is even such a thing) being attributed to evolution (by natural selection) sounds wrong, and I don't know how "rapid evolution" is related. And I'm sorry, conversations involving phrases like "ask gemini" is just hilarious to me. I can't even.
In many cases no one knows the _cause_ of "Rapid Evolution", but ALL SOTA LLMs know it definitely happens, and quite often, and can give numerous examples.
“My LLM thinks so” has got to be the worst supporting argument I’ve ever seen, and I fear it will become all too common. Even “just Google it” is better, because at least the user might find an actual source.
Have you tried asking these programs to provide some sources and reading those instead?
Seems like it makes more sense to build on the build machine, and then just copy images out to PROD servers. Having source code on PROD servers is generally considered bad practice.
The source code does not get to the filesystem on the prod server. It is sent to the Docker daemon when it builds the image. After the build ends, there's only the image on the prod server.
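If I understand the workflow being described, it relies on the fact that `docker build` streams the build context to whatever daemon the CLI is pointed at. A minimal sketch, assuming a hypothetical prod host reachable over SSH and running a Docker daemon:

```shell
# Point the local docker CLI at the prod host's daemon over SSH
# (user@prod.example.com is a placeholder). The source tree is streamed
# to the daemon as build context; after the build completes, only the
# image exists on the server, not the source files.
DOCKER_HOST=ssh://user@prod.example.com docker build -t myapp:latest .
```

The same `DOCKER_HOST` trick works for `docker run` and other commands, which is presumably why it feels like a hidden feature.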
I am now convinced that this is a hidden docker feature that too many people aren't aware of and do not understand.
Yeah, I definitely didn't understand that! Thanks for explaining. I've bookmarked this thread, because there's several commands that look more powerful and clean than what I'm currently doing which is to "docker save" to TAR, copy the TAR up to prod and then "docker load".
I always just use "docker save" to generate a TAR file, then copy the TAR file to the server, and then run "docker load" (on the server) to install the TAR file on the target machine.
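That workflow, sketched with hypothetical image and host names; the intermediate TAR file can also be skipped entirely by piping `docker save` straight into a remote `docker load`:

```shell
# Export the image to a TAR, copy it to the server, and load it there.
docker save -o myapp.tar myapp:latest
scp myapp.tar user@prod.example.com:/tmp/myapp.tar
ssh user@prod.example.com docker load -i /tmp/myapp.tar

# Or, without the intermediate file:
docker save myapp:latest | ssh user@prod.example.com docker load
```

The piped form avoids writing the archive to disk on either machine, at the cost of re-sending the whole image on every transfer.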
If someone's not using LLMs yet in 2025 to write code they're basically Amish.
They're riding a horse in the age of automobiles, just because they think they're more comfortable on horseback, while they've never been in a car even once.