'Temperature' is probably already close to what you want: it controls how much GPT takes low-probability tokens (which could reasonably be interpreted as confidence?) into account.
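For anyone unfamiliar, here's a minimal sketch of how temperature scaling typically works at the sampling step. The function name and logit values are made up for illustration; real implementations usually combine this with top-k/top-p filtering:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    # Dividing logits by the temperature sharpens the distribution
    # when temperature < 1 (low-probability tokens get suppressed)
    # and flattens it when temperature > 1 (they get more weight).
    scaled = np.asarray(logits, dtype=float) / temperature
    # Numerically stable softmax over the scaled logits.
    exp = np.exp(scaled - scaled.max())
    probs = exp / exp.sum()
    return np.random.choice(len(probs), p=probs)

# Hypothetical logits for three candidate tokens.
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0.7))
```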
Love it! The older kids in the fam regularly revisit this game to recreate the LAN battles of old, and we'd love adding these to shake things up a bit :) Post up a PayPal donation link if you'd like some $$ for your efforts!
I'm no fan of the stock MIUI experience either, especially the China-only variants, but with that said...
The bootloader unlock wait is 7 days, and has been for every Xiaomi phone I've owned.
You can turn off "Sync", which is Xiaomi's backup/sync service, if you don't want to sync stuff up to the cloud...
Providing your IMEI / phone number aka "personal info" is standard practice; even Apple requires it when registering AppleCare+, etc.
There are community ROMs that remove all the Chinese bloatware that comes preinstalled (https://xiaomi.eu/community/), plus they add a bunch of new features that make for a pretty awesome experience.
The assumption that a child can't give consent is based on their lack of understanding of what exactly they're consenting to; a drunk person can't give consent for the same reason. So yes, a 16yo seeking out a 12yo to have sex with would be wrong. Or an 18yo with a 14yo. They are not peers.
The question was about when the younger party instigates. If such contact is inherently harmful, then two prepubescent children experimenting with each other would presumably cause similar issues. I feel some cognitive dissonance between those two examples, which is why I was hoping someone had some actual research on the topic.
Freshmen dating seniors in high school seems a lot different from adults forming a party to advocate for sex with prepubescent children. I wouldn't have thought it belongs in the same discussion, but maybe I just got old.
A follow-up question: a human doesn't start out "knowing" anything either. They first create definitions for objects and words, then use those to build increasingly abstract concepts that we eventually classify as "knowledge" of the thing. Is there anything that would stop LLMs from being able to do the same? I fully agree the capability is not there yet, but I can't say myself what would stop an appropriately designed model from getting there.
A human hears words in context. Those words tie to things in the environment, responses to the young human's actions, etc. A parent saying "roll the ball" during playtime with their kid, while actually pushing a ball back and forth, grounds the words in actual experience.
> is there anything that would stop LLMs from being able to do the same thing?
If you built an AI system which could hear/see/touch/move etc, and it learned language and vision and behaviors together, such that it knows that a ball is round, can be thrown or rolled, is often used at playtime, etc, then maybe it could understand rather than just produce language. I don't know that we would still call it an LLM, because it could likely do many other things too.
Socrates argued that we are born knowing everything, but that we have forgotten most of it. Learning is simply the act of recalling what you once knew.
The point, for this thread, is not whether or not Socrates was correct.
Rather, it’s a warning that we must not confidently assume we are anything like a machine.
We may have souls, we may be eternal, there may be something utterly immaterial at the heart of us.
As we strive to understand the inner-workings of machines that appear, at times, to be human-like, we ought not succumb to the temptation to think of ourselves as machine-like merely in order to convince ourselves (incorrectly) that we understand what’s going on.
We may indeed have souls or be eternal; although I call myself an atheist, I don't subscribe to any idea with 100% certainty. As CosmicSkeptic points out, everyone holds bad ideas without knowing it, and unless you're open to questioning them, you'll never find out.
With that said, there is quite literally zero evidence for the existence of a soul, despite it being posited for thousands of years, and increasing evidence that consciousness is simply a product of a sufficiently connected system. I'll draw an analogy to temperature, which isn't "created", but is a simple consequence of two points in space having different energy levels. I'm sure there's a better analogy that could be made, but I think you get the idea.
And, conversely, we might just be so full of ourselves that we are willing to resort to claims of the immaterial if that's what it takes not to give up the exceptionalism.
I'll dig up a source in a bit, but there is a critical period of development during which a child must be exposed to language, or they will fail to develop the very core skills you're suggesting are innate in a person regardless of upbringing. This is exactly how you learned everything you know: your parents talked to you. Language grants you the ability to define concepts in the first place; without it, you can't recognise them, because you have no language with which to think about them. So what specifically differentiates the way your brain learned to classify objects and words from the way a NN does? And what stops a NN from developing concepts based on the relationships between those new definitions in the same way you do? IMO it's arguably just a matter of processing power and the configuration of the network.
The question being asked was "what it would mean under this definition of knowing things for a machine learning algorithm to ever know something". Aside from your answer being rude, it's also unhelpful in that it doesn't address the question asked and instead relies on reductio ad absurdum to pretend to make a point.
If you'd like to take a crack at a helpful answer, perhaps educate us all on what it WOULD take for you to consider a NN to actually "know" something in the same way that we say a human or other sentient animal does.
> Aside from your answer being rude, it's also unhelpful in that it doesn't address the question asked and instead relies on reductio ad absurdum to pretend to make a point.
That is indeed often the kind of answer that a philosophical question deserves.
> If you'd like to take a crack at a helpful answer, perhaps educate us all on what it WOULD take for you to consider a NN to actually "know" something in the same way that we say a human or other sentient animal does.
Serious question: does anyone really struggle with staying under their data limit anymore? I pay $28 AUD a month and get 85GB, and I only use about 20GB of that per month.
You do realise the definition is "beautiful or delightful in a way that seems removed from everyday life"... It seems like you're set on defining "magical" as "related to druids" lol, when nobody is using the term that way.