That video helps explain one property of one of the UFO videos (the rotation) but doesn't explain the rest. It doesn't explain the Tic Tac videos. It does not explain why these were observed on radar as well.
While some of the videos have explanations, I would kindly encourage you to look at this with more curiosity.
He covers the "Tic-Tac" and "Go Fast" videos too, just not in that specific video. Like in this one, where he explains how the "Go Fast" video isn't actually even a fast object zipping just above the water, but rather an object flying at roughly wind speed at about 12,000 feet.
The Tic Tac looks exactly like any number of in-flight videos of other airborne objects. The apparent speed is a function of the unusual perspectives created when two objects fly at different altitudes. Watch tactical footage from fighters on a regular basis and it won't even look odd.
> It does not explain why these were observed on radar as well.
The lens flare was caused by the camera looking at the ass end of another jet. The radar saw the other jet.
If even one of these videos has a mundane explanation that should have been obvious to the Navy upon investigation, I think that discredits the lot. Either the Navy couldn't figure it out themselves (which seems highly improbable), or for some reason the Navy is deliberately misleading the public, or at the very least allowing some of their personnel to mislead the public and playing coy about it. I think this is what's happening.
> or for some reason the Navy is deliberately misleading the public, or at the very least allowing some of their personnel to mislead the public and playing coy about it
Maybe they think it's funny. Maybe it's to confuse their adversaries, or a ploy for more funding from Congress. Maybe they're allowing some pranksters to have their fun because they want to encourage an environment of open reporting where pilots aren't afraid to report strange things.
The Navy was directed by the Executive branch to release the videos. They released the videos and a non-statement about what the videos were.
My guess is that there's an internal report describing the FLIR system and how the FLIR system works and how the internal workings of the FLIR system caused the visual phenomena. But that's all classified.
So they did the absolute minimum the Executive branch required them to do and left it up to the White House Press Secretary to explain it to the American public.
To me it reeks of the brass not wanting to have any more of their time wasted. There's a great scene in The Wire where the metro police, the harbor police, the state police, and the county sheriff argue that a string of murders doesn't fall under their jurisdiction ("it's your problem, you deal with it," etc.), subverting the trope of the local cops fighting with the federal/state police (usually the FBI) over "this is my jurisdiction" or whatever. I think this is the same. The Executive branch (I'm 80% sure it was Trump, coulda been Obama, too lazy to look it up) demanded that they do a thing they didn't want to do, and then they dragged their feet and did the bare minimum, and in the process made a mess that the Office of the White House now needs to clean up (which they didn't, because they don't want to explain a classified sensor system in a public briefing either).
> This preliminary report is provided by the Office of the Director of National Intelligence (ODNI) in response to the provision in Senate Report 116-233, accompanying the Intelligence Authorization Act (IAA) for Fiscal Year 2021, that the DNI, in consultation with the Secretary of Defense (SECDEF), is to submit an intelligence assessment of the threat posed by unidentified aerial phenomena (UAP) and the progress the Department of Defense Unidentified Aerial Phenomena Task Force (UAPTF) has made in understanding this threat.
You're graphing a random walk with a single black swan event (the 2008 crash).
A more honest graph would just have the day/night delta mapped out, not integrated in.
Indeed. I pulled the data and mostly reproduced the results when starting on Jan 1, 1990 (I got 1/20 of a penny for day trading, but "only" somewhere in the $600-650 range for overnight trading, depending on the choice of end date). Close enough to proceed with additional analysis, IMO.
If instead I pick my starting point as Jan 1, 2012, I see $1.15 for day trading vs $2.24 for overnight trading. (Note that there was again another black swan event in Feb-March 2020, before which point day trading was actually doing better than overnight trading.)
Tracking deltas (well, day-over-day multipliers) as you suggest since 1990 shows that the two shapes are qualitatively more similar, although day trading is more volatile (a rough sketch of the computation follows the list):
- 0.1st percentile: day trading: 0.803 vs overnight trading: 0.871
- 1st percentile: day trading: 0.932 vs overnight trading: 0.960
- 99th percentile: day trading: 1.059 vs overnight trading: 1.052
- 99.9th percentile: day trading: 1.189 vs overnight trading: 1.222
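The computation is roughly the following (a minimal sketch, not my exact code; pulling daily bars via yfinance and the ^GSPC ticker are stand-ins for the actual data source):

    import pandas as pd
    import yfinance as yf  # assumption: any source of daily OHLC bars works

    df = yf.download("^GSPC", start="1990-01-01")
    mult = pd.DataFrame({
        "day": df["Close"] / df["Open"],                  # open -> same-day close
        "overnight": df["Open"] / df["Close"].shift(1),   # prior close -> next open
    }).dropna()

    print(mult.quantile([0.001, 0.01, 0.99, 0.999]))  # the percentiles above
    print(mult.prod())  # growth of $1 under each strategy since the start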
To say it is a random walk is confusing a model of reality with reality.
Totally agree with the other poster though. You can pretty much prove whatever you want in the market depending on the start date and window size.
I would think the driving factor is after-market earnings releases, expectations for those releases, and how many there randomly happen to be during the window in question.
The response takes a long time to generate. The user could just sit there and stare at a blank response, or start reading in realtime as the response is generated.
I find it surprising that you can display any of it before the whole thing is done, since I would expect information dependencies between the start and the finish of a sentence or paragraph. I have yet to really look into how these models work; they are black boxes to me.
From what I understand, these models generate the response one word at a time. Every time you see a new word appear at the end, the model is taking into consideration the entire chat history + its own answer so far to generate that next token.
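In toy form, the loop is something like this (a sketch; `model` and its interface are made up for illustration, not any real library's API):

    import random

    def generate(model, prompt_tokens, max_new_tokens=100):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            # Every step re-reads the entire history: prompt + answer so far.
            probs = model(tokens)  # one probability per token in the vocabulary
            next_token = random.choices(range(len(probs)), weights=probs)[0]
            tokens.append(next_token)
            yield next_token  # which is why the reply can be streamed live

Nothing after position n exists yet when token n is sampled, so the partial answer you see is exactly what the model has committed to so far.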
Thanks for the comment, that's so fascinating since it seems to put limitations on thinking in general. A human for example can imagine future possibilities concurrently while speaking and correct themselves as they go.
It doesn't seem to map well to how I put together a thought either, but admittedly I wouldn't really know how the mechanics of my brain do it; maybe it's not so different, just with some auxiliary modules bolted on, ha.
tl;dr: It decodes the output one word at a time, but at each step it can focus on any mix of words from the input via the attention mechanism.
So output token n can't depend on future output token n+1 in GPT, but it can attend to any of the input tokens.
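A rough numpy illustration of that causal masking (my own sketch, not GPT's actual code): attention scores for future positions are set to -inf before the softmax, so they get zero weight.

    import numpy as np

    def causal_attention(Q, K, V):
        # Q, K, V: (seq_len, d) arrays of query/key/value vectors.
        scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (seq_len, seq_len)
        future = np.triu(np.ones_like(scores), k=1)      # 1s above the diagonal
        scores = np.where(future == 1, -np.inf, scores)  # hide tokens n+1, n+2, ...
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)               # row-wise softmax
        return w @ V                                     # each row mixes only the past

After masking, row n's weights are zero on every position after n, so the prediction at step n can only draw on tokens 1..n.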
I did not expect that. When iterating with smaller models like nanoGPT, even though the output is one token at a time, it did not feel like it would take half a second between each of them, but I guess that's what happens with billion-parameter models.
Could you help me understand what it means to "pump entropy out of a system"?
I asked ChatGPT and it claims "It is generally not possible to "pump" entropy out of a system in the same way that it can be added to a system. This is because the second law of thermodynamics states that the total entropy of a closed system will always tend to increase over time."
ChatGPT is wrong here, because in order to "pump" entropy out of a system, the system must not be closed! So when you ask it, you must insist that the system isn't closed.
And, specifically, living organisms can't be closed systems, because their existence depends on spending energy to decrease their internal entropy, increasing the entropy of the external environment in the process, in order to keep their own entropy low enough to still function without the random breakage we would expect from an aging system.
And we all know that entropy always wins in the end, and we always break if we are old enough, so this is a Sisyphean task; but we still do it all day, every day, because we are afraid to die.
"Pumping entropy out of a system" to me means keeping the system more orderly. To drive such processes, you have to increase entropy elsewhere, which is also known as using energy.
Beats me. It was pretty obvious to me early on, when asking about any field I know well, that it had no understanding and would happily blurt out a wrong but plausible-sounding answer. So I haven't even tried to ask it about stuff I don't understand. Because how would I even tell if the answer made sense? Seems like an easy way to get the completely wrong idea.
People love DeepL translations for the same reason, that they sound convincing, even though they are often completely wrong. Even before that people were (and still are) trusting the infocards that Google puts in search results based on arbitrary snippets it extracted from the webpage, because those snippets are presented as authoritative even though they're often out of context or completely wrong.
People are used to AI being clunky, unfocused, ungrammatical text, à la Markov chain bots from the 2000s. So conversely, this kind of verbose, coherent, well-written text appears to be knowledgeable and correct.
I can only hope that deepfakes and such become popular enough that people learn to be less trusting of what they find on the internet.
> People love DeepL translations for the same reason, that they sound convincing, even though they are often completely wrong.
Could you show examples?
I'm using DeepL to translate things from and to languages that I know very well, usually to double-check or get additional inspiration for wording. I've never experienced anything that was completely wrong. The translations are most of the time almost perfect.
The examples I know of all involve Japanese light novels translated to English. For example https://twitter.com/Xythar/status/1405658562378952705 (The tweet author is someone I know, not me.) Other cases are fan TLs of novels that I've read which were done through DeepL, so I can't link them.
If you define advertisements as quality information, Google definitely wins out. I wouldn't use either to learn something unless you're interested in buying what their advertisers and SEO results are selling.
To some degree, I'm probably at least curious about what SEOs are selling. If it's art, it's probably popular and I'll want to know what it is, and if it's tech, people with money for SEO are often also the ones with money for innovation.
I don't usually buy stuff from ads (actually, I rarely buy anything nonessential without thinking about it for a week), but Google's results are usually interesting, if a bit creepy and echo-chambery.
I don't just use Google to learn things and accept the first results, but I do use it as a starting point.
That's how I use ChatGPT to learn. It's rarely precisely correct, but its answers are often basically right or direct me to a lot of concepts to investigate further. For instance, I want to build a TEC-based CO2 freezing chamber but need a low-temperature TEC. I asked it what TEC materials operate at low temperature. It gave me 10 different TEC semiconductors with operating temperatures, which were sometimes right. I then used Kagi (seriously, Google?) to find literature. Using the two together works: one to get a rough outline of the domain explained in simple and concise language, and the other to retrieve information.
IR systems lack the bootstrapping phase: unless you know precisely what you want to learn, you'll end up in SEO hell, sifting through loosely related garbage produced by algorithms to include as many relevant terms as possible but zero insightful knowledge. I think for learning, ChatGPT is great for bootstrapping; then I turn to IR to refine and get accurate information.
I disagree; I think it's a case of the question being asked imprecisely. When you ask the right question, ChatGPT gives you a very insightful and accurate answer.
Human:
By what means might entropy be pumped out of one part of a system and into another part?
ChatGPT:
There are several ways that entropy can be transferred from one part of a system to another:
Heat transfer: Entropy can be transferred from a hotter body to a cooler body through heat conduction, convection, or radiation.
Work: Entropy can be transferred from a system to its surroundings when work is done on the system.
Mass transfer: Entropy can be transferred from a system to its surroundings through the transfer of matter.
Information transfer: Entropy can be transferred from a system to its surroundings through the transfer of information.
Chemical reactions: Entropy can be transferred from a system to its surroundings through chemical reactions, either through the production of heat or through the transfer of matter.
Electrical currents: Entropy can be transferred from a system to its surroundings through electrical currents.
Magnetic fields: Entropy can be transferred from a system to its surroundings through the generation of magnetic fields.
Gravitational fields: Entropy can be transferred from a system to its surroundings through the generation of gravitational fields.
If the person knew how to ask the question "correctly", they wouldn't need to ask.
They asked about systems, not closed systems. ChatGPT just regurgitated the 2nd law, which is just plain wrong no matter how you slice it. Any physicist asked this question would explain the distinction.
They gave you all the information you needed to ask the next question. It specified “in a closed system,” which meant it made an assumption of the closed system and told you. An observant reader would ask then “and what about an open system?” Or if they’re not that sophisticated, “what about a non-closed system?”
You've not talked to many theoretical physicists, I see. In my experience they're usually not that good at explaining things, especially to lay people. They tend to be very good at talking with people at the same level or more advanced, usually in a mathematical form. They tend to be pretty bad at imitating a human.
Yeah, if you understand the field or are observant enough, you can tell the answer is fishy. And if you don't, you can't tell.
So what, anyone who gets a wrong idea from ChatGPT is just unsophisticated and we should ignore it? Why are you so incredibly set on invalidating any criticism of ChatGPT?
You don't see a problem with advertising this LLM as something it isn't? Lots of people seem willing to take ChatGPT completely at face value now, and walk away having learned a bunch of nonsense. And lots of them are smart people, they've just been duped by the hype into thinking LLMs can do things they fundamentally can't.
Oh I think it should be improved for sure. I just think this is a bad example. I think most of the fact checking can be done using any modern information retrieval system and you can build algorithms that will regenerate answers until they’re factually correct, or use the IR to hint the answer to correctness. We also have very powerful semantic inference engines and other tools that complement LLM output. I think judging the possibilities by the beta is simplistic, and folks are unfairly down on the achievement by picking nits.
In the context of this question, "system" has a specific technical meaning, though. This is a very technical question, so ChatGPT might be forgiven for assuming the technical meaning.
So what is ChatGPT supposed to be useful for if people need to understand what the answer is in order to ask the "correct" question? Clearly the person didn't pick up on this closed/non-closed distinction which is why they ended up asking on a forum instead of reformulating.
I think GPT just saw "entropy" and "system" and predicted 2nd law. Which is the sort of low effort response you might get if you ask random non-experts on the internet.
> So what is ChatGPT supposed to be useful for if people need to understand what the answer is in order to ask the "correct" question?
Personally I think there's still a lot of value in mere rephrasing, recontextualizing, perspective shifting, and occasional insightful connection that ChatGPT can do even if it's all using information you already ostensibly know.
As is often said, one never learns a topic more deeply than when teaching it to others, and ChatGPT can serve as the ultimate "rubber duck" coworker for any subject. I'm not sure what we can do about users not using critical thinking, or expecting the machine to do that part for them. There are plenty of non-AI sources of misinformation that can readily be taken uncritically too, though, so it's not necessarily a new problem.
No, rubber ducks are still the ultimate rubber ducks, because they don't talk back with industrial grade overconfident bullshit that misleads and confuses you.
Is there a body of evidence that suggests people get more misled and confused after using things like ChatGPT? It seems like a reasonable hypothesis, but my own experience doesn't necessarily support it. I've used the language model at character.ai for a bit and have found it to be clarifying in a sense. When the model spits out some overconfident misinformation, it's a great opportunity to argue with the bot about it in ways one could never argue with another person - certainly not a stranger, at least.
Perhaps I've been confused and misled so badly I don't realize it, so all I can really say is I think it's premature to assume people will be any more misled or confused by technologies like ChatGPT when all they have to do now is get on the internet or flip on a TV to be personally targeted with misleading and confusing information already. I think there's very real potential for the technology to give people a lever against misinformation if it helps them understand and explore their own thoughts/thought processes.
I guess to me, fundamentally, it's a question of who's the one with agency over using it, and to what end. I'd be much more comfortable once we can fit models like this on home computers and worry less about them suddenly trying to sell us sponsored products or convince us of some ideology because their creator was paid to do so.
Write a python function to compute the idiotsecant.
Sure! Here is a function that computes the idiotsecant of a number x:
    def idiotsecant(x):
        return 1/x
This function returns the reciprocal of x, which is also known as the reciprocal function or the multiplicative inverse of x. The reciprocal of a number x is defined as 1 divided by x.
For example, if we call the function with an argument of 2, it will return 0.5, because 1/2 is equal to 0.5.
I hope this helps! Let me know if you have any questions.
Wouldn't the star itself be the closed system? Any lifeforms within it are free to reduce their own entropy so long as they increase entropy within the system.
It's a bit of a colloquialism, I think, but the idea is, all of your biological processes are constantly generating entropy. Your body is constantly tending toward equilibrium, as physical systems do, but you are essentially made out of non-equilibrium systems. You need the water to be here, but not there. Your nervous system is based on electrical gradients that, left to their own devices, will normalize. Every cell has an electric potential gradient it maintains (google "cell proton gradient"). All of these things and thousands more are constantly breaking down and require energy to maintain. Basically, imagine the difference between a recently dead body that just attained ambient temperature and a living one. Characterizing all those differences would be more than the work of a lifetime.
A super abstract, but physically valid, way of expressing that is that your body is constantly "generating entropy" and it needs to consume energy to fight it. "Pumping out that entropy" is what the body is doing when it takes in energy and uses it to maintain all those gradients.
Life in general may not use organic chemistry, or chemistry at all, but the advantage of this level of abstraction is that any life form (in this universe) will have to do something to "pump out the entropy". It is essentially by definition a deviation from the equilibrium state around it, and it will require energy to maintain.
The point I am making here is that there must be some ability to pump out the entropy faster than it is being generated, or pushed into the system. Otherwise those gradients and variations from equilibrium will be erased. Inside a sun, there is so much heat energy being pressed into the system that it is challenging to imagine how any conceivable structure could push it back out again.
This boils down to the observation that "Holy cow, the sun is, like, REALLY HOT!", but, you know, wrapped up in a different formalism that allows us to get past "But what if, like, there's something that could deal with that?". You see this online, the challenge that someone needs to prove that there's no way to build something that could live in the sun. If you don't have thermodynamics as a tool, this isn't even necessarily unreasonable. But thermodynamics gives us a principled way to turn around and say "Any such system would have to have this and that and the other property, and it's really hard to see how plasma and magnetic fields in such a violent environment could have that." It's so not-close that it's not really plausible.
ChatGPT may be referring to the general conceptual space/idea that you can't just build a machine to "reverse entropy" trivially. The most common example of this is that you can't just build an anti-microwave that remotely cools things by shooting radiation at them in some easy manner. It is sooooo much easier to add entropy than remove it; that is definitely true. In fact, it's kinda a key element of my point here. But you can pump entropy out of a system, in some ways, with some machines, in some manners; since you are literally such a machine yourself, you are an existence proof of that.
I think there might be fundamental tradeoffs at play. An information processing system acting in a cold local environment will have the ability to do far more complex computations with more reliability (thus, in a way, be much more likely to exhibit complex internal behavior) but will also be limited in how much work it can perform onto the outside, thus how much it can recognizably output from the computation. While the opposite will be true at higher temperatures.
I signed up for their service, was assigned a virtual agent. I did some background check on the person assigned to me and found she had stolen cc details & funds from a customer of hers in the past.
Nothing you can do when you're in the US and the person that stole from you is in the Philippines.
I just did a Google search of her name and ran into a website from a former client explaining she'd stolen his identity, lamenting there wasn't much that he could do about it, and warning others.
False. The energetic cost of distributing duplicate information is significantly higher than that of local computation. E.g., the joules spent on transmission are orders of magnitude more than those spent on decompression, multiplied over the n duplications.
But surely the orders of magnitude thing factors in when you can have orders of magnitude more clients than servers? I think you'd need to run more specific numbers to lean either way on this.
You seem to not realize that moving the calculation client-side means to send a JavaScript implementation, which for all sensible implementations would be significantly larger than the pre-computed result.
No, it's just a theme for MkDocs [1], a static site generator that has no affiliation with Google. Material for MkDocs only borrows the Material Design philosophy from Google.
Hear hear. That's exactly why I occasionally buy a lotto ticket or 20 when the number gets REALLY big. I consider $20 every year toward an hour of fantasizing about what I'd do with a $500M lump sum a worthwhile investment.
($500M lump sum...$250M after taxes...invested into triple tax-free muni bonds paying 2.5%....ahhhh...$521,000 a month for life...mmmmmm)