Love to see this concept condensed! This kind of knowledge only emerges after you dive into your project and are surprised to find that things do not work as expected (inevitable if the project is niche enough). Will keep a list like that for every future project.
> One of the most notorious examples of Clickjacking was an attack against the Adobe Flash plugin settings page. By loading this page into an invisible iframe, an attacker could trick a user into altering the security settings of Flash, giving permission for any Flash animation to utilize the computer’s microphone and camera.
I think GitHub is correct that the bypass itself is not a vulnerability, but just like the little tooltip on GitHub's "create secret gist" button, GitHub could do a better job of clarifying this in the "Actions permissions" section.
The Google input method on my phone is patched by myself. So is the calculator, and many other everyday apps. I cannot imagine owning an Android phone without the ability to sideload. With every future Android phone, I may well root it and void the warranty on day one.
If you assume that a cell is full and then derive a contradiction, that is pretty much backtracking from a computer's perspective. So it is reasonable that the solver does not do this trick.
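To make the point concrete, here is a minimal sketch (not any particular solver's code; the function names and the tiny 1-4 domain are illustrative) of why "assume a value, hit a contradiction, undo" is exactly backtracking:

```python
# Hypothetical sketch: trial-and-error cell filling is backtracking.
def solve(cells, constraints_ok):
    """Fill the first empty cell (None) by trial; recurse; undo on failure."""
    try:
        i = cells.index(None)          # first empty cell
    except ValueError:
        return True                    # all cells filled: solved
    for candidate in (1, 2, 3, 4):     # assumed domain, for illustration only
        cells[i] = candidate           # "assume the cell is full"
        if constraints_ok(cells) and solve(cells, constraints_ok):
            return True
        cells[i] = None                # contradiction: undo and backtrack
    return False
```

The undo step on failure is what distinguishes this from pure constraint propagation, which never guesses.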
My educated guess is that the profit from such a "magic" search engine is far smaller than from a magic "painter" that is more likely to replace a real artist. So the AI tech companies prefer the latter. It is all about the business model of the world we are in.
GenAI models are meant to create, not retrieve. The model being able to sometimes reproduce copyrighted material in training data is an unintended side effect, and usually requires the user to intentionally cause it to happen. Their business model is indeed a "magic painter" (and/or "magic writer", "magic coder", etc.), which is IMHO valuable and fair - again, it's meant to create, not reproduce or retrieve.
I indeed confused LLM/foundation models and GenAIs. They are both the current tech trend and both fall under the category of AI productivity tools, and I agree with your point.
I’m not sure what you mean here. If you’re in the realm of “more space” then you’re not thinking of the time it takes.
More precisely, I think it is intuitive that the class of problems that can be solved in any amount of time given O(n) space is far larger than the class of problems that can be solved in any amount of space given O(n) time.
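One way to see the asymmetry: with O(n) space you can afford exponential time, e.g. exhaustively searching all 2^n bit-vectors while only ever storing the current one. A rough sketch (the function name and predicate are illustrative):

```python
# Illustrative sketch: O(n) space can buy O(2**n) time.
def exhaustive_search(n, predicate):
    """Check all 2**n bit assignments using O(n) space and O(2**n) time."""
    current = [0] * n                  # the only O(n) working storage kept
    for mask in range(2 ** n):
        for i in range(n):             # decode mask into the current vector
            current[i] = (mask >> i) & 1
        if predicate(current):
            return list(current)
    return None                        # no assignment satisfies the predicate
```

Nothing like this is possible the other way around: a program limited to O(n) steps simply cannot enumerate 2^n candidates.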
>Programs (especially games) clearly use more memory than there are instructions in the program.
How can you access a piece of memory without issuing an instruction to the CPU? Also, "clearly" is not an argument.
>Memory bombs use an incredible amount of memory and do it incredibly quickly.
How can you access a piece of memory without issuing an instruction to the CPU? Also, "incredibly quickly" is not an argument. Also also, O(n) is incredibly quick.
As in your assertion is literally self-evidently false. The burden of proof is on you here, especially since there are instructions that can load more than a single bit of memory.
> How can you access a piece of memory without issuing an instruction to the CPU?
Let me ask you this instead: where do the running instructions live? That is right: in memory. However, just because instructions exist in memory doesn’t mean they’re accessed. There is no relationship between the number of instructions and the amount of memory accessed/used.
This is about time and memory complexity, which is a formal field of computer science. Your replies are about your own vague understanding of computing, which is not the topic here.
Yes, but you are asserting the relationship is directly connected -- which is clearly not true. You said that it is O(n) memory and O(n) time, both using n. That means a program containing x bytes can only run for x seconds. This is clearly not true.
>That means a program containing x bytes can only run for x seconds.
That is not what it means. Again, if you are not familiar with the notation, then all you are doing is attaching your personal ideas about computing to some symbols.
Almondsetat's proof seems more obvious. Given O(n) time, you can only use O(n) space, so you're comparing "O(n) space, any amount of time" with "O(n) space, O(n) time", and it turns out you get more resources the first way.