
You could form a cooperative that periodically negotiates cheaper bulk prices with providers. If the co-op grew to millions of users, it could start running its own data centers.

Each person should still pay for their own usage, but the price could be lower.


DJB has a good counter, "Looking at some claims that quantum computers won't work": https://blog.cr.yp.to/20250118-flight.html

In the middle of the text he specifically addresses Peter Gutmann's argument.


DJB is a bit off his rocker and biased by selling the solution to a problem which does not exist. Reads more like a ransom note than a "good counter".

It's a non-exclusive deal.

No reason for antitrust action whatsoever.


That’s a loophole. Regulation hasn’t caught up to the innovation of the non-exclusive licensing deal. Hopefully we’ll get some competence back in government soon-ish and can rectify the mistake.

That's not a loophole. A non-exclusive licensing agreement is the opposite of a loophole.

It's a backdoor acquisition by looting the key talent.

It's the opposite of an acquisition.

It's literally:

"I don't want you and your 200 tensorflow/pytorch monkeys. I just want your top scientist and I need a clever way to offer him a nine figure salary. Good of you to grant him so much stock and not options. Now I can just make a transfer to your shareholders, of which he is one! Awesome! Now I don't have to buy your company!"

I'll give you bonus points if you can guess what happens to the worthless options all those TF/PyTorch monkeys are holding?

Guys, seriously, be careful who you go to work for, because chances are, you are not the key scientist.


A non-exclusive deal, but they're also acquiring a lot of the staff, which seems pretty exclusive in that sense.

Yeah, but that's going nowhere in court, right?

You can't have the government coming in and telling a scientist who he has to work for. People are free to take jobs at whatever company they like.

This is just a clever mechanism of paying that intellectual capital an amount of money so far outside the bounds of a normal salary that it borders on obscenity.

All that said, I don't say anything when Jordan Love or Patrick Mahomes are paid hundreds of millions, so I need to learn to shut my mouth in this case as well. I just think it sucks for the regular employees. I guarantee they will lose their jobs over the next 24 months.


You saw the name "Noam Chomsky" and that started a process in your mind that generated the standard spiel about Syntactic Structures.

The Chomsky hierarchy is his more fundamental work, the one that joins computer science and linguistics. It was published in IRE Transactions on Information Theory: Chomsky, Noam (1956). "Three models for the description of language" https://chomsky.info/wp-content/uploads/195609-.pdf

Type-3 grammar ≡ finite-state automaton

Type-2 grammar ≡ Non-deterministic pushdown automaton

Type-1 grammar ≡ Linear-bounded non-deterministic Turing machine

Type-0 grammar ≡ Turing machine

ps. Chomsky was already aware of finite-state automata and Turing machines and understood that they match Type-3 and Type-0. Pushdown automata were invented later, and the connection between Type-1 grammars and linear-bounded automata was made a few years later.
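
To make the Type-3 row concrete, here is a minimal sketch (a toy right-linear grammar, not from the paper) of the grammar/automaton correspondence; the state names and the language (ab)+ are just for illustration:

    # A Type-3 (right-linear) grammar for the regular language (ab)+ :
    #   S -> a A
    #   A -> b S | b
    # and an equivalent finite-state automaton, built by hand to show the
    # Type-3 <-> FSA correspondence (toy example, not from Chomsky's paper).
    TRANSITIONS = {("S", "a"): "A", ("A", "b"): "ACC", ("ACC", "a"): "A"}
    ACCEPTING = {"ACC"}

    def fsa_accepts(word: str) -> bool:
        """Run the automaton; accept iff it ends in an accepting state."""
        state = "S"
        for ch in word:
            state = TRANSITIONS.get((state, ch))
            if state is None:
                return False
        return state in ACCEPTING

    print(fsa_accepts("abab"))  # True:  S -> aA -> abS -> abaA -> abab
    print(fsa_accepts("aba"))   # False: the grammar has no way to stop after 'a'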


An LLM is not type 0. It always finishes in finite time so it is not Turing complete.

I asked Copilot

   answer yes or no,  is the following Java method syntactically well formed
   
   static int aMethod() { return "5"; }
and got what I thought was the wrong answer

   No.
   It’s syntactically valid as Java code, but it will not compile because it returns a String where 
   an int is required.
because I hadn't specified clearly that I was talking about the Type-2 CFG of the parser as opposed to the Type-1 behavior of the compiler as a whole. [1] I had a good conversation with Copilot about it and I'm sure I'd get better results with a better prompt... It would make a good arXiv paper to pose grammatical recognition problems to an LLM by prompting

   here is a set of rules: ...  is the production ... in the grammar?
with a wide range of cases. Somebody who just tries a few examples might be impressed by its capability, but if you were rigorous about it you would conclude that an LLM pretends to be able to recognize grammars but can't actually do it.
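
For reference, the exact check is easy to do mechanically; here is a minimal sketch of a brute-force CFG recognizer (the grammar a^n b^n, the strings, and the function names are purely illustrative) of the kind such a paper could compare the LLM's answers against:

    # Exact membership test for a context-free grammar, the kind of rigorous
    # check the prompt above asks the LLM to approximate.
    GRAMMAR = {
        "S": (("a", "S", "b"), ("a", "b")),   # S -> a S b | a b, i.e. a^n b^n
    }

    def derives(symbols, s, grammar=GRAMMAR):
        """True iff the sequence of grammar symbols derives exactly the string s."""
        if not symbols:
            return s == ""
        head, rest = symbols[0], symbols[1:]
        if head not in grammar:                          # terminal symbol
            return s.startswith(head) and derives(rest, s[len(head):], grammar)
        for production in grammar[head]:                 # try every rule and split
            for i in range(len(s) + 1):
                if derives(production, s[:i], grammar) and derives(rest, s[i:], grammar):
                    return True
        return False

    print(derives(("S",), "aaabbb"))  # True  -- in a^n b^n
    print(derives(("S",), "aabbb"))   # False -- counts don't match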

And that's true about everything they do (one could argue that, in an exact sense, an LLM can't do anything at all). They'll make a try at a weird question like

   what percent of North Americans would recognize the kitsune hand gesture?
which is a public opinion research question similar in character to

   what is the lowest mass eigenstate of the neutrino?
in that it could be answered rigorously (but still in terms of probability; even HEP results have p-values)

[1] javac implements Type-1 behavior on top of the Java grammar, which is the Type-2 substrate


I think it could be useful to combine the two paradigms to maybe get a better understanding of what transformers can and cannot learn.

E.g. would it be possible to create an algorithm that takes a grammar (and maybe a desired context window size) as input and constructs a transformer network that generates sentences exactly from that grammar?

("Construct" meaning directly setting the weights, without any iterative training process)


They are combined. The Chomsky hierarchy is at the core of modern computer science because it maps perfectly onto automata theory. They are always taught together in computer science.

>E.g. would it be possible to create an algorithm that takes a grammar (and maybe a desired context window size) as input and constructs a transformer network that generates sentences exactly from that grammar?

You don't need transformers for what you describe. That's the 101 theory-of-computation class, where you learn about automata, grammars, parsers, and generators.
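
For the "generate sentences exactly from that grammar" part, a few lines are enough without any transformer; here is a minimal sketch (toy grammar and names, purely illustrative):

    import random

    # A tiny context-free grammar; expanding symbols left to right generates
    # only sentences of the language it defines (toy rules, illustrative).
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"], ["a", "N"]],
        "VP": [["V", "NP"], ["V"]],
        "N":  [["linguist"], ["parser"], ["grammar"]],
        "V":  [["writes"], ["rejects"], ["accepts"]],
    }

    def generate(symbol="S", grammar=GRAMMAR):
        """Expand a symbol by recursively applying randomly chosen productions."""
        if symbol not in grammar:          # terminal: emit the word itself
            return [symbol]
        words = []
        for sym in random.choice(grammar[symbol]):
            words.extend(generate(sym, grammar))
        return words

    for _ in range(3):
        print(" ".join(generate()))        # e.g. "the parser rejects a grammar"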


Yeah, I know the theory of formal grammars and automata that the Chomsky hierarchy is part of. What I meant is that language models and specifically transformer networks are usually entirely separate from that theory, so it would be useful to build a bridge between "modern" language processing using GPTs/LLMs and the classical formal theory.

The most obvious overlap in usage is with programming languages: LLMs can parse and generate code in formal languages, but their processing model is completely different from syntax trees and parsers. So the question is, how do they store the formal structure of a programming language and could this be mapped back in any way to a grammar or automaton?


The way I see it is that attention is graph-structured -- this token here is connected to that token there and so forth by the attention lighting up, or in the sense that there are a bunch of places in the document where people are talking about "Noam" or "Chomsky" or "Noam Chomsky" or "The Author" or "him", etc.

Alternately, if you were looking at it from a semantic web perspective, the knowledge expressed in a document is a graph, and that graph structure is more fundamental than the tree structure of a text because you could express the same knowledge in different orders. Serialization fundamentally requires putting things in some specific order, which might be chronological (work from 2005 to the present as a play with many acts), or organized around some conceptual hierarchy (kitsune legends, self psychology, character acting, animal and human behavior and physiology, ...), or the minimization or elimination of backward references (whatever it is that the C spec does a touch wrong but post-Common Lisp specs do right), etc. Ultimately the graph is pruned away into a tree where the remaining links are denoted by syntactic features at the local scale of the document, and you're kinda left filling in the rest of the links with some combination of pragmatics, logical inference, something like SAT solving, etc.

A conventional parsing point of view sees a Java program as a tree, but for ordinary purposes it does not matter what order you put the fields and methods in. Even though procedural programs are allegedly a sequence of operations done in a certain order, it is frequently the case that it does not matter at all whether you run line 71 or line 75 first. So it is often the case that the graph is the real thing, and the trees that we're so comfortable with are the shadows on the walls of the cave.


The Chomsky hierarchy pertains to "languages" as defined in the theory of computation: a language is a subset of the set of all finite sequences of symbols over some fixed alphabet. If a sentence (a particular finite sequence of symbols) is in the subset, then it is a "valid" sentence of the language. Otherwise it is invalid.

It should already be clear from this that this notion of language is rather different from natural languages. For example, if there is a formal language that contains "Good morning" and "My hovercraft is full of eels" as valid sentences, then nothing distinguishes these sentences any more. (Of course you could add annotations and build semantic values, but they are not essential to the discussion of formal languages.)
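
A minimal illustration of that point (the alphabet and the two-sentence "language" below are made up): from the formal-language side, membership is the only question you can ask, and both sentences get the same one-bit answer.

    # A formal language in this sense is just a set of strings over an alphabet.
    # Toy example: once both sentences are members, the formalism itself says
    # nothing more about either of them.
    ALPHABET = set("abcdefghijklmnopqrstuvwxyz ")
    LANGUAGE = {"good morning", "my hovercraft is full of eels"}

    def is_valid(sentence: str) -> bool:
        """The only question the formal-language view asks: is it in the set?"""
        return set(sentence) <= ALPHABET and sentence in LANGUAGE

    print(is_valid("good morning"))                    # True
    print(is_valid("my hovercraft is full of eels"))   # True
    print(is_valid("colourless green ideas sleep"))    # False -- not in the set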

It gets a bit more ridiculous when you try to connect LLMs to the Chomsky hierarchy. Modern LLMs do not really operate on the principle of "is this a valid sentence?" yet provide vastly superior results when it comes to generating natural-sounding sentences.

I think LLMs have put an end to any hope that formal language theory (in the style of Chomsky Hierarchy) will be relevant to understanding human languages.


> For example, if there is a formal language that contains "Good morning" and "My hovercraft is full of eels" as valid sentences, then nothing distinguishes these sentences any more.

Mind explaining a bit? Because I've no idea what you mean.


The trouble is that English doesn’t fit neatly into any of these categories. It has features that require at least context-free power, but it can’t handle other features of context-free languages, like unlimited nesting.

Ultimately these are categories of formal languages, and natural language is an entirely different kind of thing.


Strictly speaking, natural languages fit into the Context-Sensitive (Type-1) level of the Chomsky hierarchy, but that's too broad to be useful.

In practice they are classified into the MCSL (Mildly Context-Sensitive Language) subcategory defined by Aravind K. Joshi.
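
The textbook illustration of what "mildly context-sensitive" buys you is the a^n b^n c^n pattern (matched counts across three blocks, analogous to cross-serial dependencies), which is beyond context-free power but trivial to recognize exactly; a toy sketch:

    import re

    # { a^n b^n c^n : n >= 1 } is not context-free, but it sits comfortably in
    # mildly context-sensitive formalisms such as Joshi's tree-adjoining
    # grammars; it mirrors cross-serial dependencies in natural language.
    def in_anbncn(s: str) -> bool:
        """Exact recognizer for a^n b^n c^n (toy example)."""
        m = re.fullmatch(r"(a+)(b+)(c+)", s)
        return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

    print(in_anbncn("aaabbbccc"))  # True
    print(in_anbncn("aabbbcc"))    # False -- block lengths differ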


Sure, if you accept and agree with Joshi.

No reason to do that though, except to validate some random person's perspective on language. The sky will not open and smash us with a giant foot if we reject such an obligation.


Natural languages being in MCSL (Mildly Context-Sensitive) is the consensus among linguists, not some random individual's viewpoint.

OK? What concrete human problems that human biology faces are resolved by this group's consensus? Obsession with notation does little to improve crop yields, or improve working conditions for the child labor these academic geniuses rely on.

Sure, linguists, glad you found some semantics that fit your obsession. Happy for you!

Most people will never encounter their work and live their lives never knowing such an event happened.


You can also reject quantum physics and the sky will not open and smash us with a giant foot. However, to do so without serious knowledge of physics would be quite dumb.

Apples and oranges. Language emerges from human biology, which emerges from the physical realm. In the end, then, language emerges from the physical realm. Trying to decouple it from physical nature and make it an abstract thought bubble is akin to bikeshedding in programming.

You could say this about literally anything.

> Snowden revealed PRISM meant the US government just had straight access

People read this and think that the US government had unhindered access to all data at the major providers.

According to Edward Snowden, PRISM allowed the government to compel internet companies to turn over any data that matched specific court-approved search terms, such as email addresses, all under Section 702 of the FISA Amendments Act of 2008.

At least some parts of it were likely unconstitutional, as it could target U.S. persons, but it was not the free-for-all that "straight access" suggests. It was straight access after FISA court approval.

The NSA ran the much more invasive MUSCULAR program in the UK without a FISA or any other type of warrant.


They were tapping fiber links between datacenters

And, they were directly installing compromised hardware in datacenters [1]

[1] https://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa...


Crypto is a zero-sum game. "The Public" can't be making millions in aggregate.

Probably something like 90+% lose over the longer term. And you make nothing until you sell.


Explain to me how crypto is zero sum? It can be infinitely rehypothecated.

Crypto is a box where a bunch of people put money in and get exactly the same amount out (just distributed differently).

By that logic even the stock market is a zero-sum game.

That's not true. The money you use to buy stocks gets you an ownership interest in a company that creates value. The money you put into crypto gets you a line on a distributed spreadsheet.

The money that comes out of your ownership share is tied to the success of the company, through dividends and buybacks.


That depends if you are using crypto as 1) a store of value; 2) a medium of exchange; or 3) an alternative to permission-based monetary policy. All of it depends on the jurisdiction of the fiat-to-crypto and/or crypto-to-fiat transaction.

Generating code has never been very valuable.

When you are building a reasonable software project (10k to 300k LOC), the actual generation of code is not a significant cost. If code generation were the true bottleneck, a 300k LOC project worth $50 million would take no more than 1–3 work-years to produce, even without AI. You could hire quality code monkeys from India for $15 to $40/hour and get it done for less than $250,000.
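
A rough back-of-envelope for that claim (illustrative numbers only, assuming about 2,000 working hours per work-year and counting pure code-generation cost):

    # Back-of-envelope, not a real estimate: pure code-generation cost only.
    loc           = 300_000
    work_years    = 3
    hours         = work_years * 2_000   # 6,000 hours
    rate_per_hour = 40                   # upper end of the quoted range, USD
    cost          = hours * rate_per_hour
    loc_per_hour  = loc / hours
    print(cost)          # 240000 -- under the $250,000 figure
    print(loc_per_hour)  # 50.0   -- LOC of finished code per hour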

The cost of management does not approach zero when you have Microsoft Excel.


Do most people read the article and think "the F-4 couldn't see the F-22 simply because of stealth, even at close range"?

In reality, these fighters have forward-looking radars. An F-4 could do the same to a lone F-22. The real issue here is the F-22's stealth against Iranian ground radars. For whatever reason, they did not warn the F-4.


As the top comment in MR points out, this is not that impressive.

> 12 hour ahead prediction is .0581.

The "ahead" is Before Markets Resolve. Brier score 0.0581 12h before markets resolve is usually not impressive score and information gained has usually little value.

Another issue is the difficulty of the questions. You can get an arbitrarily low Brier score with easy questions.

A low Brier score when the information is not already out there and still has value would be impressive. A low Brier score when things are often already settled and the information has little value is not interesting.
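
To make that concrete, the Brier score is just the mean squared error between forecast probabilities and binary outcomes, so near-settled questions give near-zero scores almost for free (the numbers below are illustrative):

    # Brier score = mean squared error between forecasts and 0/1 outcomes.
    def brier_score(forecasts, outcomes):
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # Near-settled questions 12h before resolution: confident forecasts, expected outcomes.
    print(brier_score([0.97, 0.03, 0.98, 0.02], [1, 0, 1, 0]))  # ~0.00065
    # Genuinely uncertain questions with honest 60/40 forecasts.
    print(brier_score([0.6, 0.4, 0.6, 0.4], [1, 0, 0, 1]))      # 0.26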


You don't get any important information from this single automated transaction.

Look at the Nvidia executives disposition/acquisition schedule. Here is Shoquist https://www.nasdaq.com/market-activity/insiders/shoquist-deb... Disposition means the selling or otherwise getting rid of a security. Acquisition means the buying or otherwise gaining ownership of a security.

All Nvidia executives sell Nvidia stock constantly year after year. They get more stock with their options. Almost all their worth is in NVDA and they want to spend and diversify.

Does that change whatever narrative you had made up in your mind?

