msuvakov's comments | Hacker News

Why the b > 2 condition? In the b=2 case, all three formulas also work perfectly, giving a ratio of 1. And this is an interesting case where the error term is an integer, and the only case where that error term (1) is dominant (since b-2=0), while the b-2 part dominates for larger bases.


In the b=2 case, you get:

  1 / 1 = 1 = b - 1
  1 % 1 = 0 = b - 2
They are the other way around; see, for example, the b=3 case:

  21 (base 3) = 7
  12 (base 3) = 5
  7 / 5 = 1 = b - 2
  7 % 5 = 2 = b - 1


In the b=2 case, 1/1 = 1 = (b-2) + (b-1)/denom(b) = (b-2) + (b-1)/1 = 2b - 3, which for b = 2 equals 1 = b - 1.

In base 2 (and only base 2), b - 1 >= denom(b), so the "fractional part" (b-1)/denom(b) is at least 1 and carries into the 1's (units) place, which is what swaps the two results (quotient b - 1, remainder b - 2).
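
If anyone wants to check the pattern numerically, here is a rough Python sketch (my own helper names; it assumes the two numbers are the descending-digit (b-1)...21 and ascending-digit 12...(b-1) numbers, as in the base-3 example above):

  def descending(b):
      # (b-1)(b-2)...21 read as a base-b number, e.g. 987654321 for b = 10
      n = 0
      for d in range(b - 1, 0, -1):
          n = n * b + d
      return n

  def ascending(b):
      # 12...(b-1) read as a base-b number, e.g. 123456789 for b = 10
      n = 0
      for d in range(1, b):
          n = n * b + d
      return n

  for b in range(2, 12):
      q, r = divmod(descending(b), ascending(b))
      print(f"b={b}: quotient {q}, remainder {r}")
      if b > 2:
          assert q == b - 2 and r == b - 1
      else:
          assert q == b - 1 and r == b - 2   # b = 2: the two roles swap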


To put it another way: after seeing examples of how an LLM with capabilities similar to state-of-the-art ones can be built with 20 times less money, we now have proof that the same can be done with 20 times more money as well!


There was this joke about rich Russians that I heard maybe 25 years ago.

Two rich Russian guys meet and one brags about his new necktie. "Look at this, I paid $500 for it." The other rich Russian guy replies: "Well, that is quite nice, but you have to take better care of your money. I saw that same necktie just yesterday in another shop for $1000."


Can you explain that joke for me? I keep reading it and I don't get it.


The punch line is that more expensive is better in cases where you buy something just to flex wealth.



To put it simply: he only bought the necktie so he could brag about how rich he is. He could have bragged even more if he had bought the necktie in the other shop.



It's just that rich Russians do not have financial sense.


Imagine what they'll achieve if they apply DeepSeek's methods here with this insane compute.


And they will, since DeepSeek open-sourced everything.


The only things DeepSeek open sourced are the architecture description and some of the training methods. They didn't open source their data pipelines or their heavily optimized training code.

Their architectural achievement is their own MoE and their own attention. Grok has been MoE since v1. As for attention, we don't really know what Grok uses now, but it's worth noting that DeepSeek's attention was already present in previous versions of DeepSeek models.

As for the reasoning recipe for R1, it seems Grok has either replicated it or arrived at it independently, since they have a well-performing reasoning uptrain too.


      ___                        ___                     ___           ___     
     /  /\           ___        /  /\      ___          /  /\         /  /\    
    /  /::|         /__/\      /  /:/     /__/\        /  /::\       /  /::\   
   /  /:|:|         \__\:\    /  /:/      \  \:\      /  /:/\:\     /__/:/\:\  
  /  /:/|:|__       /  /::\  /  /:/        \__\:\    /  /:/  \:\   _\_ \:\ \:\ 
 /__/:/_|::::\   __/  /:/\/ /__/:/         /  /::\  /__/:/ \__\:\ /__/\ \:\ \:\
 \__\/  /~~/:/  /__/\/:/~~  \  \:\        /  /:/\:\ \  \:\ /  /:/ \  \:\ \:\_\/
       /  /:/   \  \::/      \  \:\      /  /:/__\/  \  \:\  /:/   \  \:\_\:\  
      /  /:/     \  \:\       \  \:\    /__/:/        \  \:\/:/     \  \:\/:/  
     /__/:/       \__\/        \  \:\   \__\/          \  \::/       \  \::/   
     \__\/                      \__\/                   \__\/         \__\/


(I took out several dozen lines of whitespace from this post. Looks like the green graphic didn't come through.)


Thanks. It seems that some UTF-8 characters are not accepted as part of the comment. Anyone who wants to see the rabbit should check the page source :)


Gemini 2.0 works great with large context. A few hours ago, I posted a Show HN about parsing an entire book in a single prompt. The goal was to extract characters, relationships, and descriptions that could then be used for image generation:

https://news.ycombinator.com/item?id=42946317


Which Gemini model is NotebookLM using atm? Have they switched yet?


Not sure. I am using models/API keys from https://aistudio.google.com. They just added new models, e.g., gemini-2.0-pro-exp-02-05. Experimental models are free of charge, with a daily quota that depends on the model.
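
In case it helps, here is a minimal sketch of the kind of single-prompt call I mean, assuming the google-generativeai Python package and an AI Studio key (the prompt wording and file name are placeholders, not the exact Show HN code):

  # Minimal sketch: send a whole book plus extraction instructions in one prompt.
  import google.generativeai as genai

  genai.configure(api_key="YOUR_AI_STUDIO_KEY")  # key from https://aistudio.google.com
  model = genai.GenerativeModel("gemini-2.0-pro-exp-02-05")

  with open("book.txt", encoding="utf-8") as f:
      book = f.read()

  prompt = (
      "Extract every character in the book below, a short physical description, "
      "and their relationships to other characters. Return JSON with keys "
      "'characters' and 'relationships'.\n\n" + book
  )

  response = model.generate_content(prompt)
  print(response.text)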


Great observations! Thanks for your deep dive into the result. I didn't go into this level of detail myself, but one thing I notice is that "the cat" in the graph is actually Peter, the cat that Tom gave the painkiller to (with missing connections to Tom and Aunt Polly).

You're absolutely right that some characters are missing even in those short books, and there are likely many more relationships that haven't been fully captured. That said, I’m still quite impressed by how much data the LLM extracted in a single pass, especially given the complexity of the task, the size of the input, and the strict output format.

My estimate of quality was subjective. To truly quantify accuracy, we’d need to establish a "ground truth" with a better approach and measure the difference between the generated and actual relationship graphs. One possible way to do that would be to process the text in multiple passes: first extracting characters, then identifying relationships, both steps with more sophisticated prompt engineering. Another way is to manually annotate the network. The only book I found with a publicly available, human-annotated character network is Les Misérables, based on Donald Knuth’s work: https://github.com/MADStudioNU/lesmiserables-character-netwo...

However, there is an additional challenge. Even with human annotation, the question remains: how do you define the relationship network? What is a relationship in a book? Should it be limited to connections explicitly stated in the text, or can it also include relationships deduced from context with some probability? Defining these criteria is crucial to quantifying the quality of the result.
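
To make that comparison concrete, a rough sketch (with made-up edges and a hypothetical alias table, nothing here is from the actual run) would treat both networks as sets of undirected edges over canonical names and compute edge precision/recall:

  # Made-up edge sets and alias table, only to illustrate the comparison.
  ground_truth = {frozenset(e) for e in [("Tom", "Aunt Polly"), ("Tom", "Peter"), ("Tom", "Huck")]}
  extracted    = {frozenset(e) for e in [("Tom", "Aunt Polly"), ("Tom", "the cat"), ("Huck", "Joe")]}

  aliases = {"the cat": "Peter"}  # alias resolution is itself part of the ground-truth question

  def normalize(edges):
      return {frozenset(aliases.get(name, name) for name in edge) for edge in edges}

  extracted = normalize(extracted)
  tp = len(extracted & ground_truth)
  print(f"edge precision = {tp / len(extracted):.2f}, recall = {tp / len(ground_truth):.2f}")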


I still think you should try a book which has been much less studied than the ones you mentioned. The LLM is almost certainly trained on Wikipedia, which has a lot of this information, plus a lot of essays for high school level assignments.

I found 'Annotating Characters in Literary Corpora: A Scheme, the CHARLES Tool, and an Annotated Novel' at https://aclanthology.org/L16-1028/ which describes some manual annotation efforts for Pride and Prejudice. I don't know if the result is available, but the text suggests it is.

It points out a fun observation: "characters may be referred to by multiple names, sometimes drastically different (e.g. Dr. Jekyll and Mr. Hyde)"

Huh. https://aclanthology.org/2022.latechclfl-1.10.pdf says "that the character networks of translations differ from originals in case of long novels, and the differences may also vary depending on the novel and translator’s strategy."

Ooo, it cites https://theseaofbooks.com/2016/04/29/the-5-least-important-c... which is about the 5 least important characters in Pride and Prejudice:

> So if you filled out our reader survey and are fairly sure you didn't come across 117 people in Pride and Prejudice last time you read it, this is because when we compiled that list, we added every last entity that could possibly be considered a character. In fact, Pride and Prejudice has a small cast of characters, compared to certain of our other novels. Ever wanted to know the population of Middlemarch, for example? By our reckoning, it's the tidy figure of 333! (Admittedly, some of them are goats.)

This might be useful: "Using Citizen Science to study literary social networks" at https://txtlab.org/2024/12/using-citizen-science-to-study-li...

> By mobilizing volunteers to annotate character interactions, we gathered a high-quality dataset of 13,395 labeled interactions from contemporary fiction and non-fiction books. This dataset forms the foundation for understanding how genres and audience factors influence the social structures in narratives.

This appears to be an interesting field, which I have no time to explore any further. :(


Same in Serbo-Croatian: 1 mačka, 2-4 mačke, 5+ mačaka, 0 mačaka.


> Slavic family: Russian, Ukrainian, Belarusian, Serbian, Croatian

https://www.gnu.org/software/gettext/manual/gettext.html#ind...
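
For reference, the three-form rule from the gettext manual for this family looks roughly like the quick Python sketch below (the mačka forms mirror the example above):

  # Three plural forms, as in the gettext manual's rule for this family.
  FORMS = ["mačka", "mačke", "mačaka"]

  def plural_form(n):
      if n % 10 == 1 and n % 100 != 11:
          return 0
      if 2 <= n % 10 <= 4 and not (10 <= n % 100 < 20):
          return 1
      return 2

  for n in (0, 1, 2, 5, 11, 21, 22, 25, 101):
      print(n, FORMS[plural_form(n)])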


This news should be dated to last December, not now.

The T2T team published a preprint [1] last December and released the data [2] in March. However, due to the peer review process, the findings have only just been formally published in Nature. The publication timeline can indeed be slow, and in cases like this one, the question is: what's the point when all scientists interested in the topic already know about it and are working with this assembly?

[1] https://www.biorxiv.org/content/10.1101/2022.12.01.518724v1.... [2] https://github.com/marbl/CHM13


> what's the point when all scientists interested in the topic already know about it and are working with this assembly?

In this context, I would say the point of a press release is getting the news out generally to non-scientists.


I forked it with a similar idea: https://openprocessing.org/sketch/1929687


heh, neat. next up a 7x7 grid with the even distribution at the center :-)


They probably generate the ID when the results are stored in the database, not when the samples are taken. The difference in processing time explains this. What are the dates of the results (stated at the bottom of the PDF)?


An updated plot (29 test results from volunteers) is available here: https://twitter.com/msuvakov/status/1481033480901431297

The only outlier is Djokovic's positive test. It fits perfectly with 26 December, the same date the timestamp was created.
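
For what it's worth, the check behind that plot is essentially comparing test IDs against result timestamps; a rough sketch of one way to do it (a linear fit over made-up numbers, not the real dataset or the real IDs) would be:

  import numpy as np

  # Made-up (test ID, day of December) pairs standing in for the volunteer data.
  ids  = np.array([7.30e6, 7.32e6, 7.35e6, 7.38e6, 7.40e6])
  days = np.array([14.0, 16.0, 20.0, 24.0, 26.0])

  slope, intercept = np.polyfit(ids, days, 1)     # simple linear fit

  query_id = 7.41e6                               # hypothetical ID of the disputed test
  print(f"Predicted result date: {slope * query_id + intercept:.1f} December")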


Thank you. I'd say that is very convincing, especially with him walking about on the 16th, 17th, and 18th.

When scanning the QR code, is the date visible anywhere on the page? I am trying to guess whether someone just edited a PDF with a different date (which might explain why they chose 16.12 - you only need to change a single number). The alternative would require someone backdating a test in the system.

In any case, not a good look for Djokovic.


No, that is the weak point of this system. Anyone can change the date in the PDF and no one can check that except the Serbian authorities. The only limit on the date is one year after the test, because the verification system (the QR code link) is set not to validate older tests.

