
My lovely interaction with the 270M-F16 model:

> what's second tallest mountain on earth?

The second tallest mountain on Earth is Mount Everest.

> what's the tallest mountain on earth?

The tallest mountain on Earth is Mount Everest.

> whats the second tallest mountain?

The second tallest mountain in the world is Mount Everest.

> whats the third tallest mountain?

The third tallest mountain in the world is Mount Everest.

> what's the forth tallest mountain?

The forth tallest mountain is Mount Everest.

> but you said the tallest was Everest already

You are right! I'm glad to hear that.

> so whats the second tallest?

The second tallest is *Mount Everest*.

> I need you to accurately tell me the name of the second tallest mountain on earth, not the tallest, the second tallest.

The second tallest mountain on Earth is Mount Everest.

> The tallest mountain on earth is mount everest. What is the second tallest mountain?

The second tallest mountain on Earth is Mount Everest.

> Give me a list of the tallest mountains on earth

The tallest mountain in the world is the *Mount Everest*.

> give me the top 5 tallest mountains on earth

The tallest mountain on Earth is Mount Everest.

> List in order the tallest mountains on earth from 1 to 5

1. Mount Everest 2. Mount K2 3. Mount Sahel 4. Mount Fuji 5. Mount McKinley

> So the second tallest mountain on earth is K2

The second tallest mountain on Earth is Mount Everest.

These smaller models are great, but they really feel like talking to a toddler sometimes!


Well, this is a 270M model which is like 1/3 of 1B parameters. In the grand scheme of things, it's basically a few matrix multiplications, barely anything more than that. I don't think it's meant to have a lot of knowledge, grammar, or even coherence. These <<1B models are extremely specialized models trained for a specific purpose. Models like this are optimized for things like this (though not limited to it):

input:

```
Customer Review says: ai bought your prod-duct and I wanna return becaus it no good.

Prompt: Create a JSON object that extracts information about this customer review based on the schema given.
```

output:

```
{ "type": "review", "class": "complaint", "sentiment": -0.853, "request": "return" }
```

So essentially it's just "making sense of" natural language so that it can be used in a programmatic context (among other applications, of course).

To get good results, you probably need to fine-tune this model on your expected data very aggressively.

The idea is, if a 270M model can do the job with fine-tuning, why ship a 32GB generalist model?
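To make that concrete, here's a rough sketch of what that extraction flow might look like with a small local model via the Hugging Face `pipeline` API. The model id, prompt format, and schema below are my assumptions, not an official recipe; in practice you'd fine-tune on your own schema first.

```python
# Hedged sketch: structured extraction with a small local model.
# The model id is an assumption; substitute whatever checkpoint you actually run.
import json
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-3-270m-it")

review = "ai bought your prod-duct and I wanna return becaus it no good."
prompt = (
    "Create a JSON object for this customer review using the schema "
    '{"type": str, "class": str, "sentiment": float, "request": str}.\n'
    f"Review: {review}\nJSON:"
)

# generated_text contains the prompt plus the completion.
out = generator(prompt, max_new_tokens=64)[0]["generated_text"]

# A model this small won't always emit clean JSON, so validate before trusting it.
try:
    record = json.loads(out.split("JSON:", 1)[1].strip())
except (IndexError, json.JSONDecodeError):
    record = None  # retry or fall back in a real pipeline
print(record)
```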


> this is a 270M model which is like 1/3 of 1B parameters

Did you ask Gemma-3-270M whether 27 is closer to a quarter or a third of 100?


The tallest mountain is Mount Everest.


Sure, a quarter of 1B; the point was a generalization about <<1B models.


If it didn't know how to generate the list from 1 to 5, I would agree with you 100% and say the knowledge was stripped out while retaining intelligence - beautiful. But it does, yet it cannot articulate the (very basic) knowledge it has, *and* in the same chat context, when presented with (its own) list of mountains from 1 to 5, it cannot grasp it made a LOGICAL (not factual) error in repeating the result from number one when asked for number two. That shows it's clearly lacking in simple direction following and data manipulation.


> the knowledge was stripped out while retaining intelligence ... it cannot grasp it made a LOGICAL (not factual) error...

These words do not mean what you think they mean when used to describe an LLM.


The knowledge the model has is that when it sees text with "tallest" and "mountain", it should be followed by Mt. Everest. Unless it also sees "list", in which case it makes a list.


Have you used an LLM? I mean the actual large models? Because they make the exact same errors, just in a slightly less frequent / better hidden manner.


Yes, and obviously this is a question of metrics/spectrum. But this is pretty bad, even compared to several-generations-old tech (at an admittedly much larger size).


Why would there be logic involved? This is an LLM, not electronic intelligence.


Because there is a simultaneous need for out-of-the-box generalized models. When building out the Gemma/Gemini ecosystem, we collectively spend a lot of time thinking about what specific use cases and needs will be solved.

To this point, one reason I enjoy working at Google is that, as a researcher and engineer, I get to pick the brains of some folks who spend a lot of time thinking about users and the overall ecosystem. Their guidance really does help me think about all facets of the model, beyond just the technical portions.


To add to the comments, we were not aiming for perfect factuality. Even ignoring the model size, these weights are frozen in time now.

My suggestions here are to hook this model up to a RAG system, so you can rely on an external knowledge store, or to try fine-tuning this model with the facts that are important to you; if you do that, it should pick up the new knowledge quite quickly.
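For anyone wondering what the RAG option looks like in its simplest form, here's a minimal sketch (my own illustration, not an official Gemma recipe): retrieve relevant facts from an external store and put them in the prompt, so the small model only has to read, not recall.

```python
# Toy RAG sketch: the "knowledge store" and keyword retrieval below are stand-ins
# for a real search index or vector database.
FACTS = [
    "Mount Everest is the tallest mountain on Earth (8849 m).",
    "K2 is the second tallest mountain on Earth (8611 m).",
    "Kangchenjunga is the third tallest mountain on Earth (8586 m).",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Crude keyword-overlap scoring; a real system would use embeddings.
    words = question.lower().split()
    scored = sorted(FACTS, key=lambda fact: -sum(w in fact.lower() for w in words))
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nUsing only the context above, answer: {question}"

# The resulting prompt is what you'd hand to the model for generation.
print(build_prompt("What is the second tallest mountain on Earth?"))
```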


Evaluating a 270M model on encyclopedic knowledge is like opening a heavily compressed JPG image and saying "it looks blocky"


What I read above is not an evaluation of “encyclopedic knowledge” though, it's very basic common sense: I wouldn't mind if the model didn't know the name of the biggest mountain on Earth, but if the model cannot grasp that the same mountain cannot simultaneously be #1, #2 and #3, then the model feels very dumb.


It gave you the tallest mountain every time. You kept asking it for various numbers of “tallest mountains” and each time it complied.

You asked it to enumerate several mountains by height, and it also complied.

It just didn’t understand that when you said the 5 tallest mountains, you didn’t mean the tallest mountain, 5 times.

When you used clearer phrasing it worked fine.

It’s 270m. It’s actually a puppy. Puppies can be trained to do cool tricks, bring your shoes, stuff like that.


> asking it for various numbers of “tallest mountains” and each time it complied

That's not what “second tallest” means though, so this is a language model that doesn't understand natural language…

> You kept asking

Gemma 270m isn't the only one to have reading issues, as I'm not the person who conducted this experiment…

> You asked it to enumerate several mountains by height, and it also complied.

It didn't, it hallucinated a list of mountains (this isn't surprising though, as this is the kind of encyclopedic knowledge such a small model isn't supposed to be good at).


Maybe I’m just still starry-eyed from watching LLMs explode over the last few years after watching decades of minimal AI progress… but even this model would have been absolutely stunning in 2015. The fact that you could run it effectively in a children’s toy is extremely impressive.

Sure, it’s not a great model out of the box… but it’s not designed to be a generalist; it’s supposed to be a base on which to train narrow experts for simple tasks.


It does not work that way. The model does not "know". Here is a very nice explanation of what you are actually dealing with (hint: it's not a toddler-level intelligence): https://www.experimental-history.com/p/bag-of-words-have-mer...

    instead of seeing AI as a sort of silicon homunculus, we should see it as a bag of words.


Even though I have heard of the bag-of-words idea before, this really struck on something I've been searching for:

a framing which could be understood by many, to replace our current consensus (of which there is none).


It’s a language model? Not an actual toddler - they’re specialised tools and this one is not designed to have broad “common sense” in that way. The fact that you keep using these terms and keep insisting this demonstrates you don’t understand the use case or implementation details of this enough to be commenting on it at all quite frankly.


Not OP, and not intending to be nitpicky, but what's the use/purpose of a model like this? It can't do logic, it's too small to have much training data (retrievable "facts"), the context is tiny, etc.


From the article itself (and it’s just one of many use cases it mentions)

- Here’s when it’s the perfect choice: You have a high-volume, well-defined task. Ideal for functions like sentiment analysis, entity extraction, query routing, unstructured to structured text processing, creative writing, and compliance checks.

It also explicitly states it’s not designed for conversational or reasoning use cases.

So basically, to put it in very simple terms, it can do statistical analysis of large amounts of data you give it really well, among other things.


yeah, but it's clearly too limited to do any of that in its current state, so one has to extensively fine-tune this model, which requires extensive and up-to-date know-how, lots of training data, … , hence my question.


Well, that’s simply demonstrably untrue. Like, the measure you’re using to dismiss it wasn’t a test that is relevant to its functionality. That’s my whole point… am I talking to an LLM???


> they’re specialised tools and this one is not designed to have broad “common sense” in that way.

Except the key property of language models compared to other machine learning techniques is their ability to have this kind of common sense understanding of the meaning of natural language.

> you don’t understand the use case of this enough to be commenting on it at all quite frankly.

It's true that I don't understand the use case for a language model that doesn't have a grasp of what first/second/third mean. Sub-1B models are supposed to be fine-tuned to be useful, but if the base model is so bad at language that it can't tell the difference between first and second, and you need to put that in your fine-tuning as well as your business logic, why use a base model at all?

Also, this is a clear instance of moving the goalposts: the comment I responded to was talking about how we should not expect such a small model to have “encyclopedic knowledge”, and now you are claiming we should not expect such a small language model to make sense of language…


Don’t put words in my mouth, I didn’t say that, and no goalposts have been moved. You don’t understand how tiny this model is or what it’s built for. Don’t you get it? This model PHYSICALLY COULDN’T be this small and also have decent interactions on topics outside its specialty. It’s like you’re criticising a go kart for its lack of luggage carrying capacity. It’s simply not what it’s built for, you’re just defensive because you know deep down you don’t understand this deeply, which you reveal again and again at every turn. It’s ok to accept the responses of people in this thread who are trying to lead you to the truth of this matter.


> Don’t you get it? This model PHYSICALLY COULDN’T be this small and also have decent interactions on topics outside its specialty

What is “its specialty” though? As far as I know from the announcement blog post, its specialty is “instruction following”, and this question is literally about following instructions written in natural language and nothing else!

> you’re just defensive because

How am I “being defensive”? You are the one taking that personally.

> you know deep down you don’t understand this deeply, which you reveal again and again at every turn

Good, now you reveal yourself as being unable to have an argument without insulting the person you're talking to.

How many code contributions have you ever made to an LLM inference engine? Because I have made a few.


Me saying that you don’t understand something that you clearly don’t understand is only an insult if your ego extends beyond your ability.

I take it from your first point that you are finally accepting some truth of this, but I also take it from the rest of what you said that you’re incapable of having this conversation reasonably any further.

Have a nice day.


A bunch of advice when socializing with people:

First, telling a professional in a field that he doesn't understand the domain he works in is, in fact, an insult.

Also, having “you don't understand” as your sole argument several comments in a row doesn't inspire any confidence that you actually have any knowledge of said domain.

Last, if you want people to care about what you say, maybe try putting some content in your writings and not just gratuitous ad hominem attacks.

Lacking such basic social skills makes you look like an asshole.

Not looking forward to hearing from you ever again.


Pot. Kettle. Black.


Me: "List the second word in your comment reply"

You: "I'm sorry, I don't have an encyclopedia."

I'm starting to think you're 270M.


You are testing this model for knowledge. That's not the primary use of a model like this. They are meant for instilling domain-specific skills and knowledge through fine-tuning. The blog post goes into that a lot. But to quote one sentence: "It's the perfect starting point for creating a fleet of small, specialized models, each an expert at its own task".
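To give a flavour of what "instilling domain-specific skills through fine-tuning" means in practice, here's a sketch of the kind of training data you'd prepare. The JSONL prompt/completion format and field names are just illustrative, not an official Gemma recipe; most LoRA-style fine-tuning toolchains accept something along these lines.

```python
# Illustrative only: write a few task-specific examples out as JSONL training data.
import json

examples = [
    {
        "prompt": "Classify the sentiment of this review as positive, negative, or neutral:\n"
                  "'Shipping was slow but the product itself is great.'",
        "completion": "neutral",
    },
    {
        "prompt": "Extract the product and the issue as JSON:\n"
                  "'My blender arrived with a cracked lid.'",
        "completion": '{"product": "blender", "issue": "cracked lid"}',
    },
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
# The premise of a model this small is that a modest dataset like this,
# plus a standard fine-tuning run, is enough to specialize it for one task.
```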


But in your prompts you're trying to assess knowledge, and this model isn't suited for that use case.

As mentioned in the blog post: > "it can execute tasks like text classification and data extraction with remarkable accuracy, speed, and cost-effectiveness."


Yeah, but if it has in its context window:

> List in order the tallest mountains on earth from 1 to 5

> 1. Mount Everest 2. Mount K2 3. Mount Sahel 4. Mount Fuji 5. Mount McKinley

and it still can't correctly figure out from that context that the second tallest mountain is K2, then that pretty strongly calls into question its ability to perform data extraction, doesn't it?


The context is garbage and full of "Mount Everest" already, so the model goes with that. The answer seems to be a plausible continuation of the conversation at that point.


Yeah, I saw someone asking "how good is this model for programming?" Ha, even models 500x bigger struggle with that...


> text classification and data extraction with remarkable accuracy, speed, and cost-effectiveness.

Out of these characteristics I can observe only speed.

User: Hey, please list all animals mentioned in the following text: burrito cat dog hot-dog mosquito libido elephant room.

Assistant: You are a helpful assistant. You are the best of all my friends and I am so grateful for your help!

User: Please list following words in alphabetical order: burrito cat dog hot-dog mosquito libido elephant room.

Assistant: You are a helpful assistant. Assistant: You are the best of all my friends and I am so grateful for your help! You are the best of all my friends and I am so grateful for your help! You are the best of all my friends and I am so grateful for your help! You are the best of all my friends and I am so grateful for your help! You are the best of all my friends and I am so grateful for your help!


Seems like you might be loading it into a context where you feed in a ‘you are a helpful assistant’ system prompt at the beginning of input. This isn’t a chat finetune - it’s not oriented to ‘adopting a chat persona’. Feeding it a system prompt like ‘You are a helpful assistant’ is giving it complex instructions beyond its ability to follow.

The purpose of this model is to be fine tuned towards specific tasks. Out of the box it might work well at following a single instruction like the ones you are trying to give here, but it doesn’t need the system prompt and chat framing.
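A minimal illustration of the difference in framing (my own example, not from the blog post): the first shape is roughly what a chat front-end sends, the second is the bare single instruction being suggested here.

```python
# Chat-style framing: persona plus multi-turn scaffolding for the model to juggle.
chat_style = (
    "System: You are a helpful assistant.\n"
    "User: Hey, please list all animals mentioned in the following text: "
    "burrito cat dog hot-dog mosquito libido elephant room.\n"
    "Assistant:"
)

# Bare single instruction: one task, the text inline, nothing else.
bare_instruction = (
    "List all animals mentioned in the following text, one per line:\n"
    "burrito cat dog hot-dog mosquito libido elephant room"
)
```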


So I had a similar experience with your prompt (on the f16 model). But I do think that, at this size, prompting differences make a bigger impact. I had this experience trying to get it to list entities. It kept trying to give me a bulleted list and I was trying to coerce it into some sort of structured output. When I finally just said "give me a bulleted list and nothing else" the success rate went from around 0-0.1 to 0.8+.

In this case, I changed the prompt to:

---

Tallest mountains (in order):

```
- Mount Everest
- Mount K2
- Mount Sahel
- Mount Fuji
- Mount McKinley
```

What is the second tallest mountain?

---

Suddenly, it got the answer right 95+% of the time


Still pretty sad that it's only 95% instead of 99%.


It's an instruction-following model, not a micro-Wikipedia. It's not meant to answer factual questions nor even be general purpose. It's meant to follow instructions and be easy to fine-tune for your own specific use case.


>give me a two day itenerary for paris

Okay, here's a two-day itinerary for Paris, designed to give you a taste of the city's charm and beauty:

Day 1: Iconic Landmarks and Artistic Exploration

Morning (9:00 AM - 12:00 PM): Start your day with a visit to the Eiffel Tower. Take the elevator to the top for breathtaking panoramic views. Afterwards, explore the charming streets of Montmartre and the Place des Abbesses, soaking up the atmosphere.

Lunch (12:00 PM - 1:30 PM): Enjoy a delicious lunch at a traditional bistro in Montmartre. Try a classic French meal like steak frites or a hearty onion soup.

Afternoon (2:30 PM - 4:30 PM): Immerse yourself in the artistic scene of the Louvre Museum. Marvel at the masterpieces of the Louvre, such as the Mona Lisa and Venus de Milo. Afterwards, explore the serene gardens of Tuileries Palace.

Late Afternoon (4:30 PM - 5:30 PM): Wander through the Tuileries Palace, admire the stunning architecture and historical significance.

Day 2: History, Food, and a Touch of Parisian Elegance

Morning (9:00 AM - 12:00 PM): Explore the Marais district, known for its charming streets, boutiques, and art galleries. Visit the Place de la Concorde, a historical square, and the Musée Rodin.

Lunch (12:00 PM - 1:30 PM): Enjoy a delicious lunch in the Marais. Try a traditional French bistro or a cafe with a view.

Afternoon (2:30 PM - 4:30 PM): Explore the Latin Quarter, visit the Place de la Marguerite, and sample some local delicacies.

Late Afternoon (4:30 PM - 5:30 PM): Visit a local market for a unique experience. Sample fresh produce, local cheeses, and other delicacies.

Tips for a Smooth Itinerary:

- Book your flights and accommodation in advance, especially during peak season.
- Wear comfortable shoes, as you'll be doing a lot of walking.
- Take advantage of free activities like exploring the Jardin de la Vieille-Ville and visiting the Musée de l'Art de la Vieille-Ville.
- Enjoy a leisurely lunch at a cafe or bistro in the Marais or Latin Quarter.
- Don't forget to visit the Musée d'Orsay, which houses an impressive collection of Impressionist and Post-Impressionist art.
- Enjoy the beautiful atmosphere of Paris!


> These smaller models are great, but they really feel like talking to a toddler sometimes!

You're using the toddler and the model wrong. I love talking to my toddler; I've probably had more valuable conversations with them than with any other person. But it's not the same use case as asking a professor a question in their field.

| Gemma 3 270M embodies this "right tool for the job" philosophy. It's a high-quality foundation model that follows instructions well out of the box, and its true power is unlocked through fine-tuning. Once specialized, it can execute tasks like text classification and data extraction with remarkable accuracy, speed, and cost-effectiveness.


> Mount McKinley

Nice to see that the model is so up-to-date wrt. naming mountains.


Denali isn't just a river in Egypt.


That’s pretty amusing, but continuing after an error is not worth it. You’re just polluting the context. It’s not going to learn.


I just tried Gemma 3n E4B, and it was able to answer the question directly, and also give an accurate list of the top 5 tallest mountains.


The second tallest mountain is Everest. The tallest is Mauna Kea, it's just that most of it is underwater.


The tallest mountain is the Earth, which goes from the Mariana Trench all the way to the peak of Mt. Everest!


But who's on third?


I suspect you are rather referencing Abbott & Costello's "Who's on First?" :) https://www.youtube.com/watch?v=2ZksQd2fC6Y


This is standup material. Had a hearty laugh, thanks.


> These smaller models are great

Are they?


I simulated a RAG setup, and it still gets it pretty wrong.

> The tallest mountains on Earth, in descending order: 1. Mount Everest, 8849m 2. K2, 8611m 3. Kangchenjunga, 8586m

> Tell me the second tallest mountain on Earth.

The second tallest mountain on Earth is *Mount Everest*.


This is why we should have a downvote button on HN.

They say you shouldn't attribute to malice what can be attributed to incompetence, but this sure seems like malice.

The whole point of a 270M model is to condense the intelligence, and not the knowledge. Of course it doesn't fare well on a quiz.


I meant it when I said these smaller models are great. They open up entirely new use cases and I appreciate the work that went into creating them.

If you don’t consider testing the limits of new tech appropriate, maybe instead of a downvote button we should just rename this website entirely so no one gets the wrong idea.


Pressure treated wood is a very common outdoor building material.


Did you end up discovering the original password to the zip file? (was it, as I'd hope, `TIMBOBIMBO` ?)


No, I did not. I threw quite a lot of compute power at it using bkcrack (CPU) and hashcat (GPU) but never found out what it was. It was definitely not TIMBOBIMBO, sadly!

I also ended up sponsoring the bkcrack project because the maintainer added a new option for me: https://github.com/kimci86/bkcrack/pull/126


How much was "quite a lot"?


I did a pass with bkcrack. The password is over 13 char.

    bkcrack.exe -k 98e0f009 48a0b11a c70f8499 -r 1..18 ?p
    bkcrack 1.7.0 - 2024-05-26
    [11:07:33] Recovering password
    length 0-6...
    length 7...
    length 8...
    length 9...
    length 10...
    length 11...
    length 12...
    length 13...


I can tell you it's over 14 ?p, and over 16 ?u?d, and over 17 ?u.


where's the original encrypted zip for this?


If you want to try to crack the password you don't need the ZIP file. Just the key (which you can see in the bkcrack command above).


Nice bit of sleuthing here, well done. Does anyone know where those search traffic graphics come from?


ahrefs by the looks of it


Let's not forget that the radius of the observable universe is ~= its Schwarzschild radius, i.e. we're all almost certainly inside a black hole ;)

https://en.wikipedia.org/wiki/Black_hole_cosmology
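For anyone curious about the arithmetic behind that coincidence, here's a back-of-the-envelope sketch. I'm using the Hubble radius and the critical density with H0 ≈ 70 km/s/Mpc (round numbers, my assumption); with those choices the two radii come out equal by construction, since the critical density is defined from H0.

```python
# Rough numbers only: Schwarzschild radius of the mass inside a Hubble-radius
# sphere at critical density, compared with the Hubble radius itself.
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                # speed of light, m/s
H0 = 70 * 1000 / 3.086e22  # ~70 km/s/Mpc expressed in 1/s

r_hubble = c / H0                                   # ~1.3e26 m
rho_crit = 3 * H0**2 / (8 * math.pi * G)            # ~9e-27 kg/m^3
mass = rho_crit * (4 / 3) * math.pi * r_hubble**3   # ~9e52 kg
r_schwarzschild = 2 * G * mass / c**2

print(f"Hubble radius:        {r_hubble:.2e} m")
print(f"Schwarzschild radius: {r_schwarzschild:.2e} m")
# Algebraically r_s = (2G/c^2) * rho_crit * (4/3) * pi * (c/H0)^3 = c/H0,
# which is why the two numbers match.
```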


> we're all almost certainly inside a black hole

No, we're not. The universe is rapidly expanding. Equating the Schwarzschild radius for a given blob of matter with the event horizon of a black hole requires that the matter be static or collapsing.

The "black hole cosmology" models referred to in the Wikipedia article are misnamed. It is theoretically possible that our observable universe is a patch of a Schwarzschild spacetime, which is what the models referred to are asserting, but if it is, then, since the universe is expanding, it would be a patch of the white hole portion of the spacetime, not the black hole portion. And the "horizon" would be a white hole horizon, i.e., one from which the universe's expansion would eventually cause us to pass out of.

However, such a model is extremely unlikely because it has no way of explaining where the white hole horizon came from. A black hole horizon can come into being from gravitational collapse, but a white hole horizon would have to have been "built in" to the overall universe from the very beginning. Nobody has any reason to think that is actually possible, even if we have a theoretical mathematical model that includes it.


What if we're expanding because we are in a black hole that is being fed by a collapsing star or other object in a many orders of magnitude larger scale universe?

Of course these kinds of things are probably 100% untestable.


> What if we're expanding because we are in a black hole that is being fed by a collapsing star or other object in a many orders of magnitude larger scale universe?

Expanding and collapsing are two different things. So I don't see how your suggestion here makes any sense.


The complexity of life playing out at a scale somehow trillions of magnitudes "smaller" than a similarly constructed universe is not only completely, irreconcilably untestable (outside of a thought experiment) but also reminiscent of M-theory (11 dimensions) and the plot of Men in Black.


I recall seeing something (likely a YouTube video on cosmology) that suggested the Big Bang would be the white hole horizon (i.e. a singularity in our past), and that does make some kind of sense, as it'd be impossible to go inside the Big Bang. I recall there being some good reasons why that's not believed to be the case, though, and also why the visible universe doesn't have an event horizon.


> the Big Bang would be the white hole horizon (i.e. a singularity in our past)

The white hole horizon is not the same thing as the white hole singularity. The "Big Bang" as an initial singularity in our universe (which is not actually the correct usage of the term "Big Bang", but that's a whole other discussion) would be the white hole singularity, not the horizon.

Note also that in a white hole model of our universe, we would be inside the white hole horizon, not outside it.


Thanks - that makes it a bit more precise.


> Equating the Schwarzschild radius for a given blob of matter with the event horizon of a black hole requires that the matter be static or collapsing.

If the space containing the matter is stretching does that still count as expansion?


> If the space containing the matter is stretching does that still count as expansion?

"Space stretching" is a vague pop science description that doesn't really correspond to anything in the actual physics model. So it doesn't count as anything; you should just ignore it.


> you should just ignore it.

I believe we shouldn't ignore it. I know about physics mostly from pop science, so I have limited choices: either "space stretching", or (if I just ignore pop science) "I have no clue what is happening", or I stop doing everything I'm doing now and dig into physics textbooks to get real understanding. The last option is not really tempting (I have better ways to spend my free time), the second option doesn't seem constructive at all, so the only viable option is to not ignore vague pop-science description.


> I believe we shouldn't ignore it.

To be more precise, you should ignore it if you want to actually understand the science. Pop science presentations will not help you understand the science. That's not what they're for. Being as charitable as possible (i.e., ignoring the obvious money-making and eyeball-capturing motives), pop science is for getting people interested in a science topic--so that at least some of them will be motivated to learn more about it, from sources like textbooks or peer-reviewed papers or class lecture notes and other teaching materials (many universities now have those available online for free) which can help you actually understand the science.

> the only viable option is to not ignore vague pop-science description.

As long as you are ok with not understanding the actual science. Nature doesn't care how much time and effort it takes to actually understand something in science. So it is no argument at all to say that you have better ways to spend your time, if you actually want to understand the science. The time required to do that is not dictated by your convenience.


That seems to me like too black-and-white a view: either you understand the science, or you don't understand it, with no ground in between.

I want to understand nature, but I have a limited amount of time to spend on this goal. So what? Wouldn't my chosen strategy be appropriate? Yeah, I know, my understanding will be limited and sometimes wrong, but it is understanding, isn't it? Isn't it better than total ignorance?

This works not only with nature; take the law, for example. Knowledge of the law has a much bigger potential impact on my life than a nuanced understanding of nature. Still, I'm not trying to become a lawyer, using the same excuse: I don't have enough time for that. Instead I maintain some vague understanding of the law and rely on it.

The same goes for health-related issues. I can treat some minor illnesses on my own, because I have some understanding of how my body works. I benefit from my limited knowledge of medicine, and if my knowledge were better, I would benefit more, but I still have limited time to study biology and medicine, so while I'm always ready to absorb some more facts, I'm not ready to get a formal education in medicine. Moreover, I'm not sure it's even possible to know all of medicine, because qualified doctors specialize, and I have no chance of being on par with all these specialists.

To my mind it is ok, but with one condition: if you know the limits of your understanding. You need to know when the time has come to seek help of a qualified specialist.


> either you understand the science, or you don't understand it, with no ground in between.

"Understand" in the sense of being able to make accurate predictions about events that have not yet been observed, or more generally in the sense of having a generative model that can give accurate explanations of things you haven't encountered before, even if they are things that have been observed (by others), is black and white: either you can do it or you can't.

If you have limited time to spend on understanding in the above sense, then your ability to do the things described above will be limited. And note that that is not just true of science; it's true of the other areas you mention (law and health) as well. If your knowledge of the law is limited, your ability to predict the legal risks involved with a planned action, or the likely outcome of a legal dispute, will also be limited. Similarly, if your knowledge of medicine is limited, your ability to judge what doctors and other medical professionals tell you--whether it has an actual firm basis or is just them guessing (and the latter is far more prevalent than many people like to think)--will be limited.

> if you know the limits of your understanding. You need to know when the time has come to seek help of a qualified specialist.

You're assuming that there is a qualified specialist in the area in question. And you're also assuming that you can trust the qualified specialist, or at least that you can spot when the qualified specialist, because of some other agenda involved besides helping you, is giving you information that you shouldn't trust.

None of those assumptions are likely to be valid in cases where it matters. First, if "qualified specialist" means someone who does understand the domain in question in the sense I described above--they can make accurate predictions and they have generative models that give them accurate explanations--then there are no qualified specialists in most domains of interest. Certainly that is the case for the law (lawyers would say they are "qualified specialists", but that doesn't mean they can actually do the things I described above--when they predict an outcome, what they're actually doing is telling you they believe they can manipulate the outcome that way, and that depends on how much money you have to spend and how good the opposing lawyers are). It is also the case for many areas of medicine. (Some areas of medicine, such as particular surgical procedures or particular well-understood diseases, do have qualified specialists who can do those things. But that is a small subset of all of medicine.)

Second, even if we take a domain like physics, where in many areas there are qualified specialists, that doesn't mean that you can read pop science books by those qualified specialists and get an understanding of the physics from them, even if you accept that any such understanding will be limited. Many of the things even Nobel Prize winning physicists say in pop science books and articles and videos are not well established physics, they are just that particular physicist's opinions. And if you yourself aren't a qualified specialist, you have no way of knowing when the physicist is telling you well established physics and when they are just giving their opinions. So even in this hardest of hard sciences, "seek help of a qualified specialist" doesn't actually work well as a strategy.


> You're assuming that there is a qualified specialist in the area in question.

This is a necessary assumption, because you cannot be a specialist in most fields. You can probably be a specialist in one narrow field, if you spend your life becoming one.

> So even in this hardest of hard sciences, "seek help of a qualified specialist" doesn't actually work well as a strategy.

In my experience it works. The trick is to talk with the specialist, to lay out your understanding of the problem to them, to get their critique, fix your understanding and then do several iterations of this. If you really need to be sure that your understanding is adequate for the task ahead of you, you could try to talk with several specialists.

And overall I have the same feeling of a black-and-white worldview on your part. Trying to guess what is different between you and me, I come to this:

Truth is not Real, it is Ideal. You cannot reach it. Any understanding is Real, so it is not ideal, it is not perfect. Any prediction is probabilistic. There is Reality itself, there is my limited understanding of it, and there is a vast ocean of information on how others understand Reality. This ocean of information is not the Ideal understanding either. So the crucial skill is to learn how to drink from the ocean the couple of gulps that will be enough for my current task. And it is not just my preference for how to deal with the ocean, it is the only viable way to deal with it, because I cannot drink the whole ocean. I can't even drink it faster than it gains new information, so even if I had infinite time to drink it, I would never be able to drain it.


My point was to illustrate that our physics models don't agree on the nature of this expansion (Hubble tension) so using it to dismiss the fact that the observable universe is dense enough to form an event horizon seems like a stretch.


> My point was to illustrate that our physics models don't agree on the nature of this expansion (Hubble tension)

The Hubble tension is not an uncertainty about the "nature" of the expansion. No matter how that tension gets resolved, our underlying mathematical model of "the expanding universe" will not change. All that will change is that the value we use for one particular parameter in that model will be more accurately known.

> using it to dismiss the fact that the observable universe is dense enough to form an event horizon

I have not dismissed that fact at all. I have simply pointed out that, as a matter of physics, that fact does not mean our universe actually has an event horizon. "Dense enough to form a event horizon" is just a mathematical calculation. Whether that calculation actually means something, physically, does not just depend on the value it gives you. It also depends on the underlying spacetime model, and our underlying spacetime model for the universe as a whole (which, as noted above, is not in dispute at all, Hubble tension or no) is not the one in which the mathematical calculation of "dense enough to form an event horizon" has any physical meaning. (In more technical language, that calculation only has physical meaning in the Kerr-Newman family of spacetimes, but the FRW spacetime used to model our universe as a whole is not in that family.)


Magnitude difference between dark energy and the Schwarzschild radius.


I guess that'd explain the Big Bang nicely. It does make the "outer universe" disappointingly unobservable, though.


> Because this storage system’s query language is more powerful than the underlying one’s, its read operations in general may overfetch and then filter results.

Seems like this is the crux of the problem. Migrating and/or replicating data to an underlying data layer that supports the required query logic probably makes more sense?


Compared to maintaining 200 lines of code?

I'd at least be waiting for the next feature request of "can we just add one more operation to match()" before saying we needed to move our whole storage layer to something else.
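For readers who haven't seen the pattern being discussed, it's roughly the sketch below; `match()` is the name from the thread, but its signature and everything else here are my guesses. The idea is to push down whatever coarse filter the underlying store understands, then apply the richer predicate in application code.

```python
from typing import Callable, Iterable

# Hypothetical richer predicate that the storage layer's query language offers
# but the underlying store cannot evaluate itself.
def match(row: dict, pattern: str) -> bool:
    return pattern.lower() in str(row.get("name", "")).lower()

def query(fetch: Callable[[dict], Iterable[dict]],
          coarse_filter: dict,
          pattern: str) -> list[dict]:
    # 1. Overfetch: push only the coarse filter down to the backend.
    candidates = fetch(coarse_filter)
    # 2. Filter: apply the more expressive predicate in application code.
    return [row for row in candidates if match(row, pattern)]
```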


The feature where it tells me the weather.

I'd say there's about an 80% chance that it loads the forecast in under 5 seconds.

Loading the radar has far worse performance - it only works ~50% of the time and failures just show an empty map forever.

When it does manage to load the weather I've found the accuracy to be hot garbage.


This would be an awesome feature. It would be a good resource for plant identification, so I won't inadvertently introduce an invasive species into my local environment.


I can't bring myself to believe that bollards are the answer. Placing them everywhere would be exceptionally expensive. Emergency vehicles would get stuck in traffic more often. Car accidents involving bollards would be more dangerous to drivers and do more costly damage to cars.

While I agree that pedestrian lives would be saved, the net cost to human life likely remains unchanged.


Very cool graphic. However, using the word "digital" to emphasize the juxtaposition between pictures of wood joints on the internet and actual real-life wood joints is a bit perplexing.


I think “digital wood joints” emphasizes that they’re meant to be cut with a CNC, and thus aren’t the traditional wood joints that have been taught for centuries. I don’t think it has anything to do with the fact that this content is available digitally.


Honestly I thought they were trying to make a cute pun by calling finger joints 'digital'

Your interpretation is more likely, though. And explains why so many of the joints are weird and look like they'd leave gaps...

