baltimore's comments | Hacker News

Since the first (good) image generation models became available, I've been trying to get them to generate an image of a clock with 13 instead of the usual 12 hour divisions. I have not been successful. Usually they will just replace the "12" with a "13" and/or mess up the clock face in some other way.

I'd be interested if anyone else is successful. Share how you did it!


I've noticed that image models are particularly bad at modifying popular concepts in novel ways (way worse "generalization" than what I observe in language models).


Maybe LLMs always fail to generalize outside their data set, and it’s just less noticeable with written language.


This is it. They're language models which predict next tokens probabilistically, and a sampler picks one according to the desired "temperature". Any generalization outside their data set is an artifact of random sampling: happenstance and circumstance, not genuine substance.


However: do humans have that genuine substance? Is human invention and ingenuity more than trial and error, more than adaptation and application of existing knowledge? Can humans generalize outside their data set?

A yes-answer here implies belief in some sort of gnostic method of knowledge acquisition. Certainly that comes with a high burden of proof!


Yes. Humans can perform abduction, extrapolating from given information to new information. LLMs cannot; they can only interpolate new data from existing data.


Yes


Can you elaborate on what you mean by that, and prove it?

https://journals.sagepub.com/doi/10.1177/09637214251336212


The proof is that humans do it all the time and that you do it inside your head as well. People need to stop with this absurd level of rampant skepticism that makes them doubt their own basic functions.


The concept is too nebulous to "prove", but the fact that I'm operating a machine (relatively) skillfully to write to you shows we are in fact able to generalise. This wasn't planned; we came up with this. Same with cars etc. We're quite good at the whole "tool use" thing.


Most image models are diffusion models, not LLMs, and have a bunch of other idiosyncrasies.

So I suspect it's more that lessons from diffusion image models don't carry over to text LLMs.

And the image models that are based on multimodal LLMs (like Nano Banana) seem to do a lot better at novel concepts.


But the clocks in this demo aren't images.


Yes, but they are reasoning within their dataset, which will contain multiple examples of HTML+CSS clocks.

They are just struggling to produce good results because, being language models, they don't have great spatial reasoning skills.

Their output normally has all the elements, just not in the right place/shape/orientation.


They definitely don't completely fail to generalise. You can easily prove that by asking them something completely novel.

Do you mean that LLMs might display a similar tendency to modify popular concepts? If so that definitely might be the case and would be fairly easy to test.

Something like "tell me the lord's prayer but it's our mother instead of our father", or maybe "write a haiku but with 5 syllables on every line"?

Let me try those ... nah ChatGPT nailed them both. Feels like it's particular to image generation.


They used to do poorly with modified riddles, but I assume those have been added to their training data now (https://huggingface.co/datasets/marcodsn/altered-riddles ?)

Like, the response to "... The surgeon (who is male and is the boy's father) says: I can't operate on this boy! He's my son! How is this possible?" used to be "The surgeon is the boy's mother"

The response to "... At each door is a guard, each of which always lies. What question should I ask to decide which door to choose?" would be an explanation of how asking the guard what the other guard would say would tell you the opposite of which door you should go through.


Also, they're fundamentally bad at math. They can draw a clock because they've seen clocks, but going further requires some calculations they can't do.

For example, try asking Nano Banana to do something simpler, like "draw a picture of 13 circles." It likely will not work.


  Generate an image of a clock face, but instead of the usual 12 hour numbering, number it with 13 hours. 

Gemini, 2.5 Flash or "Nano Banana" or whatever we're calling it these days. https://imgur.com/a/1sSeFX7

A normal (ish) 12h clock. It numbered it twice, in two concentric rings. The outer ring is normal, but the inner ring numbers the 4th hour as "IIII" (fine, and a thing that clocks do) and the 8th hour as "VIIII" (wtf).


It should be pretty clear already that anything which is based on (limited to?) communicating in words/text can never grasp conceptual thinking.

We have yet to design a language to cover that, and it might be just a donquijotism we're all diving into.


> We have yet to design a language to cover that, and it might be just a donquijotism we're all diving into.

We have a very comprehensive and precise spec for that [0].

If you don't want to hop through the certificate warning, here's the transcript:

- Some day, we won't even need coders any more. We'll be able to just write the specification and the program will write itself.

- Oh wow, you're right! We'll be able to write a comprehensive and precise spec and bam, we won't need programmers any more.

- Exactly

- And do you know the industry term for a project specification that is comprehensive and precise enough to generate a program?

- Uh... no...

- Code, it's called code.

[0]: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...


I've been thinking about that a lot too. Fundamentally it's just a different way of telling the computer what to do, and if it seems like telling an LLM to make a program is less work than writing it yourself, then either your program is extremely trivial or there are dozens of redundant programs in the training set that are nearly identical.

If you're actually doing real work, you have nothing to fear from LLMs, because any prompt which is specific enough to create a given computer program is going to be comparable in terms of complexity and effort to having done it yourself.


I don’t think that’s clear at all. In fact the proficiency of LLMs at a wide variety of tasks would seem to indicate that language is a highly efficient encoding of human thought, much moreso than people used to think.


Yeah, it's amazing that the parent post misunderstands the fundamental realities of LLMs; the compression they reveal in linguistics, even if blurry, is incredible.



Really? I can grasp the concept behind that command just fine.


I gave this "riddle" to various models:

> The farmer and the goat are going to the river. They look into the sky and see three clouds shaped like: a wolf, a cabbage and a boat that can carry the farmer and one item. How can they safely cross the river?

Most of them just give the answer to the well-known river-crossing riddle. Some "feel" that something is off, but still have a hard time figuring out that the wolf, boat and cabbage are just clouds.



It really shows how LLMs work. It's all about probabilities, not about understanding. If something looks very similar to a well-known problem, the LLM has a hard time "seeing" contradictions, even ones that are really easy for humans to notice.


Claude has no problem with this: https://imgur.com/a/ifSNOVU

Maybe older models?


Try to twist around words and phrases, at some point it might start to fail.

I tried it again yesterday with GPT. GPT-5 manages quite well too in thinking mode, but starts to crack in instant mode. 4o completely failed.

It's not that LLMs are unable to solve things like that at all, but it's really easy to find some variations that make them struggle really hard.



That's just a patch to the training data.

Once companies see this starting to show up in the evals and criticisms, they'll go out of their way to fix it.


What would the "patch" be? Manually create some images of 13-hour clocks and add them to the training data? How does that solution scale?


s/13/17/g ;)


This is really cool. I tried to prompt Gemini but every time I got the same picture. I do not know how to share a session (as is possible with ChatGPT), but the prompts were:

If a clock had 13 hours, what would be the angle between two of these 13 hours?

Generate an image of such a clock

No, I want the clock to have 13 distinct hours, with the angle between them as you calculated above

This is the same image. There need to be 13 hour marks around the dial, evenly spaced

... And its last answer was

You are absolutely right, my apologies. It seems I made an error and generated the same image again. I will correct that immediately.

Here is an image of a clock face with 13 distinct hour marks, evenly spaced around the dial, reflecting the angle we calculated.

And the very same clock, with 12 hours, and a 13th above the 12...


This is probably my biggest problem with AI tools, having played around with them more lately.

"You're absolutely right! I made a mistake. I have now comprehensively solved this problem. Here is the corrected output: [totally incorrect output]."

None of them ever seems to have the ability to say "I cannot seem to do this" or "I am uncertain if this is correct, confidence level 25%." The only time they will give up or refuse to do something is when they are deliberately programmed to censor for often dubious "AI safety" reasons. All other times, they come back again and again with extreme confidence while producing total garbage output.


I agree. I see the same even in simple code, where they will bend over backwards apologizing and then generate very similar crap.

It is like they are sometimes stuck in a local energetic minimum and will just wobble around various similar (and incorrect) answers.

What was annoying in my attempt above is that the picture was identical for every attempt.


These tools' 'attitude' reminds me of an eager but incompetent intern, or a poorly trained administrative assistant who works for a powerful CEO. All sycophancy, confidence and positive energy, but not really getting much done.


The issue is that they always say "Here's the final, correct answer" before they've written the answer, so of course the LLM has no idea whether it's going to be right before it starts, because it has no clue what it's going to say.

I wonder how it would do if instead it were told "Do not tell me at the start that the solution is going to be correct. Instead, tell me the solution, and at the end tell me if you think it's correct or not."

I have found that on certain logic puzzles that it simply cannot get right, it always tells me that it's going to get it right "this last time," but if asked later it always recognizes its errors.


Gemini specifically is actually kinda notorious for giving up.

https://www.reddit.com/r/artificial/comments/1mp5mks/this_is...


You can click the share icon (the two-way branch icon; it doesn't look like Apple's share icon) under the image it generates to share the conversation.

I'm curious if the clock image it was giving you was the same one it was giving me:

https://gemini.google.com/share/780db71cfb73


Thanks for the tip about sharing!

No, my clock was an old style one, to be put on a shelf. But at least it had a "13" proudly right above the "12" :)

This reminds me of my kids when they were in kindergarten and were bringing home art that needed extra explanation before you could tell what it was. But they were very proud!


I was able to have AI generate an image of this, not via diffusion/autoregression but by having it write Python code to create the image.

ChatGPT made a nice-looking clock with matplotlib that had some bugs it had to fix (the hours were counter-clockwise). Gemini made correct code one-shot; it used Pillow instead of matplotlib, but the result didn't look as nice.
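
For the curious, here is a minimal sketch of the kind of matplotlib code a model might produce for this. It is a hypothetical reconstruction, not the actual generated output:

  # Hypothetical sketch: draw a 13-hour clock face with matplotlib,
  # with hour 13 at the top and the numbers running clockwise.
  import math
  import matplotlib.pyplot as plt

  HOURS = 13
  fig, ax = plt.subplots(figsize=(4, 4))
  ax.set_aspect("equal")
  ax.axis("off")
  ax.add_patch(plt.Circle((0, 0), 1.0, fill=False, linewidth=2))  # clock rim

  for h in range(1, HOURS + 1):
      a = math.pi / 2 - 2 * math.pi * h / HOURS   # hour HOURS lands at the top
      x, y = math.cos(a), math.sin(a)
      ax.plot([0.92 * x, x], [0.92 * y, y], color="black")   # tick mark
      ax.text(0.8 * x, 0.8 * y, str(h), ha="center", va="center")

  ax.set_xlim(-1.1, 1.1)
  ax.set_ylim(-1.1, 1.1)
  plt.savefig("clock13.png", dpi=150, bbox_inches="tight")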


Weird, I never tried that. I tried all the usual tricks that usually work, including swearing at the model (this scarily works surprisingly well with LLMs), and nothing. I even tried to go the opposite direction: I want a 6-hour clock.


I do playing card generation, and almost all models struggle beyond the "6 of X".

My working theory is that they were trained really hard to generate 5 fingers on hands but their counting drops off quickly.


That's because they literally cannot do that. Doing what you're asking requires an understanding of why the numbers on the clock face are where they are and what it would mean if there was an extra hour on the clock (ie that you would have to divide 360 by 13 to begin to understand where the numbers would go). AI models have no concept of anything that's not included in their training data. Yet people continue to anthropomorphize this technology and are surprised when it becomes obvious that it's not actually thinking.


The hope was for this understanding to emerge as the most efficient solution to the next-token prediction problem.

Put another way, it was hoped that once the dataset got rich enough, developing this understanding would actually be more efficient for the neural network than memorizing the training data.

The useful question to ask, if you believe the hope is not bearing fruit, is why. Point specifically to the absent data or the flawed assumption being made.

Or more realistically, put in the creative and difficult research work required to discover the answer to that question.


It's interesting because if you asked them to write code to generate an SVG of a clock, they'd probably use a loop from 1 to 12, using sin and cos of the angle (given by the loop index over 12 times 2pi) to place the numerals. They know how to do this, and so they basically understand the process that generates a clock face. And extrapolating from that to 13 hours is trivial (for a human). So the fact that they can't do this extrapolation on their own is very odd.
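Something like this, roughly (a sketch of the loop described above, assuming Python emitting an SVG string, not actual model output); switching HOURS from 12 to 13 is the whole "extrapolation":

  # Place numerals with sin/cos around a circle and emit an SVG clock face.
  import math

  HOURS = 13  # set to 12 for an ordinary clock face
  parts = ['<svg xmlns="http://www.w3.org/2000/svg" viewBox="-110 -110 220 220">',
           '<circle r="100" fill="none" stroke="black" stroke-width="2"/>']
  for h in range(1, HOURS + 1):
      a = math.pi / 2 - 2 * math.pi * h / HOURS    # hour HOURS sits at the top
      x, y = 85 * math.cos(a), -85 * math.sin(a)   # SVG y-axis points down
      parts.append(f'<text x="{x:.1f}" y="{y:.1f}" text-anchor="middle" '
                   f'dominant-baseline="central">{h}</text>')
  parts.append('</svg>')
  print("\n".join(parts))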


gpt-image-1 and Google Imagen understand prompts, they just don't have training data to cover these use cases.

gpt-image-1 and Imagen are wickedly smart.

The new Nano Banana 2 that has been briefly teased around the internet can solve incredibly complicated differential equations on chalk boards with full proof of work.


>> The new Nano Banana 2 that has been briefly teased around the internet can solve incredibly complicated differential equations on chalk boards with full proof of work.

That's great, but I bet it can't tie its own shoes.


No, but I can get it to do a lot of work.

It's a part of my daily tool box.


And a submarine can't swim. Big deal.


I wonder if you would have more success if you painstakingly described the shape and features of a clock in great detail but never used the words clock or time or anything that might give the AI the hint that they were supposed to output something like a clock.


And this is a problem for me. I guess that it would work, but as soon as the word "clock" appears, gone is the request because a clock HAS.12.HOURS.

I use this a lot in cybersecurity when I need to do something "illegal". I am refused help, until I say that I am doing research on cybersecurity. In that case no problem.


The problem is more likely the tokenization of images than anything. These models do their absolute worst when pictures are involved, but are seemingly miraculous at generalizing with just text.


I wonder if it's because we mean different things by generalization.

For text, "generalization" is still "generate text that conforms to all the usual rules of the language". For images of 13-hour clock faces, we're explicitly asking the LLM to violate the inferred rules of the universe.

I think a good analogy would be asking an LLM to write in English, except the word "the" now means "purple". They will struggle to adhere to this prompt in a conversation.


That's true, but I think humans would stumble a lot too (try reading old printed text from the 18fh cenfury where fhey used "f" insfead of t in prinf, if's a real frick fo gef frough).

However, humans are pretty adept at discerning images, even ones outside the norm. I really think there is some kind of architectural block hampering transformers' ability to really "see" images. For instance, if you show any model a picture of a dog with 5 legs (a fifth leg photoshopped onto its belly), they all say there are only 4 legs. And will argue with you about it. Hell, GPT-5 even wrote a leg detection script in Python (impressive) which detected the 5 legs, and then it said the script was bugged and modified the parameters until one of the legs wasn't detected, lol.


An "f" never replaced a "t".

You probably mean the "long s" that looks like an "f".


Yes, the problem is that these so-called "world models" do not actually contain a model of the world, or of any world.


Ah! This is so sad. The manager types won't be able to add an hour (actually, two) to the day even with AI.


From my experience they quickly fail to understand anything beyond a superficial description of the image you want.



Related ongoing thread:

Nano Banana can be prompt engineered for nuanced AI image generation - https://news.ycombinator.com/item?id=45917875 - Nov 2025 (214 comments)


I've been trying for the longest time and across models to generate pictures or cartoons of people with six fingers and now they won't do it. They always say they accomplished it, but the result always has 5 fingers. I hate being gaslit.


LLMs are terrible at out-of-distribution (OOD) tasks. You should use chain-of-thought suppression and give constraints explicitly.

My prompt to Grok:

---

Follow these rules exactly:

- There are 13 hours, labeled 1–13.

- There are 13 ticks.

- The center of each number is at angle: index * (360/13)

- Do not infer anything else.

- Do not apply knowledge of normal clocks.

Use the following variables:

HOUR_COUNT = 13

ANGLE_PER_HOUR = 360 / 13 // 27.692307°

Use index i ∈ [0..12] for hour marks:

angle_i = i * ANGLE_PER_HOUR

I want html/css (single file) of a 13-hour analog clock.

---

Output from Grok:

https://jsfiddle.net/y9zukcnx/1/


> Follow these rules exactly:

"Here's the line-by-line specification of the program I need you to write. Write that program."


Can you write this program in any language?


Yes.


No, do I need to?


It's lazy to brush off the major advantages of a pseudocode-to-any-language transpiler as if it's somehow easy or commonplace.


Well, that's cheating :) You asked it to generate code, which is OK, but it does not represent a directly generated image of a clock.

Can grok generate images? What would the result be?

I will try your prompt on chatgpt and gemini


Gemini failed miserably - a standard 12-hour clock.

Same for ChatGPT.

And Perplexity replaced 12 with 13.


> Please create a highly unusual 13-hour analog clock widget, synchronized to system time, with fully animated hands that move in real time, and not 12 but 13 hour markings - each will be spaced at not 5-minute intervals, but at 4-minute-37-second intervals. This makes room for all 13 hour markings. Please pay attention to the correct alignment of the 13 numbers and the 13 hour marks, as well as the alignment of the hands on the face.

This gave me a correct clock face on Gemini, after the model spent a lot of time thinking (and kind of thrashing in a loop for a while). The functionality isn't quite right, not that it entirely makes sense in the first place, but the face - at least in terms of the hour marks - looks OK to me.[0]

[0] https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...
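
For what it's worth, the 4-minute-37-second spacing in that prompt checks out: 60 minutes split into 13 marks is about 4 min 37 s, and the angular spacing is 360/13 ≈ 27.7°. A quick arithmetic check (just illustrating the numbers, nothing model-specific):

  # Sanity check of the prompt's numbers for a 13-hour face.
  marks = 13
  seconds = 60 * 60 / marks          # 276.92... seconds between hour marks
  print(divmod(round(seconds), 60))  # (4, 37) -> 4 min 37 s
  print(360 / marks)                 # 27.692... degrees between marks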


I'll also note that the output isn't quite right --- the top number should be 13 rather than 1!


I mean, the specification for the hour marks (angle_i) starts with a mark at angle 0. It just followed that spec. ;)


Close enough, but the digit at the top should be the highest, not 1 :/


Same author in March 2014 was having segfault issues with atop apparently: https://rachelbythebay.com/w/2014/03/02/sync/


Rachel's been a reliable source of interesting issues like these for the better part of eternity now. Her blog's well worth reading.


After reading that: atop should've used SQLite.


My version of the full glass of wine challenge is "clock face with 13 hour divisions". Nothing I've tried has been able to do it yet.


> Really weird

No, not weird. The extra stuff is there to show you ads and/or track your behavior, which generates a stream of revenue for the TV maker. W/o the extra stuff, the only revenue comes from the one-time purchase.


> ... for a repressive government ...

Why shouldn't a virtuous and transparent government (should one materialize somehow, somewhere) be interested in identifying leakers?


That’s like asking why a fair and just executive shouldn’t be interested in eliminating the overhead of an independent judiciary. Synchronically, it should. Diachronically, that’s one of the things that ensures that it remains fair and just. Similarly for transparency and leakers, though we usually call those leakers “sources speaking on condition of anonymity” or some such. (It does mean that the continued transparency of a modern democratic government depends on people’s continual perpetration of—for the most part—mildly illegal acts. Make of that what you will.)


Both can be true! This is essentially the "making it easier to do [x]" argument, which is itself essentially security through obscurity.

It was always possible to watermark everything: any nearly-imperceptible bit can be used to encode data that can be used overtly.

Now enabling everyone everywhere to do it, and integrating it, may have second-order effects that are the opposite of one's intention.

It is a very convenient thing, for no one to trust what they can see. Unless it was Validated (D) by the Gubmint (R), it is inscrutable and unfalsifiable.
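
To make the "any bit can carry a watermark" point concrete, here is a toy sketch that hides a couple of bytes in the least-significant bits of pixel values. It is purely illustrative; real watermarking schemes are designed to survive compression, cropping and so on:

  # Toy example: hide bits in the least-significant bit of each pixel value.
  def embed(pixels, message):          # pixels: list of 0-255 ints
      bits = [(byte >> i) & 1 for byte in message for i in range(8)]
      return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

  def extract(pixels, n_bytes):
      bits = [p & 1 for p in pixels[:n_bytes * 8]]
      return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes))

  stego = embed(list(range(64)), b"hi")
  print(extract(stego, 2))             # b'hi'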


If they are transparent, what is leaking?


There is always a need for _some_ secrets to be kept. At the very least from external adversaries.


> Why shouldn't a virtuous and transparent government

That doesn't exist.


The parent comment says that it has dangerous use-cases, not that it does not have desirable ones.


No, a mass terror attack would indiscriminately target victims. This is almost entirely opposite -- an organization widely recognized as a terrorist group (1) is narrowly targeted.

(1) https://en.wikipedia.org/wiki/List_of_designated_terrorist_g...


Also because one would have to weigh the alternative.

Imagine Israel had declared an old-fashioned war against Lebanon in response to the missile strikes originating from its territory.

I think the number of civilian casualties of a conventional "legal" war would be much, much higher than the collateral damage of this operation.

Now, does that make it "right"? To me, war is horror and is to be avoided at all costs. Is a smaller horror a cost one is willing to pay to avoid a bigger horror? Hard to say. But I think it's still important to at least try to see things in a broader context, otherwise we may never understand why people on the ground make the choices they do.


Your post is nonsense, because this operation was clearly meant as an opening salvo (to create confusion) in a war against Lebanon. It was at risk of being discovered, so they triggered the explosions early, to at least get some effect. Reports in Israeli media confirmed this rather quickly after the explosions. Also, the mass bombings killing thousands of people in a few days started almost right away after the operation.

I don't understand people who think Israel is some benevolent entity that just tries to defend itself while causing as little harm and disruption as possible. They murder innocents left and right every day, create havoc in multiple countries at once, terrorize and occupy people for decades, all while constantly playing the victim card.




Does Hamas have military infrastructure in dedicated buildings at all?


This is getting transpiled to SQL, right? So I still have to understand how the (now generated) SQL will perform on my data and DBMS, plus I got to learn this new syntax. This will be a hard sell.


If that was the end of the story, no transpiled language could ever succeed. But they sometimes do. One could say the very same about e.g. Markdown, which typically renders as HTML, so you still have to understand HTML to a degree. And in the right environment (e.g. GitHub repo readmes) you can always fall back to HTML in places where the transpiler support is lacking.

The great thing about transpilers is when they 1) produce tractable, nicely formatted output and 2) you get an 'escape hatch' so you can fall back on the transpilation target. Because then you can always 1) check the output to understand details of an unfamiliar construct, 2) just use the target language in cases where you want to copy-paste a solution written in the target, and 3) just opt out of using the transpiler by using the code it produced as a starting point for your new implementation.


"no transpiled language could ever succeed".... TypeScript.


TypeScript doesn't introduce new runtime semantics in practice; in 99% of cases the generated JS is your TS code with types erased. There are no magic keywords that expand into pages of generated JS.


They implemented the `x.y?.z` syntax pretty long before JS did, so that was at least transpiled for a while. I'll bet there are more features like that.


Similar story for await and iterator


yes but databases are a bit more critical in nature


Not exactly. Just like with regular code, most of the stuff done on databases is pretty trivial and not resource-heavy. And the things that are really performance-critical are usually crafted very differently, even in SQL.


PRQL is not aiming to be an ORM or data-layer replacement and is focusing on a specific use case, viz. making it simple and ergonomic to write *Analytical Queries*. See the FAQ [0] for more on this.

In most cases you want to be able to interactively do some data exploration and craft your query as you go along - the sequential flow of PRQL is great for this as you can simply add lines as you go along.

For most people, the RDBMS query optimiser will do a good job of turning the resulting SQL into an optimised query plan. If you need to hand craft the production SQL code for the last few percent of performance, then PRQL gives you full access to the SQL to do that. You probably will still have saved yourself development time by generating the SQL from PRQL in the first place though.

[0]: https://prql-lang.org/faq/#:~:text=How%20is%20PRQL%20differe...


If you're spoiled your friendly neighbourhood DBA will help you there.

The problem I have with these tools is that you then have to reincorporate their optimizations in such a way that the transpiled SQL is identical. If you have to resort to an ORM expression API or raw SQL you gain nothing and are arguably in a worse situation.


SQL query optimisation has been studied since the days IBMers were competing against Quel. If the transpiled SQL has sensible optimisations performance could be equal to or even faster than hand-written SQL. I don't see how this differs from a "language extension" that adds a bit of Pythonic flavour to SQL, which IMO is the right step forward.


Related: alcohol is prohibited as a performance-enhancing substance in most archery and shooting competitions.


And to most hunters, alcohol is encouraged as a performance-enhancing substance.

I believe this is even reflected in some video games, where drinking will give you “deadeye” for a period, rendering your shots more accurate.


Is that a joke, a stereotype, or a serious proposition? I've only ever been on hunting trips that started so early in the morning that nobody drank anything other than coffee. Afterwards, sure. Maybe there's another way.


> Is that a joke, stereotype, or a serious proposition?

Yes, certainly.

On a multi-day trip where the vast majority of the time is spent not seeing anything, and a vanishing percent is seeing something very briefly and needing to ready your gun, calm your nerves, aim, fire, in less than a second... you find any way possible to minimize time spent in the "calm your nerves" portion and improve the accuracy of the "aim, fire" one.

If that ties in with improved mood during the "not seeing anything" portion, even better.


But required for darts and snooker.


I assume that’s a safety thing, not a level-playing-field thing.


No, in small doses it enhances performance by lowering heart rate, reducing anxiety, slowing breathing, etc. All are important in sports like rifle shooting.


Performance-enhancing indicates it makes shooting easier.


I'm sure only to a certain point of consumption. There's surely a "Ballmer Peak" equivalent.

https://xkcd.com/323/


Of course: if you're seeing double, you went too far!


Of course.

I compete in pistol shooting at an amateur level. I have never been even slightly intoxicated when shooting a firearm. But I shoot 10m air pistol[1] at home for practice, and I've tried it while a little tipsy. So I can tell you that for sure, 3+ drinks will have you thinking you're doing well when in fact your vision is so delayed from reality that you cannot call your shots accurately. You'll think it was a good shot and it was a 7.

But one or two drinks in, there is a small performance boost. It comes from not 2nd-guessing yourself. You just hold on target and watch it happen.

Pistol shooting is a lot about quieting the mind because nobody can hold the gun perfectly steady. You get the best results from accepting a little bit of natural wobble, and smoothly operating the trigger during the smallest part of that wobble pattern. If you try to shoot the gun right as the sights cross the bullseye you will typically throw the shot off badly. Even if you don't yank the trigger, your reaction time is such that the wobble has moved on by the time you react to it looking perfect, so you end up grouping all around the 10 ring instead of in it this way.

The other thing you'll do in pursuit of perfection is hold too long on the target, not happy with how large your wobble is. You end up holding for 10+ seconds and by the time you break the shot your vision is suffering from the Troxler effect[2] and your hold has gotten worse, not better -- you just think it's good because you aren't seeing it as clearly.

It is best to steadily break the shot not at a specific instant, but anytime within your mostly 9-ring ideal wobble area. You settle into this good wobble zone for a few seconds, perhaps from 3 to 7 seconds after putting the gun on target. By Gaussian distribution you will shoot a lot of 10s, a fair amount of 9s, and a scarce handful of 8s.

But I cannot tell you how hard it is to convince yourself to do this! The mind thinks it can make the gun fire when it touches the 10. It also thinks that the wobble is way, way bigger than it really is, and that you'll shoot 7s if you let it happen. You'll even panic subconsciously right before the shot breaks when it looks "imperfect" and twitch to try to fix the alignment at the last moment -- always terrible.

A little depressant makes it much easier to relax and confidently break the shot in that "good enough" zone. Don't get me wrong, it'll never turn an amateur like me (520 - 550, depending on the day) into a top shooter (580+). But it does make it easier to perform on the 540+ side of my average.

[1] https://en.wikipedia.org/wiki/ISSF_10_meter_air_pistol [2] https://en.wikipedia.org/wiki/Troxler%27s_fading


I shot rimfire bullseye with my father growing up, but I live in a city now and have gotten away from the hobby. Action shooting takes up all the air in American gun culture; I always found it very calming.


Action shooting is definitely the hotter sport right now. But I think bullseye will stay around, especially as the current generation of action shooters ages into it. It has one huge advantage over action: every competitor can shoot at the same time, up to the capacity of the range. If you go to a bullseye match, you get to shoot for a few hours. If you go to an action pistol match you spend hours waiting around only to shoot for a few minutes.


Beta blockers are also banned, IIRC, so I don't think it's a safety thing.


Yes. But not because of Elon Musk.


Is there any end to this? E.g., why not include Galileo's pictograms of Saturn as seen here: https://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=...


Galileo's pictograms aren't being used as a letter, though: they're explicitly a diagram (I believe he says "like this: <picture>").

