
Well, "what a digital image is" is a sequence of numbers. There's no single correct way to interpret the numbers, it depends on what you want to accomplish. If your digital image is a representation of, say, the dead components in an array of sensors, the signal processing theoretic interpretation of samples may not be useful as far as figuring out which sensors you should replace.


> There's no single correct way to interpret the numbers

They are just bits in a computer. But there is a correct way to interpret them in a particular context. For example, 32 bits on their own can be meaningless, or they can have a well-defined interpretation as a two's complement integer.
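To make that concrete, here's a minimal Python sketch (not from the thread) showing the same 32 bits under two interpretations:

```python
import struct

# The same 32 bits, read two ways. The bit pattern 0xFFFFFFFF is
# 4294967295 as an unsigned integer but -1 as a two's complement integer.
raw = b"\xff\xff\xff\xff"

as_unsigned = struct.unpack("<I", raw)[0]  # unsigned 32-bit
as_signed = struct.unpack("<i", raw)[0]    # two's complement 32-bit

print(as_unsigned)  # 4294967295
print(as_signed)    # -1
```

The bits don't change; only the interpretation does, and in each context exactly one interpretation is the intended one.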

If you are looking to understand how an operating system will display images, or how graphics drivers work, or how photoshop will edit them, or what digital cameras produce, then it’s the point sample definition.


Cameras don't take point samples. That's an approximation, just as inaccurate as a rectangle approximation.

And for pixel art, the intent is usually far from point samples of a smooth color field.

Multiple interpretations matter in different contexts, even within the computer.


> Cameras don't take point samples. That's an approximation

They use a physical process to estimate the light at a single point. That's the model they try to approximate.

> And for pixel art, the intent is usually far from points on a smooth color territory.

And notice that to display pixel art you need to tell it to interpret the image data differently.

Also, pixel art looks vastly different on a CRT, the display it was designed for, where a pixel is even less like a rectangle.


> They use a physical process to attempt to determine light at a single point. That’s their model they try to approximate.

According to who?

A bare camera sensor with a lens sure doesn't do that: it collects squares of light, usually in a mosaic of different colors. Any point approximation would have to happen in software.


Yep, cameras filter the input from sensors.

> Any point approximation would have to be in software.

Circuits can process signals too.


They usually do, but their software decisions are not gospel. They don't change the nature of the underlying sensor, which grabs areas that are pretty square.


> which grabs areas

And outputs what? Just because the input is an area does not mean the output is an area.

What if it outputs the peak of the distribution across the area?

> that are pretty square.

If we look at a camera sensor and do not see a uniform grid of packed area elements, would that convince you?

I notice you haven't shared any criticism of the point model, which is widely used in the field.


> And outputs what? Just because the input is an area does not mean the output is an area.

> What it if it outputs the peak of the distribution across the area?

It outputs a voltage proportional to the (filtered) photon count across the entire area.
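The two models being argued over can be sketched in one dimension. In this toy Python example (my illustration, not anyone's actual sensor model), `scene` is a made-up stand-in for incoming light; the area (box) model averages it over the pixel's extent, roughly how a sensor well accumulates photons, while the point model evaluates it at one location:

```python
import math

def scene(x):
    # A toy 1-D "scene": brightness varying rapidly with position.
    return 0.5 + 0.5 * math.sin(40 * x)

def point_sample(center):
    # Point model: the value at a single location.
    return scene(center)

def area_sample(center, width, steps=1000):
    # Area (box) model: average the scene across the pixel's extent,
    # approximated here by a discrete sum.
    total = sum(scene(center + width * (i / steps - 0.5)) for i in range(steps))
    return total / steps

# For detail coarser than the pixel the two models agree; for detail
# finer than the pixel, the area sample averages it out while the
# point sample can alias.
print(point_sample(0.5), area_sample(0.5, 0.1))
```

As the pixel width shrinks toward zero, the area sample converges to the point sample, which is one reason the two models get conflated.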

> If we look at a camera sensor and do not see a uniform grid of packed area elements would that convince you?

Non-uniformity won't convince me points are a better fit, but if the median camera doesn't use a grid I'll be interested in what you have to show.

> I notice you haven’t shared any criticism of the point model - widely understood by the field.

This whole comment chain is a criticism of modeling the input as points. My criticism of the output is implied by my pixel art comment above (because point-like upscaling causes a giant blur), and it also appears in other comments like this one: https://news.ycombinator.com/item?id=43777957
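The pixel-art blur claim is easy to demonstrate with a toy 1-D upscaler (a sketch under simplified assumptions, not production resampling code): nearest-neighbour treats each sample as a little square and keeps a hard edge hard, while linear interpolation between point samples smears it across several output pixels:

```python
def upscale_nearest(row, factor):
    # "Little square" model: repeat each pixel's value.
    return [v for v in row for _ in range(factor)]

def upscale_linear(row, factor):
    # "Point sample" model: reconstruct by linear interpolation
    # between neighbouring samples.
    out = []
    n = len(row)
    for i in range(n * factor):
        x = i / factor
        lo = min(int(x), n - 1)
        hi = min(lo + 1, n - 1)
        t = x - lo
        out.append(row[lo] * (1 - t) + row[hi] * t)
    return out

# A hard black/white edge, as in pixel art.
row = [0, 0, 1, 1]
print(upscale_nearest(row, 4))  # edge stays hard: only 0s and 1s
print(upscale_linear(row, 4))   # intermediate grey values appear
```

Real resamplers use better reconstruction filters than linear, but the qualitative effect on hard edges is the same.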


> It outputs a voltage proportional to the (filtered) photon count across the entire area.

This is not true. And it’s even debunked in the original article.


> And it’s even debunked in the original article.

No, it's not. That article does not mention digital cameras anywhere. It briefly says that scanners give a Gaussian, and I don't want to do enough research to see how accurate that is, but that's the only input device it discusses in detail.

It also gives the impression that computer rendering uses boxes, when usually it's the opposite and rendering uses points.



