
This is a very good question, which goes to the heart of statistical physics. We use phase spaces for this (typically a 6N-dimensional space in which each microstate is represented by a point). The system has a probability of being in (or rather very close to) each microstate, and that probability depends on the conditions: isolated system, constant pressure, constant temperature, fixed number of particles, and so on. Counting microstates is “just” calculating integrals of that probability weight over the phase space. Of course, most of the time these integrals cannot be evaluated exactly, so we have tools to approximate them. There are a lot of subtleties, but that’s the general idea.
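
To make “approximating these integrals” concrete, here is a minimal Python sketch (my own toy example, nothing standard): it estimates the canonical average energy of a 1D harmonic oscillator by sampling points in its 2D phase space and weighting them by the Boltzmann factor. The Hamiltonian and the unit choice m = k = kB = 1 are assumptions for illustration.

    import numpy as np

    # Toy phase-space integral: canonical <E> for a 1D harmonic
    # oscillator, H = p^2/2 + q^2/2 (units with m = k = kB = 1 assumed).
    rng = np.random.default_rng(0)
    T = 1.0                               # temperature
    q = rng.uniform(-10, 10, 1_000_000)   # sampled positions
    p = rng.uniform(-10, 10, 1_000_000)   # sampled momenta
    H = 0.5 * p**2 + 0.5 * q**2           # energy at each sampled point
    w = np.exp(-H / T)                    # unnormalised Boltzmann weight
    print(np.sum(H * w) / np.sum(w))      # ~1.0, the equipartition value kB*T

The same weighted-average structure carries over to real systems; the hard part is that a 6N-dimensional integral can no longer be brute-forced like this, which is where methods like Metropolis Monte Carlo come in.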

The phase space itself does not change with temperature, so there’s nothing weird like the space getting bigger. What changes is the probability of each microstate: as temperature rises, high-energy states become more accessible.
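
A quick numerical illustration of that last point, using a hypothetical two-level system with energy gap dE (all numbers made up, kB = 1 assumed):

    import numpy as np

    dE = 1.0                        # energy gap, in units where kB = 1
    for T in (0.5, 1.0, 5.0):
        # Boltzmann ratio p(excited) / p(ground) at temperature T
        print(T, np.exp(-dE / T))
    # 0.5 -> 0.135, 1.0 -> 0.368, 5.0 -> 0.819

Same two states throughout; only their relative weights shift with temperature.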



So would it make sense to think of a microstate as a region of phase space, i.e. a point together with the points "very close to" it? And "increasing number of microstates" just means that a larger number of these regions have non-negligible probability? In continuous terms you would see this as the distribution flattening out. I think I'm having trouble visualising what we're integrating, since if it's a probability, the integral over the whole phase space can only be 1, right?


Yes, that is the principle. The probability of any single point is zero, because an integral over a single point is zero; hence “very close to it”, meaning an infinitesimal volume around the point.

The integral of the probability over the whole phase space is indeed 1. That is exactly the purpose of the partition function: it is the normalisation factor, because the weight function is not normalised a priori.
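
To spell that out with a toy discrete example (made-up energy levels, units where beta = 1/(kB*T) = 1): the partition function Z is just the sum of the unnormalised weights, and dividing by Z is what makes the probabilities sum to 1.

    import numpy as np

    E = np.array([0.0, 1.0, 2.0, 5.0])  # hypothetical energy levels
    beta = 1.0                          # 1/(kB*T), assumed units
    w = np.exp(-beta * E)               # unnormalised Boltzmann weights
    Z = w.sum()                         # partition function (normalisation)
    p = w / Z                           # normalised probabilities
    print(p.sum())                      # 1.0 (up to floating-point rounding)

In the continuous case Z is the corresponding integral of the weight over the phase space, but its role is identical.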


That helps a lot. Thanks!



