A cent is a factor of 2^(1/1200). There are 1200 cents in an octave, which is a doubling in frequency: if we have 1200 of them, 2^(1200/1200) = 2^1 = 2.
Lisp-wise:
2> (exp2 (/ 1200)) ;; exp2 is courtesy of ISO C 99
1.00057778950655
So a cent is about a 0.0578 percent change in pitch, or in whatever determines pitch, such as record rotation speed.
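Going the other way, you can compute how many cents apart any two frequencies (or rotation speeds) are. A minimal sketch in Python, using the standard formula 1200 * log2(f2/f1):

```python
import math

def cents(f1, f2):
    """Interval between two frequencies (or speeds), in cents."""
    return 1200 * math.log2(f2 / f1)

# An octave is 1200 cents by definition:
print(cents(440, 880))

# A 45 RPM record played at 45.5 RPM shifts everything up by about 19 cents:
print(round(cents(45.0, 45.5), 1))
```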
A little more intuitively, 1 semitone is one "note", aka the interval between two successive notes on a piano (including white and black keys) such as C to C#, and to talk about intervals less than that, we use cents as 1/100 of a semitone.
Musically, two notes an octave apart differ by a factor of 2 (e.g. middle A (A4) is 440 Hz and the next A up (A5) is 880 Hz). Our ears hear frequency logarithmically rather than linearly, so each of the 12 semitones in an octave is a factor of 2^(1/12) higher than the previous one (so that 12 of them in a row double the frequency), and therefore each of the 100 cents within a semitone is a factor of (2^(1/12))^(1/100) = 2^(1/1200) greater than the previous one.
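You can sanity-check that these ratios multiply out as claimed; here's a quick sketch in Python:

```python
semitone = 2 ** (1 / 12)
cent = semitone ** (1 / 100)   # same as 2 ** (1/1200)

print(semitone ** 12)   # 12 semitones stacked = one octave = factor of 2
print(cent ** 1200)     # 1200 cents stacked = likewise a factor of 2
print(440 * semitone)   # one semitone above A4 is Bb4, about 466.16 Hz
```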
There are 100 cents to a semitone - by definition. The frequency shift of a 25 to 40 cent adjustment isn't likely to be noticed by the average listener. A lot of times you don't notice it until you try to play guitar along with the song and something sounds a little "off."
It can drive you nuts when you're playing by ear, because when you try to find the root note, i.e. the key, you'll find it lands between two semitones. Always go for the lower tone. For example, if it sounds a little sharper than A, but it's definitely not Bb, then you know it was A and the mastering was sped up. That's another thing, in my experience it's always sped up and never slowed down.
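This "between two semitones" situation is exactly what a tuner app computes: take a detected frequency, snap to the nearest equal-tempered note, and report the leftover in cents. A hedged sketch in Python (the `nearest_note` helper and the A4 = 440 Hz reference are my own assumptions, not anything from a real tuner library):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def nearest_note(freq, a4=440.0):
    """Name the nearest equal-tempered note and the offset from it in cents."""
    semis = 12 * math.log2(freq / a4)   # semitones above/below A4
    nearest = round(semis)
    cents_off = 100 * (semis - nearest)
    midi = 69 + nearest                 # A4 is MIDI note 69
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    return name, cents_off

# A track mastered 30 cents sharp of A440 still reads as "A4", just sharp:
name, off = nearest_note(440 * 2 ** (30 / 1200))
print(name, round(off, 1))  # A4 30.0
```

The positive offset without reaching Bb is the tell that the recording was sped up rather than simply being in a different key.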
I have a theory that one of the reasons Eb standard guitar tuning started becoming so popular in the late 60s and into the 70s was because the speed up was getting to be so common that guitar players started adjusting their tuning for it, not to mention you generally get a "heavier" sound.