In fact, GCC will virtually always produce optimal code for multiplication by a constant. Targeting AVR, for example (which doesn't always have a hardware multiply), GCC produces more compact code if you use '*' than if you use '<<'. Similarly, every attempt I made to multiply by non-round constants using bit shifts and additions led to less compact code.
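To make that concrete, here's a minimal sketch (my own illustration, not the code I was testing; the file name mul8.c is just a placeholder). With optimization enabled, GCC picks the best instruction sequence for the '*' form on its own, so there's nothing to gain by hand-writing the shift:

    /* Two ways to multiply by 8. With optimization on, GCC chooses
       the best instruction sequence for the '*' form itself. */
    unsigned mul8_mul(unsigned x)   { return x * 8; }
    unsigned mul8_shift(unsigned x) { return x << 3; }

    /* Compare the generated assembly yourself, e.g.:
         avr-gcc -Os -S mul8.c
         gcc -O2 -S mul8.c          (desktop target) */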
The one case so far where I've seen better results from tricks like reciprocal multiplication and shifts was when there was no way for GCC to know it could throw away almost half of the bits because of input range limits. It would be kind of nice to have, say, a 19-bit integer type.
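For anyone curious, here's a rough sketch of the kind of trick I mean (my own illustration, not code from any real project): dividing by 10 with a reciprocal multiply, where the narrow input range is exactly the information the compiler doesn't have.

    #include <stdint.h>

    /* Range-dependent reciprocal division.
       6554 is ceil(2^16 / 10), and the identity
           x / 10 == (x * 6554) >> 16
       holds for 0 <= x <= 16383 -- a guarantee the compiler cannot
       assume from the uint16_t type alone. */
    uint16_t div10_small(uint16_t x)    /* caller promises x < 16384 */
    {
        return (uint16_t)(((uint32_t)x * 6554u) >> 16);
    }

A compiler doing this transformation by itself has to be correct over the whole range of the type, which generally forces a wider constant and a wider intermediate than the input range actually requires.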
Regardless of the compiler, even if (x << 3) were slightly more efficient than (x * 8), I don't see it making much of a difference, since my job will probably be writing code for web-app sorts of things. For the things I do, I actually believe the readability advantage of (x * 8) outweighs the efficiency disadvantage (which doesn't even exist with a modern compiler).
Some interviewers (not just the "homework" kind) raise their bar by not allowing you to make any tiny mistake. And I just don't get it. If someone is good enough to write something like a lite version of the Hacker News website in hours, I'm not going to turn them down over such a mistake.
And the person who could potentially write the "most perfect" code under all of those conditions is often not within that company's price range.
I did interviews at a company for about two years, and there was constant pressure from management to ask trivial crap like that. It finally came to a head, and I invited management and one of the "rockstars" to do a mock interview.
When it became evident that the person they thought of as a "rockstar" could not solve these tests (without prior knowledge of the problems), they immediately discounted them as worthless and stopped bugging me (about that, of course, not about everything else).
What the hell. A developer who writes x << 3 instead of x * 8 in code that is logically doing multiplication is not a good developer. Code is written for human readability; the reader should not have to mentally convert it back to its mathematical equivalent to understand what operations are taking place. In nearly every case, this is at best a micro-optimization. At worst, it's a symptom of someone who writes unnecessarily obfuscated code to show off their knowledge, in a way that only hurts the team's ability to maintain the code.
If I found bit shifting used in place of basic multiplication in an interview code sample, I would ask the developer why they chose to do it that way. They'll either a) calmly explain that their computer science classes taught it that way, or that it's a habit picked up from writing code for embedded systems or similar, where the optimization actually made a difference, or b) their ego will make an appearance with a "because I'm so senior" attitude. The latter is not a good sign.