Ah that other thread
Well, I think you're overreacting. I'm sure that comment there was also not meant as an insult. We do like to tease each other around here quite a bit.
Anyway, back to the issue. I've run the game through an OpenGL debugger (BuGLe on Linux, in case anyone cares), logging all calls and looking for one thing in particular: a call to glFinish. I found none. So /me enters lecture mode.
The documentation states that glFinish blocks until all OpenGL commands issued so far have actually been executed. If you call it right after whatever you use to swap buffers (which is OS-dependent), you can be sure that what you just rendered will be visible to the user at the very next screen refresh. If you never call it, you simply have no such guarantee.

In some cases (most notably simple geometry combined with large, fill-rate-hungry objects, which is exactly the situation in the menu), many frames' worth of commands (tens or even hundreds) can sit in the driver's buffer waiting to be executed. The user perceives this as input lag: there is a noticeable delay between their input and the visible response on screen, because the response spends time sitting in the command buffer. Worse, this buffering can happen in bursts: the driver will gladly accept a bunch of frames from the application without any delay, then suddenly block and wait for the queued commands to finish. The user perceives this as choppiness: several frames are rendered with hardly any change between them, followed by a big jump.

Both effects are hard for the average user to describe and are easily mistaken for performance problems by whoever reads the report. And the worst part of it all: it never, ever happens on the programmers' machines.
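To make the placement concrete, here's a minimal sketch of the end of a frame with the call where I mean. The SDL swap call and the function names are purely illustrative (the real swap mechanism is whatever the game uses on each platform); the point is simply that glFinish goes immediately after the swap:

```c
#include <SDL.h>    /* SDL2 used only as an example; the swap call is platform dependent */
#include <GL/gl.h>

/* One frame, heavily simplified; names are made up for illustration. */
void end_of_frame(SDL_Window *window)
{
    /* ... input has been polled and all GL draw calls for the frame issued ... */

    SDL_GL_SwapWindow(window);

    /* Block until everything queued so far has actually been executed.
     * This stops the driver from buffering many frames' worth of commands,
     * which is exactly what the user feels as input lag and choppiness. */
    glFinish();
}
```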
So, please, please, IV: at least add an option to call glFinish. I know the downside (most notably, every nVidia and ATI driver I've seen eats 100% CPU time while sitting inside glFinish), but it's the only way to make sure there's as little input lag as possible.
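Something as simple as this would do; the option name here is made up, and how it gets exposed in the settings is obviously up to whoever implements it:

```c
#include <SDL.h>
#include <GL/gl.h>

/* Hypothetical video option, off by default because of the 100% CPU busy-wait. */
int opt_finish_after_swap = 0;

void swap_and_maybe_finish(SDL_Window *window)
{
    SDL_GL_SwapWindow(window);
    if (opt_finish_after_swap)
        glFinish();   /* minimal input lag, at the cost of a spinning driver thread */
}
```

That way the people who don't mind the CPU hit can get rid of the lag, and everyone else keeps the current behaviour.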