I will often dig through a game trying to understand why it works... or, in many more cases, why it doesn't. I'll reach the core of the problem - the fundamental design choices that led to all of the higher-level stuff. I'll be excited about this, and want to share it with people and see what they think - get some "peer review", you could say. Once in a blue moon, they'll agree. Sometimes, they'll make some solid counter-arguments and cause me to re-think my analysis. But mostly, they'll attempt to justify the bad decision by blurting out the same buzzword: "immersion".
I'll give an example. I'll often attack 3D gameplay, since, in my view, it's at a huge disadvantage compared to 2D gameplay right out of the gate (I can get into this at another time, but the short of it is a huge reduction in gameplay clarity - representing a 3D space on a 2D image and forcing the player to translate several different angles at once without the aid of depth perception, as well as loss of visual clarity and - ah, this needs to be its own article). You may disagree with this assertion, but I hope that you disagree with it for a better reason than one of the following two:
1. "3D graphics are state-of-the-art technology and higher levels of technology = better games"
The mainstream wisdom goes something like this: the more pixels on the screen, the more polygons, the more visual effects, the more technology used and the more cinematic the presentation, the more immersive the game is.
Firstly - and I hate that I have to say this so often - more is not better. Less is more. Anyone who is experienced in any kind of design or art form understands this - expressing your idea in as few strokes as possible is the holy grail of creation. Elegance is better. Having more pixels on the screen means only one thing: there are more pixels on the screen. It means absolutely nothing else about the quality of the game. I think you should meditate on that idea for a moment and really try to internalize it. A game with 320x200 resolution is not worse than a game with ten times that resolution. It's also not better, but far fewer people make that mistake.
Secondly, cinema does not improve a game any more than gameplay improves a film. Cutscenes do not make a game more movie-like, any more than taking a few electric guitar breaks between chapters of "Catcher in the Rye" makes the book more musical. Games are great at being games, and movies are great at being movies. I'm not saying you can't make something good by blending the two in some way or by taking inspiration from each other, but I am saying that they don't need to. The future of games isn't "more and more like movies"; the future of games is "better games".
But here's the largest point I want to make, about this word "immersion" itself. As I stated before, and as you already know, when people say it they are talking about visual/aural stuff. Which really means they're talking about technology. "HD makes the game more immersive", "voices make the game more immersive", "bump mapping makes the game more immersive", etc. The truth is that none of these things makes a game more immersive.
Ironically, it's actually when we look past the graphics, sound, controls, and everything else that people normally consider the elements of immersion, that we become immersed. When the Pong-paddle becomes an extension of our arm, or our thoughts, and we subtly, unconsciously, shift our weight in our chairs. When we frantically spin the falling Tetris piece, shouting a bizarre litany of curses. This is when we are fully immersed in a game, and it has absolutely nothing to do with visuals or technology. So what does it have to do with, then?
My friend (and Dinofarm Games lead artist) Blake Reynolds says that the most important thing about an illustration is "transparency". What he means by this is that the quality of the drawing is such that we don't even think about the drawing itself; we can look past it and are able to appreciate the meaning or purpose behind the work. This rule applies to game design as well. The way to create an "immersive" game is by achieving a transparent game design.
Now, where it gets tricky is that "transparent design" is a synonym for "good design". What's nice is that the laws of design are very fundamental to the human experience, and what's even nicer is that they are not new. Video games are new, but they are simply a new type of canvas for the same laws of design. If we find a new island, or even a new planet, the laws of physics stay the same.
If you want your game to be immersive, express it in as "few strokes" as you can. How simple could your game possibly be while still expressing your gameplay idea? If you achieve a simple elegance (like that of Tetris, Portal, or Go), then you are on the right track. Each new level of complexity you add is another chance that the player will be shaken from his trance and dissociate himself from your game.
So finally, back to my initial example. 3D graphics aren't "more immersive" than 2D graphics. 2D graphics aren't more immersive than 3D either, and games with no graphics at all are also not less or more immersive than games with graphics. If visuals made things more immersive, then movies would be inherently more immersive than books, but we know that this is not the case. People can be just as wrapped up and immersed in a book as they can in a movie. In a game, immersion comes from gameplay.
Game designers, we should be taking full advantage of the awesome power of human imagination - a power that allows us to literally place a human consciousness inside of an "@" symbol in Rogue, or inside of the 2x2-pixel protagonist of Atari's Adventure. This power is far greater than that of any video card will ever be.