
Immersive User Interface Modes

#1
Anthony Stonehouse blogged at Gamasutra today on a classification system for user interfaces (UIs) in games.

This classification model assesses a game's UI in two areas:
  • Geometry: the UI exists within the gameworld or not
  • Narrative: the UI helps communicate the lore of the gameworld or not
Combining these two areas yields four quadrants representing four primary styles of UI:
  • Diegetic: UI is in-world and fits the aesthetic
  • Spatial: UI is in-world but tells the player things the character can't know
  • Meta: UI fits the aesthetic but is shown on-screen, not in-world
  • Non-Diegetic: UI breaks narrative and is not shown in-world
Although the author doesn't say so explicitly, the implication is that the most immersive interface for most players will be one that is fully diegetic: all information comes from objects represented inside the geometry of the gameworld, and all information presented is limited to what the character inside that gameworld could reasonably know.

Based on what's been shown of Limit Theory so far (early March 2014), LT is using a mix of Diegetic and Meta styles.

It's not purely Non-Diegetic, since some UI elements exist in the world and support its aesthetic. And I don't see any data yet that qualify as purely Spatial, since there isn't any RPG-like character-stats information -- which an actual person would be very unlikely to know -- displayed within the gameworld's geometry.

I would say that the new feature of mining survey drones showing the type, amount, and yield for an ore location as text is diegetic... sort of. The text tag exists inside the geometry of the world; it's attached to the drone, which is attached to a point on an asteroid. This piece of UI also provides information that's fully within the context of the gameworld, since the player's character can reasonably be expected to want that kind of data from a mining survey drone.

On the other hand, how is the player's character, who is (by definition) inside the gameworld, seeing that text information? A piece of text on a callout is not an actual object inside the gameworld... so how is the player character seeing it?

The cylindrical projection Josh applied in the previous video update implies (to me, anyway) that every ship has a screen onto which data are projected by a computer. Futuristic HUDs often use this convention, so I suspect it doesn't break immersion for most core gamers to think that the text callouts for survey drone data are represented through that kind of visual metaphor.

The same reasoning applies to the Meta UI elements -- the bits of UI that represent data a character in the world could know, but which are shown to the player on a 2D HUD mapped to the player's monitor. These include the aiming reticle, the node map, the scanner, holograms, and other visual data systems that the player can call up. Currently these could be assumed to be shown to the player's character on a computer-driven monitor mounted inside the character's ship.

Another possibility is that all characters are avatars with eye-mounted VR devices. Yet another explanation is that all characters are robots with built-in display screens as their "eyes."

None of these explanations is actually signaled by the current UI, however. At this time, both Diegetic and Meta information are presented with no overt markers saying "this is a screen inside the gameworld" onto which all info sources are overlaid.

There is one interesting exception to this, however, and that's the occasional bit of interface "static" that briefly distorts parts of the UI. This does imply a screen of some kind that exists inside the gameworld.

That said, we haven't yet seen the latest iterations on the interfaces for things like placing and taking contracts, selecting research goals, configuring ships, and talking to NPCs. Those could be implemented in a Meta style (as 2D fields on our real monitor screen), or in a more diegetic way by representing them on some device rendered within the world of the game (such as our ship's viewscreen).

Do you have a preference? Do you feel more immersed in a game when all info comes from objects inside the gameworld? Or can you be deeply immersed even when information comes to you, the player (such as character stats), through a purely HUD-like UI such as a row of hotkey icons that your in-game character can't see?

If you like the idea of a fully diegetic UI, where all information is what your in-game character could know and it's all shown on some device that's clearly inside the gameworld, how would you design that interface? In other words, what are some ways that data you need/want to see as a player appear to be presented to your character? Are there any other tricks like the display "static" that would help the Limit Theory UI feel more like it's part of the world of the game itself?

Re: Immersive User Interface Modes

#2
Interesting article and concepts. I'm in favour of a fully-diegetic user interface, which I believe Limit Theory already has, though I make some assumptions:
  • Like in EVE Online, I imagine small (microscopic?) camera drones that fly around the exterior of the vessel and transmit visual feeds back to the ship's computer. This accounts for the third-person view of the vessel.
  • I assume the player and other agents are AGI (yep, I'm completely attached to that idea now). All information that can be known to the player can be displayed in any shape, way or form and still be consistent with a diegetic UI style, because the UI is really just a way of visualising the agent's knowledge for the real-life player's benefit.
To that end, I don't want to be shown anything through the UI that my character shouldn't be able to know himself.

Fully-diegetic interfaces are also the most consistent with Simulationist-style play: "Simulation-inclined players are inclined to talk of their characters as if they were independent entities with minds of their own, and model their behavior accordingly. (For example, they may be particularly reluctant to have their character act on the basis of out-of-character information, and indisposed to tolerate such behavior in others.)"

I think a useful question to ask people in the thread is what information they would like to be shown in different situations, and to separate that information into diegetic and non-diegetic categories.

Re: Immersive User Interface Modes

#3
@ThymineC:
I tend to see "external" views as purely synthetic imagery created by sensors around my ship.

I may not have a direct view of the side of my ship, but I do know its material, paintjob, structural status, and lighting conditions from the sensors around my ship.

So instead of cameras showing me an image of my ship, my computer generates images out of known data.

Re: Immersive User Interface Modes

#4
Cornflakes_91 wrote:@ThymineC:
I tend to see "external" views as purely synthetic imagery created by sensors around my ship.

I may not have a direct view of the side of my ship, but I do know its material, paintjob, structural status, and lighting conditions from the sensors around my ship.

So instead of cameras showing me an image of my ship, my computer generates images out of known data.
Pretty much, yep. Human eyes suck.
