Cooperative online FPS featuring proximity voice chat
Sound design overview & implementation details
MIMIC ENCOUNTER
Music states are based on the mimic behavior
MENU THEME
Some of the sound materials used for the mimic encounter, menu, and timer music were made from recordings of violin bow friction against metal, processed in Phaseplant's granular sampler.
You can find the soundbank of the raw recordings on this page.
MIMIC ENCOUNTER MUSIC BLUEPRINT
THE HOUSE
Roomtones
Offtune synthetic chord pads
Random noises across the house
Points of interest with random behavior (fridge, heaters, windows, flickering PC screen...)
THE HUB
Diegetic radio through spread out monitors in the hub.
Multi-positioning: one voice, multiple emitters
BLUEPRINTS
ICE SPRAY
The amount of spray remaining drives an RTPC that controls a blend container. The lower the ammunition, the more the spray can sounds empty.
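The mapping described above can be sketched as follows. This is an illustrative model, not the actual Blueprint: the normalization range and the linear crossfade are assumptions about how the RTPC drives the blend container.

```python
# Hypothetical sketch of the ice spray RTPC logic: remaining ammunition
# is normalized to a 0-100 RTPC value, which crossfades between a
# "full can" layer and an "empty can" layer in a blend container.

def spray_rtpc(remaining: float, capacity: float) -> float:
    """Normalize remaining spray to a 0-100 RTPC value."""
    return max(0.0, min(100.0, 100.0 * remaining / capacity))

def blend_weights(rtpc: float) -> tuple:
    """Linear crossfade: the lower the RTPC, the more the 'empty' layer dominates."""
    full = rtpc / 100.0
    return (full, 1.0 - full)  # (full_layer_gain, empty_layer_gain)
```

A full can (RTPC at 100) plays only the "full" layer; as ammunition depletes, the "empty" layer fades in.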
MIMIC WHISTLE
Mimics answer it if they are in range.
AMMO BOX
REVOLVER
IMPACTS
The bullet impact material switch is set when the pistol’s fire raycast hits an object.
An actor blueprint is spawned at the hit location, containing an AkComponent on which the appropriate switch state is set before posting the impact Wwise event. The actor self-destructs after 3 seconds.
The physical material of the hit object is retrieved and mapped to the correct switch value for the AkComponent using the surface type enumerator.
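The impact flow above can be sketched like this. It is a simplified model, not the actual Blueprint: the surface names, switch names, and event name are all illustrative placeholders.

```python
# Hypothetical sketch of the bullet impact logic: the hit surface type
# selects a switch state, which is set on the spawned actor's AkComponent
# before the impact event is posted; the actor lives for 3 seconds.

SURFACE_TO_SWITCH = {
    "Wood": "Impact_Wood",
    "Metal": "Impact_Metal",
    "Concrete": "Impact_Concrete",
}

def on_raycast_hit(surface_type: str) -> dict:
    """Describe the spawned impact actor's setup for a given hit surface."""
    switch = SURFACE_TO_SWITCH.get(surface_type, "Impact_Default")
    return {
        "switch_state": switch,      # set on the AkComponent first
        "event": "Play_Bullet_Impact",  # then the Wwise event is posted
        "lifespan_s": 3.0,           # actor self-destructs after 3 seconds
    }
```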
In Mimic Hunt, we implemented two voice chat rooms: one for the living players and one for the dead players.
The voices of the living players are spatialized according to their world location.
Living players can hear each other, but can not hear the dead players.
When a player dies, we pause the capture in the 3D Room and switch to the 2D Room audio capture.
The voices of the dead players are not spatialized.
Dead players can hear each other, and can hear the living players through the spectator perspective.
When a player respawns, we resume the 3D Room audio capture and stop the 2D Room capture.
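The death/respawn capture switching described above can be modeled as a small state toggle. This is a sketch of the logic only; in the project it is handled through Unreal's audio capture and the Odin rooms.

```python
# Hypothetical model of the two-room capture switching: living players
# feed the spatialized 3D Room, dead players feed the 2D Room.

class VoiceRooms:
    def __init__(self):
        self.capture_3d = True   # living players: spatialized room
        self.capture_2d = False  # dead players: non-spatialized room

    def on_death(self):
        self.capture_3d = False  # pause the 3D Room capture
        self.capture_2d = True   # switch to the 2D Room capture

    def on_respawn(self):
        self.capture_3d = True   # resume the 3D Room capture
        self.capture_2d = False  # stop the 2D Room capture
```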
Audio input is handled as follows:
We use Unreal Engine’s audio capture system (in this case we use both audio engines, Wwise & UE5).
This captured audio is treated as a media stream sent to the server.
The server associates this media with a Peer ID, which corresponds to the player’s User ID.
Each player is assigned a User ID used to connect to the voice chat room. Once inside the room, a Peer ID is obtained. We also have a unique Player ID for each player within Unreal.
When a player connects, the other players receive their media stream. We then identify the corresponding User ID from the received Peer ID. Using this User ID, we retrieve the correct Player ID and access the associated AkOdinInput, an Odin custom class based on AkInput. This allows the media captured by Unreal to be converted into an AkInput, enabling the use of Wwise's attenuations and processing, including Spatial Audio.
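The ID resolution chain above can be sketched as a series of lookups. The mapping tables and the specific IDs are illustrative; in the project this routes a received Odin media stream to the right player's AkOdinInput.

```python
# Hypothetical sketch of the Peer ID -> User ID -> Player ID chain used
# to route an incoming media stream to the correct AkOdinInput.

peer_to_user = {"peer_42": "user_A"}            # from the server
user_to_player = {"user_A": "player_1"}         # Unreal-side mapping
player_to_ak_input = {"player_1": "AkOdinInput_player_1"}

def resolve_media_target(peer_id: str) -> str:
    user_id = peer_to_user[peer_id]       # Peer ID -> User ID
    player_id = user_to_player[user_id]   # User ID -> Player ID (Unreal)
    return player_to_ak_input[player_id]  # feed the media into this input
```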
Wwise Spatial Audio - Rooms & Portals
Diffraction and transmission applied to all sounds, including proximity voice chat
Online considerations
The player feedback sounds are split into First Person and Third Person assets in the Wwise Actor-Mixer Hierarchy.
This way, we can handle the replicated feedback (from the server-controlled player characters) independently from the client's (locally controlled), allowing more control over the overall mix.
If the investigator is the player character and is locally controlled, the First Person event is triggered.
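The event routing above can be sketched as a simple predicate. The event names are assumptions for illustration.

```python
# Hypothetical sketch of the First/Third Person event selection:
# only the locally controlled player character triggers 1P assets;
# replicated characters trigger 3P assets.

def select_feedback_event(is_player_character: bool,
                          is_locally_controlled: bool) -> str:
    if is_player_character and is_locally_controlled:
        return "Play_Feedback_1P"  # the client's own character
    return "Play_Feedback_3P"      # replicated / other characters
```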
Momentary Max loudness normalization
Since Wwise 2023, we can normalize either to an integrated loudness target or a momentary max loudness target within the actor-mixer hierarchy. I set the expected momentary max level for each SFX category to avoid large volume discrepancies or excessive make-up gain adjustments.
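The effect of this normalization can be illustrated as a make-up gain derived from the measured momentary max against the category target. The function and values are illustrative, not Wwise internals.

```python
# Hypothetical illustration of momentary max normalization: the gain
# applied is the difference between the target level and the asset's
# measured momentary max loudness (both in dB).

def makeup_gain_db(target_momentary_max_db: float,
                   measured_momentary_max_db: float) -> float:
    """Gain needed so the asset's momentary max hits the target."""
    return target_momentary_max_db - measured_momentary_max_db
```

For example, an asset measuring -24 dB against a -18 dB target receives +6 dB of make-up gain; setting sensible per-category targets keeps these adjustments small.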
Attenuation Presets
To maintain control over the mix, I set different attenuation presets for various distances while ensuring a consistent ratio between them.
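One way to keep a consistent ratio between presets is to derive them all from a single base distance, as sketched below. The preset names, base distance, and ratio are assumptions for illustration.

```python
# Hypothetical sketch of attenuation presets scaled by a constant ratio,
# so the relationship between distance tiers stays consistent.

def attenuation_presets(base_max_distance: float, ratio: float = 2.0) -> dict:
    """Return max attenuation distances for each preset tier."""
    return {
        "Small": base_max_distance,
        "Medium": base_max_distance * ratio,
        "Large": base_max_distance * ratio ** 2,
    }
```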
We produced a game trailer mixed in 7.1 surround sound, designed for immersive playback in a theater setting. Our goal was to recreate the game experience in a different context. At the beginning of the trailer, a mimic can be heard moving all around the theater in complete darkness, prompting the audience to look around to locate it. It eventually returns to the center to possess a television, which then becomes our point of view for the first part of the trailer.