Multi-Modalities in C#
This video showcases a vertical-slice demo of a first-person, tower-defence-style game.
I created the demo in Unity, using a range of C# scripts to handle all of the mechanics.
The focus of this work was multi-modality, combining mouse, keyboard, and speech input for player control.
The Windows Speech Recognition API was used to handle voice input. The goal of voice input was to reduce the number of physical inputs required, letting the player deploy allies, make purchases, and destroy portals while simultaneously aiming and shooting. The speech integration is outlined below.
How the speech input was integrated:
Three separate keyword recognisers were used, one each for ally asset deployment, shop purchasing, and portal destruction.
Deployment uses arrays of potential keyword options, covering ally type, size, and spawn location. Portal destruction combines speech with mouse input: the player must be aiming at a target portal while giving the destruction command for it to succeed, as sketched below.
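To illustrate how a keyword recogniser can be tied to mouse aim, here is a minimal sketch. It assumes Unity's UnityEngine.Windows.Speech.KeywordRecognizer (which wraps the Windows speech API), a "Portal" tag on portal objects, and a crosshair at screen centre; the class, phrase, and tag names are placeholders rather than the project's actual code.

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class PortalDestructionVoice : MonoBehaviour
{
    // Single illustrative phrase; the real project may use different wording.
    private readonly string[] destructionKeywords = { "destroy portal" };
    private KeywordRecognizer destructionRecognizer;

    [SerializeField] private float maxAimDistance = 100f;

    private void Start()
    {
        // Create a recogniser for the destruction phrase and start listening.
        destructionRecognizer = new KeywordRecognizer(destructionKeywords);
        destructionRecognizer.OnPhraseRecognized += OnDestructionPhrase;
        destructionRecognizer.Start();
    }

    private void OnDestructionPhrase(PhraseRecognizedEventArgs args)
    {
        // Speech alone is not enough: the mouse-controlled crosshair must be on a portal.
        Ray aimRay = Camera.main.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
        if (Physics.Raycast(aimRay, out RaycastHit hit, maxAimDistance)
            && hit.collider.CompareTag("Portal"))
        {
            Destroy(hit.collider.gameObject); // stand-in for the game's real destruction logic
        }
    }

    private void OnDestroy()
    {
        // Release the recogniser when this object is destroyed.
        destructionRecognizer?.Dispose();
    }
}
```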
Deployment explained:
Keyword arrays were set up for each part of a command (Deploy, Size, Type, Location).
Nested loops combine these arrays into every complete command phrase.
The recognizedSpeech method is triggered whenever player speech is recognised, identifying which keyword was chosen for each part of the command.
Based on those keywords, the script executes the relevant deployment logic, as sketched below.
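To make these steps concrete, here is a minimal sketch of the deployment recogniser, again assuming Unity's KeywordRecognizer. The specific keywords and the SpawnAlly helper are illustrative assumptions; only the recognizedSpeech name is taken from the description above.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class DeploymentVoice : MonoBehaviour
{
    // Illustrative keyword arrays for each part of a command.
    private readonly string[] deployWords = { "deploy" };
    private readonly string[] sizeWords = { "small", "large" };
    private readonly string[] typeWords = { "archer", "knight" };
    private readonly string[] locationWords = { "left", "right" };

    private KeywordRecognizer deploymentRecognizer;

    private void Start()
    {
        // Nested loops combine the keyword arrays into every complete command,
        // e.g. "deploy small archer left".
        var commands = new List<string>();
        foreach (string deploy in deployWords)
            foreach (string size in sizeWords)
                foreach (string type in typeWords)
                    foreach (string location in locationWords)
                        commands.Add($"{deploy} {size} {type} {location}");

        deploymentRecognizer = new KeywordRecognizer(commands.ToArray());
        deploymentRecognizer.OnPhraseRecognized += recognizedSpeech;
        deploymentRecognizer.Start();
    }

    // Called when the recogniser matches one of the generated commands;
    // each word of the phrase identifies one part of the order.
    private void recognizedSpeech(PhraseRecognizedEventArgs args)
    {
        string[] parts = args.text.Split(' '); // [deploy, size, type, location]
        string size = parts[1];
        string type = parts[2];
        string location = parts[3];

        SpawnAlly(size, type, location); // stand-in for the game's deployment logic
    }

    private void SpawnAlly(string size, string type, string location)
    {
        // Hypothetical helper: the real project would instantiate the chosen ally prefab here.
        Debug.Log($"Deploying a {size} {type} at the {location} position.");
    }

    private void OnDestroy()
    {
        deploymentRecognizer?.Dispose();
    }
}
```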