UIs for multiple platforms

I've been working on a menu system in Unity to be used in Windows Standalone mode (with keyboard/mouse and gamepad support) and OpenXR. At this stage, the UI canvases are set to world-space configuration, which, from what I've typically seen, is considered bad practice. Or at least it can be, if the experience is made poor by:

  • requiring the player to move/look to view the entire menu,
  • managing looking vs. selecting state,
  • forcing raycast/"laser pointer"-only selections via mouse/hand pointer,
  • moving the mouse cursor with a gamepad (shudder)

And yet, I'm doing all of these – for now. Part of the reason is that I'm using Unity's "new" Input System and its XR Interaction Toolkit. The point of those choices is to save time and effort in the long run. For instance, the same "move" action and player logic can be bound to multiple inputs simultaneously: WASD on a keyboard, the left analog stick on a gamepad, and a left-hand XR controller's analog stick. Some of the work is deferred to configuration files outside of script, but the gist isn't so bad to work with (a rough sketch follows the list):

  1. Give an action a name
  2. Bind it to input(s) and configure
  3. Define an "On(ActionName)" method in your player class

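Step 3 ends up looking something like the sketch below. This is a minimal example, not the project's actual code – it assumes an Input Actions asset with a generated C# wrapper (here called `PlayerControls`, with a `Gameplay` action map and a `Move` action; all placeholder names) and callbacks wired through the generated interface:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Placeholder names: PlayerControls is the generated wrapper class for the
// Input Actions asset, Gameplay is its action map, and Move is the action
// bound to WASD, the gamepad's left stick, and the XR controller's stick.
public class PlayerController : MonoBehaviour, PlayerControls.IGameplayActions
{
    private PlayerControls controls;
    private Vector2 moveInput;

    void Awake()
    {
        controls = new PlayerControls();
        controls.Gameplay.SetCallbacks(this);
    }

    void OnEnable()  => controls.Gameplay.Enable();
    void OnDisable() => controls.Gameplay.Disable();
    void OnDestroy() => controls.Dispose();

    // Step 3: the "On(ActionName)" handler – the same method runs no matter
    // which device actually produced the input.
    public void OnMove(InputAction.CallbackContext context)
    {
        moveInput = context.ReadValue<Vector2>();
    }

    void Update()
    {
        // Illustrative movement; the speed constant is arbitrary.
        transform.Translate(new Vector3(moveInput.x, 0f, moveInput.y) * 5f * Time.deltaTime);
    }
}
```
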
Still, tedium creeps in. You know how games will often seamlessly switch between input prompts? "Press Enter to start" becomes "Press Ⓐ to start" as soon as you touch a controller's analog stick. Suddenly, the benefit of handling input actions through an interface becomes a restriction – the handler doesn't tell you which device the input came from, so you aren't sure if and when to switch prompts. Allowing users to rebind their controls gets quite involved, too. The Input System allegedly has some things built in to help, but nothing is free. In turn, there's a component that ships with the Input System called PlayerInput that's supposed to help with both of these issues, and so the perpetual struggle to work with Unity input and not against it continues.
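
If/when I lean on PlayerInput, the prompt switching might look something like the rough sketch below. The control scheme names, the UI field, and the notification behavior ("Invoke C# Events") are all assumptions here, not what's in the project:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.UI;

// Hypothetical prompt switcher; assumes a PlayerInput component with its
// Behavior set to "Invoke C# Events" and control schemes named
// "KeyboardMouse" and "Gamepad". All names are illustrative.
public class PromptSwitcher : MonoBehaviour
{
    [SerializeField] private PlayerInput playerInput;
    [SerializeField] private Text promptLabel;

    void OnEnable()  => playerInput.onControlsChanged += UpdatePrompt;
    void OnDisable() => playerInput.onControlsChanged -= UpdatePrompt;

    void UpdatePrompt(PlayerInput input)
    {
        // currentControlScheme reflects whichever scheme last produced input,
        // so this fires when the player hops between devices.
        promptLabel.text = input.currentControlScheme == "Gamepad"
            ? "Press Ⓐ to start"
            : "Press Enter to start";
    }
}
```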

With all that in mind, I'm happy I can test out KBM and gamepad interactions in the same scene. Longer-term goals for the UI include directional navigation in the menus for all devices – arrows on keyboard, sticks/d-pads on controllers – to avoid the need for raycasting. That could mean a single raycast to select the menu (any hit on its canvas counts), then a transition into an interaction state where the directional controls navigate between menu options. And which is more evil?
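
The hand-off into that interaction state could be as simple as giving the EventSystem a selection once the canvas is hit – a minimal sketch, assuming the UI "Navigate" action is already bound to arrows/d-pads/sticks and a hypothetical firstOption field points at the menu's first button:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

// Hypothetical "enter menu" hook: once a raycast (or any pointer hit) selects
// the menu's canvas, hand focus to the EventSystem so the Navigate action
// (arrows, d-pad, sticks) moves between options instead of further raycasts.
public class MenuFocus : MonoBehaviour
{
    [SerializeField] private Selectable firstOption;

    public void EnterInteractionState()
    {
        // Clear first so a stale selection isn't re-highlighted.
        EventSystem.current.SetSelectedGameObject(null);
        EventSystem.current.SetSelectedGameObject(firstOption.gameObject);
    }
}
```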

Let's see how well that test poll does...

Side note: I took a major detour in writing this post that went something like this:

  1. Can Ghost embed polls? Well, it can embed Google Forms...
  2. Can Ghost embed JS and HTML? Yes! Now we're talking...
  3. Can I whip together a poll database? Yes! (You see where this is going)
  4. Can I write some PHP to interface with the database and invoke it with cURL?
  5. How about with fetch from JS?
  6. Can I generate the form in HTML and the submit event handler in JS from the PHP?
  7. Can I add the returned JS to the page so it can be executed on the form submission?
  8. Can I show results for a poll after it's been submitted?
  9. Can I allow users to change their vote?

Conclusion: cascading "Yes!"es are dangerous.