For quite a long while I’ve been interested in stereoscopy and virtual reality. It was part of why I started working on Citra back in the day, getting it to the point where it could display both images, just like an actual 3DS.
This interest of mine was again revived when Armada added stereoscopy support to Dolphin a few years later. It was around that time that I started investigating the idea of making it run on my platform of choice, Linux with Wayland.
I first started by designing a Wayland protocol for clients to communicate with the compositor about stereoscopy support, that is, to receive events when an output is known to support stereoscopy, and to add metadata to their surfaces signalling that their layout is one of the supported stereo ones. This protocol was partly inspired by the HDMI 1.4 specification, which describes a set of stereo layouts.
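To give an idea of the shape such a protocol takes, here is a rough sketch in the usual wayland-protocols XML format. All interface, event and request names below are made up for illustration; the actual protocol may differ:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch, not the actual protocol: names are invented. -->
<protocol name="stereoscopy_unstable_v1">
  <interface name="zwp_stereoscopy_v1" version="1">
    <enum name="layout">
      <!-- Layouts loosely modeled on the HDMI 1.4 stereo formats. -->
      <entry name="frame_packing" value="0"/>
      <entry name="side_by_side_half" value="1"/>
      <entry name="top_and_bottom" value="2"/>
    </enum>
    <event name="output_supported">
      <!-- Sent when an output is known to support stereoscopy. -->
      <arg name="output" type="object" interface="wl_output"/>
    </event>
    <request name="set_surface_layout">
      <!-- Attaches a stereo layout to a surface's content. -->
      <arg name="surface" type="object" interface="wl_surface"/>
      <arg name="layout" type="uint" enum="layout"/>
    </request>
  </interface>
</protocol>
```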
The next part was to implement the protocol. I chose Weston as it is the reference Wayland compositor, I am already quite familiar with its code, and many people around can help me if I need anything. Implementing the protocol itself was mostly straightforward; it was the integration with other features such as wp_viewporter, buffer transforms and HiDPI which proved quite annoying, and required a lot of incremental changes to the protocol specification.
My target of choice would be VR HMDs, but there is currently no standard for display discovery, each device having its own proprietary APIs or bring-up procedures, so I went for 3DTVs instead, as a first step. The DRM API provided by the Linux kernel drivers already exposes the stereo modes supported by the TV, so it was just a matter of choosing the correct one, and the TV automatically switched to it.
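Picking such a mode boils down to scanning the connector’s mode list for one carrying the wanted 3D flag. The following is a self-contained sketch of that selection logic, with a minimal stand-in struct instead of libdrm’s drmModeModeInfo; the flag values mirror the kernel’s drm_mode.h:

```c
#include <stdint.h>

/* Stereo mode flags as defined in the kernel's <drm/drm_mode.h>;
 * reproduced here so the sketch stays self-contained. */
#define DRM_MODE_FLAG_3D_MASK          (0x1fU << 14)
#define DRM_MODE_FLAG_3D_NONE          (0U << 14)
#define DRM_MODE_FLAG_3D_FRAME_PACKING (1U << 14)

/* Minimal stand-in for drmModeModeInfo: only the fields we need. */
struct mode {
	uint16_t hdisplay, vdisplay;
	uint32_t flags;
};

/* Return the index of the first mode advertising the requested stereo
 * layout, or -1 if the display exposes none. A real compositor would
 * walk drmModeConnector's modes array the same way. */
static int find_stereo_mode(const struct mode *modes, int count,
			    uint32_t layout)
{
	for (int i = 0; i < count; i++)
		if ((modes[i].flags & DRM_MODE_FLAG_3D_MASK) == layout)
			return i;
	return -1;
}
```

Once a matching mode is found, it is set on the CRTC like any other mode, and the TV switches to its stereo presentation by itself.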
For the rendering part itself I tried multiple approaches: using weston_views resulted in input not being correctly tracked, and exposing multiple wl_outputs was a no-go as it was a single logical output… In the end I went back to the approach I used when prototyping: rendering the scene twice in OpenGL when stereoscopy is enabled, and reporting that stereoscopy isn’t available when using any other renderer.
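To illustrate that render-twice approach, here is a sketch (not the actual Weston code) of how per-eye viewports could be computed for the HDMI frame-packing layout, where the left eye occupies the top of the buffer, followed by a gap equal to the base mode’s vertical blanking, then the right eye:

```c
/* A GL-style viewport: origin is the bottom-left corner of the buffer. */
struct viewport { int x, y, width, height; };

/* Compute per-eye viewports for HDMI frame packing. The full buffer is
 * 2 * height + vblank lines tall; the left eye sits at the top, so in GL
 * coordinates its viewport starts at height + vblank. The scene would
 * then be rendered once per eye after a glViewport() call for each. */
static void frame_packing_viewports(int width, int height, int vblank,
				    struct viewport *left,
				    struct viewport *right)
{
	*left = (struct viewport){ 0, height + vblank, width, height };
	*right = (struct viewport){ 0, 0, width, height };
}
```

For a 1080p frame-packing mode the base mode’s vertical blanking is 45 lines, giving a 1920×2205 buffer with the right eye starting 1125 lines from the top.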
This series is only a first step towards stereoscopy on Wayland. One of its prime users will be GL drivers for quad buffering, but as far as I know there is no EGL extension exposing that capability. I haven’t looked into the Vulkan side yet, but I expect its story to be quite similar. And then of course there are virtual reality HMDs, which require not only stereoscopy but also movement tracking and distortion shaders to account for lens imperfections; some even “forgot” to expose stereo modes in their EDID, so some amount of hardware-specific fixes will have to be introduced, maybe on the model of libinput?
You can test this work by building my stereoscopy branch of wayland-protocols and weston, and adding a stereoscopy=frame-packing option in your 3DTV’s [output] section in ~/.config/weston.ini. This series will soon be available for review on wayland-devel@, and once accepted, in the following wayland-protocols and Weston releases!
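The weston.ini change described above would look something like this; the output name is only an example, it depends on which connector your TV is attached to:

```ini
[output]
name=HDMI-A-1
stereoscopy=frame-packing
```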
I would also like to thank my former employer Collabora for having sponsored part of this work.