OLED Defringer

This project started after I upgraded my monitor to the Alienware AW3423DWF which uses a Samsung Display quantum dot OLED (QDOLED) panel. I love the monitor but it has one downside:

Example of the green/magenta fringing found on QDOLED displays. Photo: Samuel Buchmann

This is caused by the irregular subpixel layout used by QDOLED panels, where the green subpixel sits on top and the red and blue subpixels sit on the bottom:

Fringing occurs wherever there's a high-contrast transition on the vertical axis: light-to-dark causes magenta fringing, and dark-to-light causes green fringing, as seen in the image above. To my surprise, no solution existed to eliminate the fringing when I purchased the monitor in 2023. There had been attempts to fix the issue, but only for text, which is only a partial solution since fringing can occur on any high-contrast transition. A GitHub issue thread suggested a full-screen filter, which gave me the idea for this app.

The basic idea is to apply a post-process effect to the entire Windows UI. After doing some research, I learned that the Windows compositor, dwm.exe (the Desktop Window Manager), uses Direct3D 11 for drawing. To apply a post-process filter to the whole UI, I would only need to inject a custom HLSL shader into the service. Luckily, I found an open-source application that does just this for another use case. After building my app with that project as a base, it was on to the final task: creating a fringe-reducing post-process shader.

My idea for the filter was to reduce the brightness of the green subpixel wherever a dark-to-light transition occurs, and vice versa for light-to-dark transitions. A simple filter outputs the final green subpixel brightness as the average of the original value and the original value of the green subpixel directly above it. In the example below, you can see how this algorithm halves the brightness of the topmost lit green subpixel row:
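The project's actual filter is an HLSL pixel shader, but the averaging step can be sketched in plain Python. Everything here (the `defringe_green` helper and the tiny test image) is illustrative, not taken from the project:

```python
# Illustrative stand-in for the simple defringe pass: each pixel's green
# value becomes the average of itself and the green value of the pixel
# directly above it. The "image" is just the green channel, stored as a
# list of rows with values in [0.0, 1.0].

def defringe_green(green):
    out = []
    for y, row in enumerate(green):
        if y == 0:
            out.append(list(row))  # top row has no pixel above it
        else:
            out.append([(green[y - 1][x] + row[x]) / 2 for x in range(len(row))])
    return out

# A dark-to-light vertical transition (the green-fringing case):
channel = [
    [0.0, 0.0],
    [0.0, 0.0],
    [1.0, 1.0],
    [1.0, 1.0],
]

filtered = defringe_green(channel)
print(filtered[2])  # [0.5, 0.5] -- the topmost lit row is dimmed by half
print(filtered[3])  # [1.0, 1.0] -- rows inside a flat region are unchanged
```

Averaging each pixel with the row above is what halves the topmost lit green row at a dark-to-light edge, cutting the green overshoot that causes the fringe.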

This worked very well to defringe the UI:

The main downside to this approach is that it is, in essence, a box blur on the green channel, so everything ends up looking slightly blurred. I improved the filter by masking the blur operation with an edge-detection filter: the green-channel blur is now applied only to pixels where a Sobel filter detects an edge. It's still not perfect, but I'm hoping to improve it further with some help. If you're an image-processing expert, please check out the GitHub repository and suggest potentially better techniques.

GitHub Project