The researchers say their enhancements go beyond what other photorealistic conversion processes are capable of by also integrating geometric information from GTA V itself. Those "G-buffers," as the researchers call them, can include data like the distance between objects in the game and the camera, and the quality of textures, like the glossiness of cars.
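To make that concrete, here is a minimal sketch of how a rendered frame and a couple of auxiliary G-buffer channels might be stacked into a single input for an image-to-image enhancement network. The buffer names, shapes, and channel layout here are illustrative assumptions for explanation, not Intel's actual pipeline or code.

```python
# Illustrative sketch only: buffer names, shapes, and channel order are
# assumptions for explanation, not Intel's actual code or data layout.
import numpy as np

def stack_gbuffers(rendered_frame: np.ndarray,
                   depth: np.ndarray,
                   glossiness: np.ndarray) -> np.ndarray:
    """Concatenate a rendered color frame with auxiliary G-buffer channels.

    An image-to-image enhancement network can take this stacked array as
    input, so it "sees" scene geometry and material data (distance to the
    camera, surface gloss) alongside the final rendered colors.
    """
    channels = [
        rendered_frame.astype(np.float32) / 255.0,  # H x W x 3 color image
        depth[..., None].astype(np.float32),        # H x W x 1 distance to camera
        glossiness[..., None].astype(np.float32),   # H x W x 1 material shininess
    ]
    return np.concatenate(channels, axis=-1)        # H x W x 5 network input

# Dummy data standing in for one 1080p game frame and its G-buffers:
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
depth = np.ones((1080, 1920), dtype=np.float32)
gloss = np.zeros((1080, 1920), dtype=np.float32)
network_input = stack_gbuffers(frame, depth, gloss)  # shape (1080, 1920, 5)
```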
The result doesn’t entirely behave like it’s real, but it looks very much like it’s built from real things. It’s dimmer and from a different angle, but it almost captures what I imagine a smoother, more interactive version of scrolling through Google Maps’ Street View could be like.
The Intel researchers suggest some of that photorealism comes from the datasets they fed their neural network. The group offers a more thorough explanation of how the image enhancement actually works in their paper (PDF), but as I understand it, the Cityscapes Dataset they used - built largely from photographs of German streets - filled in a lot of the detail.

While you might not see an official “photorealism update” roll out to GTA V tomorrow, you may have already played a game or watched a video that’s benefited from another kind of machine learning - AI upscaling.