To make hair rendering in games more realistic, researchers from Microsoft, Pinscreen, and the University of Southern California have collaborated on a new deep-learning-powered technique, reportedly the first of its kind, according to PCGamesN.
By employing this technology, graphics card makers could deliver more realistic images during gameplay. At the moment, with Nvidia’s HairWorks in AAA titles like Final Fantasy XV, visual immersion comes at the cost of performance.
The new convolutional neural network takes the 2D orientation field of a hair image and produces evenly distributed strand features. The approach is efficient in both storage and speed: it generates hair with 30,000 strands roughly 1,000 times faster than existing data-driven methods.
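To give a rough idea of what the network's input looks like, here is a minimal sketch of estimating a 2D orientation field from image gradients. This is a simplified stand-in, not the researchers' actual pipeline (which uses oriented filters and a trained CNN to map the field to full 3D strands); the function name and the gradient-based estimate are illustrative assumptions.

```python
import math

def orientation_field(img):
    """Estimate a per-pixel hair orientation (radians in [0, pi)) from
    finite-difference intensity gradients.

    A 2D orientation field like this is what the CNN consumes before
    producing strand features; the gradient trick here is a simplified
    stand-in for the oriented-filter approach used in research systems.
    """
    h, w = len(img), len(img[0])
    field = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            # Hair strands run perpendicular to the intensity gradient,
            # so rotate the gradient direction by 90 degrees.
            field[y][x] = (math.atan2(gy, gx) + math.pi / 2) % math.pi

    return field

# Tiny synthetic image: a vertical intensity ramp, as if horizontal
# strands were stacked top to bottom. The gradient points straight
# down, so the estimated strand orientation is horizontal (angle 0).
img = [[float(y) for _ in range(5)] for y in range(5)]
field = orientation_field(img)
```

In the actual technique, a field like this (for every pixel of a real photo) is fed to the CNN, which outputs the geometry of tens of thousands of strands.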
“Realistic hair modeling is one of the most difficult tasks when digitizing virtual humans,” the researchers say. They hope that with repeated training and improvement, this technology could be used in actual games.
The researchers fed the neural network a huge dataset of 40,000 hairstyles and 160,000 corresponding 2D orientation images. With this, the AI was trained to produce hair in different styles, colors, and lengths from a single picture.
The future application of AI in video games seems inevitable. Companies like Nvidia are already using it and exploring further scenarios where AI could play a role in games.
Let us know your thoughts in the comments section and keep reading Fossbytes.