In order to lose yourself in a great story—whether it’s a gaming experience or the next blockbuster superhero movie—you must believe in the world you’re entering. Achieving the level of realism necessary to engage audiences often requires sophisticated visual effects and complex animation that involves a great deal of manual input.
Deep learning is a subset of AI that is providing game developers, animators, movie makers, and other content creators with inspired shortcuts to complete repetitive tasks much faster, allowing artists to spend more time focusing on valuable creative work.
Deep learning works by using layered mathematical models called neural networks to master a wide variety of complex tasks. These networks learn by example, taking in massive amounts of data to recognize patterns and understand how things look and move. For the media and entertainment (M&E) industry, applications include everything from simple image upscaling without pixelization to complex facial animation and 3D character generation.
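The "learning by example" idea can be seen in miniature with a single artificial neuron. This is an illustrative sketch only, not production deep learning: one sigmoid unit is trained by gradient descent to recognize a simple pattern (logical AND) from example data, the same adjust-weights-to-reduce-error principle that full networks apply at vastly larger scale.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training examples: two inputs and the pattern we want learned (AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # weights
b = 0.0                                        # bias
lr = 0.5                                       # learning rate

for epoch in range(10000):
    for (x1, x2), target in examples:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Gradient of squared error through the sigmoid (chain rule).
        grad = (y - target) * y * (1 - y)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

def predict(x1, x2):
    return round(sigmoid(w[0] * x1 + w[1] * x2 + b))

print([predict(x1, x2) for (x1, x2), _ in examples])  # expect [0, 0, 0, 1]
```

Real networks stack thousands of such units into layers and train on images, audio, or motion data, but the error-driven weight updates are the same in spirit.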
[Image: AI Generated Digital Characters. Credit: Andrew Edelsten]
Ongoing research is enabling new deep learning tools for content creators on almost a weekly basis, providing an arsenal of productivity enhancers to give artists more control over the creative process.
Disney, for example, is using GPU-accelerated deep learning to create realistic clouds in a matter of minutes, shaving hours off a traditionally tedious process. By training a network on a large dataset of artist-created and procedurally generated 3D clouds rendered with volumetric path tracing, Disney has been able to speed up subsequent cloud design, reducing its reliance on computationally intensive light-scattering techniques.
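The pattern underlying this kind of work is a learned surrogate: sample an expensive computation offline, fit a cheap model to those samples, then answer new queries with the model instead of the full computation. The toy below illustrates the idea with a stand-in function and a least-squares cubic fit; it is not Disney's method, which uses deep networks on real scattering data.

```python
import math

def expensive_render(x):
    # Stand-in for a costly simulation (e.g. many scattering bounces).
    return math.exp(-x) * math.sin(2 * x) + 1.0

# 1) Sample the expensive function on training inputs in [0, 1].
xs = [i / 40 for i in range(41)]
ys = [expensive_render(x) for x in xs]

# 2) Fit a cubic surrogate y ~ c0 + c1*x + c2*x^2 + c3*x^3 by least
#    squares (normal equations solved with Gaussian elimination).
def features(x):
    return [1.0, x, x * x, x ** 3]

n = 4
A = [[sum(features(x)[i] * features(x)[j] for x in xs) for j in range(n)]
     for i in range(n)]
rhs = [sum(features(x)[i] * y for x, y in zip(xs, ys)) for i in range(n)]

for col in range(n):  # forward elimination with partial pivoting
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        rhs[r] -= f * rhs[col]

coef = [0.0] * n
for r in range(n - 1, -1, -1):  # back substitution
    coef[r] = (rhs[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]

def surrogate(x):
    return sum(c * f for c, f in zip(coef, features(x)))

# 3) New queries hit the cheap surrogate instead of the expensive path.
max_err = max(abs(surrogate(x) - expensive_render(x)) for x in xs)
print(f"max error on samples: {max_err:.4f}")
```

Once fitted, the surrogate answers each query with a handful of multiplies, which is why the learned approach can shave hours off rendering pipelines.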
But that’s just the tip of the iceberg. By training on motion and audio data from live subjects, deep learning networks can create believable 3D humans, animals, and other characters that not only move correctly but also convey emotion. Once a neural network has been trained to understand the correlation between a live human vocal performance and the associated facial animation, for example, it can receive new audio input and automatically generate the corresponding facial animation with embedded emotional expression.
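A heavily simplified sketch of the audio-to-animation idea: learn a mapping from a per-frame audio feature to a facial-animation control, then drive the face from new audio. Production systems use deep networks over rich audio features and full face rigs; here a single hypothetical "jaw_open" control (a name chosen for illustration, not from any real rig) is fit to frame loudness with one-dimensional least squares.

```python
import math

def rms(frame):
    # Per-frame loudness: root-mean-square energy of the samples.
    return math.sqrt(sum(s * s for s in frame) / len(frame))

# Toy "training data": audio frames paired with artist-authored jaw_open
# values (0 = closed, 1 = fully open). In practice these pairs come from
# a captured vocal performance and its tracked facial animation.
train_frames = [[0.0] * 8, [0.2] * 8, [0.5] * 8, [0.8] * 8]
train_jaw = [0.0, 0.2, 0.55, 0.9]

# Fit jaw_open ~ bias + slope * loudness by 1-D least squares.
loud = [rms(f) for f in train_frames]
mx = sum(loud) / len(loud)
my = sum(train_jaw) / len(train_jaw)
slope = (sum((x - mx) * (y - my) for x, y in zip(loud, train_jaw))
         / sum((x - mx) ** 2 for x in loud))
bias = my - slope * mx

def animate(frame):
    # Clamp to the rig's valid [0, 1] control range.
    return min(1.0, max(0.0, bias + slope * rms(frame)))

# New audio comes in; an animation curve falls out frame by frame.
new_audio = [[0.1] * 8, [0.6] * 8, [0.9] * 8]
curve = [round(animate(f), 2) for f in new_audio]
print(curve)
```

The real networks described above replace the single loudness feature with learned audio representations and the one control with a full set of facial controls, which is how they capture emotion as well as lip movement.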
With a few minutes of high-quality training material collected as a foundation for a character, additional content becomes much quicker to create. The results are already workable for game characters, and the day will come when these techniques pass muster for theatrical releases. These networks can even generate believable characters with no real-world counterpart at all.
At the forefront of the AI revolution in graphics, GPU manufacturer NVIDIA is making it easy for M&E professionals to get started with deep learning right away. Because do-it-yourself deep learning deployments are time consuming and complex, NVIDIA has created ready-to-use deep learning containers, accelerated for NVIDIA GPUs, that can help researchers, engineers, and technical directors get up and running in minutes.
Through partnerships with leading deep learning frameworks, such as TensorFlow, Caffe, PyTorch, MXNet, and others, NVIDIA has tuned, tested, optimized, and certified these popular platforms for maximum performance with NVIDIA GPUs. NVIDIA's new Volta GPU architecture also introduces Tensor Cores, delivering 3X speedups in training and inference over the previous generation.
The results of all this work can be found in the NVIDIA GPU Cloud (NGC) container registry, where any M&E professional can access the GPU-accelerated containers for deployment on the desktop, in the data center, or in the cloud.
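Getting one of these containers typically looks like the sketch below. The registry host (nvcr.io) and image family are real, but the tag is a placeholder, so check the NGC catalog for a current release, and running with GPU access assumes the NVIDIA Container Toolkit is installed on the host.

```shell
# Placeholder tag -- substitute a current release from the NGC catalog.
IMAGE="nvcr.io/nvidia/tensorflow:<release>-py3"

cat <<EOF
docker login nvcr.io              # authenticate with your NGC API key
docker pull $IMAGE
docker run --rm -it --gpus all $IMAGE
EOF
```

From there the framework runs inside the container, already tuned for the GPUs on the machine, with no manual dependency setup.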
NVIDIA deep learning containers are updated monthly to keep systems running at peak performance, and everyone who signs up for the registry can get access to the latest software releases and learn about new ways to use deep learning to reduce or eliminate repetitive tasks—and engage audiences that demand increasingly realistic content.