Stanford A.I. can realistically score computer animations just by watching them

In the early days of cinema, organists would add sound effects to silent movies by playing along to whatever was happening on screen. Jump forward to 2018, and a variation on this idea forms the basis of new work carried out by Stanford University computer scientists. They have developed an artificial intelligence system that is able to synthesize realistic sounds for computer animation based entirely on the images it sees and its knowledge of the physical world. The result: realistic sound effects at the touch of a button.

“We’ve developed the first system for automatically synthesizing sounds to accompany physics-based computer animations,” Jui-Hsien Wang, a graduate student at Stanford’s Institute for Computational and Mathematical Engineering (ICME), told Digital Trends. “Our approach is general, [meaning that] it can compute realistic sound sources for a wide range of animated phenomena — such as solid bodies like a ceramic bowl or a flexible crash cymbal, as well as liquid being poured into a cup.”

The technology that makes the system work is pretty darn smart. It takes into account the varying positions of the objects in the scene as they were assembled during the 3D modeling process. It identifies what these objects are, and then predicts how they will affect the sounds being produced, whether by reflecting, scattering, or diffracting them.

“A great thing about our approach is that no training data is required,” Wang continued. “It simulates sound from first physical principles.”
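To give a loose sense of what "simulating sound from first physical principles" can mean, here is a minimal sketch of modal synthesis, a classic physics-based technique in which a struck rigid object's ring is approximated as a sum of damped sinusoids. This is only an illustration of the general idea, not the Stanford team's actual method, and the mode frequencies, dampings, and amplitudes below are made-up values chosen to loosely resemble a small ceramic bowl.

```python
import math

def modal_tone(freqs_hz, dampings, amps, duration=1.0, sample_rate=44100):
    """Approximate a struck object's ring as a sum of damped sinusoids.

    Each mode contributes a * exp(-d * t) * sin(2*pi*f*t); the mix is
    normalized so the peak amplitude is 1.0.
    """
    n_samples = int(duration * sample_rate)
    samples = []
    for i in range(n_samples):
        t = i / sample_rate
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, a in zip(freqs_hz, dampings, amps))
        samples.append(s)
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

# Hypothetical modes: higher partials decay faster, as on real ceramics.
bowl = modal_tone(freqs_hz=[520.0, 1310.0, 2440.0],
                  dampings=[3.0, 5.0, 8.0],
                  amps=[1.0, 0.5, 0.3])
```

Real physics-based systems like the one described here go much further, deriving the sound field from the animated geometry and its surroundings rather than from hand-picked mode parameters.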

As well as helping more quickly add sound effects to animated movies, the technology could also one day be used to help designers work out how products are going to sound before they are physically produced.

There’s no word on when this tool might be made publicly available, but Wang said that the team is currently “exploring options for making the tool accessible.” Before it gets to that point, however, the researchers want to improve the system’s ability to model more complex objects, such as the lush reverberating tones of a Stradivarius violin.

The research is due to be presented as part of ACM SIGGRAPH 2018, the world’s leading conference on computer graphics and interactive techniques. Take a second to feel sorry for the poor Pixar foley artist at the back of the hall who just bought a new house!

Article source: https://www.digitaltrends.com/cool-tech/stanford-system-creates-sound/
