Kleriq said:
So do you assume that the AudioGeometry component just holds a pointer to some sort of mesh resource?
Pretty much. Ideally the mesh should be shared if it is instanced multiple times in the scene (each instance with its own transform matrix).
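A minimal sketch of what such a component might look like in C++ (all names are hypothetical, not from any particular engine; the assumption is an immutable shared mesh resource plus a per-instance transform):

```cpp
// Hypothetical sketch: an AudioGeometry component that shares one mesh
// resource across many scene instances, each with its own transform.
#include <cstdint>
#include <memory>
#include <vector>

// Shared, immutable acoustic mesh data (vertices, triangles, materials).
struct AudioMesh
{
    std::vector<float>    vertices;    // xyz triples
    std::vector<uint32_t> indices;     // triangle vertex indices
    std::vector<uint32_t> materialIDs; // per-triangle acoustic material
};

// 4x4 transform matrix, as commonly used in scene graphs.
struct Matrix4 { float m[16]; };

// Per-instance component: points to the shared mesh, owns its transform.
struct AudioGeometry
{
    std::shared_ptr<const AudioMesh> mesh;         // shared across instances
    Matrix4                          localToWorld; // unique per instance
};
```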
@aressera Wow, I had no idea audio was this interesting and advanced! I feel mind-blown, and a bit ashamed that I have completely ignored it all these years. But I'm only realizing this now. ;D
Thanks for the clear summary; I will read your papers. I've always thought my realtime GI system might eventually be useful for audio too, so maybe one day I'll come back with related questions…
Regarding the friction example, I did not mean synthesizing sound from materials and physics. I know that's still out of reach.
I meant that we have a few samples, e.g. wood scratching over metal and wood bumping on metal. Collision/contact information from physics would then select the 'proper' sample, roughly as in the sketch below.
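As a hypothetical sketch of that idea (all names, materials, and thresholds are made up for illustration, not from any real physics API), the contact data could drive the selection like this:

```cpp
// Hypothetical sketch: picking a pre-recorded sample from physics contact
// info. An impulsive collision triggers a bump one-shot; sustained sliding
// contact triggers a looped scrape sample.
#include <string>

enum class Material { Wood, Metal };

struct ContactEvent
{
    Material a, b;
    float normalSpeed;     // closing speed along the contact normal (m/s)
    float tangentialSpeed; // sliding speed in the contact plane (m/s)
};

std::string selectSample(const ContactEvent& c)
{
    const float impactThreshold = 0.5f; // below this, too soft to hear
    const float slideThreshold  = 0.1f;

    const bool woodOnMetal =
        (c.a == Material::Wood && c.b == Material::Metal) ||
        (c.a == Material::Metal && c.b == Material::Wood);

    if (woodOnMetal)
    {
        if (c.normalSpeed > impactThreshold)
            return "wood_bump_metal.wav";   // impulsive collision
        if (c.tangentialSpeed > slideThreshold)
            return "wood_scrape_metal.wav"; // sustained friction
    }
    return ""; // no audible sample for this contact
}
```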