Originally Posted by ChrisCall
Just speaking from my own experience here, I've found it handles spoken word exceptionally well in general.
I've heard some fantastic results on rap/hip-hop tracks and the like. The flip side is that when the instrumentation is bare, every little missed detail stands out. It's easier to scrub in a spectral editor since there isn't much sound on the spectrum to dig through ... but it all sticks out.
But really, since it's trained on voice, a lot depends on how well it recognizes a specific type of vocal sound, and how often it mistakes certain instrumentation for voice. That's why having multiple models could prove very useful: each one makes different mistakes, so where one bleeds, another may not. Reverb kinda fits into that category as well. This model is trained on different music than the primary AI over in the other thread, and the primary one can't handle reverb nearly as well as this one.
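To make the "multiple models" idea concrete, here's a minimal sketch of one common way to ensemble two separators' outputs: combining the vocal magnitude spectrograms each model estimates. Everything here is my own illustration (the function name, the min/max/avg strategies, plain NumPy arrays as inputs), not how any specific model in these threads actually works.

```python
import numpy as np

def ensemble_vocals(mag_a: np.ndarray, mag_b: np.ndarray,
                    mode: str = "min") -> np.ndarray:
    """Combine two models' estimated vocal magnitude spectrograms.

    mag_a, mag_b -- vocal estimates from two different separation
    models, computed with identical STFT settings, shape
    (freq_bins, frames).
    """
    if mag_a.shape != mag_b.shape:
        raise ValueError("spectrograms must share STFT settings and shape")
    if mode == "min":
        # Conservative: keep only energy both models agree is voice
        # (less instrument bleed, but more missed vocal detail).
        return np.minimum(mag_a, mag_b)
    if mode == "max":
        # Permissive: keep anything either model calls voice.
        return np.maximum(mag_a, mag_b)
    # Default: simple average of the two estimates.
    return (mag_a + mag_b) / 2.0
```

You'd then resynthesize audio from the combined magnitudes (typically reusing the mixture's phase, as spectrogram-masking separators usually do). The "min" combine is the trade-off I'd expect to help most with instrumentation being mistaken for voice.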