Blending between inputs
Hi all,
I'm trying to create a 2D character capable of dynamic lip sync and facial expressions. I've created all the visemes and a few facial expressions, and everything was working well using an Additive Blend in my State Machine, which I control from JavaScript. My lip sync works really well.
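For reference, here's a stripped-down sketch of how I'm driving the blend inputs from JavaScript with the @rive-app/canvas runtime (the state machine and input names below are just placeholders from my own setup):

```js
import { Rive } from "@rive-app/canvas";

// Placeholder names -- swap in your own file/state machine/input names.
const r = new Rive({
  src: "character.riv",
  canvas: document.getElementById("avatar"),
  stateMachines: "FaceMachine",
  autoplay: true,
  onLoad: () => {
    const inputs = r.stateMachineInputs("FaceMachine");
    const visemeAA = inputs.find((i) => i.name === "viseme_AA");
    const smile = inputs.find((i) => i.name === "smile");

    // Dial the blend inputs on a 0-100 range to match the Blend State.
    visemeAA.value = 100;
    smile.value = 100; // this is where the visemes stop showing
  },
});
```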
I'm now trying to include my facial expressions, and I noticed that if I make my character smile at 100%, the lip sync animations stop completely. So I went back to Rive and, indeed, if I dial up my "smile" input (which sits above my visemes in the input list), any viseme mouth shapes get ignored completely.
I guess I was expecting the "smile" to get "added" to the mouth shape rather than override it. Is there a way to get inputs working this way, please? Something like how Shape Keys work in Blender?
By the way, I spent an hour or so writing my own blend functionality in JavaScript, but it just made my character's mouth move a lot less (like a bad ventriloquist!) when I introduced the smile :o)
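In case it helps explain the "ventriloquist" effect: I suspect what I wrote was effectively a normalised weighted average, which dilutes every target as more of them become active, rather than summing each target's offset from the neutral pose. A minimal sketch of the difference, with made-up point data:

```js
// Neutral vertex position and two morph targets (as absolute positions).
const neutral = { x: 0, y: 0 };
const open = { x: 0, y: 8 };   // mouth-open viseme
const smile = { x: 4, y: -2 }; // smile corner pull

// What I wrote first: a normalised weighted average.
// As the smile weight rises, the open-mouth motion shrinks -- ventriloquist!
function averageBlend(weights) {
  const total = (weights.open + weights.smile) || 1;
  return {
    x: (open.x * weights.open + smile.x * weights.smile) / total,
    y: (open.y * weights.open + smile.y * weights.smile) / total,
  };
}

// What I expected: sum each target's offset from the neutral pose.
function additiveBlend(weights) {
  return {
    x: neutral.x + (open.x - neutral.x) * weights.open + (smile.x - neutral.x) * weights.smile,
    y: neutral.y + (open.y - neutral.y) * weights.open + (smile.y - neutral.y) * weights.smile,
  };
}

console.log(averageBlend({ open: 1, smile: 1 }));  // { x: 2, y: 3 } -- mouth only half as open
console.log(additiveBlend({ open: 1, smile: 1 })); // { x: 4, y: 6 } -- both effects at full strength
```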
Any help much appreciated!
Hey, it could be an issue with overlapping transform spaces. Would you mind sending over a copy of the .rev so we can check it? You can always open a support ticket with us if you want to keep the file private.
Yeah. If you set the smiling animation to 100%, only that animation will be seen in the Blend state. If you want the expressions to work on top of the visemes, you can separate the expressions from the visemes: use one blend state for the visemes and another blend state for the expressions. For this, you have to use different controls for the mouth, one only for visemes and another for expressions.
Hi JC, thanks for the reply and the suggestion. I've done what you suggested and moved my emotional expressions into a different Blend State, but now it's behaving very strangely. Did I set up the linkage incorrectly?
All my emotional facial expressions are in the right-hand Blend State, but with this new setup my character is no longer blinking, and when I change one of my inputs there's a weird jitter and the facial expressions are not at all what my inputs dictate. I have a looped blinking animation and a neutral face state at the bottom of my first Blend stack:
but I don't have these in my new expressions Blend State. Could this be part of the problem?
Where might I have learned about using multiple Blend States? Is there any documentation on this that I might have missed?
I see. You don't need to use Any State in this case. You just need to connect Entry to the Blend state. For the Blend state of the expressions, you need to use a new layer, like this example:
To export the .rev, you have to use the "Export for backup" option.
Hi JC,
(as I suspected, I did have to upgrade to the pro version in order to export the .rev file)
Sorry
Okay. I want to tell you a couple of things first. When I talk about expressions, I don't mean a smile or a look of surprise. Those types of expressions are best not mixed with visemes. I'm referring to making the character say the same word with a happy or sad attitude, animating the sides of the mouth. That movement can be mixed with the visemes, but for this you need double control of the mouth: one control to move or create the visemes, and another on top to move the sides of the mouth and create happy, neutral, and sad poses.
The problem with your file is that you are using the same vertices for the visemes and the expressions. As you know, the layers of the state machine are mixed from right to left, so because the visemes and the expressions key the same vertices, the expression layers are mixed on top of the visemes, and that is why the visemes stop working when you activate the expressions. What you can do is create some controls, using bones connected to the vertices, so you can keep using the vertices for the visemes, as you have now, and use the bones (which move the same vertices) for the expressions. I will prepare a small demo tomorrow so you understand what I mean.
Hi JC, thanks again for spending time on this for me. I think I understand what you're saying, that I need to create additional vertices or use bones to combine emotional expressions with visemes, but this isn't how I expected this to work at all.
Here's an example of how I think this should work in the same Blend state, on the same layer:
If I have a vertex whose coordinates are (0, 0) in the neutral (default) state, and I apply one morph which puts this vertex at (2, 0) and another which puts it at (4, -4), then the resulting shape should display that vertex at (6, -4) when the states are ADDITIVE and both inputs are dialed up to 100%. Because each morph's x and y offsets from the neutral pose are added together, right? This is how Morpher works in 3ds Max and Shape Keys in Blender:
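To make that concrete, here's the rule I have in mind as a tiny JS sketch (made-up data; weights are 0-1, where 1.0 = 100%):

```js
// Additive blending: each morph contributes its offset from the neutral
// pose, scaled by its weight; the offsets are simply summed.
function blendAdditive(neutral, morphs) {
  return morphs.reduce(
    (acc, { target, weight }) => ({
      x: acc.x + (target.x - neutral.x) * weight,
      y: acc.y + (target.y - neutral.y) * weight,
    }),
    { ...neutral }
  );
}

const neutral = { x: 0, y: 0 };
const result = blendAdditive(neutral, [
  { target: { x: 2, y: 0 }, weight: 1.0 },  // morph A at 100%
  { target: { x: 4, y: -4 }, weight: 1.0 }, // morph B at 100%
]);
console.log(result); // { x: 6, y: -4 } -- exactly what I'd expect
```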
If this is not how this works, then perhaps the Blend in Rive should not be called Blend Additive - perhaps it should be called something else?
I actually wrote my own SVG animation library in JS some years ago, and it allowed for this additive blending when morphing shapes. In fact, in my library I could dial a morph value up to, say, 200%, and the vertices would move beyond those in the morph target (in the same direction, of course), allowing me to exaggerate morphs without having to create new targets. I think this should also be the case in Rive, and if it isn't, inputs shouldn't accept values greater than 100.0 or less than 0.0.
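And with that same rule, over-driving an input just extrapolates along the same direction. Reusing the `blendAdditive` sketch from above:

```js
// Dialling morph A up to 200% pushes the vertex past the target,
// in the same direction -- a free way to exaggerate a pose.
const exaggerated = blendAdditive({ x: 0, y: 0 }, [
  { target: { x: 2, y: 0 }, weight: 2.0 }, // 200%
]);
console.log(exaggerated); // { x: 4, y: 0 }
```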
Okay. I see what you mean. Yes, I think this should work on the same layer. Let me do a little test and I'll tell you.
Okay. I've been doing some testing and talking to the developers about this, but it doesn't seem like this type of mix is possible. The mix is always done on top, so when you use the smile, the viseme stops working. They tell me they are working on adding an option that would allow this type of mix. Sorry for all this confusion. What you're saying makes sense, but right now it's not possible to do it this way. The only way I can think of is to use the double control I mentioned before.
Hi JC, that's a real shame. So I'd have to build multiple versions of each viseme, one for each emotional expression? That's going to be a ton of work for each character, and I suspect it makes Rive unsuitable for my purposes. As a bit of context, I'm building a virtual assistant plug-in which currently works in 3D using ThreeJS but which, even after ruthless optimisation, is still a total download of ~60MB and runs slowly on older/slower devices, which is why I'm investigating 2D fallbacks. The characters are currently driven by JSON files that describe the facial expressions and body animations. The 2D version would have to work in exactly the same manner, so each character would need the same suite of facial expressions, visemes and body animations. You can see a sneak preview of the 3D version here:
Do you have any idea when your dev team is considering including the additive blends? If it is not any time soon then I need to look for another solution - I may even dust off my own SVG animation library!
This is a massive shame for me because I was expecting great things from Rive and I've already spent weeks of my time evaluating it. I really think you guys should rename the Additive Blend if it is not additive - this is very misleading. Perhaps something like "top one overrides everything else Blend"?
Thanks for all your time and effort on this,
I can't tell you when this option will be available. It's something they are starting to work on, and they're looking for the best approach. I understand what you're saying about not being able to use Rive, but I want to show you this demo I've been working on. It's more cartoony, but maybe it will help you understand what I mean by the idea of double control.

In this demo I use joysticks to create the visemes. It isn't necessary to use joysticks, but I did think it was good to use controls. By a control I mean weighting several vertices to a bone and using the bone to deform the mouth. This way you can reduce the number of keys in the animation, something that can affect performance and file size. I have seen in your example that you key the vertices of all the shapes of the mouth, which is a lot of keys.

In this demo you can see that I use a double control on the right side. One is to move the corners of the mouth and thus create the visemes. This control is nested within another one, which is the one I use for the expression. This way I use two different transform spaces, and the expression does not cancel the viseme. I hope this helps.
Hey JC, reviving this thread for a question related to the mouth demo you shared (it looks awesome, by the way). I saw a clip where this animation was lip-synced to some dialogue. I'm wondering how you managed that, because I don't see a way to scrub through audio in the timeline to accurately key all of my visemes to my imported dialogue.
Cheers,
B