AI-based emotive graphics system research

During my time at SFU, I worked with Professor Steve DiPaola and his team of PhD students on their research into an AI-based emotive graphics system. My role focused on the animations, models, and renderings of the avatars, with the goal of making the avatars more natural and affective for health and educational use. I worked mainly in Maya, Unity, and Photoshop.

Contributions

Animations


Models


Renderings

Animations

My responsibilities for the animations were creation, organization, and quality control. When an AI avatar delivers a speech, it performs a series of small animations mimicking what a person would do. I helped create these animations in Maya, including beat gestures such as chopping motions and idle gestures such as shifting weight from side to side.

an AI avatar performing a chopping beat gesture
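
To give a sense of how a gesture like this was authored, here is a minimal sketch of keying a chop-style beat gesture with Maya's Python commands. The controller name, frames, and angles are hypothetical; the actual rigs and poses were different.

```python
import maya.cmds as cmds

# Hypothetical controller name; the real rigs used their own naming.
ARM_CTRL = "avatar_R_forearm_ctrl"

# A simple chop: raise the forearm, drop it sharply, then settle back.
for frame, angle in [(0, 0.0), (6, -35.0), (10, 15.0), (16, 0.0)]:
    cmds.setKeyframe(ARM_CTRL, time=frame, attribute="rotateZ", value=angle)
```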

I organized all the animations the research team had collected so far into named categories. This mattered because the programming side of the research could then implement the animations on the avatars much more easily.


the animation list, with highlights noting different issues
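
The list itself lived in a spreadsheet, but the naming convention is easy to sketch in plain Python. The categories and clip names below are hypothetical examples of the scheme, not the actual list.

```python
import csv

# Hypothetical clip names illustrating the category/name convention.
GESTURE_CATEGORIES = {
    "beat": ["beat_chop_single", "beat_chop_double"],
    "idle": ["idle_weight_shift_left", "idle_weight_shift_right"],
    "talk": ["talk_open_palms", "talk_point_forward"],
}

# Write a flat list the programming side can read when wiring clips
# into the avatar's animation controller.
with open("animation_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["category", "clip", "status"])
    for category, clips in GESTURE_CATEGORIES.items():
        for clip in clips:
            writer.writerow([category, clip, "needs_review"])
```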

For quality control, I made sure the animations had no clipping issues and played at a reasonable speed, and I fixed any clipping I found. One of the most frequent problems was that a gesture had been animated with only the starting and ending poses in mind, with no thought for the movement in between, so the hands would often clip through each other on the way to the ending pose. Most of my fixes involved animating the trajectory of the hands between the starting and ending poses.


a clipping issue
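
A typical fix looked something like the following maya.cmds sketch: sample the interpolated wrist position halfway through the gesture, then re-key it with an outward offset so the hands travel around each other instead of through. The controller name and offset are hypothetical.

```python
import maya.cmds as cmds

def key_inbetween(ctrl, start, end, out_offset=3.0):
    """Key an in-between pose at the midpoint of a gesture, pushing the
    hand outward so it clears the other hand instead of clipping."""
    mid = (start + end) / 2.0
    # Read the interpolated position Maya would produce at the midpoint.
    x, y, z = cmds.getAttr(ctrl + ".translate", time=mid)[0]
    # Re-key it with an outward offset on X to steer the trajectory.
    cmds.setKeyframe(ctrl, time=mid, attribute="translateX", value=x + out_offset)
    cmds.setKeyframe(ctrl, time=mid, attribute="translateY", value=y)
    cmds.setKeyframe(ctrl, time=mid, attribute="translateZ", value=z)

# Hypothetical usage on a left-wrist controller over frames 0-20.
key_inbetween("avatar_L_wrist_ctrl", 0, 20)
```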

Because the animations were made by many different students, their speeds varied. I adjusted them so each gesture plays slightly faster than normal; that way, the blending during transitions between animations does not make the motion look slower than usual.
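
In Maya this amounts to uniformly rescaling the keyframes on each animated control. A minimal sketch, assuming a ten percent speed-up and hypothetical node names:

```python
import maya.cmds as cmds

SPEED_FACTOR = 0.9  # keys play in 90% of their original time

def speed_up(nodes, factor=SPEED_FACTOR):
    """Rescale keyframes so a gesture plays slightly faster than authored."""
    for node in nodes:
        # Only touch nodes that actually carry animation keys.
        if cmds.keyframe(node, query=True, keyframeCount=True):
            cmds.scaleKey(node, timeScale=factor, timePivot=0)

# Hypothetical usage on a selection of animated controllers.
speed_up(cmds.ls(selection=True))
```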

Models

The research called for multiple avatars, including an androgynous one. I created an androgynous avatar in Adobe Fuse and imported it into Unity. Since Fuse only offers male and female base models, there were two ways to build an androgynous avatar: start from a male model and add feminine features, or start from a female model and add masculine features. I chose the latter simply because I thought it would be easier.


the androgynous avatar viewed in Fuse


applying the HumanIK skeleton in Maya

Renderings

I was tasked with rendering the androgynous avatar I had created, focusing mainly on lighting, texture files, and shaders. I used a standard three-point lighting setup to create soft shadows, making the avatar look more approachable. In Photoshop, I applied the sharpen tool, colour correction, and contrast adjustments to the texture files to bring out details such as the freckles on the avatar's face. Shaders were the main element that brought the avatar to life, since the shading mode, roughness, and normal (bump) maps all change how light reacts to the materials. For example, you can see the light source reflected in the avatar's eyes.
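
The texture work was done by hand in Photoshop, but the same kind of pass can be sketched with Python's Pillow library. The file names and adjustment amounts below are hypothetical:

```python
from PIL import Image, ImageEnhance, ImageFilter

# Hypothetical texture file exported from Fuse.
tex = Image.open("androgynous_diffuse.png")

# Unsharp mask to bring out fine skin detail such as freckles.
tex = tex.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))

# Mild contrast and colour boosts, similar to the Photoshop adjustments.
tex = ImageEnhance.Contrast(tex).enhance(1.10)
tex = ImageEnhance.Color(tex).enhance(1.05)

tex.save("androgynous_diffuse_adjusted.png")
```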


androgynous avatar in Fuse


androgynous avatar rendered in Unity


basic three-point lighting of the scene


an avatar doing a talking gesture
