Please tell us about yourself

On February 11, when the annual Scientific and Technical Awards of the Academy of Motion Picture Arts and Sciences are presented in Beverly Hills, among those receiving them will be an Indian American technologist, Kiran Bhat.

The team of Kiran Bhat, Michael Koperwas, Brian Cantwell, and Paige Warner will receive the award for the design and development of the ILM (Industrial Light & Magic) facial performance-capture solving system.

Simply put, Kiran Bhat is the brain behind the technology that made the facial expressions of Hulk (The Avengers) and Tarkin (Rogue One) look real. Though it took him and his team seven years to build the technology, the results were as sweet as anyone would imagine.

Kiran won the Oscar for technical achievement in 2017 (Sci-Tech Academy Awards) for developing the facial performance-capture system along with his team at ILM. He holds a degree in electrical and electronics engineering and mechanical engineering from BITS Pilani, and a PhD in robotics from Carnegie Mellon University.

The technology he developed at ILM has been used in movies like The Avengers (for Hulk), Pirates of the Caribbean (for Davy Jones), Teenage Mutant Ninja Turtles, Warcraft, Star Wars: Episode VII, and Rogue One: A Star Wars Story. With his startup Loom.ai, Kiran is on a mission to build the best 3D representation of every face on the planet.

Original Links:

https://yourstory.com/2017/06/techie-tuesdays-kiran-bhat

http://www.thehindu.com/todays-paper/tp-national/tp-tamilnadu/Another-Coimbatorean-wins-Oscar-technical-award/article17015979.ece

What did you study?

After completing his schooling (Class VIII to XII) in Coimbatore, Mr. Bhat went on to do a double degree in EEE and mechanical engineering at BITS Pilani and a Ph.D. in robotics and artificial intelligence from the School of Computer Science, Carnegie Mellon University, Pittsburgh.

How did you end up in such an offbeat, unconventional and fascinating career?

Kiran was born and brought up in Thiruvananthapuram. His father was a rocket scientist at ISRO who later decided to move to Coimbatore to start his own company. Kiran recalls being an academically oriented kid who was exposed to programming early on. He encountered lateral thinking for the first time when he moved to a school in Coimbatore after Class VII.

Kiran loved maths, physics, and chemistry. He could speak Malayalam, Tulu, and Tamil, and he even learnt French in Classes XI and XII. Kiran came first in the district in the Class XII board examinations.

He recalls his high school days: “I was blown away by the movies The Abyss and The Terminator. Looking at the computer-generated characters, I wondered what it would take to build such a thing.”

Kiran’s choice to go to BITS Pilani after Class XII was somewhat contentious in his family, as he had also secured admission to several universities closer to home (in Chennai, Trichy, and other southern cities).

BITS Pilani had people from different geographies, and it felt like more than just an academic institute.

Tell us about your experience at BITS Pilani

You could find your corner in the world and do what you wanted to do. There was no rat race to achieve just one goal. What made BITS special was that a lot of people came to the college with similar drive and aspirations. I’ve always believed that the best places to study are the ones where you can learn from others.

The 41-year-old technologist was already interested in robotics and digital control theory while at BITS Pilani. “I was always fascinated with understanding movement and nature. So, studying facial movements and representing them in a computer felt like the ultimate challenge in this respect. Representing faces in a computer is tricky to get right: we have to capture both the large-scale movement and the subtle nuances, as they are very important in conveying emotion. The best way to solve this was to use computer vision technology, which I had been pursuing right from my Ph.D. days,” he told The Hindu by e-mail.
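To give a flavour of what such a solve involves, here is a toy blendshape-weight fit, a generic technique in facial capture, and emphatically not ILM's proprietary solver. All array shapes and numbers are invented for the example: a captured face pose is explained as a weighted combination of sculpted shapes.

```python
import numpy as np

# Hypothetical tiny face: 4 vertices (12 flattened coordinates), 3 blendshapes.
rng = np.random.default_rng(0)
neutral = rng.normal(size=12)            # rest-pose vertex positions, flattened
blendshapes = rng.normal(size=(12, 3))   # per-shape displacement from neutral

true_w = np.array([0.7, 0.1, 0.4])
observed = neutral + blendshapes @ true_w  # stand-in for a captured face pose

# Solve for the weights that best explain the observation, then clamp to
# [0, 1], the usual valid range for blendshape weights.
w, *_ = np.linalg.lstsq(blendshapes, observed - neutral, rcond=None)
w = np.clip(w, 0.0, 1.0)
print(np.round(w, 2))  # recovers approximately [0.7, 0.1, 0.4]
```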

Since Kiran was interested in continuous math, calculus, and mechanics, he chose mechanical engineering, which covered all three. Because of his interest in robotics, which required him to learn control theory, Kiran opted for electrical and electronics engineering (in addition to mechanical engineering) at the end of his first year. He adds, “It made a complete package as I could build the whole system and drive it (and give it a brain).”

He started tinkering with robots as a hobby, but soon it became an obsession. In his second year, Kiran built a PD controller. He says, “We had a robot arm in the lab. I was trying to get it to move in space. The control algorithm which allowed that was written with fuzzy logic.”
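As a minimal sketch of the idea (not the fuzzy-logic controller from the lab), here is a PD controller driving a single joint toward a target angle; the gains and the unit-inertia model are illustrative assumptions.

```python
import math

def pd_step(theta, theta_dot, target, kp, kd, dt):
    """One step of a PD controller on a 1-DOF joint with unit inertia."""
    torque = kp * (target - theta) - kd * theta_dot  # P term + D term
    theta_dot += torque * dt   # torque equals angular acceleration here
    theta += theta_dot * dt
    return theta, theta_dot

theta, theta_dot = 0.0, 0.0
for _ in range(1000):
    theta, theta_dot = pd_step(theta, theta_dot, target=math.pi / 4,
                               kp=20.0, kd=5.0, dt=0.01)
print(round(theta, 3))  # settles near the target, pi/4 ≈ 0.785
```

The proportional term pulls the joint toward the target, while the derivative term damps the motion so it does not overshoot indefinitely.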

Kiran was surrounded by some great minds in the robotics lab. He shares the story of Sartaj Singh, a hardware guru, who once hijacked a fully functional robot (bought for the institute) in the lab, removed its motors, replaced them with his own, and wrote his own controllers. He did all this just because the robot had a lousy operating system and a control box that was frustrating to program. Kiran recalls, “There were a bunch of guys like him who were not shy or afraid of anything. They convinced the professors to set up a collaboration with Motorola, so we had all these microcontroller chips and evaluation boards coming in.”

Kiran and the others in the robotics lab were a diverse combination of people interested in control theory, building mechanisms, DSPs, and fabrication. Everyone fed off and challenged one another creatively. They spent so much time in the lab that they almost started living there.

What did you do next?

Looking back, Kiran believes that the five years (for the dual degree) at Pilani gave him enough time to sink his teeth into something substantial.

Eventually, he graduated with a CGPA of 9 and enrolled for a PhD at CMU in 1998. Choosing the university was a no-brainer, given that it was the best in robotics at the time.

Kiran felt that CMU was like BITS Pilani, except much smaller. His class had 16 students from different academic (AI, perceptual computing, hardware) and geographical backgrounds (Chile, Mexico, Europe, China, India, Canada, and the US). Since robotics students tended to attend most of the computer science (CS) classes and a few dedicated robotics classes (like control theory), Kiran ended up interacting with a lot of CS students.

A lot of the research going on then (1998-2004) formed the basis of today's deep learning and AI revolution. At the time, these were considered esoteric research topics.

In his first two years at CMU, Kiran was part of a robotics team working on two all-terrain vehicles (ATVs). The vehicles would roam around the CMU grounds and communicate with each other. Kiran worked on the perception aspect of the ATVs, i.e., how to identify sidewalks and roads. He says, “Pittsburgh has crazy weather and it snows a lot. Snow with slush becomes grey snow. The robot has to see the environment and make sense of the world.”

Kiran had three advisers at CMU. His mentor was Pradeep Khosla (now the Chancellor of UC San Diego); the other two were Prof. Steven Seitz (now on the faculty at the University of Washington) and Prof. Jessica K. Hodgins.

What was your career path?

At the end of his second year, Kiran came to ILM for an internship, which dramatically changed his perspective on computer vision. Until then his computer vision work had been more AI-based, but here it was focused on graphics. “I got a chance to work on data from the actual cameras,” he says.

He had to figure out how to solve for the camera motion in a real video sequence. He says, “For example, in a Transformers movie where Michael Bay is blowing shit up on the screen, how will you put the CGI robots into the (film) plate?”

Many of the shots you see in these movies are CGI layers. For all this to work, the most basic thing is to know how the camera was moving in the real world relative to the objects in the scene, and what it was doing optically. Kiran explains that one can't put too many sensors on the camera in real-world situations, because directors hate attaching a specialised motion-capture setup or an optical encoder to the camera.

Luckily, mathematics is there to help. Kiran explains,

If you have a video sequence, you can ask: what movement of the camera resulted in the video you're seeing? That turns out to be a standard computer vision problem called structure from motion, or camera motion reconstruction. There are many ways of solving it. The more complex the sequence is, the harder it is to automate.

Kiran suggested solving this with a program (an algorithm) that would analyse the footage. Artists would still be needed, because computer vision algorithms are fragile when the environment changes in complex ways. An explosion can render all of them useless: most of these algorithms assume some visual continuity in the scene, and pixels change colour dramatically in an explosion. Similarly, motion blur, smoke, and all the other things used to make a shot cinematically interesting throw off computer algorithms. Fully automated algorithms won't work in such cases.
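For the curious, here is a minimal two-frame sketch of that camera-motion problem using OpenCV. The file names and the intrinsics matrix K are placeholders, and a production matchmove solver does far more than this, but the core recipe (match features, then recover the relative camera pose robustly with RANSAC) is the standard one.

```python
import cv2
import numpy as np

# Two consecutive frames of a shot (placeholder file names).
img1 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

# Assumed camera intrinsics (focal length and principal point, in pixels).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Detect and match features between the frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix with RANSAC, then recover the camera's
# rotation and (up-to-scale) translation between the two frames.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```

The RANSAC step is also where the fragility Kiran describes shows up: explosions, smoke, and motion blur destroy the feature matches that the whole estimate rests on.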

Mr. Bhat, who had his first stint in Hollywood in 2000 when he spent a semester interning at ILM, returned to ILM full-time in 2006, and the team developed the ILM facial performance-capture solving system. Hulk (The Avengers), Maz (Star Wars: Episode VII), and the four Turtles (Teenage Mutant Ninja Turtles) are some of the famous characters for which this technology was used, he says.

When Kiran joined ILM, they were working on Star Wars: Episode II and trying to make Yoda's cape look real. They were struggling with the cloth simulation. There were many problems where the ILM team wanted to mimic nature but didn't know how.

Kiran suggested,

We just have to look at videos of real cloth, measure perceptually how real cloth moves, and get a program to analyse it and match it to the computer simulation (to make it look real). It's called an inverse problem: we (computer vision people) look at the video and figure out a model which could have caused that video.
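Here is a toy illustration of that inverse problem. A closed-form damped oscillator stands in for the real cloth simulation, and synthetic noisy samples stand in for motion tracked from video; all parameter values are made up.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params, t):
    """Closed-form damped oscillator: a stand-in for a cloth-like simulation."""
    stiffness, damping = params
    omega = np.sqrt(max(stiffness - (damping / 2) ** 2, 1e-9))
    return np.exp(-damping / 2 * t) * np.cos(omega * t)

t = np.linspace(0.0, 5.0, 200)
rng = np.random.default_rng(0)
# Synthetic "measured" motion, standing in for data tracked from video.
observed = simulate([25.0, 1.2], t) + rng.normal(0.0, 0.01, t.size)

# Fit the model parameters that best explain the observation.
fit = least_squares(lambda p: simulate(p, t) - observed, x0=[20.0, 1.0])
print(np.round(fit.x, 2))  # should recover roughly [25.0, 1.2]
```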

From talking to friends at ILM, Kiran knew the most difficult challenges the VFX industry faced at that point in time. Everything lined up nicely with what he wanted to do personally.

His research was on how to make numerical simulation match reality, specifically focused on dynamic objects like cloth, fluids, and rigid bodies. Numerical simulations solve partial differential equations (PDEs), which are very sensitive to initial conditions. To make these numerical systems behave well, people add damping into the system, but then it doesn't feel real.
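A small sketch of why the damping gets added: under the same explicit Euler scheme, a stiff undamped spring blows up numerically, while the damped version stays stable. The stiffness and damping values here are purely illustrative.

```python
def simulate(k, c, x=1.0, v=0.0, dt=0.01, steps=500):
    """Forward (explicit) Euler on a unit-mass spring: x'' = -k*x - c*v."""
    for _ in range(steps):
        a = -k * x - c * v
        x, v = x + v * dt, v + a * dt   # both updates use the old state
    return x

print(abs(simulate(k=900.0, c=0.0)))   # ~1e9: the undamped stiff spring blows up
print(abs(simulate(k=900.0, c=30.0)))  # ~1e-26: damping keeps the scheme stable
```

The damped run is stable, but all that extra energy loss is exactly what makes the simulated motion feel dead compared to real cloth.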

How did Loom.ai happen?

He worked on the technology for six years, from 2009 until March 2015, when he left ILM. He is now the CTO and co-founder of Loom.ai, a San Francisco startup building advanced machine learning and VFX technology to bring virtual communications to life.

Movies now use an abundance of digital technologies. “I feel that the next few years will see a surge in high-quality digital characters, in part due to advances in performance capture,” he says.

“Look for what the top students around the world are excited by, and try to see what’s unique about those topics,” is Mr. Bhat’s message to students.

Last year, Cottalango Leon, who is also from Coimbatore, received the Technical Achievement Award with J Robert Ray and Sam Richards for the “design, engineering and continuous development of Sony Pictures Imageworks Itview”.