As Senior Vice President in the office of the CTO of LG Electronics Inc., Dr. Nandhakumar is currently leading strategic technology projects in the areas of next generation media & displays, smart cities & communities, and robotics. Dr. Nandhakumar has a Master’s degree in Computer Engineering from the University of Michigan, Ann Arbor, and a Ph.D. in Electrical Engineering from UT Austin. He is a senior member of the IEEE, and serves on the board of directors and advisory committees of various industry organizations, startups, incubators, and academic institutions. He recently won a Technology & Standards Leadership Award from the CTA for his contributions to the industry as the Chairman of CTA’s Technology Council.
Joshua: You have a very diverse history, and it would be great if you could just talk us through that briefly, your experience, your life journey, and how you ended up where you are at now, if you don't mind.
Dr. Nandhu: Sure. I can try to make it short. I did my Ph.D. at UT Austin, but I did my master's at University of Michigan Ann Arbor, in 1D pattern recognition, focused on biomedical applications, at the very early days of AI.
My Ph.D. was in computer vision, machine vision, and AI. Then I taught for nine years in academia: two years at the University of Texas at Austin and seven years at the University of Virginia, Charlottesville. During my teaching career, I supervised a number of Master's and Ph.D. students, supported by funding from NASA, the Air Force, NSF, and DARPA, in the areas of machine vision and robotics.
And then I moved to industry. I went to the Bay Area, commercialized some of these technologies at a mid-sized company, and then was recruited by LG to help start a new lab looking at advanced technologies and media processing. I joined LG in 1997, when we had already invested heavily in machine learning and artificial intelligence. We had hired really bright people from Stanford and other universities around the world, and they were working on speech recognition.
In those days, we worked on things like face recognition. But this was so long ago that the cost of that kind of processing hadn't come down enough; it was very expensive to implement in consumer products.
And connectivity was very poor. We still had dial-up modems. So we couldn't do back-end processing like we can do today, and move images to the cloud.
So we shelved a lot of that advanced R&D until the economics were right to bring those technologies to our customers' products. But we did work on areas such as fuzzy logic to help improve the behavior of appliances. My team's focus was very much on media.
Eventually, we ended up converting that lab into a standalone business unit, which exists today. It's called Triveni Digital. That business unit developed software products for the emerging digital television industry, specifically on the server side, inserting metadata into digital broadcast streams to make end devices much more intelligent, enabling features like program guides and interactivity. We worked on projects with PBS for deployment of interactive systems.
It's a profitable company, based in Princeton, New Jersey, where I lived for about eight or nine years. Then I moved to the West Coast to join a startup, and later moved back to LG's ThinQ organization. I was asked to look at areas that, a few years down the road, would become important revenue-generating services. So in 2007, I engaged with Netflix, and we were the first consumer electronics company to incorporate the Netflix service in our product.
And once we went down that path and Netflix saw the interest, they spun out a team that is now Roku. Roku was initially incubated within Netflix and spun off when LG engaged. That opened the door for a number of other CE companies, which followed and integrated the Netflix service into their TVs.
Joshua: With everyone moving not just to smart televisions but also to services like Netflix, watching things on Roku and Apple TV instead of a cable box, and choosing their own services (even more so now with Disney+ and Apple TV+), what was supposed to simplify things has ironically made them a lot more complicated, because there are so many more choices. And everyone seems desperate for some form of technology that helps us figure it all out.
At what point do things become easier?
Dr. Nandhu: I think in many ways we are already there, and we just don't know it. There are two aspects. One is personalization. Netflix has done a fantastic job from the very early days of collecting data, understanding what viewers want, and presenting viewers with options that they ultimately decide are right for them.
One of the features I talked about at the last AI Summit is something we have been refining over the years, and we now have it to a point where it's quite sophisticated. One aspect is personalization; the other is simply searching for content.
I gave this example. I was on a plane and saw a television show, and I forgot the name of it. When I came back, I went to my LG Q9, which has voice recognition and personalization technology.
I spoke into the remote and said, "I'd like to watch a television show based on a high school football team in Beverly Hills." And it returned the show to me. It also opened Netflix and went to episode one. I had no idea, of course, of the name of the show, or any actor or director. I had no idea it was on Netflix, and it figured that out for me. The complex search query was handled by a combination of ThinQ AI and Google.
We have an architecture that integrates with a number of other services, like Amazon and Google. Our TV generally decides which queries it can process through our ThinQ backend and which are appropriate for Google, waits for Google to send that information back to ThinQ, and then implements the appropriate action, which is very TV-specific: knowing that I have a Netflix subscription, that it's active, and so on.
So I was quite impressed, actually. This is the kind of thing that exists today, and many people are not familiar with it.
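[Editor's note] The routing Dr. Nandhu describes, where the TV decides which voice queries to handle through its own backend and which to forward to a cloud assistant, then maps the answer to a TV-specific action, can be sketched roughly as follows. This is a hypothetical, simplified illustration: the function names, intent lists, and result format are invented for the example and are not LG's actual ThinQ API.

```python
# Hypothetical sketch of voice-query routing on a smart TV (not LG's real API).
# Device-specific commands stay local; open-ended searches go to a cloud assistant,
# and the TV applies its own knowledge (e.g., active subscriptions) to the result.

LOCAL_INTENTS = {"volume", "channel", "input", "picture_mode"}  # illustrative list

def route_query(query: str) -> str:
    """Decide which backend should handle a spoken query."""
    words = set(query.lower().split())
    if words & LOCAL_INTENTS:
        return "thinq"       # device command: handle on the TV side
    return "assistant"       # open-ended search: forward to the cloud assistant

def act_on_result(result: dict, subscriptions: set) -> str:
    """Map an assistant's answer to a TV-specific action."""
    service = result.get("service")
    if service in subscriptions:
        return f"launch {service} -> {result.get('title')}"
    return f"show info for {result.get('title')}"

# The complex search goes to the assistant; the TV then knows the user's
# Netflix subscription is active and can launch the show directly.
backend = route_query("I'd like to watch a show about a high school football team")
action = act_on_result({"service": "netflix", "title": "Some Show"}, {"netflix"})
```

The key design point is the split: the device keeps control of anything stateful or hardware-specific, while delegating general knowledge queries to a service better equipped to answer them.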
Joshua: True - people don't know this tech exists. What, in your opinion, do we need to do, do you need to do, in order to get there?
Dr. Nandhu: I think it's just explaining these capabilities. And I think we as consumer electronics companies perhaps don't do enough, because we have just so many capabilities now, either in a smartphone or a TV. And to explain all of this to consumers is quite difficult.
But folks like you who write in-depth articles, influencers who post on forums, and enthusiasts find out about it. For common consumers, though, it still takes a while to become aware. And I know my kids can use Google Assistant and exploit its potential a lot more than I can.
I often say to my kids, "Oh, I didn't know that my phone could do that." And they knew it, just by talking to their friends at school. They're exploring all these functions and are just a lot more aware, so I think it might be a generational thing. The next generation communicates and finds out more about these capabilities, while we have our specific jobs that we're focused on. I'm not sure I know the exact answer to your question, but that's what I'm observing.
Joshua: Is it that younger generations are more comfortable with AI? Does all the data people may be sharing with their tech make them uncomfortable?
Dr. Nandhu: Yes, I think so, and this is not just an LG thing but an industry issue, right? It cuts across many different types of products and services, and it's definitely something we are addressing in the Consumer Technology Association as well. I don't know if you know, but I'm also the chairman of something called the Technology Council. It consists of a number of executives from many different industries: cable service providers, telcos, technology companies, all providing media technologies. We form a council within the CTA that advises senior management on the big trends and addresses specific areas, whether for standardization, for positioning in terms of marketing, messaging, and definitions, or for communications to government agencies about potential regulations, or even just highlighting certain issues like cybersecurity, AI, and robotics. All of these are areas we address.
So we discuss this not just within LG, but broadly, to make sure that the industry is doing the right thing. So this data and privacy issue is a very important topic that we all think about. And as you said, it's a double-edged sword. When you share data, you get some benefit out of it, and there is a risk of that data being misused.
So we recommend practices for protecting data at its different levels: within the device, in communication, and in the backend. These address security technology all the way from silicon up through software development processes, deployment processes, and users and how they might interact with devices.
But it is a question of user behavior: how much data do you share, and what benefit do you get out of it? As long as you're very careful about explaining that, and people see the benefits, [LG] users are generally willing to share some data in order to get those benefits. And we are careful about how we use data: we anonymize it and remove any personal identifiers when we store it.
There's also the aspect of control. I'm not a psychologist, but the observation has been that when you get to the next generation of technology, you gain some convenience, but you give up a certain amount of control. And I think there is that hesitation across generations. The typical example is automobiles. You go from a stick shift to an automatic, and some people dislike it because you got the convenience but gave up the performance and the control you had. Then you go from an automatic transmission to an autonomous vehicle, or even just lane keeping, and you find some hesitation. Some people don't want to give up that control and don't trust the thing. Do I really want to use my remote control to find things when I can just speak into it? There's always a little bit of a trust question: when you give up a little control, you don't quite trust the machine to do as good a job as you would.
Joshua: "No, I know where my stick shift is. I know where my keys are. I'm not touching that thing."
Dr. Nandhu: Yeah, exactly. Yeah. Once you get familiar with a certain technology, there is inertia against moving to something else, whatever that else might be. Especially if you have experienced less than 100% reliability.
But one important thing is the user experience. The user experience is very important, right? And I think at LG we are doing a pretty good job, especially if you look at our webOS TVs and our appliance UIs. We're trying to hide a lot of that complexity and make it simple to use, with all the AI in the back where you can't see it. I don't think consumers need to know they're using complex AI.
Joshua: Given your history and your 10,000-foot seat in AI and machine learning, what are you most excited about?
Dr. Nandhu: The types of AI decision-making systems have really multiplied. There are so many different approaches, and so many ways you can take an approach, adapt it for an application, or combine different approaches together, whether it's Markov models, recurrent neural networks, or gated recurrent units. There are so many different types of these architectures.
Even small changes in the algorithms can lead to huge improvements in performance. So we'll see very rapid evolution of AI, because these systems are actually a lot more complex than we realize, and as we understand that complexity, we'll make interesting decisions about making them faster and better: better able to train, classify, and make decisions.
We are just at the tipping point of how quickly they can address problems and complexity. Human-like conversation, analysis: I think we have the collection of tools to put things together, and that will grow very rapidly.
The other area is silicon. We're already seeing it in the acquisitions being made by Intel and others: making chips less expensive, less power-hungry, and very small. You can put these in sensors, and they can essentially be driven by tiny batteries or solar; they can be self-powered. And that makes AI feasible in devices where it wasn't before, and that's going to be quite impactful in the capabilities they provide.