Vishal Sikka: AI can be a great enabler and enhancer of human potential, creativity and imagination
Vianai founder Dr. Vishal Sikka sees “a future of AI working with and for humans, and together achieving unimaginable heights!”
By Rahil Menon
Chatbots are the new whiz kids in town. Powered by Artificial Intelligence (AI), these wonder machines can write executive summaries, answer questions, create images, write stories and even love poems!
And they can do sundry other creative things that intelligent humans do, but in a jiffy. Their apparently ‘magical’ powers inspire awe and anxiety among mere mortals.
How do they work? Do they understand the meaning of things? And what danger do they pose of disrupting a world run by humans?
To answer such questions, we turned to Vishal Sikka, 55, Founder and CEO of Vianai, a San Francisco Bay Area-based startup that provides advanced Artificial Intelligence and Machine Learning software and services to large companies around the world.
Sikka, former CTO of SAP AG and former CEO of Infosys, acknowledges that people with the intent and resources can do dangerous things with AI, like with any powerful technology.
“These technologies can also be used to create massive disinformation campaigns at very low cost, swaying election results, or fomenting violence against specific individuals and groups,” he says.
But Sikka, for one, “looks forward to a future where AI is a great enabler of human potential and human creativity. It is an enhancer or amplifier of human creativity, human imagination.”
“I see a future of AI working with and for humans, and together achieving unimaginable heights inaccessible to either one alone. I see a future of pervasive human-centered AI, full of life and intelligence,” he says.
“India has a unique responsibility and the opportunity to be a leader in this time of AI,” says Sikka. “We must make it possible for young people in India, and everyone else, to learn more about AI.”
In an interview with the American Bazaar, Sikka, who also serves on Oracle’s board of directors, the supervisory board of the BMW Group and as an advisor to the Stanford Institute of Human-Centered AI, answers all the questions about AI you were afraid to ask.
AB: How did you get interested in AI, and take it on as a career path?
VS: When I was about 17 or 18, I encountered a book which had essays by Marvin Minsky, Ed Feigenbaum and Joe Weizenbaum, another professor at MIT.
Weizenbaum had written a chatbot in the 60s called Eliza, and I was fascinated by that. I remember I was still in high school when I wrote a letter to Marvin Minsky from India, and he sent me back a letter in reply. That hooked me, and that’s how I got interested in this field.
Sometime later, I wrote an essay for a magazine called Computers Today, and it was published. Then, I came to Syracuse for my undergraduate studies and spent some time at MIT taking classes from Marvin and spending time with him and his students like Danny Hillis. That brought me right into the epicenter of where this field was developing.
AB: There are many misconceptions about AI, and many people may not understand what it is. How would you define it in a way that an average person could understand it?
VS: I always go back to Marvin Minsky’s original definition of AI, given in 1956. John McCarthy came up with the term “Artificial Intelligence,” and McCarthy and Minsky are considered the two fathers of AI. Marvin provided a definition that, I think, is valid to this day.
He said that “AI is the science of doing those things that would be considered intelligent if they were done by people.” To me, this continues to be the best way to think about AI.
Can we get machines to do things that, if people did those things, we would consider those people to be intelligent people? This is a wonderful definition, and as you can see, we are still quite far from it.
AB: What are some of the most exciting things that you see going on in the field of AI today?
VS: Obviously, the recent advances in natural language processing are quite inspiring and wonderful. Things like text summarization, question answering, generation of text and images, etc. have become quite powerful.
A lot of the sensory tasks like recognizing sounds, recognizing patterns, recognizing objects and images – these also have become extremely powerful.
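[Editorial illustration, not part of Sikka’s answer: a minimal sketch of the text-summarization capability he mentions, using the open-source Hugging Face transformers library; the library, its default model and the sample text are assumptions of this sketch, not something named in the interview.]

```python
# Summarize a short passage with a pretrained model via the `transformers` pipeline.
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default pretrained model

article = (
    "Artificial intelligence systems have recently become capable of "
    "summarizing documents, answering questions, and generating text and "
    "images, although they still lack a real understanding of meaning."
)
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```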
However, we are still really far away from having computers understand the meaning of things. John McCarthy called this “understanding the reality behind the appearance.”
Today, machine learning systems can recognize appearances, but they are not so good at recognizing the reality behind the appearance.
Let me give you some examples of what I mean. When we see a shadow moving, we know that there is a moving object or a person casting that shadow.
Most likely, we can infer where the source of light is that is causing the shadow. These are things that the popular forms of AI cannot easily do today.
Another example, which someone, I can’t remember who, came up with: let’s say I created a new word, “schwister,” and a schwister is simply defined as any sister who is between 12 and 17 years old.
Now if I asked you, “do you have a schwister?,” then you would immediately be able to tell me whether you have a schwister or not. Just from one definition, and you wouldn’t need a million examples of what a schwister is, right?
The common types of AI today might need to be trained on many examples of what is and what is not a schwister to be able to answer questions like these, which you as a human can do so easily. These create exciting opportunities for improving AI systems to include these types of capabilities.
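[Editorial illustration, not part of Sikka’s answer: a tiny Python sketch of the contrast he describes. A single symbolic rule can apply the new definition immediately, with zero training examples; the function and sample data are hypothetical.]

```python
# A schwister, per the definition above, is any sister between 12 and 17 years old.
def is_schwister(relation, age):
    return relation == "sister" and 12 <= age <= 17

# One rule, zero training examples: can we answer "do you have a schwister?"
siblings = [("sister", 14), ("sister", 25), ("brother", 15)]
print(any(is_schwister(rel, age) for rel, age in siblings))  # True
```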
AB: There’s a new program called ChatGPT, and it’s a bot that can generate human-like responses when you ask it questions. Can you explain how something like ChatGPT works, and what is your take on this system?
VS: When I was the CEO of Infosys, we were among the first sponsors of OpenAI, which created ChatGPT. ChatGPT is a wonderful advancement in AI.
It is a question answering bot and a generative system that can create things like poetry or textual compositions of various sorts. It can generate code in various programming languages, like Python or Java, amongst many other things.
These systems are made up of neural networks, which are computational systems inspired by human biology, by the 80-100 billion neurons that each of us have within us.
A computational neuron has activations, weights and aggregation functions that take inputs and produce outputs. These outputs become inputs to other neurons, and so on.
It turns out that when you put a very large number (billions) of these neurons together in a network, you can tune their weights using some iterative “training” techniques to make them do amazing things – like classification, prediction, text generation, image generation etc.
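[Editorial illustration, not part of Sikka’s answer: a minimal Python sketch of one such computational neuron; the weights, inputs and choice of ReLU activation are assumptions of the sketch, not details from the interview.]

```python
# One computational neuron: aggregate weighted inputs, then apply an activation.
import numpy as np

def neuron(inputs, weights, bias):
    pre_activation = float(np.dot(weights, inputs) + bias)  # aggregation of inputs
    return max(0.0, pre_activation)                         # ReLU activation

# One neuron's output becomes another neuron's input; "training" tunes the weights.
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.8, 0.1, 0.4])    # learned weights
print(neuron(x, w, bias=0.2))    # about 1.68, a single output value
```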
ChatGPT is a bit more complicated than this (it has more complex components like transformers that help implement an attention mechanism, etc.), but at its heart, this is what it is.
Given some “n” words in a sequence, it predicts what the next word, or the “n+1”th word, is likely to be. It does this again and again – and suddenly, you have a poem!
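[Editorial illustration, not part of Sikka’s answer: a toy Python sketch of that repeat-the-prediction loop. The tiny bigram table is a hypothetical stand-in for the billions of learned transformer weights in a real system like ChatGPT; it only mimics the control flow.]

```python
# Given the words so far, pick a likely next word, append it, and repeat.
import random

bigram_probs = {
    "roses":   {"are": 1.0},
    "are":     {"red": 0.7, "blue": 0.3},
    "red":     {"and": 1.0},
    "and":     {"violets": 1.0},
    "violets": {"are": 1.0},
    "blue":    {"<end>": 1.0},
}

def generate(prompt, max_words=10):
    words = list(prompt)
    for _ in range(max_words):
        options = bigram_probs.get(words[-1], {"<end>": 1.0})
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        if next_word == "<end>":
            break
        words.append(next_word)  # the (n+1)th word becomes part of the next prompt
    return words

print(" ".join(generate(["roses"])))  # e.g. "roses are red and violets are blue"
```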
While it seems like this system is generating human-like responses, it is important to remember that systems like ChatGPT have absolutely no idea what they are saying.
They don’t know what a poem means, they don’t know what it means to write a program, or summarize a document, or use a recipe to cook some food.
They are simply applying a computational procedure to predict words that make up a sentence, given a question or a prompt. We, as humans, tend to be drawn towards anything that exhibits human-like behavior, and that is what is going on here.
AB: When it comes to chatbots or any other forms of AI, what are some of the real dangers that you see?
VS: Bias is a very big and common one. These technologies don’t have an inherent representation of meaning or semantics, so they are prone to generating toxic results depending on what kind of data they have been trained on.
If the output is used to make critical decisions, these decisions could be unfair, discriminatory, incorrect, and so on. Another danger is that people with the intent and resources can do dangerous things with it, like with any powerful technology.
We have examples of powerful technologies that have been weaponized before, like nuclear technology or genetic engineering. Sophisticated facial recognition and systems for automated decision making can be used in weapons with devastating effect.
These technologies can also be used to create massive disinformation campaigns at very low cost, swaying election results, or fomenting violence against specific individuals and groups.
All of these are some very serious dangers of these technologies that we need to work very hard to mitigate.
AB: There’s a lot of fear in the US towards China getting ahead in terms of AI. Is there any truth to it? What is your take on that?
VS: China has gone through a tremendous set of advances in AI over the recent decades, but I don’t feel any particular concern. China has produced some remarkably successful applications of AI. Some of them we are familiar with in the US, like TikTok, but there are many others used primarily within China.
However, in terms of AI technology and research, I think that the US and the West are still quite far ahead. So, it’s a mixed situation: China leads in the societal use of applications – video processing at a massive scale, weather forecasting, cybersecurity, surveillance, retail, etc. – but in foundational technology, advanced research and core algorithms, it is still behind.
AB: What is your advice for people interested in entering the field of AI today? Where do you see the best opportunities for them to contribute?
VS: I strongly encourage all people to learn more about AI and get involved in whatever aspect of AI makes sense to them. This could be in applying AI in some interesting way, advancing the technology, building tools, teaching others about it, or even critiquing AI or highlighting its dangers, flaws, and limitations.
We have a very serious problem today of talent scarcity and asymmetry in AI. The number of people who understand AI technology is very small.
Just to give you an idea, my wife Vandana, who is on the board of code.org, often mentions this statistic that back in the Dark Ages (5th to 10th century AD) about 6% of the world’s population could read and write.
Today, in the 21st century, the number of people who can program a computer is below 1% (if you are generous with the definition of programming). The number of people who specialize in AI out of that is far smaller.
By one estimation, the number of people who can build an AI application today is less than two million. Out of those, the number of people who could operate an AI system (machine learning engineers or operations people) is less than 100,000. It’s a very small number of people who could explain the details of how ChatGPT works.
So, we have an acute shortage of AI talent and understanding, and I strongly encourage young people to get into AI. Try to use it, try to understand it, love it, or hate it – but don’t ignore it. The more we all engage with it, the better off we will be.
AB: What recommendations do you have specifically for India and Indians when it comes to AI?
VS: India has a unique responsibility and the opportunity to be a leader in this time of AI. We must make it possible for young people in India, and everyone else, to learn more about AI.
There are plenty of classes everywhere on YouTube, Skillshare and many other educational platforms. Go there, learn about it, get your hands dirty, build something (it could be simple or small), or understand what it lacks and write about it, help to improve it.
Go from being just a consumer or a bystander, to being a maker of this technology. For Indians, AI is not only a lucrative and relevant area of engagement, it is also something that is very badly needed for the country and the world.
AB: What are you personally working on right now as it relates to AI?
VS: I believe that AI will prove to be one of the most powerful and enduring technologies of our lifetimes. My colleagues at my company Vianai Systems and I are working to make AI more useful, more reliable, less dangerous, cheaper and faster for everyone.
Systems like ChatGPT cost tens of millions of dollars to train, and they are incredibly expensive and energy-inefficient to run. Humans operate with less than 2,000 kilocalories in a day (our brains alone operate on even less). We still have a long way to go before AI systems operate so efficiently, and I am working on this.
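[Editorial note: the back-of-the-envelope arithmetic behind that comparison, as a small Python calculation; the ~20 W figure for the brain is a commonly cited estimate, not a number from the interview.]

```python
# Convert ~2,000 kilocalories per day into average power.
KCAL_TO_JOULES = 4184             # 1 kilocalorie = 4,184 joules
SECONDS_PER_DAY = 24 * 60 * 60

daily_joules = 2000 * KCAL_TO_JOULES
average_watts = daily_joules / SECONDS_PER_DAY
print(round(average_watts))       # about 97 W for the whole body; the brain alone
                                  # is commonly estimated at roughly 20 W
```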
To make AI more useful, we build applications that use AI for enterprises like banks, insurance companies, manufacturing companies, retail companies, etc.
Our applications help these companies engage better with their customers, better understand business risks, predict failures that might be costly, get early warnings on things that might be important to them etc.
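[Editorial illustration: a hypothetical sketch of the “early warning” idea in Python; the z-score rule, threshold and sensor data are assumptions of the sketch, and the interview does not describe how Vianai’s products actually work.]

```python
# Flag a new sensor reading that deviates sharply from its historical baseline.
import statistics

def early_warning(baseline, new_reading, z_threshold=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return stdev > 0 and abs(new_reading - mean) / stdev > z_threshold

history = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2]  # normal operating temperatures
print(early_warning(history, 95.7))   # True: unusual reading, flag for review
print(early_warning(history, 70.1))   # False: within the normal range
```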
There are endless possibilities here, and I am really excited about being at the center of this transformation. It is a lot of fun.
AB: What does the future hold? Not only for your company, but the future of AI and the world?
VS: I look forward to a future where AI is a great enabler of human potential and human creativity. It is an enhancer or amplifier of human creativity, human imagination.
There is a lot of talk about AI replacing human workers and disrupting the world – but I see a future of AI working with and for humans, and together achieving unimaginable heights inaccessible to either one alone. I see a future of pervasive human-centered AI, full of life and intelligence.
AB: Finally, what are some of your favorite resources, books, websites, videos, etc., to help the future generation learn more about and understand AI?
VS: That is a very good question. Marvin Minsky wrote this very nice book called ‘A Framework for Representing Knowledge,’ and another called ‘Semantic Information Processing,’ that are both great and still my favorites.
There is a wonderful textbook by Stuart Russell and Peter Norvig. Stuart was my academic sibling. He and I had the same PhD advisor.
He has also written a very nice book on the problem of defining boundaries for AI, or what is called the alignment problem. These are also my favorites. A book that I’m currently reading is ‘Machines Like Us’ by Brachman and Levesque.
There are, of course, many resources, podcasts, blogs and videos available on the web – the best one for you depends on your level of knowledge and interest, and your unique perspective.
I would say, just explore and find something that fits well for you. With an open mind and a critical thinking attitude, you cannot go wrong. We are fortunate to live in a time where AI will have a huge impact on our lives and my wish is that we all become “experts” in AI, the way we are today all experts in reading or math.
It has the potential to be yet another one of the great tools that we, as humans, have invented to make our own lives and the lives of others on the planet better. And I think that is a very wonderful pursuit.
READ MORE:
Vishal Sikka resignation: Infosys loses its first ever Indian American CEO (August 18, 2017)
Infosys’ new boss Vishal Sikka may operate from the US (June 13, 2014)