As artificial intelligence rapidly evolves — shaping industries from healthcare to infrastructure and becoming increasingly integrated into mission-critical applications — questions of explainability, traceability, and ethics remain paramount, said Dr. Venkat Srinivasan, a Boston-based investor and AI thought leader.
Delivering a keynote at “Startup Bazaar: InnovateAI,” hosted by The American Bazaar on Jan. 31 in Vienna, VA, Srinivasan discussed the evolution and current state of AI, highlighting its potential to transform businesses and sectors while emphasizing the critical role of both data and human input in developing intelligent systems.
Srinivasan is the founder of Innospark Ventures, an evergreen venture fund that supports early-stage, AI-driven startups. With deep expertise in AI, computational algorithms, and natural language understanding, he has been at the forefront of innovation in these fields.
Over the course of his career, he has built multiple successful ventures including eCredit, Corporate Fundamentals, and Rage Frameworks. In recent years, he has launched purpose-driven initiatives such as EnglishHelper, which leverages AI to combat global illiteracy; Gyan.AI, an explainable large language model; Creda Health, a digital health assistant; and PrismX, an automated software development platform.
In the keynote, Srinivasan outlined AI’s transformative potential—enhancing efficiency, driving innovation, and reshaping industries. AI—beyond just generative AI—is set to revolutionize every field at an unprecedented pace, he said.
He highlighted four key areas where AI is making a significant impact. First, in straight-through processing, autonomous agents and agentic systems are streamlining workflows, reducing manual intervention, and increasing efficiency.
Second, AI is driving rapid discovery, accelerating breakthroughs in drug development, materials science, and catalyst research, enabling advancements that would have taken years through traditional methods.
Third, personalization is transforming fields such as medicine, education, civic services, and business solutions by tailoring experiences and services to individual needs.
Finally, AI is revolutionizing instant service by enabling seamless interactions through conversational agents, self-checkouts, and autonomous systems, enhancing customer experience and operational efficiency.
Srinivasan explored the challenges of AI adoption, including data quality, causality understanding, and model explainability. He also shared examples of successful AI implementations across various fields such as education, healthcare, and construction.
“I live in this world every day, every hour,” Srinivasan stated, underscoring the necessity of explainability in AI. While deep learning has made model training more cost-effective, it has not resolved fundamental issues of transparency. “It doesn’t have to be right all the time, but you need to know why it is wrong,” he emphasized.
Current AI models, particularly black-box neural networks, struggle to provide clear reasoning for their outputs, Srinivasan said. Without the ability to trace back decisions and understand causality, AI will face hurdles in being adopted for mission-critical use cases. “Reasoning after the fact is useless,” he noted.
Tractability—the ability to improve a model by understanding what data is missing—is another critical concern. “A tractable model is going to point and say, ‘Look, if you had this type of data, then you can go from 50 to 60, or 70, or 80.’”
Reliance on brute force
However, today’s AI models largely rely on brute force, throwing more data at them without knowing if or how they will improve. This lack of repeatability makes them unreliable for applications where consistency is key, he said.
Srinivasan, who earned a doctorate from the University of Cincinnati’s Carl H. Lindner College of Business and also taught at Northeastern University’s College of Business Administration, highlighted areas where generative AI is already proving effective. One such example is coding, where large language models (LLMs) are reducing average programming effort by 30% to 40%, with the potential to significantly diminish the need for programming altogether within the next five to ten years. He recalled that two decades ago, in an interview with a Wharton journal, he had predicted that AI would eventually render programming obsolete.
He also noted that LLMs could be particularly useful in discovery-driven fields where human knowledge is inherently limited, such as drug discovery. However, he emphasized that while these models might aid in the initial discovery phase, they would not be as effective in managing the subsequent stages of the process.
Additionally, he pointed out that AI can be valuable for non-mission-critical tasks such as drafting speeches or generating creative content like writing a poem to celebrate someone’s 50th birthday.
According to Srinivasan, who holds nine patents and has authored over 30 peer-reviewed research papers, the future of AI lies in neuro-symbolic models — a hybrid approach combining neural networks with symbolic reasoning. He recalled how the neural approach has evolved since the time of MIT professor Marvin Minsky, widely regarded as a father of artificial intelligence, and the “nature versus nurture” debate.
Unlike purely observational neural models, which build intelligence from observed data, symbolic AI incorporates abstraction. “Maybe there is a level of abstraction that we humans bring to connect things that are not observed,” he explained, using the example of how a two-year-old can learn from a single observation and apply that knowledge elsewhere.
This human-like reasoning is critical for building AI that can explain its decisions. “Guardrails are human intelligence, symbolic intelligence, inserted into neural models to make them less prone to hallucination,” he stated, emphasizing that current language models, such as OpenAI’s GPT-4, do not truly understand what they process — they simply predict the next word based on statistical probabilities.
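The statistical next-word prediction Srinivasan describes can be illustrated with a toy example. The sketch below is a simple bigram model of my own construction, not how GPT-4 works internally — modern LLMs use neural networks over learned representations — but the underlying principle he points to is the same: the model emits the statistically most likely next token given what came before, with no notion of meaning.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the statistically most likely next token, or None if unseen."""
    following = counts.get(token)
    if not following:
        return None
    return following.most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
# "the" is followed by "cat" twice and "mat" once, so "cat" wins.
```

Scaled up to billions of parameters and trained on web-scale text, this predict-the-next-token objective produces fluent language, but — as Srinivasan notes — fluency is not understanding.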
Srinivasan argued that the belief that all intelligence must come from data is flawed. The complexity of these models makes it impossible to fully understand how they work, he warned. Emerging architectures are seeking to address this issue.
He also highlighted DeepSeek, an AI model that aims to lower training costs while maintaining performance. “It’s not going to cost you a hundred million to build your model. Maybe it’ll cost you about a million — less.” These innovations signal a shift toward more efficient and interpretable AI models.
While AI faces significant hurdles in explainability, tractability, and ethics, it is not going away, Srinivasan said. “We have seen two waves of AI before, and there is often talk of an ‘AI winter’ where progress stalls. However, while many generative AI projects may fail — [the research company] Gartner predicts 80% will — I don’t think AI itself is going away.”
The next generation of AI will likely be defined by models that integrate symbolic reasoning with neural processing, moving away from black-box architectures toward systems that are interpretable, repeatable, and ethical. More fundamentally different models of intelligence will emerge, Srinivasan predicted, signaling a shift toward a more responsible and effective AI landscape.
The ethics of AI
Srinivasan also touched on the ethical aspects of AI, including privacy concerns and data bias, while expressing optimism about its future and its potential impact on industries like radiology and personalized medicine.
“Ethics is a huge issue. But I see ethics in two parts: ethics in what data you use and ethics in how you apply that data.”
He cited an example where a company tracked employee movements without explicit consent, questioning whether a blanket clause buried in an employment contract truly addresses the ethical and moral issues such monitoring raises.
AI models are inseparable from their training data, which can introduce biases. “For them to be unbiased, the model has to be separate from the data,” Srinivasan noted. Current attempts to mitigate bias involve separating training and test data, but this does not fully eliminate the issue. “If the training data is not representative — whether left-wing only, right-wing only, or biased toward specific vendors — then the model itself is completely biased.”
Despite these challenges, AI is making significant strides in various industries, Srinivasan pointed out.
He cited several companies backed by Innospark Ventures that are leveraging AI effectively, including EnglishHelper, Encoder, Polaris, Ripath, and Neural.
EnglishHelper uses AI to help individuals develop reading, thinking, and comprehension skills in English. Encoder develops wristbands that help Parkinson’s patients control their tremors.
Polaris is a precision medicine company that uses nuclear magnetic resonance to create metabolic profiles, helping doctors determine the best medication for kidney transplant patients.
Ripath is a Princeton-based therapeutics company that uses AI to discover new antibiotics, an area that has seen little progress in the last 40 years. Neural is a brain-computer interface company that has developed a non-intrusive alternative to Neuralink.
These companies demonstrate how AI can be successfully applied in mission-critical scenarios — without relying on black-box models like GPT-4, Srinivasan said.