Over the past three decades, Google has used its engineering hubs in cities like Bengaluru to develop technologies specifically for the Indian market—such as Google Pay and Google Maps features—which it then exported globally, the company’s global chief scientist Jeff Dean told Mint in an exclusive interview.
Today, Google India remains vital to the $4.1-trillion tech giant and its frontier research unit Google DeepMind, Dean said, adding that the region continues to drive the development of AI applications that are marketed worldwide.
Scaling the Indian AI engine
He cited three key examples. The first is a global flood prediction model developed in collaboration with India's central government. By gathering data from flood-prone Indian states, Google trained AI models that are now used to predict and mitigate flood risks in other nations.
A second major contribution from Google’s Bengaluru hub is the "long context window" for its AI app Gemini. This feature allows the AI to process up to 750,000 words in a single query, enabling it to handle exceptionally complex datasets and documents. Notably, Google was the first to bring this capability to global users, beating out competitors such as OpenAI and Anthropic.
“Google’s Bengaluru office has lots of engineers contributing to fundamental advances on how to make machine learning models have long context windows, and make context usage more efficient inside AI models. They also focus on local interest problems that they may be well suited to, such as how our India engineers work very closely with government agencies and local farming experts to develop frontier models and applications in agriculture,” said Dean.
For his third example, Dean pointed to the development of frontier models for agriculture, where engineers based in India work closely with government agencies and local farming experts to improve monsoon predictions and soil-specific crop analysis.
Dean, who turns 58 in three months, was Google's 30th employee. He joined the company in 1999, when Google was merely a contender in the search-engine market it now dominates. Today he leads the effort at the world’s second-most valuable company to translate scientific breakthroughs from Google DeepMind into global commercial products.
India expansion
In tandem with the global surge in AI, Google has steadily scaled its Indian operations over the past four years. It currently employs about 14,000 people across the country, roughly 5,000 of them in Bengaluru. According to public estimates, DeepMind has about 200 engineers in India. In February 2025, Google opened Ananta, its new Bengaluru office that’s expected to seat 20,000 people. A larger campus, set to become Google’s largest outside its Mountain View headquarters, is expected to open in Hyderabad this year.
“We started work on our flood forecasting model almost a decade ago with the government of India, which had very good raw data about stream-gauge measurements from when rainfall happens in some of the most flood-prone states in India. Through such data, we were able to build totally new neural network-based AI models that could predict flooding quite accurately, to the point that we can indicate which portion of a village will get flooded and which won’t,” Dean said.
These product launches arrive as Big Tech firms ramp up capital expenditure to compete in the intensifying global AI race. On 4 February, Google chief executive officer (CEO) Sundar Pichai announced during a post-earnings call that the company would double its AI spending to $185 billion through the 2026 calendar year. Securing government projects in sectors such as agriculture and flood forecasting is viewed as a strategic entry point for capturing a dominant share of these expanding AI markets.
On 14 October 2025, Google announced an investment of $15 billion by 2030, in partnership with Adani Enterprises and Bharti Airtel, for a 1 gigawatt (GW) AI data centre in Visakhapatnam.
Dean, who is chief scientist of both Google and DeepMind, said a key part of his role is balancing scientific research with capital outlays for commercial applications. “Between being a scientist and an executive, a key thing to keep in mind is where we go as a company in the long run. For instance, eight years ago, I suggested that Google should focus on multi-modal AI models. Today, Gemini is very good at multi-modal reasoning. Right now, I’m thinking about how inference hardware can be made more efficient, and that’s one direction Google wants to pursue. This long-term executive decision is influenced by what you see as a scientist,” he said.
One example of this, Dean added, is a moonshot project to build expertise in 1,000 different languages, including Hindi and other Indian languages. “We have a long-running effort to make Google’s models work in 1,000 languages. They were initially trained with around the top 30 languages in mind. We’re already adept at more than 220 languages, and we’ve made rapid progress worldwide, going from 35 languages to around 220 within five years. We introduce these languages through Google Translate, and expanding our understanding of languages is a key part of making our AI research available to users worldwide,” he said.
The long road to AGI
At present, though, Big Tech is still some distance away from achieving artificial general intelligence (AGI), Dean said, echoing views his colleague and DeepMind cofounder, the Nobel laureate Demis Hassabis, expressed last November. AGI refers to a theoretical stage of AI capable of performing any intellectual task a human can, possessing the ability to learn, reason, and apply knowledge across diverse contexts without specific programming or supervision.
“There are really varying definitions of AGI, and these definitions can vary by a factor of a trillion. To put it simply: AGI that is better than most humans at most things is far easier to achieve than AGI that is better than every human at everything,” Dean said.
“The truth is, we don’t really yet know exactly how to achieve some of the most advanced definitions of AGI. We’re at least a couple of more significant breakthroughs of machine-learning advances away from making models much more capable,” he added.