The head of Google DeepMind has issued an urgent call for accelerated research into AI safety, warning that the pace of capability development is outstripping the field’s ability to ensure these systems remain aligned with human values.
Speaking at the AI Impact Summit in Delhi, Demis Hassabis said: “We are in a critical window where the decisions we make about AI safety research will determine whether these technologies benefit all of humanity or become sources of unprecedented risk.”
His remarks were notably at odds with the position of the US delegation, which firmly rejected proposals for binding international AI governance frameworks. “We totally reject global governance of AI,” said the head of the US delegation.
The summit, attended by representatives from over 50 countries, highlighted growing divisions between nations that favour strong regulatory frameworks — including the EU, India, and many developing nations — and those that prefer voluntary industry commitments.
AI safety researchers broadly welcomed Hassabis’s remarks, though some noted the tension between Google’s calls for caution and its aggressive deployment of AI products across its ecosystem.