AI assistants can now find their way around: Google Maps is "embedded" directly into the model's brain.
Google DeepMind has just made a major move with the Gemini API: built-in tools and custom functions can finally be used together in the same call. Combined with "Context Circulation" across tools and the native integration of Google Maps, the orchestration nightmare of Agent development is coming to an end.

If you have ever built an AI Agent, you know this pain: the model first has to call Google Search to fetch external data, then call your backend API to check inventory, and in between you have to manually feed the result of the previous step into the next one. The whole process is like building with Lego, except every piece has a different connector.
Google has just torn down the wall.
The latest update of the Gemini API brings three core changes, each of which directly addresses the pain points of Agent development.
Built-in Tools + Custom Functions
All Done in One Request
Previously, you either used built-in tools like Google Search or called your own functions; you could not combine them in a single request.
Developers had to orchestrate manually, like traffic police: first let the model search, collect the result, then send a second request to call your backend.
Now, you can include Google Search, Google Maps, and your custom code in the same request.
Gemini 3 will decide which to call first, which to call later, and how to connect them in the middle.
Take a real-world scenario: you ask the AI to "search for today's most popular noise-canceling headphones, then check whether we have them in stock."
Previously this took two requests and manual stitching. Now a single request handles it all: Gemini first searches the web for popular models, then automatically calls your inventory API to check each one. Latency is halved, and the amount of code shrinks even more.
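The plumbing this update removes is the classic client-side function-calling loop, where your code shuttles results between the model and each tool. A minimal, self-contained sketch of that loop follows; `fake_model`, `search_web`, and `check_inventory` are illustrative stand-ins, not real Gemini API calls:

```python
# Sketch of the agent loop developers previously wrote by hand: the model
# alternates between requesting tool calls and producing a final answer,
# and our loop executes each requested tool and feeds the result back.

def search_web(query: str) -> list[str]:
    """Stand-in for a built-in Google Search call."""
    return ["QuietMax Pro", "AcmePhones X1"]

def check_inventory(model_name: str) -> bool:
    """Stand-in for a custom backend inventory API."""
    return model_name == "QuietMax Pro"

TOOLS = {"search_web": search_web, "check_inventory": check_inventory}

def fake_model(history: list[dict]) -> dict:
    """Stand-in for the LLM: picks the next tool call from the history."""
    results = [m for m in history if m["role"] == "tool"]
    if not results:
        return {"tool": "search_web", "args": {"query": "noise-canceling headphones"}}
    if len(results) == 1 and results[0]["name"] == "search_web":
        # Step two directly reuses step one's output (the top search hit).
        return {"tool": "check_inventory", "args": {"model_name": results[0]["result"][0]}}
    return {"answer": f"In stock: {results[-1]['result']}"}

def run_agent(prompt: str) -> str:
    history = [{"role": "user", "content": prompt}]
    while True:
        step = fake_model(history)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # execute the requested tool
        history.append({"role": "tool", "name": step["tool"], "result": result})
```

Before this update, the two tool hops above meant two separate API round-trips stitched together by your code; now Gemini sequences the built-in search and the custom inventory call inside one request.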
This is a feature that developers have been calling for a long time.
Google itself also said, "This is the most requested feature by developers since we launched built - in tools."
Context Circulation
AI Finally Has a Toolchain with "Long-Term Memory"
The most feared failure in multi-step workflows is instant forgetting: the data obtained in step one is lost by the time the model needs it in step two.
The newly introduced "Context Circulation" mechanism solves this. Every tool call and its returned result are automatically retained in the model's context window, so subsequent steps can directly reference the data from any previous step.
For example, Gemini uses a built-in tool to check the real-time weather (30°C, sunny), then seamlessly passes that result to your custom venue-booking tool, which knows to pick an open-air spot. You never have to step in and forward the data yourself.
The newly added Tool Response ID gives each tool call a unique identifier.
This matters most in parallel-call scenarios: when the model fires off three function calls at once, you can match each return value to the correct call, which makes debugging far easier.
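The matching itself is simple bookkeeping once every call carries an ID. A self-contained sketch (the `id`/`tool_call_id` field names and the `execute` helper are illustrative assumptions, not the Gemini API's exact schema):

```python
import uuid

def execute(call: dict):
    """Stand-in for running one tool; returns a canned value per tool name."""
    return {"get_weather": "30C sunny", "get_traffic": "light", "get_price": 42}[call["name"]]

def run_parallel(calls: list[dict]) -> list[dict]:
    # Each response carries the ID of the call it answers, so the model
    # (and your debugger) can pair them up even if completion order varies.
    return [{"tool_call_id": c["id"], "result": execute(c)} for c in calls]

# Three parallel calls, each tagged with a unique ID.
calls = [{"id": str(uuid.uuid4()), "name": n}
         for n in ("get_weather", "get_traffic", "get_price")]
responses = run_parallel(calls)
by_id = {r["tool_call_id"]: r["result"] for r in responses}
```

Without the IDs, three simultaneous calls to the same function would be indistinguishable in the response stream; with them, `by_id` resolves each result unambiguously.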
Google Maps Natively Integrated into Gemini 3
Geospatial awareness is a necessity for modern Agents - ordering food, navigation, finding stores, and route planning all rely on location information.
This update officially integrates Google Maps into all Gemini 3 models.
Your AI assistant can now check in real time which cafes near Berlin's Alexanderplatz are open, look up commute times, and pull business details.
Start with just a few lines of code:
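As a rough sketch of what such a request looks like, here is a raw JSON payload for a generateContent call that enables Maps grounding next to a custom function. The `googleMaps` and `functionDeclarations` field names follow the public REST conventions, but treat the exact schema as an assumption and verify it against the official Maps-grounding docs:

```python
import json

# Sketch of a generateContent request body combining the built-in Maps
# tool with a hypothetical custom function. Field names should be
# double-checked against the current Gemini API reference.
payload = {
    "contents": [{
        "role": "user",
        "parts": [{"text": "Which cafes near Alexanderplatz are open now?"}],
    }],
    "tools": [
        {"googleMaps": {}},  # built-in Google Maps grounding
        {"functionDeclarations": [{  # your own backend function (illustrative)
            "name": "reserve_table",
            "description": "Reserve a table at a cafe",
            "parameters": {
                "type": "object",
                "properties": {"place_id": {"type": "string"}},
                "required": ["place_id"],
            },
        }]},
    ],
}

body = json.dumps(payload)  # ready to POST to the generateContent endpoint
```

The point of the update is visible in the `tools` array: a built-in tool and a custom function declaration sit side by side in one request.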
The Real Change
From "Orchestration Nightmare" to "Declarative Agent"
Looking at these three updates together, what Google is really doing is enabling developers to shift from "manually orchestrating the order of tool calls" to "declaratively telling the model which tools are available."
Gemini handles the rest on its own: when to call which tool, how to pass context between steps, and how to manage parallel calls.
This is completely in line with what Jensen Huang said at GTC about the "Agent era": AI is no longer just answering questions but autonomously calling tools, chaining processes, and completing complex tasks.
The difference is that NVIDIA is building Agent infrastructure from the hardware side (NemoClaw), while Google is building Agent development infrastructure from the API side.
For more complex Agent scenarios, Google has also launched a new Interactions API focused on server-side state management, complex context, and long-running tasks.
However, it is currently in public beta, and Google still recommends GenerateContent as the main path for standard production workloads.
For developers, the signal is clear: the infrastructure for Agent development is moving from hand-crafted workshops to industrialized production.
Tool combination, context circulation, and geospatial integration may each sound like a minor feature, but together they are the infrastructure that takes Agents from demo to production.
References:
https://blog.google/innovation-and-ai/technology/developers-tools/gemini-api-tooling-updates/
https://ai.google.dev/gemini-api/docs/tool-combination
https://ai.google.dev/gemini-api/docs/maps-grounding
https://x.com/OfficialLoganK/status/2034309347040195071
This article is from the WeChat official account "New Intelligence Yuan". Author: New Intelligence Yuan, Editor: Adam. Republished by 36Kr with permission.