Google has introduced Antigravity, an agentic development platform that sits on top of Gemini 3. It is not just an autocomplete layer: it is an IDE where agents plan, execute, and explain complex software tasks across editor, terminal, and browser surfaces. Antigravity launched alongside Gemini 3 on November 18, 2025, as part of Google's push toward agent-centric developer tools.
What exactly is Antigravity?
Google describes Antigravity as a new agentic development platform with a familiar AI-powered IDE at its core. The goal is to evolve the IDE toward an agent-first future, with browser control and asynchronous interaction patterns that let agents autonomously plan and execute end-to-end software tasks.
In practice, Antigravity looks and behaves like a modern AI editor but treats agents as first-class actors. Agents can break down tasks, coordinate with other agents, edit files, run commands, and drive browsers. The developer works at the task level, while the system manages the low-level tool interactions.
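One way to picture that division of labor is sketched below: the developer states a goal, and the agent decomposes it into actions across the three surfaces. Everything in this sketch (the `Surface` and `AgentTask` types, the example steps) is an invented illustration of the concept, not Antigravity's actual API, which Google has not published.

```typescript
// Conceptual sketch only: the developer works at the task level, while
// low-level actions are routed to editor, terminal, and browser surfaces.
type Surface = "editor" | "terminal" | "browser";

interface AgentStep {
  surface: Surface;
  action: string; // e.g. an edit, a shell command, or a browser interaction
}

interface AgentTask {
  goal: string;       // what the developer asked for
  steps: AgentStep[]; // the agent's own decomposition of that goal
}

const task: AgentTask = {
  goal: "Fix the flaky login test",
  steps: [
    { surface: "editor", action: "patch the retry logic in src/auth.ts" },
    { surface: "terminal", action: "run `npm test -- login.spec.ts`" },
    { surface: "browser", action: "replay the login flow and record it" },
  ],
};

console.log(`${task.goal}: ${task.steps.length} steps planned`);
```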
Under the hood, Antigravity is an Electron application based on Visual Studio Code. It requires signing in with a Google account and ships as a free public preview for macOS, Linux, and Windows.
Models, pricing, and runtime environments
Antigravity exposes multiple foundation models inside a single agent framework. In the current preview, agents can use Gemini 3, Anthropic's Claude Sonnet 4.5, and OpenAI's GPT-OSS models. This gives developers flexibility within a single IDE rather than tying the experience to one vendor.
For individual users, Antigravity is available at no charge. Google describes Gemini 3 Pro usage as subject to generous rate limits that refresh every five hours, and notes that the limits are expected to affect only a small share of power users.
Editor View and Manager View
Antigravity offers two main working modes that correspond to different mental models of agent-assisted work. The documentation and coverage consistently describe these as Editor view and Manager view.
Editor view is the default. It looks like a standard IDE with an agent in the side panel. The agent can read and edit files, suggest inline changes, and use the terminal and browser when needed.
Manager view raises the abstraction from single files to multiple agents and workspaces. This is where you coordinate several agent runs at once instead of editing code line by line.
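Conceptually, Manager view behaves like the orchestration loop sketched here: several agent runs proceed asynchronously across workspaces, and the developer reviews their finished outputs. The `runAgent` function is a placeholder standing in for whatever Antigravity does internally.

```typescript
// Conceptual sketch of Manager-view orchestration: launch several agent
// runs in parallel workspaces and collect their results for review.
async function runAgent(workspace: string, goal: string): Promise<string> {
  // Placeholder: a real run would plan, edit, test, and emit artifacts.
  return `artifact for "${goal}" in ${workspace}`;
}

async function managerView(): Promise<void> {
  const runs = [
    runAgent("repo-frontend", "migrate buttons to the new design system"),
    runAgent("repo-backend", "add rate limiting to the public API"),
    runAgent("repo-infra", "upgrade the CI runner image"),
  ];
  // The developer reviews finished work instead of watching each edit.
  const results = await Promise.all(runs);
  results.forEach((r) => console.log("ready for review:", r));
}

void managerView();
```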
Artifacts, not raw tool logs
A key design element in Antigravity is the artifact system. Instead of simply exposing raw tool-call logs, agents produce human-readable artifacts that summarize what they are doing and why.
Artifacts are structured objects that may include task lists, implementation plans, walkthrough documents, screenshots, and browser recordings. They represent work at the task level rather than the tool-call level and are designed to be easier for developers to verify than dense traces of model actions.
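To make that concrete, here is a minimal TypeScript sketch of what such an artifact object could look like. Antigravity's internal schema is not public, so every type and field name below is an illustrative assumption.

```typescript
// Hypothetical artifact model; Antigravity's real schema is not public.
// An artifact summarizes agent work at the task level, not the tool-call level.
type ArtifactKind =
  | "task_list"
  | "implementation_plan"
  | "walkthrough"
  | "screenshot"
  | "browser_recording";

interface Artifact {
  id: string;
  kind: ArtifactKind;
  title: string;
  createdBy: string;   // which agent produced it
  body: string;        // human-readable summary the developer reviews
  evidence?: string[]; // e.g. screenshot paths or recording URLs
}

// A task-list artifact a developer might review instead of raw tool logs:
const plan: Artifact = {
  id: "artifact-001",
  kind: "task_list",
  title: "Add pagination to the /users endpoint",
  createdBy: "backend-agent",
  body: [
    "1. Add page and pageSize query params to the route handler",
    "2. Update the repository query to use LIMIT/OFFSET",
    "3. Extend integration tests to cover page boundaries",
  ].join("\n"),
};

console.log(`${plan.kind}: ${plan.title}`);
```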
Google positions this as a response to the trust problem in current agent tooling. Many tools either show every internal step, which overwhelms users, or hide everything and show only the final code diff. Antigravity tries to sit in the middle by exposing task-level artifacts with enough validation signals that a developer can audit the actions the agent took.
Four Design Principles and Feedback Channels
Antigravity is built around four principles: trust, autonomy, feedback, and self-improvement.
Trust is handled through artifacts and verification steps. Autonomy comes from giving agents access to multiple surfaces (editor, terminal, and browser) so they can run more complex workflows without constant prompting. Feedback is enabled through comments on artifacts, and self-improvement is tied to agents learning from previous work and reusing successful approaches.
Antigravity lets developers comment directly on specific artifacts, including text and screenshots. Agents can incorporate this feedback into their ongoing work without leaving the current task, so you can correct a partial mistake without restarting the entire job.
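As a rough illustration of that comment loop, consider the sketch below. The `ArtifactComment` shape and the revision loop are hypothetical stand-ins for Antigravity's internal mechanics.

```typescript
// Hypothetical Google Docs-style comment anchored to part of an artifact.
interface ArtifactComment {
  artifactId: string;
  anchor: string; // the quoted text or screenshot region under discussion
  author: string;
  text: string;
}

// Sketch of folding feedback into in-flight work: only the commented-on
// steps get reworked, while everything already completed stays untouched.
async function reviseWithFeedback(
  comments: ArtifactComment[],
  applyRevision: (c: ArtifactComment) => Promise<void>,
): Promise<void> {
  for (const comment of comments) {
    await applyRevision(comment);
  }
}

void reviseWithFeedback(
  [
    {
      artifactId: "artifact-001",
      anchor: "Step 2",
      author: "dev",
      text: "Use cursor-based pagination instead of OFFSET",
    },
  ],
  async (c) => {
    console.log(`revising ${c.artifactId} at "${c.anchor}": ${c.text}`);
  },
);
```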
The platform also exposes a knowledge feature through which agents retain code snippets or step sequences from earlier runs. Over time, this becomes a reusable internal playbook that agents can query instead of reinventing the same strategy for each new project.
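A knowledge entry of this kind might, conceptually, be keyed by topic so that later agents can retrieve it. The store below is a deliberately naive sketch (an in-memory list with substring matching), not Antigravity's actual retrieval mechanism, which is undocumented.

```typescript
// Naive in-memory "playbook" of retained step sequences from past runs.
interface KnowledgeEntry {
  topic: string;      // e.g. "configure Playwright browser tests"
  steps: string[];    // the sequence of actions that worked last time
  sourceTask: string; // the earlier task this was learned from
}

class KnowledgeBase {
  private entries: KnowledgeEntry[] = [];

  retain(entry: KnowledgeEntry): void {
    this.entries.push(entry);
  }

  // A real system would likely use embeddings or structured search;
  // substring matching is just enough to show query-instead-of-reinvent.
  query(task: string): KnowledgeEntry[] {
    const needle = task.toLowerCase();
    return this.entries.filter((e) => e.topic.toLowerCase().includes(needle));
  }
}

const kb = new KnowledgeBase();
kb.retain({
  topic: "configure Playwright browser tests",
  steps: [
    "npm i -D @playwright/test",
    "npx playwright install",
    "add playwright.config.ts",
  ],
  sourceTask: "add e2e coverage to checkout flow",
});
console.log(kb.query("playwright").length); // 1
```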
Key takeaways
- Antigravity is an agent-first development platform that turns the IDE into a control plane where agents work across editor, terminal, and browser surfaces, rather than a narrow inline assistant.
- The system is a Visual Studio Code fork that runs as a free public preview on Windows, macOS, and Linux, with generous Gemini 3 Pro rate limits and optional use of Claude Sonnet 4.5 and GPT-OSS.
- Antigravity exposes two main modes: Editor view for coding with an agent sidebar, and Manager view as a mission-control interface for organizing multiple agents and workspaces asynchronously.
- Agents emit artifacts (task lists, implementation plans, screenshots, browser recordings, and more) that act as verifiable evidence of work rather than raw tool logs and enable asynchronous review workflows.
- Feedback and self-correction are built in: developers can attach Google Docs-style comments to artifacts on any surface, and agents incorporate that feedback and learn via a persistent knowledge base instead of reworking tasks from scratch.
Google Antigravity is a practical step toward agentic development. It anchors Gemini 3 Pro inside a real IDE workflow, exposes Editor and Manager views for supervising agents, and enforces task-level visibility through artifacts. Its four principles (trust, autonomy, feedback, and self-improvement) rest on verifiable outputs and persistent knowledge rather than opaque traces. Overall, Antigravity treats the IDE as a governed environment for autonomous agents, not a chat window bolted onto code actions.

Michael Sutter is a data science professional and holds a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michael excels in transforming complex datasets into actionable insights.