Industry News

Microfrontends support is now available in Public Beta. Microfrontends let you split a large application into smaller ones so developers can move more quickly. Teams working on large apps can build and test independently, while Vercel assembles and routes everything into a single experience. This reduces build times, supports parallel development, and enables gradual migration away from legacy code. Developers can use the Vercel Toolbar to iterate on and test their apps independently, while navigations between microfrontends benefit from prefetching and prerendering for fast transitions. To get started with microfrontends, clone one of our examples or follow the quickstart guide: In the Vercel dashboard...
  • By Mark Knichel, Kit Foster, Tom Knickman, Justin
Today, we're launching the official Vercel MCP server, a secure, OAuth-compliant interface that lets AI tools interact with your Vercel projects. AI tools are becoming a core part of the developer workflow, but they've lacked secure, structured access to infrastructure like Vercel. With this release, tools like Claude Code and VS Code can access logs, docs, and project metadata directly from within your development environment or AI assistant.
You can now access Claude Opus 4.1, a new model released today, through Vercel's AI Gateway with no other provider accounts required. This release from Anthropic improves agentic task execution, real-world coding, and reasoning over the previous Opus 4 model. AI Gateway lets you call the model through a consistent, unified API with just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime. To use it with the AI SDK v5, start by installing the package, then set the model to anthropic/claude-4.1-opus. AI Gateway includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries. To deliver high performance...
  • By Walter Korman, Harpreet Arora, Rohan Taneja, Josh
You can now access gpt-oss-20b and gpt-oss-120b from OpenAI, open-weight reasoning models designed to push the open model frontier, through Vercel's AI Gateway with no other provider accounts required. AI Gateway lets you call either model through a consistent, unified API with just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime. To use them with the AI SDK v5, start by installing the package, then set the model to either openai/gpt-oss-20b or openai/gpt-oss-120b. AI Gateway includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries. To deliver high performance and reliability to gpt-oss, AI...
  • By Walter Korman, Harpreet Arora, Jeremy Philemon
Vercel's official MCP (Model Context Protocol) server is now live at mcp.vercel.com, providing a remote interface with OAuth-based authorization that lets AI tools securely interact with your Vercel projects. The server integrates with AI assistants such as Claude.ai, Claude Code, and Claude for desktop, and with tools like VS Code, to search and navigate Vercel documentation, manage projects and deployments, and analyze deployment logs. Vercel MCP fully implements the latest MCP Authorization and Streamable HTTP specifications for enhanced security and performance. This update strengthens the connection between AI-driven workflows and the Vercel ecosystem. For more details, read the documentation.
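As one illustration, connecting VS Code to a remote Streamable HTTP MCP server is typically a short entry in its MCP config file; only the URL comes from the announcement, and the server name below is an arbitrary label:

```json
{
  "servers": {
    "vercel": {
      "type": "http",
      "url": "https://mcp.vercel.com"
    }
  }
}
```

After adding the entry, VS Code walks you through the OAuth authorization flow on first connection.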
Vibe coding has changed how software gets built. Tools like v0 make it possible to turn ideas into working prototypes in seconds. Anthropic's CEO predicts 90% of code will be AI-generated in 3-6 months. Adoption is accelerating fast, and for many builders, we're already there. But here's the uncomfortable truth: the faster you build, the more risk you create. Last week, a viral app leaked 72k selfies and government IDs. This wasn’t a hack or advanced malware. It was caused by default settings, misused variables, and the absence of guardrails: a misconfigured Firebase bucket mistakenly left public for anyone to access. The app was built quickly, shipped without security review, and went viral.
Observability Plus users can now choose between line charts, volume charts, tables, or a big number when visualizing data returned by queries. Both the queries and their visualization settings can be saved to shareable notebooks. This update replaces fixed presets with customizable controls and is available now at no extra cost for teams on Observability Plus. Try it out or learn more about Observability and Observability Plus.
Vercel now natively supports Hono, a fast, lightweight backend framework built on web standards, with zero configuration. Once your Hono app is in place, use Vercel CLI to develop and deploy it. With this improved integration, Vercel's framework-defined infrastructure now recognizes and deeply understands Hono applications, ensuring they benefit from optimizations across builds, deployments, and application delivery. New Hono applications deployed to Vercel benefit from Fluid compute, with Active CPU pricing, automatic cold start optimizations, background processing, and much more. Deploy Hono on Vercel or visit Hono's Vercel documentation.
With over 2 million weekly downloads, the AI SDK is the leading open-source AI application toolkit for TypeScript and JavaScript. Its unified provider API allows you to use any language model and enables powerful integrations into leading web frameworks.
Since launch, we’ve seen a growing wave of people building with v0 and sharing what they’ve created, from full-stack apps to UI experiments. Now, we’re going a step further by sponsoring builders innovating and showcasing what’s possible with v0. Today we’re launching the v0 Ambassador Program as a way to recognize and enable members of our community who create, share, and inspire. Apply to join the v0 Ambassador Program and help others discover the magic of what's possible with v0.
Debugging a failing deploy can involve a lot of context switching between your deploy logs and external tools. We've added a new option to deep link directly into ChatGPT with your deploy analysis, so you can get immediate AI-powered insights and debugging help without manually copying and pasting log information. How it works: when a deploy fails, you'll see a "Why did it fail?" option. Clicking it presents an "Ask ChatGPT" button that automatically opens and pre-populates ChatGPT with the relevant deploy context. Example use cases: quickly understand common errors in your build output, get suggestions for fixing dependency issues, and receive explanations for unfamiliar error messages. Getting started: next time you encounter a...
You can now access GLM-4.5 and GLM-4.5 Air, new flagship models from Z.ai designed to unify frontier reasoning, coding, and agentic capabilities, through Vercel's AI Gateway with no other provider accounts required. AI Gateway lets you call either model through a consistent, unified API with just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime. To use them with the AI SDK v5, start by installing the package, then set the model to either zai/glm-4.5 or zai/glm-4.5-air. AI Gateway includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries. Learn more about AI Gateway.
You can now quickly share snapshots of any chart in Vercel Observability, making it easier to collaborate during debugging and incident response. Hover over a chart and press ⌘+C or Ctrl+C to copy a URL that opens a snapshot of the chart in Vercel Observability. The snapshot includes the same time range, filters, and settings as when copied. The link includes a preview image of the chart that unfurls in tools like Slack and Teams. Try it out or learn more about Observability and Observability Plus.
A few months ago, we announced Fluid compute, an approach to serverless computing that uses resources more efficiently, minimizes cold starts, and significantly reduces costs. More recently at Vercel Ship 2025, we introduced Active CPU pricing for even more cost-effective compute on Vercel. Fluid compute with Active CPU pricing powers over 45 billion weekly requests, saving customers up to 95% and never charging CPU rates for idle time. Behind the scenes, it took over two years to build the required infrastructure to make this possible.
Node.js 18 (LTS support ends April 30, 2025) and the Vercel legacy build image will be deprecated on September 1, 2025. If you are still using the legacy build image on this date, new builds will display an error. How do I know if I am still using the legacy build image? Projects using Node.js 18.x in Build and Deployment Settings, or overrides in package.json, use the legacy build image. What changes between the legacy build image and the latest build image? The minimum version of Node.js is now 20.x, the Python toolchain version is now 3.12, and the Ruby toolchain version is now 3.3.x. Will my existing deployments be affected? Existing deployments will not be affected. However, the...
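One way projects pin their Node.js version is the engines field in package.json; a sketch of moving such an override onto the current build image would be to bump it to 20.x:

```json
{
  "engines": {
    "node": "20.x"
  }
}
```

Projects selecting the Node.js version in Build and Deployment Settings instead can update it there.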
Model Context Protocol (MCP) is a new spec that helps standardize the way large language models (LLMs) access data and systems, extending what they can do beyond their training data. It standardizes how developers expose data sources, tools, and context to models and agents, enabling safe, predictable interactions and acting as a universal connector between AI and applications. Instead of building custom integrations for every AI platform, developers can create an MCP server once and use it everywhere.
Vercel is partnering with Solara6, a digital agency known for building high-performing ecommerce experiences for customers like Kate Spade, Coach, and Mattress Firm. Their work emphasizes AI-powered efficiencies, fast iteration cycles, and user experience, while prioritizing measurable outcomes. Solara6 customers see improvements in their developer velocity, operational costs, page load times, conversion rates, and organic traffic.
You can now access Qwen3 Coder, a model from QwenLM, an Alibaba Cloud company, designed to handle complex, multi-step coding workflows, through Vercel's AI Gateway with no other provider accounts required. AI Gateway lets you call the model through a consistent, unified API with just a single string update, track usage and cost, and configure performance optimizations, retries, and failover for higher than provider-average uptime. To use it with the AI SDK v5, start by installing the package, then set the model to alibaba/qwen3-coder. AI Gateway includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries. To deliver high performance and reliability to Qwen3 Coder, AI Gateway leverages...
GrowthBook, the open-source experimentation platform, is now available as a native integration on the Vercel Marketplace. Easily add feature flags and A/B testing to your Vercel projects with minimal setup. With GrowthBook on Vercel, you can declare flags in code using the Flags SDK and the @flags-sdk/growthbook adapter, sync feature flags directly to Vercel Edge Config for low-latency evaluation, and bring your own data using GrowthBook’s warehouse-native A/B testing platform. Get started by installing GrowthBook from the Vercel Marketplace, with one-click setup and unified billing. Explore the template to view and deploy the example.