Cloudfest Conference 2025

Use code HH20 for 20% off an event ticket!

Industry News

You can now subscribe to webhook events for deeper visibility into domain operations on Vercel. New event categories include:

- Domain transfers: Track key stages in inbound domain transfers.
- Domain renewals: Monitor renewal attempts and auto-renew status changes, ideal for catching failures before they impact availability.
- Domain certificates: Get notified when certificates are issued, renewed, or removed, helping you maintain valid HTTPS coverage across environments.
- DNS changes: Receive alerts when DNS records are created, updated, or deleted.
- Project domain management: Detect domain lifecycle changes across projects, including creation, updates, verification status, and reassignment.

These events are especially...
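A minimal sketch of a receiver for these events, assuming Vercel's documented HMAC signature header; the event type names in the switch are hypothetical placeholders, so check the webhooks reference for the real identifiers:

```ts
import crypto from "node:crypto";

// Next.js route handler that verifies and dispatches Vercel webhook events.
export async function POST(request: Request) {
  const body = await request.text();

  // Vercel signs webhook payloads; reject anything that fails the HMAC check.
  const expected = crypto
    .createHmac("sha1", process.env.VERCEL_WEBHOOK_SECRET!)
    .update(body)
    .digest("hex");
  if (request.headers.get("x-vercel-signature") !== expected) {
    return new Response("Invalid signature", { status: 401 });
  }

  const event = JSON.parse(body);
  switch (event.type) {
    case "domain.renewal.failed": // hypothetical event name
      // Page the on-call before the failure impacts availability.
      break;
    case "domain.certificate.issued": // hypothetical event name
      // Record that HTTPS coverage is in place for the domain.
      break;
  }
  return new Response("OK");
}
```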
Vercel Queues, now in Limited Beta, is a message queue service built for Vercel applications. Vercel Queues lets you offload work by sending tasks to a queue, where they’ll be processed in the background. This means users don’t have to wait for slow operations to finish during a request, and your app can handle retries and failures more reliably. Under the hood, Vercel Queues uses an append-only log to store messages and ensures tasks such as AI video processing, sending emails, or updating external services are persisted and never lost. Key features of Vercel Queues:

- Pub/Sub pattern: Topic-based messaging allowing for multiple consumer groups
- Streaming support: Handle payloads without loading them entirely into memory...
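A hypothetical sketch of the producer/consumer pattern described above; the SDK is in Limited Beta and its real surface may differ, so the `sendToQueue` and `transcode` names below are illustrative stubs, not the actual API:

```ts
// Stand-ins for the beta SDK and application code; both are assumptions.
declare function sendToQueue(topic: string, payload: unknown): Promise<void>;
declare function transcode(videoId: string): Promise<void>;

// Producer: enqueue the slow work during the request so the user gets a
// fast response while processing continues in the background.
export async function handleUpload(videoId: string) {
  await sendToQueue("video.process", { videoId });
  return { status: "queued" };
}

// Consumer: one of possibly several consumer groups subscribed to the topic.
// The append-only log persists the message until handling succeeds, so a
// crash mid-transcode leads to a retry rather than lost work.
export async function onVideoProcess(message: { videoId: string }) {
  await transcode(message.videoId);
}
```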
Modern sophisticated bots don’t look like bots. They execute JavaScript, solve CAPTCHAs, and navigate interfaces like real users. Tools like Playwright and Puppeteer can script human-like behavior from page load to form submission. Traditional defenses like checking headers or rate limits aren't enough: bots that blend in by design are hard to detect and expensive to ignore. Enter BotID: a new layer of protection on Vercel. Think of it as an invisible CAPTCHA that stops browser automation before it reaches your backend. It’s built to protect critical routes where automated abuse has real cost, such as checkouts, logins, signups, APIs, or actions that trigger expensive backend operations like LLM-powered endpoints. Read more
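A sketch of gating one such route behind a server-side check, assuming the `checkBotId()` helper from the `botid` package shown in Vercel's examples; confirm the import path and result shape against the BotID docs, and note `runLlm` is a hypothetical stand-in for your expensive operation:

```ts
import { checkBotId } from "botid/server";

// Hypothetical placeholder for the costly backend work being protected.
declare function runLlm(prompt: string): Promise<string>;

export async function POST(request: Request) {
  // Classify the caller before doing any expensive work.
  const verification = await checkBotId();
  if (verification.isBot) {
    return new Response("Access denied", { status: 403 });
  }

  const { prompt } = await request.json();
  return Response.json({ answer: await runLlm(prompt) });
}
```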
Fluid compute exists for a new class of workloads: I/O-bound backends like AI inference, agents, MCP servers, and anything that needs to scale instantly but often remains idle between operations. These workloads do not follow traditional, quick request-response patterns. They’re long-running, unpredictable, and use cloud resources in new ways. Fluid quickly became the default compute model on Vercel, helping teams cut costs by up to 85% through optimizations like in-function concurrency. Today, we’re taking the efficiency and cost savings further with a new pricing model: you pay CPU rates only when your code is actively using CPU. Read more
Vercel Agent is now available in Limited Beta. Agent is an AI assistant built into the Vercel dashboard that analyzes your app's performance and security data. Agent focuses on Observability, summarizing anomalies, identifying likely causes, and recommending specific actions. These actions can span the platform, including managing firewall rules in response to traffic spikes or geographic anomalies, and identifying optimization opportunities within your application. Insights appear contextually as detailed notebooks with no configuration required. Sign up with Vercel Community, express your interest in participating, and we'll reach out to you. Read more
Vercel Microfrontends is now available in Limited Beta for Enterprise teams, enabling you to deploy and manage multiple frontend applications that appear as one cohesive application to users. This allows you to split large applications into smaller, independently deployable units that each team can build, test, and deploy using their own tech stack, while Vercel handles integration and routing across the platform (see the config sketch after this list).

- Faster development for large apps: Smaller units reduce build times and enable teams to move independently
- Independent team workflows: Each team manages its own deployment pipeline and framework
- Incremental migration: Modernize legacy systems piece by piece without slow, large-scale rewrites

Learn more about Vercel...
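A hypothetical `microfrontends.json` sketch of the routing described above; the schema shown here (an `applications` map with path-based `routing`) is an assumption based on Vercel's examples, so verify field names against the Microfrontends docs:

```json
{
  "applications": {
    "marketing": {},
    "docs": {
      "routing": [{ "paths": ["/docs", "/docs/:path*"] }]
    }
  }
}
```

Here `marketing` would serve as the default application and `docs` would own everything under `/docs`, with Vercel stitching the two deployments into one site.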
Rolling Releases are now generally available, allowing safe, incremental rollouts of new deployments with built-in monitoring, rollout controls, and no custom routing required. Each rollout starts at a defined stage and can either progress automatically or be manually promoted to a full release. You can configure rollout stages per project and decide how each stage progresses, with updates propagating globally in under 300ms through our fast propagation pipeline. Rolling Releases also include:

- Real-time monitoring: Track and compare error rates and Speed Insights (like Core Web Vitals, Time to First Byte, and more) between versions
- Flexible controls: Rollouts can be managed via REST API, CLI, the project dashboard, or the Vercel...
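A sketch of promoting a rollout stage programmatically; the endpoint path and query parameter below are assumptions for illustration, so consult the Rolling Releases REST API reference for the real route:

```ts
// Manually promote the current rollout stage for a project (assumed endpoint).
async function promoteRollout(projectId: string, teamId: string): Promise<void> {
  const res = await fetch(
    `https://api.vercel.com/v1/projects/${projectId}/rolling-release/approve?teamId=${teamId}`,
    {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.VERCEL_TOKEN}` },
    },
  );
  if (!res.ok) throw new Error(`Promotion failed with status ${res.status}`);
}
```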
Vercel BotID is an invisible CAPTCHA with no visible challenges or manual bot management required. BotID is a new protection layer on Vercel designed for public, high-value routes such as checkouts, signups, AI chat interfaces, LLM-powered endpoints, and public APIs that are targets for sophisticated bots mimicking real user behavior. Unlike IP-based or heuristic systems, BotID:

- Silently collects thousands of signals that distinguish human users from bots
- Mutates these detections on every page load, evading reverse engineering and sophisticated bypasses
- Streams attack data into a global machine learning mesh, collectively strengthening protection for all customers

Powered by Kasada, BotID integrates into your application...
Vercel Functions on Fluid compute now use Active CPU pricing, which charges for CPU only while it is actively doing work. This eliminates costs during idle time and reduces spend for workloads like LLM inference, long-running AI agents, or any task with idle time. Active CPU pricing is built on three core metrics:

- Active CPU: Time your code is actively executing in an instance, priced at $0.128 per hour
- Provisioned Memory: Memory allocated to the instance, billed at a lower rate of $0.0106 per GB-hour
- Invocations: One charge per function call

An example of this in action: a function running the Standard machine size at 100% active CPU would now cost ~$0.149 per hour (1 Active CPU hour + 2 GB of provisioned memory)...
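The example figure follows directly from the listed rates; a quick check of the arithmetic:

```ts
// Rates from the announcement above.
const ACTIVE_CPU_PER_HOUR = 0.128; // $ per Active CPU hour
const MEMORY_PER_GB_HOUR = 0.0106; // $ per GB-hour of provisioned memory

// One hour on the Standard size (1 vCPU / 2 GB) at 100% active CPU.
const activeCpuHours = 1;
const memoryGbHours = 2; // 2 GB held for 1 hour

const hourlyCost =
  activeCpuHours * ACTIVE_CPU_PER_HOUR + memoryGbHours * MEMORY_PER_GB_HOUR;

console.log(hourlyCost.toFixed(4)); // 0.1492, i.e. ~$0.149/hour before per-invocation charges
```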
Functions using the Edge runtime now run on the unified Vercel Functions infrastructure. This applies both before and after the cache:

- Edge Middleware is now Vercel Routing Middleware, a new infrastructure primitive that runs full Vercel Functions with Fluid compute before the cache
- Edge Functions are now Vercel Functions using the Edge runtime after the cache

With these changes, all functions, including those running the Edge runtime, are:

- Fluid compute-ready: Run on Fluid compute for better performance and cost efficiency
- Multi-runtime: Support Node.js and Edge runtimes
- Framework-driven: Deployed automatically from supported framework code
- Consistent pricing: Use unified Vercel Functions pricing based on...
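For context, a minimal Routing Middleware example using Next.js (one supported framework); this runs as a full Vercel Function before the cache, and the rewrite target here is illustrative:

```ts
// middleware.ts — executes before the cache for matched requests.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  // Example: route opted-in beta users to an alternate experience.
  if (request.cookies.get("beta")?.value === "1") {
    return NextResponse.rewrite(new URL("/beta", request.url));
  }
  return NextResponse.next();
}

// Skip static assets so the middleware only runs where routing decisions matter.
export const config = { matcher: ["/((?!_next/static|favicon.ico).*)"] };
```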
The default limits for Vercel Functions using Fluid compute have increased, with longer execution times, more memory, and more CPU. The default execution time, for all projects on all plans, is now 300 seconds:

| Plan | Default | Maximum |
| --- | --- | --- |
| Hobby | 300s (previously 60s) | 300s (previously 60s) |
| Pro | 300s (previously 90s) | 800s |
| Enterprise | 300s (previously 90s) | 800s |

Memory and CPU instance sizes have also been updated:

- Standard (default) is now 1 vCPU / 2 GB (previously 1 vCPU / 1.7 GB)
- Performance is now 2 vCPU / 4 GB (previously 1.7 vCPU / 3 GB)

These increased instances are enabled by Active CPU pricing, which charges based on actual compute time. Periods of memory-only usage are billed at a significantly...
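Individual routes can opt into the longer window; a minimal sketch using the Next.js App Router's `maxDuration` segment config (Hobby projects are capped at the 300s default):

```ts
// app/api/agent/route.ts — raise this route's limit toward the
// Pro/Enterprise maximum of 800 seconds.
export const maxDuration = 800; // seconds

export async function GET() {
  // Long-running work, such as an AI agent loop, fits within the raised limit.
  return Response.json({ ok: true });
}
```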
AI Gateway gives you a single endpoint to access a wide range of AI models across providers, with better uptime, faster responses, and no lock-in. Now in Beta, developers can use models from providers like OpenAI, xAI, Anthropic, Google, and more with:

- Usage-based billing at provider list prices
- Bring-Your-Own-Key support
- Improved observability, including per-model usage, latency, and error metrics
- Simplified authentication
- Fallback and provider routing for more reliable inference
- Higher throughput and rate limits

Try AI Gateway for free or check out the documentation to learn more. Read more
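A short sketch of calling the Gateway through the AI SDK, assuming its `provider/model` string IDs route through the single Gateway endpoint; the model ID and prompt are illustrative:

```ts
import { generateText } from "ai";

// The string model ID is resolved by AI Gateway, which handles provider
// routing and fallback behind one endpoint.
const { text } = await generateText({
  model: "openai/gpt-4o",
  prompt: "Summarize the latest deployment logs in two sentences.",
});

console.log(text);
```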
Vercel Sandbox is a secure cloud resource powered by Fluid compute. It is designed to run untrusted code, such as code generated by AI agents, in isolated and ephemeral environments. Sandbox is a standalone SDK that can be used from any environment, including non-Vercel platforms. Sandbox workloads run in ephemeral, isolated microVMs via the new Sandbox SDK, supporting execution times up to 45 minutes. Sandbox uses the Fluid compute model and charges based on Fluid’s new Active CPU time, meaning you only pay for compute when actively using CPU. See Sandbox pricing for included allotments and pricing for Hobby and Pro teams. Now in Beta and available to customers on all plans. Learn more about Vercel Sandbox. Read more
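A sketch of the lifecycle, assuming a `@vercel/sandbox` SDK surface along the lines of Vercel's examples (`Sandbox.create`, `runCommand`, `stop`); the exact option and method names are assumptions to verify against the docs:

```ts
import { Sandbox } from "@vercel/sandbox";

// Spin up an ephemeral, isolated microVM for untrusted code.
const sandbox = await Sandbox.create({
  timeout: 45 * 60 * 1000, // up to 45 minutes, in ms (assumed option name)
});

// Run agent-generated code without exposing your own infrastructure.
const result = await sandbox.runCommand("node", ["-e", "console.log(2 + 2)"]);
console.log(result.exitCode);

// Tear the microVM down once the work is done.
await sandbox.stop();
```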
Vercel's CDN cache can now be purged by users with the Owner role, using the button in project settings or by running the CLI command vercel cache purge --type=cdn with version 44.2.0 or newer. Creating a new deployment already purges the CDN cache, but a build can take seconds or minutes; this new button purges the CDN cache globally in milliseconds. Some cached paths, such as Image Optimization, persist between deployments, so you can now purge manually when you know the external source images have changed and you want to see fresh content. Learn more in the documentation. Read more
There is now a search feature in the top right corner of every page in the vercel.com dashboard. This search allows you to instantly find:

- Teams
- Projects
- Deployments (by branch)
- Pages
- Settings

For more complex queries, you can also ask the Navigation Assistant. This AI-powered feature can locate any page in the dashboard and apply filters based on your question. Learn more about Find in the documentation. Read more
Vercel is evolving to meet the expanding potential of AI while staying grounded in the principles that brought us here. We're extending from frontend to full stack, deepening our enterprise capabilities, and powering the next generation of AI applications, including integrating AI into our own developer tools. Today, we’re welcoming Keith Messick as our first Chief Marketing Officer to support this growth and (as always) amplify the voice of the developer. Read more
Turso now offers a native integration with Vercel, available as a Database & Storage provider in the Marketplace. The Turso integration brings fast, distributed SQLite databases to your Vercel projects with:

- Seamless integration with Vercel, including one-click setup and unified billing
- Edge-hosted SQLite databases built for speed and global distribution
- A developer-friendly experience, configurable through Vercel CLI workflows

Get started with Turso on the Vercel Marketplace. Read more
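Querying a Turso database uses the standard @libsql/client package; a minimal sketch assuming the integration injects TURSO_DATABASE_URL and TURSO_AUTH_TOKEN environment variables (the exact variable names may differ):

```ts
import { createClient } from "@libsql/client";

// Connect to the edge-hosted SQLite database provisioned by the integration.
const db = createClient({
  url: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN!,
});

const { rows } = await db.execute("SELECT 1 AS ok");
console.log(rows[0]); // { ok: 1 }
```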