Industry News

Vercel now automatically correlates logs with distributed traces for customers using OpenTelemetry to instrument their applications. Traces capture data about your application's performance and behavior, helping you identify the cause of performance issues, errors, and other problems. OpenTelemetry (OTel) is an open source project that lets you instrument your application to collect traces. When a request is traced using OTel, Vercel enriches the relevant logs with trace and span identifiers, allowing you to correlate individual logs to a trace or span. This feature is available to customers using log drains through our integrations with Datadog and Dash0. No action is required and log to trace...
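As an illustrative sketch only (not Vercel's implementation), log-to-trace correlation amounts to attaching the active trace and span identifiers to each log entry; the `TraceContext`, `LogEntry`, and `logWithTrace` names below are hypothetical:

```typescript
// Sketch: enrich log entries with trace and span IDs so a log drain
// (e.g. Datadog or Dash0) can join each log line to its originating trace.
type TraceContext = { traceId: string; spanId: string };

interface LogEntry {
  message: string;
  traceId?: string;
  spanId?: string;
}

// In a real OTel setup the active context would come from the OpenTelemetry
// API (e.g. the current span's context); here it is passed explicitly.
function logWithTrace(message: string, ctx?: TraceContext): LogEntry {
  const entry: LogEntry = { message };
  if (ctx) {
    // These two fields are what make a log line correlatable to a trace/span.
    entry.traceId = ctx.traceId;
    entry.spanId = ctx.spanId;
  }
  return entry;
}

const entry = logWithTrace("payment failed", {
  traceId: "4bf92f3577b34da6a3ce929d0e0e4736",
  spanId: "00f067aa0ba902b7",
});
```

Logs emitted outside a traced request simply carry no identifiers, which is why no action is required to adopt the feature.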
Summary: A cache poisoning vulnerability affecting Next.js App Router >=15.3.0 <15.3.3 and Vercel CLI 41.4.1–42.2.0 has been resolved. The issue allowed page requests for HTML content to return a React Server Component (RSC) payload instead under certain conditions. When deployed to Vercel, this would only impact the browser cache and would not lead to the CDN being poisoned. When self-hosted and deployed externally, this could lead to cache poisoning if the CDN does not properly distinguish between RSC and HTML in its cache keys.
Impact: Under specific conditions involving App Router, middleware redirects, and omitted Vary headers, applications may:
• Serve RSC payloads in place of HTML
• Cache these responses at the browser or CDN...
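A hedged sketch of the CDN-side mitigation described in the advisory above: include the header that distinguishes RSC requests from HTML requests in the cache key, so a cached RSC payload can never be served for a plain HTML request. The `RSC` header name reflects Next.js App Router behavior, but treat the exact header set as an assumption and defer to the framework's Vary header:

```typescript
// Compute a cache key that keeps RSC and HTML responses in separate slots.
// This is an illustration of the principle, not any particular CDN's API.
function cacheKey(url: string, headers: Record<string, string>): string {
  // Next.js App Router marks RSC navigations with an `RSC: 1` request header.
  const isRsc = headers["rsc"] === "1";
  return `${url}|rsc=${isRsc ? "1" : "0"}`;
}

const htmlKey = cacheKey("/products", {});
const rscKey = cacheKey("/products", { rsc: "1" });
// htmlKey !== rscKey, so the two content types can never poison each other.
```

A CDN that ignores this header (or the corresponding Vary header) collapses both request types into one cache entry, which is exactly the poisoning scenario described above.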
Summary: A vulnerability affecting Next.js has been addressed. It impacted versions >=15.1.0 <15.1.8 and involved a cache poisoning bug leading to a Denial of Service (DoS) condition.
Impact: This issue does not impact customers hosted on Vercel. Under certain conditions, it may allow an HTTP 204 response to be cached for static pages, leading to the 204 response being served to all users attempting to access the page. The issue required the following conditions to be exploitable:
• Using an affected version of Next.js; and
• A route using cache revalidation with ISR (next start or standalone mode); and
• A route using SSR, with a CDN configured to cache 204 responses
Resolution: The issue was resolved by removing the...
Pro teams can now access a new usage dashboard (recently introduced to Enterprise customers) with improved filtering, detailed breakdowns, and export options to better understand usage and costs by product and project. You can now break down usage by:
• Product, to quickly identify usage, drill down into spikes, and track costs of a single or set of products
• Team and project, to understand your costs and monitor team activity across all or specific apps
• CSV exports for external analysis via integration into your cost observability tools and spreadsheets
Explore the new dashboard today. Read more.
  • By Christian Pickett, Shar Dara, Caleb Boyd, Chloe
Vercel now supports Nitro applications with zero configuration. Nitro is a backend toolkit for building web servers and powers frameworks like Nuxt.js, TanStack Start, and SolidStart. Deploy Nitro on Vercel or visit Nitro's Vercel documentation. Read more.
You can now subscribe to webhook events for deeper visibility into domain operations on Vercel. New event categories include:
• Domain transfers: Track key stages in inbound domain transfers
• Domain renewals: Monitor renewal attempts and auto-renew status changes, ideal for catching failures before they impact availability
• Domain certificates: Get notified when certificates are issued, renewed, or removed, helping you maintain valid HTTPS coverage across environments
• DNS changes: Receive alerts when DNS records are created, updated, or deleted
• Project domain management: Detect domain lifecycle changes across projects, including creation, updates, verification status, and reassignment
These events are especially...
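A minimal sketch of a consumer for the event categories above. The event type strings are hypothetical placeholders, not Vercel's actual event names; consult the webhooks documentation for the exact names and payload shapes:

```typescript
// Route an incoming domain webhook event to a handler by category prefix.
// The `domain.*` prefixes below are assumptions for illustration only.
type DomainEvent = { type: string; payload: Record<string, unknown> };

function routeDomainEvent(event: DomainEvent): string {
  // Dispatching on a category prefix keeps the handler resilient to
  // new event names being added within a category.
  if (event.type.startsWith("domain.transfer")) return "transfer";
  if (event.type.startsWith("domain.renewal")) return "renewal";
  if (event.type.startsWith("domain.certificate")) return "certificate";
  if (event.type.startsWith("domain.dns")) return "dns";
  return "other";
}

// Example: a failed-renewal event lands in the "renewal" handler, where you
// might page on-call before expiry impacts availability.
const kind = routeDomainEvent({ type: "domain.renewal.failed", payload: {} });
```

Dispatching by category rather than exact event name is a design choice that matches how the categories are documented above.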
My first week at Vercel coincided with something extraordinary: Vercel Ship 2025, which showcased better building blocks for the future of app development. AI has made this more important than ever. Over 1,200 people gathered in NYC for our third annual event to hear the latest updates in AI, compute, security, and more. Read more.
Vercel Queues is a message queue service built for Vercel applications, now in Limited Beta. Vercel Queues lets you offload work by sending tasks to a queue, where they’ll be processed in the background. This means users don’t have to wait for slow operations to finish during a request, and your app can handle retries and failures more reliably. Under the hood, Vercel Queues uses an append-only log to store messages, ensuring tasks such as AI video processing, sending emails, or updating external services are persisted and never lost. Key features of Vercel Queues:
• Pub/Sub pattern: Topic-based messaging allowing for multiple consumer groups
• Streaming support: Handle payloads without loading them entirely into memory...
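The pub/sub pattern described above can be sketched as an append-only log per topic, with each consumer group keeping its own read offset so groups consume the same messages independently. This is an in-memory illustration of the concept, not the Vercel Queues SDK:

```typescript
// Minimal topic log with per-consumer-group offsets.
class TopicLog<T> {
  private log: T[] = []; // append-only message log
  private offsets = new Map<string, number>(); // each group's read cursor

  publish(message: T): void {
    // Messages are only ever appended, never mutated or deleted,
    // which is what makes replay and multiple consumer groups possible.
    this.log.push(message);
  }

  // Returns the next unread message for the given consumer group, if any.
  consume(group: string): T | undefined {
    const offset = this.offsets.get(group) ?? 0;
    if (offset >= this.log.length) return undefined;
    this.offsets.set(group, offset + 1);
    return this.log[offset];
  }
}

const topic = new TopicLog<string>();
topic.publish("encode-video:42");
// Two independent groups each receive the same message.
const a = topic.consume("email-workers");
const b = topic.consume("video-workers");
```

Because offsets are tracked per group rather than per log, a slow consumer group never blocks or starves another, and a durable log (unlike this in-memory one) also survives restarts.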
Vercel Sandbox is a secure cloud resource powered by Fluid compute, designed to run untrusted code, such as code generated by AI agents, in isolated and ephemeral environments. Sandbox is a standalone SDK that can be used from any environment, including non-Vercel platforms. Sandbox workloads run in ephemeral, isolated microVMs and support execution times of up to 45 minutes. Sandbox uses the Fluid compute model and charges based on Fluid’s new Active CPU time, meaning you only pay for compute when actively using CPU. See Sandbox pricing for included allotments and pricing for Hobby and Pro teams. Now in Beta and available to customers on all plans. Learn more about Vercel Sandbox. Read more.
AI Gateway gives you a single endpoint to access a wide range of AI models across providers, with better uptime, faster responses, and no lock-in. Now in Beta, developers can use models from providers like OpenAI, xAI, Anthropic, Google, and more with:
• Usage-based billing at provider list prices
• Bring-your-own-key support
• Improved observability, including per-model usage, latency, and error metrics
• Simplified authentication
• Fallback and provider routing for more reliable inference
• Higher throughput and rate limits
Try AI Gateway for free or check out the documentation to learn more. Read more.
The default limits for Vercel Functions using Fluid compute have increased, with longer execution times, more memory, and more CPU. The default execution time, for all projects on all plans, is now 300 seconds:

Plan        | Default                | Maximum
Hobby       | 300s (previously 60s)  | 300s (previously 60s)
Pro         | 300s (previously 90s)  | 800s
Enterprise  | 300s (previously 90s)  | 800s

Memory and CPU instance sizes have also been updated:
• Standard (default) is now 1 vCPU / 2 GB (previously 1 vCPU / 1.7 GB)
• Performance is now 2 vCPU / 4 GB (previously 1.7 vCPU / 3 GB)
These increased instances are enabled by Active CPU pricing, which charges based on actual compute time. Periods of memory-only usage are billed at a significantly...
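For reference, in Next.js App Router a route can opt into a longer execution time with the `maxDuration` route segment config; this minimal sketch assumes a Next.js project, and the value is capped by your plan's maximum:

```typescript
// app/api/report/route.ts (hypothetical route) — allow up to 300 seconds.
export const maxDuration = 300; // seconds; capped by the plan maximum

export async function GET(): Promise<Response> {
  // Long-running work, e.g. generating a large report, fits within the limit.
  return new Response("ok");
}
```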
Functions using the Edge runtime now run on the unified Vercel Functions infrastructure. This applies both before and after the cache:
• Edge Middleware is now Vercel Routing Middleware, a new infrastructure primitive that runs full Vercel Functions with Fluid compute before the cache
• Edge Functions are now Vercel Functions using the Edge runtime after the cache
With these changes, all functions, including those running the Edge runtime, are:
• Fluid compute-ready: Run on Fluid compute for better performance and cost efficiency
• Multi-runtime: Support Node.js and Edge runtimes
• Framework-driven: Deployed automatically from supported framework code
• Consistent pricing: Use unified Vercel Functions pricing based on...
Vercel Functions on Fluid compute now use Active CPU pricing, which charges for CPU only while it is actively doing work. This eliminates costs during idle time and reduces spend for workloads like LLM inference, long-running AI agents, or any task with idle time. Active CPU pricing is built on three core metrics:
• Active CPU: Time your code is actively executing in an instance, priced at $0.128 per hour
• Provisioned memory: Memory allocated to the instance, billed at a lower rate of $0.0106 per GB-hour
• Invocations: One charge per function call
An example of this in action: a function running the Standard machine size at 100% active CPU would now cost ~$0.149 per hour (1 Active CPU hour + 2 GB of provisioned memory)...
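The ~$0.149/hour figure above can be verified from the listed rates (the per-invocation charge is omitted here since it is per call, not per hour):

```typescript
// Worked example of Active CPU pricing using the rates quoted above.
const ACTIVE_CPU_PER_HOUR = 0.128; // $ per Active CPU hour
const MEMORY_PER_GB_HOUR = 0.0106; // $ per GB-hour of provisioned memory

function hourlyCost(activeCpuHours: number, memoryGb: number): number {
  return activeCpuHours * ACTIVE_CPU_PER_HOUR + memoryGb * MEMORY_PER_GB_HOUR;
}

// Standard machine size: 1 vCPU, 2 GB memory, one hour at 100% active CPU.
const cost = hourlyCost(1, 2); // 0.128 + 2 * 0.0106 = 0.1492 ≈ $0.149
```

At lower CPU utilization only the Active CPU term shrinks; the memory term continues to accrue for as long as the instance is provisioned, which is why idle-heavy workloads see the largest savings.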
Vercel BotID is an invisible CAPTCHA with no visible challenges or manual bot management required. BotID is a new protection layer on Vercel designed for public, high-value routes such as checkouts, signups, AI chat interfaces, LLM-powered endpoints, and public APIs that are targets for sophisticated bots mimicking real user behavior. Unlike IP-based or heuristic systems, BotID:
• Silently collects thousands of signals that distinguish human users from bots
• Mutates these detections on every page load, evading reverse engineering and sophisticated bypasses
• Streams attack data into a global machine learning mesh, collectively strengthening protection for all customers
Powered by Kasada, BotID integrates into your application...
Rolling Releases are now generally available, allowing safe, incremental rollouts of new deployments with built-in monitoring, rollout controls, and no custom routing required. Each rollout starts at a defined stage and can either progress automatically or be manually promoted to a full release. You can configure rollout stages per project and decide how each stage progresses, with updates propagating globally in under 300ms through our fast propagation pipeline. Rolling Releases also include:
• Real-time monitoring: Track and compare error rates and Speed Insights (like Core Web Vitals, Time to First Byte, and more) between versions
• Flexible controls: Rollouts can be managed via REST API, CLI, the project dashboard, or the Vercel...
Vercel Microfrontends is now available in Limited Beta for Enterprise teams, enabling you to deploy and manage multiple frontend applications that appear as one cohesive application to users. This allows you to split large applications into smaller, independently deployable units that each team can build, test, and deploy using its own tech stack, while Vercel handles integration and routing across the platform.
• Faster development for large apps: Smaller units reduce build times and enable teams to move independently
• Independent team workflows: Each team manages its own deployment pipeline and framework
• Incremental migration: Modernize legacy systems piece by piece without slow, large-scale rewrites
Learn more about Vercel...
Vercel Agent is now available in Limited Beta. Agent is an AI assistant built into the Vercel dashboard that analyzes your app performance and security data. Agent focuses on Observability, summarizing anomalies, identifying likely causes, and recommending specific actions. These actions can span the platform, including managing firewall rules in response to traffic spikes or geographic anomalies, and identifying optimization opportunities within your application. Insights appear contextually as detailed notebooks with no configuration required. Sign up with Vercel Community, express your interest in participating, and we'll reach out to you. Read more.
Fluid compute exists for a new class of workloads: I/O-bound backends like AI inference, agents, MCP servers, and anything that needs to scale instantly but often remains idle between operations. These workloads do not follow traditional, quick request-response patterns. They’re long-running, unpredictable, and use cloud resources in new ways. Fluid quickly became the default compute model on Vercel, helping teams cut costs by up to 85% through optimizations like in-function concurrency. Today, we’re taking the efficiency and cost savings further with a new pricing model: you pay CPU rates only when your code is actively using CPU. Read more.
Modern sophisticated bots don’t look like bots. They execute JavaScript, solve CAPTCHAs, and navigate interfaces like real users. Tools like Playwright and Puppeteer can script human-like behavior from page load to form submission. Traditional defenses like checking headers or rate limiting aren't enough. Bots that blend in by design are hard to detect and expensive to ignore. Enter BotID: a new layer of protection on Vercel. Think of it as an invisible CAPTCHA that stops browser automation before it reaches your backend. It’s built to protect critical routes where automated abuse has real cost, such as checkouts, logins, signups, APIs, or actions that trigger expensive backend operations like LLM-powered endpoints. Read more.
Vercel's CDN cache can now be purged by users with the Owner role, either with a button in project settings or by running the CLI command vercel cache purge --type=cdn with version 44.2.0 or newer. The CDN cache is already purged when you create a new deployment, but a deployment can take seconds or minutes to build; this new button purges the CDN cache globally in milliseconds. Some cached paths persist between deployments, for example Image Optimization, so you can now purge manually when you know the external source images have changed and you want to see fresh content. Learn more in the documentation. Read more.