Industry News

The Vercel Observability dashboard now includes a dedicated view for middleware, showing invocation counts and performance metrics. Observability Plus users get additional insights and tooling:

- Analyze invocations by request path, matched against your middleware config
- Break down middleware actions by type (e.g., redirect, rewrite)
- View rewrite targets and frequency
- Query middleware invocations using the query builder

View the dashboard or learn more about Observability and Observability Plus. Continue reading...
Since February, Vercel has blocked 148 billion malicious requests from 108 million unique IP addresses. Every deployment automatically inherits these protections, keeping your workloads secure by default and letting your team focus on shipping rather than incidents. Our real-time DDoS filtering, managed Web Application Firewall (WAF), and enhanced visibility ensure consistent, proactive security. Here's what's new since February. Continue reading...
Rate limiting now has higher included usage and broader availability to help protect your applications from abuse and manage traffic effectively. The first 1,000,000 allowed rate limit requests per month are now included. Hobby teams also get 1 free rate limit rule per project, up to the same included allotment. These changes are now effective and have been automatically applied to your account. Learn more about configuring rate limits or create a new rate limiting rule now. Continue reading...
We’ve optimized connection pooling in our CDN to reduce latency when connecting to external backends, regardless of traffic volume.

- Lower latency: Improved connection reuse and TLS session resumption reduce response times by up to 60% in some regions, with a 15–30% average improvement.
- Reduced origin load: 97% connection reuse and more efficient TLS resumption significantly cut the number of new handshakes required.

This is now live across all Vercel deployments at no additional cost. Continue reading...
The Observability dashboard now surfaces caching behavior for external API calls using Vercel Data Cache. On the External APIs page, you’ll see a new column indicating how many requests were served from the cache vs. the origin. Caching insights are available per hostname for all users, and per path for Observability Plus subscribers. View the external API dashboard or learn more about Vercel Data Cache. Continue reading...
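Opting an external API call into the Data Cache is a one-line change in a Next.js App Router app. A minimal sketch, assuming a Next.js runtime (the `next.revalidate` fetch extension is Next.js-specific, and the API hostname is illustrative):

```typescript
// Pure helper: express the revalidation window in seconds.
function hours(h: number): number {
  return h * 3600;
}

// Sketch: fetch an external API through the Vercel Data Cache.
// Requests served this way show up as "cache" rather than "origin"
// in the Observability External APIs view.
async function getExchangeRates(): Promise<unknown> {
  const res = await fetch("https://api.example.com/rates", {
    // Next.js-specific fetch extension: cache the response in the
    // Data Cache and revalidate at most once per hour.
    next: { revalidate: hours(1) },
  } as RequestInit);
  return res.json();
}

console.log(hours(1)); // revalidation window in seconds → 3600
```

Without the `next` option, each request would hit the origin and count against the "origin" column in the dashboard.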
Storage should be simple to set up, globally available, and built to last, without slowing you down or adding complexity. It should feel native to your app. That's why we built Vercel Blob: AWS S3-backed storage that's deeply integrated with Vercel's global application delivery and automated caching, with predictable pricing to serve public assets cost-efficiently at scale. Vercel Blob is now generally available. It's already storing and serving over 400 million files, and powers production apps like v0 and the Vercel Dashboard. Continue reading...
Vercel recently published a Model Context Protocol (MCP) adapter that makes it easy to spin up an MCP server on most major frameworks. Vapi provides an API for building real-time voice agents. They handle orchestration, scaling, and telephony to provide a completely model-agnostic and interchangeable interface for building agents. Vapi rebuilt their MCP server on Vercel, letting users create agents, automate testing, analyze transcripts, build workflows, and give agents access to all of Vapi’s endpoints. Continue reading...
Vercel Blob is now generally available, bringing high-performance, globally scalable object storage into your workflows and apps. Blob storage’s underlying S3 infrastructure ensures 99.999999999% durability, and already stores over 400 million files while powering production apps like v0.dev.

Pricing is usage-based:

- Storage: $0.023 per GB per month
- Simple API operations (e.g. reads): $0.40 per million
- Advanced operations (e.g. uploads): $5.00 per million
- Blob Data Transfer: $0.050 per GB

Pricing applies to:

- New Blob stores starting today
- Existing stores starting June 16, 2025

Hobby users now get increased free usage: 1 GB of storage and 10 GB of Blob Data Transfer per month. Get started with Vercel Blob and...
The Vercel AI Gateway is now available for alpha testing. Built on the AI SDK 5 alpha, the Gateway lets you switch between ~100 AI models without needing to manage API keys, rate limits, or provider accounts. The Gateway handles authentication, usage tracking, and in the future, billing. Continue reading...
We’re updating how pricing works in v0. Usage is now metered on input and output tokens, which convert to credits, instead of fixed message counts. This gives you more predictable pricing as you grow and increases the amount of usage available on our free tier. Existing v0 users will transition to the new pricing at the start of your next billing period. New users will start on the improved pricing today. Continue reading...
Hypertune now offers a native integration with Vercel Marketplace. You can find it as a Flags & Experimentation provider in the Flags tab. The Hypertune integration offers:

- Powerful flags, A/B testing, analytics and app configuration
- Full end-to-end type safety with type-safe client generation
- Personalization with the lowest latency using Edge Config
- A first-class Flags SDK adapter

Install and access on Vercel with one-click setup and unified billing. Deploy the Hypertune template built for Vercel Marketplace today. Continue reading...
The Observability dashboard now includes a dedicated tab for Vercel Blob, which provides visibility into how Blob stores are used across your applications. At the team level, you can see total data transfer, download volume, cache activity, and API operations. You can also drill into activity by user agent, edge region, and client IP. This allows you to understand usage patterns, identify inefficiencies, and optimize how your application stores and serves assets. Try it out or learn more about Vercel Blob. Continue reading...
Builds on Vercel now initialize 45% faster on average, reducing build times by around 15 seconds for Pro and Enterprise teams. Build initialization includes steps like restoring the build cache and fetching your code before the Build Command runs. These improvements come from continued enhancements to Hive, Vercel’s build infrastructure. This improvement also reduced I/O wait times for file writes inside the build container by 75%, improving performance for the entire build. Learn more about builds on Vercel. Continue reading...
Fern is improving how teams build and host documentation. As a multi-tenant platform, Fern enables companies like Webflow and ElevenLabs to create, customize, and serve API documentation from a single Next.js application—scaling seamlessly across multiple customer domains. With 6 million+ page views per month and 1 million+ unique visitors, performance and reliability are key. By running on Vercel’s infrastructure, Fern benefits from automatic caching, optimized content delivery, and instant scalability, all while maintaining a fast iteration cycle for development. Additionally, their migration to Next.js App Router has driven a 50-80% reduction in page load times, improving navigation speed and Lighthouse scores for customers...
CLI updates

Netlify CLI 21.4.1 has a cleaner look, simplified workflows, and a new command. Install the latest version:

    npm install netlify-cli -g
    # or
    yarn global add netlify-cli

Onboard to Netlify projects with the netlify clone command. Pull an existing Netlify project locally from its repo with one command:

    netlify clone <repository-url>

This clones the repo and links it to the right Netlify site, ideal for onboarding or switching projects.

Builds are included with netlify deploy. The deploy command now runs a build step by default, eliminating separate build commands. Instead of netlify build && netlify deploy or netlify deploy --build, simply run:

    netlify deploy

To deploy without building, use netlify deploy --no-build. Smarter manual setup...
The Next.js team recently disclosed CVE-2025-32421, a low-severity vulnerability allowing for CDN cache poisoning in some scenarios. The engineering team at Netlify has confirmed that Next.js sites on Netlify are not vulnerable. The vulnerability requires use of a CDN that may cache responses without explicit Cache-Control headers, but Netlify's CDN never does so. As a general security precaution, we recommend upgrading to the latest versions of the Next.js framework and allowing automatic updates of the OpenNext Netlify Next.js adapter. Continue reading...
A low-severity cache poisoning vulnerability was discovered in Next.js.

Summary: This affects versions >14.2.24 through <15.1.6 as a bypass of the previous CVE-2024-46982. The issue happens when an attacker exploits a race condition between two requests, one containing the ?__nextDataRequest=1 query parameter and another with the x-now-route-matches header. Some CDN providers may cache a 200 OK response even in the absence of explicit cache-control headers, enabling a poisoned response to persist and be served to subsequent users.

Affected versions: Next.js versions >14.2.24 through <15.1.6

Impact: This vulnerability allows an attacker to poison the CDN cache by injecting the response body from a non-cacheable data request...
Since 2014, Consensys has shaped the web3 movement with tools like Linea, Infura, and MetaMask—the most widely used self-custodial wallet on the web, with millions of users across the globe. As the blockchain ecosystem quickly matured, the need for a site that could move as fast as the teams building it became clear. To meet that demand, Consensys migrated MetaMask.io to Next.js and Vercel, creating an architecture built for scale, speed, and continuous iteration. Continue reading...
Vercel’s CDN, which can proxy requests to external backends, now caches proxied responses using the CDN-Cache-Control and Vercel-CDN-Cache-Control headers. This aligns caching behavior for external backends with how Vercel Functions are already cached. This is available starting today, on all plans, at no additional cost. Per the Targeted HTTP Cache Control spec (RFC 9213), these headers support standard directives like max-age and stale-while-revalidate, enabling fine-grained control over CDN caching without affecting browser caches. You can return the headers directly from your backend, or define them in vercel.json under the headers key if your backend can't be modified. No configuration changes or redeployments required. Return...
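For backends you control, opting in means returning the targeted headers. A minimal sketch of the header values, with illustrative TTLs; the header names and their precedence on Vercel's CDN come from the changelog and RFC 9213, while the `no-store` browser directive is one possible policy choice:

```typescript
// Pure helper: build RFC 9213 targeted cache headers for a response
// proxied through Vercel's CDN. TTL values are illustrative.
function cdnCacheHeaders(maxAgeSec: number, swrSec: number): Record<string, string> {
  return {
    // Honored by any RFC 9213-aware CDN on the request path:
    "CDN-Cache-Control": `max-age=${maxAgeSec}, stale-while-revalidate=${swrSec}`,
    // Vercel-specific; takes precedence on Vercel's CDN:
    "Vercel-CDN-Cache-Control": `max-age=${maxAgeSec}`,
    // Keep browsers from caching, so only the CDN layer holds the response:
    "Cache-Control": "no-store",
  };
}

console.log(cdnCacheHeaders(60, 300)["CDN-Cache-Control"]);
// → "max-age=60, stale-while-revalidate=300"
```

If the backend can't be modified, the changelog notes the same headers can instead be declared under the headers key in vercel.json.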