Industry News

You can now trigger GitHub Actions workflows in response to Vercel deployment events with enriched data using repository_dispatch events. These events are sent from Vercel to GitHub, enabling more flexible, cost-efficient CI workflows and easier end-to-end testing for Vercel deployments. Previously, we recommended using deployment_status events, but those payloads were limited and required extra parsing or investigation to understand what changed. With repository_dispatch, Vercel sends custom JSON payloads with full deployment context—allowing you to reduce GitHub Actions overhead and streamline your CI pipelines. We recommend migrating to repository_dispatch for a better experience. deployment_status events will continue to work...
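A workflow that reacts to these events might look like the following sketch. The event type and payload field shown are assumptions for illustration, not Vercel's documented schema:

```yaml
# .github/workflows/vercel-e2e.yml
# Illustrative sketch; the event type and client_payload field below are
# assumptions, not Vercel's documented schema.
name: E2E on Vercel deployment
on:
  repository_dispatch:
    types: [vercel.deployment.success]   # hypothetical event type
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests against the deployment
        run: npx playwright test
        env:
          BASE_URL: ${{ github.event.client_payload.url }}  # assumed payload field
```

Because repository_dispatch carries a custom payload, the deployment URL can be read directly from the event instead of being reconstructed from a deployment_status payload.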
Meta’s latest and most powerful Llama 4 models are now available through the Vercel Marketplace via Groq. To get started for free, install the Groq integration in the Vercel dashboard or add Groq to your existing projects with the Vercel CLI. You can then use the AI SDK Groq provider with Llama 4. For a full demo, check out the official Groq chatbot template (which now uses Llama 4) or compare Llama 4 against other models side-by-side on our AI SDK Playground. To learn more, visit our AI documentation.
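Using the AI SDK Groq provider might look like the following sketch. The model identifier is illustrative; check the current Llama 4 id in the Groq catalog:

```typescript
// Prerequisite (one-time): install the Groq integration, e.g. `vercel install groq`
import { generateText } from 'ai';
import { groq } from '@ai-sdk/groq';

const { text } = await generateText({
  // Model id is illustrative; verify the current Llama 4 id in the Groq catalog
  model: groq('meta-llama/llama-4-scout-17b-16e-instruct'),
  prompt: 'Summarize the benefits of edge rendering in one sentence.',
});

console.log(text);
```

The integration configures the required API key as an environment variable automatically, so no explicit credential handling is needed in the snippet.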
Observability Plus customers can now create and share custom queries directly from the Observability dashboard—making it easier to investigate specific metrics, routes, and application behavior without writing code. The new query interface lets you:

- Filter by route to focus on specific pages and metrics
- Use advanced filtering, with auto-complete—no query language needed
- Analyze charts in the context of routes and projects
- Share queries instantly via URL or Copy button

This new querying experience builds on the Monitoring dashboard, helping you stay in context as you drill deeper into your data. To try it out, open your Observability dashboard and select Explore query arrows on any chart or the query builder from the...
PAIGE, a leading denim and apparel retailer, faced significant technical complexity due to their existing ecommerce architecture. Seeking a faster and more reliable online experience, they reimagined their ecommerce strategy by adopting a simpler headless tech stack—one powered by Shopify, Next.js, and Vercel—that ultimately boosted their Black Friday revenue by 22% and increased conversion rates by 76%.
Users can now secure their accounts using Multi-Factor Authentication (MFA) with Time-based One-Time Passwords (TOTP), commonly provided by authenticator apps like Google Authenticator or Authy. Your current Passkeys (WebAuthn keys) can also be used as second factors. MFA adds an extra security layer to protect your account even if the initial login method is compromised. To enable MFA:

1. Navigate to Authentication in Account Settings and enable MFA
2. Log in using your existing method (email OTP or Git provider) as your first factor
3. Complete authentication with a TOTP authenticator as your second factor

Important information: Passkey logins (WebAuthn) are inherently two-factor and won't prompt for additional verification...
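The codes an authenticator app produces are defined by RFC 6238 (TOTP) on top of RFC 4226 (HOTP). A minimal Node.js sketch of the algorithm itself, not Vercel's implementation, using SHA-1, 6 digits, and 30-second steps:

```typescript
import { createHmac } from 'node:crypto';

// RFC 4226 HOTP: HMAC the counter, then dynamically truncate to n digits
function hotp(secret: Buffer, counter: number, digits = 6): string {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));
  const mac = createHmac('sha1', secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f; // low nibble picks the slice
  const code =
    ((mac[offset] & 0x7f) << 24) |
    (mac[offset + 1] << 16) |
    (mac[offset + 2] << 8) |
    mac[offset + 3];
  return (code % 10 ** digits).toString().padStart(digits, '0');
}

// RFC 6238 TOTP: the counter is the current Unix time divided into steps
function totp(secret: Buffer, unixSeconds: number, step = 30): string {
  return hotp(secret, Math.floor(unixSeconds / step));
}

// RFC test vector: ASCII secret "12345678901234567890" at t = 59s
console.log(totp(Buffer.from('12345678901234567890'), 59)); // → 287082
```

The server and the authenticator app share only the secret; both derive the same short-lived code independently, which is why a compromised first factor alone is not enough to log in.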
Teams using Vercel Secure Compute can now associate each project environment—Production, Preview, and custom—with a distinct Secure Compute network, directly from the project settings. This simplifies environment-specific network isolation within a single project. To connect your project's environments to Secure Compute:

1. Navigate to your project's Secure Compute settings
2. For every environment you want to connect to Secure Compute:
   - Select an active network
   - Optionally, select a passive network to enable failover
   - Optionally, enable builds to include the project's build container in the network
3. Click Save to persist your changes

Learn more about Secure Compute.
In the process of remediating CVE-2025-29927, we looked at other possible exploits of Middleware. We independently verified this low-severity vulnerability in parallel with two reports from independent researchers.

Summary

To mitigate CVE-2025-29927, Next.js validated the x-middleware-subrequest-id header, which persisted across multiple incoming requests. However, this subrequest ID is sent with all outgoing requests, even if the destination is not the same host as the Next.js application. Initiating a fetch request to a third party within Middleware will send the x-middleware-subrequest-id to that third party.

Impact

While exploitation of this vulnerability is unlikely, since an attacker would require control of the third party, we want to be...
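The leak pattern can be illustrated with a small sketch. The helper below is hypothetical and shown only to make the behavior concrete; the actual remediation is upgrading to a patched Next.js version:

```typescript
// Hypothetical helper: copies request headers and drops the Next.js-internal
// subrequest ID before an outbound fetch to a third-party host. Illustrative
// only; the real fix is upgrading Next.js.
function stripInternalHeaders(incoming: Headers): Headers {
  const outbound = new Headers(incoming);
  outbound.delete('x-middleware-subrequest-id');
  return outbound;
}

// Inside Middleware, an outbound call would then reuse the sanitized headers:
// await fetch('https://third-party.example.com/hook', {
//   headers: stripInternalHeaders(request.headers),
// });
```

Without such sanitization, forwarding the incoming headers wholesale is what caused the internal ID to reach third-party hosts.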
Composable commerce projects frequently become overly complex, leading to missed objectives and unnecessary costs. At Vercel, we take a no-nonsense approach to composable commerce that's solely focused on business outcomes. Architecture should serve the business, not the other way around. Ivory tower architectures disconnected from clear business goals inevitably lead to projects plagued by runaway costs. Here are five truths we stand by when it comes to composable commerce:
Verified webhook providers—including Stripe and PayPal—are now automatically allowed in Attack Challenge Mode, ensuring uninterrupted payment processing. Well-behaved bots from major search engines, such as Googlebot, and analytics platforms are also supported. Vercel Cron Jobs are now exempt from challenges when running in the same account. Like other trusted internal traffic, they bypass Attack Challenge Mode automatically. To block specific known bots, create a custom rule that matches their User Agent. Known bots are validated to be authentic and cannot be spoofed to bypass Attack Challenge Mode. Learn more about Attack Challenge Mode and how Vercel maintains its directory of legitimate bots.
Vercel now caches dependencies for projects using Yarn 2 and newer, reducing install times and improving build performance. Previously, caching was only supported for npm, pnpm, Bun, and Yarn 1. To disable caching, set the environment variable VERCEL_FORCE_NO_BUILD_CACHE with a value of 1 in your project settings. If you're using Yarn 4, enable Corepack, as recommended by Yarn. Visit the Build Cache documentation to learn more.
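Setting the variable from the CLI might look like the following sketch; it can equally be set in the project's Environment Variables settings in the dashboard:

```shell
# Disable the build cache for the linked project's production environment
# (enter "1" when prompted for the value)
vercel env add VERCEL_FORCE_NO_BUILD_CACHE production

# Yarn 4 projects: enable Corepack locally so builds use the Yarn version
# declared in package.json, as recommended by Yarn
corepack enable
```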
The Flags SDK 3.2 release adds support for precomputed feature flags in SvelteKit, making it easier to experiment on marketing pages while keeping them fast and avoiding layout shift. Precomputed flags evaluate in Edge Middleware to decide which variant of a page to show. This keeps pages static, resulting in low global latency as static variants can be served through the Edge Network. Precompute handles the combinatory explosion when using multiple feature flags statically: generate different variants of a page at build time, rely on Incremental Static Regeneration to only build specific combinations on demand, and more. We also improved the Flags SDK documentation by splitting it across different frameworks and explicitly...
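The combinatory explosion can be pictured with a generic sketch (not the Flags SDK's actual API): n boolean flags produce 2^n page variants, and precomputation collapses one combination into a single key that selects a static variant.

```typescript
// Generic sketch, not the Flags SDK's actual API: encode a combination of
// boolean flags into a compact key used to pick a static page variant.
function variantKey(flags: boolean[]): string {
  // Each flag contributes one bit, so n flags yield up to 2**n variants
  let key = 0;
  for (let i = 0; i < flags.length; i++) {
    if (flags[i]) key |= 1 << i;
  }
  return key.toString(36);
}

// Three flags means up to 8 variants; bits 0 and 2 set gives key 5
console.log(variantKey([true, false, true])); // → 5
```

Middleware would compute such a key per request and rewrite to the matching static variant, so only the combinations users actually hit need to be generated, for example on demand via ISR.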
The AI SDK is an open-source toolkit for building AI applications with JavaScript and TypeScript. Its unified provider API allows you to use any language model and enables powerful UI integrations into leading web frameworks such as Next.js and Svelte.
The Flags SDK adapter for OpenFeature allows using any Node.js OpenFeature provider with the Flags SDK. Pick from a wide range of flag providers, while benefiting from the Flags SDK's tight integration into Next.js and SvelteKit. OpenFeature is an open specification that provides a vendor-agnostic, community-driven API for feature flagging that works with your favorite feature flag management tool or in-house solution. OpenFeature exposes various providers through a unified API. The Flags SDK sits between your application and the source of your flags, helping you follow best practices and keep your website fast. Use the Flags SDK OpenFeature adapter in your application to load feature flags from all compatible Node.js OpenFeature...
Vercel provides the tools and infrastructure to build AI-native web applications. We're partnering with xAI to bring their powerful Grok models directly to Vercel projects through the Vercel Marketplace—and soon v0—with no additional signup required. To help you get started, xAI is introducing a new free tier through Vercel to enable quick prototyping and experimentation. These Grok models now power our official Next.js AI chatbot template with the AI SDK. This is part of our ongoing effort to make using AI frictionless on Vercel.
Vercel now maps dependencies in your package manager’s lockfile to applications in your monorepo. Deployments only occur for applications using updated dependencies. This feature is based on Turborepo's lockfile analysis, supporting the package managers listed as stable in Turborepo's Support Policy. Previously, any change to the lockfile would redeploy all applications in the monorepo since it was treated as a shared input. Now, Vercel inspects the lockfile’s contents to determine which applications have dependency changes, further reducing potential queue times. Learn more about skipping unaffected projects in monorepos.
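A simplified model of this analysis, illustrative rather than Turborepo's actual implementation (which also follows transitive dependencies), is: diff the resolved versions in the lockfile, then flag only the applications that depend on a changed package.

```typescript
type Lockfile = Record<string, string>;     // package name → resolved version
type Workspaces = Record<string, string[]>; // app name → its dependency names

// Simplified sketch of lockfile analysis: an app needs redeploying only if
// one of its dependencies resolved to a different version.
function affectedApps(before: Lockfile, after: Lockfile, apps: Workspaces): string[] {
  const changed = new Set<string>();
  for (const pkg of new Set([...Object.keys(before), ...Object.keys(after)])) {
    if (before[pkg] !== after[pkg]) changed.add(pkg);
  }
  return Object.keys(apps).filter((app) =>
    apps[app].some((dep) => changed.has(dep))
  );
}

// Bumping react redeploys only the app that uses it, not the whole monorepo
console.log(
  affectedApps(
    { react: '18.2.0', lodash: '4.17.21' },
    { react: '18.3.1', lodash: '4.17.21' },
    { web: ['react'], docs: ['lodash'] }
  )
);
```

Treating the lockfile as one shared input would mark both apps as affected; inspecting its contents narrows the set to the apps whose resolved dependencies actually changed.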
xAI's Grok models are now available in the Vercel Marketplace, making it easy to integrate conversational AI into your Vercel projects.

- Get started with xAI's free plan—no additional signup through the Marketplace
- Access Grok's large language models (LLMs) directly from your Vercel projects
- Simplify authentication and API key management through automatically configured environment variables
- Pay only for what you use with integrated billing through Vercel

To get started, use the AI SDK xAI provider in your project, then install the xAI Marketplace integration with the Vercel CLI (or from the dashboard). Once you've accepted the terms, you'll be able to use Grok models from within your project, with no additional...
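The CLI install step might look like this sketch; the integration slug is an assumption and may differ in the Marketplace listing:

```shell
# Install the xAI Marketplace integration for the linked project
# (integration slug is illustrative; it is also available from the dashboard)
vercel install xai
```

After installation, the provider's API key is exposed to the project as an automatically configured environment variable, so the AI SDK xAI provider picks it up without manual setup.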
We have deployed a proactive security update to the Vercel Firewall, protecting against a recently disclosed vulnerability in the xml-crypto package, dubbed SAMLStorm (CVE-2025-29774 and CVE-2025-29775). This vulnerability, which affects various SAML implementations, could allow attackers to bypass authentication mechanisms.

What this means for Vercel customers:

- Automatic protection with the Vercel Firewall: the Vercel Firewall automatically mitigates this risk for you, but updating xml-crypto is still recommended
- Update xml-crypto: if you're using xml-crypto 6.0.0 or earlier, or a package that depends on xml-crypto, update to one of the patched versions (6.0.1, 3.2.1, or 2.1.6)

We'll continue to monitor for new developments and...
The Vercel Marketplace now has an AI category for tools to integrate AI models and services directly into Vercel projects. Groq, fal, and DeepInfra are available as first-party integrations, allowing users to:

- Seamlessly connect and experiment with various AI models to power generative applications, embeddings, and more
- Deploy and run inference with high-performance AI models, optimized for speed and efficiency
- Leverage single sign-on and integrated billing through Vercel, including new prepaid options for better cost control

With prepaid plan options, users can now manage AI costs more predictably by purchasing credits upfront from a model provider. These credits can be used across any model offered by that provider...