Yesterday's Top Poster

Industry News

There is now a search feature in the top right corner of every page in the vercel.com dashboard. This search allows you to instantly find: Teams Projects Deployments (by branch) Pages Settings For more complex queries you can also ask the Navigation Assistant. This AI-powered feature can locate any page in the dashboard and apply filters based on your question. Learn more about Find in the documentation. Read more Continue reading...
There is now a search feature in the top right corner of every page in the vercel.com dashboard. This search allows you to instantly find: Teams Projects Deployments (by branch) Pages Settings For more complex queries you can also ask the Navigation Assistant. This AI-powered feature can locate any page in the dashboard and apply filters based on your question. Learn more about Find in the documentation. Read more Continue reading...
Vercel is evolving to meet the expanding potential of AI while staying grounded in the principles that brought us here. We're extending from frontend to full stack, deepening our enterprise capabilities, and powering the next generation of AI applications, including integrating AI into our own developer tools. Today, we’re welcoming Keith Messick as our first Chief Marketing Officer to support this growth and (as always) amplify the voice of the developer. Read more Continue reading...
Turso now offers a native integration with Vercel, available as Database & Storage provider in the Marketplace. The Turso integration brings fast, distributed SQLite databases to your Vercel projects with: Seamless integration with Vercel, including one-click setup and unified billing Edge-hosted SQLite databases built for speed and global distribution A developer-friendly experience, configurable through Vercel CLI workflows Get started with Turso on the Vercel Marketplace. Read more Continue reading...
Teams can now require all members to enable two-factor authentication (2FA) for added security. Team owners can enable enforcement in the Security & Privacy section of team settings. Owner controls View and filter each member’s 2FA status in the team members settings Member restrictions Once enforcement is enabled, members without 2FA will be restricted from: Triggering builds from pull requests Accessing new preview deployments Viewing the team dashboard Making API requests Using access tokens Enforcement lock-in & visibility Members of a team with 2FA enforcement cannot disable 2FA unless they leave the team In each user’s account settings, teams that require 2FA are now listed for clarity Enable...
Users can now secure their accounts using Two-Factor Authentication (2FA) with Time-based One-Time Passwords (TOTP), commonly provided by authenticator apps like Google Authenticator or Authy. Your current Passkeys (WebAuthn keys) can also be used as second factors. 2FA adds an extra security layer to protect your account even if the initial login method is compromised. To Enable 2FA: Navigate to Authentication in Account Settings and enable 2FA Log in using your existing method (email OTP or Git provider) as your first factor Complete authentication with a TOTP authenticator as your second factor Important information: Passkey logins (WebAuthn) are inherently two-factor and won't prompt for additional verification...
Observability Plus users can now create a collection of queries in notebooks to collaboratively explore their observability data. Queries in Vercel Observability allow you to explore log data and visualize traffic, performance, and other key metrics, and can now be saved to notebooks. By default, notebooks are only visible to the user who created the notebook, but you have the option to share a notebook with all members of your team. This is available to Observability Plus subscribers at no additional cost. Try it out or learn more about Observability and Observability Plus. Read more Continue reading...
Tray.ai is a composable AI integration and automation platform that enterprises use to build smart, secure AI agents at scale. To modernize their marketing site, they partnered with Roboto Studio to migrate off their legacy solution and outdated version of Next.js. The goal: simplify the architecture, consolidate siloed repos, and bring content and form management into one unified system. After moving to Vercel, builds went from a full day to just two minutes. Read more Continue reading...
Dubai (dxb1) is now part of Vercel’s delivery network, extending our global CDN's caching and compute to reduce latency for users in the Middle East, Africa, and Central Asia without requiring any changes. The new Dubai serves as the first stop for end-users based on proximity and network conditions. It's generally available and serving billions of requests. Teams can configure Dubai as an execution region for Vercel Functions, which supports Fluid compute to increase resource and cost efficiency, minimize cold starts, and scale dynamically with demand. Learn more about Vercel Regions and Dubai's regional pricing. Read more Continue reading...
The Model Context Protocol (MCP) standardizes how to build integrations for AI models. We built the MCP adapter to help developers create their own MCP servers using popular frameworks such as Next.js, Nuxt, and SvelteKit. Production apps like Zapier, Composio, Vapi, and Solana use the MCP adapter to deploy their own MCP servers on Vercel, and they've seen substantial growth in the past month. MCP has been adopted by popular clients like Cursor, Claude, and Windsurf. These now support connecting to MCP servers and calling tools. Companies create their own MCP servers to make their tools available in the ecosystem. The growing adoption of MCP shows its importance, but scaling MCP servers reveals limitations in the original design...
Fluid compute now gracefully handles uncaught exceptions and unhandled rejections in Node.js by logging the error, allowing inflight requests to complete, and then exiting the process. This prevents concurrent requests running on same fluid instance from being inadvertently terminated in case of unhandled errors, providing the isolation of traditional serverless invocations. Enable Fluid for your existing projects, and learn more in our blog and documentation. Read more Continue reading...
We've improved the team overview in the Vercel dashboard: Activity is now sorted by your activity only Projects can be filtered by git repository Usage for the team is now shown as a card on the overview directly To learn more about the Vercel dashboard, visit the documentation. Read more Continue reading...
Our two conferences (Vercel Ship and Next.js Conf) are our chance to show what we've been building, how we're thinking, and cast a vision of where we're going next. It's also a chance to push ourselves to create an experience that builds excitement and reflects the quality we strive for in our products. For Vercel Ship 2025, we wanted that experience to feel fluid and fast. This is a look at how we made the conference platform and visuals, from ferrofluid-inspired 3D visuals and generative AI workflows, to modular component systems and more. Read more Continue reading...
Search is changing. Backlinks and keywords aren’t enough anymore. AI-first interfaces like ChatGPT and Google’s AI Overviews now answer questions before users ever click a link (if at all). Large language models (LLMs) have become a new layer in the discovery process, reshaping how, where, and when content is seen. This shift is changes how visibility works. It’s still early, and nobody has all the answers. But one pattern we're noticing is that LLMs tend to favor content that explains things clearly, deeply, and with structure. "LLM SEO" isn’t a replacement for traditional search engine optimization (SEO). It’s an adaptation. For marketers, content strategists, and product teams, this shift brings both risk and opportunity. How do...
You can now filter runtime logs to view fatal function errors, such as Node.js crashes, using the Fatal option in the levels filter. When a log entry corresponds to a fatal error, the right-hand panel will display Invocation Failed in the invocation details. Try it out or learn more about runtime logs Read more Continue reading...
The AI Gateway, currently in alpha for all users, lets you switch between ~100 AI models without needing to manage API keys, rate limits, or provider accounts. The Observability tab in the Vercel dashboard now includes a dedicated AI section to surface metrics related to the AI Gateway. This update introduces visibility into: Requests by model Time to first token (TTFT) Request duration Input/output token count Cost per request (free while in alpha) You can view these metrics across all projects or drill into per-project and per-model usage to understand which models are performing well, how they compare on latency, and what each request would cost in production. Learn more about Observability. Read more Continue...
  • J
  • By Julia Shi, Walter Korman, Nico Albanese, Ethan
An AI agent is a language model with a system prompt and a set of tools. Tools extend the model's capabilities by adding access to APIs, file systems, and external services. But they also create new paths for things to go wrong. The most critical security risk is prompt injection. Similar to SQL injection, it allows attackers to slip commands into what looks like normal input. The difference is that with LLMs, there is no standard way to isolate or escape input. Anything the model sees, including user input, search results, or retrieved documents, can override the system prompt or event trigger tool calls. If you are building an agent, you must design for worst case scenarios. The model will see everything an attacker can control. And...
Back
Top