By AI Tool Briefing Team

Cursor Composer 2 & Kimi K2.5: What the Disclosure Means


I recommended Cursor to every developer I know. Multiple times. In writing. On this site. I called it the best AI-powered code editor on the market and meant it.

So when developers discovered last week that Cursor had quietly built Composer 2 on top of Moonshot AI’s Kimi K2.5 — a Chinese-developed model — without telling anyone, I felt something between embarrassment and anger. Not because a Chinese model is inherently dangerous. But because I’d been telling people they knew what was running their code. They didn’t.

The Short Version

  • What happened: Cursor used Kimi K2.5 as the base model for Composer 2 without disclosing it
  • When it broke: March 22-25, 2026, after devs found Kimi model IDs in API responses
  • Cursor’s response: Co-founder Aman Sanger: “It was a miss to not mention the Kimi base in our blog from the start”
  • The licensing issue: Kimi K2.5’s license requires products over $20M/month revenue to display “Kimi K2.5” in UI — Cursor exceeds $2B ARR
  • Data concern: Unknown what training data flows exist between Cursor’s implementation and Moonshot AI
  • Who should worry: Enterprise teams with data sovereignty requirements, regulated industries, government contractors

Bottom line: This isn’t about China. It’s about a $2B+ company hiding what’s under the hood of its flagship feature from users who had every reason to care.

What Actually Happened with Cursor Composer 2 and Kimi K2.5

Here’s the timeline, pieced together from reporting by TechCrunch, VentureBeat, and Security Boulevard between March 22 and 25.

  1. Cursor ships Composer 2 with faster completions and better multi-file edits. The blog post credits proprietary training. No mention of Kimi K2.5.
  2. Developers notice Kimi model identifiers appearing in API responses while debugging their Composer 2 workflows.
  3. The community starts asking questions. Reddit threads, Twitter posts, GitHub discussions.
  4. Cursor co-founder Aman Sanger responds publicly. His exact words: “It was a miss to not mention the Kimi base in our blog from the start.”
  5. Cursor clarifies that approximately 25% of Composer 2’s compute comes from the Kimi K2.5 base, with the remaining 75% being proprietary reinforcement learning on top.
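The discovery in step 2 is worth dwelling on, because it’s reproducible in spirit: leaked model identifiers are just strings sitting in a JSON payload. The sketch below shows the general technique — recursively scanning a response for known base-model names. The payload shape and the model ID string are invented for illustration; this is not Cursor’s actual API schema.

```python
# Hypothetical sketch of how a developer might spot an undisclosed base
# model in debug output. The payload and field names are invented.
import json

KNOWN_BASES = {"kimi", "k2.5", "moonshot"}

def find_model_hints(payload) -> list[str]:
    """Recursively collect string values that mention a known base model."""
    hits = []
    def walk(node):
        if isinstance(node, dict):
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)
        elif isinstance(node, str):
            lowered = node.lower()
            if any(base in lowered for base in KNOWN_BASES):
                hits.append(node)
    walk(payload)
    return hits

# Example debug payload (invented for illustration).
response = json.loads('{"choices": [{"text": "..."}], "model": "kimi-k2.5-base-ft"}')
print(find_model_hints(response))  # → ['kimi-k2.5-base-ft']
```

Nothing clever here — which is the point. Once an identifier leaks into a response body, any developer with a debugger finds it.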

A “miss.” That’s the word choice. Not a deliberate omission. Not a transparency failure. A miss. Like forgetting to CC someone on an email.

Why “Only 25%” Doesn’t Solve the Problem

Cursor’s defense is that Kimi K2.5 is only the base model and that most of Composer 2’s actual capability comes from their own proprietary RL training layered on top. Only about 25% of the compute, they say, comes from the Kimi foundation.

Here’s the thing: that 25% is the foundation. Every inference runs through it. Every line of your code that Composer 2 processes touches that base layer. It’s not an optional module you can disable. It’s the floor the whole building sits on.
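The structural point can be made concrete with a toy pipeline. This is not Cursor’s architecture — the layer names and behavior are invented — but it shows why a base layer’s share of training compute says nothing about its share of runtime involvement:

```python
# Toy illustration (not Cursor's actual architecture): a "25%" base
# layer still sits on the critical path of every inference.
def base_layer(tokens):
    # Stands in for the Kimi K2.5 foundation.
    return [t.lower() for t in tokens]

def proprietary_rl_layer(tokens):
    # Stands in for Cursor's RL fine-tune on top.
    return [t + "!" for t in tokens]

def composer_infer(tokens):
    # The fine-tuned layer consumes the base layer's output:
    # there is no code path that skips the foundation.
    return proprietary_rl_layer(base_layer(tokens))

print(composer_infer(["Hello", "World"]))  # → ['hello!', 'world!']
```

However the training compute was split, `composer_infer` never runs without `base_layer`. That’s what “foundation” means.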

The analogy would be a construction company saying “only 25% of this building is the foundation — the rest is our proprietary design.” Sure. But if I care about what the foundation is made of, the superstructure doesn’t change that.

For developers using Cursor for personal projects? Probably fine. For enterprise teams with compliance requirements, data residency obligations, or government contracts? That 25% is 100% of the problem.

The Licensing Problem Nobody’s Talking About

This is where it gets genuinely uncomfortable for Cursor.

Kimi K2.5 ships under an open-weight license with a specific commercial clause: any product generating more than $20 million per month in revenue that uses K2.5 must prominently display “Kimi K2.5” in its user interface.

Cursor reportedly exceeds $2 billion in annual recurring revenue. That’s roughly $167 million per month.
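The arithmetic behind that comparison is trivial, which is what makes the gap so stark:

```python
# Back-of-envelope check of the licensing threshold described above.
ARR = 2_000_000_000        # reported annual recurring revenue, USD
THRESHOLD = 20_000_000     # Kimi K2.5 attribution threshold, USD per month

monthly_revenue = ARR / 12
print(round(monthly_revenue / 1e6, 1))  # → 166.7 (millions of USD/month)
print(monthly_revenue > THRESHOLD)      # → True
```

Roughly eight times over the threshold, every month, at the reported revenue figure.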

They displayed nothing. No attribution. No mention. The only reason anyone knows about the Kimi base is because developers found model IDs leaking through API responses.

I’m not a licensing attorney. But the gap between “must prominently display” and “didn’t mention it at all” is wide enough to drive a lawsuit through. Whether Moonshot AI pursues enforcement is a separate question — but the obligation appears clear, and Cursor appears to have ignored it.

What This Means for Enterprise AI Procurement

If you’re evaluating AI coding tools for a team of any size, this incident should change how you ask questions during procurement. Not because Cursor is uniquely bad, but because Cursor is the first major example of what I’ve been worried about for months: AI supply chains are opaque, and the companies selling to you don’t always know (or disclose) what’s inside their own products.

These are the questions I’d put to any AI vendor right now:

5 Questions Your AI Vendor Should Answer

  1. What base model(s) does your product use, and where were they developed? Not just “we use proprietary AI.” Which models, specifically.
  2. Does any user data — code, prompts, outputs — touch infrastructure outside your disclosed hosting regions? Including during training, fine-tuning, or inference.
  3. What are the licensing terms of your base model(s)? Open-weight models often have commercial use restrictions that downstream vendors inherit.
  4. How do you handle model changes? If you swap base models, do customers get notified? Can they opt out?
  5. Can you provide a model bill of materials (MBOM)? An inventory of every model, dataset, and third-party component in your AI pipeline.

If a vendor balks at any of these, that tells you something. If they can’t answer #1, that tells you more.

For a broader look at navigating these questions in regulated environments, see our AI safety for business guide and our breakdown of enterprise AI deployment considerations.

Is Cursor Still Safe to Use?

Depends on your definition of “safe.”

For individual developers and small teams: Almost certainly yes. Cursor remains an excellent code editor. The Kimi K2.5 base doesn’t mean your code is being sent to China. Cursor processes inference on their own infrastructure. The model architecture running locally or on Cursor’s servers doesn’t inherently create a data exfiltration risk.

For enterprise teams with compliance requirements: You now have an unknown in your supply chain that wasn’t there yesterday (or rather, was always there — you just didn’t know about it). Whether that unknown is acceptable depends on your specific regulatory environment. SOC 2 and ISO 27001 compliance programs typically require you to understand and document your data processing chain. A surprise base model from a Chinese AI lab creates a documentation gap at minimum.

For government contractors and defense-adjacent work: I’d pause. Not because of proven risk, but because the disclosure failure itself is a red flag for the kind of transparency these environments demand. If Cursor didn’t disclose this voluntarily, what else might you not know?

If you’re shopping alternatives, our Cursor vs. Claude Code vs. Copilot comparison and AI code assistants roundup are both current.

The Bigger Picture: AI Supply Chain Transparency Is Broken

This isn’t just a Cursor story. It’s the first high-profile example of a pattern the industry will keep repeating.

The AI industry right now operates like the food industry before ingredient labels. You pick a product off the shelf (Cursor, Copilot, whatever) and trust that what’s inside matches what’s on the box. But there’s no regulation requiring AI companies to disclose their model supply chain. No equivalent of a nutrition label. No mandatory ingredient list.

Here’s what’s actually happening across the industry:

  • Model stacking is common. Many AI products use one model as a base and train proprietary layers on top. The base model is rarely disclosed.
  • Model swaps happen silently. Companies routinely switch underlying models for cost or performance reasons without notifying users.
  • Open-weight doesn’t mean open-source. Models like Kimi K2.5 are downloadable and inspectable, but their commercial licenses carry conditions that downstream users may not know about.
  • Data provenance is murky. Even when the model architecture is known, the training data often isn’t. Chinese-developed models trained on Chinese internet data carry different privacy assumptions than Western models.

I wrote about the broader Chinese AI model surge a month ago. The quality is real. The competition is healthy. But the supply chain opacity is a problem that the industry hasn’t begun to solve.

What Cursor Should Do Now

I still use Cursor. I’m typing this in a different editor, but my day-to-day coding happens in Cursor, and I’m not switching tomorrow. The product is too good.

But here’s what I’d want to see:

Immediate: Publish a complete model bill of materials for every Cursor product. Not vague descriptions. Specific model names, versions, origins, and licensing terms.

Short-term: Implement a model change notification policy. If the base model changes, users should know before it happens, not after someone finds model IDs in debug output.

Long-term: Push for industry standards around AI supply chain disclosure. Cursor is big enough to lead here. At $2B+ ARR, they have the credibility and the obligation.

Aman Sanger’s “miss” framing isn’t going to age well. The window for getting ahead of this story is closing. The window for setting an industry standard is still open.

Frequently Asked Questions

Is my code being sent to China through Cursor?

No evidence suggests that. Cursor runs inference on its own infrastructure. The Kimi K2.5 model weights are used locally on Cursor’s servers — your code isn’t routed to Moonshot AI or Chinese servers during normal operation. The concern is about transparency and supply chain provenance, not active data exfiltration.

What is Kimi K2.5?

Kimi K2.5 is an open-weight large language model developed by Moonshot AI (月之暗面), a Beijing-based AI company. It’s designed for coding and reasoning tasks. “Open-weight” means the model parameters are publicly available for download, but it ships with a commercial license that includes revenue-based attribution requirements.

Does this violate any regulations?

That depends on your jurisdiction and industry. For most commercial software companies, no specific regulation was violated. For companies operating under ITAR, FedRAMP, or certain EU data sovereignty frameworks, an undisclosed Chinese model component in their development toolchain could create compliance complications. Consult your compliance team.

Should I switch away from Cursor?

For most users, no. Cursor is still a strong product, and this incident is about disclosure practices, not product quality. For teams with strict supply chain requirements, it’s worth having a conversation with Cursor’s sales team about their model architecture before continuing. See our AI safety and privacy guide for a framework to evaluate these decisions.

What does “25% of compute” actually mean?

Cursor claims that Kimi K2.5 provides the base model weights that Composer 2 runs on, but that approximately 75% of the model’s capability comes from Cursor’s own reinforcement learning training applied on top. In practical terms, every inference still passes through the Kimi base layer. The 25% figure describes the proportion of training compute, not the proportion of runtime involvement.


Last updated: March 26, 2026. Reporting based on coverage from TechCrunch, VentureBeat, and Security Boulevard, March 22-25, 2026, and Aman Sanger’s public statements.