Introduction
It would not be an overstatement to say that LongCat AI has generated considerable buzz in the domain of open-source artificial intelligence. The project was launched by Meituan as a platform providing efficient, high-quality AI for text, image, and video generation under one roof.
Reports have indicated that the technology can provide effective solutions when it comes to reasoning, instruction following, and long-context understanding. Considering that large generative models are being developed outside the walls of mainstream tech companies, LongCat AI can prove to be a notable contender in the field of AI technology going forward.
In this article, you’ll learn what LongCat AI is, how it works, where it’s used, and what challenges it still faces.
Editor’s Choice
- Longcat AI represents an extremely large, open-source artificial intelligence system created by Meituan in 2025.
- Longcat AI was launched as open source with at least 560B parameters, which could lower the cost barrier to adoption.
- On average, users spend 3 minutes per visit on Longcat AI with 3.7 pages viewed per visit.
- Longcat AI offers 500,000 free tokens per day (expandable to 5 million).
- Longcat-Flash-Lite is a small 68.5 billion parameter model that activates 2.9 billion to 4.5 billion parameters per request, enabling a context of up to 256,000 tokens.
General LongCat AI Statistics
- Longcat AI represents an extremely large, open-source artificial intelligence system created by Meituan in 2025.
- It employs a Mixture-of-Experts (MoE) structure with 560 billion total parameters – among the largest counts for any open-source AI system.
- In spite of its gigantic size, Longcat activates just a small fraction of parameters in each task performed.
- Usually, no more than 27 billion parameters are active when performing a single task.
- This AI system was designed specifically for agentic artificial intelligence, namely, reasoning, coding, and multi-step problem-solving.
- Its performance level is similar to that of leading AI models, though the number of actively used computational resources is considerably smaller.
- In addition, the training process involves reinforcement learning and domain-parallel training techniques, contributing to the development of the reasoning function.
- LongCat-Flash Base uses significantly fewer parameters than other large models but performs at the highest level.
- With 560B total parameters and 27B activated per token, this AI competes with the best.
- Impressive results in the toughest benchmarks – MMLU-Pro, logical reasoning, mathematics, and coding tasks.
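The bullets above describe a Mixture-of-Experts design in which only a small fraction of the total parameters run per token. Below is a minimal sketch of top-k expert routing to illustrate the idea; the expert count and random gating scores are toy values, not LongCat’s real configuration.

```python
# Toy sketch of Mixture-of-Experts (MoE) top-k routing: the model holds
# many experts, but each token is sent to only a few of them, so most
# parameters stay idle on any given forward pass.
import random

def route_token(num_experts: int, top_k: int, seed: int = 0) -> list[int]:
    """Pick the top_k expert indices for one token from fake gating scores."""
    rng = random.Random(seed)
    scores = [rng.random() for _ in range(num_experts)]
    # Rank experts by gating score and keep only the best top_k.
    ranked = sorted(range(num_experts), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]

# With 64 toy experts and top-2 routing, only 2/64 of the expert
# parameters are activated for this token.
active = route_token(num_experts=64, top_k=2)
print(len(active))
```

This is the mechanism that lets a 560B-parameter model run with roughly 27B active parameters per token.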
LongCat AI Adoption Statistics
- Adoption was driven by rising automation needs, as AI-based solutions such as voice assistants were already deployed at massive scale globally.
- Roughly 55% of enterprises had adopted AI, with about 33% implementing generative AI, providing an ecosystem wherein Longcat-like models emerged.
- AI adoption by enterprises amounted to 72%, raising the demand for open-source AI models, including Longcat AI.
- Global AI adoption jumped up to 78%, suggesting that there is significant readiness among firms to employ AI models like Longcat AI.
- Approximately 71% of enterprises leverage generative AI at least in one business function, thus supporting the adoption of Longcat AI.
- Approximately 16.3% of the world’s population employs AI technology, demonstrating quick potential for the consumer-level adoption of Longcat.
- By the end of 2025, 72% of tech companies had adopted AI, compared with only 19% of governmental organizations, indicating that Longcat AI adoption will be industry-specific.
- Larger companies show adoption rates of up to 78%, whereas small businesses lag with adoption rates between 11% and 35%. This gap can affect how Longcat AI adoption diffuses.
- Longcat AI was launched as open source with at least 560B parameters, which could lower the cost barrier to adoption.
| Timeframe / Category | Key Data Point |
|---|---|
| Early Adoption (2020–2022) | 16.3% of the global population actively uses AI tools |
| 2023 Inflection Point | 55% of organizations adopted AI; 33% used generative AI |
| 2024 Expansion Phase | 72% of enterprises adopted AI |
| 2025 Mainstream Adoption | 78% global AI adoption across industries |
| Generative AI Boom (2025) | 71% of organizations use generative AI in at least one function |
| Population-Level Usage (2025) | 16.3% of global population actively uses AI tools |
| Industry Adoption Disparity | 72% tech vs 19% government adoption |
| Enterprise vs SMB Gap | 78% large enterprises vs 11–35% SMBs |
| Longcat AI Launch Impact (2025) | 560B-parameter open-source model introduced |
LongCat AI User Statistics
- Monthly users of Longcat AI grew from 18K in August 2025 to over 561K by October 2025 – roughly a 30-fold increase in two months.
- Nearly 69.4% of users access Longcat AI through direct navigation, indicating high engagement and repeated visits by users who know about the site.
- China has the highest percentage of users (58.9%), followed by India (7.7%) and the United States (5.8%), reflecting a global appeal.
| Country | Share of Users |
|---|---|
| China | 58.9% |
| India | 7.7% |
| US | 5.8% |
- On average, users spend 3 minutes per visit on Longcat AI with 3.7 pages viewed per visit.
- Being an open-source platform deployed on GitHub and Hugging Face, Longcat AI has been adopted predominantly by developers and companies that create AI applications.
LongCat AI Usage Statistics
- Longcat AI offers 500,000 free tokens per day (expandable to 5 million), which indicates heavy daily usage among both developers and businesses.
- The model supports a 128K-token context per conversation, enabling longer interactions.
- It can generate up to 100+ tokens per second on H800 GPUs.
- Token usage is reduced by 64.5%, meaning more queries can be served for the same amount of computational effort.
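As a back-of-the-envelope check on that last figure: if token usage drops by 64.5%, each query consumes only 35.5% of the tokens it used to, so a fixed token budget serves roughly 2.8 times as many queries. The arithmetic:

```python
# A 64.5% reduction in token usage means each query now costs
# 35.5% of its former token count, so a fixed budget stretches
# to about 1 / 0.355 ≈ 2.82x as many queries.
reduction = 0.645
remaining = 1 - reduction      # fraction of tokens still used per query
multiplier = 1 / remaining     # queries servable per fixed token budget
print(round(multiplier, 2))    # → 2.82
```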
LongCat AI Features Statistics
- Longcat AI’s main model has a MoE architecture with 560B parameters, but only 27B are activated per inference.
- The model can handle a maximum of 128K tokens, making it suitable for long-term reasoning, document processing, and conversational dialogue.
- The “Re-thinking Mode” offers 8-path parallel reasoning and consumes 64.5% fewer tokens than the conventional approach for tool-based inference.
- Longcat has a unified model that can process text, images, audio, and video in one tokenized space, offering real-time multimodal interaction capabilities.
- The video model can generate a 5-minute 720p video at 30fps, and the image model achieves top benchmark scores using only 6B parameters.
LongCat AI Models Statistics
- Longcat-Flash-Chat is a Mixture-of-Experts (MoE) model with 560 billion parameters that activates 18.6 to 31.3 billion parameters per request.
- Longcat-Flash-Thinking has 560 billion parameters and is tailored for reasoning and tool-using tasks with enhanced precision in benchmarking.
- Longcat-Flash-Lite is a smaller 68.5-billion-parameter model that activates 2.9 to 4.5 billion parameters per request and supports a context of up to 256,000 tokens.
- Longcat-Image performs image editing tasks using only 6 billion parameters, yet outperforms larger models in benchmarking.
- Other Longcat models include Flash-Omni, which has 560 billion parameters and specializes in multimodal tasks, and Flash-Prover, which attains a 97.1% success rate on the MiniF2F-Test.
LongCat AI Performance Statistics
- The LongCat-Flash Base model demonstrates exceptional parameter efficiency: despite 560 billion total parameters, only 27 billion are active per token, yet it performs on par with or better than significantly larger models.
- In general-domain standards such as CMMLU, CEval, MMLU, and MMLU-Pro, LongCat-Flash Base either matches or surpasses state-of-the-art models, exhibiting a particularly strong advantage in more difficult tasks.
- For reasoning tasks, it achieves higher average scores compared to similar models on SuperGPQA, GPQA, CLUEWSC, and WinoGrande.
- In coding and mathematics benchmarks, including MBPP+, MATH, HumanEval+, GSM8K, CRUXEval, and MultiPL-E, LongCat-Flash Base consistently beats many larger models, with only minor gaps on select benchmarks.
| Benchmark | Score |
|---|---|
| MMLU | 89.7% |
| MMLU-Pro | 82.7% |
| HumanEval | 88.4% |
| GPQA | 73.2% |
| MATH-500 | 96.4% |
| Inference Speed | >100 tokens per second (TPS) |
LongCat Flash Statistics
| Specification | Value |
|---|---|
| Total Parameters | 560 billion |
| Activated Parameters | 18.6B–31.3B (average 27B) per token |
| Context Window | 128,000 tokens |
| Architecture | Shortcut-connected MoE (ScMoE) with 28 layers and 6144-dimensional hidden state |
| Training Data | 20+ trillion tokens |
| License | MIT License (Freely available) |
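From the specification table above, the activated share of parameters can be computed directly. This short sketch just does the arithmetic on the published figures (560B total; 18.6B–31.3B activated, averaging 27B):

```python
# Activated parameter share for LongCat-Flash, from the spec table:
# 560B total parameters, 18.6B-31.3B activated per token (avg 27B).
total_b = 560.0
low_b, high_b, avg_b = 18.6, 31.3, 27.0

print(round(100 * avg_b / total_b, 1))   # → 4.8  (% active on average)
print(round(100 * low_b / total_b, 1))   # → 3.3  (% active, lower bound)
print(round(100 * high_b / total_b, 1))  # → 5.6  (% active, upper bound)
```

In other words, fewer than 6% of the model’s parameters run for any given token, which is the source of the efficiency claims throughout this article.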
LongCat AI Recent Development Statistics
- The model Longcat-Flash-Chat was made available as a 560B parameter Mixture-of-Experts model, using 27B parameters per query for balanced scale and efficiency against leading LLMs.
- By early 2025, the Longcat AI technology had been adopted by enterprises, achieving efficiency gains of up to 20% during pilot implementations in use cases such as customer service.
- Emerging models such as LongCat-Video (13.6B parameters) and LongCat-Next are multimodal AI solutions, supporting not just text but images, audio, and videos as well.
Conclusion
Overall, the advent of Longcat AI represents a significant trend in the development of increasingly efficient and versatile AI technology systems. The open-source nature of the model, coupled with its impressive multimodal abilities, points to the future direction towards an integrated AI framework capable of handling a wide range of applications.
The constant improvements in reasoning and learning ability indicate how rapidly the landscape of this industry is moving to keep pace with proprietary technologies.
FAQ
What is LongCat-Video?
It is the first open-source video model capable of producing coherent videos lasting several minutes, free from the typical issues of color drift, motion blur, or looping hallucinations. This is not merely a marketing claim: LongCat-Video is a 13.6-billion-parameter model trained to understand continuity and the flow of elements, rather than solely their appearance.
How do you generate long videos with LongCat AI?
LongCat AI generates lengthy, coherent videos (lasting several minutes) by extending frames through text-to-video or image-to-video methods, primarily via ComfyUI or the API. To use it, download the models (LongCat, Distill LORA, VAE) into ComfyUI, set up a workflow that produces 5-second segments, and use “extend image” nodes to link scenes into videos ranging from 4 to over 30 minutes.
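The segment-extension workflow described above can be sketched as a simple loop: generate a short clip, take its last frame, and condition the next clip on it. Note that the function names here (`generate_segment`, `last_frame`) are hypothetical placeholders for illustration, not a real LongCat or ComfyUI API.

```python
# Hypothetical sketch of chaining short clips into one long video, as in
# the "extend image" workflow described above. generate_segment() and
# last_frame() are placeholder names, not a real LongCat or ComfyUI API.

def generate_segment(prompt: str, init_image=None, seconds: int = 5):
    """Placeholder: stands in for a text/image-to-video model call."""
    frames = [f"{prompt}-frame-{i}" for i in range(seconds * 30)]  # 30 fps
    if init_image is not None:
        frames[0] = init_image  # condition this clip on the previous ending
    return frames

def last_frame(segment):
    return segment[-1]

def make_long_video(prompt: str, total_seconds: int, seg_seconds: int = 5):
    video, tail = [], None
    for _ in range(total_seconds // seg_seconds):
        seg = generate_segment(prompt, init_image=tail, seconds=seg_seconds)
        video.extend(seg)
        tail = last_frame(seg)  # next clip starts where this one ended
    return video

# A 5-minute (300 s) clip at 30 fps, built from 60 five-second segments.
print(len(make_long_video("a cat by a window", total_seconds=300)))  # → 9000
```

Chaining on the last frame is what keeps scene content continuous across segment boundaries, which is the core trick behind multi-minute generation.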
Who operates the LongCat.AI platform?
The LongCat.AI platform is maintained and managed by Beijing Kuxun Interactive Technology Co., Ltd. and its associated companies (hereinafter referred to as “we”, “Meituan” or “LongCat Platform”), which is dedicated to offering generative artificial intelligence technology services to developers.
