Alibaba aims to be an AI “shovel seller”: Wu Yongming takes charge of the ATH business group, and the Wukong Division makes its debut.

“Token anxiety” is spreading in the AI circle.

With AI Agent tools like “Little Lobster” OpenClaw gaining popularity, a problem is gradually coming into focus: there are not enough tokens. At AWE 2026 in Shanghai, more and more manufacturers emphasized AI capability integration, from robot vacuums to smart home appliances to embodied-intelligence devices. All of these products depend on large models and computing power, further driving up token demand.

Against the backdrop of rapidly increasing token demand, Alibaba is undergoing a significant integration of its AI business.

On March 16, Alibaba officially established the Alibaba Token Hub (ATH) business group, creating a new organization centered on the core goals of “creating tokens, delivering tokens, and applying tokens,” directly overseen by Alibaba CEO Wu Yongming.

It is worth noting that the ATH business group will operate parallel to Alibaba Cloud Intelligence Group and the E-commerce Group. This is not only one of the largest AI organizational integrations in recent years but also signifies the company’s attempt to rebuild its AI business system around the core resource of tokens.

If in the past, internet companies competed for traffic entry points, then in the era of generative AI, the infrastructure capabilities surrounding computational power and tokens are becoming the new focus of competition.

As AI begins to work autonomously, tokens are also becoming a new energy source in the AI era. By establishing ATH, Alibaba hopes to play the role of “selling shovels” in the AI industry chain, providing underlying capabilities for the growing AI applications and Agent ecosystem.

Alibaba’s AI Business Reorganization

With competition in generative AI heating up, internet giants are accelerating the systematic build-out of their AI businesses. On March 16, Alibaba announced the establishment of the Alibaba Token Hub (ATH) business group, integrating Tongyi Laboratory, the MaaS business line, the Qianwen Division, the Wukong Division, and the AI Innovation Division.

According to the internal letter from Wu Yongming, the Tongyi Laboratory in the ATH business group is responsible for creating multimodal models; the MaaS business line is responsible for building an efficient and open model service platform and technical system, supporting the entire industry AI ecosystem; the Qianwen Division will continue to promote personal AI assistant products; the newly established Wukong Division is responsible for creating a B-end AI native work platform, deeply integrating model capabilities into enterprise workflows; and the AI Innovation Division is responsible for exploring new AI application scenarios.

Industry insiders believe that the core logic of this adjustment is to use tokens as the core unit for the flow of AI capabilities, integrating large model research and development, platform services, and application products originally scattered across different departments into a complete chain.

In generative AI systems, the token is the basic unit in which a model processes and generates information. Each user input and each piece of model output is decomposed into tokens for computation, and most AI products also bill by token consumption. In the AI industry chain, therefore, tokens are not only a basic technical unit but are gradually becoming a key metric in AI business models.
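To make the token-and-billing mechanics above concrete, here is a minimal sketch. It uses a naive whitespace tokenizer purely for illustration (real models use subword schemes such as BPE, so actual counts differ), and the per-million-token rates are invented placeholders, not any vendor's real pricing:

```python
# Naive illustration of tokenization and per-token billing.
# Real LLMs use subword tokenizers (e.g. BPE), so actual token
# counts differ; the prices below are placeholder assumptions.

def tokenize(text: str) -> list[str]:
    """Crude whitespace tokenizer standing in for a real subword tokenizer."""
    return text.split()

def billing_cost(n_input_tokens: int, n_output_tokens: int,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost in dollars, given per-million-token rates for input and output."""
    return (n_input_tokens * price_in_per_m +
            n_output_tokens * price_out_per_m) / 1_000_000

prompt = "Summarize the quarterly sales report in three bullet points"
tokens = tokenize(prompt)
print(len(tokens))  # 9 whitespace tokens

# Hypothetical rates: $2.5 per 1M input tokens, $15 per 1M output tokens
print(billing_cost(1_000_000, 200_000, 2.5, 15.0))  # 2.5 + 3.0 = 5.5
```

Because output tokens are typically priced several times higher than input tokens, long generated responses dominate the bill even when prompts are large.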

As AI applications multiply, the importance of tokens is rising with them. In particular, as AI Agents enter production scenarios, their token consumption often far exceeds that of traditional conversational applications.

IDC data shows that with the continuous upgrade of local model capabilities, the rapid maturity of agent technology and application ecosystems, and the resonance of industrial policies, the number of active intelligent agents in Chinese enterprises is expected to exceed 350 million by 2031, with a compound annual growth rate of over 135%, leading the major global markets. At the same time, due to the increase in task execution density and the complexity of tasks, the annual consumption of tokens by intelligent agents will also experience an exponential leap of over 30 times.

Huatai Securities pointed out that applications like OpenClaw consume massive numbers of tokens daily, and growth in token consumption directly increases the demand for computing power, which in turn depends on electricity supply. As the AI paradigm shifts from “conversational AI” to AI agents that autonomously execute complex tasks around the clock, token consumption will show significant exponential growth. Data indicates that China’s overall daily token consumption rose from about 100 billion at the beginning of 2024 to over 30 trillion by mid-2025, and reached 180 trillion by February 2026.
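The growth rate implied by those figures can be worked out directly. The sketch below takes the three cited data points (with the dates pinned to approximations of “beginning of 2024,” “mid-2025,” and “February 2026,” which is an assumption on my part) and derives the overall multiple and the average monthly growth rate:

```python
# Implied growth from the daily-token-consumption figures cited above:
# ~100 billion tokens/day in early 2024, ~30 trillion by mid-2025,
# ~180 trillion by February 2026. Exact dates are assumed approximations.
from datetime import date

points = {
    date(2024, 1, 1): 1e11,    # ~100 billion tokens/day
    date(2025, 7, 1): 3e13,    # ~30 trillion tokens/day
    date(2026, 2, 1): 1.8e14,  # ~180 trillion tokens/day
}

dates = sorted(points)
start, end = dates[0], dates[-1]
multiple = points[end] / points[start]
months = (end.year - start.year) * 12 + (end.month - start.month)
monthly_growth = multiple ** (1 / months) - 1  # average compounded per month

print(f"{multiple:.0f}x over {months} months")
print(f"implied average monthly growth: {monthly_growth:.0%}")
```

An 1800x rise over roughly two years corresponds to compounding at about a third per month, which is the kind of curve the article describes as exponential.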

Against this backdrop, Alibaba’s establishment of ATH aims to integrate Tongyi large models, model service platforms, and C-end and B-end applications under the same business group, attempting to break down the collaboration barriers between different business units and strengthen resource synergy through a unified organizational structure.

IDC China Research Manager Sun Zhenya stated that this growth cycle will open a rare strategic window for enterprises. Companies that deploy intelligent agents early will benefit simultaneously in efficiency, cost optimization, and business innovation. In this process, enterprises should quickly move from validating agent scenarios to operating them at scale, and prepare in advance on architecture upgrades, governance systems, and cost control.

The “Wukong Division” Emerges

The Times reporter noticed that after Alibaba announced the establishment of the Token Hub business group, the official DingTalk account released a video to promote the upcoming AI DingTalk 2.0 launch.

In the video, DingTalk’s iconic black mascot transforms into Sun Wukong wielding a golden staff, standing atop a mountain facing four gigantic “lobsters.” The visual style is rich with Eastern mythological elements, with swirling clouds and a tense atmosphere. The scene then freezes on a striking subtitle: The Great Sage Returns.

In this organizational adjustment, a previously undisclosed department, the Wukong Division, has become the focus of external attention.

According to disclosures from Alibaba’s internal letter, the Wukong Division is positioned as a “B-end AI native work platform,” with the core goal of deeply integrating large model capabilities into enterprise workflows. Many industry insiders believe this positioning also indicates that Alibaba is further strengthening its enterprise-level AI application layout.

As one of Alibaba’s most important enterprise service platforms, DingTalk has over 700 million users. In recent years, DingTalk has gradually evolved from an instant messaging tool into a digital operating system for enterprises, covering collaborative office, project management, human resources management, and various enterprise applications.

In recent years, DingTalk has successively launched features such as AI assistants, the “/” Magic Wand, and AI smart meeting minutes. It has since introduced the concept of digital employees, attempting to embed AI Agent nodes into everyday enterprise processes and push AI from merely answering questions to actually doing the work.

In the context of the rising prominence of AI Agents, enterprise office software is now viewed as one of the most direct scenarios for AI implementation.

From this perspective, the establishment of the Wukong Division is likely to become an important support point for Alibaba in the enterprise AI field. By collaborating with enterprise service products like DingTalk, Alibaba can extend large model capabilities from conversational tools to enterprise production processes.

Industry insiders told The Times that the core value of enterprise AI is not just answering questions, but participating in specific work processes, such as automatically generating reports, handling customer service tickets, analyzing data, and even completing some R&D tasks. “When AI truly enters enterprise systems, it is no longer just a tool, but a part of enterprise productivity.”

As more and more enterprises begin to explore deploying AI in office, customer service, and R&D scenarios, enterprise-level AI applications are also seen as an important direction for the commercialization of generative AI. From this angle, the emergence of the Wukong Division signifies that Alibaba is attempting to integrate Tongyi large models, enterprise office platforms, and AI Agent capabilities to push AI from being an auxiliary tool to being part of the enterprise production system.

AI Competition Enters Systemic Warfare

As generative AI gradually shifts from chat tools to production tools, the focus of competition in the AI industry is also changing. From model capabilities to application ecosystems, to computational power infrastructure, competition among major tech companies is increasingly evolving into a systematic contest around AI productivity.

In this process, the rapid growth of token consumption is driving up the cost of AI computational power, and related service prices are beginning to show an upward trend.

Since the beginning of this year, several cloud computing and AI vendors have raised their service prices in succession. Amazon Web Services was the first, increasing prices on some services aimed at large-model training by about 15%; Google announced a price increase for global data transfer services starting May 1, with the per-GB rate in North America doubling from $0.04 to $0.08.

Price adjustments are also occurring at the model-service level. OpenAI raised the input price of GPT-5.4 to $2.5 per million tokens, with the output price reaching $15 per million tokens. Domestically, Zhipu AI raised subscription prices for its GLM Coding Plan by up to 60%, with API call prices increasing by 67% to 100%. On March 13, Tencent Cloud also announced a price adjustment for its Hunyuan series models.
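Taking the quoted rates at face value ($2.5 per million input tokens, $15 per million output tokens), a back-of-the-envelope comparison shows why agents magnify spend relative to chat. The request sizes below are invented for illustration; the only figures taken from the article are the two prices:

```python
# Back-of-the-envelope API cost at the per-million-token rates quoted above.
# Request sizes are invented for illustration only.

PRICE_IN_PER_M = 2.5    # dollars per 1M input tokens
PRICE_OUT_PER_M = 15.0  # dollars per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API request at the rates above."""
    return (input_tokens / 1e6) * PRICE_IN_PER_M + \
           (output_tokens / 1e6) * PRICE_OUT_PER_M

# A single chat turn: one small prompt, one reply.
chat = request_cost(input_tokens=500, output_tokens=800)

# An autonomous agent run: many tool-calling steps, each re-sending context.
agent = sum(request_cost(input_tokens=8_000, output_tokens=1_500)
            for _ in range(40))

print(f"chat turn: ${chat:.4f}")
print(f"agent run: ${agent:.2f}")
print(f"ratio: {agent / chat:.0f}x")
```

Even with modest assumptions, a multi-step agent run costs two orders of magnitude more than a single chat turn, which is the mechanism behind the consumption growth the article describes.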

What truly determines the competitive landscape is often the complete industrial system formed around model capabilities. This includes not only the research and development of basic models but also various links such as computational power infrastructure, development platforms, and application ecosystems.

In overseas markets, this trend has gradually become evident. Microsoft has embedded OpenAI model capabilities into the Office and Teams enterprise software ecosystem through its Azure cloud and Copilot product system; Google, on the other hand, has promoted AI capabilities into broader application scenarios through its Gemini model in collaboration with Google Workspace and search products.

Against this backdrop, domestic internet companies have also begun to accelerate the integration of AI resources. The establishment of the ATH business group indicates that Alibaba is attempting to build a complete system covering models, platforms, and applications through a unified architecture.

As generative AI accelerates towards industrial applications, those who can provide tokens stably and at low cost will have a greater opportunity to become the “shovel sellers” in the AI industry chain.
