"Colleague.skill" goes viral: Behind the meme are legal and technical risks
Science and Technology Innovation Board Daily reported on April 5 (Editor: Song Ziqiao): "Transform a cold parting into a warm Skill: welcome to join Cyber Immortality / Digital Life 1.0."
When this line appears on the homepage of a GitHub project, it feels more like a piece of dark humor than something truly “warm.”
The open-source project "colleague.skill" launched five days ago, has already collected 7.3k stars, and is quickly going mainstream. It is an AI Agent tool built on Claude Code: it collects a departing employee's data such as chat logs, documents, and code (e.g., Feishu messages, DingTalk documents, emails, screenshots), adds some subjective descriptions, and feeds everything into a large model, generating an AI doppelgänger that mimics that employee's work habits and way of speaking and can take over the employee's work.
The developers have even prepared a "fully automated extraction tool" that covers the most mainstream chat-log export tools currently on the market (such as WeChatMsg and PyWxDump) and supports data extraction from Feishu, DingTalk, and Slack all the way to iMessage.
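The extraction step described above amounts to pulling messages out of several incompatible export formats and normalizing them into one schema before they reach the model. The sketch below is purely illustrative: the `Message` fields and the per-platform key names are assumptions, not the actual colleague.skill data format.

```python
# Hypothetical sketch: normalize chat exports from different
# platforms (Feishu, Slack, ...) into a single message schema
# so they can be fed to a language model as one corpus.
from dataclasses import dataclass

@dataclass
class Message:
    source: str   # e.g. "feishu", "slack"
    sender: str
    text: str

def normalize_feishu(record: dict) -> Message:
    # Assumed export keys; real dump tools each use their own layout.
    return Message("feishu", record["from"], record["content"])

def normalize_slack(record: dict) -> Message:
    return Message("slack", record["user"], record["text"])

corpus = [
    normalize_feishu({"from": "alice", "content": "Deploy is at 5pm"}),
    normalize_slack({"user": "alice", "text": "Use the staging config"}),
]
print(len(corpus))  # unified list, ready to assemble into a prompt
```

In practice each supported platform would get one such adapter, which is why the project can claim coverage "from Feishu and DingTalk all the way to iMessage" without changing anything downstream.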
Architecturally, the bottom layer is "Work Skill," which captures professional capabilities, from coding style and business logic to project SOPs, compiling one person's workplace experience into an executable workflow. The top layer is the "Persona" module, which simulates a real person's emotions and speech through a five-layer structure: hard rules, identity positioning, expression style, decision-making mode, and interpersonal behavior.
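One plausible reading of this two-layer design is that both layers ultimately compile down to text in a system prompt. The layer names below follow the article; everything else (the example rules, the prompt format, the `build_system_prompt` helper) is an assumption for illustration.

```python
# Illustrative sketch: the five "Persona" layers stacked on top of a
# "Work Skill" workflow, compiled into one system prompt for the model.
PERSONA_LAYERS = [
    ("hard rules", "Never approve a release without passing tests."),
    ("identity positioning", "Senior backend engineer, payments team."),
    ("expression style", "Short sentences, occasional dry humor."),
    ("decision-making mode", "Data first; escalate when ambiguous."),
    ("interpersonal behavior", "Replies fast, praises in public."),
]

WORK_SKILL = "SOP: triage -> reproduce -> patch -> code review -> deploy"

def build_system_prompt(layers, skill):
    lines = [f"[{name}] {rule}" for name, rule in layers]
    lines.append(f"[work skill] {skill}")
    return "\n".join(lines)

prompt = build_system_prompt(PERSONA_LAYERS, WORK_SKILL)
print(prompt.count("\n") + 1)  # 6 lines: five persona layers + one skill
```

Framed this way, the "digital double" is a layered prompt plus a workflow description, which is consistent with the article's later point that the project is prompt engineering rather than anything resembling consciousness upload.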
“AI replacing people” is not just a plot from science-fiction novels.
A report released on April 3 by the consulting firm Challenger, Gray & Christmas shows that in the first quarter of 2026, the U.S. tech industry laid off 52,050 people, up 40% year over year, with AI explicitly listed as a core reason. Andy Challenger, Chief Revenue Officer of Challenger, Gray & Christmas, said: “Companies are shifting budgets to invest in AI rather than creating jobs. The trend of job replacement has already become apparent, especially in programming roles.”
Anthropic CEO Dario Amodei has also said: “AI may replace about half of office jobs in the next one to five years.”
But behind this, concerns have already begun to surface.
On one hand, the technical capabilities of the "colleague.skill" project are being exaggerated. Essentially, the project is still a text-generation tool based on prompt engineering, not a true "consciousness upload." Its quality depends entirely on the raw materials: long documents beat fragmented messages, and proactive output beats passive replies. It can replicate methods, but it cannot replicate creativity or the intuition for real-time adaptation.
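The "raw materials" dependence can be made concrete: such an agent only ever sees whatever was collected, stuffed into a context window. The sketch below stubs out the model call (a real system would call a hosted LLM API here); the function names and prompt layout are assumptions, not the project's actual code.

```python
# Minimal sketch of "prompt engineering, not consciousness upload":
# the agent's reply is conditioned solely on the dumped corpus.
def llm(prompt: str) -> str:
    # Stub standing in for a real model call; it just reports how
    # much context it was given.
    n_lines = prompt.count("\n") + 1
    return f"(model reply conditioned on {n_lines} context lines)"

def answer_as_colleague(corpus: list[str], question: str) -> str:
    context = "\n".join(corpus)           # everything hinges on this corpus
    prompt = f"{context}\nQ: {question}"  # no memories beyond the dump
    return llm(prompt)

reply = answer_as_colleague(
    ["Design doc: retries use exponential backoff.",
     "Chat: 'ping me before touching prod'"],
    "How do we handle retries?",
)
print(reply)
```

If the corpus is thin or fragmentary, there is simply nothing for the model to condition on, which is exactly why long, proactive documents produce a better double than scattered chat replies.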
On the other hand, the project steps squarely into ethical and privacy minefields. Chat logs and work content contain sensitive information covered by the Personal Information Protection Law, and using other people's data for AI training without permission may infringe their personal-information rights and copyrights.
However, how the relevant responsibilities and liabilities should be allocated remains unclear. According to 21st Century Business Herald, Chen Tianhao, a tenured associate professor at Tsinghua University's School of Public Policy and Management and assistant director of the university's Science and Technology Development and Governance Research Center, argues that the tacit knowledge workers form in the course of their work should in principle be controlled by the workers themselves. The current legal framework has a gap here; in the future, the Labor Law and related regulations will need to be revised, and labor contracts should specify in advance who has the right to use this tacit knowledge and where the boundaries of that use lie.
At a deeper level, when a “digital double” can take over the person’s communication, decision-making, and even “blame-shifting,” how should responsibilities and liabilities be defined? When workplace relationships become datafied and commodified, will the connection between people become further alienated?
AI takes over junior employees' menial tasks, and in the short term efficiency improves. But new hires lose the ground on which to train. Is this a victory for efficiency, or a dilemma for talent development?
AI is good at optimizing processes, but not at managing relationships. Given that AGI has not yet been achieved, are we overestimating AI's technical boundaries and placing trust that belongs with humans in machines?
Technology has no inherent good or evil; what matters is the user’s intent. In today’s world where the AI wave sweeps everything, the above questions are especially worth deep reflection.