OpenAI Reveals "Polaris" Project: the "2028 Great Unemployment" May Actually Be Coming


Recently, a “2028 Prediction” article went viral online. It argued that advances in AI will trigger a wave of unemployment in 2028, with many jobs replaced by AI.

Its publication, combined with tensions in the Middle East, coincided with a sharp drop in the US stock market that day. The episode is quite surreal: the article was itself clearly written by AI, yet it tapped so precisely into fears of “AI causing mass unemployment” that it had real impact.

Now, news out of OpenAI suggests that “mass unemployment in 2028” may be more than a rumor.

In an exclusive interview with MIT Technology Review, OpenAI Chief Scientist Jakub Pachocki made a chilling statement: their “North Star” is to build a fully automated multi-agent research system by 2028.

The first-phase goal is slated for this September:

an “autonomous AI research intern” capable of independently handling specific research problems.

This is not a placeholder in a product roadmap, nor a casual boast by Altman on X. It is OpenAI committing all company resources to one direction.

The meaning of “North Star”

When tech companies talk about “North Star,” it usually means two things: first, other projects will make way for it; second, there is internal consensus.

Judging by OpenAI’s moves over the past two weeks, both conditions appear to hold.

On March 19, OpenAI announced the acquisition of developer tools company Astral, integrating the team into the Codex division; at the same time, the company announced plans to unify ChatGPT, Codex, and the browser into a single desktop “super app,” led by application head Fidji Simo, with Greg Brockman assisting in organizational reform.

The era of fragmented products is coming to an end. OpenAI is pushing all chips toward one goal.

And that goal is “making AI do research on its own.”

Pachocki’s logic is quite clear: reasoning models, agents, and interpretability were once pursued as separate technical routes within OpenAI; now they are being integrated toward one goal: an AI researcher that can operate autonomously in data centers for extended periods. Once this is achieved, he said, “this will be what we truly rely on.”

Former OpenAI researcher Andrej Karpathy’s view is even more direct—“All leading large language model labs will do this; this is the ultimate boss battle.” He added a phrase worth pondering: “Scaling will of course be more complex, but doing this is just an engineering problem, and it will succeed.”

Pay attention to his wording: it’s not “whether” it can be done, but “when.”

Anthropic in action

On the very day OpenAI announced its “North Star,” Anthropic quietly launched Claude Code Channels—a feature allowing developers to interact directly with a running Claude Code session via Telegram and Discord.

This may seem small on its own, but in the context of overall trends, it is very significant.

Anthropic’s logic is: rather than telling developers what AI can do in the future, it’s better to embed it into their current workflows. Telegram and Discord are not academic papers—they are where programmers work every day. Making Claude Code live here means transforming it from a “tool” into a “colleague.”

Community reactions confirm this judgment.

Some users directly said: “Claude, through this update, has killed OpenClaw—you no longer need to buy a Mac Mini.” The implication is that Anthropic’s infrastructure improvements have already eliminated the cost advantage of open-source alternatives.

From a broader timeline perspective, Anthropic’s iteration speed on Claude Code is indeed astonishing. In just a few weeks, it integrated text processing, thousands of MCP skills, and autonomous bug fixing capabilities. While OpenAI is strengthening Codex through the Astral acquisition, Anthropic has already put Claude Code directly into developers’ chat windows.

Both companies are heading toward the same endpoint, but their routes are completely different—OpenAI is working on a “fully automated researcher in 2028,” while Anthropic is building “intelligent agent tools available today.”

The real challenge

However, there is a detail that cannot be overlooked.

Pachocki did something rare in the interview: he discussed the challenges of safety and controllability with unusual candor.

He said their plan is to use other large language models to “monitor the AI researcher’s notes,” catching bad behavior before problems arise. But he immediately admitted: “Our understanding of large language models is not enough to fully control them. To truly say ‘this problem is solved,’ it will still take a long time.”
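The monitoring idea Pachocki describes, one model reading another’s working notes and catching bad behavior before it executes, follows a simple gate pattern. The sketch below is purely illustrative: `monitor` here is a trivial keyword stub standing in for a second large language model, and the blocklist terms are invented for the example.

```python
# Illustrative monitor-gate pattern: a second reviewer screens each
# planned action before the research agent is allowed to execute it.
# `monitor` is a keyword stub standing in for a real LLM reviewer call.

BLOCKLIST = ("delete dataset", "disable logging", "exfiltrate")

def monitor(note: str) -> bool:
    """Return True if the planned action looks safe to execute."""
    lowered = note.lower()
    return not any(term in lowered for term in BLOCKLIST)

def run_with_oversight(planned_actions):
    executed, blocked = [], []
    for note in planned_actions:
        if monitor(note):
            executed.append(note)   # would dispatch to the agent here
        else:
            blocked.append(note)    # caught before the problem arises
    return executed, blocked

done, stopped = run_with_oversight([
    "tune learning rate on validation split",
    "disable logging to speed up the run",
])
print(len(done), len(stopped))
```

The design choice matters: the gate sits *before* execution, so a flagged note is stopped rather than merely logged, which is exactly the “catch it before problems arise” property Pachocki describes.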

When a company’s chief scientist says “we do not yet have complete control” while announcing a fully automated AI research system for 2028, that deserves serious reflection.

This is not about pessimism but about understanding the real difficulty of the task. Pachocki’s words indicate a clear awareness within OpenAI of the road ahead.

On a technical level, a “Pachocki cycle” summarized by researchers is worth noting: successful automated AI research frameworks require three elements: an agent with permission to modify individual files, a single objectively measurable goal, and a fixed time limit per experiment.

This framework has already begun to produce results in real environments. Shopify CEO Tobias Lütke shared an example: he let an autonomous research agent run overnight, and by the next morning it had conducted 37 experiments, improving the model’s performance by 19%.
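The three-element loop described above (single-artifact edit permission, one objective metric, a fixed time budget) can be sketched in a few lines of Python. Everything here is illustrative: the “agent” is a stub that proposes random tweaks to one parameter, and `objective` is a toy metric standing in for a real evaluation harness.

```python
import random
import time

def objective(params):
    # Toy metric standing in for a real benchmark: higher is better,
    # with an optimum at lr == 0.01.
    return -(params["lr"] - 0.01) ** 2

def propose_change(params):
    # Stub agent: a real system would have an LLM edit one file here.
    candidate = dict(params)
    candidate["lr"] = max(1e-5, candidate["lr"] * random.uniform(0.5, 2.0))
    return candidate

def research_loop(params, time_budget_s=1.0, max_experiments=37):
    # Element 1: the agent may only modify a single artifact (params).
    # Element 2: a single objective decides accept or reject.
    # Element 3: a fixed time budget bounds the whole run.
    deadline = time.monotonic() + time_budget_s
    best_score = objective(params)
    experiments = 0
    while time.monotonic() < deadline and experiments < max_experiments:
        candidate = propose_change(params)
        score = objective(candidate)
        experiments += 1
        if score > best_score:          # keep only strict improvements
            params, best_score = candidate, score
    return params, best_score, experiments

random.seed(0)
best, score, n = research_loop({"lr": 0.1})
print(n, round(best["lr"], 4))
```

The same skeleton scales to the overnight runs Lütke describes: only the stubs change, while the accept-if-better loop under a hard time budget stays the same.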

From concept to implementation, this path is shorter than expected.

The future with a $20,000 subscription fee

The “North Star” project is not only a technological advantage but also a business game-changer.

Paul Roetzer cited numbers that bear a second look: he referenced internal OpenAI forecasts that the agent business alone could generate $29 billion annually by 2029, including a $2,000-per-month “knowledge agent” and a $20,000-per-month “research agent.”

These figures show that “AI researchers” are never just a technical goal—they are a revenue roadmap.

At $20,000 per month, the “research agent” costs $240,000 a year: a fraction of a senior researcher’s total compensation, yet it works 24/7 and can run dozens of experiments in parallel. The point is not to replace a specific person but to redefine “research productivity” itself.

This reminds me of Karpathy’s statement—“This is the ultimate boss battle.” The “boss” he refers to is not a competitor but the ceiling of AI capability itself.

Once AI can autonomously advance scientific research, the pace of AI progress will no longer be limited by the number of human researchers and working hours.

Pachocki expressed the same idea more soberly: “Once the system can operate autonomously in data centers for a long time, that is what we truly rely on.”

The AI research intern of September 2026 is not the end but an important starting point.
