OpenAI has responded swiftly to emerging Chinese competition with o3-mini, a specialized AI reasoning model, marking the first time it has made a reasoning model freely available to ChatGPT users.
The release comes after DeepSeek, a Chinese AI startup, rattled the US tech industry with its R1 model, which claimed performance comparable to leading US models at a fraction of the cost.
o3-mini is a significant advancement in accessible AI technology, specifically designed for technical tasks like programming, mathematics, and scientific problem-solving. Unlike conventional language models, it employs a “chain of thought” approach, methodically working through problems step by step and self-correcting before providing answers.
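To make that pattern concrete, here is a toy Python sketch of step-by-step solving with self-checks. It is purely illustrative: OpenAI has not disclosed o3-mini’s internal reasoning mechanics, and the checks here are ordinary assertions, not model behavior.

```python
import math

def solve_quadratic_step_by_step(a: float, b: float, c: float):
    """Toy 'chain of thought': derive the roots of ax^2 + bx + c = 0
    in explicit steps, verifying each one before moving on."""
    # Step 1: compute the discriminant.
    disc = b * b - 4 * a * c
    # Self-check: a negative discriminant means no real roots,
    # so stop early rather than produce a wrong answer.
    if disc < 0:
        return None
    # Step 2: take the square root, then verify it by squaring.
    root = math.sqrt(disc)
    assert abs(root * root - disc) < 1e-9
    # Step 3: assemble both candidate solutions.
    x1 = (-b + root) / (2 * a)
    x2 = (-b - root) / (2 * a)
    # Final self-check: substitute back into the original equation.
    for x in (x1, x2):
        assert abs(a * x * x + b * x + c) < 1e-6
    return x1, x2

print(solve_quadratic_step_by_step(1, -3, 2))  # (2.0, 1.0)
```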
While the model offers capabilities similar to those of its predecessor, o1, in STEM domains, it brings marked gains in efficiency. OpenAI claims that o3-mini responds 24% faster than o1-mini and costs 63% less per input token to operate. Even so, at $1.10 per million input tokens, it remains roughly seven times more expensive than non-reasoning models like GPT-4o mini.
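For a back-of-the-envelope sense of that gap, the sketch below compares input-token costs, taking the $1.10 figure above and assuming GPT-4o mini’s published rate of $0.15 per million input tokens:

```python
# Dollars per input token, from the prices cited above.
O3_MINI = 1.10 / 1_000_000
GPT_4O_MINI = 0.15 / 1_000_000  # assumed published rate

tokens = 5_000_000  # e.g., five million input tokens in a month
print(f"o3-mini:     ${tokens * O3_MINI:.2f}")        # $5.50
print(f"GPT-4o mini: ${tokens * GPT_4O_MINI:.2f}")    # $0.75
print(f"price ratio: {O3_MINI / GPT_4O_MINI:.1f}x")   # 7.3x
```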
Access to o3-mini varies by subscription tier: free ChatGPT users can activate it via the “Reason” button but face usage limits; Plus and Team subscribers get 150 queries per day; and Pro subscribers ($200/month) have unlimited access. The model is also available through OpenAI’s API.
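For developers, a minimal API call might look like the sketch below, using OpenAI’s Python SDK. It assumes the openai package is installed, an OPENAI_API_KEY is set in the environment, and the reasoning_effort parameter (low/medium/high) that OpenAI exposes for its o-series models:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # trade answer quality against latency and cost
    messages=[
        {"role": "user", "content": "Find all integer solutions of x^2 - 5x + 6 = 0."},
    ],
)
print(response.choices[0].message.content)
```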
The release underscores the intensifying competition in AI development between US and Chinese companies. DeepSeek’s R1 model, which reportedly cost just $6 million to train compared with GPT-4’s estimated $100+ million, has raised questions about the future of US dominance in AI technology. R1’s success even contributed to a selloff that erased roughly $1 trillion in market value from tech stocks, dragging down the Nasdaq.
o3-mini also introduces new safety considerations. OpenAI acknowledged it is the company’s first model to receive a “medium risk” rating for model autonomy, owing to its enhanced capabilities on certain coding tasks.
The company applied “deliberative alignment” during training, teaching the model to reason explicitly over OpenAI’s safety policies before answering, though it notes that reasoning models are generally harder to control than their conventional counterparts.