Tech

Researchers created an open rival to OpenAI’s o1 ‘reasoning’ model for under $50

Jarvis GN
Last updated: February 6, 2025 9:18 am
Image Credits: Yuichiro Chino / Getty Images

AI researchers at Stanford and the University of Washington were able to train an AI “reasoning” model for under $50 in cloud compute credits, according to a new research paper released last Friday.

The model, known as s1, performs similarly to cutting-edge reasoning models, such as OpenAI’s o1 and DeepSeek’s R1, on tests measuring math and coding abilities. The s1 model is available on GitHub, along with the data and code used to train it.

The team behind s1 said they started with an off-the-shelf base model, then fine-tuned it through distillation, a process that extracts the “reasoning” capabilities of another AI model by training on its answers.

The researchers said s1 is distilled from one of Google’s reasoning models, Gemini 2.0 Flash Thinking Experimental. Distillation is the same approach Berkeley researchers used to create an AI reasoning model for around $450 last month.

To some, the idea that a few researchers without millions of dollars behind them can still innovate in the AI space is exciting. But s1 raises real questions about the commoditization of AI models.

Where’s the moat if someone can closely replicate a multi-million-dollar model with relative pocket change?

Unsurprisingly, big AI labs aren’t happy. OpenAI has accused DeepSeek of improperly harvesting data from its API for the purposes of model distillation.

The researchers behind s1 were looking to find the simplest approach to achieve strong reasoning performance and “test-time scaling,” or allowing an AI model to think more before it answers a question. These were a few of the breakthroughs in OpenAI’s o1, which DeepSeek and other AI labs have tried to replicate through various techniques.

The s1 paper suggests that reasoning models can be distilled with a relatively small dataset using a process called supervised fine-tuning (SFT), in which an AI model is explicitly instructed to mimic certain behaviors in a dataset.

SFT tends to be cheaper than the large-scale reinforcement learning method that DeepSeek employed to train its competitor to OpenAI’s o1 model, R1.
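
For readers curious what that looks like in practice, here is a minimal sketch of SFT on distilled reasoning traces using PyTorch and Hugging Face transformers. The base model name, the question/thinking/answer record fields, the `<think>` markers, and the hyperparameters are illustrative assumptions, not the s1 authors’ exact recipe.

```python
# A minimal sketch of supervised fine-tuning (SFT) on distilled reasoning traces.
# The base model, prompt template, and hyperparameters are illustrative
# assumptions, not the s1 authors' exact recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "Qwen/Qwen2.5-7B-Instruct"  # example of a small off-the-shelf Qwen model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def format_example(ex):
    # Concatenate question, teacher reasoning, and answer into one sequence,
    # so the model learns to produce the reasoning before the final answer.
    return (
        f"Question: {ex['question']}\n"
        f"<think>{ex['thinking']}</think>\n"
        f"Answer: {ex['answer']}"
    )

def sft_step(example):
    batch = tokenizer(format_example(example), return_tensors="pt")
    # Standard causal-LM objective: labels are the inputs themselves, so the
    # loss rewards token-by-token imitation of the teacher's traces.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The key point is that nothing exotic is required: the model is simply trained to imitate a teacher’s written-out reasoning, which is far cheaper than reinforcement learning over many rollouts.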

Google offers free access to Gemini 2.0 Flash Thinking Experimental, albeit with daily rate limits, via its Google AI Studio platform.

Google’s terms forbid reverse-engineering its models to develop services that compete with the company’s own AI offerings, however. We’ve reached out to Google for comment.

S1 is based on a small, off-the-shelf AI model from Alibaba-owned Chinese AI lab Qwen, which is available to download for free. To train s1, the researchers created a dataset of just 1,000 carefully curated questions, paired with answers to those questions, as well as the “thinking” process behind each answer from Google’s Gemini 2.0 Flash Thinking Experimental.
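
A single record in a dataset of that shape might look like the following. The field names and content are hypothetical, meant only to illustrate the question, teacher reasoning trace, and answer triplet the researchers describe.

```python
# An illustrative training record; field names and content are assumptions,
# not the actual s1 dataset schema.
record = {
    "question": "A train travels 120 km in 1.5 hours. What is its average speed in km/h?",
    # Reasoning trace distilled from the teacher model (Gemini 2.0 Flash
    # Thinking Experimental in the researchers' setup).
    "thinking": (
        "Average speed is distance divided by time. "
        "120 km / 1.5 h = 80 km/h. Double-check: 80 * 1.5 = 120. Correct."
    ),
    "answer": "80 km/h",
}
```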

Training s1 took less than 30 minutes on 16 Nvidia H100 GPUs, and the model achieved strong performance on certain AI benchmarks, according to the researchers. Niklas Muennighoff, a Stanford researcher who worked on the project, told TechCrunch he could rent the necessary compute today for about $20.

The researchers used a nifty trick to get s1 to double-check its work and extend its “thinking” time: They told it to wait. Adding the word “wait” during s1’s reasoning helped the model arrive at slightly more accurate answers, per the paper.
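
A rough sketch of how that trick could be implemented at decoding time is below. The checkpoint path and the `</think>` end-of-thinking marker are assumptions made for illustration; the idea is simply to intercept the model when it tries to stop reasoning, append “Wait”, and let it keep going.

```python
# A minimal sketch of the "wait" trick described above, using Hugging Face
# transformers. The checkpoint path and end-of-thinking marker are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/s1-finetuned"  # hypothetical local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

prompt = "Question: How many prime numbers are there below 30?\n<think>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

end_of_thinking = "</think>"
for _ in range(2):  # force the model to extend its reasoning twice
    output_ids = model.generate(input_ids, max_new_tokens=512)
    text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    if end_of_thinking in text:
        # The model tried to stop thinking: strip the marker, append "Wait,"
        # and continue generating so it re-checks its work.
        text = text.split(end_of_thinking)[0] + " Wait,"
        input_ids = tokenizer(text, return_tensors="pt").input_ids
    else:
        break

# Finally, let the model close its reasoning and produce the answer.
final_ids = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(final_ids[0], skip_special_tokens=True))
```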

In 2025, Meta, Google, and Microsoft plan to invest hundreds of billions of dollars in AI infrastructure, which will partially go toward training next-generation AI models.

That level of investment may still be necessary to push the envelope of AI innovation. Distillation has been shown to be a good method for cheaply re-creating an AI model’s capabilities, but it doesn’t produce new models that are vastly better than what’s available today.

Source: techcrunch.com/

Tagged: AI, DeepSeek, OpenAI