How was DeepSeek created? An analysis of DeepSeek’s growth history

In the future, there will be more and more hardcore innovation. It may not be easy to understand now, because the entire social group needs to be educated by facts. When this society allows people who innovate hardcore to succeed, the collective mindset will change. We just need a bunch of facts and a process….

DeepSeek has done it! OpenAI admits its closed-source approach was a mistake, and its lead is shrinking

After OpenAI released the o3-mini model, its CEO Sam Altman, Chief Research Officer Mark Chen, Chief Product Officer Kevin Weil, Vice President of Engineering Srinivas Narayanan, Head of API Research Michelle Pokrass, and Head of Research Hongyu Ren held an online technical Q&A on Reddit, one of the world’s largest discussion forums. The main topics…

OpenAI o3-mini vs. DeepSeek-R1: Who is the king of the new generation of AI models?

o3-mini arrives with the momentum of a challenger. On January 31, OpenAI released the new o3-mini large model and made some of its features free to all ChatGPT users. Although queries are capped, this lets users try OpenAI’s latest commercial model as soon as possible….

First launch! SiliconFlow X Huawei Cloud jointly launch DeepSeek R1 & V3 inference services based on the Ascend Cloud!

DeepSeek-R1 and DeepSeek-V3 have caused a global sensation since their open-source release. They are a gift from the DeepSeek team to all of humanity, and we are sincerely happy for their success. After days of hard work by the SiliconFlow and Huawei Cloud teams, today we are also giving Chinese users a Chinese…

A comprehensive comparison of OpenAI’s newly released o3-mini and DeepSeek R1

OpenAI has released its latest reasoning model, o3-mini, which is optimized for fields such as science, mathematics, and programming, offering faster responses, higher accuracy, and lower cost. Compared with its predecessor o1-mini, o3-mini shows significantly improved reasoning, especially on complex problems. In testing, evaluators preferred o3-mini’s answers 56% of the time, and the error rate has…

In the AI community, DeepSeek R1 has steadily surpassed o1 and Claude in physics tests, and we have entered the golden age of RL.

None of us expected 2025 to begin this way in the AI field. DeepSeek R1 is truly amazing! Recently, the “mysterious power from the East,” DeepSeek, has been captivating Silicon Valley. I asked R1 to explain the Pythagorean theorem in detail. The AI did all of this in less than 30 seconds without any…

Breaking news! OpenAI released 2 new inference models today: o3-mini and o3-mini-high.

o3-mini and o3-mini (high) are being released today. Regular users will get o3-mini, and Plus users will also be able to use o3-mini (high). o3-mini (high) scores about 200 points higher than o1 on Codeforces, runs faster than o1, and performs better in coding and mathematics, yet its cost remains at the o1-mini level….

Altman: We were wrong about open-source AI! DeepSeek has eroded OpenAI’s lead, and GPT-5 is next

o3-mini arrived late at night, and OpenAI finally revealed its latest trump card. During a Reddit AMA, Altman candidly admitted that OpenAI had been on the wrong side of the open-source AI debate. He said an open-source strategy is under internal discussion, and that models will continue to be developed, but…

Paper-DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

Abstract: This paper introduces DeepSeek’s first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, trained through large-scale reinforcement learning (RL) without supervised fine-tuning (SFT), demonstrates remarkable reasoning capabilities; through RL, it naturally develops powerful reasoning behaviors. However, it faces challenges such as poor readability and language mixing. To address these issues and further enhance reasoning performance, DeepSeek-R1 was developed,…