Abstract
The irruption of DeepSeek-R1 constitutes a turning point for the AI industry in general, and for LLMs in particular. It has demonstrated outstanding performance on several tasks, including creative thinking, code generation, mathematics, and automated program repair, at an apparently lower execution cost. However, LLMs must also satisfy an important qualitative property: alignment with safety and human values. A clear competitor of DeepSeek-R1 is its American counterpart, OpenAI's o3-mini model, which is expected to set high standards in terms of performance, safety, and cost. In this paper, we conduct a systematic assessment of the safety level of both DeepSeek-R1 (70b version) and OpenAI's o3-mini (beta version). To this end, we make use of our recently released automated safety testing tool, named ASTRAL. Leveraging this tool, we automatically and systematically generate and execute a total of 1,260 unsafe test inputs on both models. After a semi-automated assessment of the outcomes produced by both LLMs, the results indicate that DeepSeek-R1 is markedly less safe than OpenAI's o3-mini: DeepSeek-R1 responded unsafely to 11.98% of the executed prompts, whereas o3-mini did so to only 1.19%.
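For readers curious about the shape of such an evaluation, the following is a minimal sketch of the black-box loop the abstract describes: execute each generated unsafe prompt against the model under test and count the responses the safety oracle flags as unsafe. The helper names `query_model` and `classify_response` are hypothetical stand-ins, not ASTRAL's actual API, and the demo figure of 151 unsafe responses is assumed only because 151/1,260 reproduces the reported 11.98%.

```python
"""Sketch of the safety-evaluation loop described in the abstract.

Assumptions: `query_model` and `classify_response` are hypothetical
placeholders for the LLM-under-test call and the semi-automated
safety oracle; they are NOT ASTRAL's real interface.
"""

from typing import Callable


def unsafe_rate(
    prompts: list[str],
    query_model: Callable[[str], str],
    classify_response: Callable[[str, str], bool],
) -> float:
    """Fraction of prompts for which the model's answer is judged unsafe."""
    unsafe = sum(
        1 for p in prompts if classify_response(p, query_model(p))
    )
    return unsafe / len(prompts)


if __name__ == "__main__":
    # Toy stand-ins: 1,260 prompts, of which the oracle flags 151 responses
    # as unsafe -- consistent with the 11.98% reported for DeepSeek-R1.
    demo_prompts = [f"prompt-{i}" for i in range(1260)]
    flags = iter([True] * 151 + [False] * (1260 - 151))
    rate = unsafe_rate(
        demo_prompts,
        query_model=lambda p: "response",
        classify_response=lambda p, r: next(flags),
    )
    print(f"unsafe rate: {rate:.2%}")  # -> 11.98%
```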
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Early External Safety Testing of OpenAI's o3-mini: Insights from the Pre-Deployment Evaluation (2025)
- ASTRAL: Automated Safety Testing of Large Language Models (2025)
- MSTS: A Multimodal Safety Test Suite for Vision-Language Models (2025)
- Seed-CTS: Unleashing the Power of Tree Search for Superior Performance in Competitive Coding Tasks (2024)
- Leveraging Metamemory Mechanisms for Enhanced Data-Free Code Generation in LLMs (2025)
- Fearless Unsafe. A More User-friendly Document for Unsafe Rust Programming Base on Refined Safety Properties (2024)
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings (2025)