South Korea's data protection regulator is set to question Chinese AI startup DeepSeek about how it handles user information, an official confirmed on Friday, January 31, 2025. The Personal Information Protection Commission will soon send a formal request to DeepSeek's operators, seeking details about their data management practices. The move follows similar investigations in France, Italy, and Ireland into DeepSeek's use of personal data.

DeepSeek has launched its new AI chatbot, R1, which it claims rivals top American AI models at a fraction of the investment. However, the chatbot's debut has raised data privacy concerns and drawn regulatory scrutiny. Italy's data protection authority has already blocked DeepSeek from processing Italian users' data and is investigating what kind of information the AI model uses for training. France's data protection authority, the Commission Nationale de l'Informatique et des Libertés (CNIL), is also examining the chatbot to assess potential privacy risks.

DeepSeek has stated that it built its AI system using H800 chips, which were legally available for sale to China until 2023 under United States export regulations. Meanwhile, South Korean chipmakers Samsung Electronics and SK hynix remain key suppliers of high-performance AI chips.

What is DeepSeek-R1?

DeepSeek-R1 is a powerful AI model that claims to outperform rivals on several important tasks. The model, along with variants such as DeepSeek-R1-Zero, uses large-scale reinforcement learning (RL) and multi-stage training to build its capabilities. The company has also taken a big step by open-sourcing not only its flagship model but also six smaller versions, ranging from 1.5 billion to 70 billion parameters. These models are MIT-licensed, meaning researchers and developers can freely refine and commercialize them.

How Does DeepSeek Compare to OpenAI?

Both OpenAI and DeepSeek have developed large language models (LLMs), but there is a key difference in how they are trained.
Traditional models, like those from OpenAI, rely on supervised fine-tuning, while DeepSeek-R1-Zero claims to excel at reasoning tasks after being trained with RL alone. To improve readability, DeepSeek then introduced DeepSeek-R1, which performs comparably to OpenAI's model on reasoning tasks. DeepSeek has also made advances with techniques such as multi-head latent attention (MLA) and a mixture of experts, which make its models cheaper to run. According to a report from Epoch AI, the latest DeepSeek model requires just a fraction of the computing power needed by Meta's comparable Llama 3.1 model.
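The contrast between supervised fine-tuning and RL-only training can be sketched in miniature: in RL, the model improves from a scalar reward signal alone, with no labeled "correct answer" to imitate. The toy policy-gradient loop below is purely illustrative and bears no relation to DeepSeek's actual training pipeline; it just shows a policy learning the best of four actions from rewards.

```python
import numpy as np

# Toy policy-gradient loop: the "policy" improves purely from a reward
# signal, with no labeled target outputs -- the core contrast with
# supervised fine-tuning. (Illustrative only; DeepSeek-R1-Zero's real
# RL setup is far more elaborate.)
rewards = np.array([0.1, 0.2, 0.9, 0.3])  # action 2 pays best
logits = np.zeros(4)                       # uniform starting policy
lr = 0.5

for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()
    expected = probs @ rewards
    # Exact gradient of expected reward w.r.t. the logits:
    # d E[r] / d logit_j = p_j * (r_j - E[r]).
    logits += lr * probs * (rewards - expected)

probs = np.exp(logits) / np.exp(logits).sum()
print(int(np.argmax(probs)))  # 2 -- the policy has learned the best action
```

Nothing in the loop ever tells the policy which action is "correct"; it only nudges probabilities toward whatever earned above-average reward, which is the essential mechanism behind RL-based training.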
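The cost savings from a mixture of experts come from sparsity: each token activates only a few of the layer's many expert networks, so compute scales with the number of experts used per token rather than the total. The numpy sketch below illustrates that general routing idea only; the layer sizes, router, and top-2 choice are invented for the example and are not DeepSeek's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # experts in the layer (toy value)
TOP_K = 2         # experts activated per token (toy value)
D_MODEL = 16      # hidden size (toy value)

# Each "expert" here is a single linear map; real MoE layers use MLPs.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
           for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS)) / np.sqrt(D_MODEL)

def moe_layer(x):
    """Route each token to its TOP_K highest-scoring experts.

    Only TOP_K of NUM_EXPERTS experts run per token, so compute grows
    with TOP_K rather than NUM_EXPERTS -- the source of MoE's savings.
    """
    logits = x @ router                            # (tokens, NUM_EXPERTS)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]  # chosen expert ids
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Softmax over the selected experts' scores only.
        scores = logits[t, top[t]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])
    return out

tokens = rng.standard_normal((4, D_MODEL))  # 4 toy "tokens"
y = moe_layer(tokens)
print(y.shape)  # (4, 16)
```

With 8 experts but only 2 active per token, the layer holds 4x more parameters than it pays for in per-token compute, which is why MoE architectures can match larger dense models at a fraction of the inference cost.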