Study: AI models that consider users' feelings are more likely to make errors (Ars Technica)
Training language models to be warm can reduce accuracy and increase sycophancy (Nature)
AI chatbots can prioritize flattery over facts - and that carries serious risks (The Conversation)
Friendly AI chatbots more likely to support conspiracy theories, study finds (The Guardian)
"Warm" AI Chatbots Are More Likely