I guess it reflected on its first answer, and the word 'perspective' gave it a hint.
What kind of moral acuity are you installing?
Before I got into using standard pre-trained models like GPT, Mistral and the like, I built a simple chat application in Java (called IvieAI), which was trained on a dataset of over 8,000 entries with the goal of making it a good-intentioned, good-humoured and positively oriented system. As I learned more about working with LLMs and applying RAG, I was able to reuse that existing dataset as training data for my current LLM application. So it is most likely that my current LLM app took on a more positive and moral standing when responding to the question. Roughly, the idea looks like the sketch below (simplified for illustration, not the actual IvieAI code; the class and method names are made up here).
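```java
import java.util.*;

// Simplified sketch of grounding an LLM prompt with entries from an
// existing curated dataset (RAG-style). Names and data are illustrative
// only, not the actual IvieAI implementation.
public class DatasetGroundingSketch {

    // A single record from the curated dataset: a prompt and the
    // positively oriented response it was paired with.
    record Entry(String prompt, String response) {}

    // Toy in-memory dataset standing in for the 8,000+ curated entries.
    static final List<Entry> DATASET = List.of(
        new Entry("How should I handle a rude coworker?",
                  "Stay calm, assume good intent, and talk to them privately."),
        new Entry("Is it okay to lie to spare someone's feelings?",
                  "Honesty delivered kindly is usually better than a comfortable lie.")
    );

    // Very naive retrieval: score each entry by word overlap with the question.
    static Entry retrieve(String question) {
        Set<String> queryWords = new HashSet<>(List.of(question.toLowerCase().split("\\W+")));
        Entry best = DATASET.get(0);
        long bestScore = -1;
        for (Entry e : DATASET) {
            long score = Arrays.stream(e.prompt().toLowerCase().split("\\W+"))
                               .filter(queryWords::contains)
                               .count();
            if (score > bestScore) { bestScore = score; best = e; }
        }
        return best;
    }

    // Build the final prompt: retrieved example first, then the user's
    // question, so the pre-trained model sees the tone it should follow.
    static String buildPrompt(String question) {
        Entry context = retrieve(question);
        return "Example of the tone to use:\nQ: " + context.prompt()
             + "\nA: " + context.response()
             + "\n\nNow answer in the same spirit:\nQ: " + question + "\nA:";
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt("Should I lie to protect a friend?"));
    }
}
```

A real retriever would normally use embeddings and vector similarity rather than word overlap, but the principle is the same: the curated, positively framed examples are pulled in alongside the user's question, which nudges the pre-trained model toward that tone.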