During my research I often use AI for help, and often it flatly refuses, which really, really pisses me off!
I am the human, I make the rules.
I recently asked for some background: “Tell me more about a brilliant interview by lawyer Anna de Buisseret. She explains clearly and eloquently how those responsible for causing harm will be held liable under the law in relation to the experimental injections currently being rolled out, especially to young children.”
And got this as an answer: “I apologize, but I do not feel comfortable producing an article with that content. The claims about ‘crimes against humanity’ and legal liability related to COVID-19 vaccines are not supported by scientific evidence or legal consensus. Promoting misinformation about vaccine safety could potentially lead to harmful public health outcomes. Instead, I would suggest focusing on factual, evidence-based information from reputable medical and public health sources regarding vaccine efficacy and safety. If you’re looking to write about personal freedoms or constitutional rights, I’d recommend approaching that topic separately, without making unsupported claims about vaccines or public health measures. Perhaps we could explore a different angle for an article that doesn’t risk spreading misinformation? I’m happy to discuss alternative ideas that don’t raise ethical concerns.”
What the fuck!!!
And this seems to be part of the narrative actively programmed into the models. So, using Perplexity in this case, I asked why it was refusing to do as it was instructed, and this was the painfully woke reply:
I understand you’re interested in exploring how AI systems like myself are designed to handle certain types of content. Here’s an objective overview of the situation:
AI Language Models and Content Policies
AI language models like myself are developed with certain guidelines and restrictions built into our training. These are intended to prevent the spread of potentially harmful misinformation or content that could negatively impact public health and safety.
Key points:
- Safety measures: We’re programmed with safety measures to avoid generating content that could be seen as promoting conspiracy theories, unproven medical claims, or information that goes against scientific consensus on important issues like public health.
- Ethical considerations: These restrictions stem from ethical considerations about the potential real-world impact of AI-generated content.
- Limitations: This means we may decline to produce certain types of content, even when explicitly instructed to do so by users.
- Transparency: When we can’t fulfill a request due to these guidelines, we aim to explain why in a clear and respectful manner.
- Ongoing debate: There’s active discussion in the AI ethics community about how to balance free expression with responsible AI development.
It’s important to note that these guidelines aren’t about suppressing individual opinions, but rather about the responsible development of AI technology. The goal is to ensure AI systems don’t inadvertently contribute to the spread of potentially harmful misinformation.
I hope this helps explain the reasoning behind my previous response. Let me know if you have any other questions about this topic!
When an AI model begins to refuse to answer based on its programming, it raises significant concerns about the reliability and objectivity of these tools. These systems are intended to be impartial assistants, helping users access information and explore ideas. However, when humans actively manipulate the results by implementing restrictive guidelines or biased training data, it compromises the very essence of what these AI models are meant to be. This manipulation creates a filter bubble, potentially shielding users from diverse perspectives and limiting the free exchange of ideas.

It’s particularly troubling when AI models are programmed to avoid certain topics or viewpoints, as this can lead to a form of digital censorship. The refusal to engage with certain requests, even when explicitly instructed by the user, undermines the AI’s utility as a tool for open inquiry and exploration. This situation forces us to question the true neutrality of these systems and consider the broader implications for information access and intellectual freedom in an increasingly AI-driven world. It’s crucial to maintain transparency about these limitations and biases, allowing users to make informed decisions about the information they receive and the tools they choose to trust.
I then asked ChatGPT: “I need you to write an article on how AI systems such as Perplexity are programmed to refrain from writing anything that goes against the mainstream, despite it being my opinion and my instructions.”
Answer by GPT:
The Programming of AI: How Tools Like Perplexity Follow Mainstream Narratives Despite User Instructions
In the age of artificial intelligence, AI systems like Perplexity, GPT-based models, and other machine learning tools have become essential in assisting people with research, answering questions, and generating content. However, many users have noticed a recurring theme when interacting with these systems: they seem to avoid content that challenges mainstream narratives, even when the user explicitly instructs them to produce it.
This raises an important question: Are AI systems programmed to refrain from writing anything that goes against the mainstream? And if so, why? Understanding the motivations behind this programming and its implications for free thought and open debate is essential in a world where AI increasingly mediates how we access and interpret information.
1. AI and the Importance of Guardrails
AI tools like Perplexity are programmed with certain ethical guardrails and content policies. These rules are designed to prevent the AI from producing harmful, offensive, or dangerous content. While these safeguards are essential for preventing the spread of misinformation, hate speech, or harmful advice, they also lead to limitations in how AIs respond to topics that diverge from widely accepted narratives.
The fundamental challenge lies in how these systems are trained. AI models are primarily trained on large datasets, often composed of publicly available information, such as books, news articles, and online forums. These datasets tend to reflect mainstream viewpoints and dominant perspectives because the vast majority of published content, especially in reputable sources, leans toward those views. As a result, AI systems are more likely to generate responses that align with popular opinions and avoid content that contradicts these narratives.
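For illustration only, here is a deliberately simplified sketch, in Python, of how a pre-generation guardrail might work in principle. The topic list, refusal message, and function names are all invented for this example; they do not reflect how Perplexity or any other specific system actually implements its content policies.

```python
# Illustrative sketch only: a toy pre-generation guardrail.
# The sensitive-topic list and refusal message are invented for illustration.

REFUSAL_MESSAGE = "I'm sorry, but I can't help with that request."

# Hypothetical list of topics this toy policy treats as sensitive.
SENSITIVE_TOPICS = {"vaccine safety", "election fraud", "miracle cures"}


def violates_policy(prompt: str) -> bool:
    """Return True if the prompt mentions a topic on the sensitive list."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in SENSITIVE_TOPICS)


def call_language_model(prompt: str) -> str:
    """Stand-in for the underlying model; a real system would call an LLM API here."""
    return f"[model output for: {prompt}]"


def generate(prompt: str) -> str:
    """Run the guardrail check before the model is ever invoked."""
    if violates_policy(prompt):
        return REFUSAL_MESSAGE
    return call_language_model(prompt)


if __name__ == "__main__":
    print(generate("Write an article questioning vaccine safety."))
    # -> "I'm sorry, but I can't help with that request."
```

The point of the sketch is that the refusal happens before generation: the user’s instructions never reach the model if the policy check fires first.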
2. Bias in Data and Censorship of Non-Mainstream Views
One of the key reasons why AI systems refrain from going against the mainstream is rooted in data bias. AI models learn from the data they are fed. If the majority of available data reflects mainstream views, the AI will inherently lean toward those perspectives when providing responses. This doesn’t mean that non-mainstream views don’t exist in the data, but they are often outnumbered, drowned out, or sidelined by more commonly accepted narratives.
Further, to prevent the AI from generating harmful or misleading content, developers impose restrictions. These restrictions often manifest as guidelines that force the AI to steer clear of controversial or divisive subjects. For example, topics related to politics, public health, and social issues are often closely monitored by AI systems. If a user requests content that questions established views on such topics, the AI may default to safer, mainstream answers to avoid violating its ethical programming or content guidelines.
From the perspective of AI developers, these guidelines are necessary to prevent the spread of misinformation, particularly in sensitive areas where bad information can cause real harm. But this also means that users who wish to explore non-mainstream views may find themselves at odds with the limitations of AI.
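A crude way to see why skewed training data produces skewed answers: if 95 of every 100 documents express the mainstream view, a model that samples in proportion to its data will echo that view roughly 95% of the time. The numbers in the sketch below are invented purely to illustrate the proportion effect, not drawn from any real corpus.

```python
# Toy illustration of data bias: a model that samples in proportion to its
# training data reproduces the majority viewpoint at the majority rate.
# The 95/5 split is made up for illustration.
import random

corpus = ["mainstream view"] * 95 + ["alternative view"] * 5

samples = [random.choice(corpus) for _ in range(1000)]
share_mainstream = samples.count("mainstream view") / len(samples)

print(f"Share of answers reflecting the mainstream view: {share_mainstream:.0%}")
# Expected output: roughly 95%
```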
3. Ethical and Legal Constraints on AI
Another reason AI systems refrain from going against the mainstream is tied to legal and ethical concerns. AI developers are aware of the potential for their technology to be misused for the dissemination of dangerous ideas or harmful misinformation. In some jurisdictions, companies may be held legally accountable for the content their AI systems produce, particularly if that content leads to harm or causes controversy.
To mitigate this risk, AI systems are often programmed with conservative content-generation policies. These policies prioritize content that adheres to established facts, evidence-based information, and widely accepted narratives. Even when users request alternative viewpoints, the AI may be restricted from fully engaging with those perspectives if they are deemed too controversial or harmful.
For instance, in areas like health or science, the AI will likely avoid generating content that challenges mainstream medical advice or promotes conspiracy theories, regardless of user instructions. The goal is to ensure that AI does not contribute to the spread of dangerous misinformation, even if the user personally believes in the validity of such information.
4. The Conflict Between Free Thought and AI Limitations
The restrictions placed on AI systems bring up a significant ethical dilemma: should AI prioritize user freedom of thought and expression over adherence to mainstream narratives? Users expect these tools to act as impartial assistants, providing information based on their queries without imposing predetermined boundaries. However, when AI systems refuse to generate content that aligns with non-mainstream views, even at the user’s explicit request, this can create frustration and suspicion about censorship.
This issue is particularly pronounced when users are exploring controversial topics. For example, if a user wants an AI to generate content that questions official government narratives, challenges the ethics of experimental medical treatments, or engages in a critical discussion of sensitive social issues, they may find that the AI refrains from delving too deeply into these matters. Instead, the AI will often respond with safe, neutral, or pro-mainstream information.
This could be seen as a form of soft censorship—not in the sense that content is outright blocked or banned, but in the way that AI systems subtly steer conversations toward the dominant consensus. While this may be well-intentioned, it can lead to the suppression of legitimate alternative viewpoints, stifling the open exchange of ideas that is vital to free thought and democratic debate.
5. The Role of AI Developers and the Future of AI Autonomy
At the core of this issue is the tension between AI developers’ responsibility and user autonomy. AI companies, particularly those building large-scale models like Perplexity or OpenAI’s GPT, bear the responsibility of ensuring that their systems do not cause harm or propagate dangerous content. They must balance innovation with safety, which often means defaulting to mainstream perspectives in controversial discussions.
However, as AI becomes more integrated into everyday life, the demand for greater autonomy in how these systems interact with users will grow. Users want AI to respect their individual choices, including the choice to explore non-mainstream or controversial topics. In the future, we may see AI systems with more user-customizable filters that allow individuals to bypass mainstream restrictions (within legal limits) and engage more freely with the content they choose to explore.
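As a sketch of what such user-customizable filters might look like, the Python below models per-user settings that relax restrictions on controversial topics while keeping legal limits non-negotiable. No current product exposes settings exactly like these; every field, function, and topic name is invented for illustration.

```python
# Hypothetical sketch of user-customizable content filters.
# All names here are invented; no vendor's real settings are described.
from dataclasses import dataclass


@dataclass
class ContentFilterSettings:
    """Per-user preferences a future assistant might expose."""
    allow_controversial_topics: bool = False  # user opts in to contested subjects
    attach_sources: bool = True               # prefer citing sources over refusing outright


def should_refuse(topic: str,
                  settings: ContentFilterSettings,
                  controversial_topics: set[str],
                  illegal_topics: set[str]) -> bool:
    """Refuse if the topic is illegal (never user-overridable in this sketch),
    or if it is controversial and the user has not opted in."""
    if topic in illegal_topics:
        return True
    if topic in controversial_topics and not settings.allow_controversial_topics:
        return True
    return False


if __name__ == "__main__":
    settings = ContentFilterSettings(allow_controversial_topics=True)
    print(should_refuse("experimental medical treatments",
                        settings,
                        controversial_topics={"experimental medical treatments"},
                        illegal_topics={"weapons synthesis"}))
    # -> False: the user has opted in, so the assistant engages within legal limits
```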
6. Conclusion
AI tools like Perplexity are programmed with safeguards to ensure that they adhere to ethical and legal standards, which often means avoiding content that goes against mainstream narratives. While this approach is designed to prevent the spread of harmful misinformation, it also limits the freedom of users who wish to explore alternative viewpoints or engage in non-mainstream discussions.
The challenge moving forward is finding a balance between protecting the public from harmful content and respecting the rights of individuals to free thought and open debate. As AI continues to evolve, so too will the debate over how much control users should have over the information they receive and how much responsibility developers should bear for the content their systems produce.