News | National
19 Aug 2025 15:35
NZCity News

    AI free from bias and ideology is a fantasy – humans can’t organise data without distorting reality

    AI models are neither politically neutral nor free from bias. In fact, it may not even be possible for them to be unbiased.

    Declan Humphreys, Lecturer Cybersecurity and Ethics, University of the Sunshine Coast
    The Conversation


    In July, the United States government made it clear that artificial intelligence (AI) companies wanting to do business with the White House will need to ensure their AI systems are “objective and free from top-down ideological bias”.

    In an executive order on “preventing woke AI in the federal government”, President Donald Trump refers to diversity, equity and inclusion (DEI) initiatives as an example of a biased ideology.

    The apparent contradiction of calling for unbiased AI while also dictating how AI models should discuss DEI shows the notion of ideologically free AI is a fantasy.

    Multiple studies have shown most language models skew their responses towards left-of-centre viewpoints, such as support for imposing taxes on flights, restricting rent increases, and legalising abortion.

    Chinese chatbots such as DeepSeek, Qwen and others censor information on the events of Tiananmen Square, the political status of Taiwan, and the persecution of Uyghurs, aligning with the official position of the Chinese government.

    AI models are neither politically neutral nor free from bias. More importantly, it may not even be possible for them to be unbiased. Throughout history, attempts to organise information have shown that one person’s objective truth is another’s ideological bias.

    Lying with maps

    Humans struggle to organise information about the world without distorting reality.

    Take cartography, for example. We might expect maps to be objective – after all, they reflect the natural world. But flattening a globe onto a two-dimensional map means having to distort it somehow. American geographer Mark Monmonier has famously argued maps necessarily lie, distort reality, and can be tools for political propaganda.

    Think of the classic world map that uses the Mercator projection, hung in every primary school classroom. It projects the globe onto a cylinder, which is then unrolled flat. I grew up thinking Greenland must be massive compared to the rest of the world.

    In fact, Africa is 14 times larger than Greenland, despite appearing to be roughly the same size on this type of map.
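
    The scale of that distortion is easy to check with a back-of-the-envelope sketch in Python. This is an illustration, not part of the original article: it assumes the standard Mercator result that local area scale grows as 1/cos^2(latitude), and the latitudes used are rough mid-latitudes chosen purely for the example.

    import math

    def mercator_area_inflation(lat_deg: float) -> float:
        """Approximate factor by which the Mercator projection inflates areas at a given latitude."""
        return 1.0 / math.cos(math.radians(lat_deg)) ** 2

    # Rough mid-latitudes, chosen for illustration only.
    greenland = mercator_area_inflation(72)   # Greenland spans roughly 60-83 degrees north
    africa = mercator_area_inflation(0)       # Africa straddles the equator

    print(f"Greenland drawn about {greenland:.1f}x too large in area")    # ~10.5x
    print(f"Equatorial Africa drawn about {africa:.1f}x (no inflation)")  # 1.0x

    A roughly tenfold inflation near Greenland’s latitudes, against almost none at the equator, is enough to make two landmasses of very different true sizes look comparable on the page.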

    In the 1970s, German historian Arno Peters argued Mercator’s distortions contributed to a skewed perception of the global south as inferior.

    Such distortions could be an analogy for the current state of AI. As Monmonier wrote in his book How To Lie With Maps:

    a single map is but one of an indefinitely large number of maps that might be produced for the same situation or from the same data.

    Similarly, a single large language model’s response is one of an indefinitely large number of responses which might be produced for the same situation or from the same data.

    Think of the many ways a chatbot could formulate a response when prompted about something like diversity, equity and inclusion.
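
    A toy sketch in Python (with invented numbers, not any real model’s probabilities) shows the mechanism: a language model assigns probabilities to many possible next words, and sampling from them yields a different completion on each run, all from the same underlying data.

    import random

    # Hypothetical next-word probabilities a model might assign after the prompt
    # "Diversity, equity and inclusion programs are ..."; the values are invented.
    next_word_probs = {"beneficial": 0.4, "controversial": 0.35, "counterproductive": 0.25}

    for _ in range(5):
        word = random.choices(list(next_word_probs), weights=list(next_word_probs.values()))[0]
        print(f"Diversity, equity and inclusion programs are {word} ...")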

    A built-in classification bias

    Other historic attempts at organising information have shown the bias of their designers and users too.

    The widely used Dewey decimal classification (DDC) system for libraries, published in 1876, has been known to be racist and homophobic.

    Over the course of the 20th century, LGBTQIA+ books were categorised under Mental Derangements, Neurological Disorders, or Social Problems in the DDC; more recent efforts have been made to remove outdated and derogatory terms from the classification.

    Under Religion, roughly 65 of the 100 sections are dedicated to Christianity, reflecting the strong Christian focus of the library where the classification was originally developed. Yet while Islam today has an estimated 2 billion followers to Christianity’s 2.3 billion, the DDC dedicates only a single section to it.

    AI learns from humans, after all

    The large language models that power AI chatbots are trained on countless pieces of text, from historical works of literature to online discussion forums. Biases from these texts can creep into the model unnoticed, such as negative stereotypes of African Americans from the 1930s.

    Just having raw information is not enough. Language models must also be trained to retrieve and present this information in their answers.

    One way to do this is to have them learn to copy how humans respond to questions. This process does make them more useful, but studies have found it also makes them align with the beliefs of those who are training them.
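
    A minimal sketch of that imitation step, assuming the Hugging Face transformers and torch packages and the public "gpt2" checkpoint (not any chatbot vendor’s actual pipeline), shows how the trainers’ wording becomes the training target: the model is penalised until it reproduces whatever the human annotator wrote, viewpoint included.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    # Hypothetical training pair written by a human annotator; the answer's wording
    # (and worldview) is exactly what the model is rewarded for copying.
    prompt = "Question: Should rent increases be restricted?\nAnswer:"
    human_answer = " Yes, rent controls protect tenants from sudden cost increases."

    ids = tokenizer(prompt + human_answer, return_tensors="pt").input_ids
    outputs = model(ids, labels=ids)   # cross-entropy against the human-written text
    outputs.loss.backward()            # gradients pull the model towards that answer
    optimizer.step()
    print(f"imitation loss: {outputs.loss.item():.3f}")

    In real pipelines the loss is usually computed only over the answer tokens and averaged across many such pairs, but the effect is the same: whoever writes the answers shapes the model.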

    AI chatbots also use system prompts: instructions that tell them how to act. These system prompts are, of course, defined by human developers.
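
    As an illustration of where such an instruction lives, the sketch below uses the OpenAI Python client; the model name and prompt wording are placeholders chosen for this example, not any vendor’s real defaults.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # The system message is written by the developer and never shown to the
            # end user, yet it shapes every answer the chatbot gives.
            {"role": "system", "content": "You are a helpful assistant. "
                                          "Treat claims sourced from the media as potentially biased."},
            {"role": "user", "content": "Summarise the debate around diversity, equity and inclusion."},
        ],
    )
    print(response.choices[0].message.content)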

    For example, the system prompts for Grok, the AI chatbot developed by Elon Musk’s company xAI, reportedly instruct it to “assume subjective viewpoints sourced from the media are biased” and to not “shy away from making claims that are politically incorrect, as long as they are well substantiated”.

    Musk launched Grok to counter what he perceived as the “liberal bias” of products such as ChatGPT. However, the recent fallout when Grok began spouting antisemitic rhetoric clearly illustrates how attempts to correct for one bias can simply replace it with another.

    All this goes to show that for all their innovation and wizardry, AI language models suffer from a centuries-old problem. Organising and presenting information is not only an attempt to reflect reality, it is a projection of a worldview.

    For users, understanding whose worldview these models represent is just as important as knowing who draws the lines on a map.


    Declan Humphreys serves on the Executive Committee of the Australian Chapter of the IEEE's Society for the Social Implications of Technology.

    This article is republished from The Conversation under a Creative Commons license.
    © 2025 The Conversation, NZCity
