News | Business
17 Jan 2025 19:33

    ‘AI agents’ promise to arrange your finances, do your taxes, book your holidays – and put us all at risk

    AI systems that can autonomously make decisions on our behalf will be a huge time saver – but we must deploy them with care.

    Uri Gal, Professor in Business Information Systems, University of Sydney
    The Conversation


    Over the past two years, generative artificial intelligence (AI) has captivated public attention. This year signals the beginning of a new phase: the rise of AI agents.

    AI agents are autonomous systems that can make decisions and take actions on our behalf without direct human input. The vision is that these agents will redefine work and daily life by handling complex tasks for us. They could negotiate contracts, manage our finances, or book our travel.

    Salesforce chief executive Marc Benioff has said he aims to deploy a billion AI agents within a year. Meanwhile, Meta chief Mark Zuckerberg predicts AI agents will soon outnumber the global human population.

    As companies race to deploy AI agents, questions about their societal impact, ethical boundaries and long-term consequences grow more urgent. We stand on the edge of a technological frontier with the power to redefine the fabric of our lives.

    How will these systems transform our work and our decision-making? And what safeguards do we need to ensure they serve humanity’s best interests?

    AI agents take control away

    Current generative AI systems react to user input, such as prompts. By contrast, AI agents act autonomously within broad parameters. They operate with unprecedented levels of freedom – they can negotiate, make judgement calls, and orchestrate complex interactions with other systems. This goes far beyond simple command–response exchanges like those you might have with ChatGPT.
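
    To make that contrast concrete, the toy Python sketch below shows the kind of loop an agent runs in place of a single prompt-and-reply exchange: plan an action, carry it out, observe the result, and plan again until the goal is judged met. Every name and step in it is an illustrative assumption, not any vendor's actual system.

        # A minimal, self-contained sketch of the "agent loop" idea:
        # instead of answering one prompt, the agent repeatedly plans an
        # action, executes it, observes the result, and plans again.
        # Everything here is an illustrative toy, not a real product's API.

        from dataclasses import dataclass

        @dataclass
        class Action:
            name: str       # e.g. "request_quote", "finish"
            argument: str   # free-text payload for the action

        def plan_next_step(goal: str, history: list) -> Action:
            """Stand-in for a call to a language model that chooses the next action."""
            if not history:
                return Action("request_quote", goal)
            return Action("finish", f"best quote found after {len(history)} step(s)")

        def call_tool(action: Action) -> str:
            """Stand-in for an external system the agent acts on (insurer, bank...)."""
            return f"insurer replied to '{action.argument}' with a quote"

        def run_agent(goal: str, max_steps: int = 5) -> str:
            """Pursue a goal autonomously: plan, act, observe, repeat."""
            history = []
            for _ in range(max_steps):
                action = plan_next_step(goal, history)
                if action.name == "finish":
                    return action.argument          # the agent decides it is done
                result = call_tool(action)          # a side effect in the real world
                history.append((action, result))    # feed the outcome back in
            return "stopped: step limit reached"

        print(run_agent("find life insurance for a family of four"))

    The autonomy, and the risk, sits in the middle of that loop: the "act" step touches real systems without a human confirming each call.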

    For instance, imagine using a personal “AI financial advisor” agent to buy life insurance. The agent would analyse your financial situation, health data and family needs while simultaneously negotiating with multiple insurance companies’ AI agents.

    It would also need to coordinate with several other AI systems, such as your medical records provider's AI for health information and your bank's AI systems for making payments.

    The use of such an agent promises to reduce manual effort for you, but it also introduces significant risks.

    The AI might be outmanoeuvred by more advanced insurance company AI agents during negotiations, leading to higher premiums. Privacy concerns arise as your sensitive medical and financial information flows between multiple systems.

    The complexity of these interactions can also result in opaque decisions. It might be difficult to trace how various AI agents influence the final insurance policy recommendation. And if errors occur, it could be hard to know which part of the system to hold accountable.

    Perhaps most crucially, this system risks diminishing human agency. When AI interactions grow too complex to comprehend or control, individuals may struggle to intervene in or even fully understand their insurance arrangements.

    A tangle of ethical and practical challenges

    The insurance agent scenario above is not yet fully realised. But sophisticated AI agents are rapidly coming onto the market.

    Salesforce and Microsoft have already incorporated AI agents into some of their corporate products, such as Copilot Actions. Google has been gearing up for the release of personal AI agents since announcing its latest AI model, Gemini 2.0. OpenAI is also expected to release a personal AI agent in 2025.

    The prospect of billions of AI agents operating simultaneously raises profound ethical and practical challenges.

    These agents will be created by competing companies with different technical architectures, ethical frameworks and business incentives. Some will prioritise user privacy, others speed and efficiency.

    They will interact across national borders where regulations governing AI autonomy, data privacy and consumer protection vary dramatically.

    This could create a fragmented landscape where AI agents operate under conflicting rules and standards, potentially leading to systemic risks.

    What happens when AI agents optimised for different objectives – say, profit maximisation versus environmental sustainability – clash in automated negotiations? Or when agents trained on Western ethical frameworks make decisions that affect users in cultural contexts for which they were not designed?

    The emergence of this complex, interconnected ecosystem of AI agents demands new approaches to governance, accountability, and the preservation of human agency in an increasingly automated world.

    How do we shape a future with AI agents in it?

    AI agents promise to be helpful, to save us time. To navigate the challenges outlined above, we will need to coordinate action across multiple fronts.

    International bodies and national governments must develop harmonised regulatory frameworks that address the cross-border nature of AI agent interactions.

    These frameworks should establish clear standards for transparency and accountability, particularly in scenarios where multiple agents interact in ways that affect human interests.

    Technology companies developing AI agents need to prioritise safety and ethical considerations from the earliest stages of development. This means building in robust safeguards that prevent abuse – such as manipulating users or making discriminatory decisions.

    They must ensure agents remain aligned with human values. All decisions and actions made by an AI agent should be logged in an “audit trail” that’s easy to access and follow.
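
    As a purely illustrative sketch of what such an audit trail could look like, the toy Python below appends every decision as a structured record and chains each entry to the previous one, so later tampering would be detectable. The field names and the hashing scheme are assumptions for illustration, not an established standard.

        # Illustrative audit trail for an AI agent: every decision is appended
        # as a structured, human-readable record. Hypothetical format only.

        import json, hashlib
        from datetime import datetime, timezone

        class AuditTrail:
            def __init__(self):
                self.entries = []

            def record(self, agent: str, action: str, reason: str, data_used: list):
                """Append one decision, chained to the previous entry's hash."""
                prev_hash = self.entries[-1]["hash"] if self.entries else ""
                entry = {
                    "time": datetime.now(timezone.utc).isoformat(),
                    "agent": agent,
                    "action": action,
                    "reason": reason,          # the agent's stated justification
                    "data_used": data_used,    # which personal data it touched
                    "prev_hash": prev_hash,    # links entries so edits are detectable
                }
                entry["hash"] = hashlib.sha256(
                    json.dumps(entry, sort_keys=True).encode()
                ).hexdigest()
                self.entries.append(entry)

        trail = AuditTrail()
        trail.record(
            agent="finance-advisor-agent",
            action="requested quote from insurer A",
            reason="lowest advertised premium for the requested cover",
            data_used=["age", "smoking status"],
        )
        print(json.dumps(trail.entries, indent=2))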

    Importantly, companies must develop standardised protocols for agent-to-agent communication. Conflict resolution between AI agents should happen in a way that protects the interests of users.
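
    No such standard exists yet, but the hypothetical message below suggests the kind of fields an agent-to-agent protocol would need to carry: who is talking to whom, what they intend, and the limits the human user has consented to, so the receiving agent (and any later auditor) can check them. The format and field names are invented for illustration.

        # Hypothetical agent-to-agent message format; no such standard exists yet.
        # The point is that each message carries identity, intent and the limits
        # the human user has consented to, so the receiving agent can check them.

        import json

        message = {
            "protocol_version": "0.1",
            "sender": "user-finance-agent@example.org",
            "receiver": "insurer-sales-agent@example.com",
            "intent": "negotiate_life_insurance_quote",
            "constraints": {
                "max_annual_premium_nzd": 1200,
                "data_shareable": ["age", "non_smoker_status"],  # consented fields only
                "data_withheld": ["full_medical_history"],
            },
            "requires_human_approval_above_nzd": 1000,  # escalation threshold
        }

        print(json.dumps(message, indent=2))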

    Any organisation that deploys AI agents should also have comprehensive oversight of them. Humans should still be involved in any crucial decision, with a clear process in place for that involvement. The organisation should also systematically assess the outcomes to ensure agents truly serve their intended purpose.

    As consumers, we all have a crucial role to play, too. Before entrusting tasks to AI agents, you should demand clear explanations of how these systems operate, what data they share, and how decisions are made.

    This includes understanding the limits of agent autonomy. You should have the ability to override agents’ decisions when necessary.

    We shouldn’t surrender human agency as we transition to a world of AI agents. But it’s a powerful technology, and now is the time to actively shape what that world will look like.


    Uri Gal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    This article is republished from The Conversation under a Creative Commons license.
    © 2025 The Conversation, NZCity
