Government has never been the fastest adopter of new technology. It’s mostly because government has to care about things that private companies don’t have to care about in the same way: legality, accountability, fairness, privacy, procurement, public trust, and what happens when something goes wrong at scale.
Ministers heavily scrutinise “rogue” endeavours by agencies, and the public doesn’t forgive Ministers who take high-risk chances with taxpayers’ funds.
But AI is moving so quickly that even governments, with all of that natural caution, are being pulled into it.
Across the Australian Public Service and State Government, agencies are now being required to put in place accountable officials, internal registers of AI use cases, and public AI transparency statements. In other words, this is no longer just a fringe innovation conversation. It is becoming part of how government is expected to operate.
And that is why I think there is about to be an AI jobs boom in government.
Not an AI agent boom, but a boom for humans who can make sure AI actually works.
Government will not use AI the way big tech does
A lot of the conversation about AI and jobs is being shaped by big technology companies. In that world, the story is often about replacing people, automating tasks away, and reducing headcount.
Government is different. Government heads can’t afford to treat AI as a cost-cutting tool, because government services only work if people trust them. A faster process that makes the wrong decision, gives inconsistent advice, or removes a person’s ability to question the outcome is not a better service. It is just a faster way to lose trust or votes or both.
That lesson remains ever-present in the minds of government decision-makers after Robodebt.
Robodebt was not an AI system; it was automation. It did not have a smart model making dynamic judgments (which might have been better if it had), but it was absolutely a warning about automation in government. It showed what happens when a system is allowed to produce harmful outcomes at scale without enough legal, human, or ethical scrutiny. The Royal Commission later recommended a clearer legal framework for automation in government, plain-language public explanations of automated processes, review pathways for people affected, and independent monitoring or auditing of automated decision-making.
That matters in this moment of AI because it means the future of AI in government will not be “set and forget”.
The introduction of AI into government work at any scale will need a lot of people to think about it. It will need advocates: people who are AI-friendly and can see a really practical path forward.
There will need to be people with an understanding of government risk appetite who can influence decision-makers, so AI can be safely introduced without being cut off at the knees. There’ll need to be a lot of developers, testers, engineers, and cybersecurity architects to protect personal information and prevent our national security plans from being uploaded to a chatbot.
Other roles will include translators, people who can understand the requirements that AI systems need to work properly, and regulators who understand what a framework for AI adoption looks like. How will the next AI decision be managed, tested, explained, and implemented? All of this means a lot of high-skilled jobs.
Government will not use AI the way a small business does either
A small business can sign up today for a free trial and get started using the latest AI tool immediately. You can test tools, see what works, and turn things on or off within a day.
That doesn’t work in government. Firstly, software must meet strict security, privacy, and procurement requirements. As a result, most government employees can’t access the full versions of popular tools in their browser and are limited to enterprise-approved platforms. These are tightly controlled environments, which often reduce functionality, especially when tools can’t connect to broader data or external systems.
You can see this with the rollout of Copilot across agencies using Microsoft 365. Governments have invested heavily in secure, contained versions of the tool with strict controls. The downside is that many employees’ first experience with AI is through a limited version, which can lead them to conclude that AI doesn’t really work.
And there’s a reason for that. The core services people rely on every day need to keep working without failure. Access to Medicare and digital driver’s licences, and the work of frontline staff like teachers, nurses, doctors and firefighters, all depend on reliable, real-time systems. These aren’t optional. They can’t break.
In government, you can’t just sign up a couple of users and start changing how work gets done. Even small changes require testing, configuration, research, consultation, and multiple rounds of validation, along with mapping current processes to future ones. It’s a slower, more deliberate process, with far more moving parts.
Even when the licence cost is low, the staff time required to implement and align on its use is significant. And the benefits are not always consistent at scale. One person might clearly see how a tool improves their work, while someone in the same role, or a neighbouring team, may not see the value at all or may have entirely different priorities.
All of this leads to a clear outcome.
There will be strong demand for people who can define requirements, build business cases, and design AI solutions that actually work within government constraints.
In the age of AI, technical skills will not be the focus.
A core skill in government has always been the ability to talk between worlds. That means being able to sit between the technical team and the service delivery team. Or between policy and implementation. Or between senior executives and frontline operations.
This is often where your best business analysts, service designers, delivery leads and change managers sit. They can speak both languages and genuinely understand the constraints on both sides. They keep projects moving and stakeholders aligned.
That same skill is about to become even more valuable, because there is now a new gap that needs a new translator: the gap between operational teams and AI agents.
Policy teams will need to understand what an AI system is actually doing, where it helps, where it should not be trusted, and what good output looks like.
Service delivery teams will need to know how AI is supporting their work without undermining service quality or confusing the citizen experience.
And leaders will need people around them who can translate all of that into something practical.
That is a very human job. In fact, I think it becomes one of the defining government skills of the next decade.
AI in government will create work before it removes it
The prevailing narrative frames AI as a destroyer of jobs. In government, I suspect the near-term effect will be the reverse.
As agencies adopt more AI, they will need more people who can:
- define requirements properly
- test outputs and identify failure points
- manage change across teams
- write governance and operating procedures
- ensure human review happens where it should
- explain systems in plain language
- handle complaints, review rights and edge cases
- train staff to use AI tools well
- make sure service quality does not drop
The more government adopts AI, the more this human layer becomes essential.
And there is already evidence that the public sector is moving in this direction. The Australian Taxation Office told the ANAO it had 43 AI models in production, including some that involved fully automated actions. At the same time, the ANAO found major gaps in ethics assessment coverage for those systems. AI is happening, but the governance maturity has to catch up. (Australian National Audit Office)
In New South Wales, the Ombudsman reported 275 automated decision-making systems in use or planned across state and local government. (NSW Ombudsman)
That creates a lot of new work.
This is a huge opportunity to do government services better
AI is not a threat to the public sector. It will bring opportunities for government services to be delivered better, more consistently, and with less friction (for example, less time waiting on the phone to talk to Services Australia), in a way that reflects how society expects to interact with government.
There will be a huge shift in how government processes work in the years ahead. This shift is necessary, but not because AI is necessary. It is because demand on government is ever-growing, and the cost of delivery is largely tied to meeting that demand through human hours. Australia’s population is growing, our population is ageing, and people expect better service experiences than they did ten years ago.
AI, used properly, can help government meet that pressure through efficiencies rather than cut-backs.
What would this change look like in practice? An NDIS example
Look at the recent changes proposed to the NDIS, where the focus has been on reducing the cost of the system by reducing the supports available to people with disability. At the same time, more time and money is being spent introducing new assessment processes and reassessing people with clearly permanent disabilities.
AI offers a different path. Instead of cutting the money that goes directly to people with disabilities, the government could use AI to run ongoing, real-time analysis across client plans and spending to identify where there is genuine potential for savings. That might highlight patterns such as clients consistently using less than their allocated supports, or situations where costs are significantly higher than for others with similar needs, especially where this is driven by provider pricing rather than client requirements.
Used properly, this shifts the focus. Instead of reducing funding for individuals, the system can target inefficiencies in how services are delivered and priced. The goal becomes lowering the cost of delivery, not lowering the level of support.
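To make this concrete, the kind of pattern-spotting described above can be sketched in a few lines. This is a toy illustration only: the plan records, field names, and thresholds are all made up for the example, and bear no relation to any actual NDIA data or methodology.

```python
from statistics import mean, stdev

# Hypothetical plan records: allocated budget vs actual spend, plus a
# provider's unit price for a comparable support category.
plans = [
    {"client": "A", "allocated": 50000, "spent": 21000, "unit_price": 95},
    {"client": "B", "allocated": 48000, "spent": 46500, "unit_price": 98},
    {"client": "C", "allocated": 52000, "spent": 50000, "unit_price": 160},
    {"client": "D", "allocated": 47000, "spent": 18500, "unit_price": 101},
]

def flag_underutilisation(plans, threshold=0.5):
    """Clients consistently using well under their allocated supports.

    A low spend ratio suggests the plan could be right-sized without
    reducing the supports the client actually uses.
    """
    return [p["client"] for p in plans
            if p["spent"] / p["allocated"] < threshold]

def flag_price_outliers(plans, z=1.2):
    """Provider prices significantly above peers for similar supports.

    Flags costs driven by provider pricing rather than client need,
    using a simple z-score against the peer group.
    """
    prices = [p["unit_price"] for p in plans]
    mu, sigma = mean(prices), stdev(prices)
    return [p["client"] for p in plans
            if (p["unit_price"] - mu) / sigma > z]

print(flag_underutilisation(plans))  # clients spending under half their budget
print(flag_price_outliers(plans))    # clients whose providers price well above peers
```

In practice this would run continuously over live plan and claims data, with human review of every flag before any change to a plan, but the underlying logic is this simple: target the cost of delivery, not the level of support.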
That is why the jobs boom is not just for AI engineers.
It is for the public servants, project staff, analysts, policy people and operational leaders who can help government adopt AI to improve services, not cut them.
If you are worried about AI and jobs, government may be one of the best places to be
AI is rapidly changing how work is done, and the change is happening fast.
That change is creating new opportunities. The immediate need in government is not necessarily a developer or engineer to build a tool. The need now is for the steps leading up to that.
The variety of ways in which AI can be used and adopted to improve government delivery is almost endless. Work needs to be done now to determine which path forward is best suited to the complexity, breadth and depth of government work. To ensure AI integrations fit the law, fit the process, fit community expectations, and actually help deliver something better.
While AI is removing roles in the private sector, the AI jobs boom is starting in government.
If you want to get into government now, we can help. Our Dream Application Package is built to help you map out exactly where your skills sit and how to position yourself for your next government role.