Last week, I outlined how AI is a job maker rather than a job killer for government work. Government has to worry about trust, legality, privacy, fairness, procurement, public accountability, and what happens when a system goes wrong at scale.
NSW now has an AI Assessment Framework for agencies and guidance on how government should approach AI agents. In plain English, that means agencies are being asked to think about risk, ownership, privacy, human oversight, and safe rollout before they scale these tools. That means they’ve been given the green light to use AI, as long as they do a whole bunch of bureaucracy around it.
There are now a number of examples available of how AI is being used in NSW government agencies. I’ve captured three of them below.
The first two are exactly what I would expect: very low risk, very cautious, and very controlled.
Still, it is good to see that at least agencies are starting to use AI, because the hardest part is actually just getting these things implemented, given how risk-averse most government decision-makers are.
You can see the impact of that bias towards risk aversion in the first two examples below, but that doesn’t take away from the hard work that I’m sure many advocates of these improvements have put in just to get anything off the ground.
It’s real kudos to them for persevering and getting at least this level of AI support in there.
The road ahead may be slower than we anticipated, but this is a crucial time for AI. Even limited use is a significant step forward, and we have the opportunity to build on this foundation. Embracing these early advancements can lead to broader applications down the road.
Example 1: When AI Becomes A Slightly Better Search Bar
One of the simplest ways AI can help government is by shortening the path between a person’s question and a useful answer.
Anyone who has tried to find information on a government website knows the problem. The answer might be there, but it can be buried across several pages, written in formal language, or hard to locate on a mobile phone.
NSW Government Digital Channels has sought to address this with AI search summaries on nsw.gov.au.
If you’ve used Google lately, you know what this is. The feature works by placing a short AI-generated summary above the normal search results. So instead of just getting a list of links, users get a short summary and can then click through for more detail.
But this isn’t quite the Google experience.
Firstly, because it’s government, every search starts with a disclaimer that is longer than the AI summary:
Quick answer is an AI-enabled website search function that draws only on publicly available information on nsw.gov.au. The AI generated summary is provided as a general overview to improve access to information on nsw.gov.au and should not be relied on as a substitute for professional advice. While reasonable efforts have been made to provide a convenient and accurate service, generative AI is an emerging technology and may occasionally produce errors or inaccurate outputs. Users should always check the original source referenced in the AI generated summary. Find out more.
I did a few searches using this tool, and the AI summaries were basically rephrased versions of the first and second search results.
The AI summary is also limited to published pages on the website.
This is a key example of how government agencies approach and adopt AI: closed to approved information that is already publicly accessible, and still with a huge disclaimer that it may be wrong.

Example 2: What Happens When Government Builds Its Own ChatGPT
The NSW Department of Education’s NSWEduChat is another good example of how government thinks about using AI.
We’ve all been exposed to tools like ChatGPT. They can help draft, summarise, brainstorm, explain, and restructure information.
Sounds like an ideal solution to the challenge teachers face in producing and adapting lesson plans and learning materials into new, engaging content aligned with a curriculum.
Wouldn’t that be nice?
But of course, the government needs to consider data privacy and security. Oh, and age-appropriate use. And what if it hallucinates? We need to ensure the highest quality standards too!
Here is the full list of risks in NSW Department of Education’s “Guidelines regarding the use of generative AI”:
- Data privacy breaches: AI tools may store or share personal information. Remove personal details before use.
- Bias and misinformation: AI outputs may be inaccurate, biased, outdated, or not suitable for Australian students.
- Inappropriate content: AI may produce harmful or explicit content, especially without supervision.
- Lack of accountability and transparency: AI systems may not explain how they work, and users remain responsible for how content is used.
(ChatGPT helped me make this list a little shorter!)
NSWEduChat is a generative artificial intelligence tool owned and designed by the NSW Department of Education. It is a highly controlled and limited environment. NSWEduChat includes system prompts, jailbreak prevention, profanity filtering, semantic content filtering, and user input is not used to train the tool.
This is like an uber-parental filter.
Does that impact how useful it can be? Here are two reviews I found on Reddit:
It is similar to what the free version of ChatGPT was about 2 years ago.
Not great. Ask it to help the kids research the holocaust. Oh wait, it’s a banned topic. But that’s on the nsw history curriculum… hmm. Also I think they stopped resourcing it now that it’s built.
Again, this shows a very large bias towards risk aversion, limiting the usefulness of technology. A public sector tradition.
This Is What Useful Government AI Looks Like
The two examples above don’t really paint a great picture of how government is adopting AI.
But this example in transport shows that when there is a really clear problem to solve, and there’s effort and support behind it, AI can make a big difference on defined problems.
Asset AI is a project to deliver an AI-driven solution to the expensive task of road inspections. Road inspections are expensive, manual, and too infrequent. Councils and transport agencies need to know where potholes, damaged signs, faded line markings, and other road defects are emerging, but traditional inspections can be time-consuming and resource-intensive.
Asset AI uses cameras and sensors mounted on Transport for NSW and council vehicles to collect information about roads as those vehicles travel through the network. The data is then analysed by AI to identify defects more quickly.
In practical terms, this means a vehicle that is already out doing its normal work can also help build a live picture of road conditions.
Asset AI is a machine-learning program that can highlight and eventually help predict safety issues such as damaged signage, faded line markings, potholes, and rutting.
The program’s biggest advantage appears to be the far more rapid collection of useful data, and secondarily, the ability to transform a much higher and deeper level of data into a more predictive system.
Using this approach, councils report they can effectively cover their entire network every two weeks. That is a big change from traditional road condition scans, which may occur every one to five years.
Asset AI Data Process. Source: Roads & Transport Directorate
This is a far more thought-through and detailed implementation of AI. It is on another level and makes a real difference in government work. This is effective, but it isn’t a job killer. It is significantly improving services, reducing the need for manual defect reporting, and detecting errors and issues that need fixing sooner.

The Real Divide In Government AI Adoption
In reviewing these examples, it really tells me that there’s a two-speed approach to adopting AI. Although all government agencies are now subject to the same framework, which gives them clear permission and guidance to safely adopt and use AI, the approaches are wildly different across these examples.
The core challenge is having people who understand both the opportunity that AI represents and the existing processes within government agencies that can benefit.
When you don’t have both of these elements together, you end up with something like the first two examples.
The simplistic assisted search results and NSWEduChat both feel like examples where an agency started with, “What could we do with AI?” rather than, “What specific problem are we trying to solve?”
The result is AI being retrofitted into existing processes or service delivery, rather than being designed around a clear need.
The Problem With Safe But Limited Government AI Tools
NSWEduChat is almost the worst of both worlds of large language models.
The strength of a tool like ChatGPT is that it is broad, flexible, and useful in many different ways. One person can say, “Teach me about modern history for a classroom of five-year-olds,” while another can upload the syllabus, teaching resources and a detailed prompt, then ask it to build a lesson plan.
They’ll get very different outputs, but the tool can work for both of them.
NSWEduChat seems to have been built as a general-use chatbot for students and teachers, but without the full capability that makes a general-use chatbot powerful.
It is available to everyone, and everyone uses it differently, but it is also heavily controlled, heavily safeguarded, and limited in what it can do.
So it loses the main benefit of a general-purpose AI tool, which is flexibility, without replacing it with the main benefit of a specific AI tool, which is being really good at one clear task.
That’s the problem. It is too limited to be a powerful general-purpose tool, but too broad to be a really effective purpose-built education tool. It sits awkwardly in the middle.
The missed opportunity was for NSW Education to build something more targeted: a tool that helps teachers create locally tailored or student-tailored lesson plans, using its own syllabus, curriculum material and teaching resources as the baseline.
Instead, I’m almost certain this tool stemmed from a simple concern: teachers and students were already using ChatGPT, so the government needed to provide a safer, more controlled version.
The Asset AI example, in contrast, is clearly solving a real problem. But it is expensive. They’ve got cameras, sensors, their own systems, existing databases, and detailed manuals on how to use the tool and data. There’s a lot of work that has gone into doing this in a very specific way, giving the AI part of the data process a clear, specific, and repeatable role.
A generic AI summary or internal chatbot can sound useful in theory, but the question is: useful for what? If it isn’t built around a clear use case, it ends up being a vague tool with vague benefits.
The Real Opportunity For People Who Understand AI And Government
The examples above highlight exactly how technology is generally introduced into government. Small, low risk, controlled.
The search results is so simplistic, it doesn’t seem to add a lot of value. The solution of an in-house chatbot was more about getting them into a more controlled environment for using a large language model, rather than identifying an existing time-consuming process or repeatable piece of work that a large language model could reliably assist with.
It’s kind of backwards.
But that being said, there is still something positive here: government is starting to accept that AI has a role.
I know a lot of people on government teams aren’t really using AI yet. Some of them would never touch it unless their department or agency said, “Here is a version you can use safely, and you will not break anything.” So even if these tools are limited, they still give people a safe first step. They let staff see what AI can do, test it in a controlled setting, and build confidence.
Hopefully, there is training on it too: how to use AI well, how to prompt it properly, how to provide useful context, and how to understand its limitations.
People will not go from never using AI to confidently redesigning government services overnight. But exposure matters. They will use it a little, then a little more, then a little more.
That is why this is going to be a slow road if the government approaches AI the way it approaches most new things: cautiously, carefully, with more controls, more approvals, and more reasons to wait.
The People Government Needs For The Next Stage Of AI
My message from last week still stands. AI systems are being introduced into government. Slowly, unevenly, and often imperfectly, but they are being introduced.
That means one of the most valuable skills now is the ability to navigate this change and speak to both sides.
You need to be able to explain to teachers who are afraid of AI how it can be used safely and well.
You need to reassure IT decision-makers who are worried about staff using AI, and show them there is a mature way to manage the risk.
You need to convince deeply risk-averse people that better uses of AI are possible.
And you need to explain the trade-offs. Because every extra control, restriction and risk management process has an impact. It might make the tool safer, but it can also make it less useful. Knowing how to have that conversation is going to matter more and more.
That is the skill government needs now: people who understand the opportunity of AI, but can also work within the reality of government.
If you have those skills and want to get into government, we can help you find and apply for the right role.
And if you already have a role in mind, we can help you put your best foot forward, whether it is your CV, cover letter, pitch, selection criteria response, or whatever else they make you jump through.
We’ve got it covered for you.
Click here to see how we can help.
If this was useful, share it with a colleague, and send me the examples your team is seeing on the ground.




