On 23 October 2024, Australia’s Digital Transformation Agency (DTA) released a report on the Australian Government’s trial of generative AI. From 1 January 2024 to 30 June 2024, the DTA coordinated the trial, making Microsoft 365 Copilot (formerly Copilot for Microsoft 365) available to over 7,600 staff across 60+ government agencies. The overarching finding is that ‘there are clear benefits to the adoption of generative AI but also challenges with adoption and concerns that need to be monitored.’ Most trial participants (77%) were satisfied with having an integrated AI tool, and even more (86%) wished to continue using it.
Findings on improvements to efficiency and quality include:
- Participants estimated time savings of up to an hour when summarising information, preparing a first draft of a document and searching for information.
- Most participants (69%) felt there was a marked improvement in the speed of completing tasks, and nearly as many (61%) believed the tool enhanced the quality of their work output.
- Most of the efficiencies were seen in business-as-usual and office management tasks.
Improvements required to support adoption include:
- Agencies must address key integration, data security and information management considerations prior to adopting Copilot, including the scalability and performance of the GPT integration and an understanding of the context of the large language model.
- Training in prompt engineering and use cases tailored to agency needs is required to build capability and confidence in Copilot.
- Clear communication and policies are required to address uncertainty regarding the security of Copilot, accountabilities and expectations of use.
- Adaptive planning is needed to keep pace with Copilot’s rolling feature release cycle, alongside governance structures that reflect agencies’ risk appetite and clear roles and responsibilities across government for providing advice on generative AI use. Given the technology’s infancy, agencies will need to weigh the costs of implementing Copilot in its current version; more broadly, this should be a consideration for other generative AI tools.
Broader concerns on AI that require active monitoring:
- There are concerns about the potential impact of generative AI on APS jobs and skills, particularly on entry-level jobs and on women.
- Large language model (LLM) outputs may be biased towards Western norms and may not appropriately use cultural data and information.
- There are also concerns regarding vendor lock-in and competition, as well as the impact of generative AI use on the APS’ environmental footprint.
Access the Report here