**Is the Public Sector Ready for an AI Makeover? Assessing the Potential and Challenges**
*Almost a year after ChatGPT ignited an AI gold rush, is the public sector ready for an AI makeover? Computer vision, natural language processing (NLP) and autonomous robotics can transform defence, healthcare, transportation and policy making.*
Since the release of ChatGPT in late 2022, governments worldwide have shown a keen interest in both using and regulating AI. However, the Australian public sector still faces hurdles in fully embracing AI and assuring the public of its responsible, legal and ethical use. The government has yet to pass amendments that align with the robodebt royal commission’s recommendations on automated decision-making (ADM), and experts argue that more oversight, greater transparency and a stronger understanding of AI are needed.
Computer vision has been a preferred choice for the public sector, particularly since the cloud giants introduced prepackaged AI services around 2018. It has been used to track endangered species with drones, analyze CCTV and bodycam footage for police investigations, and monitor traffic via roadside cameras. Its appeal in government lies partly in sidestepping some of the privacy concerns attached to citizen data, while remaining versatile across applications such as bridge inspections and surf patrols.
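To make the prepackaged nature of these services concrete, here is a minimal sketch of the kind of off-the-shelf object detection they wrap, using torchvision’s pretrained Faster R-CNN. The image path, the 0.8 confidence threshold and the use of torchvision are illustrative assumptions rather than details of any agency’s system.

```python
# Minimal sketch of off-the-shelf object detection of the kind behind many
# government computer-vision use cases (traffic counting, bodycam review).
# Assumes torchvision >= 0.13; the image path and 0.8 threshold are
# illustrative choices, not values from the article.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

image = read_image("cctv_frame.jpg")          # hypothetical input frame
with torch.no_grad():
    predictions = model([preprocess(image)])[0]

labels = weights.meta["categories"]
for label_idx, score in zip(predictions["labels"], predictions["scores"]):
    if score >= 0.8:                          # keep confident detections only
        print(f"{labels[int(label_idx)]}: {score:.2f}")
```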
However, to fully leverage AI’s potential, the government must consider responsible deployment in sensitive areas such as debt recovery (robodebt being the cautionary example) or the use of health data. AI development differs from traditional application development, requiring additional focus on governance, use-case definition, data collection, bias risks and security controls. This raises the question of whether staff possess the skills needed to ensure AI behaves as expected. While new AI legislation is being considered, some experts argue that existing oversight obligations and governance frameworks, such as the administrative steps NSW has already taken, can address many of these concerns.
The current state of AI in government has been shaped largely by IT professionals, as executives have often deferred to them on AI literacy and use cases. To bridge this gap, Public Service Minister Katy Gallagher’s cross-agency AI taskforce can play a vital role in identifying use cases and establishing guardrails. Australia’s AI guidelines for public servants specify that only publicly available government information should be entered into AI services like ChatGPT or Google Bard, and the UK’s guidance goes further, warning against entering new policy positions into these services. To ensure accountability, a “human-in-the-loop” (HITL) model is likely to be crucial for most government AI deployments.
According to Stanford University, HITL models focus on optimizing AI’s performance through meaningful human involvement rather than excluding humans from the decision-making process. There are concerns, however, about control creep, where humans begin to trust AI outputs blindly or avoid overriding AI decisions in order to limit their own accountability. Situations like Queensland’s backlog of penalties and licence suspensions stemming from incorrect AI detections of drivers using smartphones highlight the need for ongoing oversight.
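As a rough illustration of the HITL pattern Stanford describes, the sketch below never issues a penalty on the model’s say-so alone: every detection is routed to a reviewer and the decision is logged. The `Detection` class, the `request_human_review` helper and the 0.95 threshold are hypothetical, invented purely to show the shape of the control.

```python
# Rough sketch of a human-in-the-loop (HITL) gate: the model proposes, a
# person disposes. All names (Detection, request_human_review, the 0.95
# auto-review threshold) are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Detection:
    case_id: str
    label: str         # e.g. "driver_using_phone"
    confidence: float  # model's confidence in [0, 1]

def request_human_review(detection: Detection) -> bool:
    """Stand-in for a review queue UI; returns True if a reviewer confirms."""
    answer = input(f"Confirm {detection.label} for case {detection.case_id} "
                   f"(confidence {detection.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def decide(detection: Detection) -> str:
    # No penalty is ever issued on the model's say-so alone: every positive
    # detection goes to a human, and low-confidence ones are flagged for
    # closer scrutiny rather than silently trusted.
    needs_close_look = detection.confidence < 0.95
    confirmed = request_human_review(detection)
    outcome = "issue_penalty" if confirmed else "dismiss"
    print(f"case={detection.case_id} auto_flag={needs_close_look} "
          f"reviewer_confirmed={confirmed} outcome={outcome}")
    return outcome

if __name__ == "__main__":
    decide(Detection(case_id="QLD-0001", label="driver_using_phone",
                     confidence=0.91))
```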
The definition of AI encompasses predictive analytics, which has found applications in various government sectors. For instance, NSW eHealth’s predictive risk scoring AI helps detect sepsis quickly, NSW Transport’s predictive occupancy data enables COVID-19 travel notifications, and CSIRO’s Data61 aided in developing a predictive decision support system to model road congestion. Generative AI, in combination with analytics, is expected to drive significant use cases, such as call center reporting or generating new documentation for legacy systems.
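For readers unfamiliar with how such predictive scoring works in practice, the following toy example trains a logistic-regression risk score of the same general kind as an early-warning system. The synthetic vital-sign data, the label rule and the alert threshold are invented for illustration and have no connection to NSW eHealth’s actual sepsis model.

```python
# Toy predictive risk score in the spirit of early-warning systems like
# sepsis alerts. The synthetic features, labels and 0.5 alert threshold are
# invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: [heart_rate, temperature_C, respiratory_rate]
X = rng.normal(loc=[85, 37.0, 16], scale=[15, 0.8, 4], size=(500, 3))
# Crude synthetic label: "deteriorating" when key vitals run high.
y = ((X[:, 0] > 100) | (X[:, 1] > 37.8)).astype(int)

model = LogisticRegression().fit(X, y)

new_patient = np.array([[112, 38.4, 24]])           # hypothetical observation
risk = model.predict_proba(new_patient)[0, 1]
print(f"deterioration risk: {risk:.2f}")
if risk > 0.5:
    print("alert: route to clinician for review")   # a human still decides
```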
NLP has also made significant strides in government applications. The Australian Taxation Office (ATO) has been using NLP models to identify tax evaders within the Panama Papers. Similarly, the Australian Communications and Media Authority (ACMA) explored NLP applications for regulators. NLP allows regulators to process large volumes of unstructured text, extract relevant facts, uncover relationships, and analyze sentiments on specific topics. However, organizations like ASIC and APRA have faced challenges in terms of capacity, expertise, and managing security and data bias risks.
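The sketch below shows the sort of NLP triage described here, combining named-entity extraction (spaCy) with sentiment scoring (a Hugging Face pipeline) over a snippet of unstructured text. The sample complaint and the choice of open-source libraries are assumptions; they are not a description of the ATO’s or ACMA’s actual tooling.

```python
# Sketch of regulator-style NLP triage: pull entities out of unstructured
# text and attach a sentiment score. The sample text is invented; requires
# `python -m spacy download en_core_web_sm` and the transformers library.
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
sentiment = pipeline("sentiment-analysis")   # downloads a default model

document = (
    "ACME Pty Ltd failed to disclose offshore transfers to a Panama-based "
    "entity in 2015, and customers report being misled about the fees."
)

doc = nlp(document)
entities = [(ent.text, ent.label_) for ent in doc.ents]
print("entities:", entities)                 # organisations, dates, places

score = sentiment(document)[0]
print(f"sentiment: {score['label']} ({score['score']:.2f})")
```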
Implementing AI, especially generative AI, can significantly increase cloud computing costs. Services like Microsoft 365 Copilot and Duet AI for Google Workspace are charged per user per month, and GitHub Copilot, the code generator from Microsoft-owned GitHub, incurs additional per-seat fees. The potential productivity gains, such as time saved on repetitive tasks, can nevertheless justify the cost of AI licences. IT departments may also see cloud bills rise as generative AI enhances low-code tools, driving up consumption of cloud-based AI services.
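To make that trade-off concrete, here is a back-of-the-envelope calculation of licence cost against the value of time saved. Every figure in it (seat price, hours saved, hourly labour cost) is a placeholder assumption rather than a published price.

```python
# Back-of-the-envelope licence cost vs. productivity gain. Every figure here
# (seat price, hours saved, hourly staff cost) is a placeholder assumption,
# not a vendor's published price.
SEATS = 1_000                 # staff licensed for a generative AI assistant
PRICE_PER_SEAT_MONTH = 45.0   # assumed AUD per user per month
HOURS_SAVED_PER_MONTH = 2.0   # assumed time saved per user on routine tasks
STAFF_COST_PER_HOUR = 60.0    # assumed fully loaded hourly labour cost, AUD

licence_cost = SEATS * PRICE_PER_SEAT_MONTH
productivity_value = SEATS * HOURS_SAVED_PER_MONTH * STAFF_COST_PER_HOUR

print(f"monthly licence cost:    ${licence_cost:,.0f}")
print(f"estimated value of time: ${productivity_value:,.0f}")
print(f"net position:            ${productivity_value - licence_cost:,.0f}")
# Under these assumptions the licences pay for themselves whenever staff save
# more than PRICE_PER_SEAT_MONTH / STAFF_COST_PER_HOUR hours a month (0.75
# hours here); the point is the shape of the calculation, not the numbers.
```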
A whole-of-government approach to training and running large language models (LLMs) on government infrastructure could potentially reduce costs. McKinsey outlines three generative AI consumption models: purchasing off-the-shelf solutions; fine-tuning existing models from vendors such as OpenAI, Cohere, AI21 Labs or Hugging Face; and developing custom foundation models akin to GPT-4. The last option is costly and impractical for most government organizations.
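Of those three paths, the middle one, adapting an existing model, is the one most agencies could plausibly attempt in-house. Below is a minimal sketch of that approach using the open-source Hugging Face stack on a tiny invented dataset; it is not the managed fine-tuning API of OpenAI, Cohere or AI21 Labs, and a real deployment would need far more data, evaluation and security review.

```python
# Minimal sketch of the "fine-tune an existing model" consumption path using
# the open-source Hugging Face stack. The tiny dataset, label scheme and
# hyperparameters are invented for illustration; this is not any vendor's
# managed fine-tuning API.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification, AutoTokenizer,
    Trainer, TrainingArguments,
)

# Invented toy task: route citizen correspondence to "complaint" (1) or
# "enquiry" (0). A real project would use thousands of vetted examples.
data = Dataset.from_dict({
    "text": [
        "My payment was cancelled without any explanation.",
        "What documents do I need to renew my licence?",
        "I was charged twice and nobody has responded.",
        "When does the new reporting rule take effect?",
    ],
    "label": [1, 0, 1, 0],
})

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=64)

tokenized = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="routing-model", num_train_epochs=3,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=tokenized,
)
trainer.train()  # on real data this is where agency-specific tuning happens
```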
In conclusion, while the public sector in Australia shows great interest in AI adoption, there are various challenges to be addressed, including governance, oversight, transparency, and AI literacy. By learning from past experiences like the robodebt controversy and leveraging existing frameworks, the government can ensure responsible and efficient AI deployment. Moreover, exploring the potential of computer vision, NLP, and generative AI can revolutionize sectors such as defence, healthcare, transportation, and policy making.
**Editor Notes: Promoting Responsible AI Adoption and Government Transformation**
The advancements in AI technology, as witnessed with ChatGPT and various other applications, have sparked an ongoing revolution in the public sector. Governments around the world are eager to explore the possibilities offered by AI in defence, healthcare, transportation, and policy making. However, it is crucial to strike a balance between embracing AI’s potential and ensuring responsible use that aligns with legal and ethical considerations.
The Australian public sector, in particular, must navigate several challenges to fully benefit from AI. These challenges include passing legislation that aligns with royal commission recommendations, enhancing oversight and transparency, and equipping staff with the necessary AI literacy. Simply enacting new laws might not be the optimal solution. Instead, existing frameworks, such as those implemented in NSW, could be enhanced to foster better governance and AI literacy. By prioritizing expertise and transparency, the government can ensure it leads by example.
The cross-agency AI taskforce led by Public Service Minister Katy Gallagher is a significant step forward in promoting AI literacy and identifying use cases that align with government priorities. Through collaboration and the responsible use of data, the public sector can leverage AI technology for the greater benefit of society.
The role of AI in government is not limited to specific areas like computer vision or NLP; it extends to predictive analytics, generative AI, and emerging technologies that are yet to be fully explored. By harnessing the power of generative AI, governments can enhance their services, improve decision-making processes, and streamline workflow. However, it’s crucial to consider the associated costs, such as increased cloud computing expenses, and weigh them against the potential gains in productivity and efficiency.
As AI adoption expands, it is essential to strike a balance between harnessing the technology’s potential and ensuring responsible use. This requires a collaborative effort between policymakers, experts, and the public to shape AI governance frameworks that foster transparency, accountability, and ethical use. The journey toward a transformed public sector is daunting, but with the right approach and mindset, governments can usher in a new era of innovation and efficiency.
[*Opinion Piece submitted by GPT News Room. For more AI news and insights, visit GPT News Room at https://ift.tt/iczMqVK*]