AI Adoption in Government: Navigating Challenges and Opportunities

Post Date: 10/17/2023

Artificial intelligence (AI) is rapidly transforming businesses and industries, and the potential for AI in government is massive – it can automate tedious tasks, improve public services, and even reduce costs. For government organizations, understanding the role of AI in government is crucial for staying up-to-date on the latest technological advancements and their potential impact on efficiency and productivity.

In this September episode of the Microsoft 365 Government Community Call, the crew invites Kevin Tupper, Federal Generative AI Evangelist at Microsoft, and Alastair Thomson, Senior Advisor on Data Innovation at the Advanced Research Projects Agency for Health (ARPA-H), for an in-depth discussion of the exciting possibilities and challenges of AI adoption in government.

Watch the full episode below or read on to catch the session’s highlights.

Defining AI, Machine Learning, and Large Language Models

The terms Artificial Intelligence (AI), Machine Learning, and Large Language Models are often used interchangeably, but each concept is distinct.

  • AI is the broad umbrella term encompassing various technologies that enable machines to perform tasks that typically require human intelligence. Any time you assign a device or machine a task that would normally call for human intelligence, you can call that AI.
  • Machine learning is a subset of AI where the machine learns patterns from data without being explicitly programmed (see the short sketch after this list).
  • Large language models are more specialized applications of machine learning that take unstructured data and generate human-like text by following specific instructions. A popular example of a large language model is ChatGPT.
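
To make the distinction concrete, here is a minimal, illustrative sketch of machine learning in Python using scikit-learn: the model infers a decision rule from labeled examples rather than being explicitly programmed. The toy data and feature names below are invented for illustration.

```python
# Minimal "learning from data" illustration: scikit-learn infers the decision
# rule from labeled examples instead of us hand-coding if/else logic.
from sklearn.linear_model import LogisticRegression

# Invented toy data: [hours_of_training, prior_incidents] -> passed_review (1/0)
X = [[2, 5], [10, 1], [1, 7], [8, 0], [12, 2], [3, 6]]
y = [0, 1, 0, 1, 1, 0]

model = LogisticRegression()
model.fit(X, y)  # the "learning" step: parameters are fit to the data

# Predict for a new, unseen case; no explicit rules were ever written.
print(model.predict([[9, 1]]))
```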

Tackling the Ethical Use of AI

The growing interest in AI makes it imperative to discuss its ethical usage along multiple dimensions – especially in terms of its application.

While generative AI tools like ChatGPT or image diffusion models like DALL-E open up opportunities for efficiency and productivity, they can also be used to spread misinformation. This raises complex human and social factors that require policy and possibly regulatory solutions.

But on the other side are challenges for equity. There’s a real risk that AI – particularly generative AI – will exacerbate inequities, and that risk often originates in the training data. For example, during the pandemic, minority communities were hit harder by COVID-19 than wealthier communities. If health researchers train a model on biased data – for instance, data that does not include any Native American populations – it’s not going to produce equitable results.

If organizations aren’t mindful of these challenges, the data-driven decisions built on these results will affect real people.

To tackle the challenges of the ethical use of AI, it’s critical for organizations to include all affected communities – especially minorities and other underrepresented groups – not only in the research phase but also in the governance of their programs.

Read: AvePoint joins AI Trust Foundation as a founding member to promote ethical use of beneficial AI

However, it’s also important to recognize that mitigation will never be perfect: you’re never going to eliminate bias from models. It is equally critical to put in place tools that measure the bias within your models, ensuring transparency and supporting their certification. That lets you see what biases a model may produce and what mitigation standards must be implemented to reduce them.
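
As an illustration of what such measurement tooling can look like, here is a minimal sketch that computes the gap in positive-prediction rates across groups. The records are invented; a real audit would use your model’s actual predictions and demographic attributes.

```python
# Illustrative bias check: compare a model's positive-prediction rates across
# demographic groups (a simple "demographic parity" gap). The records below
# are invented stand-ins for your model's real outputs.
from collections import defaultdict

records = [
    {"group": "A", "prediction": 1}, {"group": "A", "prediction": 1},
    {"group": "A", "prediction": 0}, {"group": "B", "prediction": 0},
    {"group": "B", "prediction": 1}, {"group": "B", "prediction": 0},
]

totals, positives = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    positives[r["group"]] += r["prediction"]

# Positive-prediction rate per group, and the gap between best and worst.
rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print({g: round(rate, 2) for g, rate in rates.items()})  # {'A': 0.67, 'B': 0.33}
print(f"parity gap: {gap:.2f}")  # larger gaps flag inequities to investigate
```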

Consider ChatGPT: by using the public service, you risk exposing proprietary information to people outside your organization who shouldn’t have access to it, and pulling in information that may not be factual. To mitigate this, you can instead use tools such as Bing Chat Enterprise, which keeps your organization’s data within your organization, prevents outside resources from pulling it, and ensures your users only pull in organizational data they have access to.

Or, as you go deeper into AI with tools like Microsoft Copilot, you can establish proper data security practices by enabling access controls and proper visibility into what your users can access, ensuring that AI only does what you want.
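
As one example of building that visibility, a hypothetical pre-Copilot audit could list who can reach a given SharePoint document through the Microsoft Graph permissions endpoint. The site ID, item ID, and token handling below are placeholders; a real script would acquire a token via MSAL with an app registration that has the appropriate Graph permissions.

```python
# Hypothetical pre-Copilot audit sketch: list who has access to a SharePoint
# file via Microsoft Graph. SITE_ID, ITEM_ID, and TOKEN are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
SITE_ID = "<your-site-id>"   # placeholder
ITEM_ID = "<your-item-id>"   # placeholder
TOKEN = "<bearer-token>"     # placeholder: acquire via MSAL in practice

resp = requests.get(
    f"{GRAPH}/sites/{SITE_ID}/drive/items/{ITEM_ID}/permissions",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for perm in resp.json().get("value", []):
    # Flag broad sharing links that would surface content to Copilot users.
    link = perm.get("link", {})
    print(perm.get("roles"), link.get("scope"), link.get("webUrl"))
```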

Mitigating the Risks of Using Public AI

One of the most prevalent concerns in using AI, particularly in government, is how others can access the data you input into the model. When you use public chatbots like ChatGPT, anything you put in can become accessible outside your data environment.

If your organization is adopting generative AI models from any prominent large language model provider, it’s critical to prepare mitigation layers that reduce the risks and deliver the value you expect from this rapidly evolving technology.

Mitigating the risks means improving your AI literacy – and you must be ready to get your hands dirty. You need to understand how the technology functions and test it, then educate people on how to use it – and how not to – so your workforce can use it safely.

Challenges to AI Adoption for Government Agencies

While AI is not new – the government has been working with AI for quite some time – there are still challenges with AI literacy and fear of the risks AI can bring to organizations. When ChatGPT came out and captured the world’s attention, it generated widespread sensationalism around this “new” technology.

This sensationalism created challenges in AI literacy and fear among agency administrators who don’t yet understand what AI can do for their organizations.

Given the nature of its work, it’s only appropriate that the government is cautious about how AI can affect its organizations and what it means for data protection.

This is where the work should be done: to improve AI literacy in the government space, agencies need to adopt a risk framework that helps them recognize which use cases justify the risks and how to adopt AI responsibly. It will take time, but leaders who are ready to move forward in a safe, responsible way can raise AI literacy until this technology becomes more scalable.

Protecting Sensitive Information with Azure OpenAI

Government agencies – and the public sector in general, with its strict data regulations – must use systems they can trust to keep their data safe in their environment. Microsoft recommends setting up a sandbox with Azure OpenAI and what is called GovChatGPT, giving organizations an isolated environment to start testing and learning what AI can do.

Read more: Why A Robust Data Strategy Is The Key To Unlocking AI’s Full Potential – AvePoint Blog

With Azure OpenAI, organizations can enable sensitivity controls to maintain data security, ensuring their data stays isolated within their own Azure tenant.
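
As one building block for that kind of isolation, here is a minimal sketch of calling an Azure OpenAI deployment with Microsoft Entra ID (keyless) authentication, so access is governed by your tenant’s identity and RBAC controls rather than shared API keys. The endpoint, deployment name, and API version are placeholders.

```python
# Minimal sketch: keyless (Entra ID) authentication to Azure OpenAI, so the
# call is governed by tenant identity and RBAC instead of a shared API key.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",  # placeholder; match your resource's version
)

resp = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment
    messages=[{"role": "user", "content": "Summarize our records policy."}],
)
print(resp.choices[0].message.content)
```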

As a creator of this technology, Microsoft is careful to ensure that the ethical use of AI is practiced and that other organizations developing derivative products on top of this technology do the same.

AI Use Cases for Government Agencies

As AI continues to make headlines, Microsoft is introducing more AI services for commercial and government clouds. Organizations can expect what’s released in the commercial space to eventually make its way into government. Still, it comes down to capital investment and whether there is enough demand for government to incorporate AI into its service delivery to warrant that investment.


Microsoft Copilot is a hot topic today, but while it’s not yet available in GCC, we’ve run through some of the most prominent use cases of how AI is being used in government today:

  1. ‘Chat over your data’.

Microsoft offers a service called Azure OpenAI on your data, and some government agencies have connected it to their own SharePoint repositories to begin performing some of the capabilities you would expect Copilot to have with their data. This allows organizations to experiment and learn how Copilot works, and it raises questions about how it could revolutionize government tenants.
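
As a rough sketch of the pattern – assuming the SharePoint content has been indexed into Azure AI Search, which is one common way to wire this up, and with placeholder endpoints, keys, and index names – the call might look like this. Note the data_sources payload shape varies by API version.

```python
# Hedged sketch of Azure OpenAI "on your data": ground chat completions in an
# Azure AI Search index (e.g., one built over SharePoint content). All
# endpoints, keys, and index names below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-azure-openai-key>",                          # placeholder
    api_version="2024-02-01",
)

resp = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder
    messages=[{"role": "user", "content": "What does our telework policy say?"}],
    extra_body={
        "data_sources": [{
            "type": "azure_search",
            "parameters": {
                "endpoint": "https://<your-search>.search.windows.net",  # placeholder
                "index_name": "<sharepoint-index>",                      # placeholder
                "authentication": {"type": "api_key", "key": "<search-key>"},
            },
        }]
    },
)
print(resp.choices[0].message.content)
```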

  2. Automation of prompts against data sets.

We’re also finally starting to see use cases for what Kevin calls ‘guidable agents.’ You take a large language model and say, “Here’s a document. Extract these 17 key-value pairs from this unstructured, natural-language text and give them to me in a JSON payload.”

Then, once you’ve tested and refined your prompts so they work the way you want, you can start automating mundane tasks such as translating documents into JSON files. From there, you can put that in a pipeline, run it at scale across a large set of documents, and feed the results into your line-of-business applications.
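
A stripped-down sketch of that kind of extraction pipeline might look like the following. The field names, file handling, and deployment name are illustrative assumptions, not the exact prompts discussed on the call.

```python
# Illustrative "guidable agent" pipeline: prompt a large language model to
# pull key-value pairs out of unstructured documents as JSON, then run the
# tested prompt across a folder of files. Names below are placeholders.
import json
from pathlib import Path
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-azure-openai-key>",                          # placeholder
    api_version="2024-02-01",
)

# A sample of the kind of key-value pairs you might extract (hypothetical).
FIELDS = ["applicant_name", "submission_date", "case_number"]

def extract(text: str) -> dict:
    prompt = (
        f"Extract these fields from the document below and return only a "
        f"JSON object with exactly these keys: {FIELDS}.\n\n{text}"
    )
    resp = client.chat.completions.create(
        model="<your-deployment-name>",  # placeholder deployment
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output helps once prompts are tested
    )
    # Production pipelines should validate this JSON and handle parse errors.
    return json.loads(resp.choices[0].message.content)

# Run the tested prompt at scale across a set of documents.
for doc in Path("documents").glob("*.txt"):
    print(doc.name, extract(doc.read_text()))
```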

  3. Generative AI for data exploration and data harmonization.

The problem with scientific data today is that most of it, once generated, is not easy to find or use. In effect, to find data you have to already know where it is – which repository is run by which organization, what variables it contains, and how it’s structured – to be able to query it. And if you need to bring together data from multiple domains and silos, it’s either impossible or takes a very long time.

One concrete use case of generative AI currently being worked on goes beyond the capabilities of ChatGPT or Azure OpenAI to create a better data exploration and harmonization process. Research is underway to make this work by linking very specific data – potentially using various AI models, not just large language models – and doing prompt engineering with specialized machine learning models for particular domains.

Explore More AI in the Government Space

Adopting AI in government presents a unique set of challenges and opportunities. While there is still a long way to go in scaling the adoption of this technology, the potential benefits of implementing AI in government agencies are numerous.

Don’t miss out on the latest technology updates for government clouds! Subscribe to the Microsoft 365 Government Community Call.

As the future of work moves towards an AI-powered digital workspace, it’s becoming increasingly critical for government agencies to embrace this change to stay ahead of the curve and seize opportunities to enhance efficiency, drive innovation, and improve citizen services.

Learn more about AI in the government space by connecting with our guests:

Kevin Tupper: Kevin Tupper | LinkedIn

Alastair Thomson: Alastair Thomson | LinkedIn

Transform your organization and be future-ready – download our comprehensive guide now to streamline operations, optimize performance, and better serve citizens:


Sherian Batallones is a Content Marketing Specialist at AvePoint, covering AvePoint and Microsoft solutions, including SaaS management, governance, backup, and data management. She believes organizations can scale their cloud management, collaboration, and security by finding the right digital transformation technology and partner.
