
MICROSOFT 365 COPILOT KNOWLEDGE AGENT - APPLIED

  • Writer: Jonathan Stuckey
  • Oct 25, 2025
  • 12 min read

Updated: Oct 28, 2025


audience: solution designers, information managers, IT operations.


This is the second of two articles about the Microsoft 365 Copilot Knowledge Agent. In this one I'll outline what I found, and whether it will cut it for businesses today. Important to note: this agent builds on top of the Microsoft 365 Copilot platform.


It's a long read... so here are the main jumping-off points.


Contents


How to approach testing it.

This agent requires the user to have a Microsoft 365 Copilot license for testing. Your tester(s) must be aware of this, and must use Copilot in SharePoint to understand some features.


You cannot start testing this with any real focus unless you understand the intent behind the Knowledge Agent.


If we boil down the marketing vision Microsoft puts out, the Knowledge Agent will:

transform SharePoint into an intelligent, self-maintaining, and AI-enabled workspace that streamlines both content management and productivity for everyone interacting with your sites and content!
Knowledge Agent and good old-fashioned filing

Ah yeah? Nah. It doesn't. It's a good idea and starts off well - but it needs lots and lots of work. Right now, I would place this as a tool to support ideation and a starter for employing real agentic AI, because it lacks a number of critical things:


  1. consistency in responses

  2. grounding in appropriate source data for information design

  3. ability to implement at scale (not a fraction of the corpus in 1 library)

  4. option to roll back in the event of required changes, mistakes, gaps or issues

  5. role-based management


But I'm jumping ahead. Let's go back a couple of weeks in the discovery process...


What can it do

Well it can do a lot - but very simply. The objective of "Microsoft Knowledge Agent" is to leverage AI to make your content more valuable and accessible by organising, enriching, and automating its management for Copilot and other AI agents.


Its main focus is to ensure your files, pages, and sites are "AI-ready," meaning they are structured with proper metadata, kept up-to-date, and optimally organised to deliver precise answers and workflow automation via natural language prompts.​


Hey, I do lots of IA and solution design and deal with information managers - they talk like that...


Knowledge Agent context

So, the Knowledge Agent has two primary focuses (or contexts):


  1. Page management

Most of the agent is re-badging existing functionality. It's not new, flash or changed - you just get a new way of accessing it. From the home page of a site you get:

SharePoint knowledge agent (preview) menu of actions for home-page

New functionality under "Improve this site" option

  1. Retire inactive pages - content clean-up (using metadata flag for expiry)

  2. Find content gaps - content consistency check(?),

  3. Fix broken links - page scanning with link-checking tool.


There is also another context-specific feature which appears on content pages (i.e. not a home page): direct audio playback of the page content - great support for accessibility and user mobility.


SharePoint knowledge agent (preview) menu of actions for an article or news item - highlighting 'listen to an audio overview'

On every page type I've tested so far it's been really good at summarising, and the audio voice is actually pleasant - it doesn't fall into the uncanny valley with pronunciations (despite being American). Personally, I love this one.


  2. Library (document) management

Again, most of the agent is re-badged existing functionality under each of the primary functions, e.g. Move items, Copy items, Create view, Create library rule etc.


SharePoint knowledge agent (preview) menu of actions for organising a library

The 'set up rules', and 'Create new view' are just new windows over the existing SharePoint library features with a new (PowerApp) style UI. This guides the user through a combination of form and pre-canned generative AI prompts for actions.


New functionality is offered from 'Organize this library' option which includes:


  1. Create columns - skins two features (add column, autogenerate)

  2. Create a rule - skin over existing 'Create a rule' wizard, and menu option

  3. Extract key actions - Copilot summary and autogenerate / autofill column

  4. Summarize documents - Copilot summary and autogenerate / autofill column

  5. Classify documents - skins two features (add column, autogenerate)


Notably, 3 of the 5 (columns, key actions, and classify) are basically using the same underlying Syntex document processing for the same activity: metadata extraction and autofill.


Knowledge agent library optimisation menu with annotations for each option explaining what it actually does

I soooooo wanted this to be a winner, but the preview managed to snatch despair from the jaws of hope here. It's sort of OK, if you screw your eyes up and look at the results sideways. Saying that, it's already way better than the early test reviews I saw - so there's still time for it to take another jump up as the general training for the agent improves.


Key Functions and Goals

What are the features meant to be providing for the user/owner, then?


  • AI-Ready Content: Automatically enrich and tag SharePoint documents, pages, and libraries, making them easily retrievable and improving Copilot-powered answers.​


  • Simplify Operational Management: Flag outdated or broken content, suggest fixes, and help maintain compliance by automating routine governance tasks.​


  • Natural Language Automation: Allow users to trigger actions, set up workflows, and get intelligent (?) answers by simply asking in conversational language.​


  • Role-Based Support: Provide tailored recommendations and shortcuts depending on whether you’re a site manager, content creator, or content consumer, helping each role be more effective*


To an extent these ring true in testing, and most of the actions are triggered from a pre-canned generative AI prompt; but barring some minor tweaking of the wording for scope or description, the prompts are limited to the specific functions identified.


A good starting point, but it needs a lot of refinement work to improve the range of tasks.


Practical actions

In theory the above goals and automation of operational actions will improve AI responses by suggesting and adding metadata to your content. The idea being that the Knowledge Agent ensures Copilot and other agents have trustworthy data sources for generating relevant, accurate responses.​ Simple right?


That means you should be able to:


  • Automate business processes by letting users automate workflows, such as approvals and notifications - using intuitive, context-aware skills without technical expertise.​ i.e. using natural language description and interaction with the rules wizard.


  • Maintain site health - keep SharePoint sites fresh and organised by proactively archiving, updating, or tagging content, and addressing broken links or compliance issues. i.e. based on generic checks and common actions it will set default metadata on new documents, or "archive" (read: delete) them.
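To make the natural-language rules idea concrete, here is a minimal sketch of what a rule description might reduce to. The structures and parsing here are hypothetical illustrations, not the actual SharePoint rules API - in the real feature a generative prompt plus the rules wizard produces the equivalent outcome:

```python
import re
from dataclasses import dataclass

# Hypothetical rule structure - NOT the real SharePoint rules API; an
# illustration of what the wizard's form + prompt combination produces.
@dataclass
class LibraryRule:
    trigger_column: str   # column whose change/expiry fires the rule
    condition: str        # e.g. "changes", "expires"
    action: str           # e.g. "notify", "require_approval"
    target: str           # who is notified / approves

def parse_rule(description: str) -> LibraryRule:
    """Crude keyword mapping from a natural-language description to a rule.

    Real parsing is done by a generative prompt; this only shows the shape
    of the outcome.
    """
    desc = description.lower()
    action = "require_approval" if "approv" in desc else "notify"
    condition = "expires" if "expir" in desc else "changes"
    col = re.search(r"when '([^']+)'", description)
    target = re.search(r"(?:notify|tell|email) ([\w@.]+)", desc)
    return LibraryRule(
        trigger_column=col.group(1) if col else "Modified",
        condition=condition,
        action=action,
        target=target.group(1) if target else "owner",
    )

rule = parse_rule("when 'Review date' expires, notify records@contoso.com")
print(rule.action, rule.condition, rule.trigger_column, rule.target)
```

The point of the sketch is that the "intelligence" sits in translating loose wording into a small structured object; the notification/approval machinery underneath is the existing Rules engine.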


Who benefits?

Well the blurb boils down to the following business roles getting something...


  • Site managers by improving site health and governance through automated recommendations.

  • Content managers and creators get help organizing libraries and building pages with AI-powered assistance.

  • Content consumers can rapidly access information and summaries powered by Copilot-ready content.​


Hmmmmm, having just spent a couple of weeks trying this out colour me sceptical.


In SharePoint access terms it's basically:


  1. SharePoint Site-owner (site admin) - you're going to see reports and recommendations in the agent for libraries on your site,


  2. Site member / visitor - you'll get some prompts in Copilot and (hopefully) improved output from the way it now ingests content


What to test, and who should do it?

Be clear that what you want out of this is an understanding of what it's actually useful for, as well as the do's and don'ts if you use it in its current state. Testing is to provide what you need to make a recommendation on what's next, i.e. when, and how much, effort and resourcing to keep putting in.


IMPORTANT: test-cases should be performed on UAT or on test-content only because you will make a mess.

Before you start, ensure you have clear guidance for testers on: what is to be tried; where it should be used (and, more importantly, where not to use it); how to capture and report the testing and outcomes; what the mechanisms are for evaluation, review and feedback; and the expectations on the participants.


So, to focus:


  • How to run testing: run this as though you are testing a product you want to build/adopt. This is not a tool for a 'kick-about'.


  • Who is involved: use your IM and subject-matter experts who

    • are involved in the Copilot trial - have them try this, but in a very directed fashion, and

    • have a reasonable understanding of SharePoint libraries and metadata.


  • What are we trying out: brief users on both scopes and the current use-case actions in the guided activities for:

    • site page content

    • library management


  • Where to run testing: duplicate / clone real-world content from sample sites to UAT areas.

    • limit permissions and access on these clones to avoid end-user confusion

    • this constrains testing to verifiable content that can be reset

    • and avoids disruption and confusion on active working sites and content

      (there is no roll-back short of rolling back the entire library)
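Because there is no roll-back short of restoring a whole library, it is worth scripting a snapshot of item metadata before letting the agent loose on test content. A minimal illustrative sketch, with plain dicts standing in for SharePoint list items (this is not a real SharePoint API call):

```python
import copy

# Illustrative sketch: snapshot library item metadata before an agent test
# run, so the test content can be reset afterwards. Item dicts stand in
# for SharePoint list items.
def snapshot(items: list[dict]) -> dict[int, dict]:
    """Deep-copy each item's fields, keyed by item id."""
    return {item["id"]: copy.deepcopy(item) for item in items}

def restore(items: list[dict], snap: dict[int, dict]) -> list[dict]:
    """Discard agent-added fields/values and return the snapshot state.
    Items created after the snapshot are dropped."""
    return [copy.deepcopy(snap[item["id"]]) for item in items if item["id"] in snap]

library = [{"id": 1, "Title": "Budget.xlsx"}, {"id": 2, "Title": "Plan.docx"}]
snap = snapshot(library)
library[0]["Classification"] = "Financial"   # agent injects a column value
library = restore(library, snap)
print(library[0])   # back to the original fields only
```

In practice you would capture the same information by exporting library metadata (or cloning the library) before testing; the point is simply to have a known-good state to diff against and return to.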


Scenario testing

As part of preparation I tested multiple types of content, sites and scenarios, including some which are common to most organisations. See the table for three practical example cases which are (relatively) easy for most organisations to try out.


Table: examples of types of business content addressed and testing outcome

| Test area            | Activity               | Content types                                                             | Testing                            |
| Project coordination | project mgmt documents | contracts, reports, project deliverables, budget spreadsheets, task lists | metadata, views                    |
| Business activity    | health and safety mgmt | process guidelines, training content, reporting                           | metadata, views, review (approval) |
| Corporate resources  | operational policies   | code of conduct, govn policies, forms                                     | metadata, notifications (expiry)   |

Secondly, I tried the Page clean-up scope across a clone of intranet home, news and community of practice sites.


Table: examples of types of published pages and content addressed in testing

| Test area             | Activity                | Content types                       | Testing                               |
| Intranet - publishing | Corporate communication | News, alerts, people-profiles, blog | expiry (retired) / metadata, approval |
| Technology & support  | User knowledge base     | Article pages, alerts, events       | expiry, metadata, views, approval     |

These scenarios and typical ranges of content are ones where we can have a pretty good idea of what "good" looks like in the Knowledge Agent's suggestions for:


  • metadata (new columns, properties)

  • views (groups, filtering, presentation)

  • automation (approval, notifications)


I also ran agent testing on process content like: staff files, learning and development content, corporate resources, knowledge base (pages), IP library, team collaboration 'admin' (junk) and a few others... Similar kinds of results were achieved across the board.


What are the limitations?

IMPORTANT: THE KNOWLEDGE AGENT IS IN PREVIEW. IT HAS SIGNIFICANT LIMITATIONS TODAY

Well, at the moment the Knowledge Agent feels a bit like the metaphorical hammer used to hit that darned difficult-to-find Information Architecture nail. Unfortunately it's a bit like giving a real hammer to a pre-schooler and expecting them to build a desk-side cabinet and some drawers. Results won't be very pretty, and someone will probably be crying before you're finished.


Limitations today include, but are not limited to:


Site Page management

Most of the agent is re-badging existing functionality. It's not new, flash or changed - you just get a new way of accessing it. Current constraints and limitations:


  1. content clean-up (expiry) - trawls your site pages and checks last-modified dates, appearances in search results, and visit stats to provide guidance.


    1. this is injecting a Y/N field (Retired) which is then used to inform search filtering in the /SitePages library


    2. deleting pages the user deems "not required" (or old) - without approval or review.


    Kind of good at a basic level, if the person driving this understands the content and knows what's good, not good, still useful after 12 months etc. Otherwise it could be seriously dangerous to your corporate knowledge visibility.


    Pages are processed in batches of 5 in the UI, with the 'more' option just loading another 5. Actions are reviewed and applied by the user on a per-page basis. This is fine site-by-site with limited content, but won't scale if you have large knowledge bases or significant amounts of communication content.


    This is not as good as paid tools like Swoop Analytics for business grade support.
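For reference, the kind of staleness heuristic the clean-up appears to apply can be sketched as follows. The thresholds, field names and batch-of-5 review loop below are assumptions based on observed behaviour, not documented values:

```python
from datetime import date

# Hedged sketch of the clean-up heuristic: flag pages "Retired" when they
# are stale by last-modified date AND barely visited. Thresholds and the
# "Retired" Y/N field mirror the behaviour described, not a documented API.
def flag_retired(pages, today, max_age_days=365, min_visits=5):
    for page in pages:
        stale = (today - page["last_modified"]).days > max_age_days
        unvisited = page["visits_12m"] < min_visits
        page["Retired"] = "Y" if (stale and unvisited) else "N"
    return pages

pages = [
    {"name": "Old-policy.aspx", "last_modified": date(2023, 1, 10), "visits_12m": 0},
    {"name": "Homepage.aspx", "last_modified": date(2025, 10, 1), "visits_12m": 400},
]
# The UI surfaces pages for review 5 at a time; mimic that batching.
for batch_start in range(0, len(pages), 5):
    batch = pages[batch_start:batch_start + 5]
    flag_retired(batch, today=date(2025, 10, 28))
print([p["Retired"] for p in pages])   # → ['Y', 'N']
```

Written out like this, the scaling problem is obvious: a human still has to approve each batch of 5, so a 5,000-page knowledge base means a thousand review rounds.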


  2. content gaps - runs a crude check of search terms with low/no response, based on the pages indexed on the site and search results from the last 6 - 12 months.


    1. examines which searches returned no results on the site - offers to make pages, or add sections based on these topics


    2. matches search results with similar key-words and topics that don't have pages or content - again with options to create a new page or add page sections.


    3. suggestions - takes key-word and terms identified as gaps and presents 'Create page' option using the Copilot/SharePoint AI generation for auto-create.

      Knowledge Agent suggestion for content-gap remediation: recommended action and key points on what the page would contain when auto-generated, with a 'button' to trigger the action.
      Content creation based on 'gaps'

    In Preview there is no ability to apply weighting or change the scope of the query criteria (period of time, key-words or reserved items) in the UI options unless you extract the prompt to redefine outcomes - which requires Copilot Studio.


    The agent logic seemed relatively superficial, but its recommendations provided a good starting point for new content. A limited data-set was used for training and testing.
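The gap check as described boils down to mining the site's search log for zero-result queries and promoting recurring ones to page suggestions. A hedged sketch of that logic - the data shapes and threshold are assumptions, not the agent's actual implementation:

```python
from collections import Counter

# Sketch of the content-gap check as described: queries with no results
# on the site become candidate topics for new pages. The (query, hits)
# log shape and the min_occurrences threshold are illustrative only.
def find_content_gaps(search_log, min_occurrences=3):
    """search_log: list of (query, result_count) tuples from site search."""
    misses = Counter(q.lower() for q, hits in search_log if hits == 0)
    return [q for q, n in misses.most_common() if n >= min_occurrences]

log = [("parental leave", 0), ("parental leave", 0), ("parental leave", 0),
       ("expenses form", 4), ("travel policy", 0)]
print(find_content_gaps(log))   # → ['parental leave']
```

Note what the preview doesn't let you do: change the equivalent of `min_occurrences` or the log window from the UI, which is exactly the weighting/scope limitation described above.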


  3. broken link checks - this is about 15 years too late in SharePoint - and doesn't work well.

    1. Checks for malformed links - but doesn't test they work

    2. Checks for "missing" URLs on the page or a button (quick-link) - but doesn't check for returned result.

    3. Updates and puts new redirection links into site list at '/Lists/sp_redirecturls/Redirects.aspx'


Where this worked well was for migrated or converted pages with old content coming from other environments and HTML formats e.g. on-premises SharePoint, old ASP platforms, or Confluence pages - where detection was pretty obvious and source / original links are now redundant.


It didn't work at scale or over multiple sites at once, or even consistently on known and deliberately introduced broken link scenarios. It will need significant uplift to offer value on its own.


There are really cheap (and a few freemium: Cognillo, ReplaceMagic, even a GitHub community one) link-checkers which are comprehensive and remediate links well without this... too little, too late in this instance.
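The key gap is that the agent's check is syntactic only. A minimal sketch of that distinction - the structural test below is roughly the class of check the agent performs; a real link-checker would additionally issue an HTTP request per link to verify it actually resolves, which is the step the preview skips:

```python
from urllib.parse import urlparse

# Syntactic check only: catches malformed links (roughly what the agent
# does). It does NOT detect dead-but-well-formed links - a real checker
# would follow up with an HTTP HEAD/GET per URL.
def is_malformed(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme not in ("http", "https") or not parsed.netloc

links = ["https://contoso.sharepoint.com/sites/hr", "htp:/broken", "/relative/only"]
print([u for u in links if is_malformed(u)])   # → ['htp:/broken', '/relative/only']
```

A well-formed URL pointing at a deleted page sails straight through a check like this, which matches the observed behaviour on deliberately introduced broken links.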


Library management

Again, most of the agent is re-badged existing functionality, e.g. Move, Copy, Create view.


New functionality slid in is mostly around auto-detection and creation of metadata to be injected onto the documents in a library, using Syntex document processing and autofill:


  1. Create columns (and Classify) - runs the Copilot ingestion and AI/ML model to process content (supplemented by Syntex document processing options):


    1. entity extraction and recommendations really need training to get targeted suggestions which suit the process.


    2. ingestion is limited to 20 items in the library when creating recommended properties/columns - with limited scope to alter which items are selected, which means being careful where you initiate the process to ensure it captures the right items.


    3. Autofill processing is useful, but slow and has potential for incurring additional business costs.


  2. Create a rule - rules for notifications and approvals are based on the available Rules engine, which works pretty well. Not sure why we needed a new UI over it?


  3. Extract summary and actions - document processing using the underlying Syntex services, which work well and at scale - proven technologies in a new context:


    1. same impact as for 'Create columns', as it uses the same underlying Syntex services


    2. has potential to incur processing costs with autofill and large data volumes
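The 20-item sampling behaviour matters because column proposals can only reflect whatever sits at the top of the library. A toy sketch of why position matters - here recurring keys across simple dicts stand in for the real Syntex/Copilot entity extraction, and the sample size and frequency cutoff are assumptions for illustration:

```python
from collections import Counter

# Toy sketch of the sampling behaviour described: only the first N items
# are ingested to propose columns. Counting recurring keys stands in for
# the real Syntex/Copilot entity extraction.
def propose_columns(docs, sample_size=20, min_frequency=0.5):
    sample = docs[:sample_size]          # position matters - choose carefully
    counts = Counter(k for d in sample for k in d)
    cutoff = len(sample) * min_frequency
    return sorted(k for k, n in counts.items() if n >= cutoff)

docs = [{"Title": "SOW", "Client": "Acme", "Value": 10_000},
        {"Title": "Report", "Client": "Acme"},
        {"Title": "Invoice", "Due": "2025-11-01"}]
print(propose_columns(docs))   # → ['Client', 'Title']
```

If the first 20 items happen to be, say, meeting minutes, the proposed columns will describe meeting minutes - regardless of what the other few thousand documents in the library look like.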


Current issues include:

  • no ability to make agent role-aligned (either via policy or security group)

  • inability to exclude restricted or secured sites with custom permission-levels from scope

  • no support for older site template formats, and some migrated pages

  • requires use of Pay-as-you-go billing options for scale and range of user access*


* PAYG is required for users without a Microsoft 365 Copilot license allocated - otherwise you get some interesting error dialogues about getting your admin to enable PAYG.


Roadmap

Ideally this toolset should ensure SharePoint sites remain up-to-date, efficient, and actionable without needing technical expertise. What users can actually do is completely mess up any existing Information Design you might have embedded in your content.


I've found myself saying this a lot over the last few years: it's a good start - but it's not ready for operational implementation, and I would go as far as to say it needs some serious consideration on how you would use it even for proof-of-value or test, because this is not a tool you want all your end-users applying wherever they like in your tenancy.


Verdict

Built for demo, not for scale - today.

Microsoft's indicated scenarios in the walk-through/wizard on the bot UI are ones most organisations need to tackle, but as with anything Microsoft bangs the drum about, be a bit sceptical of the first release... it's all demo and no delivery in the initial release.


Try the agent across process- or activity-based content, for example on a project site. It will be immediately obvious whether the metadata the agent extracts for proposed columns is relevant and useful.


Run the agent across active sites or teams which create a lot of page content that you have not taken time to apply metadata to, e.g. a Health and Safety portal with lots of information specific to incidents, accidents, hazards, reporting, processes etc.


Do not run the Knowledge Agent across pre-existing, highly curated content - you are likely to get less-than-useful information and recommendations back, and there's potential for making a real mess with spurious columns and properties being added.


In this case you have already put the effort in and know what things should look like, so it seems like it would provide a useful yard-stick to measure against?! It doesn't work. The agent assumes that the additional metadata is part of the raw information rather than structure - so its responses become muddled.


It's one to watch, test and derive ideas of what's needed from. It's one where we should all be leaning on Microsoft to sort out the product based on feedback and a suggested backlog, because if they get this right it will solve a lot of historic content issues.


Resources


About the author: Jonathan Stuckey


Disclaimer:

Generative AI has been used in creating the head-line image in the article, and for QA checks on scope, topic and consistency. Everything else is real. All the irritation, griping and random comments are the responsibility of the author.


