When it comes to AI in software development, there's no end of example code, tutorial videos, and "Top 5 tips for AI adoption" posts that don't apply to your technology stack or projects. Testimonials of successful AI usage are often presented in abstract terms.

Let's cut through all of that with a concrete, in-production example: using Xperience by Kentico’s Management API (via the MCP server) together with KentiCopilot’s agentic AI prompts to author a new content type and generate a Page Builder widget - both of which are now live on the Kentico Community Portal.

This isn’t a prototype or a demo; it’s a practical look at how Kentico is delivering tools and resources to support your AI-assisted software development and accelerate delivery, while also ensuring developers focus on the parts of Xperience that actually require human judgment.

FAQs for Community Contributions

First, let's take a look at the final result, visible on the Community Contributions page, to give ourselves some context.

Many of you have likely built an FAQ web experience, authored content for one, or interacted with similar components.

This one is based on the Bootstrap accordion (because the Community Portal's design foundation is Bootstrap 5.3). Custom JavaScript enables toggling between expanding and collapsing all FAQ accordions in a group.
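The toggle behavior described above can be sketched in a few lines of JavaScript. This is a minimal illustration, not the Portal's actual code - the `data-faq-group` and `data-faq-toggle-all` attributes and the `nextGroupAction` helper are hypothetical names; only the Bootstrap 5.3 `Collapse` calls (`getOrCreateInstance`, `show()`, `hide()`) reflect the real API:

```javascript
// Sketch of the expand/collapse-all behavior: each accordion item in a
// group stays independent, and a group-level button toggles all of them.

// Pure helper: given the open/closed state of each item, decide whether
// the group-level button should expand everything or collapse everything.
function nextGroupAction(openStates) {
  // If every item is already open, the only useful action is "collapse".
  return openStates.every(Boolean) ? "collapse" : "expand";
}

// DOM wiring, guarded so the helper stays testable outside a browser.
if (typeof document !== "undefined") {
  document.querySelectorAll("[data-faq-group]").forEach((group) => {
    const items = () => [...group.querySelectorAll(".accordion-collapse")];
    const button = group.querySelector("[data-faq-toggle-all]");
    if (!button) return; // single-item groups render no toggle button

    button.addEventListener("click", () => {
      const action = nextGroupAction(
        items().map((el) => el.classList.contains("show"))
      );
      items().forEach((el) => {
        // Bootstrap 5.3 exposes a Collapse component with show()/hide().
        const collapse = bootstrap.Collapse.getOrCreateInstance(el, {
          toggle: false,
        });
        action === "expand" ? collapse.show() : collapse.hide();
      });
    });
  });
}
```

Keeping the decision logic in a small pure function makes it trivial to unit test without a DOM.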

The widget itself supports selecting from two different content types - FAQ items and FAQ groups (basically, named groups of FAQ items). Xperience by Kentico's inline content authoring experience for marketers really shines with this type of widget.

It also has several presentation customization options, which makes it flexible for various use cases on pages across the website.

The clear separation of content and design - presentation comes from the widget and content is stored in the Content hub - also ensures the content is easy to govern and reuse.

These kinds of connected structured content models, and the channels that use them, also showcase Xperience's smart content reuse location feature.

We can see the reuse location lists the Community Contributions page, which references this FAQ Group content item, and this blog post (as I'm authoring it... pretty meta!), which links to the Community Contributions page.

Although I don't have immediate plans for it, there's no reason this FAQ content couldn't easily be reused in a Kentico Community Portal Newsletter email - and the content reuse location would show it there as well.

Content model and customer experience

Enough reviewing the results - let's look at the implementation process!

Working backwards from design

There are several popular approaches to building an experience that combines content and design in a DXP like Xperience by Kentico.

When building a new website, email series, or mobile app, you typically want to tackle your content modeling first. Designing an experience requires understanding the content it displays.

However, if the content and experience are simple enough, you can go in the other direction - start with the design using example content and generate the content model from that.

I am not a designer, so I prompted GitHub Copilot in agent mode (using Claude Sonnet 4.5) to create an HTML design of an FAQ experience using Bootstrap.

create snippet of html that shows a collection of "FAQ" components using bootstrap 5.3 accordions

the accordions should be grouped together but opening one should not close the other open one in the group - they are independent

instead, if there is more than 1 accordion in the group, add a nice looking UI element (using bootstrap 5.3 components) that lets me collapse or expand all accordions in the group

I saved this generated template in the Kentico Community Portal repository so you can see what I was working with. I plan to store these scaffolding pieces in the /agent-resources/ folder of the repository going forward.

Crafting a content model from design

When content modeling, you start by identifying the goals of your content and the common attributes it has, which you abstract out into the content model. In Xperience by Kentico, that content model is implemented as content types.

Because I started with design, the next step was to generate the content model. I instructed GitHub Copilot to use Xperience's content management MCP server so I would not need to click around in the administration UI to build the content types by hand.

I started a new chat session and gave my agent the following prompt, referencing the design template.

analyze this design.html mockup FAQ widget component template

  • create a reusable FAQ content type that fits the data of this widget

  • the content type should follow the naming conventions of the content types in this project

  • use the xperience-management-api mcp server to retrieve information about existing content types and create a new one

Once the FAQ Item content type was generated I instructed the agent to create the FAQ Group content type:

now create an FAQ "group" reusable content item

  • has a title field and "FAQs" field

  • the FAQs field is the selection of FAQItemContent content items for this group

This did not take long. Once it completed, I reviewed the content types in the Xperience administration UI and found that my overly simplistic prompt did not produce the results I wanted.

So, I requested some changes:

rename the field FAQGroupContentFAQItems to FAQGroupContentFAQItemContents

My original content type creation prompt wasn't very detailed. I also didn't model the content type in advance - the model was based on the design.

Unsurprisingly, I did not get the result I wanted. Yes, the agent generated the content types quickly, but moving quickly in the wrong direction does not get you to the destination any faster.

I had the agent iterate on the content model:

  • Adjust field naming
  • Add or remove reusable field schemas
  • Change the order of the fields
  • Add field validation

All of this could have been specified in my original prompt if I had taken the time to actually model the content.

Previously, this kind of rapid content type design iteration was not practical - it takes too long to click through the UI and then switch over to check the content management experience when authoring a content item.

Based on this recent experience, I think Xperience by Kentico's MCP-controlled content type authoring works best when you plan the content modeling ahead of time. I also believe this fast iteration enables a brand-new kind of content design exploration that could be very powerful in the right situations!

Generate AI agent instructions with KentiCopilot

With the design and content types ready, I generated the content types' C# class code so Copilot could use it to build my widget.

Thanks to KentiCopilot - Kentico's set of tools, features, and educational materials created to help developers work smarter with AI - getting my agent to write the widget code was a simple step-by-step process.

The KentiCopilot GitHub repository includes a folder for each feature and set of tools. I used the GitHub Copilot variant of the widget creation feature, which follows some of the AI-assisted software development practices I've detailed in other blog posts, like using custom agent instructions.

I copied the KentiCopilot files into the Kentico Community Portal repository and authored a set of simple requirements for the widget:

  • support either 1 or more FAQItemContent content items or 1 FAQGroupContents

    • the marketer can select which data source they would prefer to use
  • expand all / collapse all UI can be displayed or hidden through widget properties

  • use the design.html template as an example of multiple FAQs being displayed

  • ensure multiple instances of this widget can be placed on the same page

    • the HTML IDs / attributes used to hook in the javascript functionality need to be unique for each widget instance (use a content item GUID)
  • don't use an IIFE for the js - instead use ES Modules in a <script> block
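The per-instance uniqueness requirement in that list can be sketched like this. All names here (`data-faq-widget`, `instanceSelector`, `setup`) are hypothetical illustrations, not the generated widget's actual identifiers:

```javascript
// Sketch: each widget instance renders its GUID into a data attribute,
// and the script scopes every DOM query to that one instance instead of
// relying on global element IDs that would collide across instances.

// Build the instance-scoped attribute selector for one widget's GUID.
function instanceSelector(guid) {
  return `[data-faq-widget="${guid}"]`;
}

// In the Razor view, the server would render something like:
//   <div data-faq-widget="@Model.InstanceGuid"> ... </div>
//   <script type="module"> setup("@Model.InstanceGuid"); </script>
// Because this runs as an ES module (not an IIFE), top-level names stay
// module-scoped, so two instances on one page cannot clash.
function setup(guid) {
  if (typeof document === "undefined") return; // testable outside a browser
  const root = document.querySelector(instanceSelector(guid));
  if (!root) return;
  // ...wire up accordion toggles within `root` only...
}
```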

I then ran the /widget-create-research prompt (in a new chat context, of course):

Follow instructions in widget-create-research.prompt.md.
faq-widget-source

The agent then generated thorough widget creation instructions for the FAQ widget. The high fidelity of these instructions comes from the agent having the right context:

  • Xperience by Kentico documentation via the docs MCP server

  • Widget creation best practices from KentiCopilot

  • My FAQ widget requirements

  • Other existing widgets in the Kentico Community Portal repository

I reviewed the FAQ widget instructions and asked the agent to make a small edit, replacing the use of IMediator with IContentRetriever - I did not want the extra level of abstraction.

Create a widget with an AI agent

With all the pieces in place, I told the agent to use the /widget-create-implementation prompt:

Follow instructions in widget-create-implementation.prompt.md with FAQ_WIDGET_CREATION.instructions.md.

The agent created its own TODO list and worked through the requirements, consulting existing widget code for examples when something was ambiguous.

It identified conventions unique to the Community Portal project, and these were all correctly integrated into the widget implementation.

However, not everything was perfect. The widget's FAQ Group content source scenario was not being handled, so I prompted the agent about the requirements:

review the instructions again - is FAQGroupContent part of this widget?

After updating the widget to also handle this content type, I reviewed it and found the following to be true:

  • All widget properties were correctly defined with labels, order, descriptions, and dependencies

  • Enums were used with the project's EnumDropDownOptionsProvider

  • Custom widget property validation was implemented according to the project's patterns, with ComponentError.cshtml displayed when property values were missing

Final cleanup

There were a few things that needed final correction.

Because Bootstrap's JavaScript is loaded after the widget is rendered, the accordion functionality is not yet available at that point, so I prompted the agent to resolve this:

we will need to update the JS logic for the accordion because bootstrap might not be initialized when the module runs

instead we should register callbacks globally that can then be executed when setup() is called
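The pattern I was asking for can be sketched as a global callback queue. Names like `__faqInitQueue`, `registerOnReady`, and `setup` are hypothetical, not the Portal's actual identifiers:

```javascript
// Sketch of the deferred-initialization pattern: because Bootstrap's JS
// loads after the widget renders, the widget module never touches
// bootstrap.* directly. It pushes a callback onto a global queue, and the
// site's script entry point drains the queue once Bootstrap is ready.

const globalScope = typeof window !== "undefined" ? window : globalThis;

// Called by each widget instance's module script at render time.
function registerOnReady(callback) {
  globalScope.__faqInitQueue = globalScope.__faqInitQueue || [];
  globalScope.__faqInitQueue.push(callback);
}

// Called once from the site's main bundle, after Bootstrap has loaded.
function setup() {
  (globalScope.__faqInitQueue || []).forEach((cb) => cb());
  globalScope.__faqInitQueue = []; // drain so repeated setup() is a no-op
}
```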

I was honestly surprised that the agent interpreted this issue correctly. It found where client-side JavaScript is initialized in a completely separate area of the file system and hooked it all up perfectly.

Next, there was some presentation logic I did not think through when authoring my (minimal) requirements. After some quick UX testing, I asked the agent to add some more customization options to the widget:

add widget properties to display the FAQ Group title, description if the Group option is selected

if the Title of the Group is displayed it will override the widget label (include this in the explanation)

then update the view model and template to use these values
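The override rule in that prompt boils down to a small piece of view model logic. Sketched here with hypothetical names (the actual widget computes this in C#):

```javascript
// Sketch of the title-override rule: when the marketer enables "show
// group title" and the selected FAQ Group has one, the group title
// replaces the widget-level label in the rendered heading.
function resolveHeading(widgetLabel, groupTitle, showGroupTitle) {
  if (showGroupTitle && groupTitle) {
    return groupTitle; // group title overrides the widget label
  }
  return widgetLabel;
}
```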

I also found the agent did some of the data transformation in the widget's methods - if possible, I prefer it in the view model class. I also prefer plural names for C# enum types.

Both of these could have been specified in repository-level requirements added to the agent's context in the widget creation instructions, but this was my first time using these tools, and I wanted to test what could be omitted from specifications.

My takeaways

I completed this work several weeks ago, and since then I've had the following realizations:

  • MCP-driven content type authoring (and iteration when planning falls short) was definitely faster than clicking through the administration UI. It is going to be my go-to approach from here on out.

  • The AI agent populated all the labels, descriptions, and tooltips in content types and widget properties - details I often skip on a first pass and have to add later.

  • If you are more comfortable on the front-end or back-end, AI can fill out your skillset in a very empowering way, helping you get unblocked and learn.

  • The more consistent your naming conventions, repository organization, and code base are, the more likely the agent is to generate what you expect, even when you fail to request it clearly. The Kentico Community Portal's quality standards helped me succeed with these AI tools.

  • The more you think through your problem and goals ahead of time, the better your AI-generated output will be. But even if you start without a plan, you can still iterate and explore options more quickly with AI.

  • I absolutely recommend you try Kentico's MCP servers, AI agent-driven content type creation, and KentiCopilot's AI-assisted widget creation workflow.

Real time savings

Overall, I was extremely impressed with this AI agent workflow and set of tools. It was my first time using them, and I ended up with a fully functioning set of related content types and a corresponding widget.

Most importantly, I did not configure the content types or write the code for the implementation - GitHub Copilot did!

The question many will ask is, "Did you save time compared to doing everything manually?"

Let's consider the playing field:

  • I'm a veteran software developer.

  • I've been working with Kentico and other CMS/DXP products for over 14 years.

  • I work full time as Lead Product Evangelist for Xperience by Kentico. It's my job to know how this platform works!

  • I'm both a marketer and a developer on the Kentico Community Portal project, which gives me a lot of perspective and intuition.

This all means I have a big advantage in speed when building Xperience by Kentico content types and widgets compared to most developers.

Despite all that, I still saved time - especially if you factor out the initial time spent becoming familiar with the tools, which did not take long thanks to great documentation and repository instructions.

If you are newer to software development, CMS/DXPs, content modeling, Xperience by Kentico, or even just the project you've been assigned to work on, use KentiCopilot and AI to help you learn faster and do more in less time!

I'd love to know your thoughts on or experiences with the workflow I've described here in the discussion for this post.