Xperience by Kentico is designed to support modern software development workflows, distributed development, CI/CD, environment promotion, and regular incremental updates. But the platform's capabilities only go as far as your team's practices allow.

I recently asked a group of Kentico MVPs and Community Leaders how their teams actually work: how they structure development environments, coordinate parallel contributions, and keep clients current with monthly Refreshes.

Their answers reflect teams operating at different scales and under different constraints, but they reveal consistent patterns. Combined with Xperience's built-in tooling, these practices form a foundation that does more than keep projects organized: it prepares your team for the pace of agentic software development.

Start with your development environment

Every collaborative Xperience project begins with a foundational decision: how do developers share database state?

Xperience by Kentico supports two broad approaches.

  • Shared database: This gives every developer access to the same data. It is simple to set up, but it introduces risk as soon as more than one person starts making changes.

  • Independent databases: Each developer runs their own local instance, with schema and configuration shared through Xperience's Continuous Integration (CI) feature. This requires more setup upfront, but dramatically reduces conflict and gives every developer a consistent, reproducible starting point.

The expert consensus here is clear: the upfront cost of independent databases pays for itself quickly.

I've written in detail about the CI/CD developer scenarios that teams commonly encounter, and how to structure your repository to support them. One principle worth highlighting here: every developer should build from a database backup committed to your repository, one with the following qualities:

  • Well-maintained: Developers and agents understand under what conditions the backup is updated. This can be automatically validated with a CI workflow run during pull requests.

  • Lean and representative: You don't need (or want) a copy of production. Limit the data so that it contains what you need to validate features and reproduce bugs. Quality shared test data beats a 100GB production clone every time.

One thing the distributed approach enables that often goes unappreciated: it makes Xperience's CI feature genuinely useful for coordinating content model changes across a team. When schema and configuration live in source control, content type changes become part of your normal pull request workflow. That's worth a lot when multiple developers are working in parallel.

Patrick Huerto's team takes separation of concerns further than most, splitting frontend and backend into entirely separate repositories, with versioned frontend artifacts pulled into the backend build pipeline:

“The repo is the source of truth, so we're strict about keeping it clean. Code quality rules, project structure, CI/CD configuration - all of it needs to be solid, even if there's only one planned contributor. FE and BE are entirely different repos. FE produces versioned artifacts for assets, backend build pipelines pull the bundle as part of build pipelines based on version.”

Patrick Huerto, Head of Development, Devotion

It's up to you to decide how to take advantage of Xperience's architecture and features so that your team is as productive as possible, but one thing is clear: separate databases combined with the Continuous Integration feature is how expert teams work.

Get a QA environment in front of stakeholders early

Once your team has established their repository and development approach, you need to consider the other environments you'll work with in the future - the ones you don't directly work on but instead deploy to.

These are important because they are the environments non-developers (i.e. everyone else involved in a project) have access to.

One of the clearest points of agreement among our experts: create a stakeholder-facing environment as early in the project as possible.

For teams working on Xperience by Kentico SaaS, the platform's built-in environment promotion pipeline, from QA to production, makes this easier than it's ever been. There's less excuse not to have a client-visible environment running early. I've covered the SaaS-specific setup considerations in Xperience by Kentico SaaS best practices.

How do we keep non-production environments representative of production, and what are the data-fidelity goals we should prioritize?

  • Process: The ideal process is a periodic refresh from production data so these environments are representative of the production source-of-truth. Of course, this data needs to be sanitized to remove personally identifiable information.

  • Goals: The way SaaS deployments work ensures that code is identical between environments, so production-fidelity in QA, UAT, or staging focuses less on bug reproduction and more on integrated operational consistency - connections to infrastructure, external systems, and the overall customer and marketer experiences.

For the data-refresh process, Mike Wills' team, following company security practices, downloads .bacpac files from production, sanitizes them, and restores them to lower environments.

Andy Thompson's team takes a similar approach, but points out that compliance and data governance requirements shape the specific techniques.

“This is increasingly a challenge as customers' security postures become more cautious. In some situations we're literally not allowed to sync production data back to non-production environments due to governance and security policies. Ideally, we're able to periodically refresh non-production environments with a carefully sanitised version of the production data - surgically removing any personally identifiable or sensitive information - so it is not simply a copy of production.”

Andy Thompson, CTO, Luminary

Xperience by Kentico SaaS supports this process with custom restores, which let teams populate environments with curated files and database backups without requiring CD deployment packages.

Whether your constraint is practical or regulatory, the principle is the same: aim for representative content, data, and configuration in pre-production environments, and make sure the process that gets them there is handled responsibly.
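What "sanitized" means in practice is project-specific, but the core of it can be sketched in a few lines: a rule table mapping known PII columns to actions, applied to every exported row before it is restored to a lower environment. The table and column names below are hypothetical stand-ins, not actual Xperience schema:

```python
import hashlib

# Hypothetical PII columns per table; a real project maps its own schema.
PII_RULES = {
    "CMS_Member": {"MemberEmail": "email", "MemberName": "redact"},
    "OM_Contact": {"ContactEmail": "email", "ContactLastName": "redact"},
}

def pseudonymize_email(value: str) -> str:
    """Replace a real address with a stable, obviously fake one so that
    joins and uniqueness constraints still behave after sanitization."""
    token = hashlib.sha256(value.lower().encode()).hexdigest()[:12]
    return f"user-{token}@example.invalid"

def sanitize_row(table: str, row: dict) -> dict:
    """Apply the PII rules for a table to one exported row."""
    rules = PII_RULES.get(table, {})
    out = dict(row)
    for column, action in rules.items():
        if column in out and out[column]:
            out[column] = (
                pseudonymize_email(out[column]) if action == "email" else "REDACTED"
            )
    return out
```

Pseudonymizing emails deterministically, rather than blanking them, keeps joins and uniqueness constraints intact, which helps the refreshed environment stay representative.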

Coordinate parallel development without conflict

Once you have a distributed dev environment and a stable CI configuration, parallel development (across a team) becomes manageable. Whether teams have multiple developers working on a project and features simultaneously or take a more siloed approach, they all rely on standardized source control discipline.

They also use specific techniques to keep things organized: automation, AI assistance, small branches, a feature-slice architecture, and pull requests with code reviews.

Another key point: teams should leverage Xperience by Kentico's educational resources, which are designed to help developers be as productive as possible while still giving them the freedom to adapt the platform to their workflows.

One scenario worth addressing explicitly: evolving a content model while a project is in active development or after it goes live.

  • Adding database fields is non-destructive and straightforward.

  • Restructuring existing, in-use content types requires more care.

How do you keep team members from accidentally using data and code you intend to remove? The expand-and-contract pattern is the right approach here.

With the help of code comments and architectural decision records, fellow developers and AI agents stay aware of the multi-step process. Eventually, once all data has been migrated, you can remove old, unused fields with little fanfare because the entire application has since moved on.
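As a minimal sketch of expand-and-contract, assume a hypothetical Article content type migrating from a single legacy `ArticleText` field to separate `ArticleSummary` and `ArticleBody` fields; plain dicts stand in for content items here:

```python
# Expand-and-contract over plain dicts standing in for content items.
# Field names are hypothetical; in Xperience these would be content type fields.

def expand_write(item: dict, summary: str, body: str) -> dict:
    """Expand phase: write BOTH the legacy field and the new fields, so old
    and new code paths keep working while the migration is underway."""
    item = dict(item)
    item["ArticleSummary"] = summary
    item["ArticleBody"] = body
    item["ArticleText"] = f"{summary}\n\n{body}"  # legacy field, kept in sync
    return item

def migrate_legacy(item: dict) -> dict:
    """Backfill step: derive the new fields for items that only have the
    legacy field populated."""
    item = dict(item)
    if "ArticleBody" not in item and item.get("ArticleText"):
        first, _, rest = item["ArticleText"].partition("\n\n")
        item["ArticleSummary"] = first
        item["ArticleBody"] = rest or first
    return item

def contract_read(item: dict) -> str:
    """Contract phase: new code reads only the new fields; once every item
    has passed migrate_legacy, ArticleText can be dropped with little fanfare."""
    return item["ArticleBody"]
```

The same three steps apply whatever the storage: write both old and new, backfill, then read only the new.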

Trevor Fayas calls out this point explicitly:

“Compared to older versions of Kentico, everything is in code. This means you can usually create, enhance, and import new content with no effect until you actually deploy the code that uses what you added.”

Trevor Fayas, Owner, The Physics Classroom

This clear separation between code and data is a huge benefit for teams that continuously evolve a DXP project over time, and all the expert recommendations above help make this technique reliable and effective.

Build a disciplined update cadence

Xperience by Kentico ships weekly hotfixes and monthly Refreshes. That's a fast cadence. The experts I spoke with have all developed practices to stay current without letting updates become a source of project risk or cost.

The key themes here are:

  • Communication and setting expectations with stakeholders

  • Automation to decrease costs

  • Predictable update cycles

  • Providing expert guidance on how marketers benefit from product updates

Patrick's guidance connects to something I've fully adopted myself: using AI agents to handle the mechanical parts of Xperience updates - NuGet and npm package updates, database migrations, CI repository file commits, and breaking change resolution.

The results are real, and I documented the process in Can AI really update my Xperience by Kentico project?
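As one concrete example of the mechanical work an agent or script can take over, here is a sketch that turns the output of `dotnet list package --outdated` into update commands. The parsed line shape (`> Name Requested Resolved Latest`) is an assumption about the CLI's table format, so treat this as illustrative rather than a drop-in tool:

```python
def plan_package_updates(outdated_output: str) -> list[str]:
    """Turn `dotnet list package --outdated` output into a list of
    `dotnet add package` commands an agent (or a script) can run.
    Assumes outdated rows look like: `> Name Requested Resolved Latest`."""
    commands = []
    for line in outdated_output.splitlines():
        line = line.strip()
        if not line.startswith(">"):
            continue  # skip headers, project names, and framework rows
        parts = line[1:].split()
        if len(parts) >= 4:
            name, latest = parts[0], parts[3]
            commands.append(f"dotnet add package {name} --version {latest}")
    return commands
```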

The multiplier with AI: automated end-to-end tests

Nobody in this roundup mentioned automated testing, not even automated end-to-end (E2E) testing. That's not a criticism. It just reflects how the question was framed, and how testing tends to be treated in practice: as a nice-to-have that gets deprioritized under delivery pressure.

For a long time, it has been difficult to prove that the value of tests exceeds their cost. I'd argue testing is as important as the other practices in this article, and that it amplifies their impact.

Every practice described above is oriented toward shipping confidently:

  • Distributed local environments

  • CI for schema sync

  • Early QA for stakeholder validation

  • Disciplined update cadence

Many testimonials focus on the importance of automation. Automated E2E tests are what make project update confidence quantifiable. Without them, "confident" means "we reviewed the PR and it looked right." With them, it means "we reviewed the PR, it looked right, and our test suite passed."

This distinction becomes critical as soon as AI agents enter your development workflow.

AI writes code quickly. It refactors across files, resolves breaking changes, and updates dependencies. But it can't know what it broke unless it has a tool to validate its changes.

I wrote about exactly this scenario in Virtual Inbox, Real Tests: AI-driven E2E automation for Xperience by Kentico membership flows: an AI agent updated a membership flow without touching the email template directly, yet broke the variable reference the template depended on. No compiler catches that. No unit test covers it. Only an E2E test that actually sends and receives email would have caught it before it reached production.
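That failure mode is easy to state in code: the template references a variable the surrounding code no longer supplies. A framework-agnostic sketch of the check an E2E (or even a pre-deploy) test can run, assuming a `{{ name }}` placeholder syntax:

```python
import re

# Matches placeholders like {{ firstName }}; the syntax is an assumption,
# not a specific Xperience template format.
PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def missing_template_variables(template: str, context: dict) -> set[str]:
    """Return template placeholders that the rendering context no longer
    supplies - the class of break no compiler or unit test will catch."""
    return set(PLACEHOLDER.findall(template)) - set(context)
```

An E2E test that renders and delivers the real email goes further, but even this check turns "we think the template still works" into an assertion.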

Thankfully, AI agents aren't just capable of quickly writing application code. They are also proficient at writing tests. If our projects are designed with automated E2E testing in mind, the cost of adding them today has been significantly reduced.

Xperience by Kentico's CI feature is also part of what makes E2E tests less brittle than they typically are on other platforms.

Developers have full control over code, content, and configuration in their local environment, so tests do not break because someone edited a page in a shared database. The environment is deterministic. This again lowers the cost of automated E2E testing.

Wrap up

Let's summarize the findings from each section above:

  • Structure your projects for local-first team collaboration

  • Don't forget about non-local environments and stakeholder visibility

  • Collaboration requires process and communication, but don't overcomplicate it

  • Keep your Xperience projects up-to-date and use automation to reduce costs

  • Add E2E test coverage to increase confidence and enable agentic acceleration

Each step gives your team confidence to move faster. The teams doing all of this aren't just delivering better projects today; they're building the harness that makes AI-assisted, and eventually agentic, development workflows safe to adopt.

There is a compounding return on getting the fundamentals right.

If you found these expert insights helpful and you're interested to see more from our community program members, read this article about SEO and GEO.

Thanks to Brian McKeiver, Mike Wills, Trevor Fayas, Milan Sustek, Andy Thompson, Patrick Huerto, Roel Kuik, and Roel de Bruijn for their contributions to this article.