Sunday, November 7, 2021

Sitecore Publishing Service - Publishing Sub Items of Related Items



We ran into an issue in our Sitecore 9.1 and Publishing Service 4.0 environment: when a page item with a rendering was published, the rendering's corresponding data source was not published fully.

To be more specific, if the rendering referred to a data source item that had multiple levels of items underneath it, only the root of the data source was published, but not its descendants.

A good example would be a navigation rendering with a data source item that has a lot of children. Content authors were updating all the link items within the Experience Editor, but their changes were not being published.

This was happening for both manual publishing and publishing through workflow.

Configuration Updates

In our research, we discovered that the Publishing Service allows you to specify the templates of the items you wish to publish as descendants of related items.

Adding the following node to sc.publishing.relateditems.xml did the trick (after a restart):
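For illustration, the node has roughly this shape (the template node names and IDs below are placeholders; match the exact parent element path to the existing structure in your version of sc.publishing.relateditems.xml):

```xml
<!-- Illustrative only: template node names and IDs are placeholders.
     Note that each child node name must be unique. -->
<Templates>
  <DatasourceTemplate1>{11111111-1111-1111-1111-111111111111}</DatasourceTemplate1>
  <DatasourceTemplate2>{22222222-2222-2222-2222-222222222222}</DatasourceTemplate2>
  <DatasourceTemplate3>{33333333-3333-3333-3333-333333333333}</DatasourceTemplate3>
</Templates>
```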

It is very important to note that the template nodes need to have unique names in order for this to work.

In other words: DatasourceTemplate1, DatasourceTemplate2, DatasourceTemplate3 etc. 

So as you can imagine, if you want to include a lot of data source item templates, the list in your configuration can get extremely large!

Final Words

I hope that this information helps developers who face a similar issue, as I could not find anything online about this related publishing configuration.

As always, feel free to comment or reach me on Slack or Twitter if you have any questions.                                                    

Tuesday, September 21, 2021

Sitecore xDB - Troubleshooting xDB Index Rebuilds on Azure

In my previous post, I shared some important tips to help ensure that if you are faced with an xDB index rebuild, you can get it done successfully and as quickly as possible.

I covered a lot in that post, but now I want to go over the common reasons why things go wrong, and highlight the most critical items that impact rebuild speed and stability.

Causes of Needing an xDB Index Rebuild

Your xDB index relies on SQL Server's change tracking feature on your shard databases to stay in sync. The retention period determines how long tracked changes are stored in SQL. As mentioned in Sitecore's docs, the Retention Period setting is set to 5 days for each collection shard. 

So, why would 5-day old data not be indexed in time?
  • The Search Indexer is shut down for too long
  • Live indexing is stuck for too long
  • Live indexing falls too far behind
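You can confirm the retention window on each shard yourself. Here is a minimal sketch using the SqlServer PowerShell module (server and shard names are placeholders):

```powershell
# Queries SQL Server's change tracking metadata on each collection shard.
# Requires the SqlServer module (Install-Module SqlServer).
$query = @"
SELECT DB_NAME(database_id)       AS database_name,
       retention_period,
       retention_period_units_desc,
       is_auto_cleanup_on
FROM sys.change_tracking_databases
WHERE database_id = DB_ID();
"@

foreach ($shard in 'xdb_collection_Shard0', 'xdb_collection_Shard1') {
    Invoke-Sqlcmd -ServerInstance 'your-server.database.windows.net' `
                  -Database $shard -Query $query
}
```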

Causes of Indexing Being Stuck or Falling Behind, and Rebuild Failures

High Resource Utilization: Collection Shards 
99% of the time, this is due to high resource utilization on your shard databases. Basically, if you see your shard databases running above 80% DTU utilization, you will run into this problem.

High Resource Utilization: Azure Search or Solr
If you have a lot of data, you need to scale your Azure Search service or Solr instance. Sharding is the answer, and I will touch on this further down.

What to check?

If you are on Azure, make sure your xConnect Search Indexer WebJob is running.
Most importantly, check your xConnect Search Indexer logs for SQL timeouts. 

On Azure, the WebJob logs are found in this location: D:\local\Temp\jobs\continuous\IndexWorker\{randomjobname}\App_data\Logs
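For example, from the Kudu console you can quickly scan the most recent log for timeout errors (a sketch; the wildcard covers the random job folder name):

```powershell
# Scan the newest xConnect Search Indexer log for SQL timeout errors.
Get-ChildItem 'D:\local\Temp\jobs\continuous\IndexWorker\*\App_data\Logs\*.txt' |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Select-String -Pattern 'timeout', 'SqlException' |
    Select-Object -First 20
```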

Key Ingredients For Rebuild Indexing Speed and Stability

SQL Collection Shards

Database Health 

Maintaining the database indexes and statistics is critically important. As I mentioned in my previous post:  "Optimize, optimize, optimize your shard databases!!!" 

If you are preparing for a rebuild, make sure that you run the AzureSQLMaintenance Stored Procedure on all of your shard databases.
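A minimal sketch of running it across shards from PowerShell, assuming the stored procedure is already installed on each shard and using the parameters from the community version of the script (all names are placeholders):

```powershell
# Runs the AzureSQLMaintenance procedure on each collection shard.
# Requires the SqlServer module; index/statistics maintenance can run
# for a long time, so the query timeout is raised to its maximum.
foreach ($shard in 'xdb_collection_Shard0', 'xdb_collection_Shard1') {
    Invoke-Sqlcmd -ServerInstance 'your-server.database.windows.net' `
                  -Database $shard `
                  -Query "EXEC dbo.AzureSQLMaintenance @operation = 'all', @mode = 'smart'" `
                  -QueryTimeout 65535
}
```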

Database Size

The amount of data and the number of collection shards is directly related to resource utilization and rebuild speed and stability. 

Unfortunately, there is no supported way to "reshard" your databases after the fact. We are hoping this will be a feature that is added to a future Sitecore release.

xDB Search Index

As with the collection shards, the amount of data and the number of shards is directly related to resource utilization on both Azure Search and Solr. 

Specifically on Solr, you will see high JVM heap utilization.

If your rebuilds are slowing down or failing, or even if search performance on your xDB index is deteriorating, it is most likely due to the amount of data in your index, the number of shards, and how those shards are distributed across nodes.

Search index sharding strategies can be pretty complex, and I might touch on these in a later post.

Reduce Your Indexer Batch Size

This is another item that I mentioned in my previous post. If you drop this down from 1000 to 500 and you are still having trouble, reduce it even further. 

I have dropped the batch size to 250 on large databases to reduce the chance of timeouts (default is 30 seconds) when the indexer is reading contacts and interactions from the collection shards.
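The change itself is a one-line config edit. The fragment below is illustrative only, as the exact file and element names can vary between xConnect versions; locate the batch size setting in your own Search Indexer configuration first:

```xml
<!-- Illustrative: lowers the indexer batch size from the default of 1000
     to reduce the chance of SQL timeouts on large collection shards. -->
<BatchSize>250</BatchSize>
```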

Monday, May 10, 2021

Sitecore PowerShell Extensions - Find Content Items Missing From Sitecore Indexes


Over the course of the last month, we ran into data inconsistencies between what was in our content databases compared to our Solr indexes.

We have content authors from around the globe and content creation happens around the clock by authors via the Experience Editor and imports via external sources.

Illegal Characters Causing Index Issues

As mentioned in this KB article, index documents are submitted to Solr in XML format, and if your content contains any “illegal” characters that cannot be represented in XML, all documents in the batch submission will fail.  

When you perform an index rebuild or re-index a portion of your tree, Sitecore submits documents to Solr in batches of 100. How is this related to the character issue? If you perform an index rebuild and have a single bad character in one of the items in a batch, none of the 100 docs in that batch will make it into your Solr index. 
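For illustration, the characters that XML 1.0 cannot represent can be stripped with a regex along these lines (a sketch of the general technique, not the exact fix from the article referenced below):

```powershell
# Removes characters that are not legal in an XML 1.0 document.
# Legal ranges: #x9 | #xA | #xD | #x20-#xD7FF | #xE000-#xFFFD
function Remove-InvalidXmlCharacters {
    param([string]$Text)
    return [regex]::Replace($Text, '[^\x09\x0A\x0D\x20-\uD7FF\uE000-\uFFFD]', '')
}

# The vertical-tab control character (0x0B) gets stripped out:
Remove-InvalidXmlCharacters -Text "good$([char]0x0B)text"   # -> "goodtext"
```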

What makes this especially difficult to troubleshoot is that the batches contain different items every time. So, what is missing from your index after one rebuild could be different after the next rebuild.  

There is a good Stack Exchange article that explains all of this, and kudos to Anders Laub who provides a pretty decent fix for this issue: 

PowerShell Index Item Check Script

There are several other reasons why content could be missing from your Sitecore indexes, so I needed to come up with a way to identify what could be missing.

PowerShell Extensions for the win!

I decided to create a PowerShell script to do just that - check for items in a selected target database that are missing from a selected index, and produce a downloadable report.

What’s nice is that I added an interactive dialog, making it friendly for authors or DevOps to make their comparison selections.

If you are newish to PowerShell Extensions, this could also help you understand how powerful it truly is, and serve as a guide to build your own scripts that you can use daily!
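To give a flavor of the approach, here is a heavily trimmed sketch of the core check. It assumes it runs inside the SPE console/ISE; the index name, database path, and the _group field convention are the usual defaults, but verify them in your own setup:

```powershell
# Sketch: walk the content tree and flag items with no matching index document.
$indexName = "sitecore_master_index"
$missing = New-Object System.Collections.Generic.List[object]

Get-ChildItem -Path "master:\sitecore\content" -Recurse | ForEach-Object {
    # _group holds the item ID as a lowercase GUID without dashes
    $criteria = @{ Filter = "Equals"; Field = "_group"; Value = $_.ID.Guid.ToString("N") }
    if (-not (Find-Item -Index $indexName -Criteria $criteria -First 1)) {
        $missing.Add([pscustomobject]@{ Path = $_.Paths.FullPath; ID = $_.ID })
    }
}

# Show-ListView gives an interactive, exportable report inside SPE
$missing | Show-ListView -Property Path, ID
```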