Tuesday, May 23, 2017

Sitecore Ecommerce Reporting using Google Tag Manager's Data Layer

A requirement on my last couple of projects was to implement Google Tag Manager's (GTM) data layer in order to get eCommerce data to report in Google Analytics.

I have seen several custom implementations of this in Sitecore, most of which have developers writing ugly spaghetti code in a view rendering that spits out the required data layer push script.

In this post, I will show you a clean way of implementing the data layer that you can use as a guide in your own implementation if it is a requirement on your project.

This post assumes that your analytics team has configured the data layer in GTM, and that they have provided you with the requirements to generate a data layer script on specific pages, driven by targeted events.


What Is A Data Layer?

Before we get started, it's useful to understand what the data layer actually is.

The data layer is a piece of script that contains any information or variables that you want Google Tag Manager to read, and then report to Google Analytics - including eCommerce data.

Here is a great post that will help you to understand the true value of the data layer: Data Layer Demystified

Use Case

As previously mentioned, I was required to generate a data layer push script to track a visitor's activity on a Sitecore eCommerce site, and to send purchase information to the data layer on the "Thank You" page after a purchase was completed.

As you can tell from the sample eCommerce script that was provided to me (below), it is dynamic based on what items a visitor purchased.

 digitalData = [{  
  page: {  
   category: {  
    pageType: 'menu item',  
   },  
   pageInfo: {  
    experienceType: 'desktop',  
    sysEnv: 'prod'  
   }  
  },  
  user: {  
   profile: {  
    profileInfo: {  
     loginStatus: 'logged-in',  
     profileID: '12345'  
    }  
   }  
  },  
  ecommerce: {  
   purchase: {  
    actionField: {  
     id: 'T12345', //Unique transaction ID. Required for purchases and refunds.  
     affiliation: 'Catering',  
     revenue: '35.43', // Total transaction value, including tax and shipping.  
     tax:'4.90',  
     shipping: '0', //Always set to '0'.  
     coupon: '' //Always set to empty string  
    },  
    products: [{  
     'name': 'Fruit Tray', //Product Name  
     'id': '12345', //Product SKU  
     'price': '26', //Product Price  
     'brand': '', //  
     'category': 'Trays', //Product Category  
     'variant': 'Small',  
     'quantity': 1  
    },{  
     'name': 'Barbeque Sauce', //Product Name  
     'id': '12345', //Product SKU  
     'price': '2', //Product Price  
     'brand': '', //  
     'category': 'Add On', //Product Category  
     'variant': '',  
     'quantity': 1  
    }]  
   }  
  }  
 }];  

My goal was to build the top section of the script on every eCommerce page, giving the data layer information about my visitor, and then add in the eCommerce section of the script when a visitor completed a transaction.

POCO Time

You will notice that the data layer script itself is in JSON format. So, I decided to create a nice, clean POCO that I would populate with the required data and then serialize and output to my pages.

I created the following C# classes from the JSON:

   public class DataLayerModel  
   {  
     public Page page { get; set; }  
     public User user { get; set; }  
     public Ecommerce ecommerce { get; set; }  
   }  
   public class Category  
   {  
     public string pageType { get; set; }  
   }  
   public class PageInfo  
   {  
     public string experienceType { get; set; }  
     public string sysEnv { get; set; }  
     public string destinationURL { get; set; }  
     public string pageName { get; set; }  
   }  
   public class Page  
   {  
     public Category category { get; set; }  
     public PageInfo pageInfo { get; set; }  
   }  
   public class ProfileInfo  
   {  
     public string loginStatus { get; set; }  
     public string profileID { get; set; }  
   }  
   public class Profile  
   {  
     public ProfileInfo profileInfo { get; set; }  
   }  
   public class User  
   {  
     public Profile profile { get; set; }  
   }  
   public class ActionField  
   {  
     public string id { get; set; }  
     public string affiliation { get; set; }  
     public string revenue { get; set; }  
     public string tax { get; set; }  
     public string shipping { get; set; }  
     public string coupon { get; set; }  
   }  
   public class Product  
   {  
     public string name { get; set; }  
     public string id { get; set; }  
     public string price { get; set; }  
     public string brand { get; set; }  
     public string category { get; set; }  
     public string variant { get; set; }  
     public int quantity { get; set; }  
   }  
   public class Purchase  
   {  
     public ActionField actionField { get; set; }  
     public List<Product> products { get; set; }  
   }  
   public class Ecommerce  
   {  
     public Purchase purchase { get; set; }  
   }  

Hydrating The Data Layer

The next order of business was to write the code that would create and populate an object based on my data layer class. I added this to my repository layer using two methods.

The first method (GetDataLayerModel) generates the data layer object, and takes an eCommerce object as a parameter. If the eCommerce object is populated, it is added to the parent data layer object.

   public DataLayerModel GetDataLayerModel(Ecommerce commerceModel)
   {
     var currentItem = Sitecore.Context.Item;
     var pageName = HttpUtility.JavaScriptStringEncode(currentItem.Fields[DisplayConstants.TitleField].Value);
     var pageType = currentItem.TemplateName;
     var experienceType = DisplayConstants.DesktopExperienceType;
     var sysEnv = Sitecore.Configuration.Settings.GetSetting("DataLayerEnvironmentName");
     var destinationUrl = HttpContext.Current.Request.Url.AbsoluteUri;
     var loginStatus = DisplayConstants.NotLoggedIn;
     var profileId = "";

     var crnUser = new CrnUserModel();

     if (crnUser.IsAuthenticated && !crnUser.IsSitecoreDomain)
     {
       loginStatus = DisplayConstants.LoggedIn;
     }

     if (Tracker.Current != null && Tracker.Current.Contact != null)
     {
       profileId = Tracker.Current.Contact.Identifiers.Identifier;
     }

     var dataLayerModel = new DataLayerModel
     {
       page = new Page
       {
         pageInfo = new PageInfo
         {
           experienceType = experienceType,
           sysEnv = sysEnv,
           destinationURL = destinationUrl,
           pageName = pageName
         },
         category = new Category
         {
           pageType = pageType
         }
       },
       user = new User
       {
         profile = new Profile
         {
           profileInfo = new ProfileInfo
           {
             profileID = profileId,
             loginStatus = loginStatus
           }
         }
       }
     };

     if (commerceModel != null)
     {
       dataLayerModel.ecommerce = commerceModel;
     }

     return dataLayerModel;
   }
The second method (GetCommerceDataLayer) takes the transaction data and creates the data layer eCommerce object that is passed over to the GetDataLayerModel method shown above.

   public Ecommerce GetCommerceDataLayer(orderModel orderModel, menuResponse oloMenuResponse)
   {
     var ecommerce = new Ecommerce
     {
       purchase = new Purchase
       {
         products = new List<Product>(),
         actionField = new ActionField
         {
           affiliation = DisplayConstants.CateringAffiliation,
           tax = oloMenuResponse.Order.TaxAmount.ToString(CultureInfo.InvariantCulture),
           revenue = oloMenuResponse.Order.TotalAmount.ToString(CultureInfo.InvariantCulture),
           id = oloMenuResponse.Order.SubmitOrderNumber.ToString(CultureInfo.InvariantCulture),
           shipping = "0",
           coupon = string.Empty
         }
       }
     };

     foreach (var lineItem in orderModel.LineItems)
     {
       var dataLayerProduct = new Product
       {
         id = lineItem.ItemTag,
         name = lineItem.Name,
         price = lineItem.RetailPrice.ToString(CultureInfo.InvariantCulture),
         quantity = lineItem.Quantity,
         variant = string.Empty,
         category = string.Empty
       };

       ecommerce.purchase.products.Add(dataLayerProduct);

       if (!lineItem.Modifiers.Any())
       {
         continue;
       }

       foreach (var modifier in lineItem.Modifiers)
       {
         var dataLayerModifierProduct = new Product
         {
           id = modifier.ItemTag,
           name = modifier.Name,
           price = modifier.RetailPrice.ToString(CultureInfo.InvariantCulture),
           quantity = modifier.Quantity,
           category = DisplayConstants.CateringAddOn,
           variant = string.Empty
         };

         ecommerce.purchase.products.Add(dataLayerModifierProduct);
       }
     }

     return ecommerce;
   }

Coding the Controller and View

The next order of business was to create a controller and view that would be added to my target pages. I coded a simple controller action that grabs my eCommerce transaction data and passes it over to the data layer methods created earlier.

Action

My controller action looked like this:

   public ActionResult DataLayer()
   {
     Ecommerce eCommerceTransaction = null;

     //Check for order info in session
     var orderResponse = _contactRepository.GetSubmitOrderResponse();
     var order = _contactRepository.GetSubmitOrder();

     if (order != null && orderResponse != null)
     {
       eCommerceTransaction = _analyticsRepository.GetCommerceDataLayer(order, orderResponse);

       //Kill session objects
       _contactRepository.SetSubmitOrderResponse(null);
       _contactRepository.SetSubmitOrder(null);
     }

     var model = _analyticsRepository.GetDataLayerModel(eCommerceTransaction);
     return View(model);
   }

View

My view was very simple; it took my data layer object, and serialized it.

 @model MyProject.Foundation.Domain.Models.DataLayer.DataLayerModel
 @using Newtonsoft.Json

 @if (Model != null)
 {
   <script>
     window.digitalData = @Html.Raw(string.Format("[{0}];", JsonConvert.SerializeObject(Model)))
   </script>
 }
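
One tweak worth considering: on non-purchase pages, the serializer above emits "ecommerce":null (you can see this in the first sample at the end of this post). If your analytics team would rather have the key omitted entirely, Newtonsoft.Json can do that with a settings object; a minimal sketch:

   // Optional: skip null properties so that non-purchase pages omit the
   // ecommerce key instead of emitting "ecommerce": null.
   var settings = new JsonSerializerSettings { NullValueHandling = NullValueHandling.Ignore };
   var json = JsonConvert.SerializeObject(Model, settings);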

Adding the Sitecore Component

With the code in place, I created the Controller Rendering item in Sitecore.

Adding the Component to your Target Pages

In my implementation, I statically bound my Controller Rendering to my eCommerce Layout.

You can most certainly add the controller rendering to a specific placeholder that you have set up. Just be aware that the data layer code needs to be within the head element of your page, right before your Google Tag Manager code:

 <head>
   <!-- Your regular head stuff here -->

   @* Constant value is the ID of my Controller Rendering shown above: {CFE19999-FC9C-42C0-A3C0-A6F0FCFB8519} *@
   @Html.Sitecore().Rendering(MyProject.Feature.Analytics.Constants.RenderingConstants.DataLayer)

   @* Google Tag Manager script (I am loading it in a regular partial view) *@
   @{ Html.RenderPartial("~/Views/Identity/GoogleTagManager.cshtml"); }
 </head>

Final Results

With all the pieces of the puzzle in place, the only thing left to do was to check the script in my pages to confirm that it was being built out correctly.

Here is a sample script generated by a regular eCommerce page:

 <script>
   window.digitalData = [{"page":{"category":{"pageType":"Standard"},"pageInfo":{"experienceType":"desktop","sysEnv":"Dev","destinationURL":"https://SC81U2/OrderStuff","pageName":"Category"}},"user":{"profile":{"profileInfo":{"loginStatus":"logged-in","profileID":"ID-123456789"}}},"ecommerce":null}];
 </script>

Here is a sample script generated from an eCommerce "Purchase Complete / Thank You" page, with the eCommerce / transaction data loaded:

 <script>
   window.digitalData = [{"page":{"category":{"pageType":"Standard"},"pageInfo":{"experienceType":"desktop","sysEnv":"Dev","destinationURL":"https://SC81U2/Thankyou","pageName":"Category"}},"user":{"profile":{"profileInfo":{"loginStatus":"logged-in","profileID":"ID-123456789"}}},"ecommerce":{"purchase":{"actionField":{"id":"2218523","affiliation":"Catering","revenue":"116.65","tax":"7.15","shipping":"0","coupon":""},"products":[{"name":"Fruit Cup","id":"FRUIT_CUP","price":"2.19","brand":null,"category":"Breakfast","variant":"Small Fruit Cup","quantity":50}]}}}];
 </script>


Tuesday, March 7, 2017

Securing your mLab Cloud Service for Sitecore MongoDB databases

The recent string of ransomware attacks on MongoDB databases, which left over 30,000 servers compromised, has made many Sitecore clients skittish about the security of their hosted Sitecore MongoDB databases.


Almost all of the posts out there reference the generic MongoDB security checklist as what you should implement to protect your MongoDB installation.

With this being said, the following questions should be on your mind:

  1. How does this list apply to my mLab Cloud hosted MongoDB service?
  2. Are my mLab MongoDB databases as secure as they possibly can be?

With the new Sitecore Azure PaaS offering picking up steam, it's more important than ever to understand mLab's security considerations, as mLab on Azure is the default option that clients are turning to.

The purpose of this post is to help you understand how secure your client's current mLab environment is, or how to secure a new database cluster that you may be working on.

The items being referenced can be found within mLab's security documentation at http://docs.mlab.com/security.

Dedicated Cluster

Every client should be on a Dedicated Cluster plan of some sort.

These plans offer a number of optional security enhancements, as well as baseline protections; for example, dedicated deployments always have auth enabled, no matter what.

Private Environment

An optional security enhancement is using an mLab Private Environment.

This feature allows an mLab deployment to be created in a VPC, and another VPC can then be peered to it so that connections are limited to the customer-owned VPC. This is especially useful for applications that have dynamic scaling and non-static IP addresses.

You can read more about mLab Private Environments at http://docs.mlab.com/private-environments/ 

Encryption at Rest

This will encrypt any data as it resides on disk: http://docs.mlab.com/security/#encryption-at-rest

The feature is currently only supported on AWS and Google Cloud Platform and NOT Azure.


Encryption during Transit (SSL)

Without this feature enabled, any communication with your mLab deployment that is not originating from within AWS or Azure is going to take place across the open internet and will be susceptible to packet sniffing.

Even with custom firewall rules in place to limit access to only the IP address(es) (or address ranges) you specify, traffic between the database and the client applications and networks is vulnerable to snooping.

Whether creating a new deployment or upgrading an existing deployment, you can enable SSL support for MongoDB connections directly from the mLab management portal.

It's an extra $80 a month, but it's well worth the investment to ensure privacy and data integrity.

The details around this feature can be found here: http://docs.mlab.com/ssl-db-connections/.
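
Once SSL is enabled on the deployment, the change on the Sitecore side is simply appending ssl=true to your xDB connection strings. A sketch of what that looks like in ConnectionStrings.config (host, port, database names and credentials below are placeholders):

 <!-- Placeholders only: substitute your own mLab host, port and credentials. -->
 <add name="analytics" connectionString="mongodb://user:pass@ds012345-a0.mlab.com:12345/sitecore_analytics?ssl=true" />
 <add name="tracking.live" connectionString="mongodb://user:pass@ds012345-a0.mlab.com:12345/sitecore_tracking_live?ssl=true" />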

Custom Firewall Rules

This feature gives you the ability to define custom firewall rules so that your database only allows network access from your application infrastructure.

Access can be limited to specific IP address ranges and/or to Amazon EC2 security groups (AWS only).

If you are using AWS, your Security Group must be in EC2-Classic and exist in AWS us-east-1 (the same AWS Region as your database). If your app is in EC2-VPC, consider migrating this deployment to an mLab Private Environment: http://docs.mlab.com/private-environments/

More information about this feature can be found at http://docs.mlab.com/security/#custom-firewalls

Two-factor Authentication for the mLab management console

2FA is optional by default for account users.

Access to the mLab management console provides full and complete access to any deployment within the account, including the ability to create and download backups as well as delete/modify deployments. 

Making 2FA a requirement will reduce the potential for undesired access.

Final Note

These security enhancements are all optional, but recommended.

mLab's baseline security practices provide a reasonable degree of security, but as you very well know, security is not a binary subject and there are always ways to increase the overall security of a deployment.


Monday, January 23, 2017

Sitecore Cleanup Monitor - Proactively keeping an eye on your Event Queue, History and Publish Queue tables


Background

There are several horror stories floating around the web about the Event Queue bringing Sitecore down to its knees.

Brian Pedersen
https://briancaos.wordpress.com/2016/08/12/sitecore-event-queue-how-to-clean-it-and-why/
https://briancaos.wordpress.com/2014/10/23/sitecore-eventqueue-deadlocks-how-to-solve-them-and-how-to-avoid-them/

Andy Cohen
https://blog.horizontalintegration.com/2016/02/09/sitecore-eventqueue-strikes-again/

I have experienced trouble myself:
http://sitecoreart.martinrayenglish.com/2016/08/diagnosing-content-management-server.html

The Last Straw 

There is a bug in pre-8.1 U3 releases (I am on 8.1 U2) that causes the Event Queue table in the Core database to be flooded with timestamp data from your Sitecore servers in a scaled environment.

The issue was related to the property:changed event being added to the Event Queue: every 10 seconds, each Sitecore instance would call the SetTimestampForLastProcessing method.

There is no need to inform other instances about an update to the local instance's last-processed timestamp, so Sitecore Support provided me with a patch that simply uses the event disabler to suppress the event.
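
In essence, the fix boils down to something like this (illustrative only; the real implementation ships in the support assembly, and the database, property name and timestamp below are stand-ins):

 // Setting a database property normally raises a property:changed event;
 // inside an EventDisabler scope, the event never lands in the Event Queue.
 using (new Sitecore.Data.Events.EventDisabler())
 {
   database.Properties[propertyName] = timestamp;
 }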

Here is a copy of the patch for download if you are having this problem: https://www.dropbox.com/s/lpjhil5rf9dri0n/Sitecore.Support.99697.zip?dl=0

After experiencing this and other problems in the past, I decided to take action.

Sitecore Cleanup Monitor Module 

The Event Queue was my initial focus but, per Sitecore's Performance Tuning Guide, keeping Sitecore running optimally means keeping the Event Queue, History and Publish Queue tables below 1000 rows: https://sdn.sitecore.net/upload/sitecore7/70/cms_tuning_guide_sc70-72-a4.pdf. The reason is SQL deadlocking: https://technet.microsoft.com/en-us/library/ms177433(v=sql.105).aspx.

With all this being said, I decided to put together a module that would keep an eye on these key tables.

The module consists of 3 agents that will monitor the Event Queue, Publish Queue and History tables to ensure that they don't exceed a set threshold.

Why would you use it?

In many cases, Sitecore's default cleanup agents just aren't efficient enough in cleaning up these key Sitecore tables.

This module allows you to be proactive instead of reactive, so that you don't have to log into your SQL instance to manually run cleanup queries, usually after the $#!% has hit the fan.

How does it work? 

When due, the agent will check the row count of the target table in each database (core, master and web), and if the count is above the set threshold, it will remove the oldest rows, bringing the row count down to the threshold. It won't do anything to tables with row counts that are below the threshold.
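
Conceptually, each agent run boils down to a query like this (illustrative; the module's actual implementation is in the GitHub source linked below):

 -- Trim the EventQueue down to its newest 1000 rows.
 DELETE FROM [EventQueue]
 WHERE [Id] NOT IN
 (
   SELECT TOP 1000 [Id] FROM [EventQueue] ORDER BY [Created] DESC
 )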

You can set how often you want each agent to run, and what you want your threshold / table row count to be. You also don't need to use all three agents. If you only want to monitor the Event Queue, for example, simply comment out or remove the other agents from the module's config file, where each registration looks something like the sketch below.
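
The type and parameter names here are illustrative placeholders; use the config file that ships with the module:

 <agent type="Sitecore.Cleanup.Agents.EventQueueCleanupAgent, Sitecore.Cleanup" method="Run" interval="01:00:00">
   <param desc="threshold">1000</param>
 </agent>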

You can monitor its activity by examining your Sitecore logs. Here is a snapshot example:


Installation and Configuration

Documentation, full source code and package download is available from my GitHub repository: https://github.com/martinrayenglish/Sitecore.Cleanup

The module is available on the Sitecore Marketplace: https://marketplace.sitecore.net/Modules/S/Sitecore_Cleanup_Monitor.aspx

Wednesday, December 14, 2016

Implementing 3000+ Redirects in Sitecore

When standing up a new site, redirects always seem to be an afterthought - one of those items on the list that you talk about in the early phases, and then again when you are ready to tackle them in the last few weeks when the launch is right around the corner.

As a Sitecore developer, most of the time it's up to you to set up the module of choice, and then simply train your content authors on how to use it to load the redirects.

However, when dealing with a large corporate site, and in my case where we combined a couple of sites into one, you have to find a relatively quick way to get thousands of redirects handled by your shiny new Sitecore site.

In this post, I will provide the strategy that I took to import and implement a massive amount of redirects successfully within Sitecore.

You can go ahead and grab all the Url Rewrite module code changes that I mentioned in this post via my fork on GitHub: https://github.com/martinrayenglish/UrlRewrite

You can review the code changes here: https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152

You can grab the PowerShell script here.

Url Rewrite Module

There are plenty of Sitecore redirect modules out there, but Andy Cohen's Url Rewrite module is my favorite because of its rich feature set, great architecture, and the fact that its source code is available when you need to make customizations: https://marketplace.sitecore.net/Modules/Url_Rewrite.aspx

As shown above, it is available on the Sitecore Marketplace. I would recommend grabbing the branch / tag that is specific to your version of Sitecore by navigating over to the GitHub repository: https://github.com/iamandycohen/UrlRewrite.

If you view the changelog, you will be able to find out what version supports your instance.

That is what I did in my case - I worked with Version 1.8.1.3 when I had to make the customizations mentioned below for my 8.1 U2 implementation.

Handling Bucket Items

As we know, item buckets let you manage large numbers of items in the content tree, and this was a natural fit for the massive number of redirect items that I intended to load into Sitecore.

Now, focusing on the module's code: there is a recursive method called "AssembleRulesRecursive" within the RulesEngine.cs file that is responsible for aggregating all the redirect items and rules. I ended up updating this area of the module to check within both bucket and node items for redirect items and rules.

This can be seen in my change on line 91 of RulesEngine.cs: https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152#diff-b5f5d381da80e314aac4e60905fb7ea7
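
In spirit, the change makes the recursion treat buckets and bucket folders like any other container. A simplified sketch with illustrative names (the real diff is in the commit above):

 // Simplified illustration: collect redirect items wherever they live,
 // recursing through folders, buckets and bucket nodes alike.
 private static void CollectRedirectsRecursive(Item item, List<Item> redirects)
 {
   foreach (Item child in item.Children)
   {
     if (child.TemplateName == "Simple Redirect")
     {
       redirects.Add(child);
     }
     else
     {
       // The original code only recursed into known folder templates;
       // the fix also descends into bucket and bucket-node items.
       CollectRedirectsRecursive(child, redirects);
     }
   }
 }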

Next, I needed to set the standard values of the module's Simple Redirect template to be Bucketable.


After this, I went ahead and added a new bucket content item at my "global" location in my content tree that would hold the redirect items that I intended to import into Sitecore.

PowerShell Import

The next step in this operation was to get the actual redirect items loaded into Sitecore. I created a PowerShell script that would target a CSV file loaded into the media library and create items for each data record.

I have used several derivations of Pavel Veller's script for handling imports in the past. If you are new to Sitecore PowerShell, I recommend taking a look at his post: http://jockstothecore.com/content-import-with-powershell-treasure-hunt/.

My final script simply required my CSV file to contain "name", "old" and "new" columns that I would use to create the redirect items within my bucket. The value in the "name" column would be used for the redirect item name, "old" would hold the old url, and "new" would hold the new / target url. Here is a screenshot of a sample from my CSV file:


With everything in place, I uploaded my CSV file containing my redirects into the media library, ran my script, and my many, many redirect items started to appear in my bucket.
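
For reference, here is a trimmed-down sketch of what the script boiled down to; the media path, bucket path, template path, and the "Path" / "Target" field names are all assumptions to adjust for your own solution:

 # Read the uploaded CSV out of the media library blob.
 $mediaItem = Get-Item "master:/media library/Import/Redirects"
 $reader = New-Object System.IO.StreamReader($mediaItem.Fields["Blob"].GetBlobStream())
 $rows = $reader.ReadToEnd() | ConvertFrom-Csv

 foreach ($row in $rows) {
     # Create a Simple Redirect item in the bucket for each CSV record.
     $redirect = New-Item -Path "master:/content/Global/Redirects" -Name $row.name -ItemType "Modules/Url Rewrite/Inbound/Simple Redirect"
     $redirect.Editing.BeginEdit()
     $redirect["Path"] = $row.old    # the incoming url to match
     $redirect["Target"] = $row.new  # the url to redirect to
     $redirect.Editing.EndEdit()
 }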


Handling Redirects with Static File Extensions 

The module has a built-in handler for static file extensions, which is covered in Brent Scav's post: https://blog.horizontalintegration.com/2014/12/19/sitecore-url-rewrite-module-for-static-file-extensions/.

You can simply add handler entries to your web.config to let it handle whatever static extensions you need to redirect in your instance.

Unfortunately, this didn't work for me in the latest version, as it kept throwing a Tracker.Current "null" error when trying to start the Analytics tracker within the RegisterEventOnRedirect method in Tracking.cs, line 30: https://github.com/martinrayenglish/UrlRewrite/blob/master/Hi.UrlRewrite/Analytics/Tracking.cs

I believe that this was because the handler was hit before Sitecore's InitializeTracker pipeline had been run.

I went ahead and added a way for the handler to tell the InboundRewriter not to try to start the Analytics tracker when it is handling a static extension redirect. This was done by adding an entry to the HttpRequestArgs custom data's SafeDictionary within the handler, UrlRewriteHandler.cs, on line 28:

https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152#diff-1fca180afe168b7567be7ea87006de50

and looking for it within InboundRewriteProcessor.cs on line 54:

https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152#diff-f325026a733120e6591270e76c2d8347
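
Conceptually, the two sides of that change look like this (the key name below is illustrative; the real one is in the commits above):

 // Handler side (UrlRewriteHandler.cs): flag the request before the
 // inbound rewrite runs.
 httpRequestArgs.CustomData["IsStaticExtensionRequest"] = true;

 // Processor side (InboundRewriteProcessor.cs): only start the Analytics
 // tracker when the flag is absent.
 var isStaticExtension = httpRequestArgs.CustomData["IsStaticExtensionRequest"] as bool? ?? false;
 if (!isStaticExtension)
 {
   // ...start the Analytics tracker as before...
 }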

After that, the handlers worked like a champ.

Here is an example of a handler entry for PDF files from my web.config (the handler type below is based on the module's UrlRewriteHandler.cs; double-check the exact type name against your module version):
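
 <system.webServer>
   <handlers>
     <!-- Route *.pdf requests through the Url Rewrite module's handler. -->
     <add name="UrlRewritePdfHandler" verb="*" path="*.pdf" type="Hi.UrlRewrite.UrlRewriteHandler, Hi.UrlRewrite" />
   </handlers>
 </system.webServer>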


Bonus - Handling Subdomain redirects

I needed a way to handle non-Sitecore site subdomain redirects within my solution.

To explain what I was doing here: 

We had merged a separate site with a different subdomain into our new site, and wanted to be able to create redirects from the old site's urls to the new urls.

Example:

http://old.mysite.com/folder/some-nice-url (old non-Sitecore site) → https://www.mysite.com/newfolder/some-new-nice-url (new Sitecore site)

Once again, I dug into InboundRewriter.cs and updated the TestRuleMatches method to be able to match on host name as well. I then added a new TestAllRuleMatches method to be called instead: it first checks using the "old way" of matching based on path, and if it doesn't find a match, it checks again using the full url with the host name included.

You can see these changes here: https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152#diff-4580f06f0095411a68df2fa0d1e890dd
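
The shape of the fallback looks roughly like this (signatures simplified and names illustrative; the commit above has the real change):

 // Simplified: try the original path-based match first, then fall back
 // to matching on the full url with the host name included.
 public RuleResult TestAllRuleMatches(InboundRule rule, Uri requestUri)
 {
   var result = TestRuleMatches(rule, requestUri.PathAndQuery);
   if (result != null && result.RuleMatched)
   {
     return result;
   }

   return TestRuleMatches(rule, requestUri.Host + requestUri.PathAndQuery);
 }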

With this in place, all I had to do was add the new "old site" binding in IIS to my Sitecore site and voila, the module handled requests for the old subdomain.

Problem Solved

With my items loaded into Sitecore, the ability to handle static file extensions and non-Sitecore site subdomains, I had reached my final destination on my redirect mission!

You can go ahead and grab all the Url Rewrite module code changes that I mentioned in this post via my fork on GitHub: https://github.com/martinrayenglish/UrlRewrite

You can review the code changes here: https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152

You can grab the PowerShell script here.

Q&A

A good question asked by Kamruz Jaman: Did you consider generating redirect rules for the IIS Rewrite module directly?

The IIS rewrite module was used for forcing SSL behind our AWS elastic load balancer (see this post http://stackoverflow.com/questions/19791820/redirect-to-https-through-url-rewrite-in-iis-within-elastic-beanstalks-load-bal) and to prevent font leeching. Our client made us work with a 3rd party that delivered a redirect map in Excel format of about 6k entries, 3 weeks prior to launch. The old and new urls were vastly different, which would have resulted in some very complex rewrite rules and a web.config 10 miles long. Also, tweaking things after launch (we still are) would be painful, because updating the rules using the IIS module updates the web.config and, as you know, causes a recycle.

This approach was the best solution for our situation.

Friday, October 21, 2016

Taming Your Sitecore Analytics Index by Filtering Anonymous Contact Data

With the release of Sitecore versions 8.1 U3 and 8.2, there is a new setting that will dramatically reduce the activity on your instance's analytics index by filtering out anonymous contact data from it.

To put it simply: you no longer have to have all the anonymous visitor data added to your analytics index.

xDB will still capture and show the anonymous visitor data in the various reporting dashboards, but this data won't be added to your analytics index, and you won't see the anonymous contacts in the Experience Profile dashboard.

The new "ContentSearch.Analytics.IndexAnonymousContacts" setting can be found in the Sitecore.ContentSearch.Analytics.config file, and is set to "true" by default.

To quote the setting comments found in this file:

"This setting specifies whether anonymous contacts and their interactions are indexed.
If true, all contacts and all their interactions are indexed. If false, only identified contacts and their interactions are indexed. Default value: true".
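
Disabling it is a standard include patch, along these lines (the file name is up to you):

 <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
   <sitecore>
     <settings>
       <setting name="ContentSearch.Analytics.IndexAnonymousContacts">
         <patch:attribute name="value">false</patch:attribute>
       </setting>
     </settings>
   </sitecore>
 </configuration>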

One of the key changes to the core code can be seen in the Sitecore.ContentSearch.Analytics assembly. The magic is on line 14:

1:  using Sitecore.Analytics.Model.Entities;  
2:  using Sitecore.ContentSearch.Analytics.Abstractions;  
3:  using Sitecore.Diagnostics;  
4:    
5:  namespace Sitecore.ContentSearch.Analytics.Extensions  
6:  {  
7:   public static class ContactExtensions  
8:   {  
9:    public static bool ShouldBeIndexed(this IContact contact)  
10:    {  
11:     Assert.ArgumentNotNull((object) contact, "contact");  
12:     ISettingsAnalytics instance = ContentSearchManager.Locator.GetInstance<ISettingsAnalytics>();  
13:     Assert.IsNotNull((object) instance, "Settings for contact segmentation index cannot be found.");  
14:     if (instance.IndexAnonymousContacts())  
15:      return true;  
16:     return !string.IsNullOrEmpty(contact.Identifiers.Identifier);  
17:    }  
18:   }  
19:  }  



Why does this matter? 

One of our clients started having severe Apache Solr issues due to the JVM using a massive amount of memory after running xDB for several months. After our investigation, we discovered that the root cause was the analytics index being pounded during the aggregation process.

The JVM memory usage was like a ticking time bomb. As we started collecting more and more analytics data, our java.exe process started using more and more memory. 

At launch, we gave 4GB to the Java heap (for more info, look up the -Xms<size> and -Xmx<size> JVM options). After a few months of running the sites and discovering the memory issue, we felt that perhaps we had set our Xmx too low, and upped the memory limit to 8GB. A few weeks later, we outgrew this limit, and we bumped it up to 16GB.

The high memory usage would eventually cause Solr to stop responding to query requests, and the Sitecore instance to stop functioning. As we know, Sitecore is heavily dependent on its indexing technology (Solr or Lucene), and if that fails, chances are your instance will stop functioning too, unless you have the magical patch that I mentioned in my previous post: http://sitecoreart.martinrayenglish.com/2016/09/bulletproofing-your-sitecore-solr-and.html


Analytics Index Comparison 

After upgrading our instance from 8.1 U1 to 8.1 U3 and disabling this setting, we performed an index size comparison. Our analytics index went from 21,728,706 docs and 8GB in size to 0 docs and 101 bytes (empty). It's important to note that this is because we currently don't have any identified contacts within xDB. I find it hard to believe that we will be seeing sizes like this once we start our contact identification process using CRM system data.


Final Thoughts 

This setting has made a major difference in the stability of our client's high-traffic Sitecore sites. It's up to you and your team to decide how important it is to have those anonymous contact records show up in the Experience Profile dashboard.

To us, it was a no-brainer.

Tuesday, September 6, 2016

Bulletproofing your Sitecore Solr and SolrCloud Configurations


Solr and SolrCloud 

As we know, Sitecore supports both Lucene and Solr search engines. However, there are some compelling reasons to use Solr instead of Lucene that are covered in this article: https://doc.sitecore.net/sitecore_experience_platform/setting_up__maintaining/search_and_indexing/indexing/using_solr_or_lucene

Solr has been the search engine choice for all of my 8.x projects over the last few years and I have recently configured SolrCloud for one of my clients where fault tolerance and high availability was an immensely important requirement.

Although I am a big fan of SolrCloud, it is important to note that Sitecore doesn't officially support SolrCloud yet. For more details, see this KB article: https://kb.sitecore.net/articles/227897.

So, should SolrCloud still be considered in your architecture?

My answer to this question is YES!

My reasoning is that members of Sitecore's Technical and Professional Services team have implemented a very stable patch to support SolrCloud that has been tested and used in production by extremely large-scale SolrCloud implementations. More about this later.

In addition, if you are running xDB, your analytics index will get very large over time, and the only way to handle this is to break it up into multiple shards. SolrCloud is needed to handle this.

The Quest to Keep Solr Online 

One of our high-traffic clients running xDB started having Solr issues recently, and this sparked my research and work with the Sitecore Technical Services team to obtain a patch to keep Sitecore running when Solr is having trouble.

As a side note: the issues that we started seeing were related to the analytics index getting pounded. The most common error that we saw was the following:

 ERROR <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">  
 <html><head>  
 <title>502 Proxy Error</title>  
 </head><body>  
 <h1>Proxy Error</h1>  
 <p>The proxy server received an invalid  
 response from an upstream server.<br />  
 The proxy server could not handle the request <em><a href="/solr/sitecore_analytics_index/select">GET&nbsp;/solr/sitecore_analytics_index/select</a></em>.<p>  
 Reason: <strong>Error reading from remote server</strong></p></p>  
 </body></html>  

This only popped up after running xDB for several months, as our analytics index started getting fairly large. Definitely something to keep in mind when you are planning for growth and, as mentioned above, why SolrCloud is the best option for a large-scale, enterprise Sitecore search configuration.

Giving the Java Virtual Machine (JVM) running Apache Solr more memory seemed to help, but this error would continue to rear its nasty head every so often during periods of high traffic.

Sitecore is very sensitive to Solr connection issues, and will be brought to its knees and throw an exception if it has any trouble!

The Bulletproof Solr Patches 


Single Instance Solr Configuration - Patch #391039 

My research into keeping Sitecore online during Solr issues led me to this post by Brijesh Patel, published back in March. After reading through it, I decided to contact Sitecore Support about patch #391039, as it seemed to be just what I wanted for my client's single Solr server configuration.

Working with Andrew Chumachenko from support, our tests revealed that the patch published here didn't handle index "SwitchOnRebuilds". To me, this was a deal breaker.

Andrew discovered that there were several versions of patch #391039 (early versions of the patch were implemented for Sitecore 7.2), and found at least three different variations.

We found that the most recent version of the patch did in fact support "SwitchOnRebuilds", and Andrew made this available to everyone in the community on GitHub: https://github.com/andrew-at-sitecore/Sitecore.Support.391039

This is a quote from Brijesh's post to explain how it works:

"...it checks if Solr is up on Sitecore start. If no, it skips indexes initializing. However, it may lead to exceptions in log files and inconsistencies while working with Sitecore when Solr is down.

Also, there is an agent defined in the ‘Sitecore.Support.391039.config’ that checks and logs the status of Solr connection every minute (interval value should be changed if needed).

If the Solr connection is restored — indexes will be initialized, the corresponding message will be logged and the search and indexing related functionality will work fine."

SolrCloud Solr Configuration - Patch #449298 

This patch works the same way as patch #391039 described above, but supports SolrCloud.

You may be asking yourself, "isn't the point of having a highly available Solr configuration to ensure that my Solr search doesn’t have issues?"

Well, of course. But due to the way SolrCloud operates, this patch acts as a fail-safe if something goes wrong - for example, while your ZooKeeper ensemble is determining who the leader is after you lose an instance. If Sitecore has trouble querying Solr for even a second, it will throw an exception.

So, patch #449298 accounts for this and also allows index "SwitchOnRebuilds" just like the common, single instance Solr server configurations.

GitHub for this patch: https://github.com/SitecoreSupport/Sitecore.Support.449298 

It is important to note that this patch requires an IoC container that injects the proper implementations of the SolrNet interfaces. It depends on patch Sitecore.Support.405677. You can download the assemblies for your IoC container from this direct link: https://github.com/SitecoreSupport/Sitecore.Support.405677/releases

Looking Ahead 

Out-of-the-box support for Solr (taking these patches into account) is to be added in the upcoming Sitecore 8.2 U1, so that is definitely something to look forward to in that release.

A special thanks to Paul Stupka, who is the mastermind behind these patches, and rockstar Andrew Chumachenko for all his help.

Tuesday, August 2, 2016

Diagnosing Content Management Server Memory Issues After a Large Publish


Background

My current project involved importing a fairly large number of items into Sitecore from an external data source. We were looking at roughly 600k items that weren't volatile at all; we would have a handful of updates per week.

At the start of development, we debated between using a data provider or going with the import, but after doing a POC using the data provider, it was clear that an import was the best option.

The details of what we discovered would make a great post for another time.

NOTE: The version we were running was Sitecore 8.1 Update 2.

The Problem 

After running the import on our Staging Content Management server, we were able to successfully populate 594k items in the master database without any issues.

The problem reared its ugly head after we published the large number of items.

After the successful publish, we noticed an instant memory spike on the Content Management server once the application pool had recycled. Within about 10 seconds, memory usage would reach 90%, and it would continue to climb until IIS simply gave up the ghost.

Mind you, our Staging server was pretty decent, an AWS EC2 Windows instance loaded with 15GB of RAM.

So what would cause this?


Troubleshooting 

I confirmed that my issue was in fact caused by the publish by restoring a backup of the web database from before the publish had occurred and recycling the application pool of my Sitecore instance. 

I decided to take a look at what objects were filling up the memory, and so I loaded and launched dotMemory from JetBrains and started my snapshot.

The snapshot revealed some QueuedEvent lists that were eating up the memory:

Next, I decided to fire up SQL Server Profiler to investigate what was happening on the database server.

Running Profiler for about 10 seconds while Sitecore was starting up showed the following query being executed 186 times within the same process:

SELECT TOP(1) [EventType], [InstanceType], [InstanceData], [InstanceName], [UserName], [Stamp], [Created] FROM [EventQueue] ORDER BY [Stamp] DESC

Why would Sitecore be executing this query so many times, and then filling up the memory on our server?

I know that Content Management instances have a trigger to check the event queue periodically and collect all events to be processed. But, this seemed very strange.

For more info on how this works, you can check out this article by Yogesh Patel: http://sitecoreblog.patelyogesh.in/2013/07/sitecore-event-queue-scalability-king.html. It's older, but still applicable.

I shifted focus onto the EventQueue table to see what it looked like.

EventQueue Table 

A count on the items in my Web database's EventQueue table returned 1.2M.

99% of the items in the EventQueue table were the following remote event records: 

Sitecore.Data.Eventing.Remote.SavedItemRemoteEvent, Sitecore.Kernel, Version=8.1.0.0, Culture=neutral, PublicKeyToken=null 

Sitecore.Data.Eventing.Remote.CreatedItemRemoteEvent, Sitecore.Kernel, Version=8.1.0.0, Culture=neutral, PublicKeyToken=null 

I ran the following queries to tell me how many "SavedItem" event entries and how many "CreatedItem" event entries existed in the table that were ultimately put there by my publish:

SELECT * FROM [Sitecore_Web].[dbo].[EventQueue]
WHERE EventType LIKE '%SavedItem%' AND UserName = 'sitecore\arke'
ORDER BY Created DESC

SELECT * FROM [Sitecore_Web].[dbo].[EventQueue]
WHERE EventType LIKE '%CreatedItem%' AND UserName = 'sitecore\arke'
ORDER BY Created DESC

Both queries returned 594K items each. This lined up with the number of items that I had recently published, but the fact that there were two entries for each item was the obvious cause of the table having well over 1 million records.

The Solution 

There is a good post on the Sitecore Community site, where Vincent van Middendorp mentions a few Truncate queries to empty the EventQueue table along with the History table: https://community.sitecore.net/developers/f/8/t/1450

Truncating the table seemed a bit too invasive at first, so I went ahead and wrote a quick query to delete the records from the EventQueue table that I knew I had put there (based on my username):

DELETE FROM [Sitecore_Web].[dbo].[EventQueue]
WHERE (EventType LIKE '%CreatedItem%' OR EventType LIKE '%SavedItem%')
AND UserName = 'sitecore\arke'

Running another count on the records in my EventQueue table returned a count of 7.

So, I may well have just run a truncate :)

After firing up the Sitecore instance again, I was happy to report that memory on the server was now stable.


The Moral of the Story 

Keep an eye on that EventQueue after a large publish!

Looking forward to seeing the publishing improvements coming in Sitecore 8.2.