Wednesday, December 14, 2016

Implementing 3000+ Redirects in Sitecore

When standing up a new site, redirects always seem to be an afterthought - one of those items that you talk about in the early phases, and then don't tackle until the final few weeks, when launch is right around the corner.

As a Sitecore developer, most of the time it's up to you to set up the module of choice, and then simply train your content authors on how to use it to load the redirects.

However, when dealing with a large corporate site - in my case, one where we combined a couple of sites into one - you have to find a relatively quick way to get thousands of redirects handled by your shiny new Sitecore site.

In this post, I will walk through the strategy I used to import and implement a massive number of redirects successfully within Sitecore.

You can go ahead and grab all the Url Rewrite module code changes that I mentioned in this post via my fork on GitHub: https://github.com/martinrayenglish/UrlRewrite

You can review the code changes here: https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152

You can grab the PowerShell script here.

Url Rewrite Module

There are plenty of Sitecore redirect modules out there, but Andy Cohen's Url Rewrite module is my favorite one because of its rich feature set, great architecture and the fact that its source code is available when you need to make customizations: https://marketplace.sitecore.net/Modules/Url_Rewrite.aspx

As shown above, it is available on the Sitecore Marketplace. I would recommend grabbing the branch / tag that is specific to your version of Sitecore by navigating over to the GitHub repository: https://github.com/iamandycohen/UrlRewrite.

If you view the changelog, you will be able to find out what version supports your instance.

That is what I did in my case - I worked with Version 1.8.1.3 when I had to make the customizations mentioned below for my 8.1 U2 implementation.

Handling Bucket Items

As we know, item buckets let you manage large numbers of items in the content tree, so they were a natural fit for the massive number of redirect items that I intended to load into Sitecore.

Now, focusing on the module's code - there is a recursive method called "AssembleRulesRecursive" within RulesEngine.cs that is responsible for aggregating all the redirect items and rules. I ended up having to update this area of the module to check within both bucket and node items for redirect items and rules.

This can be seen by my change on line 91 of RulesEngine.cs: https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152#diff-b5f5d381da80e314aac4e60905fb7ea7
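In sketch form, the idea is simply to treat bucket and node items as containers and keep recursing into them. This is illustrative only - the helper names are mine, and the real code is in the commit above:

 // Illustrative sketch only - the actual change lives in RulesEngine.AssembleRulesRecursive.
 private void AssembleRulesRecursive(Item folderItem, List<IRule> rules)
 {
     foreach (Item child in folderItem.Children)
     {
         if (IsRedirectOrRule(child))          // hypothetical helper
         {
             rules.Add(CreateRule(child));     // hypothetical helper
         }
         else
         {
             // Previously only plain folders were traversed; bucket and node
             // items now count as containers too, so recurse into them as well.
             AssembleRulesRecursive(child, rules);
         }
     }
 }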

Next, I needed to set the standard values of the module's Simple Redirect template to be Bucketable.


After this, I went ahead and added a new bucket content item at my "global" location in my content tree that would hold the redirect items that I intended to import into Sitecore.

PowerShell Import

The next step in this operation was to get the actual redirect items loaded into Sitecore. I created a PowerShell script that targets a CSV file loaded into the media library and creates an item for each data record.

I have been using several derivations of Pavel Veller's script for handling imports in the past. If you are new to Sitecore PowerShell, I recommend taking a look at his post: http://jockstothecore.com/content-import-with-powershell-treasure-hunt/.

My final script simply required my CSV file to contain "name", "old" and "new" columns that I would use to create the redirect items within my bucket. The value in the "name" column would be used for the redirect item name, "old" would hold the old url and "new" would hold the new / target url. Here is a screenshot of a sample from my CSV file:


With everything in place, I uploaded my CSV file containing my redirects into the media library, ran my script, and my many, many redirect items started to appear in my bucket.
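For reference, here is a minimal sketch of the approach, assuming Sitecore PowerShell Extensions. The media path, bucket path, template path and field names are my placeholders - adjust them to your solution and to the module's Simple Redirect template:

 # Minimal sketch - assumes Sitecore PowerShell Extensions is installed.
 $csvMedia = [Sitecore.Data.Items.MediaItem](Get-Item -Path "master:\media library\Import\redirects")
 $reader = New-Object System.IO.StreamReader($csvMedia.GetMediaStream())
 $records = $reader.ReadToEnd() | ConvertFrom-Csv
 $reader.Close()

 $bucketPath = "master:\content\Global\Redirects"             # my redirect bucket
 $templatePath = "Url Rewrite/Inbound/Simple Redirect Item"   # verify against your module version

 foreach ($record in $records) {
     $item = New-Item -Path $bucketPath -Name $record.name -ItemType $templatePath
     $item.Editing.BeginEdit()
     $item["Path"] = $record.old      # field names assumed - verify on the template
     $item["Target"] = $record.new
     $item.Editing.EndEdit() | Out-Null
 }
 # Depending on your bucket settings, you may need to sync the bucket afterwards.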


Handling Redirects with Static File Extensions 

The module has a built-in handler for static file extensions, described in Brent Scav's post: https://blog.horizontalintegration.com/2014/12/19/sitecore-url-rewrite-module-for-static-file-extensions/.

You can simply add handler entries to your web.config to allow it to handle whatever static extensions you need to redirect from in your instance.

Unfortunately, this didn’t work for me in the latest version, as it kept throwing a Tracker.Current "null" error when trying to start the Analytics tracker within the RegisterEventOnRedirect method in Tracker.cs, line 30: https://github.com/martinrayenglish/UrlRewrite/blob/master/Hi.UrlRewrite/Analytics/Tracking.cs

I believe that this was because the handler was hit before Sitecore's InitializeTracker pipeline had been run.

I went ahead and added a way for the handler to tell the InboundRewriter not to try and start the Analytics tracker if it was handling a static extension redirect. This was done by adding an entry to the HttpRequestArgs custom data's SafeDictionary within the handler UrlRewriteHandler.cs on line 28:

https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152#diff-1fca180afe168b7567be7ea87006de50

and looking for it within InboundRewriteProcessor.cs on line 54:

https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152#diff-f325026a733120e6591270e76c2d8347

After that, the handlers worked like a champ.

Here is an example of a handler for PDF files from my web.config:
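An entry along these lines should do the trick - the handler name is arbitrary, and the type should be verified against the module version you are running:

 <add name="UrlRewriteHandler_Pdf" verb="*" path="*.pdf" type="Hi.UrlRewrite.UrlRewriteHandler, Hi.UrlRewrite" />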


Bonus - Handling Subdomain redirects

I needed a way to handle non-Sitecore site subdomain redirects within my solution.

To explain what I was doing here: 

We had merged a separate site with a different subdomain into our new site, and wanted to be able to create redirects from the old site's urls to the new ones.

Example:

http://old.mysite.com/folder/some-nice-url (old non-Sitecore site) → https://www.mysite.com/newfolder/some-new-nice-url (new Sitecore site)

Once again, I dug into InboundRewriter.cs and updated the TestRuleMatches method to be able to match on host name as well. I then added a new TestAllRuleMatches method to be called instead: it first checks for a match the "old way", based on path, and if it doesn't find one, it checks again using the full url with the host name included.

You can see these changes here: https://github.com/martinrayenglish/UrlRewrite/commit/d9b649d129b6b49ee7cf3f6beae3a8229750a152#diff-4580f06f0095411a68df2fa0d1e890dd
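Conceptually, the fallback looks something like this sketch (signatures simplified; the real code is in the commit above):

 // Illustrative sketch only.
 public RuleResult TestAllRuleMatches(InboundRule rule, Uri requestUri)
 {
     // First, the original behavior: match on the path portion only.
     var result = TestRuleMatches(rule, requestUri.PathAndQuery);
     if (result != null && result.Matched)
     {
         return result;
     }

     // Fallback: match on the full url with the host name included, so rules
     // written against old.mysite.com can match once that binding points here.
     return TestRuleMatches(rule, requestUri.Host + requestUri.PathAndQuery);
 }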

With this in place, all I had to do was add the new "old site" binding in IIS to my Sitecore site and voila, the module handled requests for the old subdomain.

Problem Solved

With my items loaded into Sitecore, the ability to handle static file extensions and non-Sitecore site subdomains, I had reached my final destination on my redirect mission!


Q&A

A good question asked by Kamruz Jaman: Did you consider generating redirect rules for the IIS Rewrite module directly?

The IIS rewrite module was used for forcing SSL behind our AWS elastic load balancer (see this post http://stackoverflow.com/questions/19791820/redirect-to-https-through-url-rewrite-in-iis-within-elastic-beanstalks-load-bal) and to prevent font leeching. Our client had us work with a 3rd party that delivered a redirect map in Excel format of about 6k entries 3 weeks prior to launch. The old and new urls were vastly different and would have resulted in some very complex rewrite rules - we would have ended up with a web.config 10 miles long. Also, tweaking things after launch (we still are) would be painful, because updating the rules using the IIS module updates the web.config and, as you know, causes an app pool recycle.

This approach was the best solution for our situation.

Friday, October 21, 2016

Taming Your Sitecore Analytics Index by Filtering Anonymous Contact Data

With the release of Sitecore versions 8.1 U3 and 8.2, there is a new setting that will dramatically reduce the activity on your instance's analytics index by filtering out anonymous contact data from it.

To put it simply: you no longer have to have all the anonymous visitor data added to your analytics index.

 xDB will still capture and show the anonymous visitor data in the various reporting dashboards, but this data won't be added to your analytics index, and you won't see the anonymous contacts in the Experience Profile dashboard.

The new "ContentSearch.Analytics.IndexAnonymousContacts" setting can be found in the Sitecore.ContentSearch.Analytics.config file, and is set to "true" by default.

To quote the setting comments found in this file:

"This setting specifies whether anonymous contacts and their interactions are indexed.
If true, all contacts and all their interactions are indexed. If false, only identified contacts and their interactions are indexed. Default value: true".
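If you decide to disable it, a small include patch is cleaner than editing the stock config file directly:

 <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
   <settings>
    <setting name="ContentSearch.Analytics.IndexAnonymousContacts">
     <patch:attribute name="value">false</patch:attribute>
    </setting>
   </settings>
  </sitecore>
 </configuration>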

One of the key changes to the core code can be seen in the Sitecore.ContentSearch.Analytics assembly. The magic is on line 14:

1:  using Sitecore.Analytics.Model.Entities;  
2:  using Sitecore.ContentSearch.Analytics.Abstractions;  
3:  using Sitecore.Diagnostics;  
4:    
5:  namespace Sitecore.ContentSearch.Analytics.Extensions  
6:  {  
7:   public static class ContactExtensions  
8:   {  
9:    public static bool ShouldBeIndexed(this IContact contact)  
10:    {  
11:     Assert.ArgumentNotNull((object) contact, "contact");  
12:     ISettingsAnalytics instance = ContentSearchManager.Locator.GetInstance<ISettingsAnalytics>();  
13:     Assert.IsNotNull((object) instance, "Settings for contact segmentation index cannot be found.");  
14:     if (instance.IndexAnonymousContacts())  
15:      return true;  
16:     return !string.IsNullOrEmpty(contact.Identifiers.Identifier);  
17:    }  
18:   }  
19:  }  
20:    



Why does this matter? 

One of our clients started having some severe Apache Solr issues due to the JVM using a massive amount of memory after running xDB for several months. After our investigation, we discovered that the root cause of the memory usage was due to the analytics index being pounded during the aggregation process. 

The JVM memory usage was like a ticking time bomb. As we started collecting more and more analytics data, our java.exe process started using more and more memory. 

At launch, we gave the Java heap 4GB (for more info, Google the JVM -Xms<size> and -Xmx<size> options). After a few months of running the sites and discovering the memory issue, we felt as though perhaps we had set our -Xmx too low, and upped the memory limit to 8GB. A few weeks later, we outgrew this limit, and bumped it up to 16GB.
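For reference, those limits are the standard JVM heap flags; where you set them depends on how Solr is hosted. Standalone Solr 5+ reads them from solr.in.cmd / solr.in.sh, while a servlet-container deployment passes them via JAVA_OPTS:

 REM solr.in.cmd (Windows); solr.in.sh uses SOLR_JAVA_MEM="-Xms4g -Xmx16g"
 set SOLR_JAVA_MEM=-Xms4g -Xmx16g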

The high memory usage would eventually cause Solr to stop responding to query requests and the Sitecore instance to stop functioning. As we know, Sitecore is heavily dependent on an indexing technology (Solr or Lucene), and if it fails, chances are your instance will stop functioning unless you have the magical patch that I mentioned in my previous post: http://sitecoreart.martinrayenglish.com/2016/09/bulletproofing-your-sitecore-solr-and.html


Analytics Index Comparison 

After upgrading our instance from 8.1 U1 to 8.1 U3 and disabling this setting, we performed an index size comparison. Our analytics index went from 21,728,706 docs and 8GB in size to 0 docs and 101 bytes (empty). It's important to note that this is because we currently don't have any identified contacts within xDB. Once we start our contact identification process using CRM system data, I don't expect to see sizes quite like this again.


Final Thoughts 

This setting has made a major difference in the stability of our client's high-traffic Sitecore sites. It's up to you and your team to decide how important it is to have those anonymous contact records show up in the Experience Profile dashboard.

To us, it was a no-brainer.

Tuesday, September 6, 2016

Bulletproofing your Sitecore Solr and SolrCloud Configurations


Solr and SolrCloud 

As we know, Sitecore supports both Lucene and Solr search engines. However, there are some compelling reasons to use Solr instead of Lucene that are covered in this article: https://doc.sitecore.net/sitecore_experience_platform/setting_up__maintaining/search_and_indexing/indexing/using_solr_or_lucene

Solr has been the search engine choice for all of my 8.x projects over the last few years and I have recently configured SolrCloud for one of my clients where fault tolerance and high availability was an immensely important requirement.

Although I am a big fan of SolrCloud, it is important to note that Sitecore doesn't officially support SolrCloud yet. For more details, see this KB article: https://kb.sitecore.net/articles/227897.

So, should SolrCloud still be considered in your architecture?

My answer to this question is YES!

My reasoning is that members of Sitecore's Technical and Professional Services team have implemented a very stable patch to support SolrCloud that has been tested and used in production on extremely large-scale SolrCloud implementations. More about this later.

In addition, if you are running xDB, your analytics index will get very large over time, and the only way to handle this is to break it up into multiple shards. SolrCloud is needed to handle this.

The Quest to Keep Solr Online 

One of our high traffic clients running xDB started having Solr issues recently and this sparked my research and work with the Sitecore Technical Services team to obtain a patch to keep Sitecore running if Solr was having issues.

As a side note; the issues that we started seeing were related to the Analytics index getting pounded. The most common error that we saw was the following:

 ERROR <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">  
 <html><head>  
 <title>502 Proxy Error</title>  
 </head><body>  
 <h1>Proxy Error</h1>  
 <p>The proxy server received an invalid  
 response from an upstream server.<br />  
 The proxy server could not handle the request <em><a href="/solr/sitecore_analytics_index/select">GET&nbsp;/solr/sitecore_analytics_index/select</a></em>.<p>  
 Reason: <strong>Error reading from remote server</strong></p></p>  
 </body></html>  

This only popped up after running xDB for several months, as our analytics index had grown fairly large. Definitely something to keep in mind when you are planning for growth, and, as mentioned above, why SolrCloud is the best option for a large-scale, enterprise Sitecore search configuration.

Giving the Java Virtual Machine (JVM) running Apache Solr more memory seemed to help, but this error would continue to rear its nasty head every so often during periods of high traffic.

Sitecore is very sensitive to Solr connection issues, and will be brought to its knees and throw an exception if it has any trouble!

The Bulletproof Solr Patches 


Single Instance Solr Configuration - Patch #391039 

My research into keeping Sitecore online during Solr issues led me to this post by Brijesh Patel that was published back in March. After reading through it, I decided to contact Sitecore Support about patch #391039, as it seemed to be just what I wanted for my client's single Solr server configuration.

Working with Andrew Chumachenko from support, our tests revealed that the patch published here didn't handle index "SwitchOnRebuilds". To me, this was a deal breaker.

Andrew discovered that there were several versions of patch #391039 (early versions of the patch were implemented for Sitecore 7.2), and found at least three different variations.

We found that the most recent version of the patch did in fact support "SwitchOnRebuilds", and Andrew made this available to everyone in the community on GitHub: https://github.com/andrew-at-sitecore/Sitecore.Support.391039

This is a quote from Brijesh's post to explain how it works:

"...it checks if Solr is up on Sitecore start. If no, it skips indexes initializing. However, it may lead to exceptions in log files and inconsistencies while working with Sitecore when Solr is down.

Also, there is an agent defined in the ‘Sitecore.Support.391039.config’ that checks and logs the status of Solr connection every minute (interval value should be changed if needed).

If the Solr connection is restored — indexes will be initialized, the corresponding message will be logged and the search and indexing related functionality will work fine."

SolrCloud Solr Configuration - Patch #449298 

This patch works the same way as patch #391039 described above, but supports SolrCloud.

You may be asking yourself, "isn't the point of having a highly available Solr configuration to ensure that my Solr search doesn’t have issues?"

Well, of course. But, due to the nature in which SolrCloud operates, this patch acts as a fail-safe if something goes wrong - for example, while your ZooKeeper ensemble is determining who the leader is after you lose an instance. If Sitecore has trouble querying Solr for even a mere second, it will throw an exception.

So, patch #449298 accounts for this and also allows index "SwitchOnRebuilds" just like the common, single instance Solr server configurations.

GitHub for this patch: https://github.com/SitecoreSupport/Sitecore.Support.449298 

It is important to note that this patch requires an IoC container that injects proper implementations for SolrNet interfaces. It depends on patch Sitecore.Support.405677. You can download the assemblies based on your IoC container from this direct link: https://github.com/SitecoreSupport/Sitecore.Support.405677/releases

Looking Ahead 

Support for Solr out of the box (incorporating these patches) is to be added to the upcoming Sitecore 8.2 U1. So definitely something to look forward to in this release.

A special thanks to Paul Stupka, who is the mastermind behind these patches, and rockstar Andrew Chumachenko for all his help.

Tuesday, August 2, 2016

Diagnosing Content Management Server Memory Issues After a Large Publish


Background

My current project involved importing a fairly large number of items into Sitecore from an external data source. We were looking at roughly 600k items that weren't volatile at all - just a handful of updates per week.

At the start of development, we debated between using a data provider or going with the import, but after doing a POC using the data provider, it was clear that an import was the best option.

The details of what we discovered would make a great post for another time.

NOTE: The version we were running was Sitecore 8.1 Update 2.

The Problem 

After running the import on our Staging Content Management server, we were able to successfully populate 594k items in the master database without any issues.

The problem reared its ugly head after we published the large number of items.

After the successful publish, we noticed that there was an instant memory spike on the Content Management Server after the application pool had recycled. Within about 10 seconds, memory usage would reach 90%, and would continue to climb until IIS simply gave up the ghost.

Mind you, our Staging server was pretty decent, an AWS EC2 Windows instance loaded with 15GB of RAM.

So what would cause this?


Troubleshooting 

I confirmed that my issue was in fact caused by the publish by restoring a backup of the web database from before the publish had occurred and recycling the application pool of my Sitecore instance. 

I decided to take a look at what objects were filling up the memory, so I launched dotMemory from JetBrains and started a snapshot.

The snapshot revealed some QueuedEvent lists that were eating up the memory:



Next, I decided to fire up SQL Server Profiler to investigate what was happening on the database server.

Running Profiler for about 10 seconds while Sitecore was starting up showed the following query being executed 186 times within the same process:

SELECT TOP(1) [EventType], [InstanceType], [InstanceData], [InstanceName], [UserName], [Stamp], [Created] FROM [EventQueue] ORDER BY [Stamp] DESC

Why would Sitecore be executing this query so many times, and then filling up the memory on our server?

I know that Content Management instances have a trigger to check the event queue periodically and collect all events to be processed. But, this seemed very strange.

For more info on how this works, you can check out this article by Patel Yogesh: http://sitecoreblog.patelyogesh.in/2013/07/sitecore-event-queue-scalability-king.html.
It's older but still applicable.

I shifted focus onto the EventQueue table to see what it looked like.

EventQueue Table 

A count on the items in my Web database's EventQueue table returned 1.2M.

99% of the items in the EventQueue table were the following remote event records: 

Sitecore.Data.Eventing.Remote.SavedItemRemoteEvent, Sitecore.Kernel, Version=8.1.0.0, Culture=neutral, PublicKeyToken=null 

Sitecore.Data.Eventing.Remote.CreatedItemRemoteEvent, Sitecore.Kernel, Version=8.1.0.0, Culture=neutral, PublicKeyToken=null 

I ran the following queries to tell me how many "SavedItem" and how many "CreatedItem" event entries existed in the table, ultimately put there by my publish:

SELECT * FROM [Sitecore_Web].[dbo].[EventQueue]
WHERE EventType LIKE '%SavedItem%' AND UserName = 'sitecore\arke'
ORDER BY Created DESC

SELECT * FROM [Sitecore_Web].[dbo].[EventQueue]
WHERE EventType LIKE '%CreatedItem%' AND UserName = 'sitecore\arke'
ORDER BY Created DESC

Both queries returned 594K items each. This lined up with the number of items that I had recently published, and the fact that there were two entries for each item was the obvious cause of the table having well over 1 million records.

The Solution 

There is a good post on the Sitecore Community site, where Vincent van Middendorp mentions a few Truncate queries to empty the EventQueue table along with the History table: https://community.sitecore.net/developers/f/8/t/1450

Truncating the table seemed a bit too invasive at first, so I went ahead and wrote up a quick query to delete the records from the EventQueue table that I knew I had put there (based on my username):

DELETE FROM [Sitecore_Web].[dbo].[EventQueue]
-- the parentheses matter: without them, AND binds tighter than OR and the
-- username filter would only apply to the SavedItem half of the condition
WHERE (EventType LIKE '%CreatedItem%' OR EventType LIKE '%SavedItem%')
AND UserName = 'sitecore\arke'

Running another count on the records in my EventQueue table returned a count of 7.

So, I might as well have just run a truncate :)

After firing up the Sitecore instance again, I was happy to report that memory on the server was now stable.


The Moral of the Story 

Keep an eye on that EventQueue after a large publish!

Looking forward to seeing the publishing improvements coming in Sitecore 8.2.

Monday, June 20, 2016

Sitecore's IP Geolocation Service - Working with the Missing Area Code, Missing Time Zone and Testing GeoIP Lookups


Background

My current projects make heavy use of GeoIP personalization, and as a result, I have had the opportunity to dig deep into Sitecore's IP Geolocation Service features, uncovering the gaps and figuring out ways to get around them.

If you need help setting up the Geolocation Service, make sure you check out my post: http://sitecoreart.martinrayenglish.com/2015/08/setting-up-sitecores-geolocation-lookup.html

The Missing Area Code

Sitecore gives you a really nice set of rules to work with once you have the service enabled:



Looking above, you will see that the first rule is based on a visitor's area code.

Unfortunately, this rule doesn't work. After decompiling the GeoIP assembly, I was able to determine that the AreaCode property of the WhoIs object is never set in the MapGeoIpResponse processor:


And the rule that is supposed to use the area code:



So, make sure that you use the "postal code" condition if you plan on doing this type of personalization, and not the "area code" one.

The Missing Time Zone

One of my projects has a requirement around the ability to personalize based on the time of day.

When talking about this over lunch with some fellow Sitecore geeks, I got the "..doesn't Sitecore do that out of the box?" response.

At first thought, this seemed like a valid response, as Sitecore advertises that their IP Geolocation Service has the ability to identify a visitor's time zone.

Well, we know that there aren't any time-based rules shown by the Geolocation ruleset above, but if we have the person's time zone, we could at least use this information to conjure up some custom condition that will allow us to do this type of personalization.

Focusing on the GeoIpResponseProcessor again, you will notice that there is no time zone property on the WhoIs object. So, it's clear that we actually don't have a time zone to use.

After digging in a bit further, and confirming a few things with Sitecore support, I was able to determine that the JSON response from Sitecore's Geolocation service does actually contain the time zone information:


So, why wouldn't they make this available via the WhoIs object and API?

This seems a bit odd.

I registered a wish with my support ticket, so hopefully we will see this in a future release. Until then, we have to write a bit of code to peel off the value so that we can use it in our customization.

Time Zone Fun

Getting the Time Zone

In order to get that time zone value from the service, I had to create a custom processor to grab the raw value from the service response.

Processor


Patch Configuration


Converting Olson to Windows Time

As you can see above, time zones from the service are in IANA time zone format (also known as Olson) as described here: https://dev.maxmind.com/faq/how-can-i-determine-the-timezone-of-a-website-visitor/

So the next order of business was to take the Olson time zone ID and convert it to a Windows time zone ID so that when I was ready to perform the local time calculation, it would be fairly easy using the .NET Framework's TimeZoneInfo class.

After a quick Google, I came across this Stack Overflow article that I based my conversion helper method on:
http://stackoverflow.com/questions/5996320/net-timezoneinfo-from-olson-time-zone
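Here is a minimal sketch of such a helper. The map below is deliberately tiny - the full Olson-to-Windows mapping comes from the CLDR data referenced in that answer - and the class and method names are my own:

 using System;
 using System.Collections.Generic;

 public static class TimeZoneHelper
 {
     // Partial IANA (Olson) -> Windows ID map; source the full list from the CLDR data.
     private static readonly Dictionary<string, string> OlsonToWindows =
         new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
         {
             { "America/New_York", "Eastern Standard Time" },
             { "America/Chicago", "Central Standard Time" },
             { "America/Denver", "Mountain Standard Time" },
             { "America/Los_Angeles", "Pacific Standard Time" },
             { "Europe/London", "GMT Standard Time" }
         };

     public static DateTime? GetVisitorLocalTime(string olsonTimeZoneId)
     {
         string windowsId;
         if (string.IsNullOrEmpty(olsonTimeZoneId) || !OlsonToWindows.TryGetValue(olsonTimeZoneId, out windowsId))
         {
             return null; // unknown zone - fall back to non-personalized content
         }

         // TimeZoneInfo does the daylight saving math for us.
         var timeZone = TimeZoneInfo.FindSystemTimeZoneById(windowsId);
         return TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, timeZone);
     }
 }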

With these pieces in place, I had everything I needed to build out my time-based personalization rules!


Bonus: Testing GeoIP Lookups

To make GeoIP lookup testing easy, I created a processor that injects a mock IP address obtained from a setting, so you can verify that your dependent rules work as expected.

Note: There is a dated module called Geo IP Tester on the Marketplace, but unfortunately it isn't compatible with Sitecore 8.x due to the changes in the API.

Processor


Patch Configuration


Sunday, May 22, 2016

How to ensure that Web API and Sitecore FXM can be implemented together

As a Sitecore MVC developer, implementing a Web API within Sitecore is pretty trivial. There are several informative posts on the web showing you exactly how to get this going.

If you haven't already checked it out, I recommend that you read through Anders Laub's "Implementing a WebApi service using ServicesApiController in Sitecore 8" post.



The Problem

Where this and other posts on the web fall short is that they don't indicate the correct sweet spot to patch into the initialize pipeline - one that won't cause havoc if you plan to implement Sitecore's Federated Experience Manager (FXM).

Sitecore's Web API Controller related to the FXM module is decorated with the following attributes:

 [ServicesController("Beacon.Service")]  
 [RobotDetectionFilter]  
 [ConfiguredP3PHeader]  
 [EnableCors("*", "*", "GET,POST", SupportsCredentials = true)]  

What this means is that Sitecore sets the "Access-Control-Allow-Origin" header value to the domain of every external site that is configured by FXM.

Placing your processor before the ServicesWebApiInitializer processor will remove these headers and result in FXM not being able to make cross-domain requests.

So for example, looking at Anders' example, a patch like this:

 <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">  
  <sitecore>  
   <pipelines>  
    <initialize>  
     <processor patch:after="processor[@type='Sitecore.Pipelines.Loader.EnsureAnonymousUsers, Sitecore.Kernel']"  
      type="LaubPlusCo.Examples.RegisterHttpRoutes, LaubPlusCo.Examples" />  
    </initialize>  
   </pipelines>  
  </sitecore>  
 </configuration>  
will result in this FXM error:

No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://myexternalsite.com' is therefore not allowed access.




The Quick Fix

Fortunately, all that you need to do to fix this issue is to patch after Sitecore's ServicesWebApiInitializer.

So looking at the example again, this would fix the issue and result in FXM playing nice:

 <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">  
  <sitecore>  
   <pipelines>  
    <initialize>  
     <processor patch:after="processor[@type='Sitecore.Services.Infrastructure.Sitecore.Pipelines.ServicesWebApiInitializer, Sitecore.Services.Infrastructure.Sitecore']"  
      type="LaubPlusCo.Examples.RegisterHttpRoutes, LaubPlusCo.Examples" />  
    </initialize>  
   </pipelines>  
  </sitecore>  
 </configuration>  


Sunday, May 15, 2016

3-Step Guide: How to trigger an xDB goal using jQuery AJAX with Sitecore MVC


Background

After cruising around the web looking for some code to use to trigger a goal using jQuery AJAX, I discovered that there weren't really any easy to understand, end-to-end, current examples of how to do this using Sitecore MVC.

So, I decided to write up a quick post to demonstrate how to do this in 3 easy steps.


Step 1 - Create MVC Controller

The first step is to create an MVC Controller with an action that you will use to trigger the goal:

 using System;  
 using System.Linq;  
 using System.Web.Mvc;  
   
 using Sitecore.Analytics;  
 using Sitecore.Analytics.Data.Items;  
 using Sitecore.Mvc.Controllers;  
   
 namespace MyNamespace.Controllers  
 {  
   public class AnalyticsController : SitecoreController  
   {  
     private const string DefaultGoalLocation = "/sitecore/system/Marketing Control Panel/Goals";  
   
     [HttpPost]  
     public ActionResult TriggerGoal(string goal)  
     {  
       if (!Tracker.IsActive || Tracker.Current == null)  
       {  
         Tracker.StartTracking();  
       }  
   
       if (Tracker.Current == null)  
       {    
         return Json(new { Success = false, Error = "Can't activate tracker" });  
       }  
   
       if (string.IsNullOrEmpty(goal))  
       {  
         return Json(new { Success = false, Error = "Goal not set" });  
       }  
   
       var goalRootItem = Sitecore.Context.Database.GetItem(DefaultGoalLocation);  
       var goalItem = goalRootItem.Axes.GetDescendants().FirstOrDefault(x => x.Name.Equals(goal, StringComparison.InvariantCultureIgnoreCase));  
   
       if (goalItem == null)  
       {  
         return Json(new { Success = false, Error = "Goal not found" });  
       }  
   
       var page = Tracker.Current.Session.Interaction.PreviousPage;  
       if (page == null)  
       {  
         return Json(new { Success = false, Error = "Page is null" });  
       }  
   
       var registerTheGoal = new PageEventItem(goalItem);  
       var eventData = page.Register(registerTheGoal);  
       eventData.Data = goalItem["Description"];  
       eventData.ItemId = goalItem.ID.Guid;  
       eventData.DataKey = goalItem.Paths.Path;  
       Tracker.Current.Interaction.AcceptModifications();  
   
       Tracker.Current.CurrentPage.Cancel();   
   
       return Json(new { Success = true });  
     }  
   }  
 }  

Step 2 - Register a custom MVC route

The next step is to create a custom processor for the initialize pipeline and define a custom route in the Process method, similar to the following:

 using System.Web.Mvc;  
 using System.Web.Routing;  
   
 using Sitecore.Pipelines;  
   
 namespace MyNamespace  
 {  
  public class RegisterCustomRoute  
  {  
   public virtual void Process(PipelineArgs args)  
   {  
    Register();  
   }  
   
   public static void Register()  
   {  
    // MapRoute is the System.Web.Mvc extension method over RouteTable.Routes  
    RouteTable.Routes.MapRoute("CustomRoute", "MyCustomRoute/{controller}/{action}/{id}");  
   }  
  }  
 }  

Add this processor to the initialize pipeline right before Sitecore MVC's InitializeRoutes processor. You can do this with the help of a patch configuration file in the following way:

 <?xml version="1.0" encoding="utf-8"?>  
 <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">  
  <sitecore>  
   <pipelines>  
    <initialize>  
     <processor type="MyNamespace.RegisterCustomRoute, MyAssembly"/>  
    </initialize>‌  
   </pipelines>  
  </sitecore>  
 </configuration>  

Step 3 - Trigger using jQuery

Finally, trigger the goal by name using a few lines of jQuery:

 $.post("/MyCustomRoute/Analytics/TriggerGoal?goal=tweet" ,function(data){  
      //Do something with data object  
 });  

Monday, April 18, 2016

Presentation Targets - Chuck Norris's version of Sitecore's Item Rendering


Background

My current project is unique in the way that the Home Page of the site is designed. Basically, the Home Page is a single-page app, where the entire list of products will start appearing as you scroll down the page. They will be grouped by category, and as you keep scrolling down and reach a product list in a new category, the navigation will automatically change to reflect the new category.

It is one of those designs where the client gets all googly eyed over how pretty it looks and you as the Sitecore Architect are left thinking, "how in the world am I going to give my content authors a great experience when building this out in the Experience Editor?"

Well, an obvious approach would be to have one massively long Home Page with a ton of renderings plopped all over the show. But, I was left thinking, "Self, we need to break this down into separate Product Category pages, so that my Content Authors will have a good experience and the pages will be easier to maintain."

The Problem

Great idea right? But, how would all the content from these pages be dynamically brought over to the Home Page so that we can do this fancy, animated show / hide trick?

I also had to make sure that I would be able to personalize everything based on the approach that I landed on. For example, depending on the visitors' identified persona, or the time of day, I wanted to be able to switch out the order in which the product categories would appear on the Home Page.

True Item Rendering

Doing a bit of research, I came across one of Vasiliy Fomichev's older posts regarding Sitecore's Item Rendering. We share the same underwhelming feeling about Sitecore's implementation of the Item Rendering, and reading further, I was pleasantly surprised to find that his requirements and "true item rendering solution" matched what I was looking for:

The Item Rendering must use the presentation components found in the presentation details of the referenced item

After I downloaded his example project, and fired it up, I realized that this would get me most of the way there; I could set the datasource to any content item that contained presentation components with set datasources, and it would render them in the location where my Item Rendering was placed.

The nice thing too was that I had full Experience Editor support.

So, I could modify the properties of both the Item Rendering and the Renderings within the Rendering (that's a tongue twister).

Sweet! 

Presentation Targets is Born

My next thought to myself was "Self, how awesome would it be if I could drop this on a page somewhere and target specific renderings within specific placeholders that exist on another item?"

This would give me a fantastic amount of flexibility and ultimate re-usability of already built presentation components.

So, I decided to update the project and add the following improvements:

  1. A new Presentation Targets Rendering and updated patch file so that both the new rendering and out-of-the-box Item Rendering could live in harmony.
  2. Adjusted the code to allow rendering parameters to be passed through from the target renderings.
  3. Added the ability to set the placeholders and renderings that you would like to target on the datasource item. These are both set as pipe-separated (|) lists of rendering parameters:



So, what does this give me?

To put it simply; a rendering that can target any renderings inside any placeholders on an existing item and render their content.

As I mentioned earlier, you have the ability to modify the content of the targeted renderings within the location that you have placed the Presentation Targets Rendering within the Experience Editor.

So let's study this with an example use case:

  1. You worked very hard to build out a wonderful product carousel, and implemented a series of personalization rules on several of the slides.
  2. There is a promotion this week for a series of products that are part of the wonderful carousel, and so your boss wants you to add the carousel to the Home Page.
  3. Instead of adding a new carousel rendering to the page, setting up each of the slides with datasources and re-applying your personalization rules from scratch, you could place the Presentation Targets Rendering on the Home Page, set the datasource to the item that has the wonderful carousel, set the placeholder that it exists in, and the rendering id of the carousel.
  4. You are done!

The Possibility of Nested Personalization

While working with Lars Peterson and the SBOS team, this topic came up a few times during the implementation of a Home Page carousel (yes, I just love carousels, if you couldn't tell already).

Looking at the proposed use case: 
  • For a specific group of visitors, change the entire carousel's datasource so that it displays one that is relevant to the specific group.
  • For visitors that are outside of this group, personalize slide 1 with conditions based on geolocation.

So, basically allowing for the possibility to personalize the datasource of an entire carousel, and within that, personalize each individual slide.

Without Presentation Targets

This is achievable by having more than one carousel rendering and applying a rule with the "Hide Component" action set, so that only the carousel with the personalized slides shows for our targeted visitor.

With Presentation Targets

Granted, you would have to initially set up the presentation of the carousels and slides on a separate item, but using the Presentation Targets Rendering allows you to achieve this level of personalization: with Experience Editor support, you can set personalization rules on both the Presentation Targets Rendering and the carousel's slide renderings.

Now, isn't that fancy?



Final Thoughts

The Presentation Targets module opens up some new opportunities to explore content reuse and the depths of personalization within the Experience Platform.

I will be sure to share more of my experiences as I dig in a bit deeper.

Full source code is available on GitHub: https://github.com/martinrayenglish/PresentationTargets

Another shout-out to Vasiliy Fomichev on his awesome post to get the fire started!

Monday, March 28, 2016

A New Rudder on Sitecore.Ship to Deploy Securely From Visual Studio Team Services

I fell in love with Kevin Obee's Sitecore.Ship module when it was first introduced to me by my Arke colleague and MVP, Patrick Perrone, and it has been part of my Continuous Deployment routine ever since.

If you are using Hedgehog's TDS, a build server of your flavor, and a touch of PowerShell and / or curl, Sitecore.Ship allows you to sail code and content into Integration, Staging and Production environments in a breeze.

This post assumes that you are familiar with using TDS to generate .update packages, and have used the Sitecore.Ship module before. I intend to demonstrate how you can use my customizations to keep the module's REST service secure, and deploy from Visual Studio Team Services (VSTS) to your various environments.

All updated source code and documentation can be found on my GitHub fork: https://github.com/martinrayenglish/Sitecore.Ship


Security

Out of the box, Kevin's module allows you to enable remote deployments and add an access control list of machine IP addresses that can access the REST web service.

 <packageInstallation enabled="true" allowRemote="true" allowPackageStreaming="true" recordInstallationHistory="true">  
  <Whitelist>  
   <add name="local loopback" IP="127.0.01" />  
   <add name="Allowed machine 1" IP="10.20.3.4" />  
   <add name="Allowed machine 2" IP="10.40.4.5" />  
  </Whitelist>  
 </packageInstallation>  

This works really well for most use cases, but not when you are deploying from Visual Studio Team Services' Azure servers. Keep reading and I will explain why.

Visual Studio Team Services

Microsoft has done a great job with VSTS a.k.a. TFS in the cloud. They provide a nice suite of tools such as code repositories, continuous integration, bug and task tracking, and agile planning at the very wonderful price of free for up to 5 users.

Once your team grows beyond the free 5 users, the increments are priced reasonably. If you are an MSDN subscriber, you aren't counted against the total number of users in the pricing model. So if your company is a Microsoft shop with MSDN, you could be working with your client's VSTS account at no cost to them. Sweet!

You can check out Microsoft's pricing by navigating over to this page: https://www.visualstudio.com/pricing/visual-studio-team-services-pricing-vs.

Setting up the Build Definition In Visual Studio Team Services

VSTS provides you with a slew of options to help set up your build definitions for either Continuous Integration or Continuous Deployment.

In my example, I set up a build definition for a Staging environment with a few simple clicks. The basic steps to add a new definition are:

Build

Add the series of tasks that form your build steps

Repository

Set the repository type (in my case Git) and branch that you intend to build from

Variables

Set the build configuration (such as "MyClientStaging"), and platform settings

Triggers

Enable this if you want to perform Continuous Integration and build on every commit to the repository, using the branch set in the Repository tab

General

Make sure that the "Default agent queue" is set to "Hosted"

In my example, I set the above-mentioned configurations and build steps to my fancy, and the end result looked something like this:




You will notice that there are 2 PowerShell script tasks that are executed after the build and tests have run.

About The Scripts

Both scripts were originally crafted by my Arke colleague Patrick Perrone. I simply added a few modifications to make things work in my environments.

Make sure you check out his Sitecore.Ship-PowerShell-Functions module that is available on GitHub: https://github.com/patrickperrone/Sitecore.Ship-PowerShell-Functions

The first script is used to deploy my TDS generated .update package from VSTS' Azure servers, over to my target machine(s) that have Sitecore.Ship installed on them.

The second script is used to execute an improved version of Pavel Veller's ConfigFileCommitter service described in his post: http://jockstothecore.com/update-config-love/

Sitecore.Ship Whitelist VSTS Challenges

As I mentioned previously, the security considerations in Kevin's module work really well in most traditional environments where you are using a deployment server that is either on-premise or in the cloud but has a single, controllable IP address.

The challenge I faced building and deploying directly from VSTS, was that it was impossible to predict the IP address that the Azure server was deploying from, and thus impossible to lock it down using the Whitelist configuration.

First Stab - IP Address Ranges

After forking the Sitecore.Ship repository, I started working through the code so that I could lock things down when deploying from VSTS. After adding some logging, I noticed a pattern of IP address ranges from the Azure servers, and went on my merry way updating the module to include support for IP address ranges.

So, with this in place, I could now apply ranges to my Whitelist configuration like so:

 <packageInstallation enabled="true" allowRemote="true" allowPackageStreaming="true" recordInstallationHistory="true">  
  <Whitelist>  
   <add name="local loopback" IP="127.0.01" />  
   <add name="Allowed machine 1" IP="10.20.3.4" />  
   <add name="Allowed machine 2" IP="10.40.4.5" />  
   <add name="Allowed IP Range 1" IP="23.96.0.0-23.96.255.255" />  
   <add name="Allowed IP Range 2" IP="65.52.0.0-65.52.255.255" />  
  </Whitelist>  
 </packageInstallation>  

Problem solved? 

Not so much. 

After running several deployment tests, I discovered that the Azure IP ranges were just not predictable at all. Based on what I saw, I would end up having to open up a ton of ranges, and would just have to hope and pray that things didn't change and cause my deployments to blow up.

It was obvious that as I opened up more and more ranges, the security of my deployments was sinking, and I was opening myself up to those pesky Internet pirates who could sail right on in. (Hope you enjoyed this ;)




Second Stab - Authentication Token

Having recently worked with Sitecore's xDB Cloud REST API, I decided to take their approach and build in an Authentication Token security option.

With this approach, I added the ability to set the name and value of a custom header via configuration. I could then add this token to the header of the request when running the deployment to the machines.

The updated configuration looked like this:

 <packageInstallation enabled="true" allowRemote="true" allowPackageStreaming="true" recordInstallationHistory="true"   
 authHeader="X-Ship-Auth" authToken="AC074C6EDBA518F807E0E3F2F36A8B512D9C5637744BE67CD60D271244AC523AAB9CF8DB7F7D3934205E5BD850B2768C7171C3C594D6C6BFCA3992CCCCA67148">  
  <Whitelist>  
   <add name="local loopback" IP="127.0.01" />  
   <add name="Allowed machine 1" IP="10.20.3.4" />  
   <add name="Allowed machine 2" IP="10.40.4.5" />  
   <add name="Allowed IP Range 1" IP="23.96.0.0-23.96.255.255" />  
   <add name="Allowed IP Range 2" IP="65.52.0.0-65.52.255.255" />  
  </Whitelist>  
 </packageInstallation>  

Testing and Implementation

Testing

When testing the authentication changes using a tool like Postman, all you need to do is pass the token value in the custom header that you set up in configuration when making the request.



Implementation

In my case, I modified the PowerShell deployment script to include the new custom header containing my token.
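The gist of the change is simply attaching the configured header to each request. For example (a sketch - the endpoint path reflects my "sitecoreship" rename described below, and the about endpoint makes a handy smoke test):

 # Header name and token must match your packageInstallation config.
 $headers = @{ "X-Ship-Auth" = "<your token value>" }
 Invoke-RestMethod -Uri "https://staging.mysite.com/sitecoreship/about" -Headers $headers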



Final Note on Service Paths - "Services" vs "SitecoreShip"

Part of the module's configuration includes adding a NancyHttpRequestHandler to your web.config specifying the path of the Ship service, along with telling Sitecore not to process requests at that path via the IgnoreUrlPrefixes setting.

Web.config:

 <handlers>  
 <add name="Nancy" verb="*" type="Nancy.Hosting.Aspnet.NancyHttpRequestHandler" path="/services/*" />  
 </handlers>  

Ship.config:

 <settings>  
    <setting name="IgnoreUrlPrefixes" set:value="/services/|/sitecore/default.aspx|/trace.axd|/webresource.axd|/sitecore/shell/Controls/Rich Text Editor/Telerik.Web.UI.DialogHandler.aspx|/sitecore/shell/applications/content manager/telerik.web.ui.dialoghandler.aspx|/sitecore/shell/Controls/Rich Text Editor/Telerik.Web.UI.SpellCheckHandler.axd|/Telerik.Web.UI.WebResource.axd|/sitecore/admin/upgrade/|/layouts/testing" />  
 </settings>  

This turned out to be problematic for a recent project, where we actually had an item that needed to use the "services" path - the client had content living at "/services/contact-us".

So, I modified the original service path from "services" to "sitecoreship" to overcome this issue, and hopefully prevent this conflict from happening on my future projects.


Monday, February 29, 2016

Sitecore 8.x Component Datasource Item Locking Behavior


Background

A question was posed on the Sitecore Community Site around items in workflow and component datasource item locking, and it piqued my curiosity. I wanted to dig in and understand Sitecore's behavior around datasource items when multiple content authors are working within the same content scope.

After my review, I discovered some unexpected behavior that could pose a potential problem for content authors. Talking through the results with veteran MVP Kamruz Jaman, he suggested a potential workaround that can help provide relief for authors in this scenario.


Test Configuration

In my tests, I was working with Sitecore 8.1 Update 1, and created Author A and Author B, who were both part of the sitecore\Author and sitecore\Designer roles. I made sure that my authors had the necessary read, write, delete etc. permissions for the content scope that they were working in.

Datasource Item Locking Behavior

Author A locks an item whose presentation includes a component with a datasource item. The item gets locked, but the datasource item does not.

Author A makes inline edits to the component's datasource item's content, and then clicks the save button. After saving the item, the datasource item is now locked.

Author B locks an item that has a component with the same datasource item that has been locked by Author A.

Author B cannot make inline edits to the component with the locked datasource item.

Author B can make inline edits to all other areas of the item that aren't linked to locked datasource items.

Two important things to note from above:

  1. Authors cannot make changes to a component's datasource item when another author has locked the datasource item. One would expect this behavior.
  2. The datasource item only gets locked by the author when the author has made changes to the datasource item and clicks the save button. Making changes to other items on the page and saving will not lock the datasource item.

Problem

So far, things work pretty much the way one would expect. The only oddity is that the datasource item only gets locked on save. This is something that we can live with though.

The problem that I identified in the above scenario was that when Author A had finished making edits and unlocked the item, the datasource item didn't get unlocked along with it. It stayed locked!

Workaround - Automatic Unlocking

One would expect both items and datasource items in workflow to be unlocked when an author has clicked the unlock button after making their content changes.

As Kamruz pointed out, one way to deal with this is to have the automatic unlocking setting enabled. The setting can be found in the Sitecore.config file:

 <setting name="AutomaticUnlockOnSaved" value="false" />  

When the value of AutomaticUnlockOnSaved is set to true, every item is automatically unlocked after saving.
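Rather than editing Sitecore.config directly, you can enable it with an include patch along these lines:

 <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
   <settings>
    <setting name="AutomaticUnlockOnSaved">
     <patch:attribute name="value">true</patch:attribute>
    </setting>
   </settings>
  </sitecore>
 </configuration>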

Having this enabled would provide relief from the unexpected behavior that is there by default.


Monday, February 15, 2016

Sitecore xDB Cloud Edition: Using the REST API

As I mentioned in a previous post, one of the nice things about using xDB Cloud Edition is that once your licensing is in place, getting up and running is very easy.

It's important to note that you don't have direct access to the various collections, so you can't connect using a tool like Robomongo or MongoVUE to get information about your instance. In the past, getting consumption information about your cloud instance required opening a ticket with support.

All this has changed with the introduction of the REST API for the xDB Cloud service. To find more information about what the API has to offer, I recommend that you read through the REST API reference for the xDB Cloud service.

In this post, I will show you how to use the API with one of my favorite browser plugins, Postman, so that you can get useful information and manage various processes in your Cloud Instance.



Nexus Authentication Token

The first order of business is to use the SSO Encode Sitecore License endpoint (https://gateway-sso-scs.cloud.sitecore.net/api/License/Encode) to obtain a Nexus Authentication token.

In order to do this, you need to generate a POST request to the endpoint with your Sitecore license file in the body of the request.
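With PowerShell, for example, the request looks something like this (a sketch - confirm the exact content type against the API reference):

 # POST your license.xml to the SSO Encode endpoint to obtain a Nexus token.
 $license = Get-Content "C:\inetpub\wwwroot\MySite\Data\license.xml" -Raw
 $response = Invoke-RestMethod -Uri "https://gateway-sso-scs.cloud.sitecore.net/api/License/Encode" `
     -Method Post -Body $license -ContentType "application/xml"
 $response   # the token comes back in the response body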



After performing the post, you will see your Nexus token in the response:


Check xDB consumption

Once you have your Nexus token, hitting the other endpoints is a walk in the park.  All you need to do is pass over the token value in a custom header called "X-ScS-Nexus-Auth" when making the request.

One of the endpoints that you will be accessing regularly is the consumption one. This gives you very useful information such as instance size, number of contacts and interactions.

For this endpoint, you need to pass over the following parameters along with the token:
  • licenseId – your Sitecore license ID (1 below)
  • deploymentId – the unique identification of the deployment (2 below)
  • year – the consumption year (3 below)
  • month – the consumption month (3 below)

4 below is my generated Nexus token called "X-ScS-Nexus-Auth" that I have added to the HTTP header.

So for example, to get consumption information for October 2015, my GET request looks like this:


You will see that there is a small bug in my particular instance, where it returns "0GB" for the size. At the time of writing this post, the Cloud Team was actively working to get this resolved.

If you want to use this data to create a fancy report, what you can do is convert the JSON data into CSV format using one of the free conversion sites like http://konklone.io/json/.

Using this site, I was able to get the data into Excel and create some cool looking graphs in a few clicks:


Other xDB Endpoints

Other endpoints that are available to you include:

Get Firewall settings
https://gateway-xdb-scs.cloud.sitecore.net/api/xdb/firewallsettings/{licenseId}/{deploymentId}

Get history processing status
https://gateway-xdb-scs.cloud.sitecore.net/api/xdb/historyProcessing/{licenseId}/{deploymentId}

Get xDB collection verification dataset
https://gateway-xdb-scs.cloud.sitecore.net/api/xdb/collection/{licenseId}/{deploymentId}

Get xDB set status
https://gateway-xdb-scs.cloud.sitecore.net/api/xdb/historyProcessing/{licenseId}/{deploymentId}

List xDB sets
https://gateway-xdb-scs.cloud.sitecore.net/api/xdb/{licenseId}

Trigger history processing
https://gateway-xdb-scs.cloud.sitecore.net/api/xdb/historyProcessing/{licenseId}/{deploymentId}

The Trigger history processing endpoint above is another one to note. This gives you the ability to trigger a rebuild of your cloud reporting database. Note that the HTTP method is a PUT for this.

Final Thoughts

With an API in place that gives us a good level of control over xDB Cloud, we can't help but get even more excited about the additional self-service APIs and the xConnect API that will be released with 8.2 later this year.


Monday, January 11, 2016

Getting Started with Sitecore SPEAK 2.0

With the release of Sitecore 8.1, we now have the ability to work with SPEAK 2.0, which promises a less steep learning curve and will give your fingers a bit of a rest when adding custom logic to your page code.

Because it is so new, most of the SPEAK-related documentation that you find on the internet today is SPEAK 1.1 related. Most likely, if it doesn't specify a version, it's SPEAK 1.1. The only 2.0 post that you may find is by SPEAK guru Mike Robbins, where he compares a Sitecore SPEAK 2.0 component with its SPEAK 1.1 counterpart.

So to get started with 2.0, I recommend that you read through Mike's post along with the SPEAK changes document found on the 8.1 Initial Release page that describes the differences between 1.1 and 2.0. 


No SPEAK 2.0 Branches Yet

If you take the plunge and start building new apps using SPEAK 2.0, you will notice that there aren't any branches available for 2.0 yet.



What this means is that you have to build out your 2.0 pages manually. Never fear, it's actually not that painful.

SPEAK 2.0 Enabled Dashboard Page

In my POC, I started by building out a simple Dashboard Page. You can use my approach to build out any of the other SPEAK pages based on 2.0.

After talking through some things with one of our star developers, Sergey Perepechin, I got started by creating a 1.1 version of the Dashboard to use as a guide, checking its Presentation Details to make sure that I added the correct SPEAK renderings to the correct placeholders.



After this, I created a new item based on the Speak-DashboardPage template.


With this in place, I created a PageSettings item in its usual home, under the SPEAK page:


I then added the various SPEAK 2.0 enabled renderings to the page. The first and most important one I added was the PageCode rendering. For PageCode, I made sure that SpeakCoreVersion was set to Speak Core 2-0.


I worked through adding the rest of the renderings that the Dashboard Page requires, by using my 1.1 Dashboard's Presentation Details as a guide.

It is important to note that some of the renderings' names have changed slightly in 2.0, so you will need to review the SPEAK changes document to make sure that you pick the correct ones. DashboardPageStructure is an example of one of these.


Slowly but surely, I matched my renderings and placeholders to the 1.1 version of my Dashboard, using the 2.0 enabled renderings:


And the end result is a shiny new SPEAK 2.0 enabled Dashboard Page:


SPEAK 2.0 Page Code

The next thing I wanted to explore was implementing my own 2.0 enabled JavaScript page code. The changes to page code are highlighted in the "Page code changes" section within the SPEAK changes document.

The new structure is like this:

 (function (Speak) {

   Speak.pageCode({

     initialize: function () {},

     initialized: function () {},

     beforeRender: function () {},

     render: function () {},

     afterRender: function () {}

   });

 })(Sitecore.Speak);

Easy enough, and as they mentioned, we now have a lot more hooks.

Adding a reference to a JavaScript library using SPEAK 2.0 

I am a fan of Sitecore.Services.Client, and have successfully implemented its EntityService thanks to some great posts and videos by Mike Robbins. Seeing a pattern here? :)

If you haven't checked it out, make sure you at least give this post a read: http://mikerobbins.co.uk/2015/01/06/entityservice-sitecore-service-client/

Ok, so back to what I was trying to test. I wanted to take a function that I had used in a SPEAK 1.1 app's page code, where I implemented EntityService and made a fetchEntities call, and convert it to 2.0.

This is a sample snippet of the SPEAK 1.1 page code:

 require.config({
   paths: {
     entityService: "/sitecore/shell/client/Services/Assets/lib/entityservice"
   }
 });

 define(["sitecore", "jquery", "underscore", "entityService"], function (Sitecore, $, _, entityService) {
   var AuthorAdmin = Sitecore.Definitions.App.extend({

     initialized: function () {
       this.GetRecentlyDrafted();
     },

     initialize: function () { },

     GetRecentlyDrafted: function () {
       var datasource = this.RecentlyDraftedDataSource;

       var workflowService = new entityService({
         url: "/sitecore/api/ssc/Arke-Sitecore-AuthorAdmin-Controllers/WorkFlowItems"
       });

       var result = workflowService.fetchEntities().execute().then(function (workFlowItems) {
         for (var i = 0; i < workFlowItems.length; i++) {
           datasource.add(workFlowItems[i]);
         }
       });
     }
   });

   return AuthorAdmin;
 });

After SPEAKING (pardon the pun) to Mike, and spending time going through the SPEAK 2.0 components in my 8.1 instance located at

C:\inetpub\wwwroot\{YourInstanceName}\Website\sitecore\shell\client\Business Component Library\version 2\Layouts\Renderings,

I was able to figure out the correct SPEAK 2.0 version of the page code with a reference to the entity service JavaScript library:

1:  (function (Speak) {  
2:    
3:    require.config({  
4:      paths: {  
5:        entityService: "/sitecore/shell/client/Services/Assets/lib/entityservice"  
6:      }  
7:    });  
8:    
9:    Speak.pageCode(["entityService"], function (entityService) {  
10:      return {  
11:        initialize: function () {  
12:    
13:          this.GetRecentlyDrafted();  
14:    
15:        },  
16:        GetRecentlyDrafted: function () {  
17:    
18:          var datasource = this.RecentlyDraftedDataSource;  
19:    
20:          var workflowService = new entityService({  
21:            url: "/sitecore/api/ssc/Arke-Sitecore-AuthorAdmin-Controllers/WorkFlowItems"  
22:          });  
23:    
24:          var result = workflowService.fetchEntities().execute().then(function (workFlowItems) {  
25:            for (var i = 0; i < workFlowItems.length; i++) {  
26:              datasource.add(workFlowItems[i]);  
27:            }  
28:          });  
29:        }  
30:      }  
31:    });  
32:  })(Sitecore.Speak);  

You will notice that I am accessing a DataSource component on line 18. This is actually a custom JSON SPEAK DataSource component that was developed by Anders Laub.

I really liked his approach and the component's purpose, because it cleanly separates responsibilities.

See his post on its creation here: http://laubplusco.net/creating-simple-sitecore-speak-json-datasource/

SPEAK 2.0 JSON DataSource Component

My next move was to take Anders' component and convert it to SPEAK 2.0. As highlighted in Mike's post Sitecore SPEAK 2.0 Component vs SPEAK 1.1, a major difference in 2.0 is that a component includes a server-side model that represents its rendering parameters. No sweat!

The new SPEAK 2.0 JSON DataSource component is made up of these three parts:

Model - JsonDataSourceRenderingModel.cs

 using Sitecore.Mvc.Presentation;

 namespace Arke.Sitecore.AuthorAdmin.Models
 {
     public class JsonDataSourceRenderingModel : SpeakRenderingModel
     {
         public string Json { get; set; }
     }
 }

You will notice above that my model inherits from SpeakRenderingModel.

Within Sitecore Rocks, you have to specify the class in the Model field:



View - JsonDataSource.cshtml


 @model Arke.Sitecore.AuthorAdmin.Models.JsonDataSourceRenderingModel  
 <script @Model.HtmlAttributes type="text/x-sitecore-jsondatasource">  
 </script>  


Javascript - JsonDataSource.js


 (function (Speak) {
   Speak.component([], {
     name: "JsonDataSource",

     initialize: function () {
       // Holds the JSON data managed by this datasource.
       this.Json = null;
     },

     // Prepends a new record to the front of the Json array.
     add: function (data) {
       var json = this.Json;

       if (json === null) {
         json = [];
       }

       var newArray = new Array(json.length + 1);

       for (var i = json.length - 1; i >= 0; i--) {
         newArray[i + 1] = json[i];
       }

       newArray[0] = data;
       this.Json = newArray;
     }
   });
 })(Sitecore.Speak);

More to Come!

I hope that this post helps shed some light on getting started with SPEAK 2.0. I will continue blabbering about SPEAK (love the puns) as I share my findings in the near future.

Watch this space!