Second Life of a Hungarian SharePoint Geek

Why Can’t We Send E-Mails from the SharePoint JavaScript Client Object Model, and How to Enable This Feature


As you might know, we can send e-mails from the SharePoint managed client object model (see code sample below), but not from the JavaScript version of the client object model.

Note: This feature has its own limitations. For example, you cannot send mail to arbitrary external addresses, only to SharePoint users, and you cannot add attachments, just to name a few.

  1. using (var clientContext = new ClientContext("http://YourSharePointServer/"))
  2. {
  3.     var ep = new EmailProperties();
  4.     ep.To = new List<string> { "user1@company.com", "user2@company.com" };
  5.     ep.From = "user3@company.com";
  6.     ep.Body = "body";
  7.     ep.Subject = "subject";
  8.     Utility.SendEmail(clientContext, ep);
  9.     clientContext.ExecuteQuery();
  10. }

However, if you would like to send mail using the JavaScript client object model, you quickly find that there is no sendEmail method defined on the SP.Utilities.Utility class. That means that if you really have to send a mail from your web page using JavaScript, and would like to work only with the out-of-the-box features of SharePoint, you have to use the OData / REST interface, as illustrated by the code sample in this forum thread, or see this one if you need additional mail headers.
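For reference, such a REST call looks roughly like the following sketch (it assumes jQuery is available on the page, the addresses are placeholders, and the exact payload details are covered in the linked threads):

$.ajax({
    url: _spPageContextInfo.webAbsoluteUrl + "/_api/SP.Utilities.Utility.SendEmail",
    type: "POST",
    contentType: "application/json;odata=verbose",
    headers: {
        "Accept": "application/json;odata=verbose",
        "X-RequestDigest": $("#__REQUESTDIGEST").val()
    },
    data: JSON.stringify({
        'properties': {
            '__metadata': { 'type': 'SP.Utilities.EmailProperties' },
            'To': { 'results': ['user1@company.com'] },
            'From': 'user3@company.com',
            'Subject': 'subject',
            'Body': 'body'
        }
    }),
    success: function () { console.log('Mail sent'); },
    error: function (xhr) { console.log('Request failed: ' + xhr.status); }
});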

But wait a minute. As far as I know, both the managed client object model and its JavaScript counterpart use the same server-side components and the same communication protocol between server and client. So where does this limitation come from? Personally, I prefer using the JavaScript client object model to invoking the REST methods. Why should I mix them if my other components are already written for the JavaScript client object model?

After seeing several questions on SharePoint StackExchange in the previous days about sending e-mail from JavaScript, I decided to look behind the scenes.

As a first step, I searched for the JavaScript implementation of the SP.Utilities.EmailProperties and the SP.Utilities.Utility classes and found them in SP.debug.js (and in SP.js, of course), and indeed, there is no sendEmail method defined on the SP.Utilities.Utility class.

The mail sending for the client components is implemented on the server side in the internal static SendEmail_Client method of the Microsoft.SharePoint.Utilities.SPUtility class (Microsoft.SharePoint assembly). This method has the ClientCallableMethod attribute as follows:

[ClientCallableMethod(Name="SendEmail", OperationType=OperationType.Read, ClientLibraryTargets=ClientLibraryTargets.RESTful | ClientLibraryTargets.Silverlight | ClientLibraryTargets.DotNetFramework)]

You can see that the value of the ClientLibraryTargets property includes neither ClientLibraryTargets.JavaScript nor ClientLibraryTargets.All. That means it is not intended to be used from JavaScript.

In the rest of the post I show you a few alternative ways to enable this missing functionality.

Note: The workarounds included in the post are provided “as is”, without any responsibility. Whether they work for you or not may depend on the patch level of your SharePoint environment, and any newly installed patch may render a solution built on these workarounds unusable. So I suggest you do not use this approach in a production environment.

I’ve created a new web part page on my site, and added a Script Editor Web Part to it. I set the following content for the new web part:

<script type="text/javascript" src="/_layouts/15/sp.runtime.js"></script>
<script type="text/javascript" src="/_layouts/15/sp.js"></script>
<script type="text/javascript" src="/SiteAssets/sendMail.js"></script>

<button onclick="sendMail()" type="button">Send mail</button>

In the Site Assets library of the site I’ve created a new text file called sendMail.js, and edited its content.

After studying the existing static methods of the SP.Utilities.Utility class in SP.debug.js, it was easy to implement the sendEmail method as well. We should wait until the SP.js file has finished loading, then extend the (by then already existing) SP.Utilities.Utility class with the new method. The next snippet shows our code at this point:

  1. 'use strict';
  2.  
  3. function main() {  
  4.     SP.Utilities.Utility.sendEmail = function SP_Utilities_Utility$resolvePrincipal(context, properties) {
  5.         if (!context) {
  6.             throw Error.argumentNull('context');
  7.         }
  8.         var $v_0 = new SP.ClientActionInvokeStaticMethod(context, '{16f43e7e-bf35-475d-b677-9dc61e549339}', 'SendEmail', [properties]);
  9.  
  10.         context.addQuery($v_0);
  11.     };
  12. }
  13.  
  14. function sendMail() {
  15.     var ctx = SP.ClientContext.get_current();
  16.  
  17.     var emailProperties = new SP.Utilities.EmailProperties();
  18.     emailProperties.set_to(['user1@company.com', 'user2@company.com']);
  19.     emailProperties.set_from('user3@company.com');
  20.     emailProperties.set_body('body');
  21.     emailProperties.set_subject('subject');
  22.  
  23.     SP.Utilities.Utility.sendEmail(ctx, emailProperties);
  24.  
  25.     ctx.executeQueryAsync(
  26.                 function () {
  27.                     console.log("Mail sent");
  28.                 },
  29.                 function (sender, args) {
  30.                     console.log('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace());
  31.                 }
  32.             );
  33.  
  34. }
  35.  
  36.  
  37. SP.SOD.executeOrDelayUntilScriptLoaded(main, "sp.js");

Unfortunately, if you try out the page and click the “Send mail” button, this code sends no mail; you get an error instead:

Object doesn't support property or method 'get_bCC'

The corresponding stack trace:

SP.DataConvert.invokeGetProperty [Line: 2, Col: 18240], sp.runtime.js
SP.DataConvert.writePropertiesToXml [Line: 2, Col: 11813], sp.runtime.js
SP.Utilities.EmailProperties.prototype.writeToXml [Line: 2, Col: 441169], sp.js
SP.DataConvert.writeValueToXmlElement [Line: 2, Col: 14393], sp.runtime.js
SP.ClientActionInvokeStaticMethod.prototype.$i_1 [Line: 2, Col: 30238], sp.runtime.js
SP.ClientActionInvokeStaticMethod [Line: 2, Col: 29671], sp.runtime.js
SP_Utilities_Utility$resolvePrincipal [Line: 8, Col: 9], sendMail.js
sendMail [Line: 23, Col: 5], sendMail.js
onclick [Line: 610, Col: 18], SendMailTest.aspx

If you check the property names of the SP.Utilities.EmailProperties class, you see that there are properties like BCC and CC. The corresponding getter and setter methods of the same class in the SP.debug.js file are accordingly named get_BCC / set_BCC and get_CC / set_CC.

The problem is that the SP.DataConvert.invokeGetProperty method calls a private method of the SP.DataConvert class that, due to the naming convention used in the JavaScript client object model, converts the property name BCC to bCC (and CC to cC) by lowercasing the first letter.

SP.DataConvert.$2V = function(a) { ULSnd3:; return a.substr(0, 1).toLowerCase() + a.substr(1); }
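In other words, a readable restatement of the minified helper above (the function name toClientGetterName is made up for illustration):

var toClientGetterName = function (propertyName) {
    // the first letter is lowercased, the rest is kept unchanged
    return 'get_' + propertyName.substr(0, 1).toLowerCase() + propertyName.substr(1);
};

console.log(toClientGetterName('BCC')); // "get_bCC", but the class only defines get_BCC
console.log(toClientGetterName('CC'));  // "get_cC", but the class only defines get_CC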

What can we do? Fortunately, there are several options!

In the first case, we simply create a new SP.Utilities.EmailProperties instance as before, then decorate the new instance in the AddMethods function with the getter / setter methods required by the SP.DataConvert.invokeGetProperty method before sending the mail via the sendEmail method. In these new methods we simply wrap the original ones (those with the all-uppercase property names).

  1. 'use strict';
  2.  
  3. function main() {  
  4.     SP.Utilities.Utility.sendEmail = function SP_Utilities_Utility$resolvePrincipal(context, properties) {
  5.         if (!context) {
  6.             throw Error.argumentNull('context');
  7.         }
  8.         var $v_0 = new SP.ClientActionInvokeStaticMethod(context, '{16f43e7e-bf35-475d-b677-9dc61e549339}', 'SendEmail', [properties]);
  9.  
  10.         context.addQuery($v_0);
  11.     };
  12. }
  13.  
  14. function AddMethods(emailProps) {
  15.     emailProps.get_bCC = function SP_Utilities_EmailProperties$get_bCC() {
  16.         return emailProps.get_BCC();
  17.     };
  18.     emailProps.set_bCC = function SP_Utilities_EmailProperties$set_bCC(value) {
  19.         emailProps.set_BCC(value);
  20.         return value;
  21.     };
  22.     emailProps.get_cC = function SP_Utilities_EmailProperties$get_cC() {
  23.         return emailProps.get_CC();
  24.     };
  25.     emailProps.set_cC = function SP_Utilities_EmailProperties$set_cC(value) {
  26.         emailProps.set_CC(value);
  27.         return value;
  28.     };
  29. }
  30.  
  31. function sendMail() {
  32.     var ctx = SP.ClientContext.get_current();
  33.  
  34.     var emailProperties = new SP.Utilities.EmailProperties();
  35.     AddMethods(emailProperties);
  36.  
  37.     emailProperties.set_to(['user1@company.com', 'user2@company.com']);
  38.     emailProperties.set_from('user3@company.com');
  39.     emailProperties.set_body('body');
  40.     emailProperties.set_subject('subject');
  41.  
  42.     SP.Utilities.Utility.sendEmail(ctx, emailProperties);
  43.  
  44.     ctx.executeQueryAsync(
  45.                 function () {
  46.                     console.log("Mail sent");
  47.                 },
  48.                 function (sender, args) {
  49.                     console.log('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace());
  50.                 }
  51.             );
  52.  
  53. }
  54.  
  55. SP.SOD.executeOrDelayUntilScriptLoaded(main, "sp.js");

This approach works, but you have to invoke the AddMethods function for each SP.Utilities.EmailProperties instance you create.
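If you prefer not to repeat that step, a small convenience factory (just a sketch building on the functions above; the name createEmailProperties is made up) can create and decorate the instance in one go:

function createEmailProperties() {
    var emailProps = new SP.Utilities.EmailProperties();
    // decorate the instance with the lowercase getter / setter wrappers
    AddMethods(emailProps);
    return emailProps;
}

In sendMail you would then simply write var emailProperties = createEmailProperties(); instead of the two separate calls.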

As another option, we can take the source code of the SP.Utilities.EmailProperties class from the SP.debug.js file, copy it into our own sendMail.js file, rename EmailProperties to EmailPropertiesCustom, and finally fix the wrong method names.

  1. 'use strict';
  2.  
  3. SP.Utilities.EmailPropertiesCustom = function SP_Utilities_EmailPropertiesCustom() {
  4.     SP.Utilities.EmailPropertiesCustom.initializeBase(this);
  5. };
  6. SP.Utilities.EmailPropertiesCustom.prototype = {
  7.     $1u_1: null,
  8.     $22_1: null,
  9.     $C_1: null,
  10.     $24_1: null,
  11.     $2T_1: null,
  12.     $31_1: null,
  13.     $36_1: null,
  14.     get_additionalHeaders: function SP_Utilities_EmailPropertiesCustom$get_additionalHeaders() {
  15.         return this.$1u_1;
  16.     },
  17.     set_additionalHeaders: function SP_Utilities_EmailPropertiesCustom$set_additionalHeaders(value) {
  18.         this.$1u_1 = value;
  19.         return value;
  20.     },
  21.     get_bCC: function SP_Utilities_EmailPropertiesCustom$get_bCC() {
  22.         return this.$22_1;
  23.     },
  24.     set_bCC: function SP_Utilities_EmailPropertiesCustom$set_bCC(value) {
  25.         this.$22_1 = value;
  26.         return value;
  27.     },
  28.     get_body: function SP_Utilities_EmailPropertiesCustom$get_body() {
  29.         return this.$C_1;
  30.     },
  31.     set_body: function SP_Utilities_EmailPropertiesCustom$set_body(value) {
  32.         this.$C_1 = value;
  33.         return value;
  34.     },
  35.     get_cC: function SP_Utilities_EmailPropertiesCustom$get_cC() {
  36.         return this.$24_1;
  37.     },
  38.     set_cC: function SP_Utilities_EmailPropertiesCustom$set_cC(value) {
  39.         this.$24_1 = value;
  40.         return value;
  41.     },
  42.     get_from: function SP_Utilities_EmailPropertiesCustom$get_from() {
  43.         return this.$2T_1;
  44.     },
  45.     set_from: function SP_Utilities_EmailPropertiesCustom$set_from(value) {
  46.         this.$2T_1 = value;
  47.         return value;
  48.     },
  49.     get_subject: function SP_Utilities_EmailPropertiesCustom$get_subject() {
  50.         return this.$31_1;
  51.     },
  52.     set_subject: function SP_Utilities_EmailPropertiesCustom$set_subject(value) {
  53.         this.$31_1 = value;
  54.         return value;
  55.     },
  56.     get_to: function SP_Utilities_EmailPropertiesCustom$get_to() {
  57.         return this.$36_1;
  58.     },
  59.     set_to: function SP_Utilities_EmailPropertiesCustom$set_to(value) {
  60.         this.$36_1 = value;
  61.         return value;
  62.     },
  63.     get_typeId: function SP_Utilities_EmailPropertiesCustom$get_typeId() {
  64.         return '{fab1608d-fdfb-4c8c-bb0a-9b9cc3618a15}';
  65.     },
  66.     writeToXml: function SP_Utilities_EmailPropertiesCustom$writeToXml(writer, serializationContext) {
  67.         if (!writer) {
  68.             throw Error.argumentNull('writer');
  69.         }
  70.         if (!serializationContext) {
  71.             throw Error.argumentNull('serializationContext');
  72.         }
  73.         var $v_0 = ['AdditionalHeaders', 'BCC', 'Body', 'CC', 'From', 'Subject', 'To'];
  74.  
  75.         SP.DataConvert.writePropertiesToXml(writer, this, $v_0, serializationContext);
  76.         SP.ClientValueObject.prototype.writeToXml.call(this, writer, serializationContext);
  77.     },
  78.     initPropertiesFromJson: function SP_Utilities_EmailPropertiesCustom$initPropertiesFromJson(parentNode) {
  79.         SP.ClientValueObject.prototype.initPropertiesFromJson.call(this, parentNode);
  80.         var $v_0;
  81.  
  82.         $v_0 = parentNode.AdditionalHeaders;
  83.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  84.             this.$1u_1 = SP.DataConvert.fixupType(null, $v_0);
  85.             delete parentNode.AdditionalHeaders;
  86.         }
  87.         $v_0 = parentNode.BCC;
  88.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  89.             this.$22_1 = SP.DataConvert.fixupType(null, $v_0);
  90.             delete parentNode.BCC;
  91.         }
  92.         $v_0 = parentNode.Body;
  93.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  94.             this.$C_1 = $v_0;
  95.             delete parentNode.Body;
  96.         }
  97.         $v_0 = parentNode.CC;
  98.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  99.             this.$24_1 = SP.DataConvert.fixupType(null, $v_0);
  100.             delete parentNode.CC;
  101.         }
  102.         $v_0 = parentNode.From;
  103.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  104.             this.$2T_1 = $v_0;
  105.             delete parentNode.From;
  106.         }
  107.         $v_0 = parentNode.Subject;
  108.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  109.             this.$31_1 = $v_0;
  110.             delete parentNode.Subject;
  111.         }
  112.         $v_0 = parentNode.To;
  113.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  114.             this.$36_1 = SP.DataConvert.fixupType(null, $v_0);
  115.             delete parentNode.To;
  116.         }
  117.     }
  118. };
  119.  
  120. SP.Utilities.EmailPropertiesCustom.registerClass('SP.Utilities.EmailPropertiesCustom', SP.ClientValueObject);
  121.  
  122. function main() {  
  123.     SP.Utilities.Utility.sendEmail = function SP_Utilities_Utility$resolvePrincipal(context, properties) {
  124.         if (!context) {
  125.             throw Error.argumentNull('context');
  126.         }
  127.         var $v_0 = new SP.ClientActionInvokeStaticMethod(context, '{16f43e7e-bf35-475d-b677-9dc61e549339}', 'SendEmail', [properties]);
  128.  
  129.         context.addQuery($v_0);
  130.     };
  131. }
  132.  
  133. function sendMail() {
  134.     var ctx = SP.ClientContext.get_current();
  135.  
  136.     // note: we use our custom class in this case!
  137.     var emailProperties = new SP.Utilities.EmailPropertiesCustom();
  138.  
  139.     emailProperties.set_to(['user1@company.com', 'user2@company.com']);
  140.     emailProperties.set_from('user3@company.com');
  141.     emailProperties.set_body('body');
  142.     emailProperties.set_subject('subject');
  143.  
  144.     SP.Utilities.Utility.sendEmail(ctx, emailProperties);
  145.  
  146.     ctx.executeQueryAsync(
  147.                 function () {
  148.                     console.log("Mail sent");
  149.                 },
  150.                 function (sender, args) {
  151.                     console.log('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace());
  152.                 }
  153.             );
  154.  
  155. }
  156.  
  157. SP.SOD.executeOrDelayUntilScriptLoaded(main, "sp.js");

This approach works as well; however, you must not forget to create an instance of the SP.Utilities.EmailPropertiesCustom class instead of an SP.Utilities.EmailProperties instance, and to pass it as a parameter when invoking the sendEmail method.

In the last approach we take the source code of the SP.Utilities.EmailProperties class from the SP.debug.js file and copy it into our own sendMail.js file again. This time, however, we don’t change the class name. As this class has already been registered by SP.js, we would get an error like this:

SCRIPT5022: Sys.InvalidOperationException: Type SP.Utilities.EmailProperties has already been registered. The type may be defined multiple times or the script file that defines it may have already been loaded. A possible cause is a change of settings during a partial update.

To avoid this error, we should first remove the existing registration, which is possible with this line of code:

Sys.__registeredTypes['SP.Utilities.EmailProperties'] = false;

The code snippet for the third option:

  1. 'use strict';
  2.  
  3. SP.Utilities.EmailProperties = function SP_Utilities_EmailProperties() {
  4.     SP.Utilities.EmailProperties.initializeBase(this);
  5. };
  6. SP.Utilities.EmailProperties.prototype = {
  7.     $1u_1: null,
  8.     $22_1: null,
  9.     $C_1: null,
  10.     $24_1: null,
  11.     $2T_1: null,
  12.     $31_1: null,
  13.     $36_1: null,
  14.     get_additionalHeaders: function SP_Utilities_EmailProperties$get_additionalHeaders() {
  15.         return this.$1u_1;
  16.     },
  17.     set_additionalHeaders: function SP_Utilities_EmailProperties$set_additionalHeaders(value) {
  18.         this.$1u_1 = value;
  19.         return value;
  20.     },
  21.     get_bCC: function SP_Utilities_EmailProperties$get_bCC() {
  22.         return this.$22_1;
  23.     },
  24.     set_bCC: function SP_Utilities_EmailProperties$set_bCC(value) {
  25.         this.$22_1 = value;
  26.         return value;
  27.     },
  28.     get_body: function SP_Utilities_EmailProperties$get_body() {
  29.         return this.$C_1;
  30.     },
  31.     set_body: function SP_Utilities_EmailProperties$set_body(value) {
  32.         this.$C_1 = value;
  33.         return value;
  34.     },
  35.     get_cC: function SP_Utilities_EmailProperties$get_cC() {
  36.         return this.$24_1;
  37.     },
  38.     set_cC: function SP_Utilities_EmailProperties$set_cC(value) {
  39.         this.$24_1 = value;
  40.         return value;
  41.     },
  42.     get_from: function SP_Utilities_EmailProperties$get_from() {
  43.         return this.$2T_1;
  44.     },
  45.     set_from: function SP_Utilities_EmailProperties$set_from(value) {
  46.         this.$2T_1 = value;
  47.         return value;
  48.     },
  49.     get_subject: function SP_Utilities_EmailProperties$get_subject() {
  50.         return this.$31_1;
  51.     },
  52.     set_subject: function SP_Utilities_EmailProperties$set_subject(value) {
  53.         this.$31_1 = value;
  54.         return value;
  55.     },
  56.     get_to: function SP_Utilities_EmailProperties$get_to() {
  57.         return this.$36_1;
  58.     },
  59.     set_to: function SP_Utilities_EmailProperties$set_to(value) {
  60.         this.$36_1 = value;
  61.         return value;
  62.     },
  63.     get_typeId: function SP_Utilities_EmailProperties$get_typeId() {
  64.         return '{fab1608d-fdfb-4c8c-bb0a-9b9cc3618a15}';
  65.     },
  66.     writeToXml: function SP_Utilities_EmailProperties$writeToXml(writer, serializationContext) {
  67.         if (!writer) {
  68.             throw Error.argumentNull('writer');
  69.         }
  70.         if (!serializationContext) {
  71.             throw Error.argumentNull('serializationContext');
  72.         }
  73.         var $v_0 = ['AdditionalHeaders', 'BCC', 'Body', 'CC', 'From', 'Subject', 'To'];
  74.  
  75.         SP.DataConvert.writePropertiesToXml(writer, this, $v_0, serializationContext);
  76.         SP.ClientValueObject.prototype.writeToXml.call(this, writer, serializationContext);
  77.     },
  78.     initPropertiesFromJson: function SP_Utilities_EmailProperties$initPropertiesFromJson(parentNode) {
  79.         SP.ClientValueObject.prototype.initPropertiesFromJson.call(this, parentNode);
  80.         var $v_0;
  81.  
  82.         $v_0 = parentNode.AdditionalHeaders;
  83.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  84.             this.$1u_1 = SP.DataConvert.fixupType(null, $v_0);
  85.             delete parentNode.AdditionalHeaders;
  86.         }
  87.         $v_0 = parentNode.BCC;
  88.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  89.             this.$22_1 = SP.DataConvert.fixupType(null, $v_0);
  90.             delete parentNode.BCC;
  91.         }
  92.         $v_0 = parentNode.Body;
  93.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  94.             this.$C_1 = $v_0;
  95.             delete parentNode.Body;
  96.         }
  97.         $v_0 = parentNode.CC;
  98.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  99.             this.$24_1 = SP.DataConvert.fixupType(null, $v_0);
  100.             delete parentNode.CC;
  101.         }
  102.         $v_0 = parentNode.From;
  103.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  104.             this.$2T_1 = $v_0;
  105.             delete parentNode.From;
  106.         }
  107.         $v_0 = parentNode.Subject;
  108.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  109.             this.$31_1 = $v_0;
  110.             delete parentNode.Subject;
  111.         }
  112.         $v_0 = parentNode.To;
  113.         if (!SP.ScriptUtility.isUndefined($v_0)) {
  114.             this.$36_1 = SP.DataConvert.fixupType(null, $v_0);
  115.             delete parentNode.To;
  116.         }
  117.     }
  118. };
  119.  
  120. // re-register the type
  121. // to avoid the error
  122. // SCRIPT5022: Sys.InvalidOperationException: Type SP.Utilities.EmailProperties has already been registered. The type may be defined multiple times or the script file that defines it may have already been loaded. A possible cause is a change of settings during a partial update.
  123. // we should first remove the existing registration
  124. Sys.__registeredTypes['SP.Utilities.EmailProperties'] = false;
  125. SP.Utilities.EmailProperties.registerClass('SP.Utilities.EmailProperties', SP.ClientValueObject);
  126.  
  127. function main() {  
  128.     SP.Utilities.Utility.sendEmail = function SP_Utilities_Utility$resolvePrincipal(context, properties) {
  129.         if (!context) {
  130.             throw Error.argumentNull('context');
  131.         }
  132.         var $v_0 = new SP.ClientActionInvokeStaticMethod(context, '{16f43e7e-bf35-475d-b677-9dc61e549339}', 'SendEmail', [properties]);
  133.  
  134.         context.addQuery($v_0);
  135.     };
  136. }
  137.  
  138. function sendMail() {
  139.     var ctx = SP.ClientContext.get_current();
  140.     
  141.     var emailProperties = new SP.Utilities.EmailProperties();
  142.     emailProperties.set_to(['user1@company.com', 'user2@company.com']);
  143.     emailProperties.set_from('user3@company.com');
  144.     emailProperties.set_body('body');
  145.     emailProperties.set_subject('subject');
  146.  
  147.     SP.Utilities.Utility.sendEmail(ctx, emailProperties);
  148.  
  149.     ctx.executeQueryAsync(
  150.                 function () {
  151.                     console.log("Mail sent");
  152.                 },
  153.                 function (sender, args) {
  154.                     console.log('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace());
  155.                 }
  156.             );
  157.  
  158. }
  159.  
  160. SP.SOD.executeOrDelayUntilScriptLoaded(main, "sp.js");

Although this is probably the least supported of the three options discussed in the post (if there are different levels of supportability at all), I prefer this last one, as its usage is the most transparent for the developer: one can use the SP.Utilities.EmailProperties class as-is, and there is no need to invoke helper methods.

Note: Be aware that when using one of the last two options, you should call the get_bCC / set_bCC and get_cC / set_cC methods instead of get_BCC / set_BCC and get_CC / set_CC if you need to read / set the BCC / CC properties. In the case of the first option, however, you can call both the methods starting with an uppercase letter and the ones starting with a lowercase letter.
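For example, with the second or third option you would set CC and BCC recipients like this (the addresses are placeholders):

emailProperties.set_cC(['user4@company.com']);
emailProperties.set_bCC(['user5@company.com']);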



Permission-based Rendering Templates, Part 1: The Asynchronous Solution


Recently I read a question on SharePoint StackExchange about how one can restrict the available options in a choice field based on the group membership of the current user. My answer was to create a custom client-side rendering (CSR) template and set it via the JSLink property of the choice field. If you are new to custom rendering templates, you can find several superb introductions on the topic on the web, like this one. At the time of my answer I had no sample code ready to publish (honestly, I was rather surprised that I had not found any such example on the web), but in the meantime I prepared two different implementations for the same problem. In this post I describe the first possible approach, an asynchronous solution based on the JavaScript client object model (JSCOM) and jQuery. The other solution will be discussed in a later post.

Both approaches share the same custom list: it has a standard Title field and a choice field called Status with three state options: ‘Approved’, ‘Rejected’ and ‘Resubmit’. If the current user is a member of a specific SharePoint group (let’s say ‘MyGroup’), the options ‘Approved’ and ‘Rejected’ should be displayed in the editable modes (that means on ‘EditForm’ and on ‘NewForm’), otherwise only the option ‘Resubmit’.

In our JavaScript rendering template we define a custom namespace that includes the member properties and methods of the template. The same editFieldMethod function will be used in both editable modes. It’s simply a wrapper around the standard rendering template of choice fields (SPFieldChoice_Edit); the only extra work it performs is to store the ID of the corresponding HTML element (a select in this case) in a member property called controlId. The standard format of the ID is NameOfTheChoiceField_GuidOfTheChoiceField_$DropDownChoice, for example in my case it is Status_fb5a9aac-5fdb-442e-96ac-ab7161cc4208_$DropDownChoice. We store its value to be able to find the HTML element and its child option elements via jQuery later in our asynchronous callback method.

  1. var restrictedValues1 = ['Approved', 'Rejected'];
  2. var restrictedValues2 = ['Resubmit'];
  3.  
  4. var custom = custom || {};
  5.  
  6. custom.controlId = null;
  7.  
  8. custom.editFieldMethod = function (ctx) {
  9.     var fieldSchema = ctx.CurrentFieldSchema;
  10.     custom.controlId = fieldSchema.Name + '_' + fieldSchema.Id + '_$DropDownChoice';
  11.     var html = SPFieldChoice_Edit(ctx);
  12.     return html;
  13. }

We created a simple escapeForJQuery helper function to escape the dollar sign ($) in the ID, as I found that IE 11 and jQuery have issues with that character when it is used in selectors.

  1. custom.escapeForJQuery = function (value) {
  2.     var newValue = value.replace(/\$/g, "\\$");
  3.     return newValue;
  4. }

Note: You might have problems with the underscore (_) as well, especially if you use old browser versions; however, I have not experienced such problems. In that case you should extend the escapeForJQuery helper function (a more general variant is sketched after the quote below). See this guide:

Given this fact, authors who write CSS often attempt to employ the underscore in a similar fashion when creating class and ID names. This should not be done. Although underscores are, as of this writing, technically permitted in class and ID names, there are many historical and practical reasons why they should be avoided.
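A more defensive variant of the helper, shown here only as a sketch, escapes every character that has a special meaning in jQuery / CSS selectors (the underscore itself is not a selector metacharacter, so extra handling for it would be browser-specific):

custom.escapeForJQuery = function (value) {
    // escape all selector metacharacters, not only '$'
    var newValue = value.replace(/([!"#$%&'()*+,.\/:;<=>?@[\\\]^`{|}~])/g, '\\$1');
    return newValue;
};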

We utilize the escapeForJQuery function in our next helper function. The hideOptions method removes those options of the HTML element whose ID is specified in the ctrlId parameter that have any of the values specified in the restrictedValues array parameter:

  1. custom.hideOptions = function (ctrlId, restrictedValues) {
  2.     restrictedValues.forEach(function (rv) {
  3.         var selector = "#" + custom.escapeForJQuery(ctrlId) + " option[value='" + custom.escapeForJQuery(rv) + "']";
  4.         $(selector).remove();
  5.     });        
  6. }

We use a third helper function called isCurrentUserMemberOfGroup to determine via CSOM whether the current user is a member of a group. This function (borrowed from this answer) has two parameters: the name of the group (groupName) and a callback method (OnComplete).

  1. custom.isCurrentUserMemberOfGroup = function (groupName, OnComplete) {
  2.  
  3.     var clientContext = new SP.ClientContext.get_current();
  4.     var currentUser = clientContext.get_web().get_currentUser();
  5.  
  6.     var userGroups = currentUser.get_groups();
  7.     clientContext.load(userGroups);
  8.  
  9.     clientContext.executeQueryAsync(OnSuccess, OnFailure);
  10.  
  11.     function OnSuccess(sender, args) {
  12.         var isMember = false;
  13.         var groupsEnumerator = userGroups.getEnumerator();
  14.         while (groupsEnumerator.moveNext()) {
  15.             var group = groupsEnumerator.get_current();
  16.             if (group.get_title() == groupName) {
  17.                 isMember = true;
  18.                 break;
  19.             }
  20.         }
  21.  
  22.         OnComplete(isMember);
  23.     }
  24.  
  25.     function OnFailure(sender, args) {
  26.         OnComplete(false);
  27.     }
  28. }

The isCurrentUserMemberOfGroup function is invoked by the applyPermissions function. In the callback we hide the appropriate options based on the group membership of the user.

  1. var adminGroup = "MyGroup";
  2.  
  3. custom.applyPermissions = function (ctx) {
  4.     custom.isCurrentUserMemberOfGroup(adminGroup, function (isCurrentUserInGroup) {
  5.         console.log("Current user is member of group '" + adminGroup + "': " + isCurrentUserInGroup);
  6.  
  7.         if (custom.controlId) {
  8.             if (isCurrentUserInGroup) {
  9.                 custom.hideOptions(custom.controlId, restrictedValues1);
  10.             }
  11.             else {
  12.                 custom.hideOptions(custom.controlId, restrictedValues2);
  13.             }
  14.         }
  15.     });
  16. };

In our rendering template we register the custom editing method editFieldMethod, and set the applyPermissions function to be called as OnPostRender:

  1. var customOverrides = {};
  2. customOverrides.Templates = {};
  3.  
  4. customOverrides.Templates.Fields = {
  5.     'Status': {
  6.         'EditForm': custom.editFieldMethod,
  7.         'NewForm': custom.editFieldMethod
  8.     }
  9. };
  10.  
  11. customOverrides.Templates.OnPostRender = custom.applyPermissions;
  12.  
  13. SPClientTemplates.TemplateManager.RegisterTemplateOverrides(customOverrides);

The full source code of the rendering template introduced in this post:

  1. 'use strict';
  2.  
  3. (function () {
  4.  
  5.     var restrictedValues1 = ['Approved', 'Rejected'];
  6.     var restrictedValues2 = ['Resubmit'];
  7.  
  8.     var custom = custom || {};
  9.  
  10.     custom.controlId = null;
  11.  
  12.     custom.editFieldMethod = function (ctx) {
  13.         var fieldSchema = ctx.CurrentFieldSchema;
  14.         custom.controlId = fieldSchema.Name + '_' + fieldSchema.Id + '_$DropDownChoice';
  15.         var html = SPFieldChoice_Edit(ctx);
  16.         return html;
  17.     }
  18.  
  19.     custom.isCurrentUserMemberOfGroup = function (groupName, OnComplete) {
  20.  
  21.         var clientContext = new SP.ClientContext.get_current();
  22.         var currentUser = clientContext.get_web().get_currentUser();
  23.  
  24.         var userGroups = currentUser.get_groups();
  25.         clientContext.load(userGroups);
  26.  
  27.         clientContext.executeQueryAsync(OnSuccess, OnFailure);
  28.  
  29.         function OnSuccess(sender, args) {
  30.             var isMember = false;
  31.             var groupsEnumerator = userGroups.getEnumerator();
  32.             while (groupsEnumerator.moveNext()) {
  33.                 var group = groupsEnumerator.get_current();
  34.                 if (group.get_title() == groupName) {
  35.                     isMember = true;
  36.                     break;
  37.                 }
  38.             }
  39.  
  40.             OnComplete(isMember);
  41.         }
  42.  
  43.         function OnFailure(sender, args) {
  44.             OnComplete(false);
  45.         }
  46.     }
  47.  
  48.     custom.escapeForJQuery = function (value) {
  49.         var newValue = value.replace(/\$/g, "\\$");
  50.         return newValue;
  51.     }
  52.  
  53.     custom.hideOptions = function (ctrlId, restrictedValues) {
  54.         restrictedValues.forEach(function (rv) {
  55.             var selector = "#" + custom.escapeForJQuery(ctrlId) + " option[value='" + custom.escapeForJQuery(rv) + "']";
  56.             $(selector).remove();
  57.         });        
  58.     }
  59.  
  60.     var adminGroup = "MyGroup";
  61.  
  62.     custom.applyPermissions = function (ctx) {
  63.         custom.isCurrentUserMemberOfGroup(adminGroup, function (isCurrentUserInGroup) {
  64.             console.log("Current user is member of group '" + adminGroup + "': " + isCurrentUserInGroup);
  65.  
  66.             if (custom.controlId) {
  67.                 if (isCurrentUserInGroup) {
  68.                     custom.hideOptions(custom.controlId, restrictedValues1);
  69.                 }
  70.                 else {
  71.                     custom.hideOptions(custom.controlId, restrictedValues2);
  72.                 }
  73.             }
  74.         });
  75.     };
  76.     
  77.     var customOverrides = {};
  78.     customOverrides.Templates = {};
  79.  
  80.     customOverrides.Templates.Fields = {
  81.         'Status': {
  82.             'EditForm': custom.editFieldMethod,
  83.             'NewForm': custom.editFieldMethod
  84.         }
  85.     };
  86.  
  87.     customOverrides.Templates.OnPostRender = custom.applyPermissions;
  88.  
  89.     SPClientTemplates.TemplateManager.RegisterTemplateOverrides(customOverrides);
  90.     
  91. })();

Assuming your custom list is called PermBasedField, and both jQuery (in my case it is jquery-1.9.1.min.js) and our custom JavaScript rendering template (in my case it’s called permissionBasedFieldTemplate.js) are stored in the root of the Site Assets library of the root web, you can register the template using the following PowerShell script:

$web = Get-SPWeb http://YourSharePointSite
$list = $web.Lists["PermBasedField"]

$field = $list.Fields.GetFieldByInternalName("Status")
$field.JSLink = "~sitecollection/_layouts/15/sp.runtime.js|~sitecollection/_layouts/15/sp.js|~sitecollection/SiteAssets/jquery-1.9.1.min.js|~sitecollection/SiteAssets/permissionBasedFieldTemplate.js"
$field.Update()

Stay tuned, the second part of the post including a synchronous approach should come soon.


Permission-based Rendering Templates, Part 2: The Synchronous Solution


In my recent post I illustrated how you can implement a permission-based custom rendering template using the JavaScript client object model (JSCOM) and jQuery. That rendering template was implemented using the standard asynchronous JavaScript patterns via a callback method, so as not to block the UI thread of the browser. In a fast network (in a LAN, for example), however, a synchronous implementation can work as well. Although there are some unsupported methods to make a JSCOM request synchronously, the JavaScript client object model was designed for asynchronous usage (see its executeQueryAsync method). To send our requests synchronously, we utilize the REST / OData interface in this post, and send the requests via the ajax function of jQuery.

To understand the original requirements and the configuration (field and list names, etc.), I suggest reading the first part first.

To enable the use of jQuery selectors containing the dollar sign ($), we use the same escapeForJQuery helper function that we created for the first part.

  1. var restrictedValues1 = ['Approved', 'Rejected'];
  2. var restrictedValues2 = ['Resubmit'];
  3.  
  4. var custom = custom || {};
  5.  
  6. custom.controlId = null;
  7.  
  8. var adminGroup = "MyGroup";
  9.  
  10. custom.escapeForJQuery = function (value) {
  11.     var newValue = value.replace(/\$/g, "\\$");
  12.     return newValue;
  13. }

Instead of simply wrapping the standard rendering template of choice fields (SPFieldChoice_Edit), the editFieldMethod function first gets the HTML content of the field control as it would be rendered without the customization by invoking the SPFieldChoice_Edit function; then we determine the group membership of the user by calling the synchronous isCurrentUserMemberOfGroup function (more about that a bit later); finally, we alter the HTML content by hiding the appropriate options via the hideOptions function (see that later as well).

  1. custom.editFieldMethod = function (ctx) {
  2.     var fieldSchema = ctx.CurrentFieldSchema;
  3.     custom.controlId = fieldSchema.Name + '_' + fieldSchema.Id + '_$DropDownChoice';
  4.     var html = SPFieldChoice_Edit(ctx);
  5.  
  6.     var isCurrentUserInGroup = custom.isCurrentUserMemberOfGroup(adminGroup);
  7.     if (isCurrentUserInGroup) {
  8.         html = custom.hideOptions(html, custom.controlId, restrictedValues1);
  9.     }
  10.     else {
  11.         html = custom.hideOptions(html, custom.controlId, restrictedValues2);
  12.     }
  13.  
  14.     return html;
  15. }

The hideOptions function loads the HTML source of the control into the DOM and removes the options that should be hidden for the given group. Finally it returns the HTML source of the altered control:

  1. custom.hideOptions = function (html, ctrlId, restrictedValues) {
  2.     var parsedHtml = $(html);
  3.     restrictedValues.forEach(function (rv) {
  4.         var selector = "#" + custom.escapeForJQuery(ctrlId) + " option[value='" + custom.escapeForJQuery(rv) + "']";
  5.         $(parsedHtml).find(selector).remove();
  6.     });
  7.     var result = $(parsedHtml).html();
  8.  
  9.     return result;
  10. }

The isCurrentUserMemberOfGroup function sends a synchronous REST request via the ajax function of jQuery to determine the group membership of the current user:

  1. var serverUrl = String.format("{0}//{1}", window.location.protocol, window.location.host);
  2.  
  3. custom.isCurrentUserMemberOfGroup = function (groupName) {
  4.     var isMember = false;
  5.  
  6.     $.ajax({
  7.         url: serverUrl + "/_api/Web/CurrentUser/Groups?$select=LoginName",
  8.         type: "GET",
  9.         async: false,
  10.         contentType: "application/json;odata=verbose",
  11.         headers: {
  12.             "Accept": "application/json;odata=verbose",
  13.             "X-RequestDigest": $("#__REQUESTDIGEST").val()
  14.         },
  15.         complete: function (result) {
  16.             var response = JSON.parse(result.responseText);
  17.             if (response.error) {
  18.                 console.log(String.format("Error: {0}\n{1}", response.error.code, response.error.message.value));
  19.             }
  20.             else {
  21.                 var groups = response.d.results;
  22.                 groups.forEach(function (group) {
  23.                     var loginName = group.LoginName;
  24.                     console.log(String.format("Group name: {0}", loginName));
  25.                     if (groupName == loginName) {
  26.                         isMember = true;
  27.                     }
  28.                 });
  29.             }
  30.         }
  31.     });
  32.  
  33.     return isMember;
  34. }

In this case we simply register the editFieldMethod for both the ‘EditForm’ and the ‘NewForm’ mode of the Status field; there is no need for the OnPostRender method:

  1. var customOverrides = {};
  2. customOverrides.Templates = {};
  3.  
  4. customOverrides.Templates.Fields = {
  5.     'Status': {
  6.         'EditForm': custom.editFieldMethod,
  7.         'NewForm': custom.editFieldMethod
  8.     }
  9. };
  10.  
  11. SPClientTemplates.TemplateManager.RegisterTemplateOverrides(customOverrides);

The full source code of the rendering template introduced in this post:

  1. 'use strict';
  2.  
  3. (function () {
  4.  
  5.     var restrictedValues1 = ['Approved', 'Rejected'];
  6.     var restrictedValues2 = ['Resubmit'];
  7.  
  8.     var custom = custom || {};
  9.  
  10.     custom.controlId = null;
  11.  
  12.     var adminGroup = "MyGroup";
  13.  
  14.     custom.escapeForJQuery = function (value) {
  15.         var newValue = value.replace(/\$/g, "\\$");
  16.         return newValue;
  17.     }
  18.  
  19.     custom.hideOptions = function (html, ctrlId, restrictedValues) {
  20.         var parsedHtml = $(html);
  21.         restrictedValues.forEach(function (rv) {
  22.             var selector = "#" + custom.escapeForJQuery(ctrlId) + " option[value='" + custom.escapeForJQuery(rv) + "']";
  23.             $(parsedHtml).find(selector).remove();
  24.         });
  25.         var result = $(parsedHtml).html();
  26.  
  27.         return result;
  28.     }
  29.  
  30.     custom.editFieldMethod = function (ctx) {
  31.         var fieldSchema = ctx.CurrentFieldSchema;
  32.         custom.controlId = fieldSchema.Name + '_' + fieldSchema.Id + '_$DropDownChoice';
  33.         var html = SPFieldChoice_Edit(ctx);
  34.  
  35.         var isCurrentUserInGroup = custom.isCurrentUserMemberOfGroup(adminGroup);
  36.         if (isCurrentUserInGroup) {
  37.             html = custom.hideOptions(html, custom.controlId, restrictedValues1);
  38.         }
  39.         else {
  40.             html = custom.hideOptions(html, custom.controlId, restrictedValues2);
  41.         }
  42.  
  43.         return html;
  44.     }
  45.  
  46.     var serverUrl = String.format("{0}//{1}", window.location.protocol, window.location.host);
  47.  
  48.     custom.isCurrentUserMemberOfGroup = function (groupName) {
  49.         var isMember = false;
  50.  
  51.         $.ajax({
  52.             url: serverUrl + "/_api/Web/CurrentUser/Groups?$select=LoginName",
  53.             type: "GET",
  54.             async: false,
  55.             contentType: "application/json;odata=verbose",
  56.             headers: {
  57.                 "Accept": "application/json;odata=verbose",
  58.                 "X-RequestDigest": $("#__REQUESTDIGEST").val()
  59.             },
  60.             complete: function (result) {
  61.                 var response = JSON.parse(result.responseText);
  62.                 if (response.error) {
  63.                     console.log(String.format("Error: {0}\n{1}", response.error.code, response.error.message.value));
  64.                 }
  65.                 else {
  66.                     var groups = response.d.results;
  67.                     groups.forEach(function (group) {
  68.                         var loginName = group.LoginName;
  69.                         console.log(String.format("Group name: {0}", loginName));
  70.                         if (groupName == loginName) {
  71.                             isMember = true;
  72.                         }
  73.                     });
  74.                 }
  75.             }
  76.         });
  77.  
  78.         return isMember;
  79.     }
  80.  
  81.     var customOverrides = {};
  82.     customOverrides.Templates = {};
  83.  
  84.     customOverrides.Templates.Fields = {
  85.         'Status': {
  86.             'EditForm': custom.editFieldMethod,
  87.             'NewForm': custom.editFieldMethod
  88.         }
  89.     };
  90.  
  91.     SPClientTemplates.TemplateManager.RegisterTemplateOverrides(customOverrides);
  92.  
  93. })();

Assuming your custom list is called PermBasedField, and both jQuery (in my case it is jquery-1.9.1.min.js) and our custom JavaScript rendering template (in my case it’s called permissionBasedFieldTemplate2.js) are stored in the root of the Site Assets library of the root web, you can register the template using the following PowerShell script:

$web = Get-SPWeb http://YourSharePointSite
$list = $web.Lists["PermBasedField"]

$field = $list.Fields.GetFieldByInternalName("Status")
$field.JSLink = "~sitecollection/SiteAssets/jquery-1.9.1.min.js|~sitecollection/SiteAssets/permissionBasedFieldTemplate2.js"
$field.Update()

Note that (in contrast to the script introduced in the first part of this post) there is no need for the JSCOM JavaScript files (sp.runtime.js and sp.js) in this case.


Creating an AngularJS Directive for Mouse Hold


Most of the time we use AngularJS to create our single-page applications (SPAs) in SharePoint and Project Server.

Recently we had a requirement that AngularJS does not provide an out-of-the-box solution for: there is a container on the page that displays a fixed number of items, and two buttons (one located above the container, the other below it) that should scroll the items in the container back and forth. When the user clicks the upper button, new items should appear at the bottom of the container and the top items should disappear. When the user clicks the lower button, the items formerly scrolled out at the top should reappear and the items at the bottom should disappear. The user must be able to scroll the items one by one. However, there is a large number of items, so clicking the buttons 20 times just to scroll 20 items down would be rather inconvenient for the users.

So we need something similar to what we already have in the scroll bar of traditional Windows applications:

  • If the user clicks on the arrow button at the bottom of the scroll bar, and holds the mouse button in the position (there is a single mouse down event but no mouse up event), the application will scroll the content in small steps down.
  • Similarly, if the user clicks on the arrow button at the top of the scroll bar, and holds the mouse button in the position (there is a single mouse down event but no mouse up event), the application will scroll the content in small steps up.
  • If the user clicks on either of the arrow buttons, holds the mouse button in this position, but moves the mouse pointer out of the area of the arrow button (the mouse down event is raised in the area of the button, but a mouse out event is raised before the mouse up event) the content will be scrolled only while the mouse pointer is over the arrow button.
  • If the user presses the mouse button while the pointer is outside the arrow button area, holds it down, and moves the pointer over the arrow button only later (the mouse down event is raised outside the area of the button), the content will not be scrolled.

To sum up the above rules for our case: the single mouse down event should happen while the mouse pointer is over the button, and the action performed by the application (in our case it was scrolling up / down) is repeated until either a mouse up event or a mouse out event occurs.

We decided to implement the requirements as a reusable AngularJS component, namely a directive that encapsulates the functionality and lets us apply it to various HTML elements declaratively.

We also wanted to provide the following parameters to our component:

  • The action the component should repeat, similarly to the built-in AngularJS event directives like ng-click. Its value should be the name of a JavaScript function available in the scope.
  • The time interval that configures how frequently the action is repeated. Its value should be the delay in milliseconds.
  • A time interval that configures the delay before the repetition starts (for example, the user has to hold the mouse button down for 1 second to start the repetition, but once started, the action is performed every 0.2 seconds) was in our original scope, but it fell victim to feature cutting.

We found several similar solutions on blogs and forums, but none of them fulfilled our requirements completely, or they simply didn’t work.

Our implementation was created as an attribute-level AngularJS directive: the mandatory ‘on-mouse-hold’ attribute contains the name of the function that will be invoked as the action for the mouse hold event. The optional ‘mouse-hold-repeat’ attribute contains the delay of the repetition (in milliseconds); the default value is 500 ms (0.5 sec).

Note: In this post I illustrate the usage of the directive in a non-SharePoint-specific application for those of you who are not interested in SharePoint, and to separate this piece of functionality from the other (IMHO not less interesting) SharePoint-related stuff. I plan to write a further post about using the directive in a SharePoint-specific application, namely how to load items dynamically in case of scrolling using the JavaScript client object model.

The following HTML snippet illustrates using the directive in a simple case. There are two buttons with different actions and delays. In this case we simply count a numeric value up and down.

  1. <div ng-app="myApp" ng-controller="counterCtrl">
  2.     <button type="button" on-mouse-hold="countUp">Count up</button>
  3.     <div>{{counter}}</div>
  4.     <button type="button" on-mouse-hold="countDown" mouse-hold-repeat="50">Count down</button>
  5. </div>

The functionality of the AngularJS directive is implemented in the JavaScript code below:

  1. 'use strict';
  2.  
  3. var myApp = angular.module('myApp', []);
  4.  
  5. myApp.controller('counterCtrl', function ($scope) {
  6.  
  7.     $scope.counter = 0;
  8.  
  9.     $scope.countDown = function () {
  10.         $scope.counter--;
  11.     }
  12.  
  13.     $scope.countUp = function () {
  14.         $scope.counter++;
  15.     }
  16.  
  17. }).directive('onMouseHold', function ($parse, $interval) {
  18.     var stop;
  19.  
  20.     var dirDefObj = {
  21.         restrict: 'A',
  22.         scope: { method: '&onMouseHold' },
  23.         link: function (scope, element, attrs) {
  24.             var expressionHandler = scope.method();
  25.             var actionInterval = (attrs.mouseHoldRepeat) ? attrs.mouseHoldRepeat : 500;
  26.  
  27.             var startAction = function () {
  28.                 expressionHandler();
  29.                 stop = $interval(function () {
  30.                     expressionHandler();
  31.                 }, actionInterval);
  32.             };
  33.  
  34.             var stopAction = function () {
  35.                 if (stop) {
  36.                     $interval.cancel(stop);
  37.                     stop = undefined;
  38.                 }
  39.             };
  40.  
  41.             element.bind('mousedown', startAction);
  42.             element.bind('mouseup', stopAction);
  43.             element.bind('mouseout', stopAction);
  44.         }
  45.     };
  46.  
  47.     return dirDefObj;
  48. });

If you want to test the functionality online, visit this page on jsfiddle.


Using Edge.js as a Replacement for win32ole


Last month I had to create a NodeJS script that invokes methods of an ActiveX object. I work in a Windows-only environment, so that should have been no problem. I found the win32ole package quickly, and based on its description and the samples I’ve found, it seemed to be the perfect tool for my requirements. However, like many others (see the issues on GitHub and a lot of threads about the build problem on StackOverflow), I had issues installing the package in my environment:

OS: Windows Server 2008 R2 SP1, Windows Server 2012 R2
npm: 2.15.9
node: 4.5.0

The last two lines are from the output of the npm version command.

As far as I can see, the native build step of win32ole (node-gyp rebuild) fails:

npm ERR! win32ole@0.1.3 install: 'node-gyp rebuild'
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the win32ole@0.1.3 install script 'node-gyp rebuild'.

As agape824 commented on Aug 28 2015 regarding a similar issue:

“I solved the problem
by installing node.js v0.8.18 & npm v1.4.28.
Previous erros were produced by different version of node files (eg. v8.h).”

So I removed all NodeJS and npm installations on one of our systems and installed the suggested versions: I downloaded node-v0.8.18-x64.msi, and installed the right npm version via the command:

npm install npm@1.4.28 -g

Now invoking the npm version command results in:

node: 0.8.18
npm: 1.4.28

and we can install win32ole using:

npm install --save-dev win32ole

Having win32ole installed, we can create NodeJS scripts that interact with ActiveX objects.

For example, you can get the content of a web page via the MSXML2.XMLHTTP object:

var win32ole = require('win32ole');

var url = "http://www.yoursite.com";
var xhr = new ActiveXObject("MSXML2.XMLHTTP");
xhr.open("GET", url, false);
xhr.send();
console.log(xhr.responseText);

Of course, you can do this much more easily and in a platform-independent way using other NodeJS libraries; the snippet just illustrates how to invoke ActiveX object methods. However, you can perform other, less trivial actions using the win32ole library as well, like interacting with Windows applications that support automation via ActiveX objects (like Excel, Word, or Outlook; see the examples here and the sketch below), or even access the HTML DOM loaded into your Internet Explorer browser and extract values from it.
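As a small sketch of such an automation scenario, following the same ActiveXObject pattern as above (the workbook path and the cell address are made up, and Excel must be installed on the machine), reading a single cell from an Excel workbook could look like this:

var win32ole = require('win32ole');

var excel = new ActiveXObject('Excel.Application');
excel.Visible = false;
var workbook = excel.Workbooks.Open('C:\\temp\\report.xlsx'); // hypothetical file
var value = workbook.Worksheets(1).Cells(1, 1).Value;         // read cell A1
console.log(value);
workbook.Close(false);
excel.Quit();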

In my case I needed Internet Explorer to perform Windows-integrated authentication against a SharePoint server. In this case, SharePoint returns an authentication ticket in the response page (in the hidden input field ‘__REQUESTDIGEST’) that one can include in subsequent requests.

var win32ole = require('win32ole');
var uri = "http://YourSharePointServer";

try{
  var ie = new ActiveXObject('InternetExplorer.Application');
  // displaying the UI of IE might be useful when debugging
  //ie.Visible = true;
  console.log(uri);
  ie.Navigate(uri);
  while(ie.ReadyState != 4) {
    win32ole.sleep(1000);
  }
  var token = ie.Document.getElementById("__REQUESTDIGEST").value;
  console.log(token);
  ie.Quit();
}catch(e){
  console.log('*** exception caught ***\n' + e);
}
So the win32ole package would be really great, but it has not been updated in the past 4 years or so, and we don’t want to work with obsolete node and npm versions just to be able to use this package. Instead, we tried to find a replacement for win32ole. And I think we’ve found something that is even better: the Edge.js package. Edge.js enables interaction between your NodeJS and .NET code in both directions, and not only on the Windows platform, as it supports Mono and CoreCLR as well. It supports PowerShell and other languages beyond C#, like F#, Lisp or Python, just to name the most important ones.

To tell the truth, creating ActiveX objects and invoking their methods is only a very small subset of the functionality enabled by this package. Obviously, you cannot create ActiveX objects on operating systems that do not support them, but that is not a limitation of the package.
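For reference, the basic Edge.js pattern, independent of ActiveX, is tiny: you embed a C# async lambda in a comment block and call it from NodeJS. A minimal sketch, assuming the package is already installed (see the next step):

var edge = require('edge');

var hello = edge.func(function () {/*
    async (input) => {
        return ".NET welcomes " + input.ToString();
    }
*/});

hello('Node.js', function (error, result) {
    if (error) throw error;
    console.log(result); // .NET welcomes Node.js
});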

After you install the Edge.js package, for example:

npm install --save-dev edge

you can create NodeJS scripts that invoke your C# code. In the C# code you can create ActiveX objects and invoke their members as well. The following simple example creates an instance of the WScript.Shell ActiveX object and displays a greeting message via its Popup method:

var edge = require('edge');

var wshShell = edge.func(function () {/*
  async (input) => { 
       dynamic wshShell = Activator.CreateInstance(Type.GetTypeFromProgID("WScript.Shell"));
       wshShell.Popup("Hello, " + input + "!");

        return string.Empty;
    }
*/});

wshShell("world", function (error, result) {
    if (error) throw error;
    console.log(result);
});

Or we can re-create our win32ole example shown above using Edge.js, and read the authentication token via the HTML DOM in Internet Explorer:

var edge = require('edge');

var uri = "http://YourSharePointServer";

var getToken = edge.func(function () {/*
    async (uri) => { 

            dynamic ie = Activator.CreateInstance(Type.GetTypeFromProgID("InternetExplorer.Application"));
            // if you want to see the UI (for example, when debugging)
            //ie.Visible = true;
            ie.Navigate(uri);
            while (ie.ReadyState != 4)
            {
                System.Threading.Thread.Sleep(1000);
            }
            var token = ie.Document.getElementById("__REQUESTDIGEST").value;
            ie.Quit();

        return token.ToString();
    }
*/});

getToken(uri, function (error, result) {
    if (error) throw error;
    console.log(result);
});

An alternative solution to the above is to read a SharePoint web page via the MSXML2.XMLHTTP object and parse the HTML DOM via cheerio to get the hidden field that contains the request digest.

var edge = require('edge');
var cheerio = require('cheerio');

var uri = "http://YourSharePointServer";

var getToken = edge.func(function () {/*
    async (uri) => {

            dynamic xhr = Activator.CreateInstance(Type.GetTypeFromProgID("MSXML2.XMLHTTP"));
            xhr.open("GET", uri, false);
            xhr.send();

            return xhr.responseText;
    }
*/});

getToken(uri, function (error, result) {
    if (error) throw error;
    $ = cheerio.load(result);
    console.log($('#__REQUESTDIGEST').val());
});

I hope these scripts help other developers frustrated by the build issues of win32ole to create workarounds. I think Edge.js is a really useful NodeJS package, and I am sure I will find a lot of application areas for it in the future. In contrast to win32ole, Edge.js is a living project, and that is very important to us. Many thanks to Tomasz Janczuk for creating and supporting this gem! Keep up the excellent job!


‘The URL "[url]" is invalid. It may refer to a nonexistent file or folder, or refer to a valid file or folder that is not in the current Web.’ Error When Changing the URL of a Web Site


Recently one of our SharePoint administrators wanted to change the address of a site via Site Settings / Title, description, and logo:

image

He got an error with a correlation ID. Based on this ID we found this entry in the ULS logs:

<nativehr>0x80004005</nativehr><nativestack></nativestack>The URL "/Sites/SiteX" is invalid. It may refer to a nonexistent file or folder, or refer to a valid file or folder that is not in the current Web.

We had the same error message in the PowerShell console, when we tried to change the URL of the site from PowerShell, as described in this post:

$web = Get-SPWeb http://YourSharePointServer/Sites/SiteX
$web.ServerRelativeUrl = '/SiteX_New'
$web.Update()

We had the same symptoms if we tried to do it as described here:

Get-SPWeb http://YourSharePointServer/Sites/SiteX | Set-SPWeb -RelativeUrl SiteX_New

This message was of course wrong and misleading, as we could access the web both from the UI and from script. As it turned out, another error preceded the one above in the logs:

System.Data.SqlClient.SqlException (0x80131904): String or binary data would be truncated.     at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)     at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)     at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)     at System.Data.SqlClient.SqlDataReader.TryHasMoreRows(Boolean& moreRows)     at System.Data.SqlClient.SqlDataReader.TryHasMoreResults(Boolean& moreResults)     at System.Data.SqlClient.SqlDataReader.TryNextResult(Bool…
…ean& more)     at System.Data.SqlClient.SqlDataReader.NextResult()     at Microsoft.SharePoint.SPSqlClient.ExecuteQueryInternal(Boolean retryfordeadlock)     at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean retryfordeadlock)  ClientConnectionId:71163353-b397-4ada-99fd-be1e09547586  Error Number:8152,State:13,Class:16
ExecuteQuery failed with original error 0x80131904

The real problem was caused by a few file URLs in one of the document libraries. The length of these URLs was originally already near the limit, and after changing the site URL to a longer path name, the new URLs would have exceeded the limit.

On the content database level, the properties of the documents are stored in the AllDocs table. The DirName field (nvarchar(256)) contains the full directory path, including the site structure (for example 'Sites/SiteX/Documents/FolderA/FolderC'). The LeafName field (nvarchar(128)) contains the file name (for example 'DocumentZ.docx'). That means that when a site URL is changed, only the value of the DirName field changes, so it is only this field where the new value can be truncated, if its length exceeds the 256-character limit of the field.

You can query the files having the longest DirName from the content database via the SQL query:

SELECT TOP 100
  [DirName], LEN([DirName]) AS DirNameLength
  FROM [Your_Content_DB].[dbo].[AllDocs]
  WHERE DirName LIKE 'Sites/SiteX/%'
  ORDER BY DirNameLength DESC

If you happen to need the overall path (including both DirName and LeafName), you can query it as well:

SELECT TOP 100
  [DirName] + '/' + [LeafName] AS Path, LEN([DirName] + '/' + [LeafName]) AS PathLength
  FROM [Your_Content_DB].[dbo].[AllDocs]
  WHERE DirName LIKE 'Sites/SiteX/%'
  ORDER BY PathLength DESC
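
If you have no access to the content database, or prefer to stay away from direct SQL queries, you can get a rough overview from PowerShell via the server object model as well. The snippet below is only an approximation of the DirName + LeafName length (and enumerating all items this way can be slow on large libraries):

$web = Get-SPWeb http://YourSharePointServer/Sites/SiteX
$web.Lists | ? { $_.BaseType -eq 'DocumentLibrary' } | % { $_.Items } | % {
  # approximate the path length as stored in DirName + LeafName
  $path = "$($web.ServerRelativeUrl)/$($_.Url)"
  New-Object PSObject -Property @{ Path = $path; PathLength = $path.Length }
} | sort PathLength -Descending | select -First 100 Path, PathLength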


SharePoint Designer Workflow Gets Suspended after Task Completion – How to Get Field Value from a Workflow Task via Lookup


Nowadays we are working quite a lot with SharePoint Designer 2013 based workflows. By workflows I mean the “new”, Workflow Manager based ones.

Recently we wanted to access a workflow task field beyond the standard outcome, to use its value in another part of the workflow. For example, we needed the value of the Description field as the explanation of the decision made on the form (rejection vs. approval).

image

To achieve that, we stored the workflow task ID in a variable called TaskID (see above), and planned to use it as a lookup value from the task list (see below). Note that we used the ID field in the lookup list, and the Data Source is Association: Task List, that is the standard Workflow Tasks list in our case.

image

The value of the TaskID variable is returned as an integer:

image

After publishing the workflow and creating an item to test it, the workflow task was created. We entered some text in the Description field and approved the task. We found that the workflow got stuck in the Suspended status. Resuming it did not help either.

image

The error description we had:

RequestorId: 3c361109-ce76-de39-0000-000000000000. Details: An unhandled exception occurred during the execution of the workflow instance. Exception details: System.FormatException: Input string was not in a correct format. at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal) at System.Number.ParseInt32(String s, NumberStyles style, NumberFormatInfo info) at Microsoft.Activities.Expressions.ParseNumber`1.Execute(CodeActivityContext context) at System.Activities.CodeActivity`1.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager) at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

The resources we found on the web here, here and there did not help too much, but the error message itself did.

The reason for the error was that the TaskID (a variable of type String) we get from the Assign a task action is actually the GUID of the task item, but we wanted to use it to look up the task based on its ID field (an Integer). Of course, the workflow engine was not able to convert the GUID to an integer value.

The correct lookup is illustrated below. We use the GUID field as the lookup field, and TaskID is returned as a string:

image

image

With this “minor” modification the workflow runs as expected.

After we solved the problem I found that the original requirement (getting a field value from a specific workflow task as data source via lookup) had already been discussed and solved earlier, see this thread and this one.


Using PowerShell and REST with Project Server (or SharePoint) for Reporting


If you are working with Project Server or SharePoint Server, you should not ignore the potential provided by PowerShell and the REST (OData) interface to create simple reports. At the same time, you should be aware of a few pitfalls of this combination as well.

Let’s see the next code example first. Its goal is to output the list of projects to the screen, including their Id, Name and ProjectSiteUrl properties:

$url = "http://YourProjectServer/PWA/_api/ProjectServer/Projects?$select=Id,Name,ProjectSiteUrl"

$request = [System.Net.WebRequest]::Create($url)
$request.UseDefaultCredentials = $true
$request.Accept = "application/json;odata=verbose"

$response = $request.GetResponse()
$reader = New-Object System.IO.StreamReader $response.GetResponseStream()
$data = $reader.ReadToEnd()

$result = ConvertFrom-Json -InputObject $data
$result.d.results | select Id, Name, ProjectSiteUrl

If you test the URL http://YourProjectServer/PWA/_api/ProjectServer/Projects?$select=Id,Name,ProjectSiteUrl from the browser, you see, that all of these three properties are returned. However, if you run the above script from console, you find, that the ProjectSiteUrl column is empty for all of the projects.

However, if you use the ProjectData OData endpoint instead of the ProjectServer endpoint and select the corresponding properties, all of the properties will be returned in the script output:

$url = "http://YourProjectServer/PWA/_api/ProjectData/Projects?$select=ProjectId,ProjectName,ProjectWorkspaceInternalUrl"

$request = [System.Net.WebRequest]::Create($url)
$request.UseDefaultCredentials = $true
$request.Accept = "application/json;odata=verbose"

$response = $request.GetResponse()
$reader = New-Object System.IO.StreamReader $response.GetResponseStream()
$data = $reader.ReadToEnd()

$result = ConvertFrom-Json -InputObject $data
$result.d.results | select ProjectId, ProjectName, ProjectWorkspaceInternalUrl

Note: If you have a localized version of Project Server, you can either use an OData query including the localized entity and property names, like:

http://YourProjectServer/PWA/_api/ProjectData/Projekte?$select=ProjektID,ProjektName,ProjektArbeitsbereichInterneURL

or switch back to the English version by injecting [en-US] segment after the ProjectData endpoint:

http://YourProjectServer/PWA/_api/ProjectData/[en-US]/Projects?$select=ProjectId,ProjectName,ProjectWorkspaceInternalUrl

Of course, in the first case you should change the property names used in the select statement in the PowerShell script to match the names used in the REST query.

Let’s see another example. In the next case, our goal is to create a .csv file, that one can easily import to Excel, including the name and the RBS (resource breakdown structure) of the resources.

  1. $baseUrl = "http://YourProjectServer/PWA/_api/ProjectServer"
  2. $rbsUrl = $baseUrl + "/LookupTables?$filter=Name eq 'RBS'&$expand=Entries&$select=Entries/InternalName,Entries/Value"
  3. $resourceUrl = $baseUrl + "/EnterpriseResources?$select=Name,Custom_000039b78bbe4ceb82c4fa8c0c400284"
  4.  
  5. #rbs
  6. $rbsRequest = [System.Net.WebRequest]::Create($rbsUrl)
  7. $rbsRequest.UseDefaultCredentials = $true
  8. $rbsRequest.Accept = "application/json;odata=verbose"
  9.  
  10. $rbsResponse = $rbsRequest.GetResponse()
  11. $rbsReader = New-Object System.IO.StreamReader $rbsResponse.GetResponseStream()
  12. $rbsData = $rbsReader.ReadToEnd()
  13.  
  14. $rbsResult = ConvertFrom-Json -InputObject $rbsData
  15. $rsbEntries = $rbsResult.d.results.Entries.results
  16.  
  17. #resources
  18. $resRequest = [System.Net.WebRequest]::Create($resourceUrl)
  19. $resRequest.UseDefaultCredentials = $true
  20. $resRequest.Accept = "application/json;odata=verbose"
  21.  
  22. $resResponse = $resRequest.GetResponse()
  23. $resReader = New-Object System.IO.StreamReader $resResponse.GetResponseStream()
  24. $resData = $resReader.ReadToEnd()
  25.  
  26. $resResult = ConvertFrom-Json -InputObject $resData
  27.  
  28. $resResult.d.results | % {
  29. select -Input $_ -Prop `
  30.     @{ Name='Name'; Expression={$_.Name} },
  31.     @{ Name='RBS'; Expression={$rbs = $_.Custom_x005f_000039b78bbe4ceb82c4fa8c0c400284; If ($rbs.results -is [System.Object[]]) {$rsbEntries | ? { $_.InternalName -eq $rbs.results[0] } | % { $_.Value } } Else {''} } }
  32.     } | Export-Csv -Path ResourceRBS.csv -Delimiter ";" -Encoding UTF8 -NoTypeInformation

Note: The –NoTypeInformation switch of Export-Csv ensures that no type information would be emitted as header into the .csv file. The -Delimiter ";" and the -Encoding UTF8 settings help to produce a .csv file in a format and encoding that can be opened in Excel simply by clicking on the file.

The symptoms are similar to the first case: only the resource name is included in the file, but the RBS value is not.

I’ve included this last code sample in a code block not just because it is a bit longer than the former ones, but because I hope that the highlighting helps you to understand the basic problem with our scripts, even if you did not catch it at the first example. Have you recognized that the query options ($filter, $select and $expand) have a different color than the rest of the query text? Actually, they have the very same color as the variable names (like $baseUrl or $resRequest) in the code. That is because they are really handled as variable names. Since we used double quotes in the code to define the string literals for the URLs, PowerShell parses the strings and replaces possible variable names with the values of those variables. As we didn’t define variables like $filter, $select or $expand, they are simply removed from the string (replaced by an empty string). See this short explanation for details.
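
You can verify this behavior in a few seconds in a PowerShell console. The following lines are just a minimal demonstration of the parsing rules (the string content itself is arbitrary):

# $select is not defined as a variable, so inside double quotes it is expanded to an empty string
"Items?$select=Id,Title"    # output: Items?=Id,Title
# inside single quotes no expansion takes place
'Items?$select=Id,Title'    # output: Items?$select=Id,Title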

Instead of double quotation marks we should use single quotation marks to leave the query options intact, but in this case we should escape the single quotes (using two single quotation marks) used in the REST query itself.

For example, instead of:

$url = "http://YourProjectServer/PWA/_api/ProjectServer/Projects?$select=Id,Name,ProjectSiteUrl"

we should simply use:

$url = 'http://YourProjectServer/PWA/_api/ProjectServer/Projects?$select=Id,Name,ProjectSiteUrl'

and instead of:

$rbsUrl = $baseUrl + "/LookupTables?$filter=Name eq 'RBS'&$expand=Entries&$select=Entries/InternalName,Entries/Value"

we should use:

$rbsUrl = $baseUrl + '/LookupTables?$filter=Name eq ''RBS''&$expand=Entries&$select=Entries/InternalName,Entries/Value'

Note, that the value RBS is enclosed by two single quotation marks on both sides, and not by a double quotation mark!

Alternatively, you can use double quotation marks to define the strings for the REST queries (for example, if you still would like PowerShell to parse them for some reason), but in this case you should escape the dollar sign in the query options to prevent them from being parsed out of the string.

For example, instead of:

$url = "http://YourProjectServer/PWA/_api/ProjectServer/Projects?$select=Id,Name,ProjectSiteUrl"

we should simply use:

$url = "http://YourProjectServer/PWA/_api/ProjectServer/Projects?`$select=Id,Name,ProjectSiteUrl"

and instead of:

$rbsUrl = $baseUrl + "/LookupTables?$filter=Name eq 'RBS'&$expand=Entries&$select=Entries/InternalName,Entries/Value"

we should use:

$rbsUrl = $baseUrl + "/LookupTables?`$filter=Name eq 'RBS'&`$expand=Entries&`$select=Entries/InternalName,Entries/Value"

See this description for more details about PowerShell string parsing and escaping methods.

If you compare our first two examples (the one with the ProjectServer endpoint and the other one with the ProjectData endpoint), the results are different because in the first case the ProjectSiteUrl property is not part of the standard set of properties returned by default for projects via the ProjectServer endpoint, while ProjectData returns all properties, including ProjectWorkspaceInternalUrl, even if it is not specified in a $select query option.

In the third case, our query should have returned the entries of the RBS lookup table, but since the query options got lost, it simply returned an overview of all lookup tables.



How to Change the Service Account for the Workflow Manager


A few weeks ago we made a mistake when installing Workflow Manager in a new environment: we chose the wrong account as the service account for Workflow Manager.

As a first try, we simply changed the identity of the application pool assigned to the Workflow Manager (called WorkflowMgmtPool) in IIS and restarted the pool, but after the change we had an error when accessing the workflow related pages in SharePoint:

Application error when access /_layouts/15/Workflow.aspx, Error=The remote server returned an error: (500) Internal Server Error.   at Microsoft.Workflow.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)     at Microsoft.Workflow.Client.HttpGetResponseAsyncResult`1.End(IAsyncResult result)     at Microsoft.Workflow.Client.ClientHelpers.SendRequest[T](HttpWebRequest request, T content)    9d19d89d-48f7-c052-732f-a59123539aa3
System.Net.WebException: The remote server returned an error: (500) Internal Server Error.    at Microsoft.Workflow.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)     at Microsoft.Workflow.Client.HttpGetResponseAsyncResult`1.End(IAsyncResult result)     at Microsoft.Workflow.Client.ClientHelpers.SendRequest[T](HttpWebRequest request, T content)    9d19d89d-48f7-c052-732f-a59123539aa3

In the Workflow Manager event logs (Event Viewer/Applications and Services Logs/Microsoft-Workflow/Operational) we found this error message:

Error processing management request. Method: GET, RequestUri: https://YourSharePoint:12290/YourScope, Error: System.Security.Cryptography.CryptographicException: Keyset does not exist

   at System.Security.Cryptography.Utils.CreateProvHandle(CspParameters parameters, Boolean randomKeyContainer)
   at System.Security.Cryptography.Utils.GetKeyPairHelper(CspAlgorithmType keyType, CspParameters parameters, Boolean randomKeyContainer, Int32 dwKeySize, SafeProvHandle& safeProvHandle, SafeKeyHandle& safeKeyHandle)
   at System.Security.Cryptography.RSACryptoServiceProvider.GetKeyPair()
   at System.Security.Cryptography.X509Certificates.X509Certificate2.get_PrivateKey()
   at Microsoft.Workflow.Common.EncryptionHelper.DecryptStringWithCertificate(X509Certificate2 encryptionCertificate, String encryptedText)
   at Microsoft.Workflow.Management.WorkflowEncryptionSettings.InitializeInternal()
   at Microsoft.Workflow.Management.WorkflowServiceConfiguration.get_EncryptionSettings()
   at Microsoft.Workflow.Management.WorkflowServiceConfiguration.GetResourceManagementConnectionStringFromConfig()
   at Microsoft.Workflow.Management.WorkflowServiceConfiguration.get_ConfigProvider()
   at Microsoft.Workflow.Management.WorkflowServiceConfiguration.GetWorkflowServiceConfiguration()
   at Microsoft.Workflow.Gateway.HttpConfigurationInitializer.CreateServiceContext(String nodeId, NamespaceSender namespaceSender)
   at Microsoft.Workflow.Gateway.HttpConfigurationInitializer.EnsureInitialized(String nodeId, NamespaceSender namespaceSender)
   at Microsoft.Workflow.Gateway.HttpConfigurationInitializer.Initialize(HttpConfiguration config, String nodeId, NamespaceSender namespaceSender)
   at Microsoft.Workflow.Gateway.Global.EnsureConfigInitialized(String nodeId)
   at Microsoft.Workflow.Gateway.Global.Application_BeginRequest(Object sender, EventArgs e)

image

It seemed the account had no permission to access a certificate or something similar, so we changed the application pool identity back and searched for a better solution.

We found a few useful resources on the web, discussing how the account change should be performed (see here, here and here).

So we ran this script from the Workflow Manager PowerShell console on our single-node workflow farm:

Stop-SBFarm
Set-SBFarm -RunAsAccount <YourDomain\UserName>
$RunAsPassword = ConvertTo-SecureString -AsPlainText -Force '<Password>'
Update-SBHost -RunAsPassword $RunAsPassword
Start-SBFarm

As the result of the script above, the identity of the following Windows services has been changed to the account specified in the script:

  • Service Bus Gateway
  • Service Bus Message Broker
  • Service Bus Resource Provider
  • Service Bus VSS
  • Windows Fabric Host Service

The identity of the Workflow Manager Backend service was not changed, nor was the application pool identity of the Workflow Manager in IIS.
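
If you want to verify yourself which services run under which identity after such a change, you can query the run-as accounts from PowerShell as well. The display-name filters below are just assumptions based on a default installation, so adjust them to your environment:

Get-WmiObject Win32_Service |
  ? { $_.DisplayName -like 'Service Bus*' -or $_.DisplayName -like '*Workflow Manager*' -or $_.DisplayName -like '*Fabric*' } |
  select DisplayName, StartName, State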

The script granted the following database roles in the Service Bus databases:

  • Workflow_SB_Container (role granted: ServiceBus.Operators)
  • Workflow_SB_Gateway (roles granted: SBProjectStore.Operators, ServiceBus.Operators)
  • Workflow_SB_Management (role granted: Store.Operators)

There was however no permission granted on the following workflow-related databases:

  • Workflow_Farm
  • Workflow_Instance
  • Workflow_Resource

As the next step of the identity change (following the suggestion from one of the forum threads referenced above), we manually changed the account of the Workflow Manager Backend service and restarted it. This caused however further problems; granting permissions for the account on the three WF databases mentioned before (WFServiceOperators role, or db_owner) did not help either.

The symptoms we faced were:

  • We were able to start workflows (at least, there was no error message at this point) from the SharePoint UI, but nothing happened, and we could not stop the workflows from the UI.
  • At the web-endpoint of the Workflow Manager (https://YourSharePoint:12290/YourScope) we had this error message:

<Error xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <Code>UnexpectedError</Code>
  <Message>The data or messaging layer is unavailable. Please retry after 300 seconds.</Message> 
</Error>

In the Event Viewer we had a lot of errors like:

The Workflow Manager cannot contact Service Bus service after retrying for ’28’ minutes. Please verify if the Service Bus service is up and running. The Workflow Manager failed at location ‘ServiceBusNamespaceListener.GetSessionAndStateWithRetryAsyncResult.HandleException’ due to exception: System.UnauthorizedAccessException: 40100: Unauthorized.TrackingId:b006a351-d6bc-4b4e-a178-a4a1d689fee9_GYourSharePoint_GYourSharePoint,TimeStamp:27.02.2017 11:04:31 —> System.ServiceModel.FaultException: 40100: Unauthorized.TrackingId:b006a351-d6bc-4b4e-a178-a4a1d689fee9_GYourSharePoint_GYourSharePoint,TimeStamp:27.02.2017 11:04:31

image

and warnings like:

Service Bus exception swallowed at location ServiceBusNamespaceListener.GetSessionAndStateWithRetryAsyncResult.HandleException. System.UnauthorizedAccessException: 40100: Unauthorized.TrackingId:c0f820e5-bc7f-4186-8d8f-41899f014c84_GYourSharePoint_GYourSharePoint,TimeStamp:27.02.2017 11:05:19 —> System.ServiceModel.FaultException: 40100: Unauthorized.TrackingId:c0f820e5-bc7f-4186-8d8f-41899f014c84_GYourSharePoint_GYourSharePoint,TimeStamp:27.02.2017 11:05:19

image

The few discussions related to similar problems we found on the web (like this one or this one) did not help too much, so we decided to set back the original account of the Workflow Manager Backend service and restarted it again. Our workflows are functioning now, but I am really keen to know how we could change the identity of the Workflow Manager Backend service as well.


Microsoft.Workflow.Client.InvalidRequestException: Failed to query the OAuth S2S metadata endpoint – The remote server returned an error: (400) Bad Request


Recently we installed a new Workflow Manager farm (a single-server one) on the front-end server of one of our SharePoint farms.

I wanted to register the Workflow Manager for a web application in the SharePoint farm via the PowerShell cmdlet:

Register-SPWorkflowService -SPSite https://YourSharePointSite -WorkflowHostUri https://YourWorkflowManagerServer:12290 -ScopeName YourScope -Force

But I received an error like this one:

Register-SPWorkflowService : Failed to query the OAuth S2S metadata endpoint
at URI 'https://YourSharePointSite/_layouts/15/metadata/json/1'.
Error details: ‘An error occurred while sending the request’. HTTP headers received from the server – ActivityId:
d10c4cbb-bde4-4040-b09f-1ace1491dc87. NodeId: YourWFNode. Scope: /YourScope.
Client ActivityId : b89c2ff9-8560-458e-9ea2-31ec6c8fde36.
At line:1 char:1
+ Register-SPWorkflowService -SPSite https://YourSharePointSite/ -W …

In the Event Viewer (Application and Services Logs / Microsoft-Workflow / Operational) we had this error:

image

Failed to query the remote endpoint for the S2S metadata document. Details: System.Net.Http.HttpRequestException: An error occurred while sending the request. —> System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. —> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
   at System.Net.TlsStream.EndWrite(IAsyncResult asyncResult)
   at System.Net.ConnectStream.WriteHeadersCallback(IAsyncResult ar)
   — End of inner exception stack trace —
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)
   — End of inner exception stack trace —

In the ULS logs we had this error message:

Microsoft.Workflow.Client.InvalidRequestException: Failed to query the OAuth S2S metadata endpoint at URI 'https://YourSharePointSite/_layouts/15/metadata/json/1'. Error details: 'An error occurred while sending the request.'. HTTP headers received from the server – ActivityId: d10c4cbb-bde4-4040-b09f-1ace1491dc87. NodeId: YourWFNode. Scope: /YourScope. Client ActivityId : b89c2ff9-8560-458e-9ea2-31ec6c8fde36. —> System.Net.WebException: The remote server returned an error: (400) Bad Request.     at Microsoft.Workflow.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)     at Microsoft.Workflow.Client.HttpGetResponseAsyncResult`1.End(IAsyncResult result)     at Microsoft.Workflow.Client.ClientHelpers.SendRequest[T](HttpWebRequest request, T content)     — End of inner exceptio…

The SharePoint site https://YourSharePointSite and the Workflow Manager endpoint URL https://YourWorkflowManagerServer:12290 were both available without any issue (e.g. no problem with the certificate either), on both nodes (front-end and application servers) of the SharePoint farm, as well as from client computers.

The articles I found about the issue (like this one or this one) explained the problem by saying that the SharePoint endpoint URL (in our case 'https://YourSharePointSite/_layouts/15/metadata/json/1') is not accessible, probably because of a name resolution issue. In our case that was definitely not the issue, because if I switched the SharePoint URL from HTTPS to HTTP (by changing the Alternate Access Mappings for the site plus the bindings in IIS Manager), I was able to run the registration script successfully:

Register-SPWorkflowService -SPSite http://YourSharePointSite -WorkflowHostUri https://YourWorkflowManagerServer:12290 -ScopeName YourScope -Force -AllowOAuthHttp

After switching back the URL to HTTPS we had the problem again.

My next assumption was that the service account for the Workflow Manager did not have the root certificate of the SSL certificate under the Trusted Root Certification Authorities.

So I’ve started the Microsoft Management Console (mmc.exe) and added the Certificates snap-in for the service account of the Workflow Manager Backend service:

image

image

image

I found that the list of Trusted Root Certification Authorities contained the root certificate of the SSL certificate, so that could not be the problem either.

As the next step, I logged in on the Workflow Manager server (that is, the front-end server of the SharePoint farm) using the Workflow Manager service account to test the connection to the SharePoint site interactively via Internet Explorer. This time I was faced with the problem that the SharePoint site https://YourSharePointSite produced a certificate warning. As I opened the certificate of the site in Internet Explorer, I saw only the very last entry in the certificate chain (that is, the entry for YourSharePointSite), but none of the certificates above it. I also found that the account was configured not to use a proxy server. I enabled the proxy connection, then restarted Internet Explorer, and voilà: no more issues with the certificate. I was able to register the Workflow Manager as well. I don't know exactly what the problem was, but I assume the certificate revocation list was not available without the proxy, and that prevented the certificate validation necessary for the registration of the Workflow Manager.
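
If you run into similar certificate validation issues, it might be worth reproducing the validation outside of Workflow Manager first. The few lines below are only a sketch (run them on the Workflow Manager server, ideally in a console started as the Workflow Manager service account):

$wc = New-Object System.Net.WebClient
$wc.UseDefaultCredentials = $true
try {
  $wc.DownloadString('https://YourSharePointSite/_layouts/15/metadata/json/1') | Out-Null
  'SSL/TLS validation and authentication OK'
} catch {
  # e.g. 'The remote certificate is invalid according to the validation procedure.'
  $_.Exception.GetBaseException().Message
}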


Generating Pseudo GUIDs for Your Project Server Entities


As you might know, since version 2013, Project Server utilizes pseudo-GUIDs to improve performance. These have the format of a “classical” GUID, but are actually generated sequentially. As Microsoft states in this TechNet article:

"We handle GUIDs a little better in Project Server 2013 – and in many places they are sequential GUIDs which cause less index fragmentation"

This topic is described quite well in the Project Conference 2014 presentation Project Worst Practice – Learning from other peoples mistakes by Brian Smith. See the video recording between 6:08-13:54, or slides 10-14.

One of the main components of the pseudo-GUID generation is the NewSequentialUid method of the Microsoft.Office.Project.Server.Library.PSUtility class:

public static Guid NewSequentialUid() 
{ 
  Guid guid; 
  if (NativeMethods.UuidCreateSequential(out guid) != 0) 
    return Guid.NewGuid(); 
  byte[] b = guid.ToByteArray(); 
  Array.Reverse((Array) b, 0, 4); 
  Array.Reverse((Array) b, 4, 2); 
  Array.Reverse((Array) b, 6, 2); 
  return new Guid(b); 
}

So if you want to use the same kind of pseudo-GUIDs for your own custom entities that you create from code, you can get the IDs by invoking this method (for example, via PowerShell). The code sample below illustrates how to get a single ID, or a batch of IDs (in this case, five of them):

# load the necessary assembly
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Project.Shared")
# generate a single sequential ID
[Microsoft.Office.Project.Server.Library.PSUtility]::NewSequentialUid()
# or generate a range of sequential IDs, in this case, five of them
(1..5) | % { [Microsoft.Office.Project.Server.Library.PSUtility]::NewSequentialUid().Guid }


How to Create a Simple “Printer Friendly” Display Form


Our users needed a simple way to print items in SharePoint, meaning only the item properties, without any ribbon or navigation elements.

Assuming you have a list ‘YourCustomList’ available at the URL http://YourSharePoint/Lists/YourCustomList, the standard display form of a list item (in this case the one with ID 1) would be:

http://YourSharePoint/Lists/YourCustomList/DispForm.aspx?ID=1

This page contains however the site navigation elements and the ribbon as well. Appending the query string parameter IsDlg=1 (like http://YourSharePoint/Lists/YourCustomList/DispForm.aspx?ID=1&IsDlg=1) helps to remove the navigation parts, but the ribbon remains.

Our solution to remove the ribbon was to add the very simple JavaScript block below via a Script Editor Web Part to the display form page (DispForm.aspx). I suggest inserting the Script Editor Web Part after the existing List Form Web Part on the page.

// http://stackoverflow.com/questions/901115/how-can-i-get-query-string-values-in-javascript
function getParameterByName(name, url) {
    if (!url) url = window.location.href;
    name = name.replace(/[\[\]]/g, "\\$&");
    var regex = new RegExp("[?&]" + name + "(=([^&#]*)|&|#|$)"),
        results = regex.exec(url);
    if (!results) return null;
    if (!results[2]) return '';
    return decodeURIComponent(results[2].replace(/\+/g, " "));
}

if (getParameterByName('IsPrint') == '1') {
  var globalNavBox = document.getElementById('globalNavBox');
  if (globalNavBox) {
    globalNavBox.style.display = 'none';
  }
}

Note: You can switch the display form to page edit mode via the ToolPaneView=2 query string parameter (see more useful hints here), for example:

http://YourSharePoint/Lists/YourCustomList/DispForm.aspx?ToolPaneView=2

The main part of the solution, the getParameterByName method, was borrowed from this forum thread. It helps to get a query string parameter value by its name. Using this method we check whether there is a parameter IsPrint, and if it is there with a value of 1, then we make the globalNavBox HTML element, which is actually the placeholder for the ribbon, invisible.

It means that if we call the display form via the URL http://YourSharePoint/Lists/YourCustomList/DispForm.aspx?ID=1&IsDlg=1&IsPrint=1, then there is no ribbon or other navigation element on the page. Using this URL format you can even add a custom action, for example a new button on the ribbon or an edit control block (ECB) menu item (see the example later in the post), or refer to the print form directly from a document or from an e-mail.

In the above case, the users can print the page by right-clicking with the mouse and selecting Print… from the pop-up menu. Alternatively, we could inject a Print button into the form itself. This technique is demonstrated below.

In this case we use jQuery, and our JavaScript code is a bit more complex, so we store it in a separate file in the Site Assets library of the site, and reference only the file in the Script Editor Web Part:

<script type="text/javascript" src="../../SiteAssets/js/printForm.js"></script>

Our JavaScript code (printForm.js) would be in this case:

// http://stackoverflow.com/questions/901115/how-can-i-get-query-string-values-in-javascript
function getParameterByName(name, url) {
    if (!url) url = window.location.href;
    name = name.replace(/[\[\]]/g, "\\$&");
    var regex = new RegExp("[?&]" + name + "(=([^&#]*)|&|#|$)"),
        results = regex.exec(url);
    if (!results) return null;
    if (!results[2]) return '';
    return decodeURIComponent(results[2].replace(/\+/g, " "));
}

// https://davidwalsh.name/add-rules-stylesheets
var sheet = (function() {
    // Create the <style> tag
    var style = document.createElement("style");

    // Add a media (and/or media query) here if you'd like!
    style.setAttribute("media", "print")

    // WebKit hack :(
    style.appendChild(document.createTextNode(""));

    // Add the <style> element to the page
    document.head.appendChild(style);

    return style.sheet;
})();

$(document).ready(function() {
  if (getParameterByName('IsPrint') == '1') {
    sheet.insertRule("#globalNavBox { display:none; }", 0);
    sheet.insertRule("input { display:none; }", 0);

    $('input[value="Close"]').closest('tr').closest('tr').append('<td class="ms-toolbar" nowrap="nowrap"><table width="100%" cellspacing="0" cellpadding="0"><tbody><tr><td width="100%" align="right" nowrap="nowrap"><input class="ms-ButtonHeightWidth" accesskey="P" onclick="window.print();return false;" type="button" value="Print"></input></td></tr></tbody></table></td><td class="ms-separator">&nbsp;</td>');
  }
});

In this case we inject a Print button dynamically and don’t hide the ribbon; instead, we use the technique illustrated here to add CSS styles that hide the UI elements (the ribbon and the buttons) only in the printed version, via the media attribute of the style sheet.

Note: The above code is for a SharePoint site with an English UI. Since the value of the Close button is language dependent, you should change the code if you have a SharePoint site with other culture settings. For example, in a German version the jQuery selector would be:

input[value="Schließen"]

In this case you have to save the script using Unicode encoding instead of ANSI to prevent the loss of the special character ‘ß’.

Finally, I show you how to create a shortcut to the form in the ECB menu using SharePoint Designer (SPD).

Select your list in SPD, and from the Custom Actions menu select the List Item Menu.

image

Set the fields as illustrated below:

image

The full value of the Navigate to URL field:

javascript:OpenPopUpPageWithTitle(ctx.displayFormUrl + '&ID={ItemId}&IsDlg=1&IsPrint=1', RefreshOnDialogClose, 600, 400, 'Print Item')

We use the OpenPopUpPageWithTitle method and a custom-made URL to show the printer friendly display form with the necessary query string parameters. See this article for more details on the OpenPopUpPageWithTitle method.

After saving the custom action, you can test it in your list:

image

This is the customized form having the extra Print button on it:

image

And that is the outcome of the print:

image


Working with the REST / OData Interface from PowerShell


If you follow my blog you might already know that I am not a big fan of the REST / OData interface. I prefer using the client object model. However there are cases, when REST provides a simple (or even the only available) solution.

For example, we are working a lot with PowerShell. If you are working with SharePoint on the client side at a customer, and you are not allowed to install / download / copy the assemblies for the managed client object model (CSOM), you have a problem.

Some possible reasons (you should know, that the SharePoint Server 2013 Client Components SDK is available to download as an .msi, or you can get the assemblies directly from an on-premise SharePoint installation):

  • You might have no internet access, so you cannot download anything from the web.
  • If you happen to have internet access, you are typically not allowed to install such things without administrator permissions on the PC. It’s quite a rare case that you or the business user you are working with has this permission.
  • You have no direct access on the SharePoint server, so you cannot copy the assemblies from it.
  • You are not allowed to use your own memory stick (or other storage device) to copy the assemblies from it.
  • Even if there is no technical barrier, company policies might still prohibit you using external software components like the CSOM assemblies.

In this case, using the REST interface is a reasonable choice. You can have a quick overview of the REST-based list operations here.

The main questions I try to answer in this post:

  • Which object should I use to send the request?
  • How to authenticate my request?
  • How to build up the payload for the request?

First of all, I suggest you to read this post to learn some possible pitfalls when working with REST URLs from PowerShell and how to avoid them with escaping.

Reading data with the SharePoint REST interface

Reading data with a GET request

Sending a GET request to a REST-based service from PowerShell is not really a challenge, you might think, and you are right, it is really straightforward in most of the cases. But take the following example, listing the Id and Title fields of the items in a list:

$listTitle = "YourList"
$url = "http://YourSharePoint/_api/Web/Lists/GetByTitle('$listTitle')/Items?`$select=Id,Title"

$request = [System.Net.WebRequest]::Create($url)
$request.UseDefaultCredentials = $true
$request.Accept = 'application/json;odata=verbose'

$response = $request.GetResponse()
$reader = New-Object System.IO.StreamReader $response.GetResponseStream()
# ConvertFrom-Json : Cannot convert the Json string because a dictionary converted from it contains duplicated keys 'Id' and 'ID'.
#$response = $reader.ReadToEnd()
$response = $reader.ReadToEnd() -creplace '"ID":', '"DummyId":'

$result = ConvertFrom-Json -InputObject $response
$result.d.results | select Id, Title

If you would use

$response = $reader.ReadToEnd()

instead of

$response = $reader.ReadToEnd() -creplace '"ID":', '"DummyId":'

then you get this exception when trying to convert the JSON response:

ConvertFrom-Json : Cannot convert the Json string because a dictionary converted from it contains duplicated keys ‘Id’ and ‘ID’.

The reason is that the JSON response of the server contains both the fields Id and ID. JSON is case-sensitive, but PowerShell is not, so it is an issue if you want to convert the JSON response to a PowerShell object. You can read more about it in this post, although I don’t like the solution proposed there: it really helps to avoid the error, but it uses the case-insensitive -replace operator instead of the case-sensitive -creplace, so it converts both fields into a dummy field. PowerShell seems to have no problem with the resulting duplicated properties.
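
The difference between the two operators is easy to demonstrate on a tiny, hypothetical JSON fragment:

$json = '{ "Id": 1, "ID": 1, "Title": "Test" }'
$json -replace '"ID":', '"DummyId":'   # case-insensitive: both "Id" and "ID" are renamed
$json -creplace '"ID":', '"DummyId":'  # case-sensitive: only "ID" is renamed, "Id" survives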

Instead of using a System.Net.WebRequest object, we can write a shorter version using the Invoke-RestMethod cmdlet. Note that we don’t select and display the Id property in this case to avoid complications. See my comments about that in the next section discussing the POST request.

$listTitle = "YourList"
$url = "http://YourSharePoint/_api/Web/Lists/GetByTitle('$listTitle')/Items?`$select=Title"
$headers = @{ 'Accept' = 'application/json; odata=verbose'}
$result = Invoke-RestMethod -Uri $url -Method Get -Headers $headers -UseDefaultCredentials
$result.d.results | select Title

Reading data with a POST request

There are cases when you have to use the POST method instead of GET to read data from SharePoint, for example, if you need to filter the items via a CAML query. In the following example I show you how to query the file names of all documents in a library recursively that are older than a threshold value:

$listTitle = "YourDocuments"
$offsetDays = -30

$urlBase = "http://YourSharePointSite/"
$urlAuth = $urlBase + "_api/ContextInfo"
$url = $urlBase + "_api/Web/Lists/GetByTitle('$listTitle')/GetItems?`$select=FileLeafRef"

$viewXml = "<View Scope='Recursive'><ViewFields><FieldRef Name='Created'/><FieldRef Name='FileLeafRef'/></ViewFields><Query><Where><Lt><FieldRef Name='Created' /><Value Type='DateTime'><Today OffsetDays='$offsetDays' /></Value></Lt></Where></Query></View>"

$queryPayload = @{ 
                   'query' = @{
                          '__metadata' = @{ 'type' = 'SP.CamlQuery' };                      
                          'ViewXml' = $viewXml
                   }
                 } | ConvertTo-Json

# authentication
$auth = Invoke-RestMethod -Uri $urlAuth -Method Post -UseDefaultCredentials
$digestValue = $auth.GetContextWebInformation.FormDigestValue

# the actual request
$headers = @{ 'X-RequestDigest' = $digestValue; 'Accept' = 'application/json; odata=verbose' }
$result = Invoke-RestMethod -Uri $url -Method Post -Body $queryPayload -ContentType 'application/json; odata=verbose' -Headers $headers -UseDefaultCredentials

# displaying results
$result.d.results | select FileLeafRef

Just for the sake of comparison, I include the same payload in JavaScript format:

var queryPayload = {
                     'query' : {
                         '__metadata' : { 'type' : 'SP.CamlQuery' },
                         'ViewXml' : viewXml
                     }
                   };

As you can see, these are the most relevant differences in the format we need in PowerShell:

  • We use an equal sign ( = ) instead of  ( : ) to separate the name and its value.
  • We use a semicolon ( ; ) instead of the comma ( , ) to separate object fields.
  • We need a leading at sign ( @ ) before the curly braces ( { ).

The Invoke-RestMethod cmdlet tries to automatically convert the response to the corresponding object based on the content type of the response. If it is an XML response (see the authentication part above), then the result will be an XmlDocument. If it is a JSON response, then the result will be a PSCustomObject representing the structure of the response. However, if the response cannot be converted, it remains a single String.

For example, if we don’t limit the fields we need in response via the $select query option:

$url = $urlBase + "_api/Web/Lists/GetByTitle('$listTitle')/GetItems"

then the response includes the fields Id and ID again. In this case we should remove one of these fields using the technique illustrated above with the simple GET request, before we try to convert the response via the ConvertFrom-Json cmdlet.
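
A quick way to see which case you hit is to check the type of the result. If it remained a String, you can apply the same -creplace trick before converting it manually; the few lines below are just a sketch of this check:

$result.GetType().FullName   # XmlDocument, PSCustomObject or String
if ($result -is [string]) {
    $result = ConvertFrom-Json -InputObject ($result -creplace '"ID":', '"DummyId":')
}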

Note: If you still use PowerShell v3.0 you get this error message when you invoke Invoke-RestMethod setting the Accept header:

Invoke-RestMethod : The ‘Accept’ header must be modified using the appropriate property or method.
Parameter name: name

So if it is possible, you should consider upgrading to PowerShell v4.0. Otherwise, you can use the workaround suggested in this forum thread, where you can read more about the issue as well.

If you are not sure, which version you have, you can use $PSVersionTable.PSVersion to query the version number, or another option as suggested here.

Creating objects

In this case we send a request with the POST method to the server. The following code snippet shows, how you can create a new custom list:

$listTitle = "YourList"

$urlBase = "http://YourSharePoint/"
$urlAuth = $urlBase +"_api/ContextInfo"
$url = $urlBase + "_api/Web/Lists"

$queryPayload = @{ 
                    '__metadata' = @{ 'type' = 'SP.List' }; 'AllowContentTypes' = $true; 'BaseTemplate' = 100;
                    'ContentTypesEnabled' = $true; 'Description' = 'Your list description'; 'Title' = $listTitle                      
    } | ConvertTo-Json

$auth = Invoke-RestMethod -Uri $urlAuth -Method Post -UseDefaultCredentials
$digestValue = $auth.GetContextWebInformation.FormDigestValue

$headers = @{ 'X-RequestDigest' = $digestValue; 'Accept' = 'application/json; odata=verbose' }

$result = Invoke-RestMethod -Uri $url -Method Post -Body $queryPayload -ContentType 'application/json; odata=verbose' -Headers $headers -UseDefaultCredentials

The response we receive in the $result variable contains the properties of the list we just created. For example, the Id (GUID) of the list is available as $result.d.Id.

Updating objects

In this case we send a request with the POST method to the server and set the X-HTTP-Method header to MERGE. The following code snippet shows, how to change the title of the list we created in the previous step:

$listTitle = "YourList"

$urlBase = "http://YourSharePoint/"
$urlAuth = $urlBase +"_api/ContextInfo"
$url = $urlBase + "_api/Web/Lists/GetByTitle('$listTitle')"

$queryPayload = @{ 
                    '__metadata' = @{ 'type' = 'SP.List' }; 'Title' = 'YourListNewTitle'                      
    } | ConvertTo-Json

$auth = Invoke-RestMethod -Uri $urlAuth -Method Post -UseDefaultCredentials
$digestValue = $auth.GetContextWebInformation.FormDigestValue

$headers = @{ 'X-RequestDigest' = $digestValue; 'Accept' = 'application/json; odata=verbose'; 'IF-MATCH' = '*'; 'X-HTTP-Method' = 'MERGE' }

$result = Invoke-RestMethod -Uri $url -Method Post -Body $queryPayload -ContentType 'application/json; odata=verbose' -Headers $headers -UseDefaultCredentials

Deleting objects

In this case we send a request with the POST method to the server and set the X-HTTP-Method header to DELETE. The following code snippet shows, how you can delete a list item:

$listTitle = "YourList"

$urlBase = "http://YourSharePoint/"
$urlAuth = $urlBase +"_api/ContextInfo"
$url = $urlBase + "_api/Web/Lists/GetByTitle('$listTitle')/Items(1)"

# authentication
$auth = Invoke-RestMethod -Uri $urlAuth -Method Post -UseDefaultCredentials
$digestValue = $auth.GetContextWebInformation.FormDigestValue

# the actual request
$headers = @{ 'X-RequestDigest' = $digestValue; 'IF-MATCH' = '*'; 'X-HTTP-Method' = 'DELETE' }
$result = Invoke-RestMethod -Uri $url -Method Post -Headers $headers -UseDefaultCredentials

Note: Although the documentation states that “in the case of recyclable objects, such as lists, files, and list items, this results in a Recycle operation”, based on my tests this is not the case, as the objects got really deleted.
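
If you explicitly want the item to end up in the Recycle Bin, you may call the Recycle method of the list item instead of relying on the DELETE verb. The snippet below is only a sketch that reuses the variables of the previous example, and it assumes that the recycle() REST endpoint is available in your SharePoint version, so test it before relying on it:

# recycle the list item explicitly instead of deleting it (assumed endpoint, verify in your environment)
$recycleUrl = $urlBase + "_api/Web/Lists/GetByTitle('$listTitle')/Items(1)/Recycle()"
$headers = @{ 'X-RequestDigest' = $digestValue; 'Accept' = 'application/json; odata=verbose' }
$result = Invoke-RestMethod -Uri $recycleUrl -Method Post -Headers $headers -UseDefaultCredentials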

Final Note: This one applies to all of the operations discussed in the post. If the SharePoint site you are working with is available via HTTPS and there is an issue with the certificate, you can turn off the certificate validation, although it is not recommended in a production environment. You should include this line in your code before making any web requests:

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }


Disabling SharePoint Alerts Temporarily for a Specific SharePoint List


Recently we extended a SharePoint list in our test environment with a few new fields. Users complained that they received immediate notifications due to their existing subscriptions on the list. To avoid the same situation in the live system, we decided to temporarily deactivate the alerts for the time of the list field extension. I found a solution for that in this thread, implemented in C#. Although I like C#, for administrative tasks like this one I prefer using PowerShell, so I transformed the code into a few-line script:

$url = 'http://YourSharePoint/WebSite'
$listTitle = 'Title of your list'
$targetStatus = [Microsoft.SharePoint.SPAlertStatus]::Off # or [Microsoft.SharePoint.SPAlertStatus]::On

$web = Get-SPWeb $url
$list = $web.Lists[$listTitle]

# to query the current status of the alerts only:
# $web.Alerts | ? { $_.List.ID -eq $list.ID } | % { $_.Status }

$web.Alerts | ? { $_.List.ID -eq $list.ID } | % {
  $_.Status = $targetStatus
  $_.Update()
}

After implementing the changes, you can reactivate the alerts (in this case you should use the value [Microsoft.SharePoint.SPAlertStatus]::On in $targetStatus); however, you should wait a few minutes, as the immediate alerts are sent every 5 minutes by default (see the screenshot below). If you turn the alerts on before the next run of the job, your previous change to deactivate the notifications has no effect and the alerts will be sent to the users.

image

By letting the Immediate Alerts job have a run after you made the changes in the list, the notification events waiting in the event queue will be purged and not included in the upcoming immediate alerts. They will however be included in the daily and weekly summaries, but that was not an issue in our case.

If you don’t want to wait for the next scheduled run, you can start the job from the UI (see the Run Now button above), or via a script like this:

Get-SPTimerJob | ? { $_.Name -eq "job-immediate-alerts"} | % { Start-SPTimerJob $_ }
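
If you want to double-check that the job has really run since your change, you can query its last run time as well:

Get-SPTimerJob | ? { $_.Name -eq "job-immediate-alerts" } | select Name, LastRunTime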


"The file name you specified is not valid or too long. Specify a different file name." Error When Using Redirection in IIS


Recently a user complained that, although he could create and copy files on a mapped drive on his Windows 7 machine linked to a SharePoint document library, the following error message was displayed in the Windows Explorer view of the library when he tried to rename any file:

The file name you specified is not valid or too long. Specify a different file name.

image

The error message was already known to us: it is typically the result of a special character or a space in the URL being encoded and this encoded form being used to map the drive, or the mapped path might contain a trailing slash ‘/’, see the threads here and here.

In this case there wasn’t any issue with the characters, but as we checked the mapping via the NET USE command, we noticed that the connection was listed as

\\YourServer\DocLib

although the SharePoint site was configured to use HTTPS (let’s say with the URL https://YourServer), so the connection should actually have been:

\\YourServer@SSL\DocLib

On the SharePoint server (SharePoint 2013 on Windows Server 2012 R2) we verified the configuration in Internet Information Services (IIS) Manager, and found the HTTPS binding all right.

There was however another web site with the very same binding as the SharePoint site, but instead of HTTPS it was bound to HTTP (that means http://YourServer). The sole purpose of this web site was to forward any incoming HTTP request to the SharePoint site using HTTP Redirect with the settings below (see this page for configuration details):

Redirect requests to this destination option checked: https://YourServer$S$Q

Redirect all requests to exact destination (instead of relative to destination) option checked

image

The solution was in this case as simple as disconnecting the mapped folder and reconnecting it using HTTPS:

NET USE Y: "https://YourServer/DocLib"
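
If you prefer the UNC notation, the same mapping can presumably be created with the @SSL suffix as well (a sketch, assuming the WebClient service is running on the client):

NET USE Y: \\YourServer@SSL\DocLib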

Conclusion of the story: Redirection apparently works with WebDAV as well, however renaming files fails in this case.



Getting a List of Checked-Out Projects from PowerShell via REST


We have an application running as a monthly scheduled batch process that updates enterprise fields of projects on our Project Server implementation based on values taken from various satellite systems. As a prerequisite, all affected projects should be checked-in. Projects checked-out to users won’t be updated. Of course, technically it would be possible to force check-in on those projects, but it was a business decision not to do that as we wanted to avoid data inconsistency by checking in a project that is not yet meant to be ready for that by the project manager.

Our application iterates through the projects, and if they are checked in, it checks them out, updates the values, checks the project back in and publishes it. If a project is checked out, it sends a warning to the owner that the project was not updated due to its state. Occasionally project owners doubt this warning, saying they are sure they checked in their projects, so I decided to create a sort of report, running just before the update process starts, to prove the list of projects left checked out. It is similar to what administrators already have on the Force Check-in Enterprise Objects page under PWA Settings.

image

Recently I wrote about how we can use PowerShell to create simple reports based on the data we query via the REST interface. This time I applied the very same technique to get the list of checked-out projects, including the name of the project, the check-out description, the check-out time, and the name and e-mail address of the user who checked out the project. The key was to assemble the REST query URL, including the $expand expression for the CheckedOutBy field.

$url = 'http://YourProjectServerPWA/_api/ProjectServer/Projects?$expand=CheckedOutBy&$select=Name,CheckOutDescription,CheckedOutDate,CheckedOutBy/Title,CheckedOutBy/Email&$filter=IsCheckedOut'

$request = [System.Net.WebRequest]::Create($url)
$request.UseDefaultCredentials = $true
$request.Accept = 'application/json;odata=verbose'

$response = $request.GetResponse()
$reader = New-Object System.IO.StreamReader $response.GetResponseStream()
$data = $reader.ReadToEnd()

$result = ConvertFrom-Json -InputObject $data

$result.d.results | % {
select -Input $_ -Prop `
    @{ Name='Name'; Expression={$_.Name} },
    @{ Name='User'; Expression={$_.CheckedOutBy.Title} },
    @{ Name='EMail'; Expression={$_.CheckedOutBy.Email} },
    @{ Name='Date'; Expression={[DateTime]::Parse($_.CheckedOutDate).ToString('g')} },
    @{ Name='Description'; Expression={$_.CheckOutDescription} }
    } | Export-Csv -Path CheckedOutProjects.csv -Delimiter ";" -Encoding UTF8 -NoTypeInformation

The result is a comma separated value (.csv) file that one can easily open in Excel as well.


A Quick and Dirty Solution to Create a Blank Site in SharePoint 2013


Recently one of our clients requested a change in a custom-built SharePoint application. The original version of the application was built for SharePoint (MOSS) 2007 using Visual Studio 2008, then upgraded to SharePoint 2010 using Visual Studio 2010. Later the site was upgraded to SharePoint 2013, without any change in the code of the solution.

Now we had to create a replica of the site in our development environment, including the list data. We successfully pulled a backup of the site in the production system using the Export-SPWeb cmdlet, and created a new team site in the development system as a target for the Import-SPWeb cmdlet. When executing the restore operation we got this exception:

Import-SPWeb : Cannot import site. The exported site is based on the template STS#0 but the destination site is based on the template STS#1. You can import sites only into sites that are based on same template as the exported site.


In the error message, STS#0 means the Team Site template, and STS#1 stands for the Blank Site template (see SharePoint site template IDs and their descriptions here). Jason Warren suggests in this thread using the -Force switch of the Import-SPWeb cmdlet to force overwriting the existing site, but we had the same issue even with this switch. How could we create a new web site using the Blank Site template? Solutions that rely on server-side access, like using PowerShell or unhiding the Blank Site template, are discussed in this thread (a minimal PowerShell sketch is included at the end of this post). But what could we do if we had no access to the server side, as this site template is no longer available on the web UI?

We found a simple solution using only the browser (Internet Explorer in our case) and its F12 Developer Tools.

Load the site creation page in the browser, then start the Developer Tools, and select the list of templates using the DOM Explorer.

 


Select an option in the select element, like Team Site…


… change its value attribute to STS#1


… and finally click the Create button on the web page to create the new blank site.

This solution is quick, but I consider it dirty, as users have to perform it themselves each time they need a blank site, so it is definitely not a user-friendly option. But it might be handy if you need a simple way without access to the server side.
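For completeness, if you do have server-side access, the PowerShell route mentioned above might look roughly like this; the URLs and the backup path are just placeholders, so adjust them to your environment:

New-SPWeb -Url 'http://YourDevServer/sites/Replica/BlankWeb' -Template 'STS#1'
Import-SPWeb -Identity 'http://YourDevServer/sites/Replica/BlankWeb' -Path 'C:\Backup\YourSite.cmp' -IncludeUserSecurity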


Copying Flat Lookup Table Entries via the Managed Object Model


Assume you have a flat lookup table in Project Server (I mean a lookup table having a single level, without any hierarchy between the entries), and you would like to copy its entries to another (already existing!) lookup table, which may exist on the same or on another server / PWA instance. You can do this via the managed object model of Project Server, as demonstrated by the code below:

  1. private void CopyLookupTableValues(string sourcePwa, string sourceTable, string targetPwa, string targetTable)
  2. {
  3.     LookupEntryCollection ltSourceEntries = null;
  4.     using (var pcSource = new ProjectContext(sourcePwa))
  5.     {
  6.         pcSource.Load(pcSource.LookupTables, lts => lts.Where(lt => lt.Name == sourceTable).Include(lt => lt.Masks, lt => lt.Entries.Include(e => e.FullValue, e => e.Id, e => e.SortIndex)));
  7.         pcSource.ExecuteQuery();
  8.  
  9.         if (pcSource.LookupTables.Any())
  10.         {
  11.             ltSourceEntries = pcSource.LookupTables.First().Entries;
  12.         }
  13.         else
  14.         {
  15.             Console.WriteLine("Source table '{0}' not found on PWA '{1}'", sourceTable, sourcePwa);
  16.         }
  17.     }
  18.  
  19.     if (ltSourceEntries != null)
  20.     {
  21.         using (var pcTarget = new ProjectContext(targetPwa))
  22.         {
  23.             pcTarget.Load(pcTarget.LookupTables, lts => lts.Where(lt => lt.Name == targetTable).Include(lt => lt.Name));
  24.             pcTarget.ExecuteQuery();
  25.  
  26.             // target table exist
  27.             if (pcTarget.LookupTables.Any())
  28.             {
  29.                 var ltTargetEntries = pcTarget.LookupTables.First().Entries;
  30.  
  31.                 ltSourceEntries.ToList().ForEach(lte => {
  32.                     ltTargetEntries.Add(new LookupEntryCreationInformation
  33.                         {
  34.                             // instead creating a new ID, you can copy the existing ID
  35.                             // it works only if you copy the entries to another PWA instance,
  36.                             // and only if there wasn't already an entry with the same ID
  37.                             Id = Guid.NewGuid(), // lte.Id,
  38.                             Value = new LookupEntryValue { TextValue = lte.FullValue },
  39.                             SortIndex = lte.SortIndex
  40.                         });
  41.                     // if you have a lot of entries, it might be better to execute the query for each entries
  42.                     // to avoid 'The request uses too many resources' error
  43.                     // pcTarget.LookupTables.Update();
  44.                     // pcTarget.ExecuteQuery();
  45.                 });
  46.  
  47.                 pcTarget.LookupTables.Update();
  48.                 pcTarget.ExecuteQuery();
  49.             }
  50.             else
  51.             {
  52.                 Console.WriteLine("Target table '{0}' not found on PWA '{1}'", targetTable, targetPwa);
  53.             }
  54.         }
  55.     }
  56. }

The following call copies the lookup table Divisions from one PWA instance to another one:

CopyLookupTableValues("http://YourProjectServer/PWA", "Divisions", "http://AnotherProjectServer/PWA", "Divisions");

If your lookup table does not have a lot of entries, you can probably copy them in a single batch, using a single call to the ExecuteQuery method. Otherwise, if the batch size exceeds the 2 MB limit, you might get an exception like "The request uses too many resources". In this case I suggest invoking the ExecuteQuery method for each entry, or creating an ExecuteQueryBatch method, as described in this post.

Theoretically, you could copy the entries with their original IDs, but technically it is not always an option. For example, if you copy the entries within the same PWA instance, you cannot have two entries sharing the same ID. Based on my experience, if there is already an entry with the same ID and you try to copy it into another lookup table, the entry simply won't be copied, although no exception is thrown.

The sample above works only for flat (non-hierarchical) lookup tables. You can copy hierarchical lookup tables (like RBS, the resource breakdown structure) as well, but it requires a bit more coding, as I will show you in one of my next posts.

You can find further sample code for manipulating Project Server enterprise custom fields and lookup tables via the client object model in this older post.


Copying Hierarchical Lookup Table Entries via the Managed Object Model


After describing how to copy flat lookup tables via the Project Server managed object model, this time I go a step further and show how you can copy lookup tables with hierarchy, like the RBS (resource breakdown structure) table.

The complexity of the task (compared to flat lookup tables) comes from the fact that child entries are bound to their respective parent entries not via IDs (like a property called ParentId) but simply via the FullValue property; see the properties of the LookupEntry class in the documentation. For example (assuming the separator character used in the code mask is the period "."), the parent entry of a child entry having the FullValue "Division.Subdivision.SubSubdivision" is the entry having the FullValue "Division.Subdivision". Furthermore, a parent entry must already be included in the lookup table when we insert its child items, but this seems to be fulfilled by the standard Project Server behavior, as a simple request returns the entries in the correct order (parent entries first, their child entries next).

As in the case of the flat tables, we copy the entries one by one, by adding new LookupEntryCreationInformation instances to the existing Entries property (of type LookupEntryCollection) of the target lookup table.

Just to make our life a bit harder, in contrast to the LookupEntry class, the LookupEntryCreationInformation class has a ParentId property, but no FullValue property at all. It has, however, a Value property that you should set to the own value of the child entry, without the joined values of the parent entries. You should set the ParentId property to the Id of the parent entry only if there is a parent entry; otherwise you must not set this property at all (not even to null). You can append the LookupEntryCreationInformation instance to the target LookupEntryCollection instance via its Add method.

If you would like to get the Id of the parent entry, it would be nice to first split the last tag off the FullValue of the current LookupEntry instance to get the full value of the parent entry (e.g. by splitting SubSubdivision off "Division.Subdivision.SubSubdivision" we would get "Division.Subdivision", the FullValue of the parent entry), and then to query the collection of already appended entries for the LookupEntry having that value, like this:

parentId = ltTargetEntries.First(e => e.FullValue == parentFullValue).Id;

If you try that, you get the very same exception that you receive if you try to access a property in the client object model that you have not explicitly or implicitly requested:

An unhandled exception of type ‘Microsoft.SharePoint.Client.PropertyOrFieldNotInitializedException’ occurred in Microsoft.SharePoint.Client.Runtime.dll
Additional information: The property or field ‘FullValue’ has not been initialized. It has not been requested or the request has not been executed. It may need to be explicitly requested.

You could request the entire entry collection, including the FullValue property of the entries, after each update, but that would not be very efficient. Instead, we create a dictionary object of type Dictionary<Guid, string> to store a local mapping of the Id – FullValue pairs, and use this mapping to look up the parent entries.

This method assumes that the target lookup table already exists, and that both the source and target tables have the same depth / code mask and the period character "." as separator:

  1. private void CopyHierarchicalLookupTableValues(string sourcePwa, string sourceTable, string targetPwa, string targetTable)
  2. {
  3.     var separator = '.';
  4.  
  5.     LookupEntryCollection ltSourceEntries = null;
  6.     using (var pcSource = new ProjectContext(sourcePwa))
  7.     {
  8.         pcSource.Load(pcSource.LookupTables, lts => lts.Where(lt => lt.Name == sourceTable).Include(lt => lt.Entries.Include(e => e.FullValue, e => e.Id, e => e.SortIndex)));
  9.         pcSource.ExecuteQuery();
  10.  
  11.         if (pcSource.LookupTables.Any())
  12.         {
  13.             ltSourceEntries = pcSource.LookupTables.First().Entries;
  14.         }
  15.         else
  16.         {
  17.             Console.WriteLine("Source table '{0}' not found on PWA '{1}'", sourceTable, sourcePwa);
  18.         }
  19.     }
  20.  
  21.     if (ltSourceEntries != null)
  22.     {
  23.         using (var pcTarget = new ProjectContext(targetPwa))
  24.         {
  25.             pcTarget.Load(pcTarget.LookupTables, lts => lts.Where(lt => lt.Name == targetTable).Include(lt => lt.Name));
  26.             pcTarget.ExecuteQuery();
  27.  
  28.             // target table exist
  29.             if (pcTarget.LookupTables.Any())
  30.             {
  31.                 var ltTargetEntries = pcTarget.LookupTables.First().Entries;
  32.                 var localIdToFullValueMap = new Dictionary<Guid, string>();
  33.  
  34.                 // we cannot assign the FullValue property the value that includes the separator characters
  35.                 // to avoid LookupTableItemContainsSeparator = 11051 error
  36.                 // we should  split the value at separator characters and assign the last item to the Value property and if there is a parent item
  37.                 // set the ParentId property as well, see later
  38.                 // https://msdn.microsoft.com/en-us/library/office/ms508961.aspx
  39.                 ltSourceEntries.ToList().ForEach(lte =>
  40.                 {
  41.                     var value = lte.FullValue;
  42.                     Console.WriteLine("FullValue: '{0}'", value);
  43.                     Guid? parentId = null;
  44.                     var parentFullValue = string.Empty;
  45.  
  46.                     var lastIndexOfSeparator = value.LastIndexOf(separator);
  47.                     if (lastIndexOfSeparator > -1)
  48.                     {
  49.                         parentFullValue = value.Substring(0, lastIndexOfSeparator);
  50.                         value = value.Substring(lastIndexOfSeparator + 1);
  51.                         Console.WriteLine("value: '{0}'", value);
  52.                         Console.WriteLine("parentFullValue: '{0}'", parentFullValue);
  53.  
  54.                         // parent should have been already appended to avoid the error:
  55.                         // An unhandled exception of type 'Microsoft.SharePoint.Client.PropertyOrFieldNotInitializedException' occurred in Microsoft.SharePoint.Client.Runtime.dll
  56.                         // Additional information: The property or field 'FullValue' has not been initialized. It has not been requested or the request has not been executed. It may need to be explicitly requested.
  57.                         //parentId = ltTargetEntries.First(e => e.FullValue == parentFullValue).Id;
  58.                         parentId = localIdToFullValueMap.First(e => e.Value == parentFullValue).Key;
  59.                         Console.WriteLine("parentId: '{0}'", parentId);
  60.  
  61.                     }
  62.  
  63.                     // instead creating a new ID, you can copy the existing ID
  64.                     // it works only if you copy the entries to another PWA instance,
  65.                     // and only if there wasn't already an entry with the same ID
  66.                     var id = Guid.NewGuid(); // lte.Id;
  67.  
  68.                     var leci = new LookupEntryCreationInformation
  69.                     {
  70.                         Id = id,
  71.                         Value = new LookupEntryValue { TextValue = value },
  72.                         SortIndex = lte.SortIndex
  73.                     };
  74.  
  75.                     Console.WriteLine("leci Id: '{0}', Value: '{1}'", leci.Id, leci.Value.TextValue);
  76.                     var fullValue = value;
  77.  
  78.                     // we should set the ParentId property only if the entry has really a parent
  79.                     // setting the ParentId property to null is not OK
  80.                     if (parentId.HasValue)
  81.                     {
  82.                         leci.ParentId = parentId.Value;
  83.                         fullValue = parentFullValue + separator + value;
  84.                     }
  85.  
  86.  
  87.                     localIdToFullValueMap.Add(leci.Id, fullValue);
  88.  
  89.                     ltTargetEntries.Add(leci);
  90.                     // if there are a lot of entries, it might be advisable to update and execute query after each of the entries
  91.                     // to avoid "The request uses too many resources" error message
  92.                     // https://pholpar.wordpress.com/2015/07/19/how-to-avoid-the-request-uses-too-many-resources-when-using-the-client-object-model-via-automated-batching-of-commands/
  93.                     // pcTarget.LookupTables.Update();
  94.                     // pcTarget.ExecuteQuery();
  95.                 });
  96.  
  97.                 pcTarget.LookupTables.Update();
  98.                 pcTarget.ExecuteQuery();
  99.             }
  100.             else
  101.             {
  102.                 Console.WriteLine("Target table '{0}' not found on PWA '{1}'", targetTable, targetPwa);
  103.             }
  104.         }
  105.     }
  106. }

The following call copies the lookup table RBS from one PWA instance to another one:

CopyHierarchicalLookupTableValues("http://YourProjectServer/PWA", "RBS", "http://AnotherProjectServer/PWA", "RBS");

The notes I made for the flat lookup tables apply to the hierarchical case as well:

If your lookup table does not have a lot of entries, you can probably copy them in a single batch, using a single call to the ExecuteQuery method. Otherwise, if the batch size exceeds the 2 MB limit, you might get an exception like "The request uses too many resources". In this case I suggest invoking the ExecuteQuery method for each entry, or creating an ExecuteQueryBatch method, as described in this post.

Theoretically, you could copy the entries with their original IDs, but technically it is not always an option. For example, if you copy the entries within the same PWA instance, you cannot have two entries sharing the same ID. Based on my experience, if there is already an entry with the same ID and you try to copy it into another lookup table, the entry simply won't be copied, although no exception is thrown.


How to get the Url of the “Edit View” Page of a Specific SharePoint List View from PowerShell


There might be cases when you can’t access the Edit View page of a specific list view from the SharePoint UI. For example, there is no such direct link in the case of Survey lists. There is no ribbon including the Manage Views group, and the Views area is missing from the List settings page as well.

You can, however, access the Edit View page from your browser if you know its URL. The standard URL of this page follows this pattern:

http://YourSharePoint/Web/SubWeb/_layouts/15/ViewEdit.aspx?List=%7BDC913804%2DB28E%2D4F52%2DAF53%2DDEC490A1C83D%7D&View=%7B2E7DF707%2D42BA%2D44EE%2D87C6%2D0919CA38BDF1%7D

As you can see, the ViewEdit.aspx page is responsible for this functionality. The encoded IDs (GUIDs) of the list and the view are passed as query string parameters (List and View, respectively).

You can get the URL of the page using this PowerShell script easily:

$web = Get-SPWeb 'http://YourSharePoint/Web/SubWeb'
$list = $web.Lists['YourList']
# get the default view of the list
$view = $list.DefaultView
# or get an arbitrary view by its name
# $view = $list.Views['All Items']
$viewId = $view.ID

function EscapeGuid($guid)
{
  return "{$guid}".ToUpper().Replace('-', '%2D').Replace('{', '%7B').Replace('}', '%7D')
}

$url = $web.Url + '/_layouts/15/ViewEdit.aspx?List=' + (EscapeGuid $list.ID) + '&View=' + (EscapeGuid $view.ID)
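As a side note, instead of the manual Replace calls you could probably rely on the .NET framework's own URI escaping; a minimal sketch follows (the EscapeGuidAlt name is just illustrative). Note that EscapeDataString encodes the braces but leaves the hyphens as-is, which is still a valid query string value:

function EscapeGuidAlt($guid)
{
  # encodes '{' and '}' as %7B and %7D; hyphens do not need to be percent-encoded
  return [System.Uri]::EscapeDataString("{$guid}".ToUpper())
}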

You can even start the page in Internet Explorer from PowerShell if you wish:

$ie = New-Object -ComObject InternetExplorer.Application
$ie.Navigate2($url)
$ie.Visible = $true
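Or, if the default browser is fine for you, a single Start-Process call should do the job as well:

Start-Process $url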

