Second Life of a Hungarian SharePoint Geek

People Picker does not work when ActiveX Filtering is enabled in Internet Explorer


Last week we had a strange issue with one of the users. She used a simple SharePoint application that includes a People Picker control to enable users to assign items to other employees.

A pre-defined user should have been selected as a default value in the field via server-side code, but in this case it was empty on page load. If the user typed the name or login name of an employee into the text box, the control did not resolve the name as expected, although when we checked the network traffic with Fiddler, we saw that the request for name resolution was sent to the server and the server responded with the resolved entity. If the user tried to select the assignee by browsing the entities, she was able to select the employee, but as soon as she clicked the OK button in the Select People and Groups webpage dialog, the selection was lost and the text box remained empty.

Other users (using the same browser version, Internet Explorer 9, and having the very same permissions on the system) had no problem with the application.

Fortunately, I noticed an unusual icon next to the URL in the browser, saying “Some content is blocked to help protect your privacy”.


As I found out, this was the result of the user having unknowingly activated the ActiveX Filtering feature in her browser.


The solution was to turn off ActiveX Filtering for the site; alternatively, one can turn off the whole feature if it is not needed.


I was able to reproduce the issue with SharePoint 2010 using Internet Explorer versions up to 11. In the case of SharePoint 2013 the issue seems to be fixed; at least, I was not able to reproduce it with IE 10.



How to find out the real number of Queue Jobs


Recently we had an issue with Project Server. Although the Microsoft Project Server Queue Service was running, the items in the queue were not being processed, and the performance of the system degraded severely. At the same time we found a lot of cache cluster failures in the Windows Event Logs and ULS logs; it was not clear which one was the source of the problem and which was a consequence of the other. The “solution” was to install the February 2015 CU SharePoint Product Updates and restart the server.

However, even after the restart the number of job entries seemed to be constant when checked via the PWA Settings / Manage Queue Jobs page (Queue and Database Administration): on the first page load the total number displayed at the bottom left of the grid was 1000, but when we paged through the results or refreshed the status, the total changed to 500 (this seems to be an issue with the product). That means PWA administrators don’t see the real number of entries.

But how could one then get the real number of the queue jobs?

If you have permission to access the performance counters on the server (in the simplest case, if you are a local admin), then you can use the Current Unprocessed Jobs counter (category ProjectServer:QueueGeneral), which – as its name suggests – gives the total number of currently unprocessed jobs.
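For example, a quick way to read this counter from PowerShell could look like the sketch below. I assume the English counter path here; on localized installations, or if the counter turns out to be instanced, check the exact path in Performance Monitor first.

(Get-Counter -Counter "\ProjectServer:QueueGeneral\Current Unprocessed Jobs").CounterSamples[0].CookedValue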

You have to find an alternative solution if you need the count of jobs having another status, or need even more granular results, for example, the number of job entries that are ready for processing and are related to publishing a project.

The QueueJobs property of the Project class in the client object model (see the QueueJob and QueueJobCollection classes as well) provides only information related to a given project, and the same is true for the REST interface, where you can access the same information for your project like this (the Guid is the ID of your project):

http://YourProjServer/PWA/_api/ProjectServer/Projects('98138ffd-d0fa-e311-83c6-005056b45654')/QueueJobs
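If you prefer to test such REST queries from PowerShell instead of the browser, a sketch like the following should work (assuming Windows authentication; the project Guid is again just a placeholder):

$queueJobs = Invoke-RestMethod -Uri "http://YourProjServer/PWA/_api/ProjectServer/Projects('98138ffd-d0fa-e311-83c6-005056b45654')/QueueJobs" -UseDefaultCredentials -Headers @{ Accept = "application/json; odata=verbose" }
$queueJobs.d.results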

The best solution I’ve found is based on the GetJobCount method of the QueueSystem object in the PSI interface. Let’s see some practical PowerShell examples of how to use it.

To get the reference for the proxy:

$pwaUrl = "http://YourProjServer/PWA"
$svcPSProxy = New-WebServiceProxy -Uri ($pwaUrl + "/_vti_bin/PSI/QueueSystem.asmx?wsdl") -UseDefaultCredential

To get the number of all entries without filtering:

$svcPSProxy.GetJobCount($Null, $Null, $Null)

For filtering, we can use the second and third parameters of the method: jobStates and messageTypes, which are arrays of the corresponding JobState and QueueMsgType enumeration values. Both of these enums are available as nested enums of the Microsoft.Office.Project.Server.Library.QueueConstants class. This class is defined in the Microsoft.Office.Project.Server.Library assembly, so if we would like to use the enums, we should load the assembly first:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Project.Server.Library")

Note: you can use the integer values corresponding to the enum values like:

$jobStates = (1, 7)

…however, I don’t find it very developer-friendly to use such magic constants in code, and you also lose the autocomplete feature of PowerShell that you have when working with the enums, as displayed below:

$jobStates = (
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+JobState]::ReadyForProcessing,
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+JobState]::ProcessingDeferred
)

similarly for message types:

$msgTypes = (
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+QueueMsgType]::ReportingProjectPublish,
  [int][Microsoft.Office.Project.Server.Library.QueueConstants+QueueMsgType]::ReportingProjectDelete
)

You can then access the count of filtered items like:

$svcPSProxy.GetJobCount($Null, $jobStates, $Null)

or

$svcPSProxy.GetJobCount($Null, $jobStates, $msgTypes)

It is worth knowing that the QueueConstants class has two methods (PendingJobStates and CompletedJobStates) that return a predefined set of the enum values as a generic IEnumerable<QueueConstants.JobState>. We can use these methods from PowerShell as well:

$jobStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::PendingJobStates() | % { [int]$_ }

or

$jobStates = [Microsoft.Office.Project.Server.Library.QueueConstants]::CompletedJobStates() | % { [int]$_ }


Automating the Provisioning of a PWA Instance


When testing our custom Project Server 2013 solutions in the development system, or deploying them to the test system, I found it useful to start each time with a clean environment (a new PWA instance having an empty project database and a separate SharePoint content database for the Project Web Access site itself and the project web sites).

We wrote a simple PowerShell script that provisions a new PWA instance, including:
- A separate SharePoint content database that should contain only a single site collection: the one for the PWA. If the content DB already exists, we will use the existing one, otherwise we create a new one.
- The managed path for the PWA.
- A new site collection for the PWA using the project web application site template, and the right locale ID (1033 in our case). If the site already exists (in case we re-use a former content DB), it will be dropped before creating the new one.
- A new project database. If a project database with the same name already exists on the SQL server, it will be dropped and re-created.
- The content database will be mounted to the PWA instance, and the admin permissions are set.

Note that we have a prefix (like DEV or TEST) that identifies the system. We set the URL of the PWA and the database names using this prefix. The database server names (one for the SharePoint content DBs and another one for the service application DBs) include the prefix as well, and are configured via aliases in the SQL Server Client Network Utility, making it easier to relocate the databases if needed.
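The aliases themselves can be registered either via cliconfg.exe or from PowerShell. Below is a hypothetical sketch of the registry-based variant for the DEV environment; the target server names are made up, and the ConnectTo key is the one cliconfg.exe writes for 64-bit clients:

$connectTo = "HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo"
if (-not (Test-Path $connectTo)) { New-Item -Path $connectTo -Force | Out-Null }
# DBMSSOCN = TCP/IP; the value is the real server (and optionally port) behind the alias
New-ItemProperty -Path $connectTo -Name "PS_DEV_Content" -PropertyType String -Value "DBMSSOCN,sqlcontent01.company.com" -Force | Out-Null
New-ItemProperty -Path $connectTo -Name "PS_DEV_ServiceApp" -PropertyType String -Value "DBMSSOCN,sqlservice01.company.com" -Force | Out-Null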

$environmentPrefix = "DEV"

$webAppUrl = [string]::Format("http://ps-{0}.company.com", $environmentPrefix)

$contentDBName = [string]::Format("PS_{0}_Content_PWA", $environmentPrefix)
$contentDBServer = [string]::Format("PS_{0}_Content", $environmentPrefix)

$pwaMgdPathPostFix = "PWA"
$pwaUrl = [string]::Format("{0}/{1}", $webAppUrl, $pwaMgdPathPostFix)
$pwaTitle = "PWA Site"
$pwaSiteTemplate = "PWA#0"
$pwaLcid = 1033
$ownerAlias = "domain\user1"
$secondaryOwnerAlias = "domain\user2"

$projServDBName = [string]::Format("PS_{0}_PS", $environmentPrefix)
$projServDBServer = [string]::Format("PS_{0}_ServiceApp", $environmentPrefix)

Write-Host Getting web application at $webAppUrl
$webApp = Get-SPWebApplication -Identity $webAppUrl

$contentDatabase = Get-SPContentDatabase -Identity $contentDBName -ErrorAction SilentlyContinue

if ($contentDatabase -eq $null) {
  Write-Host Creating content database: $contentDBName
  $contentDatabase = New-SPContentDatabase -Name $contentDBName -WebApplication $webApp -MaxSiteCount 1 -WarningSiteCount 0 -DatabaseServer $contentDBServer
}
else {
  Write-Host Using existing content database: $contentDBName
}

$pwaMgdPath = Get-SPManagedPath -Identity $pwaMgdPathPostFix -WebApplication $webApp -ErrorAction SilentlyContinue
if ($pwaMgdPath -eq $null) {
  Write-Host Creating managed path: $pwaMgdPathPostFix
  $pwaMgdPath = New-SPManagedPath -RelativeURL $pwaMgdPathPostFix -WebApplication $webApp -Explicit
}
else {
  Write-Host Using existing managed path: $pwaMgdPathPostFix
}

$pwaSite = Get-SPSite -Identity $pwaUrl -ErrorAction SilentlyContinue
if ($pwaSite -ne $null) {
  Write-Host Deleting existing PWA site at $pwaUrl
  $pwaSite.Delete()
}

Write-Host Creating PWA site at $pwaUrl
$pwaSite = New-SPSite -Url $pwaUrl -OwnerAlias $ownerAlias -SecondaryOwnerAlias $secondaryOwnerAlias -ContentDatabase $contentDatabase -Template $pwaSiteTemplate -Language $pwaLcid -Name $pwaTitle

$projDBState = Get-SPProjectDatabaseState -Name $projServDBName -DatabaseServer $projServDBServer
if ($projDBState.Exists) {
  Write-Host Removing existing Project DB $projServDBName
  Remove-SPProjectDatabase -Name $projServDBName -DatabaseServer $projServDBServer -WebApplication $webApp
}
Write-Host Creating Project DB $projServDBName
New-SPProjectDatabase -Name $projServDBName -DatabaseServer $projServDBServer -Lcid $pwaLcid -WebApplication $webApp

Write-Host Bind Project Service DB to PWA Site
Mount-SPProjectWebInstance -DatabaseName $projServDBName -DatabaseServer $projServDBServer -SiteCollection $pwaSite

#Setting admin permissions on PWA
Grant-SPProjectAdministratorAccess -Url $pwaUrl -UserAccount $ownerAlias

Using this script helps us avoid a lot of manual configuration steps, saves us a lot of time, and makes the result more reproducible.


Strange Access Denied Error in Project Server Client OM


Recently I worked with the Managed Client Object Model of Project Server 2013. Although I am a Farm Administrator, a Site Collection Administrator (Site Owner) of every site collection, and a member of the Administrators group in Project Server, I received an Access Denied error (UnauthorizedAccessException) when executing even the simplest queries, like this one:

using (var projectContext = new ProjectContext(pwaUrl))
{
    var projects = projectContext.Projects;
    projectContext.Load(projects);
    projectContext.ExecuteQuery();
}

The error message was:

Access denied. You do not have permission to perform this action or access this resource.

Error code: -2147024891


No entries corresponding to the error or to the correlation ID from the response were found in the ULS logs.

The real problem was the pwaUrl parameter of the ProjectContext constructor: by mistake I used the URL of the root site (like http://projectserver) instead of the URL of the PWA site (http://projectserver/PWA).
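For reference, the same queries work fine once the constructor gets the PWA URL. A minimal sketch in PowerShell (mirroring the client OM usage elsewhere on this blog; the URLs are placeholders):

Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.ProjectServer.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
# wrong: New-Object Microsoft.ProjectServer.Client.ProjectContext("http://projectserver")
$projectContext = New-Object Microsoft.ProjectServer.Client.ProjectContext("http://projectserver/PWA")
$projects = $projectContext.LoadQuery($projectContext.Projects)
$projectContext.ExecuteQuery()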


How to Read the Values of Fields bound to Lookup Tables via the Client Object Model


Assume you have an Enterprise Custom Field (let’s call it ‘ResField’) defined for project resources that is bound to a Lookup Table.

How can we read the textual values of the field as assigned to your resources? After all, what makes reading such values any different from reading field values that are not bound to lookup tables?

If we have a look at a resource with a lookup-table-based custom field via an OData / REST query (for example, via http://YourProjectServer/pwa/_api/ProjectServer/EnterpriseResources), we can see that the value is stored as a reference, like ‘Entry_4d65d905cac9e411940700505634b541’.


If we access the value via the managed client OM, we get it as a string array, even if only a single value can be selected from the lookup table. The reference value in the array corresponds to the value of the InternalName property of the lookup table entry. If we know the ID of the resource (the one we want to read the value from), the enterprise custom field (which means we know its internal name as well), and the related lookup table, we can get the result in a single request as shown below:

using (var projectContext = new ProjectContext(pwaUrl))
{
    var lookupTableId = "4c65d905-cac9-e411-9407-00505634b541";
    var resourceId = "f9497d1d-9145-e411-9407-00505634b541";

    var res = projectContext.EnterpriseResources.GetById(resourceId);
    var lt = projectContext.LookupTables.GetById(lookupTableId);
    var cfInternalName = "Custom_80bd269ecbc9e411940700505634b541";
    projectContext.Load(res, r => r[cfInternalName]);
    projectContext.Load(lt.Entries);
    projectContext.ExecuteQuery();

    var valueEntries = res[cfInternalName] as string[];
    if (valueEntries != null)
    {
        foreach (var valueEntry in valueEntries)
        {
            var lookupText = lt.Entries.FirstOrDefault(e => e.InternalName == valueEntry) as LookupText;
            var ltValue = (lookupText != null) ? lookupText.Value : null;
            Console.WriteLine("Value: '{0}' (Entry was '{1}')", ltValue, valueEntry);
        }
    }
}

However, if these IDs are unknown, and we know only the name of the resource and of the field, we need to submit an extra request to get the IDs for the second step:

using (var projectContext = new ProjectContext(pwaUrl))
{
    var resourceName = "Brian Cox";
    var fieldName = "ResField";

    projectContext.Load(projectContext.EnterpriseResources, ers => ers.Include(r => r.Id, r => r.Name));
    projectContext.Load(projectContext.CustomFields, cfs => cfs.Include(f => f.Name, f => f.InternalName, f => f.LookupTable.Id, f => f.LookupEntries));

    projectContext.ExecuteQuery();

    var resourceId = projectContext.EnterpriseResources.First(er => er.Name == resourceName).Id.ToString();
    var cf = projectContext.CustomFields.First(f => f.Name == fieldName);
    var cfInternalName = cf.InternalName;
    var lookupTableId = cf.LookupTable.Id.ToString();

    var res = projectContext.EnterpriseResources.GetById(resourceId);
    var lt = projectContext.LookupTables.GetById(lookupTableId);

    projectContext.Load(res, r => r[cfInternalName]);
    projectContext.Load(lt.Entries);
    projectContext.ExecuteQuery();

    var valueEntries = res[cfInternalName] as string[];
    if (valueEntries != null)
    {
        foreach (var valueEntry in valueEntries)
        {
            var lookupText = lt.Entries.FirstOrDefault(e => e.InternalName == valueEntry) as LookupText;
            var ltValue = (lookupText != null) ? lookupText.Value : null;
            Console.WriteLine("Value: '{0}' (Entry was '{1}')", ltValue, valueEntry);
        }
    }
}

Note: although this post was about a custom field defined for the resource entity, you can apply the same technique to project and task fields as well.


How to Read the Values of Fields bound to Lookup Tables via REST


In my recent post I illustrated how to read the values of Enterprise Custom Fields (ECFs) that are bound to Lookup Tables. I suggest you read that post first, as it can help you better understand the relationship between the custom field values and the internal names of the lookup table entries.

In this post I show you how to read such values using the REST interface. Instead of C# I use JavaScript in this example. The sample code uses version 3.0.3-Beta4 of the LINQ for JavaScript library (the version is important, as this version contains lower-case function names in contrast to the former stable releases!) and version 1.8.3 of jQuery.

Assuming that these scripts are all located in the PSRESTTest/js subfolder of the Layouts folder, we can inject them via a Script Editor Web Part using this HTML code:

<script type="text/ecmascript" src="/_layouts/15/PSRESTTest/js/jquery-1.8.3.min.js"></script>
<script type="text/ecmascript" src="/_layouts/15/PSRESTTest/js/linq.min.js"></script>
<script type="text/ecmascript" src="/_layouts/15/PSRESTTest/js/GetCustFieldREST.js"></script>

In our GetCustFieldREST.js script we define the String.format helper function first:

String.format = (function () {
    // The string containing the format items (e.g. "{0}")
    // will and always has to be the first argument.
    var theString = arguments[0];

    // start with the second argument (i = 1)
    for (var i = 1; i < arguments.length; i++) {
        // "gm" = RegEx options for Global search (more than one instance)
        // and for Multiline search
        var regEx = new RegExp("\\{" + (i - 1) + "\\}", "gm");
        theString = theString.replace(regEx, arguments[i]);
    }

    return theString;
});

Another helper function supports sending fault-tolerant REST-queries:

function sendRESTQuery(queryUrl, onSuccess, retryCount) {

    var retryWaitTime = 1000; // 1 sec.
    var retryCountMax = 3;
    // use a default value of 0 if no value for retryCount was passed
    retryCount = (retryCount != undefined) ? retryCount : 0;

    $.ajax({
        type: 'GET',
        contentType: 'application/json;odata=verbose',
        url: baseUrl + queryUrl,
        headers: {
            'X-RequestDigest': $('#__REQUESTDIGEST').val(),
            "Accept": "application/json; odata=verbose"
        },
        dataType: "json",
        complete: function (result) {
            var response = JSON.parse(result.responseText);
            if (response.error) {
                if (retryCount <= retryCountMax) {
                    // note: retry with retryCount + 1 (a post-increment here would pass the old value)
                    window.setTimeout(function () { sendRESTQuery(queryUrl, onSuccess, retryCount + 1) }, retryWaitTime);
                }
                else {
                    alert("Error: " + response.error.code + "\n" + response.error.message.value);
                }
            }
            else {
                onSuccess(response.d);
            }
        }
    });
}

 

The baseUrl variable holds the root of the REST endpoint of Project Server. I assume your PWA site is provisioned under the PWA managed path.

var baseUrl = String.format("{0}//{1}/PWA/_api/ProjectServer/", window.location.protocol, window.location.host);

We call the getResCustProp method when the page is loaded:

$(document).ready(getResCustProp);

In the getResCustProp method I first query the InternalName of the custom field, as well as the InternalName and FullValue properties of the lookup entries of the corresponding lookup table. In a second query I read the custom field value for the specified enterprise resource, and compare the value (or values) stored in the field with the InternalName property of the lookup table entries from the first query. Note that we should escape the underscore in the InternalName property, and that we should use the eval JavaScript function, as we don’t know the name of the custom field (that is, the name of the property) at design time.

function getResCustProp() {
    var resourceName = "Brian Cox";
    var fieldName = "ResField";
    var fieldQueryUrl = String.format("CustomFields?$expand=LookupTable/Entries&$filter=Name eq '{0}'&$select=InternalName,LookupTable/Entries/InternalName,LookupTable/Entries/FullValue", fieldName);
    sendRESTQuery(fieldQueryUrl, function (fieldResponseData) {
        var field = fieldResponseData.results[0];
        var lookupTableEntries = Enumerable.from(field.LookupTable.Entries.results);
        var resourceQuery = String.format("EnterpriseResources?$filter=Name eq '{0}'&$select={1}", resourceName, field.InternalName);
        sendRESTQuery(resourceQuery, function (resourceResponseData) {
            var resource = resourceResponseData.results[0];
            var encodedInternalName = field.InternalName.replace('_', '_x005f_');
            var fieldValue = eval("resource." + encodedInternalName);
            Enumerable.from(fieldValue.results).forEach(function (fv) {
                // find the lookup table entry whose InternalName matches the stored reference value
                var entry = lookupTableEntries.first(function (e) { return e.InternalName == fv; });
                alert(String.format("Value: '{0}', Entry: '{1}'", entry.FullValue, fv));
            });
        });
    });
}

Of course, if you would like to use the code in production, you should add further data validation (whether any resource and custom field are returned by the queries, etc.) and error handling to the method.


Strange Localization Issue When Working with List and Site Templates


One of our clients runs a localized version of SharePoint 2013. The operating system is Windows Server 2012 R2 (English), and the SharePoint Server itself is English as well. The German language pack is installed, and sites and site collections were created in German. We are working with various custom site templates. Recently one of these templates had to be extended with a task-list-based custom list (called ToDos). The users prepared the list in a test site, and we saved the list as a template. We created a new site using the site template (we will refer to this site later as the prototype), and next we created a new list based on the custom list template. Finally, we saved the altered web site as a site template, including content, using the following PowerShell commands:

$web = Get-SPWeb $siteTemplateSourceUrl
$web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)

We created a test site using the new site template, and everything seemed to be OK. However, after a while the users started to complain that a menu for the new list contained some English text as well. As it turned out, some of the views for the new list were created with English titles:


First, we verified the manifest.xml of the list template by downloading the .stp file (which has the CAB file format) and opening it using IZArc. We found that the DisplayName property of the default view (“Alle Vorgänge”, meaning “All Tasks”) and of a custom datasheet view (called “db”, which stands for “Datenblatt”) contains the title as plain text, while the DisplayName property of the other views contains a resource reference (like “$Resources:core,Late_Tasks;”).


Next, we downloaded the site template (the .wsp file also has the CAB file format and can be opened with IZArc), and verified the schema.xml of the ToDos list. We found that the original German texts (“Alle Vorgänge” and “db”) were kept; however, all other view names were “localized” to English.


At this point I already guessed that the problem was caused by the locale of the thread the site template exporting code was running in. To verify my assumption, I saved the prototype site from the site settings via the SharePoint web UI (which is German in our case). This time the resulting schema.xml in the new site template .wsp contained the German titles:


We got the same result (that is, German view titles) if we ran our former PowerShell code with German specified as the culture of the running thread. See more info about the Using-Culture helper method, the SharePoint Multilingual User Interface (MUI) and PowerShell here:

Using-Culture de-DE {
  $web = Get-SPWeb $pwsSiteTemplateSourceUrl
  $web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)
}

We fixed the existing views via the following PowerShell code (note: using the Using-Culture helper method is important in this case as well; we have only a single level of site hierarchy here, so there is no recursion in the code):

$web = Get-SPWeb http://SharePointServer

function Rename-View-IfExists($list, $viewNameOld, $viewNameNew)
{
  $view =  $list.Views[$viewNameOld]
  If ($view -ne $Null) {
      Write-Host Renaming view $viewNameOld to $viewNameNew
      $view.Title = $viewNameNew
      $view.Update()
  }
  Else {
    Write-Host View $viewNameOld not found
  }
}

Using-Culture de-DE {
  $web.Webs | % {
    $list = $_.Lists["ToDos"]
    If ($list -ne $Null) {
      Write-Host ToDo list found in $_.Title
      Rename-View-IfExists $list "Late Tasks" "Verspätete Vorgänge"
      Rename-View-IfExists $list "Upcoming" "Anstehend"
      Rename-View-IfExists $list "Completed" "Abgeschlossen"
      Rename-View-IfExists $list "My Tasks" "Meine Aufgaben"
      Rename-View-IfExists $list "Gantt Chart" "Gantt-Diagramm"
      Rename-View-IfExists $list "Late Tasks" "Verspätete Vorgänge" 
    }
  }
}

It is strange that we had no problems with field names or other localizable texts when working with the English culture.


May Merge-SPLogFile Flood the Content Database of the Central Administration Web Application?


In recent weeks we searched for the cause of a specific error in one of our SharePoint 2013 farms. To get detailed trace information, we often switched the log level to VerboseEx mode. A few days later the admins alerted us that the size of the Central Administration content database had increased enormously (at that time it was about 80 GB!).

Looking for the source of this unexpected amount of data, I found a document library called Diagnostics Log Files (description: “View and analyze merged log files from the all servers in the farm”) that contained 1824 items.


Although I consider myself primarily a SharePoint developer, I always try to remain up to date on infrastructure themes as well, but to tell the truth I had never seen this document library before. Searching the web didn’t help either.

Having a look into the document library I found a set of folders with GUID names.


Each of the folders contained a lot of files: a numbered set for each of the servers in the farm.


The files within the folders are archived logs in the .gz file format, each around 26 MB.


Based on the description of the library, I guessed that it was related to the Merge-SPLogFile cmdlet, which collects the ULS log files from all of the servers in the farm and saves the aggregation into the specified file on the local system, although I have not found any documentation of how it performs this action or whether it has anything to do with the content DB of the Central Administration site.
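For reference, a typical invocation of the cmdlet looks like the following sketch (the output path and the correlation ID are hypothetical):

Merge-SPLogFile -Path "D:\Logs\MergedLog.log" -Correlation "b0f4a9a3-1c2d-4e5f-8a9b-0c1d2e3f4a5b" -Overwrite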

 

After a few hours of “reflectioning”, it was obvious how this situation had come about. If you are not interested in call chains, feel free to skip the following part.

All of the classes and methods below are defined in the Microsoft.SharePoint assembly, if not specified otherwise.

The InternalProcessRecord method of the Microsoft.SharePoint.PowerShell.SPCmdletMergeLogFile class (Microsoft.SharePoint.PowerShell assembly) creates a new instance of the SPMergeLogFileUtil class based on the path of the aggregated log folder and the filter expression, and calls its Run method:

SPMergeLogFileUtil util = new SPMergeLogFileUtil(this.Path, filter);
ThreadPool.QueueUserWorkItem(new WaitCallback(util.Run));

In the Run method of the Microsoft.SharePoint.Diagnostics.SPMergeLogFileUtil class:

public void Run(object stateInfo)
{
  try
  {
    this.Progress = 0;
    // Creates the Diagnostics Log Files document library in central admin (if it does not exist yet) via the GetLogFilesList method
    // Adds a new subfolder (having a GUID name) to the library. If the folder already exists, deletes its content.
    // Executes child jobs (SPMergeLogFilesJobDefinition) on each farm member; see more details about them later below
    List<string> jobs = this.DispatchCollectingJobs();
    // Waits for all child jobs to be finished on the farm members
    this.MonitorJobs(jobs);
    // Merges the content of the collected files from the central admin document library into the specified local file system folder by calling the MergeFiles method
    // Finally deletes the temporary files one by one from the central admin document library and at the very end deletes their folder as well
    this.RunMerge();
  }
  catch (Exception exception)
  {
    this.Error = exception;
  }
  finally
  {
    this.Progress = 100;
  }
}

Yes, as you can see, if there is any error in the file collection process on any of the farm members, or the merging process fails, the files won’t be deleted from the Diagnostics Log Files document library.

Instead of the RunMerge method, the deletion process would probably have a better place in the finally block; or at least, the catch block should check whether the files were removed successfully.

 

A few words about Microsoft.SharePoint.Diagnostics.SPMergeLogFilesJobDefinition, as promised earlier. Its Execute method calls the CollectLogFiles method, which creates a ULSLogFileProcessor instance based on the requested log filter and gets the corresponding ULS entries from the farm member the job is running on, stores the entries in a temporary file in the file system, uploads them to the current subfolder (with the GUID name) of the Diagnostics Log Files document library (file name pattern: [ServerLocalName] (1).log.gz, or [ServerLocalName].log.gz if a single file is enough to store the log entries from the server), and finally deletes the local temporary file.

 

We can read a few related text values via PowerShell as well:

The getter of the DisplayName property in the SPMergeLogFilesJobDefinition class returns:

SPResource.GetString("CollectLogsJobTitle", new object[] { base.Server.DisplayName });

reading the value via PowerShell:

[Microsoft.SharePoint.SPResource]::GetString("CollectLogsJobTitle")

returns

Collection of log files from server |0

In the GetLogFilesList method of the SPMergeLogFileUtil class we find the title and description of the Central Administration document library used for log merging:

string title = SPResource.GetString(SPGlobal.ServerCulture, "DiagnosticsLogsTitle", new object[0]);
string description = SPResource.GetString(SPGlobal.ServerCulture, "DiagnosticsLogsDescription", new object[0]);

Reading the values via the PowerShell:

[Microsoft.SharePoint.SPResource]::GetString("DiagnosticsLogsTitle")

returns

Diagnostics Log Files

[Microsoft.SharePoint.SPResource]::GetString("DiagnosticsLogsDescription")

returns

View and analyze merged log files from the all servers in the farm

These values support our theory that the library was created and filled by these methods.

 

Next, I used the following PowerShell script to look for failed merge jobs in the time range in which the files in the Central Administration had been created:

$farm = Get-SPFarm
$from = "3/21/2015 12:00:00 AM"
$to = "3/21/2015 6:00:00 AM"
$farm.TimerService.JobHistoryEntries | ? {($_.StartTime -gt $from) -and ($_.StartTime -lt $to) -and ($_.Status -eq "Failed") -and ($_.JobDefinitionTitle -like "Collection of log files from server *")}

The result:


As you can see, in our case there were errors on both farm member servers; for example, there was not enough storage space on one of them. After the jobs had failed, the aggregated files were not deleted from the Diagnostics Log Files document library of the Central Administration.

Since even a successful execution of the Merge-SPLogFile cmdlet can temporarily increase the size of the Central Administration content database considerably, and the effect of failed executions is not only temporary (and particularly large if it happens several times and is combined with verbose logging), SharePoint administrators should be aware of these facts, consider them when planning database sizes, and / or take an extra maintenance step to regularly remove the leftovers of failed merge processes from the Diagnostics Log Files document library. As far as I can tell, the issue affects both SharePoint 2010 and SharePoint 2013.
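For such a maintenance step, a minimal cleanup sketch might look like this (it assumes the English library title, and that no merge job is currently running; test it carefully before scheduling):

$caWebApp = Get-SPWebApplication -IncludeCentralAdministration | ? { $_.IsAdministrationWebApplication }
$caWeb = Get-SPWeb $caWebApp.Url
$list = $caWeb.Lists["Diagnostics Log Files"]
if ($list -ne $Null) {
  # snapshot the collection first, as deleting while enumerating it fails
  @($list.Folders) | % { Write-Host Removing folder $_.Name; $_.Delete() }
}
$caWeb.Dispose()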



Make the Status Bar Visible on Project Detail Pages for Users with Read-Only Permissions on PWA with Publishing Feature


Assume the following situation: you have a Project Web Access (PWA) site on your Project Server that has the SharePoint Server Publishing feature activated both at the site collection and at the site level. This detail might seem irrelevant now, but it becomes important pretty soon. The permissions on the PWA are customized and rather restricted; even project managers have only Read permissions at the root site level. The PMs often complain that they cannot open a project for editing from the Project Detail Pages (PDPs), as the corresponding buttons are inactive on the ribbon.


The administrators check the project and see that it is checked out to someone else.

Problem: the administrators see the status information on the PDPs; however, other users do not. For example, if an administrator opens the project for editing, the following screen is displayed:


Users with standard rights, however, see only this, without the status bar:


I found that the content of the status bar is contained in a DIV with id="pageStatusBar" on the web page. There are several JavaScript methods that manipulate this content, like addStatus, appendStatus, removeAllStatus, removeStatus and setStatusPriColor (all of these in init.debug.js). After setting breakpoints in these methods and reloading the site, I found that the status bar is displayed temporarily in each case, but it is then hidden for standard users via a JavaScript method that is included in the PDP itself (like Schedule.aspx):

document.onreadystatechange=fnRemoveAllStatus; function fnRemoveAllStatus(){removeAllStatus(true)};

For administrators, this script was not included in the page, and so the status bar remained visible.

After a search using Reflector, I found the very same script block being inserted into the page by the HideStatusBar method of the PublishingRibbon class (namespace: Microsoft.SharePoint.Publishing.Internal.WebControls, assembly: Microsoft.SharePoint.Publishing, just like the other classes below). This method is called by the HideRibbonAndStatusBarAndSiteActionsMenu method, which is in turn called by the OnLoad method of the same PublishingRibbon class, if the CanShowSiteActionsMenuItems property of the ConsoleVisibleUtilities class returns false:

public static bool CanShowSiteActionsMenuItems
{
  get
  {
    SPBasePermissions permissions = SPBasePermissions.BrowseUserInfo | SPBasePermissions.CreateSSCSite | SPBasePermissions.CreateAlerts | SPBasePermissions.UseRemoteAPIs | SPBasePermissions.UseClientIntegration |  SPBasePermissions.ViewVersions | SPBasePermissions.OpenItems | SPBasePermissions.ViewListItems | SPBasePermissions.ViewPages | SPBasePermissions.Open | SPBasePermissions.ViewFormPages;
    CombinedBasePermissions permissions2 = new CombinedBasePermissions();
    if ((((permissions | permissions2.ListItemPermissions) == permissions) && ((permissions | permissions2.ListPermissions) == permissions)) && ((permissions | permissions2.RootSitePermissions) == permissions))
    {
      return ((permissions | permissions2.SitePermissions) != permissions);
    }
    return true;
  }
}

CombinedBasePermissions is a helper class that aggregates the site and list permissions of the current user from the current SharePoint context.

The value of the permissions variable is a combination of several list-level and site-level base permissions. If you compare it with the standard Read permission level (see the next two screenshots below), you can see that it is exactly the same combination of permissions:

List permissions for the Read permission level


Site permissions for the Read permission level


The site and list permissions returned by CombinedBasePermissions are compared with this predefined set of base permissions. In the case of the PDPs we have no list context, which means that only the site permissions are compared. If the current user has no permission beyond the ones included in the standard Read permission level, the removal script is injected into the page, and the status bar is not displayed for the user.
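The check can be reproduced from PowerShell to see whether a given user falls into this category. The sketch below just re-creates the permission mask from the decompiled code and tests a hypothetical account against the PWA root web:

$web = Get-SPWeb "http://projectserver/PWA"
$readMask = [Microsoft.SharePoint.SPBasePermissions] "BrowseUserInfo, CreateSSCSite, CreateAlerts, UseRemoteAPIs, UseClientIntegration, ViewVersions, OpenItems, ViewListItems, ViewPages, Open, ViewFormPages"
$userPerms = $web.GetUserEffectivePermissions("domain\user1")
# if OR-ing the user's permissions into the mask changes nothing, the user has
# nothing beyond Read level, and the status bar gets hidden on the PDPs
($readMask -bor $userPerms) -eq $readMask
$web.Dispose()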

The following screenshot illustrates the site permissions for the Team members (Microsoft Project Web App) permission level:


It includes the Browse Directories and the Edit Personal User Information base permissions. If a user had the Team members (Microsoft Project Web App) permission level, either with or without the Read permission level, they would see the status bar. However, granting extra permissions to the users might not be desired; furthermore, the effect would not be limited to the PDPs, as the relevant status bar would be displayed on all other pages as well.

You should consider whether you really need the SharePoint Server Publishing feature. If not, deactivating it at the site level might solve the issue as well.

If you are OK with a quick-and-dirty solution, include the following script in your PDPs, for example via a Script Editor Web Part:

<script type="text/ecmascript">
  function removeAllStatus(b) {
  }
</script>

The script overrides the default implementation of the removeAllStatus method in the scope of the page it is included in, and makes it impossible to hide the status bar.

As a result of the page customization, the status bar remains displayed for standard users as well:



Automating Project Server development tasks via PowerShell and the Client Object Model – Customizing Project Web Site templates


I start with a note this time: even if you are not interested in Project Server itself at all, I suggest you read on, as most of the issues discussed below are not Project Server specific; they apply to SharePoint as well.

Recently I have been working mostly on a Project Server customization project. As I learned on my former development projects, I try to automate as many repetitive tasks as possible (like automating the PWA provisioning), so that more time remains for the really interesting stuff. I plan to post my results on this blog to share the scripts and to document the experience for myself as well.

One of the very first tasks (and probably a never-ending one) was to create a customized Project Web Site (PWS) site template. New Enterprise Projects created in the Project Web Access (PWA) should have their PWS created based on the custom site template.

The process of customizing a PWS site template is described in this post; however, there are a few issues if we apply this approach alone. Just to name a few of them:

– PWS master pages cannot be edited using SharePoint Designer by default. There is a workaround for this issue.

– If I create a custom master page for the PWA and would like a PWS to refer to the same master page, I can set it, for example, using PowerShell. However, if I create a site template from this PWS, this configuration seems to be lost in the template, and the template refers to the default seattle.master. I had a similar experience with the site image / logo: I can set one, but this setting does not seem to be saved in the template.

– The standard navigation structure of a project site (and of any site template created based on it) contains project-specific navigation nodes, like Project Details, that contain the Guid of the current project as a query string parameter. If you create a site template from this site, any project site created based on this template will contain this node twice: one created from the site template (with the wrong Guid, referring to the project the original site belongs to, and thus a wrong URL), and another one created dynamically as the project web site gets provisioned.

The workflow of my web site template creation and customization process includes four main parts, and two of them – step 2 and step 4 – are automated by our script.

The first part of the process (including step 1 and step 2) is optional. If you have changed nothing in your web site prototype, you can start immediately with the manual manipulation of the extracted web site template content (step 3); otherwise, we have to get a fresh version of the template into our local system for further customization.

Step 1: Creation and customization of a SharePoint web site that serves as the prototype for the web site template.

The SharePoint web site is customized based on the requirements using the SharePoint web UI, SharePoint Designer (for PWA see this post), or other tools, like PowerShell scripts (for example, JSLink settings). This is a “manual” task.

Step 2: Creation of the web site template based on the prototype, downloading and extracting the site template.

A site template is created (including content) based on the customized web site. If a former site template having the same name already exists, it will be deleted first.

The site template is downloaded to the local file system (former file having the same name is deleted first).

The content of the .wsp file (CAB format) is extracted into a local folder (folder having the same name is deleted first, if it exists).

Step 3: Customization of the extracted web site template artifacts.

The script is paused. In this step you have the chance to manually customize the solution files, like ONet.xml.

Step 4: Compressing the customized files into a new site template, and uploading it to SharePoint.

After a key press the script runs further.

Files having the same name as our site template and an extension of .cab or .wsp will be deleted. The content of the folder is compressed into a .cab and then renamed to .wsp.

In the final step the original web site template is removed and the new version is installed.

Next, a few words about the CAB extraction and compression tools I chose for the automation. The minimal requirements were that the tool must have a command line interface and that it should recognize the folder structure to be compressed automatically, without any helper files (like the DDF directive file in the case of makecab).

After reading a few comparisons (like this and this one) of the alternative options, I first found IZArc and its command line add-on (including IZARCC for compression and IZARCE for extraction; see their user’s manual for details) to be the best choice. However, after a short test I experienced issues with the depth of the folder path and the file name length in the case of IZARCE, so I fell back to extrac32 for the extraction.

Finally, the script itself:

$pwaUrl = "http://YourProjectServer/PWA/&quot;
$pwsSiteTemplateSourceUrl = $pwaUrl + "YourPrototypeWeb"
$solutionName = "YourSiteTemplate"
$wspFileName = $solutionName + ".wsp"
$cabFileName = $solutionName + ".cab"
$siteTemplateTitle = $solutionName
$siteTemplateName = $solutionName
$siteTemplateDescription = "PWS Website Template"

$localRootPath = "D:\SiteTemplates\"
$wspExtractFolderName = $solutionName
$wspExtractFolder = $localRootPath + $wspExtractFolderName
$wspFilePath = $localRootPath + $wspFileName
$wspLocalPath = $localRootPath + $wspFileName
$wspUrl = $pwaUrl + "_catalogs/solutions/" + $wspFileName

$cabFilePath = $localRootPath + $cabFileName

function Using-Culture (
   [System.Globalization.CultureInfo]   $culture = (throw "USAGE: Using-Culture -Culture culture -Script {…}"),
   [ScriptBlock]
   $script = (throw "USAGE: Using-Culture -Culture culture -Script {…}"))
   {
     $OldCulture = [Threading.Thread]::CurrentThread.CurrentCulture
     $OldUICulture = [Threading.Thread]::CurrentThread.CurrentUICulture
         try {
                 [Threading.Thread]::CurrentThread.CurrentCulture = $culture
                 [Threading.Thread]::CurrentThread.CurrentUICulture = $culture
                 Invoke-Command $script
         }
         finally {
                 [Threading.Thread]::CurrentThread.CurrentCulture = $OldCulture
                 [Threading.Thread]::CurrentThread.CurrentUICulture = $OldUICulture
         }
   }

function Remove-SiteTemplate-IfExists($solutionName, $wspFileName, $pwaUrl) 
{
  $us = Get-SPUserSolution -Identity $solutionName -Site $pwaUrl -ErrorAction SilentlyContinue
  if ($us -ne $Null)
  {
    Write-Host Former version of site template found on the server. It will be removed…
    Uninstall-SPUserSolution -Identity $solutionName -Site $pwaUrl -Confirm:$False
    Remove-SPUserSolution -Identity $wspFileName -Site $pwaUrl -Confirm:$False
  }
}

function Remove-File-IfExists($path)
{
  If (Test-Path $path)
  {
    If (Test-Path $path -PathType Container)
    {
      Write-Host Deleting folder: $path
      Remove-Item $path -Force -Recurse
    }
    Else
    {
      Write-Host Deleting file: $path
      Remove-Item $path -Force
    }
  }
}

Do { $downloadNewTemplate = Read-Host "Would you like to get a new local version of the site template to edit? (y/n)" }
Until ("y","n" -contains $downloadNewTemplate )

If ($downloadNewTemplate -eq "y")
{

    Remove-SiteTemplate-IfExists $solutionName $wspFileName $pwaUrl

    Using-Culture de-DE { 
     Write-Host Saving site as site template including content
     $web = Get-SPWeb $pwsSiteTemplateSourceUrl
     $web.SaveAsTemplate($siteTemplateName, $siteTemplateTitle, $siteTemplateDescription, 1)
   }

  Remove-File-IfExists $cabFilePath

  Write-Host Downloading site template
  $webClient = New-Object System.Net.WebClient
  $webClient.UseDefaultCredentials  = $True 
  $webClient.DownloadFile($wspUrl, $cabFilePath)

  # clean up former version before downloading the new one
  # be sure you do not lock the deletion, for example, by having one of the subfolders opened in File Explorer,
  # or via any file opened in an application
  Remove-File-IfExists $wspExtractFolder

  Write-Host Extracting site template into folder $wspExtractFolder
  # http://updates.boot-land.net/052/Tools/IZArc%20MANUAL.TXT
  # limited file name length / folder structure depth! :-(
  #& "C:\Program Files (x86)\IZArc\IZARCE.exe" -d $cabFilePath $wspExtractFolder

  #http://researchbin.blogspot.co.at/2012/05/making-and-extracting-cab-files-in.html
  #expand $cabFilePath $wspExtractFolder -F:*.*
  extrac32 /Y /E $cabFilePath /L $wspExtractFolder
}

Write-Host "Alter the extracted content of the site template, then press any key to upload the template…"
# wait any key press without any output to the console
# http://technet.microsoft.com/en-us/library/ff730938.aspx
$dummy = $host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")

# clean up former version before creating the new one
# TODO rename it using a date time pattern instead of deletion!
Remove-File-IfExists $cabFilePath
Remove-File-IfExists $wspFilePath

# makecab: we cannot include multiple files directly. To do that, we would have to create a directive file called a Diamond Directive File (DDF) and include instructions in it
# http://comptb.cects.com/automate-compression-tasks-cli/
& "C:\Program Files (x86)\IZArc\IZARCC.exe" -a -r -p $cabFilePath $wspExtractFolder

Rename-Item $cabFilePath $wspFileName

# remove former solution before uploading and activating the new one
Remove-SiteTemplate-IfExists $solutionName $wspFileName $pwaUrl

Write-Host Installing the new version of the site template
Add-SPUserSolution -LiteralPath $wspFilePath -Site $pwaUrl
$dummy = Install-SPUserSolution -Identity $solutionName -Site $pwaUrl

Note: if you are working with the English version of the PWA and have an English operating system on the server, you don’t need the Using-Culture function. To learn more about it, see this post.


Automating the Deployment of a Customized Project Web Site Template via PowerShell and the Managed Client Object Model


Assume you have created a customized web site template for your enterprise project type in the development environment as described here, and now you would like to deploy it to the test farm. Of course, you can manually delete the former site template, upload the new one, re-configure it as the associated web site template of your enterprise project type, and finally re-create your test project (that means checking in and deleting the existing one, and creating it again using the new template), but this procedure is boring, cumbersome and – like any human-based process – rather error-prone.

Why not automate this step as well?

I’ve created a PowerShell script that performs the steps outlined above. The first steps (deleting the former version of the site template and uploading the new one) can be done with native PowerShell cmdlets, but the remaining, Project Server-related tasks require the Managed Client Object Model, so we import the necessary assemblies into the process.

First we get a list of all projects and a list of all enterprise project types, then we query for the right ones on the “client side”.

Note: although PowerShell does not natively support .NET extension methods (like the Where and Include methods of the client object model), we could restrict the items returned by these queries to include only the items we really need (see a solution here), and include only the properties we need (as described here). As the item count of the projects and enterprise project types is not significant, and we have to run the script on the server itself due to the SharePoint cmdlets, it makes no sense in this case to limit the network traffic via these tricks.

Next, we update the web site template setting (the WorkspaceTemplateName property) of the enterprise project type. We need this step because the original value was reset to the default when we deleted the original site template before re-uploading it.

If the test project is found, we delete it (after checking it in, if it was checked out), and create it again using the updated template.

Since these last steps (project check-in, deletion, and creation) are all queue-based operations, we should use the WaitForQueue method to make sure the former operation is completed before we start the next one.

$pwaUrl = "http://YourProjectServer/PWA/&quot;
$solutionName = "YourSiteTemplate"
$wspFileName = $solutionName + ".wsp"
$timeoutSeconds = 1000
$projName = "TestProj"

# English
$projType = "Enterprise Project"
$pwaLcid = 1033
# German
#$projType = "Enterprise-Projekt"
#$pwaLcid = 1031

# path of the folder containing the .wsp
$localRootPath = "D:\SiteTemplates\"
$wspLocalPath = $localRootPath + $wspFileName

# uninstall / remove the site template if activated / found
$solution = Get-SPUserSolution -Identity $wspFileName -Site $pwaUrl -ErrorAction SilentlyContinue
If ($solution -ne $Null) {
  If ($solution.Status -eq "Activated") {
    Write-Host Uninstalling web site template
    Uninstall-SPUserSolution -Identity $solutionName -Site $pwaUrl -Confirm:$False
  }
  Write-Host Removing web site template
  Remove-SPUserSolution -Identity $wspFileName -Site $pwaUrl -Confirm:$False
}

# upload and activate the new version
Write-Host Uploading new web site template
Add-SPUserSolution -LiteralPath $wspLocalPath -Site $pwaUrl
Write-Host Installing new web site template
$dummy = Install-SPUserSolution -Identity $solutionName -Site $pwaUrl
 
# set the path according the location of the assemblies
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.ProjectServer.Client.dll"
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

$projectContext = New-Object Microsoft.ProjectServer.Client.ProjectContext($pwaUrl)

# get lists of enterprise project types and projects
$projectTypes = $projectContext.LoadQuery($projectContext.EnterpriseProjectTypes)
$projects = $projectContext.Projects
$projectList = $projectContext.LoadQuery($projectContext.Projects)

$projectContext.ExecuteQuery()

$entProjType = $projectTypes | ? { $_.Name -eq $projType }
$project = $projectList | ? { $_.Name -eq $projName }

Write-Host Updating web site template for the enterprise project type
$web = Get-SPWeb $pwaUrl
$template = $web.GetAvailableWebTemplates($pwaLcid) | ? { $_.Title -eq $solutionName }

$entProjType.WorkspaceTemplateName = $template.Name
$projectContext.EnterpriseProjectTypes.Update()
$projectContext.ExecuteQuery()

If ($project -ne $Null) {
  If ($project.IsCheckedOut) {
    Write-Host Project $projName is checked out, checking it in before deletion
    $checkInJob = $project.Draft.CheckIn($True)
    $checkInJobState = $projectContext.WaitForQueue($checkInJob, $timeoutSeconds)
    Write-Host Check-in project job status: $checkInJobState
  }
  Write-Host Deleting existing project $projName
  # we can delete the project either this way
  #$removeProjResult = $projects.Remove($project)
  #$removeJob = $projects.Update()
  # or
  $removeJob = $project.DeleteObject()
  $removeJobState = $projectContext.WaitForQueue($removeJob, $timeoutSeconds)
  Write-Host Remove project job status: $removeJobState
}

I found the set of Project Server PowerShell cmdlets limited and rather operations-focused. You can use it as long as your only task is to administer Project Server instances and databases. However, when it comes to interaction with Project Server entities, you have to involve the Managed Client Object Model. Hopefully this example provides not only a reusable tool, but also helps you understand how to extend your own PowerShell library with methods borrowed from the client-side .NET libraries.


Breaking Changes in Project Server Update?


Recently I extended the Visual Studio solution that includes the code sample for a former blog post illustrating how to register Project Server event handlers via the managed client object model.

When I wanted to build the solution, the build broke because of a compile-time error in code I had not changed since last November:

‘Microsoft.ProjectServer.Client.EventHandlerCreationInformation’ does not contain a definition for ‘CancelOnError’


I opened the corresponding assembly (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.ProjectServer.Client.dll) in Reflector, and found that there is really no CancelOnError property defined on the Microsoft.ProjectServer.Client.EventHandlerCreationInformation class.


Based on the official documentation, this property should exist, and I’m sure I was able to compile my code earlier, so it existed at that time.

Our developer environment was recently patched from the SP1 patch level to the February 2015 PU patch level, so the property must have been “lost” at that time.

I found another VM that is at the RTM level, checked the same class in the same assembly, and found the property defined:


Note: the problem affects not only the managed client object model, but the JavaScript object model and the server-side object model as well. It should affect the out-of-the-box feature receiver described in my former post, and all packages that rely on this functionality.

We have the same issue on the server side as well. Both of the classes Microsoft.ProjectServer.EventHandlerCreationInformation and Microsoft.ProjectServer.EventHandler (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\CONFIG\BIN\Microsoft.ProjectServer.dll) had the CancelOnError property in the previous version, but it is simply no longer defined, without any official warning about the change, causing both server-side and client-side code referring to this property to fail.
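A quick way to check whether a given environment is affected is to test for the property via reflection, for example from PowerShell (a sketch assuming the default assembly location):

Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.ProjectServer.Client.dll"
$prop = [Microsoft.ProjectServer.Client.EventHandlerCreationInformation].GetProperty("CancelOnError")
if ($prop -eq $Null) { Write-Host "CancelOnError is missing (patched version)" }
else { Write-Host "CancelOnError is present" }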


How to Read Project Properties that are not Available in the Client Object Model?


Recently I had a development task that at first sight seemed to be trivial, but quickly turned out to be rather a challenge. I had to display some basic project information on a page in our Project Web Site (PWS), like project start and finish date, remaining work, and percent complete. The web page was built using client-side technologies, like the client-side object model (CSOM) for Project 2013 and the AngularJS library, and we did not plan to change the architecture to server-side code.

If you check the properties of the PublishedProject class (either on the client side in the namespace / assembly Microsoft.ProjectServer.Client, or on the server side in Microsoft.ProjectServer), you will see that it has properties like StartDate and FinishDate, and that it inherits its PercentComplete property from the Project base class. However, there is no property for RemainingWork or PercentWorkComplete, although both of these values are available as fields when you manage a Project Server view (see the screenshot below). This information is not available via REST / OData either.

image

You should know that in the case of Project Server, the server-side OM is simply a wrapper around the PSI. For example, the PercentComplete property of the Project class is defined as:

public int PercentComplete
{
  get
  {
    ProjectDataSet.TaskRow summaryTaskRow = this.SummaryTaskRow;
    if (summaryTaskRow != null && !summaryTaskRow.IsTASK_PCT_COMPNull())
      return summaryTaskRow.TASK_PCT_COMP;
    else
      return 0;
  }
}

Client-side OMs (either managed or ECMAScript) and REST calls invoke the server-side OM, so in the background the good old PSI is still in action.

It seems the developers of Project Server simply did not have enough time to map all of the fields available via PSI to the object models on the server side and the client side.

You should also know that the project properties we need are stored as task properties of the project summary task of the given project. In the Project Server database, the tasks of the published projects (and so the project summary tasks as well) are stored in the [pub].[MSP_TASKS] table. If you run the following query (where ProjectWebApp is the name of the database and the Guid in the [PROJ_UID] filter is the ID of your project), you will find some specific field values that help to identify the summary task record of a project:

SELECT [TASK_UID]    
      ,[TASK_PARENT_UID]
      ,[TASK_ID]
      ,[TASK_OUTLINE_NUM]
      ,[TASK_OUTLINE_LEVEL]
      ,[TASK_NAME]
      ,[TASK_START_DATE]
      ,[TASK_FINISH_DATE]
      ,[TASK_PCT_COMP]
      ,[TASK_PCT_WORK_COMP]
      ,[TASK_REM_WORK]
  FROM [ProjectWebApp].[pub].[MSP_TASKS]
  WHERE [PROJ_UID] = 'd0ae5086-be7a-e411-9568-005056b45654'

The project summary task record – at least, based on my experimental results – matches the following conditions:

[TASK_ID] = 0

[TASK_OUTLINE_NUM] = 0

[TASK_OUTLINE_LEVEL] = 0

[TASK_UID] = [TASK_PARENT_UID]
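
If you would like to verify these conditions quickly from PowerShell on a development machine, you can run a query like the sketch below. It assumes that the Invoke-Sqlcmd cmdlet (part of the SQL Server PowerShell tools) is available and that your account has read permission on the database; use it for exploration only, never in production code.

# for exploration on a dev box only; assumes the Invoke-Sqlcmd cmdlet is available
$projUid = "d0ae5086-be7a-e411-9568-005056b45654"
$query = "SELECT [TASK_UID], [TASK_NAME], [TASK_PCT_WORK_COMP], [TASK_REM_WORK] " +
         "FROM [ProjectWebApp].[pub].[MSP_TASKS] " +
         "WHERE [PROJ_UID] = '$projUid' AND [TASK_UID] = [TASK_PARENT_UID]"
Invoke-Sqlcmd -ServerInstance "YourSqlServer" -Query $query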

But as I said, we need a solution on the client side, and obviously one that does not tamper with the Project Server database. What options do we have to obtain the missing information?

The Project class has a property called SummaryTaskId, but even if you have this value already, querying the project tasks with it via REST (for example: http://YourProjServer/PWA/_api/ProjectServer/Projects('d0ae5086-be7a-e411-9568-005056b45654')/Tasks('FFAE5086-BE7A-E411-9568-005056B45654')) or via the client object model returns an empty result. The description of the SummaryTaskId property says: “Gets the GUID for the hidden project summary task”. Yes, it is so well hidden that it is simply not included in the Tasks collection of the Project class! The Tasks property of the PublishedProject class is of type PublishedTaskCollection, and on the server side the record of the project summary task is simply filtered out when the internal Dictionary used for the storage of the Task records is initialized. If you don’t believe me, or need more details on that, see the constructor of the Microsoft.ProjectServer.PublishedTaskCollection class below:

internal PublishedTaskCollection()
{
    Func<Dictionary<Guid, PublishedTask>> valueFactory = null;
    if (valueFactory == null)
    {
        valueFactory = () => base.ProjectData.Task.OfType<ProjectDataSet.TaskRow>().Where<ProjectDataSet.TaskRow>(delegate (ProjectDataSet.TaskRow r) {
            if (!r.IsTASK_PARENT_UIDNull())
            {
                return (r.TASK_PARENT_UID != r.TASK_UID);
            }
            return true;
        }).ToDictionary<ProjectDataSet.TaskRow, Guid, PublishedTask>(r => r.TASK_UID, r => this.CreateTask(r));
    }
    this._tasks = new Lazy<Dictionary<Guid, PublishedTask>>(valueFactory);
}

Of course, we get the same empty result if we filter the tasks for one of the special conditions we found in the database (like [TASK_OUTLINE_LEVEL] = 0):
http://YourProjServer/PWA/_api/ProjectServer/Projects('d0ae5086-be7a-e411-9568-005056b45654')/Tasks?$filter=OutlineLevel eq 0

The project reporting data contains the project summary tasks as well, so we could invoke the ProjectData OData endpoint from the client side to query the required information. The problem with this approach is that it would require extra permissions on the reporting data, and one cannot limit this permission to the summary task of a specific project, to summary tasks in general, or even just to tasks. If you grant your users the Access Project Server Reporting Service global permission, they can query all of the reporting data. That is surely not our goal, but you can test it if you wish.

Once you have the ID of the project summary task (for example via the SummaryTaskId property), the task is available via a query like this one:

http://YourProjServer/PWA/_api/ProjectData/Tasks(ProjectId=guid'd0ae5086-be7a-e411-9568-005056b45654',TaskId=guid'FFAE5086-BE7A-E411-9568-005056B45654')

When using PSI, we can access the required information via the TASK_REM_WORK and TASK_PCT_WORK_COMP fields of ProjectDataSet.TaskRow, that is, via rows in the Task property (of type ProjectDataSet.TaskDataTable) of the ProjectDataSet. The first row in the record set contains the information about the project summary task.

We could create our own extensions for the client object model (wrapping just this piece of the PSI), as I illustrated earlier for the managed and for the ECMAScript object models, but that would require a lot of work, so I ignored this option for now. Instead, I created a simple .NET console application utilizing the PSI (see the most important part of the code below). Unfortunately, I have not found a method that returns only a specific task of a specific project, so I had to call the ReadProjectEntities method to read all of the tasks of the project.

_projectClient = new SvcProject.ProjectClient(ENDPOINT_PROJECT, pwaUrl + "/_vti_bin/PSI/ProjectServer.svc");
_projectClient.ClientCredentials.Windows.AllowedImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Impersonation;

Guid projId = Guid.Parse("d0ae5086-be7a-e411-9568-005056b45654");
int taskEntityId = 2;

var projEntitiesDS = _projectClient.ReadProjectEntities(projId, taskEntityId, SvcProject.DataStoreEnum.PublishedStore);
var tasksTable = projEntitiesDS.Task;

foreach (SvcProject.ProjectDataSet.TaskRow task in tasksTable.Rows)
{
    Console.WriteLine(string.Format("TASK_OUTLINE_NUM: {0}; TASK_PCT_WORK_COMP: {1}; TASK_REM_WORK: {2}", task.TASK_OUTLINE_NUM, task.TASK_PCT_WORK_COMP, task.TASK_REM_WORK));
}

I’ve captured the request and the response using Fiddler:

image

Then I extended my JavaScript code with methods that assemble the request in the same format, submit it to the server, and then parse the required fields out of the response.

First, I needed a helper method to format strings:

String.format = (function () {
    // The string containing the format items (e.g. "{0}")
    // will and always has to be the first argument.
    var result = arguments[0];

    // start with the second argument (i = 1)
    for (var i = 1; i < arguments.length; i++) {
        // "gm" = RegEx options for Global search (more than one instance)
        // and for Multiline search
        var regEx = new RegExp("\\{" + (i - 1) + "\\}", "gm");
        result = result.replace(regEx, arguments[i]);
    }

    return result;
});

In my Angular controller I defined this function to format dates:

$scope.formatDate = function (date) {
    var formattedDate = '';
    if ((typeof date != "undefined") && (date.year() > 1)) {
        formattedDate = String.format("{0}.{1}.{2}", date.year(), date.month() + 1, date.date());
    }

    return formattedDate;
}

Next, in the controller we get the ID of the project of the current PWS, then read the project properties that are available via the client object model, and finally the ones that are available only via PSI:

var promiseWebProps = ProjService.getWebProps($scope);
promiseWebProps.then(function (props) {
    $scope.projectId = props.projectId;

    // read the project properties that are available via the client object model
    var promiseProjProp = ProjService.getProjectProps($scope);
    promiseProjProp.then(function (props) {
        $scope.projStartDate = moment(props.projStartDate);
        $scope.projFinishDate = moment(props.projFinishDate);
        $scope.percentComp = props.percentComp;
    }, function (errorMsg) {
        console.log("Error: " + errorMsg);
    });

    // read the project properties that are available only via PSI
    var promiseProjPropEx = ProjService.getProjectPropsEx($scope);
    promiseProjPropEx.then(function (propsEx) {
        $scope.remainingWork = Math.round(propsEx.remainingWork / 600) / 100;
        $scope.percentWorkComp = propsEx.percentWorkComp;
    }, function (errorMsg) {
        console.log("Error: " + errorMsg);
    });

}, function (errorMsg) {
    console.log("Error: " + errorMsg);
});

As you can see, the value we receive in the remainingWork property has to be divided by 600 and then by 100 (that is, by 60,000 overall; the rounding in between keeps two decimal places) to get the value in hours.

In our custom ProjService service I’ve implemented the corresponding methods.

The project ID is stored in the property bag of the PWS in a property called MSPWAPROJUID (see this post about how to read property bags from the client object model):

this.getWebProps = function ($scope) {
    var deferred = $q.defer();

    var ctx = SP.ClientContext.get_current();

    var web = ctx.get_web();
    var props = web.get_allProperties();
    ctx.load(props);

    ctx.executeQueryAsync(
        function () {
            var allProps = props.get_fieldValues();

            deferred.resolve(
                {
                    projectId: allProps.MSPWAPROJUID
                });
        },
        function (sender, args) {
            deferred.reject('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace());
        }
    );

    return deferred.promise;
};

Having the project ID, reading project properties via the client object model should be straightforward as well:

this.getProjectProps = function ($scope) {
    var deferred = $q.defer();

    var ctx = SP.ClientContext.get_current();

    var projContext = PS.ProjectContext.get_current();

    projContext.set_isPageUrl(ctx.get_isPageUrl);
    var proj = projContext.get_projects().getById($scope.projectId);
    projContext.load(proj, "StartDate", "FinishDate", "PercentComplete");

    projContext.executeQueryAsync(
        function () {
            deferred.resolve({
                projStartDate: proj.get_startDate(),
                projFinishDate: proj.get_finishDate(),
                percentComp: proj.get_percentComplete()
            });
        },
        function (sender, args) {
            deferred.reject('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace());
        }
    );

    return deferred.promise;
};

Reading the ‘extra’ properties via PSI is a bit more complex. First, we assemble the request XML in the same format as we captured with Fiddler when using the console application mentioned above, and post it to the server. Next, we process the response (see the code of the helper method buildXMLFromString further below) and parse the necessary properties out of the project summary task (that is, the Task node having rowOrder = 0) using XPath queries.

this.getProjectPropsEx = function () {
    var deferred = $q.defer();

    // assuming your PWA is located at /PWA
    var psiUrl = String.format("{0}//{1}/PWA/_vti_bin/PSI/ProjectServer.svc", window.location.protocol, window.location.host);

    $http({
        method: 'POST',
        url: psiUrl,
        data: String.format('<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><ReadProjectEntities xmlns="http://schemas.microsoft.com/office/project/server/webservices/Project/"><projectUid>{0}</projectUid><ProjectEntityType>2</ProjectEntityType><dataStore>PublishedStore</dataStore></ReadProjectEntities></s:Body></s:Envelope>', $scope.projectId),
        headers: {
            "Content-Type": 'text/xml; charset=utf-8',
            "SOAPAction": "http://schemas.microsoft.com/office/project/server/webservices/Project/ReadProjectEntities"
        }
    }).success(function (data) {
        var dataAsXml = buildXMLFromString(data);
        dataAsXml.setProperty('SelectionLanguage', 'XPath');
        dataAsXml.setProperty('SelectionNamespaces', 'xmlns:pds="http://schemas.microsoft.com/office/project/server/webservices/ProjectDataSet/" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"');
        var projSumTaskNode = dataAsXml.selectSingleNode("//pds:Task[@msdata:rowOrder=0]");
        var remainingWork = projSumTaskNode.selectSingleNode("pds:TASK_REM_WORK").nodeTypedValue;
        var percentWorkComp = projSumTaskNode.selectSingleNode("pds:TASK_PCT_WORK_COMP").nodeTypedValue;
        deferred.resolve(
            {
                remainingWork: remainingWork,
                percentWorkComp: percentWorkComp
            });
    })
    .error(function (data, status) {
        deferred.reject('Request failed. ' + data);
    });

    return deferred.promise;
}

These are the helper methods I used for processing the response text as XML:

function createMSXMLDocumentObject() {
    if (typeof (ActiveXObject) != "undefined") {
        // http://blogs.msdn.com/b/xmlteam/archive/2006/10/23/using-the-right-version-of-msxml-in-internet-explorer.aspx
        var progIDs = [
                        "Msxml2.DOMDocument.6.0",
                        "Msxml2.DOMDocument.3.0",
                        "MSXML.DOMDocument"
        ];
        for (var i = 0; i < progIDs.length; i++) {
            try {
                return new ActiveXObject(progIDs[i]);
            } catch (e) { };
        }
    }

    return null;
}

function buildXMLFromString(text) {
    var xmlDoc;

    xmlDoc = createMSXMLDocumentObject();
    if (!xmlDoc) {
        alert("Cannot create XMLDocument object");
        return null;
    }

    xmlDoc.loadXML(text);

    var errorMsg = null;
    if (xmlDoc.parseError && xmlDoc.parseError.errorCode != 0) {
        errorMsg = "XML Parsing Error: " + xmlDoc.parseError.reason
                    + " at line " + xmlDoc.parseError.line
                    + " at position " + xmlDoc.parseError.linepos;
    }
    else {
        if (xmlDoc.documentElement) {
            if (xmlDoc.documentElement.nodeName == "parsererror") {
                errorMsg = xmlDoc.documentElement.childNodes[0].nodeValue;
            }
        }
        else {
            errorMsg = "XML Parsing Error!";
        }
    }

    if (errorMsg) {
        alert(errorMsg);
        return null;
    }

    return xmlDoc;
}

Having an HTML template like this one:

<div><span>% complete:</span><span>{{percentComp}}%</span></div>
<div><span>% work complete:</span><span>{{percentWorkComp}}%</span></div>
<div><span>Remaining work:</span><span>{{remainingWork}} Hours</span></div>
<div><span>Project start:</span><span>{{formatDate(projStartDate)}}</span></div>
<div><span>Project finish:</span><span>{{formatDate(projFinishDate)}}</span></div>

the result should be displayed similar to this one:

image

A drawback of this approach (not to mention the fact that it is pretty hacky) is that due to the ReadProjectEntities method, all of the fields of all of the project tasks are downloaded to the client, although we need only a few fields of a single task, the project summary task. So it would make sense to implement some kind of caching on the client side, but that is out of the scope of this post. Still, as long as Microsoft does not expose all of the project fields in the client object model, I have not found any better solution that would require a relatively small effort.


How to avoid ‘The request uses too many resources’ when using the client object model via automated batching of commands


One of the reasons I prefer the client object model of SharePoint to the REST interface is its capability of batching requests.

For example, you can add multiple users to a SharePoint group using the code below, and it is sent as a single request to the server:

using (var clientContext = new ClientContext(url))
{
    var web = clientContext.Web;
    var grp = web.SiteGroups.GetByName("YourGroup");

    var usersToAdd = new List<string>() { @"i:0#.w|domain\user1", @"i:0#.w|domain\user2" };

    foreach (var loginName in usersToAdd)
    {
        var user = web.EnsureUser(loginName);
        grp.Users.AddUser(user);
    }

    clientContext.ExecuteQuery();
}

However, as the number of users you would like to add increases, you might run into issues, as the operational requests in your batch exceed the 2 MB limit.

How could we solve the problem relatively painlessly, avoiding the error while still keeping our code readable?

The good news is that it is easy to achieve using an extension method, generics, and the Action delegate. We can extend the ClientContext with an ExecuteQueryBatch method and pass in the parameter values to be processed as an IEnumerable, the action to be performed, and the number of items that should be processed in a single batch. The method splits the parameter values into batches, calling the ExecuteQuery method on the ClientContext for each batch.

If the action you would like to perform on the client objects has a single parameter (as in our case above, where the login name is a single parameter of type String), the ExecuteQueryBatch method can be defined as:

public static class Extensions
{
    public static void ExecuteQueryBatch<T>(this ClientContext clientContext, IEnumerable<T> itemsToProcess, Action<T> action, int batchSize)
    {
        var counter = 1;

        foreach (var itemToProcess in itemsToProcess)
        {
            action(itemToProcess);

            counter++;
            if (counter > batchSize)
            {
                clientContext.ExecuteQuery();
                counter = 1;
            }
        }

        if (counter > 1)
        {
            clientContext.ExecuteQuery();
        }
    }
}

Having the ExecuteQueryBatch method in this form, the original code can be modified like this:

var batchSize = 20;

using (var clientContext = new ClientContext(url))
{
    var web = clientContext.Web;
    var grp = web.SiteGroups.GetByName("YourGroup");

    var usersToAdd = new List<string>() { @"i:0#.w|domain\user1", @"i:0#.w|domain\user2" /* and a lot of other logins */ };

    clientContext.ExecuteQueryBatch<string>(usersToAdd,
        new Action<string>(loginName =>
        {
            var user = web.EnsureUser(loginName);
            grp.Users.AddUser(user);
        }),
        batchSize);

    clientContext.ExecuteQuery();
}

The batch size you can use depends on the complexity of the action: the more complex the action, the smaller the batch should be. The ideal value is best found experimentally.

Actions with multiple parameters require additional overloads of the ExecuteQueryBatch extension method.

In my next post I’ll illustrate how to utilize this extension method in a practical example.


How to restrict the available users in a ‘Person or Group’ field to Project Server resources?


Assume you have a task list on your Project Web Access (PWA) site or on one of the Project Web Sites (PWS) of your Project Server, and you would like to restrict the users available in the Assigned To field (a field of type ‘Person or Group’) to users who are specified as Enterprise Resources in Project Server. The server is running in the “classical” Project Server permission mode, not in the new SharePoint Server permission mode, and there is no synchronization configured between Active Directory groups and Project Server resources.

You can limit a ‘Person or Group’ field to a specific SharePoint group, but there is no built-in solution to sync enterprise resources into a SharePoint group. In this post I show you how to achieve that via PowerShell and the managed client object models of Project Server and SharePoint.

Note: You could get the login names of users assigned to the enterprise resources via REST as well (http://YourProjectServer/PWA/_api/ProjectServer/EnterpriseResources?$expand=User&$select=User/LoginName), but in my sample I still use the client object model of Project Server.
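
Just for completeness, here is a minimal sketch of the REST variant from PowerShell. It assumes Windows authentication, and the exact shape of the response depends on the OData configuration of your farm:

# sketch: read the resource login names via REST, assuming Windows authentication
$url = "http://YourProjectServer/PWA/_api/ProjectServer/EnterpriseResources?`$expand=User&`$select=User/LoginName"
$response = Invoke-RestMethod -Uri $url -UseDefaultCredentials -Headers @{ "Accept" = "application/json;odata=verbose" }
$response.d.results | % { $_.User.LoginName }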

My goal was to create a PowerShell solution, because it makes it easy to change the code on the server without any kind of compiler. Still, I first created a C# solution, because the language elements of C# (like extension methods, generics and LINQ) help us to write compact, effective and readable code. For example, since PowerShell does not support LINQ expressions, you cannot simply restrict the elements and their properties returned by a client object model request, as I illustrated in my former posts here, here and here. Having the working C# source code, I included it in my PowerShell script as a literal string and built the .NET assembly at runtime, just as I illustrated in this post. In the C# code I utilized an extension method to support automated batching of the client object model requests. More about this solution can be read here.

The logic of the synchronization is simple: we read the list of all non-generic enterprise resources and store the login names (if the user exists) as strings in a generic list. Then we read the members of the SharePoint group we are synchronizing and store their login names as strings in another generic list. Finally, we add the missing users to the SharePoint group and remove the extra users from the group.

The final code is included here:

  1. $pwaUrl = "http://YourProjectServer/PWA&quot;;
  2. $grpName = "AllResourcesGroup";
  3.  
  4. $referencedAssemblies = (
  5.     "Microsoft.SharePoint.Client, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c",
  6.     "Microsoft.SharePoint.Client.Runtime, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c",
  7.     "Microsoft.ProjectServer.Client, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c",
  8.     "System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089")
  9.  
  10. $sourceCode = @"
  11. using System;
  12. using System.Linq;
  13. using System.Collections.Generic;
  14. using Microsoft.SharePoint.Client;
  15. using Microsoft.ProjectServer.Client;
  16.  
  17. public static class Extensions
  18. {
  19.     // batching to avoid
  20.     // Microsoft.SharePoint.Client.ServerException: The request uses too many resources
  21.     // https://msdn.microsoft.com/en-us/library/office/jj163082.aspx
  22.     public static void ExecuteQueryBatch<T>(this ClientContext clientContext, IEnumerable<T> itemsToProcess, Action<T> action, int batchSize)
  23.     {
  24.         var counter = 1;
  25.  
  26.         foreach (var itemToProcess in itemsToProcess)
  27.         {
  28.             action(itemToProcess);
  29.             counter++;
  30.  
  31.             if (counter > batchSize)
  32.             {
  33.                 clientContext.ExecuteQuery();
  34.                 counter = 1;
  35.             }
  36.         }
  37.  
  38.         if (counter > 1)
  39.         {
  40.             clientContext.ExecuteQuery();
  41.         }
  42.     }
  43. }
  44.  
  45. public static class Helper
  46. {
  47.     public static void SyncGroupMembers(string pwaUrl, string grpName)
  48.     {
  49.         List<string> resLogins = new List<string>();
  50.         List<string> grpLogins = new List<string>();
  51.         var batchSize = 20;
  52.  
  53.         using (var projectContext = new ProjectContext(pwaUrl))
  54.         {
  55.             var resources = projectContext.EnterpriseResources;
  56.             projectContext.Load(resources, rs => rs.Where(r => !r.IsGeneric).Include(r => r.User.LoginName));
  57.  
  58.             projectContext.ExecuteQuery();
  59.  
  60.             resLogins.AddRange(resources.ToList().Where(r => r.User.ServerObjectIsNull == false).ToList().Select(r => r.User.LoginName.ToLower()));               
  61.         }
  62.         using (var clientContext = new ClientContext(pwaUrl))
  63.         {
  64.             var web = clientContext.Web;
  65.  
  66.             var grp = web.SiteGroups.GetByName(grpName);
  67.             clientContext.Load(grp, g => g.Users.Include(u => u.LoginName));
  68.  
  69.             clientContext.ExecuteQuery();
  70.  
  71.             grpLogins.AddRange(grp.Users.ToList().ToList().Select(u => u.LoginName.ToLower()));
  72.  
  73.             var usersToAdd = resLogins.Where(l => !grpLogins.Contains(l));
  74.             clientContext.ExecuteQueryBatch<string>(usersToAdd,
  75.                 new Action<string>(loginName =>
  76.                 {
  77.                     var user = web.EnsureUser(loginName);
  78.                     grp.Users.AddUser(user);
  79.                 }),
  80.                 batchSize);
  81.  
  82.             var usersToRemove = grpLogins.Where(l => !resLogins.Contains(l));
  83.             clientContext.ExecuteQueryBatch<string>(usersToRemove,
  84.                 new Action<string>(loginName =>
  85.                 {
  86.                     grp.Users.RemoveByLoginName(loginName);
  87.                 }),
  88.                 batchSize);
  89.         }
  90.     }
  91. }
  92.  
  93. "@
  94. Add-Type -ReferencedAssemblies $referencedAssemblies -TypeDefinition $sourceCode -Language CSharp;
  95.  
  96. [Helper]::SyncGroupMembers($pwaUrl, $grpName)


Creating a PowerShell-based Monitoring and Alerting System for Project Server


A few months ago I published a post about how to find the jobs in the Project Server queue programmatically. In the current post I will show you how you can use PowerShell to track the number of jobs in the queue, and how to send an e-mail alert if the count has been higher than a predefined limit for a longer period. Although the example in this post is Project Server specific, you can use the same technique to create other types of alerts as well.

Since the PowerShell script will be run by the Windows Task Scheduler (for example, on a 5-minute schedule), an important question was how to solve the communication between the runs. For example, how can the current session find out since when the counter has been higher than the limit? And if the limit is reached and we have already sent a mail, we do not want to send further mails on every subsequent run while the counter remains above the limit; but how do we inform the forthcoming sessions that a mail has already been sent? There are many possible solutions for this problem: we could use a database or a file (either XML or any custom format) to persist the information between the sessions. I’ve chosen an even simpler approach: I create empty files (QueueLimitReached.txt and MailSent.txt) and check their existence and / or creation date to determine when the limit was reached and whether the alert mail has already been sent. If the counter goes below the limit again, I simply delete these semaphore files.

With this background, the script itself should already be straightforward.

Add-PSSnapin "Microsoft.SharePoint.PowerShell"

$folderPath = "D:\ScheduledTasks\"
$limitReachedFileName = "QueueLimitReached.txt"
$mailSentFileName = "MailSent.txt"
$ageOfFileLimit = 15 # in minutes
$counterValueLimit = 50

$emailTo = "admins@company.com"
$emailCc = "helpdesk@company.com;projmans@company.com"
$emailSubject = "Project Server Queue Alert"
$emailBody = @"
Hi,

the count of the jobs in the Project Server Queue is very high. Please, fix the issue!

Regards,
The PowerShell Monitor
"@

$limitReachedFilePath = $folderPath + $limitReachedFileName
$mailSentFilePath = $folderPath + $mailSentFileName

function HasAlertState()
{
  $counter = Get-Counter -Counter "\ProjectServer:QueueGeneral(_Total)\Current Unprocessed Jobs"
  $counterValue = $counter.CounterSamples[0].CookedValue
  return ($counterValue -gt $counterValueLimit)
}

function SendAlert()
{
  $globalAdmin = New-Object Microsoft.SharePoint.Administration.SPGlobalAdmin

  $smtpMail = New-Object Net.Mail.MailMessage
  $smtpMail.From = $globalAdmin.MailFromAddress
  $smtpMail.Subject = $emailSubject
  $smtpMail.Body = $emailBody
  $emailTo.Split(";") | % { $mailAddr = New-Object Net.Mail.MailAddress($_); $smtpMail.To.Add($mailAddr) }
  $emailCc.Split(";") | % { $mailAddr = New-Object Net.Mail.MailAddress($_); $smtpMail.Cc.Add($mailAddr) }
  $smtpMail.ReplyTo = New-Object Net.Mail.MailAddress($globalAdmin.MailReplyToAddress)
  $smtpMail.BodyEncoding = [System.Text.Encoding]::GetEncoding($globalAdmin.MailCodePage)
  $smtpMail.SubjectEncoding = [System.Text.Encoding]::GetEncoding($globalAdmin.MailCodePage)

  $smtpClient = New-Object Net.Mail.SmtpClient($globalAdmin.OutboundSmtpServer)
  $smtpClient.Send($smtpMail)
}

$alertCondition = HasAlertState

If ($alertCondition)
{
  If (Test-Path $limitReachedFilePath)
  {
    $creationTime = (Get-ChildItem $limitReachedFilePath).CreationTime
    # use TotalMinutes instead of Minutes, otherwise the age would "wrap around" after an hour
    $ageOfFile = ([DateTime]::Now - $creationTime).TotalMinutes
    Write-Host $ageOfFile
    If ($ageOfFile -gt $ageOfFileLimit)
    {
      Write-Host Limit reached
      If (-not (Test-Path $mailSentFilePath))
      {
        Write-Host Mail has not yet been sent. Send it now.
        SendAlert
        # suppress return value via casting it to null
        [void] (New-Item -name $mailSentFileName -path $folderPath -itemType File)
      }
    }
  }
  # create a new file, if no former one exists
  else
  {
    If (-not (Test-Path $limitReachedFilePath))
    {
      # suppress return value via casting it to null
      [void] (New-Item -name $limitReachedFileName -path $folderPath -itemType File)
    }
  }
}
# delete the former files, if they exist
Else
{
  If (Test-Path $limitReachedFilePath)
  {
    Remove-Item $limitReachedFilePath
  }
  If (Test-Path $mailSentFilePath)
  {
    Remove-Item $mailSentFilePath
  }
}

In the sample we check the value of the Current Unprocessed Jobs counter of Project Server. You can easily change the job count limit (50) and the time period (15 minutes) in the code, or customize the addressees, subject and body of the mail. If you would like to create other types of alerts, you simply have to implement your own version of the HasAlertState method.
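
For example, a variant of HasAlertState that watches the free disk space instead of the queue could look like the sketch below; the counter path is a standard Windows counter, and the 10 GB threshold is just an arbitrary example value:

function HasAlertState()
{
  # alert if drive C: has less than 10 GB (10240 MB) of free space; example threshold
  $counter = Get-Counter -Counter "\LogicalDisk(C:)\Free Megabytes"
  $counterValue = $counter.CounterSamples[0].CookedValue
  return ($counterValue -lt 10240)
}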


Create Project Server Enterprise Custom Fields via PSI from PowerShell


Last year I already wrote about how one can manage the Project Server Enterprise Custom Fields via the managed client object model. We could port the code samples of that post from C# to PowerShell, but because of the limitations of the managed client object model I use the PSI interface instead in this case. What are those limitations? Not all of the properties available in PSI are exposed by the Client OM; see for example the MD_PROP_SUMM_GRAPHICAL_INDICATOR field, which we can use to set the rules of graphical indicators defined for the fields. I’ll show you an example of getting and setting the indicator rules in a later post; in the current one I only show you the technique we can use to create the Enterprise Custom Fields via PSI.

One can find an existing description with code samples in Step 3 and 4 of this post that achieves the same goal; however, I don’t like that approach for several reasons, for example, because we have to generate the proxy assembly based on the WSDL in the code itself. I find the following code much simpler:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Project.Server.Library")

$pwaUrl = "http://YourProjectServer/pwa&quot;

# create shortcuts (http://stackoverflow.com/a/1049010)
$PSDataType = [Microsoft.Office.Project.Server.Library.PSDataType]
$Entities = [Microsoft.Office.Project.Server.Library.EntityCollection]::Entities

$svcPSProxy = New-WebServiceProxy -Namespace PSIProxy -Uri ($pwaUrl + "/_vti_bin/psi/CustomFields.asmx?wsdl") -UseDefaultCredential

$customFieldDataSet = New-Object PSIProxy.CustomFieldDataSet 

$customFieldRow = $customFieldDataSet.CustomFields.NewCustomFieldsRow()   
$customFieldRow.MD_PROP_UID = [Guid]::NewGuid()
$customFieldRow.MD_PROP_NAME = "Custom Project Field"
$customFieldRow.MD_PROP_TYPE_ENUM = $PSDataType::STRING
$customFieldRow.MD_ENT_TYPE_UID = $Entities.ProjectEntity.UniqueId
$customFieldRow.MD_PROP_IS_REQUIRED = $false
$customFieldRow.MD_PROP_IS_LEAF_NODE_ONLY = $false
$customFieldRow.MD_PROP_DESCRIPTION = "Test Field Desc."
$customFieldRow.SetMD_LOOKUP_TABLE_UIDNull()
$customFieldRow.SetMD_PROP_DEFAULT_VALUENull()
$customFieldDataSet.CustomFields.AddCustomFieldsRow($customFieldRow)

$svcPSProxy.CreateCustomFields($customFieldDataSet, $false, $true)
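
To verify the result, you can read the custom fields back via the same proxy and look for the new one. A small sketch; I assume here that the ReadCustomFields method returns all fields when called with an empty filter:

# read all custom fields (empty filter) and check that the new field exists
$allFields = $svcPSProxy.ReadCustomFields([String]::Empty, $false)
$allFields.CustomFields | ? { $_.MD_PROP_NAME -eq "Custom Project Field" } | % { $_.MD_PROP_UID }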

If you have casting issues when using the Namespace parameter of the New-WebServiceProxy cmdlet, you should read this post.


Managing Project Server Views via PSI from PowerShell


If you would like to manage Project Server views from code, you will find very few helpful resources (if any) on the web. The object models simply do not include classes related to views (neither on the server side nor on the client side). Although the PSI contains a View service, it is intended for internal use. Of course, that intention cannot stop us from using the service at our own risk. Below I give you some useful code samples to illustrate the usage of the View service.

First of all, we create the proxy assembly, load the required Microsoft.Office.Project.Server.Library assembly in the process as well, and define some shortcuts to make it easier to reference enum and property values later on.

$pwaUrl = "http://YourProjectServer/pwa&quot;
$svcPSProxy = New-WebServiceProxy -Namespace PSIProxy -Uri ($pwaUrl + "/_vti_bin/PSI/View.asmx?wsdl") -UseDefaultCredential
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Project.Server.Library")
$ViewConstants = [Microsoft.Office.Project.Server.Library.ViewConstants]
$ViewType = [Microsoft.Office.Project.Server.Library.ViewConstants+ViewType]

If you know the unique ID of your view, it is easy to display all of the fields and security categories associated with the view:

$viewId = [Guid]"63d3499e-df27-401c-af58-ebb9607beae8"
$view = $svcPSProxy.ReadView($viewId)
$view.ViewReportFields | % { $_.CONV_STRING }
$view.SecurityCategoryObjects | % { $_.WSEC_CAT_NAME }

If the view ID is unknown, you can get it based on the name and type of the view:

$viewName = "Your Report"
$viewType = $ViewType::PORTFOLIO

$views = $svcPSProxy.ReadViewSummaries()
$viewId = ($views.ViewReports | ? { $_.WVIEW_NAME -eq $viewName -and $_.WVIEW_TYPE -eq $viewType }).WVIEW_UID

You can list all of the views:

$views = $svcPSProxy.ReadViewSummaries()
$views.ViewReports | % {
  Write-Host $_.WVIEW_NAME ($_.WVIEW_TYPE -as $ViewType)
}

To change the order of the first two fields in the view:

$view = $svcPSProxy.ReadView($viewId)
$view.ViewReportFields[0].WVIEW_FIELD_ORDER = 1
$view.ViewReportFields[1].WVIEW_FIELD_ORDER = 0
$svcPSProxy.UpdateView($view)

To change the order of two arbitrary fields (based on their name) in the view:

$fieldName1 = "Finish"
$fieldName2 = "Owner"
$view = $svcPSProxy.ReadView($viewId)
$field1 = $view.ViewReportFields | ? { $_.CONV_STRING -eq $fieldName1 }
$field2 = $view.ViewReportFields | ? { $_.CONV_STRING -eq $fieldName2 }
$field1Order = $field1.WVIEW_FIELD_ORDER
$field2Order = $field2.WVIEW_FIELD_ORDER
$field1.WVIEW_FIELD_ORDER = $field2Order
$field2.WVIEW_FIELD_ORDER = $field1Order
$svcPSProxy.UpdateView($view)

To remove a field from a view:

$fieldToRemoveName = "Ende"
$view = $svcPSProxy.ReadView($viewId)
$fieldToRemove = $view.ViewReportFields | ? { $_.CONV_STRING -eq $fieldToRemoveName }
$fieldToRemove.Delete()
$svcPSProxy.UpdateView($view)

To delete the view itself:

[Void]$svcPSProxy.DeleteViewReports($viewId)

To create a new view using an existing view as a template:

$newViewName = "New View"
[Void]$svcPSProxy.CopyViewReports($viewId, $newViewName)
$newView = $svcPSProxy.ReadViewSummaries().ViewReports | ? { $_.WVIEW_NAME -eq $newViewName -and $_.WVIEW_TYPE -eq $viewType }

To list all of the fields available in a given type (in this case, for tasks):

$svcPSProxy.ReadProjectFields($ViewConstants::ViewTABLE_TASK_UID ).ViewFields | % { $_.CONV_STRING }

To append a new field at the end of the fields in the view:

$fieldToAppendName = "% Work Complete"

$fieldToAppend = $svcPSProxy.ReadProjectFields($ViewConstants::ViewTABLE_TASK_UID ).ViewFields | ? { $_.CONV_STRING -eq $fieldToAppendName }
$view = $svcPSProxy.ReadView($viewId)
$maxFieldOrder = ($view.ViewReportFields | % { $_.WVIEW_FIELD_ORDER } | measure -Maximum).Maximum

$newField = $view.ViewReportFields.NewViewReportFieldsRow()

$newField.WFIELD_UID = $fieldToAppend.WFIELD_UID
$newField.CONV_STRING = $fieldToAppend.CONV_STRING
$newField.WFIELD_TEXTCONV_TYPE = $fieldToAppend.WFIELD_TEXTCONV_TYPE
$newField.WTABLE_UID = $fieldToAppend.WTABLE_UID
$newField.WFIELD_IS_CUSTOM_FIELD = $fieldToAppend.WFIELD_IS_CUSTOM_FIELD
$newField.WFIELD_NAME_SQL = $fieldToAppend.WFIELD_NAME_SQL
$newField.WFIELD_IS_MULTI_VALUE = $fieldToAppend.WFIELD_IS_MULTI_VALUE
$newField.WFIELD_LOOKUP_TABLE_UID = $fieldToAppend.WFIELD_LOOKUP_TABLE_UID
$newField.WVIEW_UID = $view.ViewReports.WVIEW_UID
$newField.WVIEW_FIELD_ORDER = $maxFieldOrder + 1
$newField.WVIEW_FIELD_WIDTH = 100
$newField.WVIEW_FIELD_AUTOSIZE = 1
$newField.WVIEW_FIELD_CUSTOM_LABEL = [System.DBNull]::Value
$newField.WVIEW_FIELD_IS_READ_ONLY = 0

$view.ViewReportFields.AddViewReportFieldsRow($newField)
$svcPSProxy.UpdateView($view)

To inject a new field in the view before another field having a specified name:

$fieldInjectBeforeName = "% Complete"
$fieldToInjectName = "% Work Complete"

$fieldToInject = $svcPSProxy.ReadProjectFields($ViewConstants::ViewTABLE_TASK_UID ).ViewFields | ? { $_.CONV_STRING -eq $fieldToInjectName }

$view = $svcPSProxy.ReadView($viewId)

$fieldInjectBeforeOrder = ($view.ViewReportFields | ? { $_.CONV_STRING -eq $fieldInjectBeforeName }).WVIEW_FIELD_ORDER

$view.ViewReportFields | ? { $_.WVIEW_FIELD_ORDER -ge $fieldInjectBeforeOrder } | % { $_.WVIEW_FIELD_ORDER++ }

$newField = $view.ViewReportFields.NewViewReportFieldsRow()

$newField.WFIELD_UID = $fieldToInject.WFIELD_UID
$newField.CONV_STRING = $fieldToInject.CONV_STRING
$newField.WFIELD_TEXTCONV_TYPE = $fieldToInject.WFIELD_TEXTCONV_TYPE
$newField.WTABLE_UID = $fieldToInject.WTABLE_UID
$newField.WFIELD_IS_CUSTOM_FIELD = $fieldToInject.WFIELD_IS_CUSTOM_FIELD
$newField.WFIELD_NAME_SQL = $fieldToInject.WFIELD_NAME_SQL
$newField.WFIELD_IS_MULTI_VALUE = $fieldToInject.WFIELD_IS_MULTI_VALUE
$newField.WFIELD_LOOKUP_TABLE_UID = $fieldToInject.WFIELD_LOOKUP_TABLE_UID
$newField.WVIEW_UID = $view.ViewReports.WVIEW_UID
$newField.WVIEW_FIELD_ORDER = $fieldInjectBeforeOrder
$newField.WVIEW_FIELD_WIDTH = 100
$newField.WVIEW_FIELD_AUTOSIZE = 1
$newField.WVIEW_FIELD_CUSTOM_LABEL = [System.DBNull]::Value
$newField.WVIEW_FIELD_IS_READ_ONLY = 0

$view.ViewReportFields.AddViewReportFieldsRow($newField)
$svcPSProxy.UpdateView($view)

The last code sample shows how to create a new Gantt-view from scratch, appending a single field and a single security category to it:

$viewRepDS = New-Object PSIProxy.PWAViewReportsDataSet
$newView = $viewRepDS.ViewReports.NewViewReportsRow()
$newView.WVIEW_UID = [Guid]::NewGuid()
$newView.WVIEW_NAME = "New Report 2"
$newView.WVIEW_DESCRIPTION = "Test report description"

$fieldToAppendName = "% Arbeit abgeschlossen"

$fieldToAppend = $svcPSProxy.ReadProjectFields($ViewConstants::ViewTABLE_TASK_UID ).ViewFields | ? { $_.CONV_STRING -eq $fieldToAppendName }

$newField = $viewRepDS.ViewReportFields.NewViewReportFieldsRow()

$newField.WFIELD_UID = $fieldToAppend.WFIELD_UID
$newField.CONV_STRING = $fieldToAppend.CONV_STRING
$newField.WFIELD_TEXTCONV_TYPE = $fieldToAppend.WFIELD_TEXTCONV_TYPE
$newField.WFIELD_IS_CUSTOM_FIELD = $fieldToAppend.WFIELD_IS_CUSTOM_FIELD
$newField.WFIELD_NAME_SQL = $fieldToAppend.WFIELD_NAME_SQL
$newField.WFIELD_IS_MULTI_VALUE = $fieldToAppend.WFIELD_IS_MULTI_VALUE
$newField.WFIELD_LOOKUP_TABLE_UID = $fieldToAppend.WFIELD_LOOKUP_TABLE_UID
$newField.WVIEW_UID = $newView.WVIEW_UID
$newField.WVIEW_FIELD_ORDER = 0
$newField.WVIEW_FIELD_WIDTH = 100
$newField.WVIEW_FIELD_AUTOSIZE = 1
$newField.WVIEW_FIELD_CUSTOM_LABEL = [System.DBNull]::Value
$newField.WVIEW_FIELD_IS_READ_ONLY = 0
$viewRepDS.ViewReportFields.AddViewReportFieldsRow($newField)

$newSecCat = $viewRepDS.SecurityCategoryObjects.NewSecurityCategoryObjectsRow()
$newSecCat.WSEC_CAT_UID = [Microsoft.Office.Project.Server.Library.PSSecurityCategory]::MyProjects
$newSecCat.WSEC_OBJ_TYPE_UID = [Microsoft.Office.Project.Server.Library.PSSecurityObjectType]::View
$newSecCat.WSEC_OBJ_UID = $newView.WVIEW_UID
$viewRepDS.SecurityCategoryObjects.AddSecurityCategoryObjectsRow($newSecCat)

$newView.WVIEW_TYPE = $ViewType::PORTFOLIO
$newView.WVIEW_DISPLAY_TYPE = $ViewConstants::ViewDISPLAYTYPE_GANTT
$newView.WGANTT_SCHEME_UID =  $ViewConstants::GanttSchemeUidProjectCenter
$newView.WVIEW_SPLITTER_POS = 250
#  Group by (see [pub].[MSP_WEB_GROUP_SCHEMES] table in Project DB for possible values)
$newView.WGROUP_SCHEME_UID = [Guid]::Empty

$viewRepDS.ViewReports.AddViewReportsRow($newView)
$svcPSProxy.UpdateView($viewRepDS)


Tasks regarding to MySites Migration and Automating them via PowerShell


Recently we performed a domain migration for a customer, where we had to migrate the MySites of the users as well. In this blog post I share the relevant PowerShell scripts we used to support the migration. In our case it was a SharePoint 2010 farm; however, for SharePoint 2013 you should have the same tasks, so hopefully you find the scripts useful.

The user naming convention has been changed during the migration: for example, the user John Doe had a login name like jdoe in the source domain (let’s call it simply domain), and he got a new login name john.doe in the target domain (let’s call it newDomain).

As you know, each MySite is a separate site collection under a site collection root (like http://mysites.company.com/personal), and the last part of the site collection URL is built from the login name (for example, it was originally http://mysites.company.com/personal/jdoe). Of course, the customer wanted the MySite URLs to reflect the changes in the login name naming convention (it should be changed to http://mysites.company.com/personal/john.doe).

First, we had to migrate the SharePoint user and its permissions using the Move-SPUser cmdlet:

$sourceURL = "http://mysites.company.com/personal/jdoe&quot;
$web = Get-SPWeb $sourceURL
$user = $web.SiteUsers["domain\jdoe"]
Move-SPUser -Identity $user -NewAlias "newDomain\john.doe" -IgnoreSID

We cannot simply change the URL of the site collection: we have to back it up and restore it using the new URL, as described in this post and illustrated here:

$sourceURL = "http://mysites.company.com/personal/jdoe&quot;
$targetURL = "http://mysites.company.com/personal/john.doe&quot;

# Location for the backup file
$backupPath = "E:\data\mysite.bak"

Try
{
   # Set the Error Action
   $ErrorActionPreference = "Stop"

  Write-Host "Backing up the Source Site Collection…"-ForegroundColor DarkGreen
  Backup-SPSite $sourceURL -Path $backupPath -force
  Write-Host "Backup Completed!`n"

  # Delete source Site Collection
  Write-Host "Deleting the Source Site Collection…"
  Remove-SPSite -Identity $sourceURL -Confirm:$false
  Write-Host "Source Site Deleted!`n"

  # Restore Site Collection to new URL
  Write-Host "Restoring to Target Site Collection…"
  Restore-SPSite $targetURL -Path $backupPath -Confirm:$false
  Write-Host "Site Restored to Target!`n"

  # Remove backup files
  Remove-Item $backupPath
}
Catch
{
  Write-Host "Operation Failed. Find the Error Message below:" -ForegroundColor Red
  Write-Host $_.Exception.Message -ForegroundColor Red
}
Finally
{
   # Reset the Error Action to Default
   $ErrorActionPreference = "Continue"
}
 
Write-host "Process Completed!"

Of course, we have to change the MySite URL in the user profile properties as well, as described here. We used the following script:

$waUrl = "http://mysites.company.com&quot;
$wa = Get-SPWebApplication -Identity $waUrl

# Create Service Context for User Profile Manager
$context = Get-SPServiceContext $wa.Sites[0]

# Get User Profile Manager instance
$upm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($context)

# Get the user profile for owner of the personal site
$up = $upm.GetUserProfile("newDomain\john.doe")
$up["PersonalSpace"].Value = "/personal/john.doe"
$up.Commit()

Each user is by default the primary site collection administrator of his own MySite. In my former posts I already discussed how we can change the primary site collection administrator with or without elevated permissions. See those posts for reference on changing the account to the one from the new domain.

For example, the simplest version:

$targetURL = "http://mysites.company.com/personal/john.doe&quot;
$siteAdmin = New-Object Microsoft.SharePoint.Administration.SPSiteAdministration($targetURL )
$siteAdmin.OwnerLoginName = "newDomain\john.doe"


PowerShell Scripts around the SID


If you have ever migrated SharePoint users, you should be familiar with either the Move-SPUser cmdlet or its predecessor, the migrateuser stsadm operation:

$sourceURL = "http://mysites.company.com&quot;
$web = Get-SPWeb $sourceURL
$user = $web.SiteUsers["domain\jdoe"]
Move-SPUser -Identity $user -NewAlias "newDomain\john.doe" -IgnoreSID

or

stsadm -o migrateuser -oldlogin "domain\jdoe" -newlogin "newDomain\john.doe" -ignoresidhistory

As you see, both methods rely on the SID (or on ignoring it), but what is this SID, and how can we read its value for our SharePoint or Active Directory users?

Each user in the Active Directory (AD) has a security identifier (SID) that is a unique, immutable identifier, allowing the user to be renamed without affecting its other properties.

Reading the SID of a SharePoint user from PowerShell is as simple as:

$web = Get-SPWeb http://YourSharePoint.com
$user = $web.AllUsers["domain\LoginName"]
$user.Sid

To be able to work with Active Directory from PowerShell, you of course need the Active Directory cmdlets. If your machine has no role in AD, you should install this PowerShell module using the steps described in this post.

Once you have this module installed, and you imported it via “Import-Module ActiveDirectory”, you can read the SID of a user in AD:

$user = Get-ADUser UserLoginNameWithoutDomain -Server YourDomainController.company.com
$user.SID.Value

Here UserLoginNameWithoutDomain is the login name of the user without the domain name (like jdoe in case of domain\jdoe), and YourDomainController.company.com is the DC responsible for the domain of your user.

If you need the SID history from AD as well, it’s a bit more complicated. In that case I suggest you read this writing as well.

$ADQuery = Get-ADObject -Server YourDomainController.company.com `
        -LDAPFilter "(samAccountName=UserLoginNameWithoutDomain)" `
        -Property objectClass, samAccountName, DisplayName, `
        objectSid, sIDHistory, distinguishedname, description, whenCreated |
        Select-Object * -ExpandProperty sIDHistory
$ADQuery | % { 
  Write-Host $_.samAccountName
  Write-Host Domain $_.AccountDomainSid.Value 
  Write-Host SID History
  $_.sIDHistory | % {
    $_.Value     
  }
  Write-Host "--------------------"
}
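
Putting the snippets above together, you can quickly compare the SID stored by SharePoint with the current SID of the account in AD, for example to check whether a given account has already been migrated. A sketch, built only from the calls shown above:

$web = Get-SPWeb http://YourSharePoint.com
$spUser = $web.AllUsers["newDomain\john.doe"]
$adUser = Get-ADUser john.doe -Server YourDomainController.company.com
# the SharePoint user object still carries the SID that was stored at the time of the last migration
if ($spUser.Sid -eq $adUser.SID.Value) {
  Write-Host "The SIDs match"
}
else {
  Write-Host "The SIDs differ; Move-SPUser has probably not been run for this account yet"
}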

