Today I had to investigate a problem where the type of a field in a SharePoint list had changed from User to Lookup. This modification cannot be made through the SharePoint web interface.
A User field is internally a lookup field into the User Information List of the site collection, as the SchemaXml of the field confirms. That means this change did not modify or destroy any data, but we still do not know what happened to our list.
We were able to solve the problem by modifying the FieldTypeKind using PnP PowerShell. This member of the Microsoft.SharePoint.Client.Field class stores a value from the FieldType enum.
So, to solve this problem, we just needed four lines of code:
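A sketch of what those lines could look like (the list and field names here are placeholders for your own; this assumes an open PnP connection to the site):

```powershell
# Placeholder list/field names; the affected field had flipped to Lookup
$field = Get-PnPField -List "Projects" -Identity "ProjectManager"
# Set the type back to User (a value from the FieldType enum)
$field.FieldTypeKind = [Microsoft.SharePoint.Client.FieldType]::User
$field.Update()
Invoke-PnPQuery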
In our case, running these commands solved the problem. But be careful when changing the FieldType, because I assume it could destroy the data in this field.
When we create a survey in SharePoint, an overview page is created that shows several pieces of information.
Under some circumstances it might be necessary to hide the line with the Number of Responses. Because this information is shown in a simple HTML table, a Script Editor web part and some jQuery can solve the problem.
First, we need jQuery in our site. To make jQuery available, we can use PnP PowerShell, connect to our site, and execute the following cmdlet:
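A sketch of that registration; the jQuery URL below is a placeholder for wherever you host the file:

```powershell
# Register jQuery for the whole site via a user custom action (placeholder URL)
Add-PnPJavaScriptLink -Name "jQuery" -Url "https://code.jquery.com/jquery-3.3.1.min.js" -Scope Site
```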
Next, we open the overview page of the survey and switch the page into edit mode (click the gear icon at the top right and select Edit Page). In edit mode, insert a Script Editor web part on the page (we find it in the Media and Content category). Once the web part is added, click Edit Snippet in the web part and enter the following snippet:
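The snippet could look like this; note that the label text to match ("Number of Responses") depends on the display language of your site:

```html
<script type="text/javascript">
  // Hide the table row that contains the "Number of Responses" label
  jQuery(function () {
    jQuery("td:contains('Number of Responses')").closest("tr").hide();
  });
</script>
```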
Click Insert and stop editing on the page.
That’s all; the line with the Number of Responses is gone.
In some cases it can be necessary to hide a field in the edit form in SharePoint. This can easily be done with the SetShowInEditForm() method from the client-side object model (CSOM) of SharePoint.
For this short demo, I have prepared a list with the fields “Target Date” and “Planned Date”. When a new item is created by a user, both fields can be edited. But once the item has been saved, we no longer want the value of “Target Date” to be modifiable.
The SharePoint web interface does not offer any option to change the necessary property of the field, but with PowerShell the problem is easily solved. After we have established a connection to our SharePoint site, we can use these four lines of script to make the modification:
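A sketch of those four lines, assuming the demo list is named "Demo List" and the field's internal name is "TargetDate":

```powershell
$field = Get-PnPField -List "Demo List" -Identity "TargetDate"
# Remove the field from the edit form; the new and display forms stay untouched
$field.SetShowInEditForm($false)
$field.Update()
Invoke-PnPQuery
```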
When we use a SharePoint site with the modern UI, the field is always removed, except when we add a new item, because the information pane always opens in edit mode (the modern UI has no separate view mode).
But when we switch back to the classic UI, we can see a difference.
Done, with just a few lines of PowerShell.
Just to have it complete: the client-side object model offers two similar methods in the Field class, SetShowInNewForm() and SetShowInDisplayForm(), to make the same functionality available for the other list forms.
Yes, it is also possible to achieve the same goal by modifying the edit form with PowerApps when using the modern UI. But this only works with the modern UI, and there is still a need for the classic UI, because the classic interface offers options that are currently not available in the modern UI.
In the last few days I was asked how to handle the indexing of PDF files that contain scanned content. In these files the content often consists of images only, and an OCR approach is needed to make the content readable and accessible to the crawler.
From my point of view, we have two options to answer the question. The first is a Flow using the ElasticOCR connector. The connector is currently in preview, but you can already get a trial license for your tests. The connector creates a new version of the document with content readable by the crawler. A good approach, and it does what I expected.
But there should be another way to answer the question. In environments that run on-premises, we are not able to use Microsoft Flow, and on the other hand, the Flow connector first copies the file and its content to another location, does the processing there, and then moves the result back to our SharePoint or OneDrive library.
There are several development packages for OCR available; I tested with IronOcr. My approach is very simple: in the library where the document is stored, I create a hidden text field in which I store the text content of the file after the OCR process is done. The SharePoint crawler picks up the content of the field and stores all necessary information in the search index. Searching for any information from the document will then show the document in the search results.
The following code is just the result of this proof of concept, nothing more. The first part is just the field definition where the text content will be stored after OCR.
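The field definition could be sketched like this; "OcrText" is a name I chose for the hidden field, and the target library is assumed to be "Documents":

```powershell
# Hidden multi-line field that will receive the recognized text (names are hypothetical)
$fieldXml = '<Field Type="Note" Name="OcrText" StaticName="OcrText" DisplayName="OcrText" Hidden="TRUE" />'
Add-PnPFieldFromXml -List "Documents" -FieldXml $fieldXml
```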
The second part is a very, very simple command line program that takes the item id of a document as the parameter, does the OCR for the document and stores the readable text in the text field of the file item.
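A minimal sketch of such a console program; the IronOcr calls reflect my understanding of that API, and the site URL, list name, and field name are placeholders:

```csharp
using System;
using IronOcr;
using Microsoft.SharePoint.Client;

class Program
{
    static void Main(string[] args)
    {
        int itemId = int.Parse(args[0]); // item id of the document, passed as parameter

        using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/demo"))
        {
            var list = ctx.Web.Lists.GetByTitle("Documents"); // placeholder list name
            var item = list.GetItemById(itemId);
            ctx.Load(item, i => i.File.ServerRelativeUrl);
            ctx.ExecuteQuery();

            // Download the file and run OCR over its content
            var fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(
                ctx, item.File.ServerRelativeUrl);
            var ocr = new IronTesseract();
            using (var input = new OcrInput(fileInfo.Stream))
            {
                var result = ocr.Read(input);
                // Store the recognized text in the hidden field
                item["OcrText"] = result.Text;
                item.Update();
                ctx.ExecuteQuery();
            }
        }
    }
}
```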
So, to handle these documents in the real world, we could use a remote event receiver for SharePoint (Online) or just a simple remote timer job. As always, it depends on the environment we are working in.
I got the request to create a nice FAQ in SharePoint Online. The user interface should not be a simple list but should be fancy or modern. Additionally, the content should be in two languages, English and German.
So, what would be a simple approach for the solution? The content itself lives in a custom list in SharePoint Online, where we have a choice field for the language. For the answers we use a Note field with the FullHtml style.
To create the structure for our solution, we are using just some PowerShell (with the SharePoint PnP extensions):
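A sketch of that provisioning script; the list name, field names, and choice values are my assumptions:

```powershell
# Custom list for the FAQ content (names are placeholders)
New-PnPList -Title "FAQ" -Template GenericList
# Choice field for the language of the entry
Add-PnPField -List "FAQ" -DisplayName "Language" -InternalName "FaqLanguage" -Type Choice -Choices "English","German" -AddToDefaultView
# Note field with FullHtml style for the answer
Add-PnPFieldFromXml -List "FAQ" -FieldXml '<Field Type="Note" Name="Answer" StaticName="Answer" DisplayName="Answer" RichText="TRUE" RichTextMode="FullHtml" />'
```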
The final solution will look like this for the end user:
The complete code for this sample can be found on GitHub:
A few years ago I wrote an article about creating the zip file for an Azure Web Job deployment directly in Visual Studio. You can find the article here.
It seems there is a 50 MB limit for the zip file when you upload it in the Azure Portal. Now, for a somewhat more complex timer job, I ran into this limit. When we look into the bin/release folder, we see some files that are not needed to run the exe file. So I made a small modification to the PowerShell script to exclude these files. The new script looks like this:
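The relevant part could look like this; the excluded extensions are examples, adjust them to what your build actually produces:

```powershell
# Collect everything from bin\Release except files not needed at runtime
$source = Join-Path $PSScriptRoot "bin\Release"
$exclude = ".pdb", ".xml"   # example extensions to leave out
$files = Get-ChildItem $source -File | Where-Object { $exclude -notcontains $_.Extension }
# Build the deployment package below the 50 MB upload limit
Compress-Archive -Path $files.FullName -DestinationPath (Join-Path $PSScriptRoot "webjob.zip") -Force
```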
When the business works with a solution built in SharePoint Online, they often have new requirements. One of the most frequent requirements is to add new fields to a list or content type. Because we have the SharePoint PnP extensions, adding a new field is easily done with the Add-PnPField cmdlet. But…
… often the user wants the new field in a specific position within the list or content type. This can also be solved using PowerShell, but we need some CSOM code for the solution. The field order is part of the FieldLinks collection of the content type. Most of the time, there is already a field in the content type that can serve as the anchor after which we want to place the new field. So first we need the position of this existing field (the new field should already be created). We get this information with the following script:
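A sketch of that lookup; the content type, list, and the anchor field's internal name ("TargetDate") are placeholders:

```powershell
$ctx = Get-PnPContext
$ct = Get-PnPContentType -Identity "Task" -List "Tasks"
$ctx.Load($ct.FieldLinks)
Invoke-PnPQuery
# Current order of the internal field names in the content type
$fieldNames = $ct.FieldLinks | ForEach-Object { $_.Name }
# Zero-based position of the anchor field
$position = [array]::IndexOf($fieldNames, "TargetDate")
```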
With the position of the anchor field, we can move our new field to the desired position in the content type using the following script:
When we put this together, we can use the following script to do the move during a provisioning process:
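Put together, and under the same naming assumptions ("PlannedDate" as the new field, "TargetDate" as the anchor), the move can be done by reordering the FieldLinks collection:

```powershell
$ctx = Get-PnPContext
$ct = Get-PnPContentType -Identity "Task" -List "Tasks"
$ctx.Load($ct.FieldLinks)
Invoke-PnPQuery

# Remove the new field from its current position and re-insert it after the anchor
$names = [System.Collections.Generic.List[string]]@($ct.FieldLinks | ForEach-Object { $_.Name })
$names.Remove("PlannedDate") | Out-Null
$names.Insert([array]::IndexOf($names.ToArray(), "TargetDate") + 1, "PlannedDate")

# Reorder expects the full list of internal names in the desired order
$ct.FieldLinks.Reorder($names.ToArray())
$ct.Update($false)
Invoke-PnPQuery
```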
If necessary, and this often makes sense, we can put all these scripts together and use the functions directly in one script.
SharePoint is really a great tool. We can easily create lists and libraries with all the metadata columns we need, all just by using the web interface of SharePoint. When we need content types to give our lists and libraries more structure, we can create them via the web interface, too. And when I need to change any other setting in my site, adjust a regional or language setting, or activate an additional feature, I just do the necessary clicks in the site settings in the web interface. So, why the hell should I use PowerShell or any other automation tool to create the site structure?
When we work in iterations (a good practice today), our automation scripts can easily make modifications or extensions to our earlier work. Because we know the IDs, the internal names of the artefacts, and the structure, we can easily reconfigure the site to our needs.
And when we do not yet know how to script something, because we do not know where and how a setting is stored, we can first configure it in the web interface and then investigate with tools like the SharePoint Client Browser.
So, what are the pros and cons of automation with PowerShell?

Pros:
- Full control over IDs and internal names
- Ability to provide an iterative approach for the solution deployment
- Providing additional development is easy, because all objects are well known
- The solution is reproducible in any other environment or tenant

Cons:
- Scripting means typing into files, and for some SharePoint objects that is time consuming
- Because it is so easy to configure a site structure in the web interface, most customers won’t believe that defining a site structure takes that much time
My advice, from my own practical use and all my past SharePoint projects: whenever you need to transport your solution from your development environment to any other stage (e.g. the environment of your customer), use a PowerShell script to automate the creation of the site structure. Reduce the manual steps for a deployment to a minimum (just as we do with any other development that needs to be deployed). When you have developed any other kind of software that works with your SharePoint environment, use automation for that deployment as well.
And keep in mind, there is another advantage to using scripts and automation: you can reuse the objects and code you have developed. A strength of development today is reusing anything we have already done in other projects. Doing this when defining a site structure via the web interface is impossible.