I learned about CSS Variables from Stefan Bauer and his post CSS Variables support for SPFx projects through spfx-uifabric-themes. In a nutshell, this npm package, which I’ve started using regularly, transforms the current theme colors available to the SPFx web part into variables that can be used within your SCSS/CSS files. This intrigued me because it’s a native browser capability that’s been around since 2015, and because it means you can affect the styling at run time versus build time… which means that lots of things you would normally think you’d need script for can now be done with a crafty use of CSS and some variables.

Generally, CSS Variables are scoped at the “root”, as Stefan does with the theme colors. This makes sense when you have values that should be consistent across the page. However, it turns out that CSS variables can also be scoped to an element and its children. Hmmm… this led me to realize that I can create these variables in the scope of the web part (not the page), so they can have different values for each instance of the web part on the page.

This design pattern came in super handy in my demo for the talk I was doing with Mark Rackley at this year’s SharePoint Conference (Anything you can do, I can do better… Embracing the SharePoint Framework). In the session Mark and I discussed the merits of “advancing” your development skills [JavaScript -> TypeScript, leveraging certain packages, the async/await pattern versus promises, etc.]. The goal for my demo was to take a JavaScript project that he had done and hosted in a content editor web part on a classic page and “modernize” it. I did this in a variety of ways, but one of the coolest was this CSS variable pattern. So, let me show you.

What is a CSS variable?

According to w3.org, cascading variables are “a new primitive value type that is accepted by all CSS properties, and custom properties for defining them.”

SCSS, which is pre-processed into CSS, already has the concept of a variable, so this isn’t something that’s particularly novel. What SCSS variables don’t give you, however, is a way to define them through an element’s style attribute; more on this later.
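As a quick preview of what that means (a minimal sketch using plain DOM APIs; the .myClass selector is illustrative), a CSS custom property is a live value that can be changed at run time through an element’s style attribute, something a pre-processed SCSS variable can never do:

// The ".myClass" selector is illustrative; any element works.
const el = document.querySelector<HTMLElement>(".myClass");
if (el) {
  // Every rule in this element's scope that uses var(--main-color) now resolves to blue.
  el.style.setProperty("--main-color", "blue");
}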

Creating and Scoping a CSS variable to the root

In your CSS (or SCSS) file you can define variables at the root of the page and then use them in your various styles.

:root {
    --main-color: red;
}
.myClass {
   color: var(--main-color);
}

Now if I use that class on an HTML element, the element’s text renders red, picking up the value of --main-color from the root scope.

Creating and Scoping a CSS Variable to an element

CSS variables, as defined above, are cascading. That means I can redefine a variable at some other point in the styles if I want, and/or define a new variable at that point that is scoped only to that element and its children. Building on the previous example:

:root {
    --main-color: red;
}
.myClass {
   color: var(--main-color);
}
.myAltClass {
   --main-color: white; 
   --alt-color: blue;
   color: var(--main-color);
   background-color: var(--alt-color);
}

Now if I add another couple of elements that use both .myClass and .myAltClass, the elements with only .myClass still render red text, while the elements that also carry .myAltClass render white text on a blue background, because --main-color is redefined (and --alt-color introduced) within that scope.

Utilizing web part properties to affect the values of the CSS Variables

So, this is excellent, but the issue with defining these values in the SCSS/CSS is that they’re static for the implementation. Although that lets you use a variable throughout your styles and change it in just one place, it isn’t dynamic enough for the purposes of the solution I was trying to create.

As I implied above, this is where the real superpowers of CSS variables come into play. You can define them via the style attribute of an element. Therefore, when using a framework such as ReactJS or Angular or Vue or Knockout (name your framework du jour), where I can easily build the DOM elements dynamically, I can create those CSS variables as well.

So, using the ReactJS example, when I render the element I can create those variables and then inject them into the DOM. Note that the div at the root of the “return” references the classes linkTiles and tileCont and then defines a style attribute that injects the styleBlock value, which is where I defined the CSS variables, like this:

public render(): React.ReactElement<ILinkTilesProps> {
    //Create the CSS Variables based on the web part properties
    let styleBlock = { "--tileWidth": this.props.width + "px", "--tileHeight": this.props.height + "px" } as React.CSSProperties;
    //Render tile container as flex box
    try {
      return (
        <div className={`${styles.linkTiles} ${styles.tileCont}`} style={styleBlock}>
          {this.props.tiles && this.props.tiles.length > 0 && this.props.tiles.map((t: ILink) => {
            return (
              <Tile tile={t} showTitle={this.props.showTitle} />
            );
          })}
        </div>
      );
    } catch (err) {
      Logger.write(`${err} - ${this.LOG_SOURCE} (render)`, LogLevel.Error);
      return null;
    }
}

Then the CSS for this project has the following class definitions, which use the variables I defined for height and width in a multitude of classes. Here’s a snippet.

.linkTiles {
  &.tileCont {
    width: 100%;
    display: flex;
    flex-wrap: wrap;
    justify-content: left;
  }

  .tile,
  .tileFlip,
  .tileFront,
  .tileFront>img,
  .tileBack {
    width: var(--tileWidth);
    height: var(--tileHeight);
  }
....

What that gives me is a completely isolated implementation of my style, so when two instances of that same web part are on a page, the height and width I defined in the CSS variables are isolated to each instance.

I hope you can think of other great ways to use this cool solution; sadly, though, the spoiler is that CSS variables are not supported in IE 11 (https://caniuse.com/#search=css%20variable).

Happy Coding!

As linked above, the complete source code for the solution can be found in my Public Samples repo.


I love the new feature I picked up from my friend Stefan Bauer about using npm version to upgrade the version of your SPFx solution. It has made working as an individual and as a team contributor so much easier, because it becomes obvious in your repository’s history when versions of the project were created and by whom.

I was struggling, though, because some of my more complicated projects, although set up the same way, were working except that the git tags were not getting created. It turns out that if your folder structure is more complicated and your package.json file is in a subfolder below your .git folder, the tags won’t get created automatically, although all the other aspects of the solution work fine.

Luckily I found a post with a workaround in the npm repo’s issues list.

If you’re repository structure looks anything like this, where your package.json file is not at the same level as your .git folder for the project you’re running npm version on, a workaround to get the tags to apply automatically is to add an additional, empty .git folder.

So this…

mySpfxProject
├── .git/
├── docs/
├── specs/
└── webparts/
    ├── package.json
    └── {all the other spfx files}

becomes…

mySpfxProject
├── .git/
├── docs/
├── specs/
└── webparts/
    ├── .git/
    ├── package.json
    └── {all the other spfx files}

And voila, npm version will now create the appropriate tag.

Keep in mind that the tag applies to the entire repo, so if you have multiple solutions in the same repo with different versions you may want to apply your tags manually in a different way, which is probably why the feature works the way it does in the first place.

Happy coding!

Curate the News Social Following Sites on behalf of a user

The impetus for this post was the desire to follow a site for a batch of users. Why? Well, the news that shows up on the SharePoint home page stems from news posted to sites you follow. So as an organization, especially a large one, if you want to somewhat curate what news gets pushed to your users you need to make sure they’re following the sites that have the news you want them to see.

The social endpoints that are generally available via REST or CSOM clearly let you follow a site for the current user but there really is no documented way to follow one site for a batch of users or on behalf of another user.

This post follows the lead set by Mikael Svenson in his post Quickly clear Followed sites using PnP PowerShell. In Mikael’s case he was trying to clear the sites he had followed while doing some testing, and a quick way to do that is with his PowerShell script. What he uncovered, though, is the hidden list in the user’s personal site that stores an abundance of social information, including the sites, documents, and items a user is following. Now all I had to do was leverage this list and see what could come of it.

<Disclaimer>
Let me be super clear, since it was a topic of conversation on the twitterverse, that this is not a Microsoft-sanctioned method for solving this problem. Sadly there is no supported method for solving this problem, so you need to make sure that you, or your client, understand the inherent risks of going off the supported path.
</Disclaimer>

What comes next is just the code. It’s an almost ridiculously simple solution, but lo and behold, it works.

Once you add the site to the list, the site will show up as being followed when the user navigates to it, and after a short time news from that site will bubble up for the user when they visit the SharePoint home page.

Init setup

Obviously this code is an example. Normally you would want to set up all these variables in an app.config, database, or whatever works for your solution. I’m just outlining here the information you’re going to need to be able to complete the process.

The biggest hurdle to success here is permissions. By default the “Company Administrator” is the only SCA (Site Collection Administrator) on each of the personal sites. You’ll need to make sure whatever account you’re using has access to the personal site of each user you want to modify, or this solution isn’t going to work for you. To get around that, the simplest solution is probably to create an Azure app registration with the “Have full control of all site collections” app permission and then use that context to access each user’s site.

const string _tenant = "<Your Tenant Name>"; //e.g. 'contoso'
const string _username = "<User with SCA to each tenant-my.sharepoint.com site collection>";
SecureString _password = null; //The password for _username

var user = "<User you want to follow the site for, replace @ and . with _>"; //e.g. 'test_contoso_com'
            
var socialSite = $"https://{_tenant}-my.sharepoint.com/personal/{user}";
var socialPartial = $"/personal/{user}";
            
var followSite = $"https://{_tenant}.sharepoint.com/sites/MySite";

Guid webId = new Guid("<Web Id for followSite root web>");
string webTitle = "<Title of followSite>";
Guid siteId = new Guid("<Site Id for followSite>");

Execute

using (ClientContext ctx = new ClientContext(socialSite))
{
	ctx.Credentials = new SharePointOnlineCredentials(_username, _password);
	try
	{
		//Hidden list that contains followed sites, documents, and items
		var list = ctx.Web.Lists.GetByTitle("Social");
		ctx.Load(list);

		//Validate the 'Private' folder exists -- for a user who hasn't followed anything it will not be there.
		var folderPrivate = ctx.Web.GetFolderByServerRelativeUrl($"{socialPartial}/Social/Private");
		ctx.Load(folderPrivate);
		try
		{
			ctx.ExecuteQuery();
		}
		catch (Exception ex)
		{
			//Create the 'Private' folder
			var info = new ListItemCreationInformation();
			info.UnderlyingObjectType = FileSystemObjectType.Folder;
			info.LeafName = "Private";
			ListItem newFolder = list.AddItem(info);
			newFolder["Title"] = "Private";
			newFolder["ContentTypeId"] =
				"0x01200029E1F7200C2F49D9A9C5FA014063F220006553A43C7080C04AA5273E7978D8913D";
			newFolder.Update();
			ctx.ExecuteQuery();
		}

		//Validate the 'FollowedSites' folder exists -- for a user who hasn't followed anything it will not be there.
		var folderFollowed = ctx.Web.GetFolderByServerRelativeUrl($"{socialPartial}/Social/Private/FollowedSites");
		ctx.Load(folderFollowed);
		try
		{
			ctx.ExecuteQuery();
		}
		catch (Exception ex)
		{
			//Create the 'FollowedSites' folder
			var info = new ListItemCreationInformation();
			info.UnderlyingObjectType = FileSystemObjectType.Folder;
			info.FolderUrl = $"{socialPartial}/Social/Private";
			info.LeafName = "FollowedSites";
			ListItem newFolder = list.AddItem(info);
			newFolder["Title"] = "FollowedSites";
			newFolder["ContentTypeId"] = "0x0120001F6E5E1DE9E5447195CFF4F4FC5DDF5B00545FD50747B4D748AA2F22CD9D0BCB5E";
			newFolder.Update();
			ctx.ExecuteQuery();
		}

		//Create the new follow item for the site, in the FollowedSites folder.
		var infoItem = new ListItemCreationInformation();
		infoItem.FolderUrl = $"{socialPartial}/Social/Private/FollowedSites";
		var newFollowedSite = list.AddItem(infoItem);
		newFollowedSite["Title"] = webTitle;
		newFollowedSite["ContentTypeId"] = "0x01FC00533CDB8F4EAE447D941948EFAE32BFD500D2687BB5643C16498964AD0C58FBA2F3";
		newFollowedSite["Url"] = followSite;
		newFollowedSite["SiteId"] = siteId;
		newFollowedSite["WebId"] = webId;
		newFollowedSite.Update();
		ctx.ExecuteQuery();
	}
	catch (Exception ex)
	{
		Console.WriteLine(ex.Message);
	}
}

As usual, the source code for this solution can be found in my GitHub repo.

Happy Coding!


My Sympraxis partner Marc Anderson mentioned that we’ve been talking about PnPJS packages for SharePoint Framework a lot lately and called out that I would be blogging about utilizing the logging package in his post Using PnPJS and Async/Await to Really Simplify Your API Calls. If you haven’t checked it out and aren’t using PnPJS and the Async/Await method instead of Promises in your SharePoint Framework solutions, you should give it a read.

TL;DR

Download the sample code from my GitHub repo for three examples of how to use the PnP Logging package.

Why Log

Logging information from your application to the browser console about what’s happening under the covers in your code can be enormously helpful when trying to debug the issues that are bound to arise, from basic information, like the fact that your web part has started and successfully initialized, to error information during execution. Well-thought-out and consistent logging can really go a long way toward solving issues fast. Certainly, you can issue calls to console.log throughout your code, and if you’re going to take nothing else from this post, please consider making it a common practice to do so in almost, if not every, method. Waldek Mastykarz has written a nice post on utilizing and extending, shall we say, the out-of-the-box logging built into the SharePoint Framework in his post Logging in the SharePoint Framework solutions. PnPJS has an implementation that resembles the CustomLogHandler he describes but takes it a bit further.

Types of logging

If you look at the documentation, PnPJS Logging supports a default ConsoleListener, a FunctionListener, and the ability to pass in your own implementation of a listener that inherits from LogListener. Each one honors the Active Log Level, which means the log method only executes when the call’s log level is greater than or equal to the set level. This is something you could easily set as a web part property or a Tenant property so that you could get more or less information as the situation warrants.
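As a minimal sketch of that idea (the logLevel web part property here is hypothetical, not part of the sample), you could map a configured value onto the active log level during initialization:

// Assumes the Logger/LogLevel imports from @pnp/logging used throughout this post.
// 'this.properties.logLevel' is a hypothetical numeric web part property matching the LogLevel enum;
// entries below the active level are ignored by every subscribed listener.
Logger.activeLogLevel = (typeof this.properties.logLevel === "number")
  ? this.properties.logLevel
  : LogLevel.Warning;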

Starting Point

First, the Logger is a singleton, which is important to understand because it means you need only initialize it once and then it’s available to use anywhere in your code. Start by passing the listener of your choice to the subscribe method.

Logger.subscribe(new ConsoleListener());

The second step is to set the Active Log Level, like so:

Logger.activeLogLevel = LogLevel.Verbose;

And, make note that you can have more than one listener. For my advanced example I not only want to do some custom logging, I also want to log information to the console, so I’ve added both listeners to the Logger.

Calling the Logger

To call the logger you have a couple of different options. You can use the write method, which simply passes your information as a string message along with, if you choose, a logging level. You can use the writeJSON method, which allows you to pass a JSON object that gets converted to a string to serve as your message, again with an optional logging level. And finally, there is the log method, which allows you to specify each property of the LogEntry. For more samples see the official documentation.
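Here is a quick sketch of all three calling patterns (the messages and data values are illustrative only):

// write: plain string message plus an optional level
Logger.write("LinkTiles web part initialized", LogLevel.Info);

// writeJSON: pass an object; it is stringified to become the message
Logger.writeJSON({ tileCount: 4, showTitle: true }, LogLevel.Verbose);

// log: build the full LogEntry yourself (message, level, and any extra data)
Logger.log({
  message: "Error retrieving tiles",
  level: LogLevel.Error,
  data: { fileName: "LinkTiles.tsx", methodName: "render" }
});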

Basic Logging

For basic logging we’re just using the functionality as is, by utilizing a ConsoleListener, setting the logging level, and noting that anything we “Log” is getting written to the browser’s console.

Custom Logging

For custom logging we took advantage of the FunctionListener and created our own variation on how we might log information to the console. As the documentation points out, if you already have your own logging solution, be that an API or whatever, you could use this method to simply hand off the errors. My example shows making a REST call when the log entry is at the Error level.

let listener = new FunctionListener((entry: LogEntry) => {
  try {
    switch (entry.level) {
      case LogLevel.Verbose:
        console.info(entry.message);
        break;
      case LogLevel.Info:
        console.log(entry.message);
        break;
      case LogLevel.Warning:
        console.warn(entry.message);
        break;
      case LogLevel.Error:
        console.error(entry.message);
        // pass all logging data to an existing framework -- for example a REST endpoint 
        this.context.httpClient.post("<REST Endpoint URL>", HttpClient.configurations.v1, { headers: { Accept: "application/json" }, body: JSON.stringify(entry) });
        break;
    }
  } catch (err) {
    console.error(`Error executing customLogging FunctionListener - ${err}`);
  }
});

Logger.subscribe(listener);

Advanced Logging

Finally, advanced logging takes advantage of building your own implementation by inheriting from LogListener. In this implementation I’m creating a scenario whereby you would log just the errors to a custom list, in this case in SharePoint, but it could easily be anywhere. The point is that I want to implement my own listener so that I can do some setup, like making sure I have the user’s Id.

export default class AdvancedLoggingService implements LogListener {
  private _applicationName: string;
  private _web: Web;
  private _logListName: string;
  private _userId: number;
  private _writeLogFailed: boolean;

  constructor(applicationName: string, logWebUrl: string, logListName: string, currentUser: string) {
    //Initialize
    try {
      this._writeLogFailed = false;
      this._applicationName = applicationName;
      this._logListName = logListName;
      this._web = new Web(logWebUrl);
      this.init(currentUser);
    } catch (err) {
      console.error(`Error initializing AdvancedLoggingService - ${err}`);
    }
  }

  private async init(currentUser: string): Promise<void> {
    //Implement an asynchronous call to ensure the user is part of the web where the ApplicationLog list is and get their user id.
    try {
      let userResult = await this._web.ensureUser(`i:0#.f|membership|${currentUser}`);
      this._userId = userResult.data.Id;
    } catch (err) {
      console.error(`Error initializing AdvancedLoggingService (init) - ${err}`);
    }
  }

  public log(entry: LogEntry): void {
    try {
      //If the entry is an error then log it to my Application Log table.  All other logging is handled by the console listener
      if (entry.level == LogLevel.Error) {
        if (!this._writeLogFailed) {
          let stackArray = null;
          if (entry.data.StackTrace && entry.data.StackTrace.length > 0)
            stackArray = JSON.stringify(entry.data.StackTrace.split('\n').map((line) => { return line.trim(); }));
          let newLogItem: LogItem = new LogItem(this._applicationName, entry.data.FileName, entry.data.MethodName, new Date(), this._userId, entry.message, stackArray);
          let newLogItemResult = this._web.lists.getByTitle(this._logListName).items.add(newLogItem);
        }
      }
    } catch (err) {
      //Assume writing to SharePoint list failed and stop continuous writing
      this._writeLogFailed = true;
      console.error(`Error logging error to SharePoint list ${this._logListName} - ${err}`);
    }
    return;
  }
}

As a result, every time an error is logged a new entry is put in my ApplicationLog list.
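For completeness, here is a sketch (inside the web part class; the application name, list title, and log message are illustrative) of how both listeners could be wired up in onInit, so console output and the custom list logging run side by side:

protected async onInit(): Promise<void> {
  // Standard console output for everything at or above the active level
  Logger.subscribe(new ConsoleListener());
  // Custom listener from above that writes Error-level entries to a SharePoint list;
  // "ApplicationLog" is a hypothetical list title in the current web
  Logger.subscribe(new AdvancedLoggingService(
    "LinkTiles",
    this.context.pageContext.web.absoluteUrl,
    "ApplicationLog",
    this.context.pageContext.user.loginName
  ));
  Logger.activeLogLevel = LogLevel.Info;
  Logger.write("LinkTilesWebPart (onInit) - initialized", LogLevel.Info);
  return super.onInit();
}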

Conclusion

The PnPJS logging package has a lot of depth for creating some super functional logging implementations for your custom SharePoint Framework solutions. Resolve this year to make your code more robust and easily supportable. For the complete source code, please check out my GitHub repo.

Happy Coding!


Introduction

Azure Active Directory (AAD) Unified Groups, better known as Office 365 Groups, are the security principal that underlies modern SharePoint team sites, Teams, Outlook groups, Planner, etc., and they are a very powerful management construct, the glue that holds the Office 365 security pyramid together. Basically, a Unified Group has both an Owners group and a Members group, and by adding users (either users in your tenant or external users, with a Microsoft-based work and school account or a personal account) you can create a construct that allows you to work across many of the vast product offerings in Office 365. For a more easily consumable infographic covering the power of Unified Groups, go check out fellow MVP Matt Wade’s An everyday guide to Office 365 Groups.

At the tenant level, whether you’ve thought about it or not, you have a default sharing status for all the Unified Groups in your environment. Assuming you haven’t changed anything, they are probably “Let users add new guests to the organization” and then, for SharePoint and OneDrive, “Let users share SharePoint Online and OneDrive for Business content with people outside the organization” set to “Anyone, including anonymous users”. Obviously, you can set these any way you like, but assuming you want to allow sharing of some kind you’ll need to have sharing enabled at the tenant level. So now, how do you disable/enable sharing for each of the groups/site collections?

Long story short, if you’re an enterprise you might want the ability to manage which groups include users from outside your organization, and you might even want to build a system for tracking which users are granted access and whether there’s some sort of approval process in place. By flipping a lot of switches and twisting a bunch of knobs, mostly through the Microsoft Graph and partly through the Microsoft.Online.SharePoint.TenantAdministration library, you can achieve just that.

Scenario

From a central management system, maintain a list of sites a partner has external access to and the names of each user from that partner with access.* When a new site/user is added, do the following:

  1. manually add that user as an external user via invitation in AAD
  2. modify their user properties
  3. assign them a manager
  4. add the new user to the member group of the Unified Group

When the user accepts the external sharing request, they will have access to the group. Further, we want to maintain one entry in AAD for each external email account.

*For the purposes of this scenario I’m not discussing the architecture of said central management system; suffice it to say it certainly could be a set of lists in SharePoint with a relationship on partner, but it could also be an external system built on top of a relational database. Regardless of the implementation, let’s assume we have a source of partners and users that can be granted access.

Implementation

With the assumption that you are familiar with creating an Azure AD application (either v1 or v2), the various authentication flows that you could use depending on your platform du jour, and the various ways to use either the ADAL or MSAL libraries, I’ll move on to the actual pieces of code that implement the solution. If you are not familiar, please start by checking out the documentation about how to get auth tokens from the official Microsoft site. That site also has a bunch of Quick Starts, and if you like labs, there are some good Microsoft Graph hands-on labs you can use to get yourself up to speed.

Also, when creating your Azure Application, you will need to grant a bunch of permissions depending on what type of app registration you choose. Because I am using application permissions and not delegated permissions, I granted my application the following:

  • Directory.Read.All
  • Directory.ReadWrite.All
  • Group.ReadWrite.All
  • User.Invite.All
  • User.ReadWrite.All
Setup

Assuming you have a list of sites you want to enable sharing for, for each site you will need the site’s URL and the corresponding O365 Unified Group Id. I explained in my previous post how you might use the Microsoft Graph to retrieve the Id if you know the site URL. Since we have to have “sharing” turned on at the tenant level, you will most likely want a process in place that turns sharing off for all existing Unified Groups and site collections and any newly created ones; managing that is outside the scope of this post, but the code would be the same.

I have seen several instances where that scenario won’t work, but I’m almost positive those are legacy groups that were created in the tenant as a result of utilizing preview code… so for the purposes of this post I’m going to assume you can get the Id via Graph, but if not, there are other ways to get it, most notably the Exchange Online PowerShell cmdlets. You can use Get-UnifiedGroup to retrieve information about the group. Be aware that an entirely confusing aspect of the results of the cmdlet is knowing which of the various GUIDs returned is the one that works consistently with the Microsoft Graph. I have found that the ExternalDirectoryObjectId property works most consistently, but I have found several instances where it’s null, and in that case the Id seems to be the best alternative.

Manage Sharing of Unified Group

To enable or disable sharing of the Unified Group, which is different from the site collection sharing status, you will want to create and apply a particular groupSettingTemplate to the Unified Group. You do so by first creating your version of the Group.Unified.Guest template. You can get the id of this template by issuing the following GET request in the Graph Explorer: https://graph.microsoft.com/v1.0/groupSettingTemplates
If you scroll through the results you will find the template for ‘Group.Unified.Guest’. Note the template’s Id. Based on my testing the id is the same in all tenants, so you can probably skip this step, but if you have problems it might be worth going back and checking.

OK, now what you want to do is create the content for your request, check if the template is already applied to the group in question, and then either POST or PATCH the template to the group. See the code below.

//URL to the group's settings
string urlGraph = String.Format("https://graph.microsoft.com/v1.0/groups/{0}/settings", groupId);
//The groupSettingsTemplate Id that we want to apply to our group
string templateId = "08d542b9-071f-4e16-94b0-74abb372e3d9";
//The version of the template we will apply to the group, where AllowToAddGuests is either true/false
var content = new StringContent(@"{
    'displayName': 'Group.Unified.Guest',
    'templateId': '08d542b9-071f-4e16-94b0-74abb372e3d9',
    'values': [
        {
        'name': 'AllowToAddGuests',
        'value': 'True'
        }
    ]
}'}", Encoding.UTF8, "application/json");


using (var client = new HttpClient())
{
    //setup client
    client.BaseAddress = new Uri(urlGraph);
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    //A previous Async request retrieved our access token, now we're appending it to the header
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + task.Result.AccessToken);

    //Check if template exists
    string settingId = String.Empty;
    //Gets a list of existing groupTemplates applied to the group, if any.
    var taskExists = Task.Run(async () => await client.GetAsync(urlGraph));
    taskExists.Wait();
    if (taskExists.Result != null)
    {
        if (taskExists.Result.StatusCode == HttpStatusCode.OK)
        {
            HttpResponseMessage response = taskExists.Result;
            var taskResponse = Task.Run(async () => await response.Content.ReadAsStringAsync());
            taskResponse.Wait();
            if (taskResponse.Result != null)
            {
                //Converting the results to an object we can consume in C#, this is done by creating a class that matches the JSON
                GroupSettingsList settings = JsonConvert.DeserializeObject<GroupSettingsList>(taskResponse.Result);
                if (settings.value.Count > 0)
                {
                    foreach (var setting in settings.value)
                    {
                        //If the current setting, matches the groupSettingTemplate then save it
                        if (setting.templateId == templateId)
                            settingId = setting.id;
                    }
                }
            }
        }
    }
    
    Task<HttpResponseMessage> taskResult = null;
    //Based on if the groupSettingTemplate is already applied to this group, either post a new one or patch the existing one
    if (settingId == String.Empty)
        taskResult = Task.Run(async () => await client.PostAsync(urlGraph, content));
    else
        taskResult = Task.Run(async () => await client.PatchAsync(urlGraph + "/" + settingId, content));

    taskResult.Wait();
    if (taskResult.Result != null)
    {
        //POST returns 201 Created; PATCH of an existing setting returns 204 No Content
        if (taskResult.Result.StatusCode == HttpStatusCode.Created || taskResult.Result.StatusCode == HttpStatusCode.NoContent)
        {
            Console.WriteLine("Success");
        }
        else
        {
            Console.WriteLine("Failed");
        }
    }
}

Manage Sharing of the SharePoint site collection

Unfortunately, there is (as of publishing) no way through the Microsoft Graph to modify the sharing status of the site collection; however, you can easily do so through CSOM. The Microsoft.Online.SharePoint.TenantAdministration library gives you the means to change to the following states through an enum: Disabled, ExternalUserSharingOnly, ExternalUserAndGuestSharing, ExistingExternalUserSharingOnly. The following code shows you how to change it from Disabled to ExternalUserSharingOnly based on a value passed to the function.

//Note this specific using for the 'Tenant'
using Microsoft.Online.SharePoint.TenantAdministration;

using (ClientContext ctx = new ClientContext(tenantUrl))
{
    ctx.Credentials = new SharePointOnlineCredentials(_username, _password);
    ctx.RequestTimeout = -1;
    Tenant tenant = new Tenant(ctx);
    var site = tenant.GetSitePropertiesByUrl(siteUrl, true);
    ctx.Load(site);
    var taskResult = Task.Run(async () => await ctx.ExecuteQueryAsync());
    taskResult.Wait();
    site.SharingCapability = sharingEnabled ? SharingCapabilities.ExternalUserSharingOnly : SharingCapabilities.Disabled;
    //A list of allowed external domains can be added here
    site.SharingAllowedDomainList = "";
    SpoOperation op = site.Update();
    ctx.Load(op, i => i.IsComplete, i => i.PollingInterval);
    ctx.ExecuteQuery();
    while (!op.IsComplete)
    {
        //wait 15 seconds and try again
        System.Threading.Thread.Sleep(15000);
        op.RefreshLoad();
        ctx.ExecuteQuery();
    }
}

Creating External Users

If the external user’s account already exists in your AAD, you will need to retrieve the user’s AAD id, which can be accomplished by making a call to the users endpoint as shown below. As you can see by the comments, this code is also the scaffolding for adding the existing or newly created user to the Unified Group.

//Url to verify if external user already exists
string urlGraph = "https://graph.microsoft.com/v1.0/users?$filter=mail eq 'my_email@extdomain.com'";

using (var client = new HttpClient())
{
    //setup client
    client.BaseAddress = new Uri(urlGraph);
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    //A previous Async request retrieved our access token, now we're appending it to the header
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + task.Result.AccessToken);

    //make request
    string userId = string.Empty;
    var taskUser = Task.Run(async () => await client.GetAsync(urlGraph));
    taskUser.Wait();
    if (taskUser.Result != null)
    {
        if (taskUser.Result.StatusCode == HttpStatusCode.OK)
        {
            HttpResponseMessage response = taskUser.Result;
            var taskUserExists= Task.Run(async () => await response.Content.ReadAsStringAsync());
            taskUserExists.Wait();
            if (taskUserExists.Result != null)
            {
                //Converting the results to a dynamic object we can consume in C#
                dynamic user = JsonConvert.DeserializeObject(taskUserExists.Result);
                if (user.value.Count > 0)
                {
                  //User exists so save the userId to add it to the unified group
                  userId = user.value[0].id;
                }
                else
                {
                  //CODE TO CREATE INVITATION GOES HERE
                }
            }
        }
    }

    if (userId != string.Empty)
    {
      //CODE TO ADD USER TO UNIFIED GROUP GOES HERE - Adding Users to the Unified Group
    }
}

If you’ve verified that the user account doesn’t exist, then you will want to create an invitation for them to join. Luckily the Microsoft Graph has a great way to do this for you called Invitation Manager. Depending on how much control you want over the email that goes to guests you can set the sendInvitationMessage property to allow Microsoft to send the email for you with a couple of configurable properties or you can take the information returned from the invitation process to craft and send your own email.

urlGraph = "https://graph.microsoft.com/v1.0/invitations";
//User doesn't exist, create invitation
var content = new StringContent(@"{
    'invitedUserEmailAddress': 'my_email@extdomain.com',
    'inviteRedirectUrl': 'https://myTenant.sharepoint.com/sites/MyExternalSite',
    'invitedUserDisplayName': 'My User (extdomain)',
    'sendInvitationMessage': 'true',
    'invitedUserMessageInfo': {
        'ccRecipients': [{
            'emailAddress': {
                'address': 'ccRecipient@myTenant.com',
                'name': 'CC Recipient'
            }
        }]
    }
}", Encoding.UTF8, "application/json");
var taskNewUser = Task.Run(async () => await client.PostAsync(urlGraph, content));
taskNewUser.Wait();
if (taskNewUser.Result != null)
{
    if (taskNewUser.Result.StatusCode == HttpStatusCode.Created || taskNewUser.Result.StatusCode == HttpStatusCode.OK)
    {
        HttpResponseMessage responseNewUser = taskNewUser.Result;
        var taskNewUserContent = Task.Run(async () => await responseNewUser.Content.ReadAsStringAsync());
        taskNewUserContent.Wait();
        if (taskNewUserContent.Result != null)
        {
            dynamic userNew = JsonConvert.DeserializeObject(taskNewUserContent.Result);
            if (userNew != null)
            {
                userId = userNew.invitedUser.id;
                //At this point the user exists in AAD and can be modified further.
            }
        }
    }
    else
    {
        Console.Write(taskNewUser.Result.StatusCode);
    }
}

The return payload from that POST gives you the AAD id for the user, which is used in the next step, and which you can also use to modify the user’s account further by setting other properties like mobile phone and company, and maybe even uploading a photo or setting a manager relationship. For more information on modifying a user record see the Graph documentation for a User.

Adding Users to the Unified Group

So, either from the results of creating an invitation or from looking the user up, you have the AAD Id that can be used to add that user to the members group of the Unified Group. This is as easy as making a POST to the groups/{id}/members/$ref endpoint. The code below goes in the spot outlined in the first code snippet of the Creating External Users section.

//Add to Group
urlGraph = String.Format("https://graph.microsoft.com/v1.0/groups/{0}/members/$ref", groupId);
var contentGroup = new StringContent(@"{'@odata.id': 'https://graph.microsoft.com/v1.0/users/" + userId + @"'}", Encoding.UTF8, "application/json");
var taskResultGroup = Task.Run(async () => await client.PostAsync(urlGraph, contentGroup));
taskResultGroup.Wait();
if (taskResultGroup.Result != null)
{
    if (taskResultGroup.Result.StatusCode == HttpStatusCode.NoContent || taskResultGroup.Result.StatusCode == HttpStatusCode.OK)
    {
      Console.WriteLine("Success");
    }
    else
    {
      Console.Write("Failed");
    }
}

Summary

By taking these ideas and your own requirements and imagination, you can assemble a very powerful tool to manage your company’s external sharing. Luckily for us the Microsoft Graph allows us to attain most of the capabilities we need and, in time, probably all of them. I hope this helps get you started. Happy Coding!


For this last post I want to take what we’ve learned and add the final pieces that have you creating web parts in the same way you would modern SPFx web parts and solutions. We’re going to start by discussing TypeScript and then briefly touch on Sass and how to include these languages into your new Webpack/Gulp environment.

TypeScript is becoming almost ubiquitous in modern web development. The pros are numerous; my favorites are the ability to write code that targets older browsers with modern capabilities, and the ability to use a version of IntelliSense to validate your objects’ properties and methods. In my experience both of these features make development go faster. The cons are that you’ll need to transpile your code as well as utilize typings for the libraries you want to include. By using Visual Studio Code, or another IDE, as your development environment, TypeScript is pretty much built in. If you are coming from C#, or some other compiled language, you’re going to find that you feel significantly more comfortable writing TypeScript than JavaScript, mainly because many of the conventions you’re used to have an equivalent in the TypeScript language, and thus patterns like MVVM are easily implemented.

As I’ve mentioned in many of my other posts I tend to use AngularJS as my development framework of choice simply because it works well and supports dynamic binding which is needed for web part development. That’s changing with the introduction of Web Components and Angular 5’s – Angular Elements but at this moment that’s super cutting edge and I’m not going to address it.

If you are interested in Angular Elements, check out both Andrew Connell’s post (Solve the SharePoint Framework + Angular Challenge with Angular 5.0 Elements) and Sébastien Levert’s series (SharePoint Framework & Angular Elements: Building your first web part).

All that said, that doesn’t mean AngularJS has to be the framework you choose, or that you need to choose a framework at all. All it means is that I’m going to show an example below of how I add the typings for spinning up an AngularJS project. If you’re interested in some good reading and further links around the framework wars, you might check out Angular, React or Vue – Which Web Framework to Focus on for SPFx?, where Andrew Connell gives you a lot of resources to help you learn about the different frameworks, and some good advice… try before you ‘buy’!

TypeScript

Enough introduction, on to the actual process. First we’re going to update our package.json file by adding TypeScript. Now, if you’re using a tool like WebStorm, it provides a “bundled” version of TypeScript (Visual Studio Code provides language support but not the transpiler; you will need to add it as I describe). Again, per my discussions in the previous posts, I have run into version incompatibility issues, so I’ve taken to including my own version and not installing it globally or relying on the bundled version. You should choose what works for you, but if you’re going to pick your own version then you need to add it to the devDependencies section of your package.json.

"typescript": "~2.3.4",

typescript Provides typescript processing

A basic requirement of TypeScript is a configuration file, also known as a tsconfig.json file. The power of TypeScript really comes from the ability to code once and target whatever version of ECMAScript your browser(s) require. My basic tsconfig.json file looks like the following. You can see from the ‘target’ property that I want my transpiled JavaScript to run in browsers supporting ECMAScript 5.

{
  "compilerOptions": {
      "target": "es5",
      "module": "commonjs",
      "sourceMap": true,
      "experimentalDecorators": true,
      "lib": ["dom", "es6", "es2016.array.include"]
  }
}

In addition, I need to add the typings for my third-party libraries. Typings are the “IntelliSense” for your code. They allow the transpiler to check that you’ve correctly utilized the various properties and methods you’re referencing before it actually “builds” your code. The “cool” way to add typings to your project is to use the @types pattern; you can look up your favorites in npm. Here I’m adding the typings for AngularJS to my dependencies.

"@types/angular": "~1.6.36",

@types/angular Provides typescript support for AngularJS

We also need to add Webpack support for our TypeScript files. So, we’ll add the following:

"ts-loader": "~2.3.7",

ts-loader This is the typescript loader for webpack.

Then we’ll modify our webpack.config.js file to reference and use ts-loader. Note other modifications to support our switch to TypeScript including changing our entry file, adding ts-loader to our modules section, and the addition of the “resolve” section which helps Webpack configure how modules are resolved. By including extensions section we’re telling Webpack to automatically resolve files with these extensions.

var webpack = require('webpack');

module.exports = {
    entry: {
        bundleCDNDemoWebpackTS: "./client/cdndemo.ts"
    },
    output: {
        path: '/code/Conference-Demos/CDNDemoWebpackTS/build/',
        filename: "[name].js",
        publicPath: '/'
    },
    module: {
        rules: [
            {
                test: /\.css$/,
                exclude: /node_modules/,
                loader: ["style-loader", "css-loader"]
            },            
            {
                test: /\.html$/,
                exclude: /node_modules/,
                loader: "html-loader"
            },
            {
                test: /\.ts$/,
                loader: 'ts-loader',
                exclude: /node_modules/
            }
        ]
    },
    externals: {
        angular: 'angular',
        Sympraxis: 'Sympraxis'
    },
    resolve: {
        extensions: ['.ts', '.js'],
    },
    watch: true
};

Now if I run an “npm i” all these dependencies will be loaded into my node_modules folder, and I can start my “npm run build” process to start transpiling and webpacking my TypeScript-based solution.

SASS/SCSS

Sass stands for “Syntactically Awesome Style Sheets” and its file extension is .scss. Once I tried Sass I never looked back, as it makes the things you should be able to do in stylesheets easy by providing features like variables, nesting, partials, inheritance, and operators. If you’ve never tried it, check out the Sass site for some easy getting-started snippets. To include Sass files in your project you need to include a few modules to help Webpack out.

"node-sass": "~4.7.2",
"sass-loader": "~4.1.1",

node-sass A dependency of sass-loader that must be manually included.
sass-loader Compiles the scss file into a css file so that webpack can include it in the bundle.

And then, in the modules section of the webpack.config.js, we need to add a rule for our scss files, which is basically the same rule as for css files but with the sass processor listed last (loaders run from last to first, so the file goes through sass-loader, then css-loader, then style-loader).

{
   test: /\.scss$/,
   exclude: /node_modules/,
   loader: ["style-loader", "css-loader", "sass-loader"]
},

And Beyond

There are so many other things we could add to our chain at this point: linters, testing frameworks, etc., but this series covers what we at Sympraxis do at a minimum for our projects that reside in classic SharePoint. I really hope you’ve enjoyed reading the series, and if you have any questions please feel free to leave a comment below. If you’re interested, you can download the complete files that I’ve discussed in my series from my GitHub repo under the “Development Toolchain” folder.

Happy Coding!

