I’ve been using the new Microsoft Edge browser as my default browser for several weeks now. I initially missed some Chrome extensions that made my life a little easier, and the Microsoft Edge Extensions store doesn’t have much to offer yet. That was until I realised that Microsoft has made it possible to install extensions directly from the Google Chrome Web Store into Edge! It’s super simple as well.

Open the Extensions option

Toggle the ability to “Allow extensions from other stores”, then click Allow.

Now browse to the Google Chrome Web Store by typing this URL in: https://chrome.google.com/webstore

You should see a blue bar along the top of the page letting you know you can use the “Add to Chrome” button to add any extension from the Google Chrome Web Store to Edge.

That’s all there is to it. Enjoy using your familiar Chrome extensions on Edge.


We are living through the age of digital transformation, where businesses are finding new and novel ways to embrace technology to change processes and gain competitive advantages. This has seen almost all of our once paper-based documentation make its way into digital form. The core Office suite of Word, Excel, PowerPoint and Outlook remains the critical toolset for our users to create and consume this digital content, and these are some of the most popular business applications on the planet.

With users spending so much time inside Office products, being able to extend Office with add-ins provides an enormous opportunity to deliver productivity-saving features directly to users where they need them.

Consider the number of times you are working on an Excel or Word document, or writing an email, and switch to another application to copy/paste data. It’s a lot, isn’t it? Whenever our users have to switch context between different application windows it breaks concentration, breaks focus and diminishes productivity. Office add-ins are an opportunity to bring the data users are trying to get to directly to them. Are they trying to look up data in an internal system (CRM, products, stock, orders, addresses, work items, project tasks)? Identifying these common ‘reference data’ systems and integrating them through add-ins not only makes users more productive but also results in better accuracy and completeness of data.

Automation of document construction is another common scenario for Office add-ins. This is where the add-in can assist the user in bringing “building blocks” of content together and automating repetitive tasks for key document types that are created frequently as part of a business process.

Imagine opening up Word to create a document and not having to leave Word to actually get the job done because everything you need is just there, available at your fingertips! Tighter integration between our systems is the promise of add-ins. It’s what users want, it’s what businesses want.

So this raises the question: why do we see so little investment in add-in development?

I believe this has a lot to do with the stigma associated with the old COM/VSTO add-in model:

  • Difficult to develop (unmanaged code, prone to memory leaks)
  • Difficult to deploy/update
  • Could affect performance of the host application (badly written add-ins could easily cause poor startup times and freezing of the host application)
  • Could affect stability of the host application (badly written add-ins could make the host application hang or crash completely)
  • Locked developers into using ‘old’ technology
  • Would only work with the Windows version of Office (not Mac, or online)

None of this applies to the current add-in model, based on web standards and technologies:

  • Any web application developer can get up to speed developing Office add-ins very quickly
  • Deployment is centralized and add-ins can be acquired from a public or enterprise store. Being web based, there is no installation of code.
  • Performance and stability of the host application are protected, so badly written add-ins cannot affect the host application.
  • Since add-ins are essentially web apps, any web technology can be used for the web front-end of the add-in, and there’s no restriction on your technology choice for back-end and hosting (if your add-in requires back-end services)
  • These web based add-ins run everywhere that Office does (Windows, Mac, Online Browser, iOS, Android).

Ultimately, realizing the value and potential of Office add-ins comes down to education and awareness.

So what’s the call to action?

I’ll be speaking about Office add-ins at these upcoming conferences and would love to meet and talk add-ins.

The Digital Workplace Conference – Sydney (6-7 August 2019)

The European SharePoint, Office 365 and Azure Conference – Prague (2-5 December 2019)

Integrate. Dominate.


Fiddler is one of those development tools that makes me wonder what I did before it was around. If you’ve been hiding under a rock for the last decade (or are just getting into web development), Fiddler is a web debugging proxy and it’s super useful for developers to see all the HTTP network traffic. Once you start Fiddler up on a machine, all web traffic (HTTP requests/responses) is routed through Fiddler before heading out to its destination, and then the responses come back through Fiddler and are routed back to the calling application.

This means Fiddler allows you to inspect all web traffic going out and coming back in. More than that, however, it lets you… well… fiddle with it!

All web traffic is shown much like a log listing in the Fiddler GUI.

I find most developers are familiar with the basics of using Fiddler to observe and do basic inspection of the requests and responses that are happening. I see far fewer developers utilizing some of the really powerful manipulation aspects that Fiddler is capable of.

These are some of the scenarios that I use Fiddler for that I think are really powerful to know about as a developer. I’ve been wanting to write this post for a long long time. I hope you pick up at least one new tip from this list!

#1 Replay a request to quickly debug back-end web methods

This is great for when you are developing code that is triggered by an HTTP request, such as a Web API controller method or an Azure Function.

Here’s the scenario: you have some front-end app that creates a request to your back-end code (Web API, Azure Function etc.). You are debugging the back-end code and trying to determine what’s going on, but each time you need to use the front-end (as a user) to get it into the state of issuing the request to your back-end. Sometimes this isn’t trivial to just mock up using something like Postman due to headers, authentication tokens, dynamic values etc.

This is a great scenario for using Fiddler’s ability to replay a request.

Here’s a super simple controller method so you can see the mechanics. It just returns the current server time.
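The original post shows this method as a screenshot; here’s a minimal sketch of what such a controller method might look like (the class and route names below are my own placeholders, assuming ASP.NET Web API 2):

using System;
using System.Web.Http;

// Illustrative ASP.NET Web API 2 controller - the names here are placeholders,
// not taken from the original post.
public class ServerTimeController : ApiController
{
    // GET api/servertime
    [HttpGet]
    public IHttpActionResult Get()
    {
        // Returns the current server time - a dependency-free endpoint that's
        // handy for demonstrating request replay in Fiddler.
        return Ok(DateTime.Now);
    }
}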

Imagine we are trying to debug this: we run the project on our dev machine with a breakpoint in the controller code. Then we need to trigger our method by using whatever the front-end app is. If we have Fiddler running we will be able to see this request come through from the front-end app to our controller method.

The code will stop at our breakpoint and we can try to debug and figure out what’s going on in our code.

After you’ve gone through one execution of the method, when you want to trigger it again you don’t have to manually go through the front-end app again. Just go to the request in Fiddler, right-click | Replay | Reissue Requests.

Fiddler makes a copy of the original request, including headers and body content, and issues it again.

This results in the back-end method being called again, and you’re quickly back into debugging without having to worry about driving the front-end app to issue the request.

#2 Replay a request with modified request content

This follows on from the above scenario of being able to quickly get into debugging your back-end code triggered by an HTTP request. It’s great to be able to replay a request, but what if your request contains a lot of dynamic data, either in the headers or the URL parameters, or it’s a POST request with complex objects serialized in the body?

This is where we can use Fiddler’s super powerful Composer tab.

Select the Composer tab (on the right side of the Fiddler interface)

You can then drag/drop any request from the left-hand side. This takes a copy of the request (just like it did when we replayed an existing request).

Instead of executing the request immediately, you get to play with it in the composer tab. You can edit any part of it, the URL, headers or body.

Click Execute and the request will be made; you can then inspect it like any other request.


I had been using Application Insights for a while with a Web API project and found it immensely valuable for monitoring how the Web API was performing and discovering any issues in production.

I only recently stumbled across the fact that for ASP.NET projects, the Application Insights telemetry is surfaced and available in your source code within Visual Studio using CodeLens.

https://docs.microsoft.com/en-us/azure/azure-monitor/app/visual-studio-codelens

I was wondering why I wasn’t seeing this for my project and it seems you only get this magic happening automatically if your project is ASP.NET. My project was a Web API project.

Luckily it didn’t take long to find the answer here:
https://feedback.azure.com/forums/357324-application-insights/suggestions/15697332-support-code-lens-exception-display-for-all-net-p

I followed the steps posted in a comment on that thread: add the following into an existing PropertyGroup within the project’s .csproj file.

<ApplicationInsightsResourceId>/subscriptions/[subscription-id]/resourceGroups/[resource-group-name]/providers/microsoft.insights/components/[app-insights-resource-name]</ApplicationInsightsResourceId>

Note: When substituting values into the above, the Resource Group and App Insights resource names are the display names shown in Azure, not the IDs.
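For context, here’s roughly how that element sits inside the .csproj (a sketch with the same placeholder values):

<PropertyGroup>
  <!-- existing build properties -->
  <ApplicationInsightsResourceId>/subscriptions/[subscription-id]/resourceGroups/[resource-group-name]/providers/microsoft.insights/components/[app-insights-resource-name]</ApplicationInsightsResourceId>
</PropertyGroup>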

You will need to close and re-open the project in Visual Studio for the changes in the .csproj file to take effect. Once this happens you should be able to right-click on your project in the Solution Explorer window and see an Application Insights menu option.

Now you’re all set up, learn more about how to use all this data that is at your fingertips!
https://docs.microsoft.com/en-us/azure/azure-monitor/app/visual-studio


When creating more complex PowerPoint slides you may need to overlap objects or entirely cover objects with others. This makes the objects very hard to select, as they are underneath each other.

There is an easier way than just clicking with increasing frustration!

Home | Drawing | Arrange | Position Objects | Selection Pane…

Now you have the Selection pane docked to the right of your slide. This lets you easily see a list of objects on the slide, and you can select the active object by picking it in this list. A simple end to frustrated clicking.


This is the continuation of my experience with testing the auto-scaling capabilities of the Azure App Service. The first post dealt with scaling out as load increases, and this post deals with scaling back in when load decreases.

On the Scale Out blade of the App Service Plan you can see the current number of instances it has scaled to, along with the Run History of scaling events (such as when each new instance was triggered and when it came online). Here we can see that under load my App Service Plan had scaled out to 6 instances, which was the maximum number of instances I’d configured for it to scale out to.

At this stage I removed the heavy load testing to watch it scale back down. In fact I cut all calls to the service so it was experiencing zero traffic. I was starting to get worried after 20 mins as I still had 6 instances and no sign of it attempting to scale back in. Then I realised I had to also set up scale-in rules – it won’t do it by itself!

On the same page where I configured the rules for when to scale out, I also needed to configure rules to trigger it to scale in again. I configured a simple rule that when CPU usage dropped below 70% it was time to scale back in by an instance. I set the cool-down period to 2 minutes so I didn’t have to wait around forever to see it scale in.

After saving those configuration changes, I expected to see the number of instances scale in and decrease by 1 every 2 minutes until it was back to a single instance.

Suddenly an email notification popped up – it had started happening!

I received the notifications and watched the Azure Portal over the next few minutes as it scaled back from 6 to 5 to 4 to 3 instances. Then it stopped scaling in. I waited over half an hour, scratching my head as to why it was failing to scale in. The CPU usage graph showed it was well under the scale-in threshold of 70%; in fact it had peaked at 16% during that half hour of waiting. Why were these instances stuck running? I was paying for those instances and I didn’t need them.

On re-reading the Microsoft documentation on scaling best practices, it became clear what was happening. You have to consider what Azure is trying to do when it’s scaling in. When Azure looks to scale in, it tries to predict the state it will be in after the scale-in operation, to ensure it’s not placing itself in a position where it would immediately trigger another scale-out operation. So let’s look at how Azure was handling this scenario.

It had 3 instances running, and from my first blog post on scaling out we had already established that each instance sat at 55% memory usage and nearly 0% CPU usage when idle. The trigger to scale in was when CPU usage was lower than 70%. The average CPU usage was under 15%, so Azure had passed the trigger to scale in. But let’s look at what Azure thinks will happen to memory utilisation if it were to scale in. Each of the instances had 1.75GB of memory allocated to it (based on the size of an S1 plan). So in Azure’s eyes my 3 instances, each running at 55% memory usage, required a total of 2.89GB (1.75GB*0.55*3). If we scaled in and were left with 2 instances, then those 2 instances would have to be able to handle the total memory usage of 2.89GB (1.44GB each). Let’s do the maths on what the resulting memory usage on each of our instances would be: (1.44/1.75*100) ≈ 82%. Remember the scale-out rules I had set? They were set to scale out when memory usage was over 80%. Azure was refusing to scale in because it thought that would result in an immediate scale-out operation. What Azure was failing to take into account was the baseline memory: every instance uses 55% memory (or 0.96GB) just doing nothing.
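To make that prediction concrete, here’s a rough worked model of the calculation as I understand it (my own sketch for illustration, not Azure’s actual algorithm):

// Rough model of the scale-in prediction described above (my own sketch,
// not Azure's actual algorithm).
double instanceMemoryGb = 1.75;     // memory per S1 instance
int currentInstances = 3;
double observedMemoryUsage = 0.55;  // idle baseline observed on each instance

// Azure assumes the memory currently in use must be absorbed by the remaining instances.
double totalMemoryGb = instanceMemoryGb * observedMemoryUsage * currentInstances;        // ~2.89 GB
double projectedUsage = totalMemoryGb / (currentInstances - 1) / instanceMemoryGb * 100; // ~82%

// ~82% is above the 80% scale-out threshold, so Azure declines to scale in,
// even though in reality each remaining instance would just sit at its 55% idle baseline.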

In reality, if Azure did scale in by one instance the remaining 2 instances would both continue to run at 55% memory usage and then scaling down to 1 instance would again result in the last single instance running at 55% memory usage.

Azure auto scale-in isn’t a perfect predictor of what’s going to happen, and you will need to pay careful attention to the metrics you are scaling on. My advice would be to test your scaling configuration by load testing, as I’ve done here, so that you have confidence you’ve actually seen what happens under load. As this little test has proven, the behaviour isn’t always obvious or what you’d expect, and a mistake here could lead to some nasty bill shock.

If you are stuck with scenarios where you can’t auto scale in, or you are concerned about scale-in not working, here are a few options to consider:

  • Configure a scheduled scale-in rule to forcibly bring the instance count back to 1 at a time of day when you expect the least traffic
  • Configure a periodic alert to notify you if the number of instances is over a certain amount; you can then manually reduce the count back to 1.


I was recently testing the automatic scaling capabilities of Azure App Service plans. I had a static website and a Web API running off the same Azure App Service plan. It was a Production S1 Plan.

The static website was small (less than 10MB) and the Web API exposed a single method which did some file manipulation on files up to 25MB in size. This had the potential to drive memory usage up as the files would be held in memory while this took place. I wanted to be sure that under load my service wouldn’t run out of memory and die on me. Could Azure’s ability to auto-scale handle this scenario and spin up new instances of the App Service dynamically when the load got heavy? The upside, if this worked, was obvious: I wouldn’t have to pay the fixed price of a more expensive plan that provided more memory 100% of the time; instead I’d just pay for the additional instance(s) that get dynamically spun up when I need them, so the extra cost during those periods would be warranted.

As an aside before I get started, it’s worth pointing out that memory usage reporting works totally differently on the Dev/Test Free plan than on Production plans, and I’d guess this has to do with the fact that the Free plan is a shared plan where it really doesn’t have its own dedicated resources.

What I noticed here is that if I ran my static website and Web API on the Dev/Test Free plan then my memory usage sat at 0% when idle. As soon as I changed the plan to a Production S1, memory sat at around 55% when idle.

Enabling scale out is really simple: it’s just a matter of setting the trigger(s) for when you want to scale out (create additional instances). I was impressed with a few of the other options that gave fine-grained control over the sample period and cool-down periods to ensure scaling would happen in a sensible and measured way.

Before configuring my scale-out rules, I first wanted to check my test rig and measure the load it would put on a single instance, so I knew at what level to set my scaling thresholds. This is how the service behaved with just the one instance (no scaling):

You can see that the test was going to consistently get both the CPU and Memory usage above 80%.

Next I went about configuring the scale out rules. Here I’ve set it to scale out if the average CPU Usage > 80% or the Memory Usage > 80%. I also set the maximum instances to 6.

I also liked the option to get notified and receive an email when scaling creates or removes an instance.

So did it work? Let’s see what happened when I started applying some load to both the static website and the Web API.

Before long I started getting emails notifying me that it was scaling out; each new instance resulted in an email like this:

These graphs show what happened as more and more load was gradually applied. The red box is before scaling was enabled and the green box shows how the system behaved as more load was applied and the number of instances grew from 1 to 6. While the CPU and Memory usage dropped, notice how the amount of Data Out and Data In during the green period was significantly higher? Not only was the CPU and Memory usage on average across the instances lower, but the service was able to process a much higher volume of requests.

I have to say, I was pretty impressed when I first watched all this happen automatically in front of my eyes. During testing I was also recording the response codes from every call I was making to the static website and Web API. Not a single request failed during the entire test.

But what happened when I stopped applying such a heavy load on the website and Web API? Would it scale down just as gracefully? Stay tuned for part two of this test to find out.


Microsoft has been building out its ‘cloud first’ strategy for many years now. As once separate on-premises server products have morphed into online services, we have been reaping the benefit of tighter integration between these services. We’ve seen this most strongly with the Office 365 services, where SharePoint, Office (Word, Excel, PowerPoint, Outlook, OneNote), OneDrive and Teams have become tightly integrated. From a development point of view, this cloud strategy has provided Microsoft with a way of delivering a single unified API across the entire surface area of Office 365, covering all these product services. This API is called the Microsoft Graph API. Recently Microsoft added a bunch of its security services into the Graph API, providing a standard interface and uniform schema to integrate security alerts, unlock contextual information, and simplify security automation.
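To give a feel for what calling it looks like, here’s a minimal sketch that lists security alerts over HTTP (my own example, not from the post; it assumes you’ve already acquired an Azure AD access token with a suitable permission such as SecurityEvents.Read.All):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class GraphSecuritySample
{
    static async Task Main()
    {
        // Placeholder - acquire a real token via Azure AD (e.g. MSAL) first.
        var accessToken = "<access-token>";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // List security alerts aggregated from the integrated security providers.
            var response = await client.GetAsync(
                "https://graph.microsoft.com/v1.0/security/alerts?$top=5");
            response.EnsureSuccessStatusCode();

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}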

Additional Reading:

The Microsoft Graph Security API is now Generally Available

Use the Microsoft Graph Security API

Microsoft is running a developer hackathon (the Microsoft Graph Security Hackathon) which simply involves using the new security APIs to see what good use you can put them to. There are some awesome prizes on offer ($15,000 worth!), and some great judges will be looking at the entries, which will give your submissions and ideas some great exposure.

Get coding and submit your entry before March 1, 2019.

Learn all about the Microsoft Graph Security Hackathon on the DevPost site.

