This is the next installment in a series of blogs on the subject of Cloud Transformation. When an organization transitions to cloud technology, it often changes the way that organization thinks about IT solutions. Cloud technology allows an organization to operate in a more experimental, disposable way. This freedom allows an organization to “try out” a totally new solution with NO long-term cost implications from an infrastructure perspective.

This same “disposable” freedom drives another change in terms of solution life span. In pre-cloud thinking, IT groups often took a very long view of any new solution: years, versus days or months. With cloud technology it is now possible to develop, test, and deploy totally new solutions in days or weeks to “test the water”. This shorter-term thinking requires a different approach to governance, security, testing, etc.

This rapid ability to deploy new solutions also drives more precise, narrowly focused solutions for very specific target audiences. These solutions will often find themselves in conflict with current IT controls that have been in place for years. No, this does not imply we need to eliminate those controls; however, it does mean we might need a more precise way to quickly evaluate the risk profile of a new solution. Many of the current “gates” within these control processes establish very static risk profiles which are simply not appropriate for these smaller, focused solutions.

Cloud technologies assume a significant amount of “reuse” of critical assets to speed up the process of deploying new solutions. This is another change in thinking that needs to occur in our normal pattern for verification and validation (V&V) of new solutions. If a new solution is being deployed with a previously approved automated infrastructure run book, then the run book should not require another round of V&V. If the solution is using a number of previously “approved” APIs, then those parts of the solution should not require a second approval.

These types of “changes in thinking” are not easy to make and require a conscious effort by everyone to work out the “new thinking”. This is NOT a call to throw out everything and start over but rather a call to make some adjustments in our processes while making sure the organization stays compliant and secure.

New Capabilities

With the adoption of cloud technology, any organization immediately gains access to new capabilities which can be made available to their internal and external audiences. In the following paragraphs I will discuss just a few of those capabilities. The reader must be aware that this list of capabilities is much longer than just the few I have chosen to discuss and that list is growing on a daily basis. Most public cloud providers are adding thousands of new capabilities every year.

One large category of capabilities is the “citizen development tools”. This typically refers to tools that non-technical and technical users alike can use to produce and publish production solutions with almost NO assistance from their internal technology organization. This group is also called “low-code platforms”. Some of the players in this group are OutSystems, Mendix, Kony, Appian, and Salesforce, just to name a few. The offerings from these vendors can handle all types of solutions, from complex business processes to multi-page websites that serve thousands of users.

When an organization decides to make these capabilities available to their internal users, there needs to be a solid plan developed prior to deployment. Internal users will demand these solutions sooner or later, so it makes good sense for most technology organizations to proactively start working on that plan. Once again, this type of capability will require a change in traditional thinking. The technology organization will need to work with executive leadership to craft an agreement on the overall guardrails to be put in place as part of the plan. A good place to start understanding how to plan for the use of these capabilities is to review some write-ups like this one from Forrester.

Another large category of capabilities would be “artificial intelligence (AI) and machine learning (ML)”. This group of capabilities has many different dimensions that need to be considered, and its broad nature naturally leads to many different conclusions on how to implement these capabilities. Some of the major players delivering these capabilities, like Microsoft Azure and Amazon Web Services, have struggled with the same questions any organization will need to address.

This category has the potential to create significant new capabilities while also opening up large pools of risk for any organization. Organizations will need to address a broad range of subjects, from organizational values to privacy rules, to determine how to utilize these capabilities. This category, like almost all of the cloud technology categories, has the potential to overwhelm an organization and its customers in record time.

None of these cautions about changing the thinking prior to broad deployment should be justification to halt working with these capabilities. The author would suggest that as you start your first pilot projects with these capabilities, be sure to run a parallel project that starts the discussion on how you will change your thinking. Many of the cloud technologies need to be explored to some minimal depth in order to understand the right questions to challenge your organization’s thinking.

Transferable Skill Sets

It is no secret that the adoption of cloud technology will require some new and sometimes unique skills. Technologies like AI, ML, IaC, serverless computing, etc. do require someone that has training on the basics of these technologies. However, I think, too often there is a mindset that assumes you must go outside your current internal technology resources to find or develop these and other cloud skills.

Every organization thinking about adopting cloud technology for the first time should probably engage an outside firm that can help develop their cloud strategy. There are two primary reasons for this recommendation.

  • First, most organizations will not have a broad enough or deep enough background in all the important parts of a solid cloud strategy. The right outside firm will have experienced consultants with an array of experiences across multiple clients, which helps them identify the right questions to ask and answer.
  • Second, they look at your situation with an “outsider’s” perspective and will not be heavily influenced by natural internal bias.

However, once the broader strategy is developed, the technology organization should start looking at their internal team to develop those cloud skills. There will be some highly specialized skills that will simply take too long to develop internally, and you will need to supplement with outside consulting assistance. When this is done, there should also be an internal resource aligned with the outside resource to gain as much on-the-job training (OJT) as possible.

Potential Impacts and Considerations

There are other cloud technology skills that can be easily added to the current technology workers’ skill sets. All of the “soft” skills required today will still be needed tomorrow in the cloud technology world. An argument could be made that these “soft” skills will be MORE important in the cloud technology world simply because of the number of people that will be impacted by the use of cloud technology.

When you compare the duties in job descriptions for on-premises versus cloud-based positions, they shift from “design, procure, install, configure, deploy, manage” to “design, configure, deploy, manage”. Generally speaking, 60% – 80% of the duties will be very similar. The remaining 20% – 40% is the net difference, consisting primarily of training on new tool sets and architectural designs. Both of these items can be addressed with the normal training and educational offerings in the industry today. Most of the public cloud providers also provide “free” subscriptions for individuals to use as training sandboxes. This allows an employee to play around with the tools in a completely safe environment. This same environment can be deleted and rebuilt many times over to gain real-world experience in the cloud.

Based on this blog, the reader should be developing a clearer picture of the many other change points to be considered when adopting cloud technology. Many organizations will get very focused on the impacts to applications and infrastructure and forget about all these other change areas. Without a broad-based cloud strategy established at the front end, before the adoption of cloud technology, it will be very difficult for any organization to be successful with cloud technology.


Welcome back to part 3 of our Direct Routing blog series! In this article we’ll discuss voice routing basics and how you can plan for Direct Routing. With that being said, let’s jump right in with the first topic on dial plans.

Dial Plans

What gives Teams users the ability to dial a phone number? Dial plans of course! Depending on how you dial a phone number, the PSTN network will want the number to be dialed in a specific format. What happens if you dialed the phone number in a different way than what the PSTN network accepts? Well, you probably guessed it, the call would fail. Fear not though, this is where dial plans come into play.

With phone number normalization, all phone numbers are normalized to E.164 format, which includes a “+”, then the international dialing code (for a US number this will be 1), then the area code, and lastly the local number. For example, +15551234567 would be a number located in the US. Understandably, users won’t want to dial this long string of digits every time they want to call their favorite pizza place around the corner. Luckily we have dial plans built in to handle these types of scenarios, where the number will be normalized to E.164 format “automagically”!

Teams already comes with built-in rules for the most common types of normalization; however, you have the option of configuring custom normalization rules that can be used in scenarios where short-digit calling takes place (i.e. dialing extensions directly). So if you wanted to call that pizza shop around the corner, you can configure a custom rule to handle the user dialing just the local number, and Teams will realize that you are attempting to dial a location in your area code, so only the 1234567 would be required by the person dialing the number. In addition, if you have colleagues used to just dialing the extension of another colleague, they can continue to do this with Teams provided the correct custom rule is in place. I will go into further detail on custom rules that can be created in Teams in a future blog, for those of you that are interested :).
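To make that concrete, here is a minimal, hypothetical sketch of such a custom rule using the Skype for Business Online/Teams PowerShell cmdlets. The dial plan name, pattern, and area code 555 are illustrative assumptions on my part, not values from this post:

```powershell
# Illustrative only: a tenant dial plan with a custom normalization rule that
# expands 7-digit local dialing to E.164 using a fixed (hypothetical) area code 555.
$rule = New-CsVoiceNormalizationRule -Parent Global -Name "US-Local7Digit" `
    -Pattern '^(\d{7})$' -Translation '+1555$1' `
    -Description "7-digit local dialing to E.164" -InMemory

New-CsTenantDialPlan -Identity "US-Local"
Set-CsTenantDialPlan -Identity "US-Local" -NormalizationRules @{add=$rule}

# Assign the dial plan to a user
Grant-CsTenantDialPlan -Identity "user@perficient.com" -PolicyName "US-Local"
```

With a rule like this in place, a user who dials 1234567 would have the number normalized to +15551234567 before it ever hits the PSTN.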

Voice Routing Basics

So, now that we know number normalization is possible via dial plans, let’s discuss how all of this “automagically” happens on the back-end by covering the basics of voice routing.

Let’s say we have someone from Germany making a call to someone within the US. The user in Germany tries to call the TFN (toll-free number) 1-800-123-4567. The first check by the Phone System will be whether or not a voice routing policy exists for this particular user. If not, this user will be unable to use Direct Routing for this call, so they will fall back to a Calling Plan. If the user doesn’t have a Calling Plan in place either, then the call will fail. However, if the user does possess a Calling Plan, the Phone System will then check if the user has a domestic calling plan or an international calling plan assigned. If the user in Germany only had a domestic calling plan, then this call would fail, as they are trying to dial someone internationally. If the user in Germany did have an international calling plan, then the call would go through properly via the Microsoft Calling Plan.

So, let’s back up again, but this time let’s say the user does have a voice routing policy in place. In this scenario, the user would leverage the Direct Routing method to place the call. Within this voice routing policy there are things called “PSTN usages”, which are evaluated in order and can consist of multiple routes. To summarize the scenario so far: the Phone System checks whether the user has a voice routing policy and, if so, evaluates the PSTN usages associated with that policy. If the Phone System runs through all PSTN usages and is unable to find a match, meaning the user doesn’t have a voice routing policy that allows them to dial that phone number, then we go back to the original check on whether the user has a Calling Plan license assigned. From there it would follow the same path as the previous scenario and complete successfully, provided the user has the proper calling plan in place.

Backing up once again, if we did have a match within the voice routing policy, where at least one route matches the dialed pattern, then the call will be routed via the SBC (session border controller) in the route. Each route can have multiple SBCs for the purpose of load distribution as well as fail-over. If this succeeds, then the call would be placed via Direct Routing. However, if all of the SBCs were down (heaven forbid), then the call would fail. I know we just covered a lot in a short amount of time, so let’s go over how you can properly prepare for Direct Routing.
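Before moving on, for readers who like to see the building blocks, here is a rough, hypothetical PowerShell sketch of how a PSTN usage, a voice route, and a voice routing policy fit together. All of the names, the number pattern, and the SBC FQDN are illustrative assumptions:

```powershell
# Illustrative only: build the Direct Routing chain for US toll-free calls.
# Assumes the SBC (sbc1.perficient.com) is already paired to the tenant.

# 1. A PSTN usage is just a label that links policies to routes
Set-CsOnlinePstnUsage -Identity Global -Usage @{add="US-TollFree"}

# 2. A route matches dialed numbers and lists the SBCs that can carry them
New-CsOnlineVoiceRoute -Identity "TollFreeRoute" `
    -NumberPattern '^\+1800\d{7}$' `
    -OnlinePstnGatewayList "sbc1.perficient.com" `
    -OnlinePstnUsages "US-TollFree"

# 3. The policy carries an ordered list of PSTN usages
New-CsOnlineVoiceRoutingPolicy -Identity "TollFreePolicy" `
    -OnlinePstnUsages "US-TollFree"

# 4. Assign the policy so Direct Routing is attempted for matching calls
Grant-CsOnlineVoiceRoutingPolicy -Identity "user@perficient.com" `
    -PolicyName "TollFreePolicy"
```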

Preparing for Direct Routing

When it comes to deploying Direct Routing there are several things you’ll need to take into consideration. Here are the steps that you’ll need to consider when preparing for Direct Routing:

  1. Choose who will be hosting the SBC (self-deployed vs partner hosted)
  2. Determine which licenses you need and assign appropriately
  3. Choose the SBC vendor and model
  4. Configure media bypass
  5. Configure FQDN & Certificates
  6. Configure your firewall
  7. Determine if multiple SBCs are needed and how to configure them
  8. If you are the hoster, determine how you will host the SBCs
Self-deployed SBC vs Hosted SBC

Direct Routing gives you the option of deploying the SBC yourself or having someone host the SBC on your behalf. Both options have certain advantages and disadvantages.

One of the key benefits of deploying the SBC yourself is having full control over the SBC, which means you can make changes on the SBC whenever you want or need. However, this also means that you are solely responsible for the SBC’s configuration, patching, and maintenance, and of course you’ll need to purchase the SBC itself. On the other side of things, if you choose a hosted SBC, one of the biggest advantages is not having to purchase, maintain, and host the SBC. However, this also means that you will have no control over the SBC’s configuration, and it makes the support model more complex in the process.

Note: If you choose a hosted option make sure you understand the partner’s cost structure, support process, and architecture prior to choosing your preferred partner.

Licensing Requirements

If you are looking to leverage Direct Routing in your environment you’ll need to ensure you have the proper licensing. First and foremost, all users will require a Teams license no matter the circumstance. If they want to use Direct Routing with Microsoft Teams, they will also need a Phone System license (included with E5, or an add-on for E1, E3 & E4). If you intend to give your users the ability to call via Direct Routing and via Calling Plan (Domestic/International), then you will also need a Calling Plan add-on license. Lastly, you have the option of adding Audio Conferencing, which gives your users the ability to add PSTN numbers to their meeting invites so participants can dial in over the PSTN. Please note that both Calling Plan and Audio Conferencing licenses are considered optional, and you are not required to possess these licenses to leverage Teams.
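As a quick, hedged sketch of what the license assignment could look like with the MSOnline PowerShell module (the tenant prefix “perficient” and the MCOEV SKU name for Phone System are assumptions; check Get-MsolAccountSku for the actual values in your tenant):

```powershell
# Illustrative only: assign the Phone System add-on to a user.
Connect-MsolService

# List the SKUs available in the tenant (Phone System is commonly MCOEV)
Get-MsolAccountSku

# Assign the add-on; "perficient" is a placeholder tenant prefix
Set-MsolUserLicense -UserPrincipalName "user@perficient.com" `
    -AddLicenses "perficient:MCOEV"
```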

Session Border Controller (SBC) Requirements

In addition to the licensing, you’ll need a certified SBC if you plan on using Direct Routing in your environment. The SBC will serve as the component that connects Teams to the PSTN next hop. The PSTN next hop can include either a 3rd party PBX (Cisco, Avaya, Juniper, Mitel, etc.) or a telephony trunk (SIP, T1, E1, etc.). For the complete list of certified SBCs, you can take a look here. As far as SBC requirements, you must ensure that Microsoft can communicate with your SBC via your SBC’s FQDN. With that being said, you’ll need to have an FQDN so that Microsoft can locate and find your SBC. It is crucial that the domain be registered with Microsoft (NO “.onmicrosoft.com” domains or sub-domains allowed). For example, if Perficient.com is our domain, some valid names for the SBC can include:

  • sbc1.perficient.com
  • noam.perficient.com
  • ussbc1.perficient.com

However, you won’t be able to use something like:

  • sbc1.perficient.onmicrosoft.com
  • noam.perficient.onmicrosoft.com
  • ussbc1.perficient.onmicrosoft.com

When you go to register this domain, you can find documentation on this process here.

Note: To register your domain you’ll need to ensure you have access to the Office 365 admin center with global administrator privileges 
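Once the domain is registered and verified, the SBC itself is paired to the tenant from PowerShell. A minimal sketch, assuming a hypothetical FQDN and signaling port (the exact parameter spellings have varied across Skype for Business Online module versions):

```powershell
# Illustrative only: pair a Direct Routing SBC with the Office 365 tenant.
New-CsOnlinePSTNGateway -Fqdn "sbc1.perficient.com" `
    -SipSignalingPort 5067 `
    -MaxConcurrentSessions 100 `
    -Enabled $true

# Verify the pairing
Get-CsOnlinePSTNGateway -Identity "sbc1.perficient.com"
```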

Certificates

There are also a number of certificates required to validate the identity of the trusted SBC. If you only have a single SBC in your environment, then you will just put the name of the SBC in the SN (subject name) and you will be all set! However, if you have multiple SBCs in your environment, you can structure your certificates in one of the following manners:

Use Case: Minimizing certificate cost

  • Description: You are looking to pair multiple SBCs or change them frequently; a wildcard certificate covers them all
    • SN – gw1.perficient.com
    • SAN – *.perficient.com

Use Case: Balancing cost and security

  • Description: Your company doesn’t change the gateways very often
    • SN – gw1.perficient.com
    • SAN – gw1.perficient.com, gw2.perficient.com, gw3.perficient.com, gw4.perficient.com

Note: If you decide you need to add another gateway to the mix (gw5.perficient.com), then you will need to change your certificate or get a new one. In essence, the more secure you make your SBCs, the less flexible your options become.

Use Case: Maximal security

  • Description: Your company wants to assign each gateway its own certificate
    • SN – gw1.perficient.com
    • SAN – gw1.perficient.com

For a list of all supported CAs, please see the link here.

IP Ranges and Port Requirements

You may be thinking: well, I already put in my port ranges for my Teams client, so shouldn’t I be all set? Good thought, but unfortunately wrong. Simply put, the SBC needs to talk to different FQDNs and use different ports than your Teams client. The SBC can be set up with either a public IP or can be placed behind a NAT. SIP signaling between the SBC and the SIP proxy in O365 will use SIP over TLS.

  • From SIP Proxy –> SBC
    • Source Port: 1024 – 65,535
    • Destination Port: Defined on SBC
  • From SBC –> SIP Proxy
    • Source Port: Defined on SBC
    • Destination Port: 5061

Media follows the same principle as signaling, except in this case the media processor will need to talk to the SBC and vice versa. Media between the media processor in O365 and the SBC will use UDP/SRTP.

  • From Media Processor –> SBC
    • Source Port: 49,152 – 53,247
    • Destination Port: Defined on SBC
  • From SBC –> Media Processor
    • Source Port: Defined on SBC
    • Destination Port: 49,152 – 53,247

For the IP ranges, Microsoft provides distinct FQDNs and IP addresses for each SIP proxy. There are SIP proxies located in 3 different regions around the world (America, Europe, and Asia). The FQDNs and IPs for those SIP proxies are as follows:

SIP Proxy
  • America
    • Traffic manager FQDN
      • sip-du-a-us.pstnhub.microsoft.com
    • Datacenter FQDNs and IPs
      • sip-du-a-uswe2.pstnhub.microsoft.com – 52.114.148.0
      • sip-du-a-usea.pstnhub.microsoft.com – 52.114.132.46
  • Europe
    • Datacenter FQDNs and IPs
      • sip-du-a-euwe.pstnhub.microsoft.com – 52.114.75.24
      • sip-du-a-euno.pstnhub.microsoft.com – 52.114.76.76
  • Asia
    • Datacenter FQDNs and IPs
      • sip-du-a-asea.pstnhub.microsoft.com – 52.114.7.24
      • sip-du-a-asse.pstnhub.microsoft.com – 52.114.14.70

Note: The SBC will need to be able to talk to all IPs and FQDNs from any region for redundancy purposes. So if you are unable to talk to the America FQDNs, the connection will fall back to the Europe or Asia FQDNs.

Media Processor

The media processors have a specific IP range that you will need to allow your SBC to talk to. The IP range is as follows:

  • 52.112.0.0/14 (52.112.0.1 – 52.115.255.254)
Media Bypass

Optionally, if you intend to use media bypass in your environment, there will be a few extra ports and IP ranges. Notice the emphasis on “extra”, meaning you will need to open the following ports in addition to the ones previously mentioned.

  • IP ranges
    • Transport relay: 52.112.0.0/14 (52.112.0.1 – 52.115.255.254)
  • Media Ports (UDP/SRTP)
    • From Transport Relay –> SBC
      • Source Port: 50,000 – 59,999
      • Destination Port: Defined on SBC
    • From SBC –> Transport Relay
      • Source Port: Defined on SBC
      • Destination Port: 50,000 – 59,999
  • Internal Media Bypass
    • Allow hairpin for internal clients
  • External Media Bypass
    • The SBC will additionally need to be able to talk to the Media Relays

This concludes today’s blog on Microsoft Teams Direct Routing. In the next blog we’ll be discussing how to leverage multiple SBCs in your environment as well as how to manage Direct Routing. I hope you have found this blog article helpful, and I hope you’ll tune in for the next one soon!


Microsoft introduced Azure Site Recovery (ASR) in 2014. This Disaster Recovery as a Service (DRaaS) offering allows on-premises workloads to be replicated into Azure. Azure Site Recovery orchestrates and manages disaster recovery for Azure VMs, on-premises VMs, and physical servers.

Azure Site Recovery supports four types of Disaster Recovery as a Service:

  • Azure to Azure site recovery
  • VMware to Azure site recovery
  • Hyper-V to Azure site recovery
  • On-premises (physical server) to Azure site recovery

In this blog, I’m focusing on On-Premises (physical server) to Azure site recovery.

Requirements to set up Azure Site Recovery

1. Azure Recovery Services vault

The vault stores all the configuration and management data for the setup, including the configuration/process server registration, the vault key, and the replication of on-premises virtual machines (VMs) into Azure.

2. Configuration Server & Process Server

The configuration server is an on-premises Windows server (Windows Server 2012 R2 or Windows Server 2016) that runs the Site Recovery components, which include the configuration server, process server, and master target server.

The minimum required server configuration is as follows:

  • CPU – 8vCPUs
  • RAM – 8GB
  • Disk – 500 GB minimum
  • Network speed – 1.6 GB/s

More details about these requirements are available on Microsoft.com.

3. Replicated machines (Client servers)

We are using ASR to replicate the two on-premises Windows physical servers below.

  • X-Web Server
  • Y-DB Server

We have to set up the configuration and process server in the same on-premises environment where the replicated servers reside. We can use an existing stage server or build a separate dedicated server that can communicate locally with the replicated machines using their private IPs. Let’s suppose the name of this new configuration and process server is Z-Server.

Below is the overview flow of our ASR disaster recovery setup.

Disaster Recovery – Process Diagram

  • X-Web and Y-DB communicate with the configuration server (Z-Server) on port HTTPS 443 inbound, for replication management.
  • X-Web and Y-DB send replication data to the process server (Z-Server) on port HTTPS 9443 inbound.
  • The configuration server (Z-Server) orchestrates replication management with Azure over port HTTPS 443 outbound.
  • The process server (Z-Server) receives data from source machines (X-Web and Y-DB), optimizes and encrypts it, and sends it to Azure storage over port 443 outbound.
Setup of the Azure Recovery Vault
  1. First, we need to create the Recovery Services vault in Azure.

Fill in the details and create the vault.

Once the vault is created, we can use it for the Azure ASR setup.
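If you prefer scripting this step instead of using the portal, here is a minimal sketch with the Az PowerShell module (the resource group, vault name, and region are hypothetical):

```powershell
# Illustrative only: create a Recovery Services vault with Az PowerShell.
New-AzResourceGroup -Name "asr-rg" -Location "EastUS"

New-AzRecoveryServicesVault -Name "asr-vault" `
    -ResourceGroupName "asr-rg" -Location "EastUS"

# Download the vault registration key used later by the configuration server
$vault = Get-AzRecoveryServicesVault -Name "asr-vault" -ResourceGroupName "asr-rg"
Get-AzRecoveryServicesVaultSettingsFile -Vault $vault -Path "C:\Temp" -SiteRecovery
```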

Azure ASR Setup for Disaster Recovery

Open the Azure vault and go to Site Recovery.

Select the on-premises location.

You can download the deployment planner to estimate the network bandwidth, storage, and other requirements. In my case, I have selected “Yes.”

This is the first step to build the configuration server (Z-Server) in Azure.

Download the setup file and vault registration key and copy them to the configuration/process server (Z-Server). Run this setup file: MicrosoftAzureSiteRecoveryUnifiedSetup

Installing the Configuration and Process Server

Click “I Accept” to accept the third-party license for MySQL.

Select the registration key that we downloaded from the vault.

In Internet Settings, choose to connect directly if you are not using a proxy.

In Prerequisites, the installer will verify the server’s eligibility for this setup. Once all the prerequisites pass, we can move further.

Create a MySQL password to log in to the MySQL server.

In Environment Details, select “No,” as we are not replicating a VMware server. We are replicating physical servers.

In Install Location, choose where to install the binaries and store the cache.

In Network Selection, select the private IP of the server. Port 9443 is the default port used for sending and receiving replication traffic.

Review the summary and click Install.

Once finished, you have to reboot the server.

You need to copy the passphrase and keep it safe. You will use it to connect to the configuration/process server (Z-Server) from the on-premises servers (X-Web and Y-DB).

There are some additional steps to configure the configuration/process server. You’ll need to add an account.

Account added successfully.

Select the registration key that we downloaded from the vault and connect directly.

Registration succeeded.

Once the setup of the configuration server is complete, we need to move back to the Azure portal, into the Recovery Services vault, for further configuration. Create a new storage account with standard performance and locally-redundant storage (LRS) replication.

For network selection, we can choose the existing network, ASR-VNET, or we can create a new one.

Create the replication policy.
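This post configures the policy through the portal; as a rough, hypothetical equivalent in Az PowerShell (the retention and snapshot values are illustrative):

```powershell
# Illustrative only: a replication policy for VMware/physical-to-Azure replication.
$vault = Get-AzRecoveryServicesVault -Name "asr-vault" -ResourceGroupName "asr-rg"
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

New-AzRecoveryServicesAsrPolicy -VMwareToAzure -Name "asr-policy" `
    -RecoveryPointRetentionInHours 24 `
    -ApplicationConsistentSnapshotFrequencyInHours 4 `
    -RPOWarningThresholdInMinutes 60
```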


In this previous blog post, we discussed 3 ways you can leverage the Microsoft Azure Bot Framework and Bot Service to provide employees within your organization an improved self-service experience. Today, we’ll discuss how these tools provide an integrated environment that seamlessly connects your existing knowledge bases and back-end platforms to your end users – and why that matters.

Here are 3 key benefits of bot integration:

Unlocks Existing Investments

With Microsoft Azure Bot Framework and Bot Service, you can consolidate existing knowledge bases and drive usage of existing products, driving higher ROI from those investments. This not only minimizes or eliminates functional silos but also enables you to integrate with standard APIs.

Simplifies User Processes

Leveraging the Microsoft Azure Bot helps you eliminate unnecessary delays that often result from human interactions by automating those basic inquiries. Remember when we talked about using bots to provide employees with common benefits FAQs during open enrollment in our last blog post? The Azure Bot Framework and Bot Service not only pull data from disparate locations but aggregate that data to provide an improved experience during common interactions such as those FAQs. Plus, the bot integration enables end users to conduct these interactions on the application platform of their choice.

Provides More Efficient Support

Leveraging bot integration enables departments such as HR or IT, which field numerous repetitive questions or requests, to provide a more streamlined process and/or deflect menial requests. This not only lowers support team costs by eliminating the need to manage and mine numerous ad hoc request channels, but also enables more meaningful interactions with requests that are truly unique. Plus, it provides end users with a central channel from which to source the information they are seeking. By pulling information from existing knowledge bases, bots can provide improved context and even warm escalation hand-offs when the situation warrants.

If you are not considering bots as part of your organization’s digital transformation, you should be.  To learn even more about Microsoft Azure Bot Framework and Bot Service, check out our Building Microsoft Azure Self-Service Bots demo video now.


Today’s workforce is highly mobile, technology savvy and resourceful. It is more critical than ever to provide employees with real-time assistance and support that offers a positive employee experience.

To maintain productivity, employees need the ability to do things such as initiate IT tickets or register for HR benefits at their convenience.  They also want to stay up-to-date on company news and events. Supporting these needs with human interaction adds cost and can create unnecessary delays. Besides, most employees today prefer to conduct these actions using the application platform of their choice.

That is where the Microsoft Bot Framework can help. The Bot Framework provides a comprehensive foundation that enables you to build enterprise-grade conversational AI experiences. Put simply, the Azure Bot Framework and Bot Service provide an integrated environment that seamlessly connects your existing knowledge bases and back-end platforms to your end users. End users interact within the apps of their choosing. Integration with your back-end systems enables you to make the most of your existing investments.

Here are three self-service bot use cases

These scenarios underscore how different areas of an organization could leverage the Azure Bot Framework to provide improved end-user (employees in this example) experiences:

  • IT could drive organization-wide adoption of a new tool. Let’s say it’s time for a roll-out of Microsoft Teams. By enabling self-service IT requests directly into ServiceNow using the Azure Bot Framework, employees can initiate their own Teams installation requests. This streamlines the process and helps IT scale their Teams roll out. It also provides a quicker and smoother resolution for employees.
  • HR could leverage Slack as a tool for providing employees with answers to common benefits questions during open enrollment. With the ability to retrieve employee-specific details and documentation from a range of back-end systems, HR would reduce the time spent fielding repetitive employee questions. This would better enable them to address less common situations directly.
  • Sales could deliver real-time customer history to sellers from Salesforce or Dynamics CRM and notify them of customer-specific opportunities or recent news via Twilio messaging.

Providing employees with these and other self-service options enables quicker resolutions and answers to common questions. Or, in the case of the sales scenario, an improved business process that will drive higher ROI. To learn more about building an Azure Self-Service Bot, check out this quick demo.


By now, if your organization has had conversations about digital transformation, chances are they have included discussions around artificial intelligence such as bots. If not, it may be time to jump start that dialogue.

As virtual assistants that can help perform simple, repetitive tasks, such as making a reservation or addressing customer queries or issues, bots can be used in a variety of ways. The most common use case for bots is providing customer service support. For example, if your internet lags again, you may interact with a bot “technician” to help troubleshoot the issue.

Self-service bots can be used to automate repetitive employee interactions where a human is not fully warranted. For example, self-service bots could enable employees to submit IT service ticket requests. Or, they could be used to provide employees with answers to common benefits questions.

Microsoft Azure Bot Service is a cost-effective, serverless chatbot service that can be scaled on demand. This enables you to build conversational experiences for your customers or employees, as in the use case provided above. The Azure Bot Service makes it easy to build any type of bot – from a Q&A bot to your own branded virtual assistant. Plus, the Azure Bot Service uses a comprehensive, open source SDK and tools to easily connect your bot across popular channels and devices. In the use case where bots support employee interactions, end users could interact with the bot using the messaging platform of their choice such as Skype, Slack, Microsoft Teams, Cortana or Facebook Messenger.

Here are the top benefits of using self-service bots:

1. Operational Efficiencies

Bots enable you to automate basic inquiries while still providing real-life interactions. They free up human resources from responding to repetitive, mundane tasks. This enables them to focus on more strategic and business-critical tasks. Think back to the previously suggested use case of leveraging a bot to provide answers to common benefits questions. Think of how much time that could save your HR team during open enrollment!

According to this article in TechRepublic, 70% of IT leaders reported that their organizations are actively using bots in place of humans to drive business efficiencies.

2. Cost Effective

There are numerous ways to view the potential cost benefit of leveraging Azure Bots. First, hiring more humans to handle repetitive, mundane tasks is more expensive than it would be to leverage a chatbot. Plus, there is the scale factor to consider, as bots can easily communicate with hundreds or even thousands of end users at the same time versus the one or two that a human could handle.

Automation is another cost benefit associated with bots. Most of us are not very productive if we’re bored having to do the same things over and over. Leveraging chatbots, you can automate those tasks, freeing up your team to focus on meaningful work, ultimately driving their productivity higher.

3. Better End-User Experiences

Whether your end users are your customers or your employees, their experiences drive your organization’s success. Not only are bots accessible anytime, but they don’t get weary after answering the same question a million times. This ensures that end users get the answers or support they need, when they need it via a more positive interaction. Bots are also not as susceptible to human error, further ensuring their answers will be accurate and more helpful.

One last consideration: studies indicate that 75% of internet users are adopting one or more messenger platforms (like those we previously named). So it might be helpful to note that Azure Bot Service integrates across multiple communication channels. This enables you to reach more end users, more often, through the platform of their choice.

Bottom line: If you are looking to drive operational efficiencies through digital transformation, it’s time to have the conversation about bots. Check out our Building Microsoft Azure Self-Service Bots demo video today.


Security is one of the most important aspects of information technology. As technology advances, so do the requirements for a robust security system to prevent breaches and threats to your data. To address this, Microsoft will be implementing Microsoft Identity Platform 2.0, which utilizes OAuth 2.0. In this article we’ll be discussing this latest evolution of Microsoft’s Azure Active Directory identity service and show you how to prepare for this change in your environment.

What is OAuth 2.0?

Simply put, OAuth 2.0 is an authorization protocol that supersedes the original OAuth protocol. OAuth 2.0 provides authorization flows for web applications, desktop applications, mobile phones, and living room devices. OAuth 2.0 uses a method by which you can access web-hosted resources on behalf of a user via a third-party application ID. That’s great to hear… but how would this impact your Skype for Business environment? Great question! This comes into play for Skype for Business when you have 3PIP phones. 3PIP is short for 3rd party IP, meaning Skype for Business certified IP phones such as AudioCodes, Crestron, Polycom, and Yealink.

Who does this affect?

This update to the 3PIP firmware will only be required if you fall under one of these 2 scenarios:

  1. You have a strictly Skype for Business Online environment
  2. Skype for Business hybrid w/ Modern Auth enabled
Who isn’t affected?

This update to the 3PIP firmware will NOT be required if you fall under one of these 2 scenarios:

  1. Skype for Business on-premises (no hybrid)
  2. Skype for Business hybrid w/ Modern Auth disabled
How do I update my 3PIP firmware?

The 3PIP manufacturers have made a code change to embed the application ID into their firmware. Each manufacturer will have a different application ID, so if you have multiple types of 3PIP phones in your environment, you will have to update the firmware with the new application ID for each phone manufacturer. Each vendor’s application ID needs approval by a tenant admin before phones with that ID/from that 3PIP manufacturer will be able to sign into your tenant. This means the approval must be completed before you move to the updated firmware.

Where do I go for this approval process?

Fear not, Tom Talks has included the links to the application ID for each vendor (Yealink link coming soon)!

Once you navigate to the corresponding 3PIP manufacturers link, you’ll be prompted with the following:

In the image above you’ll see a breakdown of the things that the 3PIP manufacturer will need your permission to access. Once the permissions have been granted, you’ll see a confirmation that the approval has been properly consented to. You will need to grant these permissions once per 3PIP manufacturer, which will cover all models of that specific manufacturer (i.e. once for AudioCodes, once for Crestron, once for Polycom, and once for Yealink).

Note: Granting the required permissions for the 3PIP phones grants no additional functionality than what the 3PIP phones already have in your environment today.

To confirm that the permissions have been granted for the specific 3PIP firmware update, you can hop on over to the Azure AD admin center > Enterprise Applications > All Applications, and look for the 3PIP application ID.
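You can also check this from PowerShell with the AzureAD module; here is a quick sketch (the application ID below is a placeholder, not a real vendor ID):

```powershell
# Illustrative only: confirm a vendor's 3PIP application ID has a service
# principal in your tenant, which indicates consent was granted.
Connect-AzureAD

# Replace with the actual application ID published by your phone vendor
$vendorAppId = "00000000-0000-0000-0000-000000000000"

Get-AzureADServicePrincipal -Filter "appId eq '$vendorAppId'" |
    Select-Object DisplayName, AppId
```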

What firmware version will I need to update to?

At this time, I only have the Polycom firmware versions but will be updating this article as other manufacturer firmware version details are released.

Device name | Software Version | Timeline
VVX Phones | 5.9.3 | Mid-May
Poly Trio | 5.9.0 Rev AB | Mid-May
Group Series | 6.2.1.1 | Mid-June
What is the deadline?

Luckily you still have more than a month to get this in place. As long as you act before July 1st, 2019, then you won’t have any issues signing your 3PIP phones into Skype for Business Online. I will update this article with any news released on this topic as it becomes available. I hope you have found this helpful and if you want to check out the official Microsoft documentation on this topic, you can do so here.


ChefConf 2019 kicks off in Chef’s (and my) hometown of Seattle in less than two weeks on May 20th.  If you’re interested in going and haven’t already registered, check out more information here and use code Hugs4Chef19 for 10% off your registration.

This year promises to be a pretty significant year at ChefConf.  Since last year’s conference, the company has gone through some significant changes – new leadership, product updates, go-to-market approaches, etc.  Here are some things I am eager to hear more about:

  • Chef Habitat:  Chef has had the “infrastructure as code” and “compliance as code” story lines solidified for some time, but only in the past year has the Habitat “managing applications as code” story really started to coalesce in a meaningful way.  Chef Habitat is such a fundamental change and innovation in application development (while also solving immediate needs like legacy app modernization) that bringing that story together takes time.   You can read more about Perficient’s thoughts on Habitat here and here.  I am also eager to hear more about the “Habitat managed Chef” and “Habitat managed InSpec” patterns and how to integrate them in new and interesting ways.
  • Chef as an Open Source Company & the Enterprise Automation Stack(s):  ICYMI, Chef announced its move to a full open source model at the start of April as well as some significant updates to their product naming and bundling.  Chef’s new Enterprise Automation Stack is a compelling way for customers to address automation in a fundamentally holistic sense – infrastructure, compliance, and applications – whether on-prem, in one or many clouds, etc.  The move to full open source gives additional opportunities for innovation on the Chef platform.  With just a bit over a month since this change, I’m eager to see how Chef will communicate the many implications of this change to clients and partners.
  • Chef and the ecosystem: For partners like Perficient who deliver enterprise-grade DevSecOps transformations, Chef and all its wonderful parts are only part of the people, process, and tools story.  Chef has always sought new ISV and major cloud integrations.  I look forward to hearing more about how Chef has advanced these partnerships, specifically with the launch of Azure DevOps and the continued enterprise progress Google Cloud is making.

Chef is one of the most fun, energetic and well-run conferences I’ve ever attended.  This year, there are some amazing workshops and sessions available to attendees and as per usual, I won’t be able to attend all of them.  Here are some I’m very interested in attending:

  • Monday:  I’ll be attending the Managing a DevOps Transformation workshop to get an update from Chef’s Professional Services team on their point of view of successful patterns driving DevOps transformation, and my colleague Sean Wilbur  will be attending Modern Operations on Azure – Automate and monitor your infrastructure.
  • Tuesday/Wednesday:  I’m going to have a tough time both days, as there are multiple competing sessions on both days, all of which are interesting.

    Tuesday Wednesday Schedule for ChefConf 2019

I’m looking forward to connecting with other ChefConf attendees during breaks, at the evening events and parties, and at Perficient’s booth in the Expo Hall (#100).  I hope you’ll stop by, introduce yourself, grab some commemorative ChefConf 2019 and Seattle specific stickers (they are super cool!), and enter to win one of our prizes.

Before you go, be sure to download the ChefConf 2019 Official App, start building your schedule, and add me and your other Chef friends as connections!  I look forward to seeing you there.


Today’s post marks the beginning of the final section of this blog series dedicated to boosting adoption of Office 365 through organizational change management (OCM). The previous four posts set the stage for what we’re about to do: accelerate user readiness in the adoption of Office 365.

To accelerate readiness for and adoption of O365, leaders should design and implement electronic and face-to-face resources (listed below) to support users through the change.

Electronic Resources

  • Custom email messages
  • Quick reference guides
  • Frequently asked questions (FAQ)
  • Portal site to serve as a central hub and just-in-time resource
  • Electronic surveys to assess progress, risks, and readiness by users before and after implementation
  • “Change Management Inbox” for users to email questions and feedback

Face-to-face Resources

  • Instructor-led training to provide live end-user training and answer questions in real time
  • Change Champion network with selected employees to serve as the extended project team. Change Champions provide two-way communication, feedback, and peer-to-peer support for the change at the local business unit level.

Do you have employee impact at the top of your list? Do you know why, what, when, and how to prepare your users for the coming change? Do you have the capacity to pull it off? Let us help you explore these questions and more as you prepare for your O365 implementation.

For more insight on how organizational change management boosts adoption of Office 365, download our guide here or below.

About the Author

David Chapman serves as General Manager and Chief Strategist for Perficient’s Organizational Change Management practice. He has more than 20 years of management consulting experience, specializing in the change management discipline. David brings his unique insight to the people aspects of any change from technology implementations to broader strategic organizational imperatives.


At the beginning of 2019, Microsoft released a new set of role-based certifications. These role-based certifications help show that you can keep pace with today’s technical roles and requirements. In addition, they prove that you have a thorough understanding of each particular technology and validate your skills. In this article we’ll be covering one of the newest Microsoft 365 certification exams, called “MS-300: Deploying Microsoft 365 Teamwork”, and what you will need to know to pass this exam with flying colors!

What is the MS-300?

If you plan on taking this exam, this will prepare you for the Microsoft 365 Teamwork Administrator role and the Microsoft 365 Certified: Teamwork Administrator Associate certification. According to Microsoft, you will need to be able to do the following:

  • Configure, deploy, and manage Office 365 and Azure workloads that focus on efficient and effective collaboration and adoption
  • Manage apps, services, and supporting infrastructures to meet business requirements
  • Deploy, manage, migrate, and secure SharePoint (online, on-premises, and hybrid), OneDrive, and Teams
  • Make informed decisions regarding governance
  • Collaborate with the messaging administrator, voice administrator, and security administrator to ensure each area has been configured according to business needs
Prerequisites

According to Microsoft you should already have a good grasp of the following things:

  • Understanding of integration points for Office, PowerApps, Flow, Yammer, Microsoft Graph, Stream, Planner, and Project
  • Understanding of how to integrate third-party apps and services including line-of-business applications
  • Understanding of SQL Server management concepts, Azure Active Directory, PowerShell, networking, Windows server administration, Domain Name System (DNS), Active Directory mobile device management, and alternative operating systems

Now that we’ve gotten that out of the way, let’s start breaking down each section of this exam and where you can go to study these topics.

The exam is broken down into four sections:

  • Configure and Manage SharePoint Online (35-40%)
  • Configure and Manage OneDrive for Business (25-30%)
  • Configure and Manage Microsoft Teams (20-25%)
  • Configure and Manage Workload Integrations (15-20%)
Well, that wraps up all the topics covered on the MS-300 exam! As you may have noticed, you’ll need to know a good deal about SharePoint Online, OneDrive for Business, Teams, Yammer, Groups and Stream, among other things. Microsoft’s Technet (now called “docs”) has a plethora of information on each of these topics and I’ve only touched the surface for these areas, so I encourage you to check that out. In addition, Microsoft has some great learning material and several courses, to help you prepare! I hope you find this study guide helpful and best of luck on your exam!