
Before I start, I’m just going to say that if I can do this with WordPress, then it can be done with anything.

The goal I was going for was to have a WordPress website running on multiple continents (I only used two regions, but this can be scaled out to other regions as needed) so that response time for users on the other side of the planet would be similar to what users on the same side of the planet as me see.  I also wanted the failover for all of this, if an Azure Region goes offline, to be as quick and painless as possible.

Here’s what I had to get set up. I set up two Azure WebApps, both of which were running Windows. One was set up in US Central while the other was set up in North Europe.

Website Failover

I tried doing this with Linux WebApps, but some of the underlying features that I needed to make this all work weren’t available.  Specifically, I needed Site Extensions, which are only available on Windows WebApps today.

After setting up the two web apps, I uploaded the website content to the first Azure WebApp.  Once the site was uploaded and the custom DNS was working on the WebApp, it was time to start making the replication of the website work.

In a new web browser I opened the .scm.azurewebsites.net version of my site.  The SCM site is at https://{Azure WebApp}.scm.azurewebsites.net.  Since my Azure WebApp is named www-dcac-com-central, my SCM site is https://www-dcac-com-central.scm.azurewebsites.net/. (You’re going to have to go to this site a few times, so keep the URL handy.)

Be very careful about the order in which you set up the replication. If you set up the replication from a blank website to your website, then it’ll replicate the blank website. So before doing this, make sure that you have a proper backup of your WebApp BEFORE you configure the mirroring.

Once you have the SCM website open, click on Site Extensions on the right of the menu at the top.

You’ll see a Gallery option.  Select the Gallery option and find the “Site Replicator” extension. Enable the extension by clicking the plus sign.  A window will pop up to complete the install; click Install.

Once the installation is complete, go back to the Azure portal. From the Azure Portal stop the WebApp and then Start it again (clicking restart will not work).
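
If you’d rather script that stop/start than click through the portal, the Az PowerShell module can do it. This is just a sketch; the resource group name below is a placeholder for whatever group your WebApp lives in.

# Stop and then start the WebApp (a plain restart isn't enough for the extension to load).
# 'MyWebAppRG' is a placeholder resource group name.
Stop-AzWebApp -ResourceGroupName 'MyWebAppRG' -Name 'www-dcac-com-central'
Start-AzWebApp -ResourceGroupName 'MyWebAppRG' -Name 'www-dcac-com-central'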

Again in the Azure Portal, select the second site (in my case, the second region is the North Europe region).  From here you need to download the publish profile for the WebApp.  To do this, from the Overview tab select the “Get publish profile” option at the top right.

Just download the file, we’re going to need it in a second.

Go back to the first site’s SCM website (https://www-dcac-com-central.scm.azurewebsites.net/ in my case) and click the play button on the Site Replicator extension.

This is going to give you the configuration screen for Site Replication (it may take a minute to open). The settings screen is pretty simple.  There’s a Browse button on the bottom left of the screen; click that and navigate to the publish profile file that you downloaded earlier.

Give the site some time to replicate all the changes to the other region.  When it says “Succeeded” it should be just about done. The larger the website, the longer this will take.  I FTPed into both WebApps and watched the files appear until they were all there.  On a standard WordPress install, this took about 10 minutes.

Once this was all finished, I repeated the process in the other direction.  I downloaded the publish profile from the US Central version and configured the Site Replicator extension on the North Europe region, uploading that publish profile there.  I then gave the process about 10-15 minutes to settle down and do any replication that needed to be completed.

Once this was all finished, I was able to upload files, pictures, WordPress updates, etc. from either site and the change would be replicated to the other region within a second or two.

Once the website replication was handled, it was time to set up Traffic Manager. This would allow people to connect to their local version of our website depending on where in the world they are connecting from. Getting the endpoints set up was pretty simple. I used basic geographic load balancing and pointed North/Central/South America to the US Central version, and Asia/Europe/Middle East/Africa/Antarctica to the North Europe version.

The only hard part was that, because WordPress does a ton of redirects, you can’t do normal HTTP monitoring. Normally you could have Traffic Manager point to “/” as the path to monitor, but WordPress didn’t like this. I changed the monitoring to use “/license.txt” as the path instead, as this lets the Traffic Manager endpoints come online correctly. It isn’t a perfect situation, but it works well enough.
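
For anyone who’d rather script the Traffic Manager setup than use the portal, here’s a rough Az PowerShell sketch of the same idea: a geographic profile probing /license.txt, with one endpoint per regional WebApp. The resource group, profile, and DNS names are placeholders, and the geographic mappings are simplified (North America on US Central, with WORLD as the catch-all on North Europe); check the current GeoMapping codes before relying on this.

# Geographic Traffic Manager profile monitoring /license.txt instead of / (placeholder names throughout).
$profileArgs = @{
    Name                 = 'www-dcac-com'
    ResourceGroupName    = 'MyWebAppRG'      # placeholder resource group
    TrafficRoutingMethod = 'Geographic'
    RelativeDnsName      = 'www-dcac-com'
    Ttl                  = 60
    MonitorProtocol      = 'HTTPS'
    MonitorPort          = 443
    MonitorPath          = '/license.txt'    # WordPress-friendly probe path
}
New-AzTrafficManagerProfile @profileArgs

# One endpoint per regional WebApp; US Central serves North America, North Europe is the catch-all.
$central = Get-AzWebApp -ResourceGroupName 'MyWebAppRG' -Name 'www-dcac-com-central'
$europe  = Get-AzWebApp -ResourceGroupName 'MyWebAppRG' -Name 'www-dcac-com-northeurope'
New-AzTrafficManagerEndpoint -Name 'central' -ProfileName 'www-dcac-com' -ResourceGroupName 'MyWebAppRG' -Type AzureEndpoints -TargetResourceId $central.Id -EndpointStatus Enabled -GeoMapping 'GEO-NA'
New-AzTrafficManagerEndpoint -Name 'northeurope' -ProfileName 'www-dcac-com' -ResourceGroupName 'MyWebAppRG' -Type AzureEndpoints -TargetResourceId $europe.Id -EndpointStatus Enabled -GeoMapping 'WORLD'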

Once everything is set up and Traffic Manager is happy and working, we can point public DNS to the site.  In our DNS configuration for www.dcac.com we added a CNAME record. A CNAME record in DNS redirects the request to another record.  In our case we pointed www.dcac.com to www-dcac-com.trafficmanager.net. This allows the Traffic Manager service to resolve www.dcac.com to the correct version of the website.

We can test that this is working as expected by looking at the output of the nslookup command.
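
The command itself is the same everywhere; only the answer changes based on where you run it. The PowerShell equivalent is shown as well, purely as an illustration.

nslookup www.dcac.com

# The PowerShell equivalent; the CNAME chain in the answer shows which regional WebApp you were sent to.
Resolve-DnsName www.dcac.com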

By running nslookup on my laptop (which is currently sitting at my house in San Diego, CA), we can see that I’m resolving www.dcac.com to www-dcac-com-central.azurewebsites.net.

If we do the same nslookup from an Azure VM that’s running in Singapore, www.dcac.com resolves to www-dcac-com-northeurope.azurewebsites.net instead.

From these outputs, I can assume that I’m viewing the version of the website that’s closer to the user.

I’ve now got two duplicate copies of the website running in two different Azure Regions.

Database Failover

On the database side of things, we need to set up some replication as well. Azure Database for MySQL now supports multi-region replicas, but there’s no auto-failover available yet (hopefully it’ll be coming at some point soon).  For the database copy I did basically the same thing as I did for the websites (and that I’ve done tons of times for Azure SQL DB).

For the database side of things I set up WordPress to use the dcac-mysqlcentral copy. From there I clicked the Add Replica button, and that made a copy of the data to a new server called dcac-mysqlnortheurope that I set up in North Europe.
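
If you’d rather script that step than click Add Replica, the Azure CLI has an equivalent for Azure Database for MySQL. A rough sketch using the server names from above; the resource group name is a placeholder, and the exact syntax may differ depending on your server type, so treat this as illustrative.

# Create a read replica of dcac-mysqlcentral in North Europe (resource group name is a placeholder).
az mysql server replica create --name dcac-mysqlnortheurope --resource-group MyDatabaseRG --source-server dcac-mysqlcentral --location northeurope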

Since there’s no automatic failover at the database level today, if I need to do a failover I need to edit the wp-config.php file on the web server, and that’ll kick the connection over to the other server.  I also still need to set up the PowerShell to do that failover. My next step in this process is going to be to set up some Azure Automation to handle the database-level failover, but so far this is a pretty good state, as there’s now website-level failover.
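
Until that automation exists, the manual failover on the web side is really just a string swap in wp-config.php. A hedged sketch of what that step might look like, run from the Kudu/SCM PowerShell console on the WebApp; the path is the standard Windows App Service content root, and the MySQL host names are illustrative.

# Point WordPress at the North Europe MySQL server after promoting it (host names are illustrative).
$configPath = 'D:\home\site\wwwroot\wp-config.php'
$config = Get-Content -Path $configPath -Raw
$config = $config -replace 'dcac-mysqlcentral\.mysql\.database\.azure\.com', 'dcac-mysqlnortheurope.mysql.database.azure.com'
Set-Content -Path $configPath -Value $config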

The End Result

The end result of all of this is that our website is set up and running in two different places, for better availability and better performance of our application.

Denny

The post Making WordPress a globally distributed application on Azure WebApps appeared first on SQL Server with Mr. Denny.


Monitoring SQL Server Performance Monitor objects (Perf Mon for those in the know) can be an important part of monitoring your SQL Server instance. Finding information about the performance monitor objects that SQL Server exposes can be tricky, even though the SQL Server product team has documented what these objects all mean.

You can find the documents about the SQL Server Perf Mon objects online on the Microsoft docs website.

If you haven’t had a chance to check it out, I’d very much recommend it. If you are looking for which objects you should be monitoring, this can answer a lot of questions for you.
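
If you want to see what’s exposed on a server right now, rather than reading the docs first, PowerShell’s Get-Counter can sample the same objects. A minimal sketch; the counter paths below are for a default instance (named instances show up as MSSQL$InstanceName instead of SQLServer).

# Sample a couple of common SQL Server counters three times, five seconds apart.
$counters = '\SQLServer:Buffer Manager\Page life expectancy',
            '\SQLServer:SQL Statistics\Batch Requests/sec'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 3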

If you need more help past that, let us know, we can help you out.

Denny

 

The post SQL Server Performance Monitor Objects appeared first on SQL Server with Mr. Denny.

When you create a VM in Azure it’s always set to the UTC time zone. There are some times when that doesn’t work, and it needs to be set to a specific time zone. In a perfect world, the apps could be fixed so that they could deal with the fact that the servers are now in UTC instead of the local time zone. But this isn’t always possible, and the server’s time zone needs to be changed.

The “normal” process to change the time zone for a Windows server doesn’t work as expected. You can change the time zone by right-clicking on the clock and selecting “Adjust Date and Time”. If you change the time zone here, it doesn’t actually do anything (at least it didn’t when I did it).  It may also change for a short period of time and then revert back to UTC.

If you use PowerShell to change the time zone, the change will stick, even if the VM is deallocated and reallocated.

First, we need to see what the options are for changing the time zone.  We can see that by running the Get-TimeZone cmdlet.

Get-TimeZone -ListAvailable

If a list of every time zone possible isn’t helpful (and it probably isn’t), you can filter the list down, as you probably know the name of the time zone you’re looking for.  My specific client needed a server created in the Pacific Time Zone, so I filtered it down by the word Pacific.

Get-TimeZone -ListAvailable | Where-Object {$_.Id -like "Pacific*"}

Second, we use the Set-TimeZone cmdlet to change the time zone of the VM.  You’ll want to put the Id from the Get-TimeZone output into the Id parameter of the Set-TimeZone cmdlet.

Set-TimeZone -Id "Pacific Standard Time"

Denny

The post Changing the Time Zone of Azure VMs appeared first on SQL Server with Mr. Denny.

I’m thrilled to be able to announce that I’m presenting two pre-cons at the Data Platform Summit 2019 in Bangalore, India. The first all-day session that I’m giving is “Azure Infrastructure” which I’ll be presenting on August 19th, 2019. The second that I’m presenting is “HADR – SQL Server & Azure” which I’ll be presenting on August 21st, 2019.

Azure Infrastructure

In this daylong session, we’ll review the various infrastructure components that make up the Microsoft Azure platform. When it comes to moving SQL Server systems into the Azure platform, having a solid understanding of the Azure infrastructure will make migrations more successful and supporting the solutions much easier.

Designing your Azure infrastructure properly from the beginning is extremely important. An improperly designed and configured infrastructure will cause performance problems and manageability problems, and can be difficult to fix without downtime.

With multiple Azure data centers now available in India, many companies will begin moving services from their data centers into Azure, and a solid foundation is key to successful migrations.

HADR – SQL Server & Azure

In this session, we’ll walk through the needs and process to set up a Hybrid Always On Availability Group using servers on premises for production and servers in Azure for Disaster Recovery.

We’ll be looking at high availability and disaster recovery tuning requirements, troubleshooting steps and various best practices for Always On Availability Groups in SQL Server 2017.

Why you want to be there

These two all-day sessions will each be a fun-filled day of learning that you won’t find anywhere else. Be sure to register now, as prices go up each month (the next price increase is in just a couple of days).

Denny

The post Data Platform Summit 2019 appeared first on SQL Server with Mr. Denny.


Blockchain is the new hot thing in IT. Basically, every company out there is trying to figure out where blockchain fits into their environment.  Here’s the big secret of blockchain: your company doesn’t need it.

Blockchain is simply a write-once technology that allows you to change records but keeps track of every change that was made.  Most systems need some auditing to see when specific changes were made; for example, think about an order system that your company may have. You probably have auditing of some sort so that you can see when a new order comes in (it’s probably the create date field on the table), and there’s probably some sort of auditing recorded when the shipment is sent out. If the customer fixes their name, you probably aren’t keeping a record of that, because odds are you don’t care.

Think about what systems you have at your company. Do you need to keep a record of every single change that happens to the data, or do you care about what happens to only some of the tables?  Blockchain is a great technology for the systems that need that sort of data recording. But that’s going to be a small number of systems, and we shouldn’t be fooling ourselves into believing that every company needs a system like this.

I’m not going to argue that there are no systems that need this; there definitely are some systems that do. But those systems are going to be in the minority.

Executives are going to read about how blockchain is this great new thing, and they are going to want to implement it. The thing about blockchain is that there’s one major thing that building a system on blockchain requires, and that’s lots of drive space. Even if you want to purge data from the system after 5-6 years, you’ll still need more drive space, as deleting data from a blockchain database just means that you need more space; you aren’t actually deleting those records.

A friend of mine described blockchain as a database in full recovery mode where you can’t ever back up (and purge) the transaction log. That’s how the database is going to grow.  Remember those lovely databases that were on the Blackberry Enterprise Server back in the day? The database would be 100 Megs and the transaction log would be 1 TB in size. That’s precisely what blockchain is going to look like, but it’s going to be a lot worse because all your customers and/or employees are going to be using the application.  If you have a database that’s 100 Gigs in size after a few years (which is a reasonable size for an application), the blockchain log for this could easily be 15-20 TB in size, if not 100 TB. And you’ll have to keep this amount of space online and available to the system.

So if you like buying hard drives (and your storage salesperson likes the nice car they’ll get from the commissions), then blockchain is going to be great. If you don’t want to spend a fortune on storage for no reason, then blockchain is probably something you want to skip.

Denny

The post Why your company doesn’t need block chain appeared first on SQL Server with Mr. Denny.


Recently Intel announced some major upgrades to their Xeon CPU line. The long and short of the announcement was that Intel was releasing their 56-core CPUs to the public. That’s just a massive amount of CPU power that’s available in a very small package. A dual socket server, with two of these CPUs installed, would have 112 cores of CPU power, 224 with Hyper-Threading enabled. That’s a huge amount of CPU power.  And if 112 cores aren’t enough for you, these CPUs can scale up to an eight-socket server if needed.

With each of these processors, you can install up to 4.5TB of RAM per socket.  So a dual socket server could have up to 9TB of RAM. (That’s 36TB of RAM for an eight-socket server if you’re keeping track.)

For something like a Hyper-V or a VMware host, these are going to be massive machines.

My guess is that we won’t see many of these machines at companies. Based on the companies that Intel had on stage at the keynote (Amazon, Microsoft, and Google), we’ll be seeing these chips showing up in the cloud platforms reasonably soon.  The reason that I’m thinking this way is two-fold: 1. the power behind these chips is massive, and it makes sense that these are for a cloud play; 2. the people who were on stage at the Intel launch were executives from AWS, Azure, and GCP.  By using these chips, the cloud providers will be able to get their cloud platforms probably twice as dense as they are now. That leads to a lot of square feet being saved and reused for other servers.

As to how Intel was able to get 56 cores onto a single CPU, it’s through the same technique that they’ve used in the past. They took two dies, each with 28 cores on them, and made one socket out of that.  In the olden days, we’d say that they glued two 28-core CPUs together to make one 56-core CPU. The work that Intel had to do to make this happen was definitely more complicated than that, but this thought exercise works for those of us not in the CPU industry.

These new CPUs use a shockingly small amount of power to run. The chips can use as little as 27 Watts of power, which is amazingly low, especially when you consider the number of CPU cores that we are talking about. Just a few years ago, these power numbers would be unheard of.

Denny

The post Servers running too slow, just add all the cores! appeared first on SQL Server with Mr. Denny.


I’ve seen a couple of conversations recently about companies that want to be able to script out their database schema on a daily basis so that they have a current copy of the database, or systems that have to change permissions within the database frequently and need to export a copy of those permissions so that they have a record of those settings.

My question to follow up on these sorts of situations is, why aren’t these settings in Source Control?

Pushing these changes to production requires a change control process (and the approvals that go with these). That means that you have to document the change in order to put it into the change control ticket, so why aren’t these changes pushed into your source control system?

Anything and everything that goes into your production systems should be stored in your source control system. If the server burns down, I should be able to rebuild SQL Server (for example) from the ground up, from source control. This includes instance-level settings, database properties, indexes, permissions, and tables (and views, and procedures); they should all be in your source control system.  Once things are stored in your source control system, the need to export the database schema goes away, as does the need to export the permissions regularly.  Since those exports no longer serve any purpose, there is no need to keep doing them.

Think I’m wrong? Convince me in the comments.

Denny

The post If It Requires A Change Control Ticket To Change It, It Should Be In The Change Control System appeared first on SQL Server with Mr. Denny.


With Microsoft Azure now supporting Virtual Machines with NVMe storage, things get a little different when it comes to handling recoverability.  Recoverability becomes very important because NVMe storage in Azure isn’t durable through reboots. This means that if you shut down the server, or there is a host problem, or the VM host has to be patched and rebooted, then anything on the NVMe drive will be gone when the server comes back up.

This means that to keep data on the VM past a shutdown you need to think about high availability and disaster recovery.

High Availability

You need to have high availability built into the solution (with Availability Sets or Availability Zones), which probably means Always On Availability Groups to protect the data. The reason that you need Availability Groups is that you need to be able to keep the data in place after a failover of the VM.  When the VM comes back up, you’ll see the server is up, but it may not have any data. So what needs to be done at this point? You need to create a job on every node that will automatically look to see if the databases are missing, and if they are, remove the databases from the AG, drop them, and reseed them from the production server.
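
A minimal sketch of the detection half of that job, assuming the SqlServer PowerShell module is available on each node; the instance name is a placeholder, and the actual remove/reseed steps depend on how your AG is configured (automatic seeding versus backup and restore).

# Find databases that the AG knows about but that no longer exist on this replica
# (which is what a wiped NVMe volume looks like after a reboot). Instance name is a placeholder.
$missing = Invoke-Sqlcmd -ServerInstance 'SQLNVME02' -Query @"
SELECT adc.database_name
FROM sys.availability_databases_cluster AS adc
LEFT JOIN sys.databases AS d ON d.name = adc.database_name
WHERE d.name IS NULL;
"@

foreach ($db in $missing) {
    Write-Output "$($db.database_name) is missing locally; remove it from the AG on the primary and reseed it."
    # On the primary: ALTER AVAILABILITY GROUP [MyAG] REMOVE DATABASE [<name>];
    # then reseed via automatic seeding or a fresh backup/restore, per your AG setup.
}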

Because of the risk of losing the data that you are protecting, you probably want at least three servers in your production site so that if one server goes down, you still have redundancy of your system.

Disaster Recovery

You need to have Disaster Recovery built into your solution as well as high availability. Because of the risk of losing data if a set of VMs fails you need to plan for a failure of your production site. The servers that you have in DR may or may not need to have NVMe drives in them; it all depends on why you need NVMe drives. If you need the NVMe for reads then you probably don’t need NVMe in DR; if you need NVMe for writes, then you probably do need NVMe in DR.

While a full failure of your production Azure site is improbable, it is possible, and you need to plan for it correctly.

If you have NVMe in DR, then you’ll want to have the same sort of scripts to reseed your databases in the event of a SQL Server restart.

But this is expensive

Yes, it is.

If the system is important enough to your business that you need the speed of NVMe drives, then you can probably afford the extra boxes required to run the system.  Not having HA and DR, then complaining that there was an emergency and the system wasn’t able to survive, won’t get a whole lot of sympathy from me. By not having HA and DR, you made the decision to have the data go away in the event of a failure. If these solutions are too expensive, then you need to decide that you don’t need this solution and that you should get something else to run the system.

Sorry to be brutal, but that’s the way it is.

Denny

The post Azure NVMe Storage and Redundancy appeared first on SQL Server with Mr. Denny.

The post Azure NVMe Storage and Redundancy appeared first on Denny Cherry & Associates Consulting.


Another year has passed, and VMware has decided to make me a VMware vExpert again. I believe that this is the 5th time I’ve been a VMware vExpert (the 4th time in a row; there was a gap year because I forgot to fill out the form, it was a thing).

I’m thrilled that VMware has decided to give me this award for the 4th time in a row.  It’s a great honor to be selected for the VMware vExpert award, more so because I’m not a sysadmin by trade, but I’m able to talk to sysadmins about databases and what the best options for hosting them within your VMware environment are.

Thank You, VMware for seeing all the work that I’ve been doing, and that I plan to keep doing throughout the next year.

Denny

The post Another Year Gone, Another Year as a VMware vExpert appeared first on SQL Server with Mr. Denny.

The post Another Year Gone, Another Year as a VMware vExpert appeared first on Denny Cherry & Associates Consulting.


Oh, how wrong I was. Back in the day, all I worked on was Microsoft SQL Server. These days I’m doing some Microsoft SQL Server and a decent amount of Microsoft Azure and Amazon AWS cloud work. With all three of those, there’s a lot of Linux in play. Microsoft SQL Server has supported Linux since the release of SQL Server 2017 at Ignite 2017.  Microsoft Azure and Amazon AWS have both supported Linux since (I believe) they first supported VMs in their cloud platforms (forever in the world of computers).

Back when I had just a few years of experience with SQL Server (and IT in general), I also owned and managed a large (at the time) Oracle database which ran on Unix. Once that was no longer my baby to manage, I assumed my *nix career was over. And it was, for a while, but now Linux is back, and this time in the SQL Server world.

Looking at the servers that DCAC has in our Azure environment, we have more Linux boxes than Windows. Our website runs off of PHP running on a pair of Linux servers. Our database is MySQL running on a couple of Linux servers (eventually we’ll move all this over to Azure PaaS, but still running on Linux). The only production servers in Azure that we have running Windows are the Active Directory domain controllers, one of which also syncs from Active Directory to Azure Active Directory to handle our sign-in, Office 365, etc.  That’s it. Everything else is Linux.

Our lab environment in our CoLo is also a mix of Windows and Linux.  We have a few tools that were built by Microsoft that run on Windows, but we’ve also got a decent amount of Linux in the data center as well.  By the time this is published (I’m writing this on the flight to the PASS Summit in November 2018) we’ll have a Docker cluster up and running as well (unless I get lazy and don’t get up to the CoLo to rack the servers for it). This Docker cluster is Linux-based and will let us run a bunch more Linux servers.

Your point is?

The point that I’m trying to get to in all of this is that if you are a database administrator who thought they were going to stay in the Windows world forever, think again. You don’t have to be an expert in Linux to manage these systems, but you’ll need to understand the differences between Windows and Linux. SQL Server has a few differences between the platforms, and those differences matter.  As a Windows DBA you’ll want to be able to navigate the Linux operating system and tell your systems team where SQL Server is storing the database files (they are in /var/opt/mssql/data if anyone asks) so that they know which mount points need to be made bigger.
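
PowerShell itself runs on Linux these days, so even that “where are the data files, and how full is that mount point” conversation can happen in a familiar shell. A small sketch, assuming PowerShell Core is installed on the Linux box:

# List the SQL Server data files, then show how much space each mounted filesystem has used and free.
Get-ChildItem /var/opt/mssql/data
Get-PSDrive -PSProvider FileSystem | Select-Object Name, Root, Used, Free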

You don’t need to know everything, but the basics of Linux are going to take you a long way.

Denny

The post I thought my days of Linux were over appeared first on SQL Server with Mr. Denny.

The post I thought my days of Linux were over appeared first on Denny Cherry & Associates Consulting.
