I recently realized that I’m in the early stages of burnout.

This isn’t an unfamiliar place for me, but it is new for me to recognize the early signs of burnout in myself before it becomes a full-fledged disaster. This time, I’m thinking about how I got here, and making an explicit plan to change course.

In the hopes of helping someone else out there, I thought some public journaling might be in order.

How do you recognize if you’re in the early stages of burnout?

I’ve recognized two symptoms that are unusual for me. Together, they indicate I’m heading toward burnout.

Symptom 1: Lately, I get frustrated and angry over small things

One warning sign of burnout is when small inconveniences start causing a disproportionately large emotional response.

For example, on a recent weekend I was traveling for work. I was in the Detroit area, about to head to London between a client visit and a conference. It was a beautiful day. I stopped by a local Starbucks and, by chance, another customer was rude to me.

Normally, if I wasn’t in the early stages of burnout, I would assume the other person was having a bad day, and I’d shrug this off. It wouldn’t be something that I’d even be likely to remember. But in this case, I felt a lot of anger toward that person. I was livid.

I quickly realized that the way I was feeling was more about me than about that random person in the Starbucks, but it took me a long time to shake off the anger.

This aspect of burnout is particularly tricky, because normal little miscommunications at work can slow you down more than usual: you’re not just handling the miscommunication, you’re also working to keep your own stress and irritability in check.

Symptom 2: Lack of excitement

I’m also just not as interested in work projects as normal. I’ve got a lot scheduled, but I often feel like I’m overwhelmed, and that all I can do is the bare minimum.

This lack of excitement contributes to:

  • Less curiosity and asking fewer questions
  • Taking less time to connect with my teammates and chat
  • Poorer listening skills
  • More hurried work / less critical thinking

Those things together mean that my work quality goes down a bit. And then that frustrates me.

What causes me to burn out?

Like I said, I’ve been here before — and I’ve been past this point, to where I simply couldn’t cope with the stress of my daily job anymore. Looking back, I can see some trends.

I have a tendency to enjoy working a little too much. And I have a few traits that I believe pave the way to burnout:

I tend to be a perfectionist, and I always want to help. I don’t like saying no, and I always want to be involved when asked. I like finding a way to not only make things work, but to try to give them an interesting twist, too. I spend more hours working than I should — and by “more hours than I should,” I mean…

I haven’t been giving myself enough time to recharge. For good health, I need to spend time away from a computer. I need to get exercise. I need to spend time outside. Meditating on a daily basis helps me ward off anxiety. Spending time with friends in person really helps as well. One of the things I realize now is that I haven’t been doing enough of these things lately.

I stretched myself thin and didn’t leave any room for life to happen. I signed up for a few more work projects than I should have this spring and summer. They are awesome projects, they are interesting and exciting and important. But I maxed out my schedule (plus a little), complete with loads of travel. It looked barely doable — until my dog, Mister, unexpectedly passed away. There was no room in my busy schedule for me to grieve my best doggo friend, and he wasn’t there to wag and say it’d all be great anymore. I started to feel trapped.

(( Perfectionist + overcommitment ) – self-care ) * grief = TIRE FIRE

As a career Ops person, I always want leeway. I like to plan a course where I’ve got a backup plan in my back pocket, and ideally a few viable alternatives behind that.

When you are headed towards burnout, you start feeling like there’s no leeway. No alternate plans. You have more stress than you can handle. You’ve got a seat on the struggle bus, and you’re not sure who is driving it.

So, what to do to change course?

Here’s my plan.

Step 1: Book time off

Yes, I am presently oversubscribed. But the very first thing I did when I recognized that I’m heading toward burnout was to go into my calendar and start requesting days off wherever I could possibly make it work, or where someone else might be able to make it work in my absence.

This sounds counter-intuitive, but it’s necessary. While I can’t take a week or two weeks off right now, what I can do is:

  • Make a proposal for days off
  • Explain to my colleagues and boss why I’m asking for those days off, and ask for help to make it happen
  • Commit to not working during that time off — no notifications, no emails, nothing

Oddly enough, I find it’s harder to disconnect from work than normal when I’m close to burnout. Something about the stress makes it harder to put work down. But disconnecting is truly needed, and for me it’s one of the biggest ways to avoid burnout.

Putting work on pause and taking time off gives needed perspective on life. It reduces stress and eases the tension behind those knee-jerk reactions.

It will also pay for itself by making me more focused and efficient when I am at work.

Step 2: Spend time with humans (of the non-work variety)

The second thing I did after recognizing the symptoms of burnout was to email my friends at home and suggest getting together. I was lucky that a good friend had just started a conversation about this, but I also looked around at my other friendships and thought about people I hadn’t seen in a while.

As an adult, it can be tough to make and maintain friendships. But the time and effort is so worth it. For me, it makes me a happier person, and that happiness extends into my work life.

Just like disconnecting from work, shifting gears and making time for friendships can be a mentally tough thing when you’re feeling burnt out. My mind tends to fixate on the problems at work, and it wants to stay there.

But I know from experience that planning a hike with a buddy is so much better for me in the long run, so a big part of my “anti-burnout” plan is making sure that I’m getting at least four hours of non-nerd-time human contact a week for the next few months. (I know, it’s just so “perfectionist” of me to set an hourly goal, right? I like specific targets.)

Step 3: Pick an anti-anxiety habit (or two)

For me, daily meditation is very helpful. I’ve learned in the past that it’s a simple tool that is quite effective at reducing my stress and anxiety, and the effect builds the longer I keep up the daily practice.

I’m starting this slowly at just five minutes a day of meditation. At first my whole goal is simply to re-establish this as a habit, without mentally scolding myself if I skip a day. The point is to keep starting until it becomes something that I look forward to each day and it is once again natural.

I find that when I do practice meditation, I have a more balanced view of things and I am more able to ask for help. I’m also better at thinking of alternatives for how something could work when someone has a request that I can’t fulfill due to time commitments.

Journaling is also helpful for me. I’ve started writing for this blog in a new way. I’m finding that it’s helpful in a similar way that journaling helps me think through things.

For example, I dictated the first draft of this blog post aloud while walking around a hotel room looking out the window. An app on my phone recorded the audio and uploaded it to the cloud. Another app created a transcript, which I edited for the post.

This method encourages me to be more conversational and more personal in my writing. That’s very helpful for me right now, because it’s a little bit more about my experiences and about getting my thoughts out in a way that is therapeutic for me. It also makes me more excited about writing again, which is incredibly welcome.

What if I don’t have time?

The voice of burnout in your head is likely to object: “We don’t have time for this.” That’s the whole point.

Well, here’s the thing that I’ve learned from the past: it may hurt to make time, but it’s going to hurt even more if you don’t.

It’s not easy to ask for help with your workload. You may need to negotiate to make it happen. It’s not a good feeling to say that you can’t do things which you’ve agreed to. If you’re in a culture of over-achievers, it can be quite difficult to say that you don’t want to work as much as everyone else is working.

However, the thing about burnout is that you can’t sustain it. If you don’t take action and you just keep your nose to the grindstone, chances are good that you’ll become desperate for a job change, and that you’ll either quit your job or take something, anything, for a change.

Burnout leads to bad choices.

It’s a much better choice to start doing the tough work and speak up for your own needs, before you are so burned out that you can’t. Get yourself into a more productive place before making any big decisions about the future — and things will look better from your new vantage point.


One of the cool things that I do as an Evangelist at Redgate is to periodically visit company headquarters in Cambridge. The other Evangelists and I get to meet with every software developer, product manager, and UX designer at Redgate over a series of meetings. That’s really cool. We talk about things that they’ve released lately, what they’re looking at doing in the near future, and we get to give feedback based on what we hear from the community and from folks in the sales process. We also get to share what we personally think should happen in these products now.

As you might imagine, I have a wish list for features in a variety of different Redgate products

Our products are great, and one of the things about great products is that users are always inspired to want to use them in new ways, so I never lack for ideas.

So, I have a lot of opinions about things that I think should happen, and features that I would love to have for customers. And, of course, I’d like those features right now, please.

In a recent meeting with one of the teams, they mentioned that most of their work over the next couple of sprints involves working on their continuous integration process

This sounds like a bummer, right? It’s a time period when the features on my wishlist, and Steve‘s wishlist, and Grant and Kathi‘s wish lists aren’t getting worked on.

But what the team explained was that this application has been around for a while, and a large number of tests have accumulated. It currently takes more than 10 hours for the build and test process to run. There may be duplicate work going on in the tests, and there are probably tasks that can be made much more efficient. The long build and test time currently makes it painfully slow for the team to iterate on developing and testing new features.

The team said that in the long run it’s worth paying down some debt and making the automated build and testing cycle more efficient, so that they can iterate on features faster in the future, instead of having to find other things to do while waiting for the CI process to complete.

This news wasn’t greeted with cheers from all the Evangelists present — but, to be fair, when we do respond to something with cheers it makes some of the teams look at us oddly, as we’re in the UK and that’s not something they see every day at work. (Hey, I bring my American enthusiasm everywhere!)

But everyone in the room agreed that speeding up the build and test cycle as much as possible is a necessary and reasonable thing to do

Like any other set of users, we want what we want (and we want it ASAP), but we respect that to make software development work well, you occasionally have to step back and pay down some technical debt.

This is also true for database development

When doing DevOps, it’s not always obvious that stability is just as important as release frequency. But it is, and maintaining that stability requires being diligent about tidying up your processes.

As database professionals, both developers and DBAs, I believe the ideal release cycle is one in which we are free to release features every day without any manual work. That requires a software development cycle that minimizes the risk of our changes, uses coding patterns that ensure system stability, and includes a response process that restores service as quickly as needed should there be a performance, availability, or functionality problem.

But that doesn’t mean that we actually release changes every single day. In order to do that effectively, we usually have to have paid down a lot of technical debt. That means stepping back periodically and working on improving our processes, rather than relentlessly focusing on shipping, shipping, shipping, shipping.

This is even more critical with legacy applications, where there is a significant amount of technical debt to pay down

Spending release cycles on making continuous integration and continuous delivery/deployment work better isn’t the part of DevOps that gets business owners and users really excited. But it’s still important to talk about, because this is a critical activity that enables us to deliver value on a regular basis — and that is what gets those folks excited.

Photo by 수안 최 on Unsplash

Today I got a bit closer to a meaningful definition of automation, as it applies to the software development process. I’ve been turning this concept over in my head for a while, which is partly related to the dreaded question of licensing.

Why should licensing an automation product be related to the number of users?

A few weeks ago, I was chatting a bit in the SQL Server Community Slack Channel.✣ One community member was frustrated with running into situations with per-user licensing for monitoring and automation products.

This isn’t the first time I’ve heard grumbling about per-user licensing, of course — with any licensing model, you’re going to hear grumbling; that’s just how licensing goes.

But I think per-user licensing can make a lot of sense when it comes to automation products, because of the nature of automation. I work for Redgate, which does per-user licensing. I also often do demos of how our tools integrate with Microsoft’s Azure DevOps Services (formerly VSTS, or TFS-in-the-cloud), which does licensing based on user numbers.

But not everyone thinks this makes sense.

That’s because they see automation as:

  • Something that one person sets up on a server, which that person may occasionally tweak; and…
  • A script or orchestrated set of scripts and products that replace the work that people (maybe more people than the person who set it up) would do manually

This definition isn’t dumb or naive at all. This is classically what automation has been in IT for many years: I’ve got a problem. I create a script. The script helps save me and my team some time and I only ever look at it again if it stops working.

Based on that definition, it would seem the most natural way to be charged for automation tools would be something like the number of times the tools are run, the number of servers/cores they are run on, etc. ✣✣

The nature of automation has changed dramatically in recent years

Like I said, I’ve been having a hard time putting a definition of what automation means now into words. Then I saw a link to this job description for a Sr. Resilience Engineering Advocate at Netflix.

There are a lot of interesting things about the job description, but one sentence that leapt off the page to me was that the team values:

Automation as a team player versus automation as a replacement for humans

Netflix Cloud and Platform Engineering SRE Team

This is a huge part of the evolving definition of automation. Automation is now:

  • Something that a team configures, interacts with, and improves on a daily basis
  • A script or orchestrated set of scripts and products that are an integral part of the productivity of the team

The big reason that per-user licensing makes logical sense to me when it comes to tools that are designed to be a key part of the software development life cycle is that the tools are meant to be experimented with freely. The tools will work best if they’re able to be tinkered with and adapted over time, to suit the needs of the team at that point. Licensing based on cores or CPU cycles or usage naturally reduces experimentation if it is going to drive up cost.

Also, the tools are meant to be team players: they are meant to be available for every team member to interact with. Automation in the SDLC for database changes doesn’t mean that every time a change is committed, the change rockets toward production without a human being ever needing to think about it again. Instead, automation is a player in a process that can absolutely include rigorous review (both automated and human-powered), testing, and even approval gates when needed.

Automation looks different in different teams

One observation: team size matters. If you’re one person in a small shop and you’re setting up automation to reduce the amount of manual work that you personally have to do, this high-falutin’ definition of automation as a “team player” probably isn’t going to resonate with you. You’re much more likely to continue to see automation in the classically defined sense.

But, on the other hand, you don’t have to have a team nearly as large as Netflix to start seeing the advantages of thinking about automation differently. It just takes a few people working together collaboratively and thinking about how to more consistently and reliably deliver value to customers to start changing the way automation exists in the workplace.

✣ The SQL Server Community Slack channel is great; join up here

✣✣ I don’t mean to make this post about how much software should cost. I actually don’t think that’s too terribly related to licensing model choice at all — whatever you are charging by, whether it be people, cores, tentacles, or whatnot, you can find a way to make it cheaper or more expensive.


I got a question recently about a panel discussion on Database Development Disasters at SQL in the City Streamed. I had framed a question as, “how fast should development go without load or performance testing?”

I got a follow-up question from my friend Chris Randvere at Redgate: he asked for more information about what the question meant. I realized that my wording had been pretty unclear. I had meant to ask the panelists what their thoughts were on release cadence when a team lacks tooling to do automated load and performance testing outside of production.

Should the lack of automated performance testing ability change the rate at which we deploy software?

In other words, if we can’t do performance and load testing, does that mean that we should or shouldn’t deploy a change to a database every weekday?

I don’t think we covered this super-well in the panel because I worded the question poorly. So I wanted to share my experience around this, and also talk about why it can be fairly common for teams to lack automated load testing ability outside of production.

Why doesn’t everyone have an environment where they can validate performance by replaying activity against an updated database before it ever gets released?

We have some built-in tooling for this in SQL Server; the current version is called Distributed Replay. These tools are not the most lovingly tended by Microsoft in terms of updates. A frequent complaint I’ve heard about Distributed Replay is that the current version of the tooling still requires you to feed it old-style traces captured with the old SQL Trace.

You don’t necessarily have to have the Profiler app running while you capture the trace, but old-style SQL Trace output is, in other words, what it takes as input. You can’t capture a more modern Extended Events trace and feed that into the tool.

But that lack of updates isn’t the main reason why not everyone runs Distributed Replay.

Distributed Replay is tricky to set up

The more complex your environment, the trickier it is to set it up. If you’ve got things like linked servers or SQL Server transactional replication creating interesting patterns in which your SQL Server communicates with other SQL Servers, that can make using Distributed Replay particularly tricky.

There are absolutely people out there who’ve configured Distributed Replay on complex systems, but they all say it wasn’t something they set up in just two hours. So one factor is the complexity.

Another factor: Distributed Replay is designed to replay, not to amplify

When we’re doing performance or load testing, we are not always interested in: how will the system perform under the current load? A lot of times we’re interested in: how will the system perform if it’s under even more load? 150% of the load, or 200%, or more.

But with any replay tool — I’m not just dogging on Distributed Replay here — we can’t replay the exact same commands and expect to learn how performance will be at a higher load rate.

For example, consider a delete command. If on the first run it finds 10,000 rows to delete, that could be a fair amount of work.

If we replay that same delete command, depending on what the criteria are in the delete, it possibly will find zero rows to delete the second time, because it’s already completed. Similar things can happen with updates. We may also have constraints that mean we can’t just insert the same thing twice depending on the nature of the data.

So, because of the way modifications work, simply amping up the load in a replay isn’t the same as adding true additional load.
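
To make this concrete, here’s a minimal, hypothetical T-SQL sketch (the table name and retention rule are invented) of why a replayed modification does less work the second time around:

```sql
-- Hypothetical cleanup command captured in a trace: on its first run it
-- finds roughly 10,000 rows older than 30 days and deletes them.
DELETE FROM dbo.SessionLog
WHERE CreatedDate < DATEADD(DAY, -30, SYSDATETIME());

-- When a replay tool re-issues the identical command, those rows are already
-- gone, so it deletes (close to) zero rows. Replaying the same workload at
-- "200%" therefore generates far less real modification work than true
-- additional load would.
```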

Now, there are other ways you can do load testing. There are third-party tools that you can buy to get around this problem of repeated modifications.

They can be expensive. But they also require a fair amount of coding, because you’ve got to put in commands that get you to a state where you can say: hey, let’s turn the volume of activity up to 200%, or to 300%.

So, some folks do that. But because of the cost and effort involved, folks only tend to do this with third-party tools when it’s really worth their while and their management is deeply invested in the idea of having load testing.

Even then, the load testing needs to be updated for some changes

Some changes can be tested with load testing tools without modifying the tests at all. For example, if I refactor a function for performance tuning, but don’t change any inputs or outputs, I could test that with an existing configuration of a load testing tool.

But what if I add a new parameter onto a stored procedure? If I don’t change the load testing, will that be appropriate, or not? Should I be running the load test with a variety of values for that?

Or what if my change involves dropping one procedure and adding in another one? A replay system would have no idea what to do, and with a load testing system I’d need to adjust what gets executed, how often, etc.

When it comes to testing database changes, load testing tools are excellent, but I’d expect some human work to be required as well.

So, most people don’t have automated performance and load testing. Should that impact how frequently we deploy changes to production? 

What we’re really looking to find with load testing in the SDLC is performance regressions. We should have other testing to catch functional defects, such as making sure that the right results are returned.

With database changes, there is a fair amount of work we can do to make sure that things perform well without load testing. There is other due diligence that can help: we can maintain and use a staging or pre-production environment with production-sized datasets and the same data distribution as production, for example.

In that environment, we can run testing that confirms: which indexes is the modified code using, and are they optimal for that code? How long are queries taking? We can make educated guesses about production performance instead of waiting until after a release to see what it’s like.
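
Here’s a minimal sketch of what that kind of pre-release check can look like in the staging environment (the procedure name and parameter are hypothetical):

```sql
-- Capture I/O and timing details for the modified code against
-- production-sized data.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Hypothetical procedure that was modified in this release.
EXEC dbo.GetRecentOrders @CustomerId = 42;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- Reviewing the actual execution plan for this run answers the questions
-- above: which indexes did the modified code use, and how long did the
-- queries take?
```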

This level of manual performance testing can work extremely well, and it unlocks us to do frequent deployments, in my experience.

Without load testing, it’s best to frequently deploy small database changes

By small, I mean as small as possible. A lot of these database changes are going to be things that our customers shouldn’t notice at all. Like, hey, we add a new column to this table. We’re not actually using it, though. We’re going to use it in the future for a feature that’s coming out soon.

But we regularly trickle out the staging steps for a future change. Each of these steps is backwards compatible and deployed well ahead of the point at which we “turn on” the new feature for our customers, which is often handled via an application control called a feature flag.
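
As a hedged sketch (table and column names are invented), one of those tiny, backwards-compatible staging steps might look like this:

```sql
-- Release N: add a nullable column. Nothing reads or writes it yet, so
-- existing application code and customers are unaffected.
ALTER TABLE dbo.Customers
    ADD PreferredContactMethod varchar(20) NULL;

-- Release N+1: the application starts writing the column for new rows.
-- Release N+2: historical rows are backfilled in small, off-peak batches.
-- Only after these steps does the application's feature flag expose the
-- customer-facing feature that uses the column.
```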

This regular stream of very small changes is helpful for speedily resolving any performance issues which may occur.

If performance does change, we haven’t released a big batch of 50 changes all at once — that’s a lot of things to go through to find the culprit. Instead, we’ve released maybe 7-10 changes in the last week, and we can look at the most recent ones first, and check if they could be related to the issue.

What do you mean by “frequent”?

By “frequent”, I mean that daily releases shouldn’t be a big deal.

But remember: batch size is critical. This isn’t like a race car, this is like a steadily dripping faucet.

Complexity: sometimes performance problems don’t happen right after a change is released

There can be changes that we make where performance is fine for a while, and then something happens. Maybe it’s even three weeks after the change was released, but suddenly performance is terrible.

One particularly tricky issue in SQL Server that can cause this is what’s called parameter sniffing. If we are using parameterized queries, performance depends heavily on the execution plan that is cached for the first set of parameters passed into the query or procedure when it’s compiled, because SQL Server reuses that execution plan until something causes it to recompile.

Maybe the thing that causes the query to recompile is that enough data has changed for statistics to automatically update. Maybe it’s that there’s a failover in an Availability Group. Maybe the SQL Server is restarted. Maybe someone manually clears the execution plan cache.

A wide variety of things can trigger this, but if we happen to get a recompile with a set of parameters that leads to an execution plan that doesn’t work so well, then we can suddenly run into performance issues.
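
Here’s a contrived sketch of the pattern (the procedure, table, and data skew are all hypothetical):

```sql
-- Assume CustomerId = 1 has a handful of orders and CustomerId = 2 has millions.
CREATE OR ALTER PROCEDURE dbo.GetOrdersForCustomer
    @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END;
GO

EXEC dbo.GetOrdersForCustomer @CustomerId = 1; -- plan compiled and cached for the tiny customer
EXEC dbo.GetOrdersForCustomer @CustomerId = 2; -- cached plan is reused, and may perform terribly
                                               -- for the huge customer until something recompiles it
```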

But this risk of parameter sniffing doesn’t mean that we shouldn’t release changes frequently. We’re going to have the same risk whether we release a big batch of changes or regularly release small changes.

For issues like this, I like the recent auto-tuning features in SQL Server. Essentially, the automatic plan correction feature will look for cases where a query is sometimes fast and sometimes slow, and attempt to identify “bad” execution plans which are periodically cached for that query.

You have to have the built-in Query Store feature enabled, and when using on-prem SQL Server this feature requires Enterprise Edition for production (Developer Edition has all the features of Enterprise for non-production environments). But if you do have a SQL Server where performance is critical, this feature is a huge deal. It gives you the option to either let it temporarily correct the plan for you, or to notify you in a DMV that it’s spotted a problem that needs tuning.

This feature can either work alongside automated load testing — you can use it to spot parameter sniffing problems before deployment — or it can give you early warning of performance problems before your customers start complaining. It also gives you insight into how to reproduce these tricky problems outside of production.
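
If you want to try it, the setup is roughly this (the database name is hypothetical; automatic plan correction requires SQL Server 2017 or later):

```sql
-- Query Store must be on so plan and runtime history is captured.
ALTER DATABASE SalesDB SET QUERY_STORE = ON;

-- Option 1: let SQL Server force the last known good plan when it detects a regression.
ALTER DATABASE SalesDB SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Option 2: leave forcing off and review what the engine has spotted in the DMV.
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;
```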

A quick recap

Automated load testing is fantastic, if you have the time and budget to take it on for your team.

But if you don’t have it, the lack of automated load testing shouldn’t block you from frequently deploying small changes to production. By combining targeted manual performance testing with properly planned changes, you can still deploy frequently and safely into high-performance, high-uptime environments.


I’m excited to be teaching a full day session with Steve Jones at the SQL PASS Summit on Tuesday, November 5, in Seattle.

Steve and I will be discussing proven patterns to version and deploy changes successfully

Read more about this precon session, or check out the video below where I give a brief overview of what Steve and I will cover.

Upcoming full day pre-conference session, "How to Architect Successful Database Changes" - YouTube
Will you teach me how to use Redgate tools?

Nope, not in this session.

While Steve and I both work for Redgate, we will be showing patterns and approaches that work with both vendor and custom tooling, and we’ll do demos with a variety of tools, including free tools when possible. This is absolutely not a product-specific session, and the patterns discussed have been proven in the industry by developers and DBAs using a wide variety of tooling.

If you’d like to learn more about Redgate tools, check out the Redgate SQL in the City Summit Precon that will be held on Monday, November 4. That day of training has 100% different content, so if you’d like to go all-in on DevOps, join us for both!

Reserve your spot

You can currently register for an individual pre-conference session for $499, or bundle pre-conference sessions with the conference.

Hope to see you at PASS Summit!


Calling all Database Administrators, Developers, Analysts, Consultants, and Managers: Redgate has a survey open asking how you monitor your SQL Servers.

Take the survey before April 5, 2019.

Your time is valuable. The survey will take 5 – 10 minutes to complete. That’s not a ton of time, but it’s a noticeable part of your day, and there should be something in it for you. Here’s why it’s worthwhile to take the survey.

Database use patterns and monitoring trends are valuable to everyone in the community — and we’ve been missing out on this trend information!

This isn’t the only survey from Redgate — you may also have heard of the annual State of Database DevOps Report.

But this survey is different in important ways: it asks how you manage and monitor your SQL Server database environment, regardless of whether you think about DevOps at all.

Redgate shared the results of the State of SQL Server Monitoring Survey with readers last year, and will do so again in 2019. 2018 was the first time we know of that a survey of this kind had been done, and it will be especially interesting to notice what has changed in the past year.

If you fill out the 2019 State of SQL Server Monitoring survey, you’ll get an advance copy of the results by email

The results will help you understand:

What do other DBAs, Developers, and IT Professionals see as their biggest challenges over the coming year?

In last year’s report the biggest challenges were seen as:

1. Migrating to the cloud

2. How to deploy changes faster to larger environments

3. Protecting data – especially for compliance reasons

2018 State of SQL Server Monitoring Report

I’m curious to see if these priorities have changed after a year. With GDPR implementation having come to pass and more states and countries around the world passing increasing privacy regulations, I suspect that ‘protecting data’ may move up from number 3 on the list, but I won’t know until we see the data.

How much time do your peers spend examining SQL Server health and resolving issues?

This is a great question on the survey — and I think this one is absolutely worth some reflection after you take the survey (and after the results come out). Do you spend more time firefighting than you should? If so, what ideas do you have to change that?

At what point do most organizations move from manual monitoring to a third-party tool?

If you work for a growing company and are interested in making the case to your management to purchase monitoring tools, it may be useful to know information like this…

Respondents with fewer than 10 servers were twice as likely to rely on manual monitoring as to use a paid-for tool. Those with 10 or more servers were more likely to use third-party software.

2018 State of SQL Server Monitoring Report
Survey results will help you prioritize what to learn

Are you curious as to whether you should learn another database platform, such as MongoDB, Oracle, MySQL, Cosmos DB, or Postgres? The survey will show how many respondents report that they are using each platform, and whether they think that usage will increase or decrease.

Want to know whether you should invest time (and maybe ask for some budget) to explore cloud technologies like Azure Managed Instances, Azure SQL Database, or Amazon RDS? The survey will show how many of your peers are using each one.

Also, You Could Win Money

Everyone who completes the survey and provides their details at the end will be entered into a prize draw to win a $250 Amazon Voucher (or equivalent in your local currency).


I’ve recently published an article, “Why You Shouldn’t Hardcode the Current Database Name in Your Views, Functions, and Stored Procedures,” over on Simple Talk.

Hello, my name is FINE

In the article, I discuss:

  • Why referencing the current database name creates a dependency (see the sketch below)
  • What ‘deferred name resolution’ is, and why the dependency may be more noticeable in views and some functions (rather than stored procedures)
  • Which activities are most likely to break if you place dependencies on the current database name
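
As a minimal illustration of that first point (all names here are invented), compare a view that hardcodes the current database name with one that uses a two-part name:

```sql
-- Three-part name: the view now depends on the database being named SalesDB,
-- which breaks when it is restored or deployed as SalesDB_Test, SalesDB_Dev, etc.
CREATE OR ALTER VIEW dbo.RecentOrders
AS
SELECT OrderId, OrderDate
FROM SalesDB.dbo.Orders
WHERE OrderDate >= DATEADD(DAY, -7, SYSDATETIME());
GO

-- Two-part name: the same view works in any copy of the database.
CREATE OR ALTER VIEW dbo.RecentOrders
AS
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE OrderDate >= DATEADD(DAY, -7, SYSDATETIME());
GO
```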

Read the full article here.


Are you interested in speaking at the Professional Association for SQL Server’s annual Summit conference? The call for speakers is now open, and you may submit up to three sessions between now and March 31, 2019.

I’m currently in the process of sketching out ideas for the sessions I’d like to submit, and I thought I’d share my process here.

Generating ideas: what am I interested in spending six months thinking about?

I’m a bit selfish when it comes to topic selection, and I think that’s fine: it needs to be something that I’m interested in thinking about for more than half a year.

That does NOT mean that it needs to be super-advanced rocket-science content. Figuring out how to present introductory-level content clearly, in an easy-to-understand way, takes a lot of time. The topic simply needs to be compelling enough for me to stay interested.

A filter: is the topic relevant to enough people?

When I first began speaking, I thought I needed to speak on topics that were unique to me, which other people in the community didn’t already “have covered.” This led me to somewhat esoteric concepts. There’s a big downside to that: your talks simply won’t be relevant to many people.

Now, I encourage myself (and you) to use the opposite filter: think about talks that will be helpful to a lot of people. That means you probably won’t be the first one in the world to deliver a talk on the topic, and that’s perfectly fine: your perspective is valuable!

There is definitely a “three bears” aspect to this filter. Your talk doesn’t need to be useful to everyone at the event. But do think about your intended audience, and whether you’ll help a significant portion of the audience at the event.

Sketching out ideas and audiences

For this year’s PASS Summit talks, I currently have two topics I’m quite passionate about, which I’d love to share.

Right now I don’t have abstracts. I’ve started by creating notes on:

  • Subject matter area / rough title
  • What the talk would include – a problem summary (which I write as quickly as possible, just throwing out ideas) and a bulleted list of content ideas
  • Who would care (audience) and why they would care

Here’s where I’m at with my two topics.

Topic 1: Source Controlling Index Code in a Changing World 

Alternate title: I prefer, “How to Standardize Index Code in a Changing World.” Not everyone may get that Standardize = Source Control, however — not sure if that’s simply how I think of it.

Problem summary: It’s critical to get database code standardized into source control to manage collaboration, store and version your organization’s intellectual property, and to track and audit what has happened in your database code. The database code for indexes, however, is increasingly difficult to standardize: new features in Azure SQL DB automate the process of tuning index schema. Single-tenant database architectures often standardize table schemas, but require customizations of indexes for performance in individual databases. And index schema often needs to “drift” in production as operations teams respond to critical performance problems. How do you adapt successfully to this chaos, yet still maintain your sanity by managing your database code in source control?

Topics to include:

  • Differences in source controlling indexes with state vs migrations approaches to database code
    • Strategies for working around limitations on advanced index features with a state approach
  • How to manage drift that occurs with automatic indexing in Azure SQL Database
  • How to manage incidents where indexes are manually changed/drifted in production due to a critical need without going through a full release cycle 
  • How to manage single-tenant environments where the indexes are customized in individual databases 

Audience / who would care:

  • Developers and DBAs who have their databases in source control, but who struggle with “drift” in some areas
  • Developers and DBAs interested in getting databases into source control — sometimes thinking ahead about these more advanced issues at the beginning can help you learn fast
  • Architects thinking about using single-tenant designs
Topic 2: Best Practices for Branching Database Code in Git 

Problem summary: There are quite a few discussions and patterns available online for branching application code, but special considerations apply when it comes to database code — and hardly anyone has written about this! This talk helps DBAs and developers design the simplest branching strategy that meets the needs of their organization. I will share key considerations for designing a branching strategy for database code, and we will discuss multiple popular branching models along with “fit notes” describing the strengths and weaknesses of each model.

Topics to include:

  • The overarching rule of “keep it simple”, but why this leads to different practices for small teams (1-2 devs checking in code) vs larger teams
  • State-first vs migrations-first and explain what that means for branching – a migrations approach that doesn’t include a state component will SUCK for merging … maybe don’t use the word SUCK in all caps in the final abstract tho
  • Why shared vs dedicated development databases is important when considering branching – why object locking is a necessary evil of a shared model, so don’t do shared if you can avoid it (esp considering that Dev Edition is free for SQL Server, side-eye at Oracle)
  • What feature branches are and how to tell if you need them 
  • What release branches are and how to tell if you need them 
  • Pull requests 
  • Rebasing 
  • Branching diagrams for multiple popular models, and “fit notes” for the teams using them

Audience / who would care:

  • Me!!! Hahahaha, I find this topic fascinating, even if there were five people in the room I’d be thrilled (but I do think this would be quite popular with dev-minded folks, I don’t think it’s just me)
  • Developers and DBAs who want to get database code into source control
  • Developers and DBAs who have code in source control, and who struggle with:
    • Managing concurrent changes to objects
    • Building the codebase from scratch to a specific release / version
    • Managing drift in different environments / comparing database versions to source versions
Next steps: mulling these over

I feel pretty good about these topics so far: I think that I care enough about them to want to spend a good amount of time with them, and I think that enough people at the conference care about the problems discussed to make them worthwhile.

These two talks are a bit more focused than talks I’ve done in the past, in that they’re specific to managing code in source control. I’m strongly of the opinion that everyone should be managing their database code in source control, though, so I’m fine with the level of focus — source control should be the norm!

I am presently waiting a few days before writing and revising “real” abstracts. I’ve got some time before the deadline, and I find it’s good to sit with a topic for a few days to see if a bright new idea occurs to me, or if I want to go in a different direction.

First time speaker? Go ahead and submit!

Writing an abstract and choosing a title for a talk can be daunting. I personally find it easier to start with making notes about a potential talk like I’ve shown here, and then refining those notes. I am hoping that helps some potential speakers out there to get started.

Here’s that link to the call for speakers again — submit by March 31, 2019.


Redgate is building a library of real-world stories about database development disasters.

Your mission: Tell us a true story in 500 words or less about a time when you were involved in an Agile or DevOps project that went full steam ahead in speeding up delivery of application code, but didn’t modernize database development practices. Did trouble follow? Check out the prizes and give us the scoop here before March 20, 2019.

Enter today, time’s almost up!
  1. Share your story using the form here
  2. The story must be true – but never fear, we will anonymize all stories and the names of the winners
  3. Enter as often as you’d like (but each person may only win once)
Need inspiration? Grant’s sample entry (the short version)

“Our organization built a new application using an Object Relational Mapping (ORM) tool. The team worked without any DBAs or database developers for speed. We ended up with an incredibly slow application that nobody could report from at the end – and deployments dropped and recreated tables! We had to implement weird workarounds to make deployments safe. The project delivered late and required a lot of unexpected budget from consultants. If our teams had been able to work together on the database as part of the project, we would have saved the company a lot of time and money.” …Read Grant’s full article

My own development disaster story: The V1 Team’s Doomed Redesign

Once upon a time, I worked for a startup which purchased another startup. For the first year, the acquired company’s applications operated independently, as they were originally designed. After that point, developers from both organizations began merging together, and a team was created and assigned the task of redesigning the applications of the purchased company to be more scalable. Let’s call this the “V1 Team.”

The existing production legacy applications involved complex data processing. The idea was that the V1 Team would reproduce almost all functionality of the legacy applications in the initial release of the redone application.

The V1 Team went off and began work. As they worked through their long development cycle, features continued to be added to the production application. These were added to the workload of the V1 Team, continuously increasing the scope of their project and delaying their initial release date.

There were a lot of quite smart developers on the V1 Team, people I enjoyed working with, but I found myself stopping by their area less frequently over time – frankly, it was a depressing place to visit. The team was overworked and frustrated. They were never releasing any code to production, and there were no design meetings with anyone.

Finally, we heard that the redesigned applications were going to be ready to launch soon! This was good news, as the legacy applications were not easy to support.

The first thing I remember about the database deployments for the V1 Team’s new application is that… well, they didn’t actually deploy. The V1 Team had been stuck in a prototype environment for so long that quite a bit of work was needed to bring the codebase into a deployable state for pre-production and production domains.

The second thing I remember is looking at one example table in one of the V1 Team’s new databases, and finding that the clustered primary key was defined as being the combination of four uniqueidentifier columns (GUIDs).

I’m not the kind of DBA who sees a uniqueidentifier column as being universally bad or a sign of doom. However, keeping clustering keys relatively narrow is generally a good practice for scalability in SQL Server, and a 64-byte surrogate key is a particularly bad sign. And it was too late to make any changes: the poor V1 Team was desperate to get anything in front of a customer at this point.
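
For illustration only (these names are invented), the pattern looked roughly like this:

```sql
-- Each uniqueidentifier is 16 bytes, so this clustered key is 64 bytes wide,
-- and the full key is carried into every nonclustered index on the table.
CREATE TABLE dbo.OrderLineItems
(
    TenantId   uniqueidentifier NOT NULL,
    CustomerId uniqueidentifier NOT NULL,
    OrderId    uniqueidentifier NOT NULL,
    LineItemId uniqueidentifier NOT NULL,
    Quantity   int              NOT NULL,
    CONSTRAINT PK_OrderLineItems
        PRIMARY KEY CLUSTERED (TenantId, CustomerId, OrderId, LineItemId)
);

-- A narrower surrogate clustering key (a bigint identity, for example) plus a
-- unique constraint on the natural key would keep the clustering key small
-- while still enforcing uniqueness.
```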

It was a tricky and painful struggle for both the V1 Team and the operations teams to get that code into production and to begin supporting it. Performance wasn’t great, and customer response was lukewarm at best; after all, they’d been told the whole project was about scalability.

Nobody declared victory, least of all the customers.

Scoping and architecture discussions, I learned, are critical for any project.


I love breaking technology.

Well, I love breaking technology on purpose, in a place where it’s not going to slow anyone else down. It’s a great way to learn more about how everything works and what your options are to fix the situation when things go sideways.

In this 9-minute video, I ignore SQL Source Control’s valiant attempts to keep me out of trouble, and put the database code in my local git repository into an inconsistent state. Can I fix it? (Spoiler: yes I can.)

Kendra Breaks and Fixes SQL Source Control - YouTube