Historically, the 0 default meant unlimited cores. I like this setting instead because if somebody amps up the VM’s core count later, MAXDOP won’t automatically rise to match. If you jump from, say, 4 cores to 8, then I still kinda like the 4 setting better.
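If you want to pin MAXDOP rather than leave the 0 default in place, it's a quick sp_configure change. A sketch (4 is just an example value; pick what fits your workload):

```sql
-- Cap parallelism at 4 cores per query, even if the VM grows later.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max degree of parallelism', 4;
RECONFIGURE;
```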
In vaguely-related-news, this week’s CTP 3.0 adds a lot of diagnostics for Azure SQL DB Hyperscale, plus a new sp_configure option for hybrid_buffer_pool (no trace flag required.)
I teach my Mastering classes in a regular rotation: Mastering Index Tuning, then Mastering Query Tuning, then Mastering Server Tuning. I go 1-2-3 like that because they build on each other, and I repeat ’em over and over through the year so people can gradually make progress.
I just finished another rotation, so it’s time to go through the reviews – not just the good ones, mind you, but all of the reviews, in order, to understand how I’ve been doing lately. Here goes:
“Absolutely incredible! Indexing is something we can all work on, and honestly this is the place to brush up. Brent’s class was entertaining and engaging, and with the labs he sets up well worth the money. I couldn’t be more happy with the class, will definitely be looking into Mastering Query Tuning next!” – Brandon, March 11
“Brent’s way of teaching is unmatched! This class is an absolute must for SQL Server professionals. Perfect amount of lecture, lab time, and Q&A.” – Mohammad Darab, March 8
“Absolutely loving it. The class, Brent style, everything. I have to admit, I had my reservations before the class, but boy I was wrong about it. Less than 5 minutes in the first lab anyone can see the insane amount of work what was put into the class. This admiration only grows through the 3 days as everything is marching toward the last lab. I am really grateful to Brent and his team to use their extensive real-life knowledge and put this class together. Cannot wait till Query Tuning starts, probably I will start the previous recordings” – Kalotai, March 8
“This class was just top notch. Brent is very entertaining and he explains the material in such a clear and concise manner. I learned a ton. Looking forward to the next class” – Katie, March 7
“The Master Index Tuning class is great. From the way the material is presented, to working the labs I really enjoyed this class. In fact, I have take 4 of Brents classes and they all are wonderful. I do highly recommend. And since I purchased the season pass, I look forward to the next class I take.” – Michael DeMore, March 7
“Brent is a great instructor with deep SQL Server knowledge and has a great sense of humor. Class was extremely educational and I highly recommend it. Looking forward to signing up for additional courses in the future.” – C. Brien, March 7
“If Mastering Index Tuning is a must have, Mastering Query Tuning is an absolute must have. I already know I will be in trouble for Mastering Server Tuning in about a month time :). Back to Mastering Query Tuning: it is not only informative, but the knowledge transfer is very entertaining too. And there is a great amount of knowledge in the class. By the end of the third day my head was tried to burst open a bit with the extra information, as it turned out I was not alone with this. I feel about Mastering classes as I feel about electric heated motorcycle gear: until you try it, you cannot even imagine what you missing from life. Once you had it, you do not want to live without it. If you work with SQL and you can only have one training course in your life, you really need to pick the Mastering classes. Go for all 3.” – Kalotai, April 8
“Another very educational, eye-opening course in the Mastering series! I’m not a developer by any means but after this course I am pretty comfortable tackling slow performing queries. Thank you Brent!” – Mohammad Darab, April 5
“After taking the Mastering Index Tuning class, I was expecting to come into Mastering Query Tuning and learn a lot. I was not disappointed. I would argue this class is harder than Index Tuning. There are just more moving parts to understand and take advantage of. One of the things I enjoy about this training is watching someone do things that are not necessarily part of the “plan” and then figure out what went wrong. Brent does a good job of it live. He’s not just a “trainer” who is parroting things off an MS SQL Server Training slide deck. I really like this training format. Brent does a great job in front of the camera. P.S. The difference between people that want 10 and 20 minutes breaks made me laugh out loud !” – Andy Benner, April 4
“Brent is making harder and harder to write a review and not to sounds like a fanboy (fangirl). In my previous reviews (Mastering Index Tuning, Mastering Query Tuning) I have highlighted how great is the material, how well the lectures are built. The true appreciation for the classes comes though after you did all 3. I can only repeat myself that these courses a must have if you ever just looked at MS SQL server. The best thing what you can do for yourself is that you are not working on the days of the course, and you dedicate your whole day for the classes. I consider myself very lucky that I signed up for this. One of my best decisions in my life. Thank you Brent for creating these courses and doing a fantastic job to transfer the knowledge. Oh, and I am definitely a fanboy for life.” – Kalotai, May 14
“I highly recommend taking Brent’s “Mastering Server Tuning with Wait Stats” class. For three days I learned more about how to performance tune my database servers than I have in any other class I’ve taken. What I liked best about this class is that the examples and labs were not just some steps to follow to get to a predetermined solution, as I’ve had in other classes, but they were specific problems set up to make the students think about and work through the solution using the knowledge from the lecture. Brent’s lecture style is easy to follow along with and a lot of fun. He does a really good job of keeping the subject material interesting and keeping the students engaged. I found myself wanting more lecture just because he’s a lot of fun to listen to. I will say this class would be more beneficial if you take either “Mastering Index Tuning”, “Mastering Query Tuning”, or both first. I did not and felt slightly confused at times. It’s not a big hindrance, and you will eventually catch up, but it would give the student a better background. Regardless, Brent’s teaching style and real-world examples easily help you overcome any slight deficiencies from not taking those classes first. Either way, you can be positive that you will be a better at tuning SQL Server after taking this class.” – Chris Fair, May 9
“I LOVED this class! Before this class, I would feel flustered when tuning issues came up and I had one hammer that I would turn to when I was in trouble. Now I have a full tool kit and more confidence to find how I can improve a servers performance. The labs help tons and are great to help me fully understand the problem before I see the solution presented. While I took the online only class, I felt there was a ton of great interaction through the slack channel. I loved the stories that help me better understand the concepts and the easy and kind way that Brent answers questions. He doesn’t make you feel stupid for not understanding a concept and is patient to help you get the best result. I will absolutely be back for more! Thank you!” – Andrea Allred, May 7
“This class was well worth the price of admission. Brent is constantly trying to improve his classes and it shows. The labs and lectures show a great deal of effort has gone into them. The labs are rewarding when you get them right and there is a depth to them so that it’s not a cakewalk. On the other hand, Brent’s not afraid to tackle a problem live with everyone watching. I think that is one of the more enjoyable aspects of his classes. I really like the format of this training and I am hoping I can convince my boss to get another season pass.” – Andy Benner, May 6
“Awesome content, the labs are hands on and they have real day to day opportunities to analyze and solve . I highly recommend this training.” – CHaro, May 6
HELL YEAH, I’M ON FIRE!
Forgive me for a moment while I bask in the glow of your reviews. I’ll give you fifteen minutes to stop that. Okay, maybe twenty.
I’ve seriously poured my heart into building a balance: a balance of lecture versus hands-on labs, a balance of tough challenges versus things you can actually accomplish in the time given, a balance of theory versus real-world stuff you can put to use the very same day.
I’m really proud of what I’ve built, and how you’ve reacted in classes while we’ve worked together. I’ve loved the questions you ask, the excitement you share with me when we get to go off-track on new demos, and your willingness to collaborate with others in Slack on your homework. I love seeing y’all work together – and compete against each other, too!
I think my Mastering classes are the best way to learn things that will really make a difference in your career. However, what I think doesn’t matter – all that matters is what y’all think.
And I’m so excited and humbled and fist-pump-y that y’all like ’em too.
Another round is about to start.
Wanna join us?
Heck yeah, you do. Time to pitch it to the boss. Here’s my upcoming lineup for the next quarter, in order:
I’m not gonna lie: my classes aren’t cheap. If you want inexpensive classes taught by people who’ve never done this stuff in real life, who don’t build the very open source scripts you use in your everyday performance tuning, you’ve got plenty of choices. There are plenty of local instructors who’d love to read you the slides that somebody else wrote.
You should read the reviews on those, though.
Oh, the students don’t have the ability to leave public reviews?
Wanna make your SQL Server applications go faster? Here are 9 simple tips that will give you a better chance for success. They’re not carved in stone, but they’re a good place to start.
Query Design Tips
1. Code for readability first. The easier your query is for humans to understand, the more likely SQL Server will be able to understand what you’re trying to do, and the better execution plan you’re likely to get.
2. Avoid scalar functions. Yes, functions are great for reusable app code, but SQL Server scalar functions are notoriously slow. They run row-by-row against your result set (possibly much more than that, depending on the query), and queries that call scalar functions are single-threaded, regardless of how much work they’re doing. Your functions might go faster in SQL Server 2019, but until then, don’t gamble.
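To make the row-by-row pain concrete, here's a sketch – the function, table, and column names are made up for illustration:

```sql
-- A hypothetical scalar function: SQL Server calls it once per row,
-- and its mere presence forces the calling query single-threaded.
CREATE OR ALTER FUNCTION dbo.CleanName (@Name NVARCHAR(100))
RETURNS NVARCHAR(100)
AS
BEGIN
    RETURN LTRIM(RTRIM(@Name));
END;
GO

-- Slower: one function call per row in the result set
SELECT dbo.CleanName(DisplayName) FROM dbo.Users;

-- Faster: inline the same expression directly in the query
SELECT LTRIM(RTRIM(DisplayName)) FROM dbo.Users;
```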
3. Avoid table variables (@tablename). They sound like they’re going to be faster than temp tables, but they get bad row estimates, which drives the rest of your query to get a bad execution plan instead. If you need to store data somewhere temporarily, embrace temp tables (#mytable.) They’re the better of those two choices.
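A sketch of the swap (names are illustrative):

```sql
-- Table variable: the optimizer guesses very few rows,
-- so joins against it often get bad plans.
DECLARE @TopUsers TABLE (Id INT, Reputation INT);

-- Temp table: gets real statistics, so row estimates are sane.
CREATE TABLE #TopUsers (Id INT, Reputation INT);
INSERT #TopUsers (Id, Reputation)
SELECT Id, Reputation
FROM dbo.Users
WHERE Reputation > 100000;
```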
4. Never take user input to build a query. Crafty users will bypass your security using a technique called SQL injection. I know, you think you’ve built in a lot of security, but your code has to be perfectly bug-free every time. The hacker only has to be right once.
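The safe habit is parameterization: pass user input as a typed parameter, never by gluing strings together. A sketch, with made-up names:

```sql
-- Dangerous: user input concatenated straight into the SQL string.
-- EXEC ('SELECT * FROM dbo.Users WHERE DisplayName = ''' + @Input + '''');

-- Safer: the input rides along as a parameter instead.
EXEC sys.sp_executesql
    N'SELECT * FROM dbo.Users WHERE DisplayName = @DisplayName;',
    N'@DisplayName NVARCHAR(40)',
    @DisplayName = @Input;
```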
When you’re ready to learn more about query design, pick up the book T-SQL Fundamentals by Itzik Ben-Gan. The title sounds deceptively easy, but this book will take you through several years’ worth of learning, and it’s great for veterans as well as beginners.
Table Design Tips
1. Start every table with a clustered index on an identity. If you’re just getting started with table design, use this tip to avoid problems with empty pages and forwarded records. Here’s an example of how to create a table with this ID built for you:
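Something along these lines – the table and column names are just placeholders:

```sql
CREATE TABLE dbo.Sales (
    Id       BIGINT IDENTITY(1,1) NOT NULL,  -- auto-incrementing ID
    SaleDate DATETIME2 NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL,
    CONSTRAINT PK_Sales PRIMARY KEY CLUSTERED (Id)
);
```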
There are exceptions to the clustered index rule. If you need to generate unique values client-side, you can use GUIDs. If you have a staging table that is loaded from scratch and truncated every day, then a table without a clustered index (a heap) can be fine. However, if you’re not sure about what that means, just stick with a clustered index on an identity as a starting point.
2. Avoid NVARCHAR fields unless you really need ’em. The difference between NVARCHAR and VARCHAR fields is that NVARCHAR can store Unicode, but NVARCHAR requires twice as much storage space (even if you’re not using Unicode.)
3. Avoid MAX datatypes unless you really need ’em. You can use VARCHAR(MAX) and NVARCHAR(MAX) to store up to 2GB of data, but that comes at a performance price later when it’s time to index that data or compare it to other columns. You’re better off naming your length, say VARCHAR(50) or NVARCHAR(1000) based on the longest data you’ll put in the column. Don’t go overboard: SQL Server bases its memory grants in part based on the column definition. Oversize your column, and you’ll oversize your memory grant – which sounds good until you hit concurrency issues.
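For instance, a sketch of right-sizing – the lengths here are illustrative, not rules:

```sql
CREATE TABLE dbo.People (
    PersonName  NVARCHAR(100) NOT NULL, -- needs Unicode, capped at a realistic length
    CountryCode VARCHAR(2)    NOT NULL, -- plain ASCII codes, tiny and cheap
    Biography   NVARCHAR(4000) NULL     -- big, but still not MAX
);
```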
4. Recommended indexes are neither smart nor listed in priority order. When you see index recommendations that would supposedly make your query run faster, they’re generated by the modern equivalent of Clippy, the Office assistant. They’re better than nothing, but they’re rarely accurate. Think of them as nothing more than a hint that it’s time to consider hand crafting an index, and I teach you how in Fundamentals of Index Tuning.
5. Avoid storing files in the database. Store files in a filesystem, and in the database, just store a pointer to the file (either its http location or UNC path.) This keeps your database size lean and mean, lowers your licensing costs (because performance requirements will be lower), and makes your backups & restores faster.
Okay, look, it’s a mouthful of a blog post title, and there are only gonna be maybe six of us in the world who get excited enough to check this kind of thing, but if you’re in that intimate group, then the title’s already got you interested in the demo. (Shout out to Riddhi P. for asking this cool question in class.)
The system table sys.sysindexes has a rowmodctr that tells you how many times rows have been updated in an object. This comes in handy when we need to figure out if it’s time to update the statistics on a table.
To set the stage, let’s create a table, put a couple of indexes on it, and check rowmodctr to see how many changes have been made:
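A sketch of the setup – the object names are mine:

```sql
CREATE TABLE dbo.People (
    Id         INT IDENTITY(1,1) NOT NULL,
    PersonName VARCHAR(50) NULL,
    CONSTRAINT PK_People PRIMARY KEY CLUSTERED (Id)
);
CREATE INDEX IX_PersonName ON dbo.People (PersonName);

-- Check modifications so far:
SELECT name, rowmodctr
FROM sys.sysindexes
WHERE id = OBJECT_ID('dbo.People');
```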
No rows have been modified yet since no data has been loaded. Let’s load 1,000 rows of data, and then check rowmodctr to see how many modifications have been made to the table:
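Something like this, assuming a dbo.People table with a PersonName column:

```sql
-- Insert 1,000 rows of throwaway data, then re-check the counters.
INSERT dbo.People (PersonName)
SELECT TOP 1000 o.name
FROM sys.all_objects AS o;

SELECT name, rowmodctr
FROM sys.sysindexes
WHERE id = OBJECT_ID('dbo.People');
```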
Rowmodctr is 1,000 because 1,000 rows have been modified in the table – hey, inserts count as modifications.
Now the fun starts: let’s update everybody’s PersonName to be the same thing:
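A sketch (the value is arbitrary):

```sql
-- Set every row's PersonName to the same literal value.
UPDATE dbo.People
SET PersonName = 'Brent';
```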
Remember, we had rowmodctr 1,000 for both indexes just a second ago – and now the nonclustered index’s counter has gone up (saying it was modified), while the clustered index stayed the same (indicating that it wasn’t.)
I’m gonna be honest with you, dear reader: this was not the result I expected.
As a naive, delicate flower, I expected SQL Server to treat the rowmodctr the same way on both the clustered and nonclustered indexes. Instead, here we’re showing zero new modifications on the clustered index, but 1,000 new modifications on the name index.
In the big scheme of things, this isn’t a showstopper problem. It’s not like I’m over here waving my hands going, “OMG, this violates the laws of space and time!” It’s a system table, and if anybody’s relying on this thing for razor-sharp accuracy, they probably deserve what’s coming to them. I just find it so amusing that it’s handled differently on the two kinds of indexes – even though neither of them were modified. Huh.
How about if we explicitly tell SQL Server to set it to the same value:
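Meaning something like this, where no value actually changes:

```sql
-- Each row's PersonName is set to its own current value.
UPDATE dbo.People
SET PersonName = PersonName;
```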
Again, the nonclustered index shows modifications, and the clustered index doesn’t.
So what’s the takeaway?
I don’t think this is a performance-shattering problem: it’s probably fairly rare that you find an application that runs updates even when no changes were made.
To be clear, the fix isn’t to switch to a delete-the-old-row-and-insert-a-new-row design pattern: that would be twice as bad, since now you’re doubling the number of rows modified with each operation.
When we first launched SQL ConstantCare®, I saw it as a mentoring service, not a monitoring service. I wanted to give you advice that I’d learned the hard way across managing lots of systems.
With that in mind, I told users as they came onboard, “If there are any recommendations that you want me to mute, just let me know.” Sometimes, folks would email in about why they didn’t want to do something, and I agreed, so I muted it. Other times, we had a mentoring conversation, and they learned about why an issue might be more important than they’d assumed.
Here are the recommendations you muted most often:
Server Triggers Enabled – 33 users (which makes sense, once we talked through what their triggers were doing – turns out a lot of you use these in smart, creative ways)
Transaction Log Larger than Data File – 28 users (mostly due to SSIS)
Check for Corruption ASAP – 28 users (offloading to secondaries or restored servers)
Move User Databases Off the C Drive – 28 users (mount points, servers on death marches)
Take a Full Backup – 27 users (offloaded to AG secondaries not being monitored, dev servers, scratch databases, truly ginormous archive databases)
Move TempDB Off the C Drive – 20 users (see #5)
Check Unusual Database States – 18 users (offline databases kept around for safekeeping)
Take a Log Backup – 15 users (see #6)
Too Much Free Memory – 15 users (usually queries with comically large memory grants that ran simultaneously, and the DBAs couldn’t fix it for various reasons)
And…I’m okay with those. (There were plenty of others that I talked people out of, like auto-close, auto-shrink, CPU schedulers offline, and the plan cache being erased daily.) But for the most part, y’all made pretty good decisions about what you wanted to mute.
So now, you can do it yourself in the emails.
Starting with this week’s release, there are Mute links next to the recommendations, plus next to the individual details, like which database hasn’t been backed up:
You can either mute the recommendation altogether on this server, or just for one (or more) specific databases.
I’m really curious to see how this changes user behavior because I’m positive that folks are going to go mute-crazy. I have a hunch that y’all don’t want to tell me you’re going to ignore something – but you just want that recommendation to go away. (We send regular followups reminding you of what you’ve muted – because in a lot of companies, managers are included in the emails too, and we don’t want them to think their employees have a perfect bill of health when in reality they’re just muting everything.)
Behind the scenes, here’s how it works.
Normally, when you click on a web link, you expect to open a web page, which is served to you by a web server. That server takes your request, makes a connection to a database, and does your bidding.
That’s so 2008.
Instead, we’re using Function-as-a-Service, aka serverless. Serverless is a terrible name since there are still servers involved, but like I blogged about a year ago, it just means the servers aren’t my problem. Here’s how it works:
You click on a link in the email – and the link contains a unique string that represents you and the thing you can mute (think GUID, but it’s different)
Your web browser opens a static HTML web page – no fancypants web server required, just an HTML file (and supporting documents) that live in Amazon S3
Your web browser itself calls Amazon API Gateway – which runs the appropriate Lambda function, which can log your requested GUID (which we could write straight to the database, or just log it to a queue to be picked up later, since it’s not urgent)
If that request goes successfully, your browser renders a success message – or else it gives you info to send to us for diagnostics
It’s exactly how PasteThePlan works, too. We don’t have to provision app servers, patch them, manage load balancers, patch databases, etc. Richie can just focus on the code, the unit tests, and fixing my crappy Postgres functions. It’s not that serverless is easy, mind you – it’s hard, especially since it’s still in its infancy – but it just moves the work from systems administration into development.
Hosting? Well, Amazon takes care of the rest and gives us nice reports on how often y’all are using it:
This is one of those posts where I step away from the DBA role for a minute and tell you about something I just find personally mesmerizing. The serverless code costs so absurdly little that it doesn’t even pay to bother performance tuning it:
Whereas the database pricing, that’s another story. Isn’t it always…<sigh> So looking forward to serverless databases becoming more powerful and more mainstream for these kinds of sporadic use cases.
Oh while we’re here, next up in the release list: combining all your servers into one daily email if you want. Code’s done, and we’re testing it now.
By default, with a clustered index on the Id field (identity, starts at 1 and goes up to a bajillion), your results will probably look like this, ordered by Id:
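The kind of query in play looks something like this sketch, using the Stack Overflow Users table – note there’s no ORDER BY:

```sql
-- No ORDER BY, so the "order" is just whatever the plan happens to produce.
SELECT Id, DisplayName
FROM dbo.Users;
```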
Because if you look at the execution plan:
SQL Server is doing a clustered index scan. Since we’re clustered on Id, the results happen to come back in Id order.
But watch what happens when I create an index on DisplayName and Id, and then run the exact same query again:
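Roughly like this – the index name is mine:

```sql
-- Id comes along automatically because it's the clustering key.
CREATE INDEX IX_DisplayName ON dbo.Users (DisplayName);

-- The exact same query as before, still with no ORDER BY:
SELECT Id, DisplayName
FROM dbo.Users;
```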
Now the data comes back ordered by DisplayName due to the way SQL Server ran the query, as shown in the execution plan:
As I talk about in How to Think Like the Engine, SQL Server doesn’t have to do a clustered index scan for queries like these. It can use the smallest copy of the table that achieves its objectives, and in this case, it’ll use the nonclustered index because it takes less time to scan.
If you have to scan the Users table to get DisplayName and Id, which of these two objects would you rather scan:
The clustered index is 58MB, but our index on DisplayName and Id is only 9MB. It’s faster to scan that nonclustered index.
SQL Server may use indexes you didn’t expect.
Say we’ve got an index on DisplayName that happens to include Reputation, and we frequently run this query:
SELECT TOP 100 DisplayName
FROM dbo.Users
WHERE Reputation = 1;
Whenever we run the query, we always get the data back sorted by DisplayName, so we assume that it’s always going to be that way:
But that’s just because SQL Server happened to use the index on DisplayName that includes Reputation:
Because SQL Server knows a whole lotta users have Reputation = 1, so it can scan that index and quickly find the rows it needs. It doesn’t need to scan the ENTIRE index, just enough of it to find 100 users with Reputation = 1.
But the instant we create a “better” index – one that allows a seek, for example – and say that we only include DisplayName (not sort it):
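For example – the index name is mine:

```sql
-- Reputation is the key (seekable); DisplayName is only INCLUDEd,
-- so it isn't sorted inside the index.
CREATE INDEX IX_Reputation_Includes_DisplayName
ON dbo.Users (Reputation) INCLUDE (DisplayName);
```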
The data doesn’t come back ordered. SQL Server did an index seek on our new index:
But since DisplayName was only included, not sorted, even just a 100-row result set doesn’t come back in the order you might expect.
So in summary: if you need stuff ordered, add an ORDER BY, or else your order can get changed based on the execution plan SQL Server chooses.
Caching query results in the application instead of hitting SQL Server every time buys you:
Lower CPU usage both for query execution and for compilation time
Lower locking/blocking overhead (even NOLOCK requires a schema stability lock, for example)
Lower demands on query workspace memory
But how do you go about finding which queries to cache?
Find your most frequently executed queries, then ask 3 questions about them.
Run sp_BlitzCache @SortOrder = ‘executions’, or look on the Plans Executions tab of the Consultant Toolkit. Look at the executions per minute column – and on seriously high-volume servers, I’ll usually divide those numbers by 60 to get executions per second, which drives the point home even more. With those queries in mind, ask:
Question 1: How often does this data change? Often, we’re dealing with single-table queries that fetch a value from a global config table. The data just doesn’t change often, and could easily be fetched just once at application server startup. If we needed to change a config value, worst case, we could restart the app servers gradually and have them fetch the updated data.
When you’re not sure how often data changes, look at the table’s contents:
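One way to sketch that check is with the index usage DMV – sp_BlitzIndex surfaces similar numbers. The table name here is a placeholder:

```sql
-- Rough reads-vs-writes ratio per index since the counters last reset.
SELECT i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND s.object_id = OBJECT_ID('dbo.Config'); -- your table here
```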
In particular, you’re looking at the statistics on each index to show you how many writes the table has had recently. It can be tough to discern exactly when this number resets: it varies based on your version, patch level, failovers, restores, etc., but just the ratio of reads vs writes can be really enlightening. For example, at a recent client, we noted that we were fetching employee phone numbers millions of times per day, when only a handful of changes had been made over the last several months.
Question 2: Will out-of-date data really hurt? Some data absolutely, positively has to be up to the millisecond, but you’d be surprised how often I see data frequently queried out of the database when the freshness doesn’t really matter. For example, one client discovered they were fetching live inventory data on every product page load, but the inventory number didn’t even matter at that time. During sales, the only inventory count that mattered was when the customer completed checkout – inventory was checked then, and orders only allowed if products were still in inventory then.
Question 3: How much data are we talking about? Is it a config table with only a handful of rows, or is it a product list with millions of variations? The less data involved, the easier it is to cache, especially if we’re trying to cache in memory on the application servers. The more data involved, the more it’s going to hurt if we try to duplicate that data on every app server – and the more we need to start investigating a dedicated caching system like Redis, AWS ElastiCache, Azure Cache, or Google Cloud Memorystore.
Cache rules everything around me
Which leads me to the second of my caching truths:
The fastest query is the one you never make, and
The first query you cache is the hardest.
When you implement caching for the very first time in an application, you’re going to be torn about a million little decisions. What caching platform do you use? How do you set it up? How do you test that it’s working? How much redundancy do you need? How will you add the code to your application?
But when you see the first results – millisecond-level response times for frequently accessed data, and lower loads on your database server – you’ll be sold. You’ll cache first, and ask questions later. That’s when you’ll start running into problems with cache invalidation, but for now, find your most frequently executed queries and ask those 3 questions.
I’m coming to Salt Lake City in August for SQLSaturday #885. On Friday, I’ll be teaching a one-day pre-con:
Performance Tuning in 21 Demos – You’re a developer or DBA who wants to tune indexes and queries to make your application go faster. You don’t like learning by watching slides or reading white papers – you want to jump right into the code and see what happens. You’re the kind of person who loves watching YouTube videos that start with, “Hold my beer and watch this.”
I won’t be at Summit this fall – I’m skipping that this year for a long-overdue Hawaii cruise – so if you wanna catch me this year in person, those are your best bets. (There’s always my live online classes, too.)
You’ve been writing queries for at least a year or two, and you’re curious about what’s happening under the hood when SQL Server runs a query. You want to know what you can do to improve your queries, tune your indexes, and change SQL Server’s configuration settings so that your results will return faster.
When I first read the Table of Contents, I wondered how the authors were going to cover so much ground in ~450 pages – and they did a great job striking a balance between giving you enough to get started, versus dragging you into the weeds with obscure internals details. Great example: pages 170-171 only spend two paragraphs on why you shouldn’t be using cursors – nuff said. There were so many times where I read one paragraph and thought, “I sure hope the readers get how important this is.” It’s a lot of killer, and not a lot of filler.
Pam & Pedro give you enough to pique your curiosity and make your queries go faster, and if you’re interested in learning more about a specific thing, well, that’s what Google is for. This book teaches you enough to Google for the right things.
Despite the title, it is by no means a level 100, here’s-how-to-write-a-query book. The first couple dozen pages act as if they’re a gentle on-ramp to the parts of a query, but just jump past those to chapter 2, understanding query processing, and Pedro Lopes (@SQLPedro) and Pam Lahoud (@SQLGoddess) start getting into how a query is optimized.
Or if you’re not quite so curious, and you just want faster queries, jump to chapter 6 and learn about anti-patterns.
Or if you’re not sure what queries you should review first, and you’re curious about what it would take to build your own plan cache analysis tool, turn to page 268 and start reading about the plan cache DMVs.
I realized the picture should have two thumbs up, not one
That is really my only small objection to the book, and it’s a tiny one: it seems just a little bit out of order, with some really challenging concepts coming before the easier parts about simple things you can do to make your queries go faster. (Page 17 introduces the WHERE clause as if you’ve never heard of it before, and by page 30, we’re into worker threads, tasks, and degrees of parallelism.)
That’s easy to work around, however: just hit the Table of Contents, find the parts that interest you, and read those first. You’ll come back to the others later anyway because this is the kind of book that rewards repeated reading. More than that, it also rewards using the contents as a jumping-off point for additional learning.
I highly, highly recommend this book for folks who want to start a curious journey into SQL Server execution and tuning. Get it, and let me know what you think.
If you’ve got a database that doesn’t need to be on all the time, Azure SQL DB Serverless is in public preview now. It makes a lot of sense for those development databases you have to keep around, but don’t really want to pay for.
If your database doesn’t get any queries for 6 hours, it’ll pause billing, and automatically resume again when your queries kick in. Here’s how to set it up in the portal:
(Video: How to Switch to Azure SQL DB Serverless)
The minimum auto-pause duration is 6 hours (that’s some pretty doggone slow response time)
After auto-pause, the first login attempt will fail
Any queries whatsoever reset the clock (including login attempts, which means this won’t really work for databases exposed to the Internet, which you probably shouldn’t be doing without IP restrictions anyway)
Not currently available in Australia Central, China East, China North, France South, Germany Central, Germany Northeast, India West, Korea South, South Africa West, UK North, UK South, UK West, and West Central US
This is a total no-brainer for non-public-facing, development-only or archive databases. For hobby apps, the 6-hour time with zero queries doesn’t really make sense: it’s just too long, especially when search engines are going to hit your site at least that often, let alone things like monitoring tools and automated builds.