
Curated SQL Posts

Tips for Disaster Response

Christophe Pettus shares some advice:

  • Wind your watch.

No one has a watch that winds anymore, but the point is: take a deep breath. Give yourself a minute, or two, or five to gather data. A too-fast response is the main way a problem becomes a disaster.

Click through for all seven of them. I fully agree with doing drills. If you don’t practice in the easy times, you probably won’t respond well in the harder times. Also, I recommend having the process written down on a one-pager that everyone has a copy of. This should include the most important details around emergency response: how to escalate, what information to start capturing early on, etc.


Page Splits and Readaheads on Clustered Tables

Erik Darling learns us some T-SQL:

But we can see that the number of page splits that have occurred on the server has gone up a bit here. So if we look back at the table itself now, we’re still not going to have any forwarded fetches, because that’s never a thing for a clustered table.

But we do have a lot more pages in the table now, and the average page space used in percent has gone down dramatically. This was at 99-something percent. We are now below 50%.

We are at 46% full. Well, that doesn’t feel too good. What this means is that SQL Server has a lot more pages in the table now that are a whole lot less full, which puts us in a spot similar to yesterday, when we deleted a bunch of data from the heap and still read the empty pages.

Click through for the video.
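
If you want to poke at the same numbers on your own instance, here is a minimal sketch of both checks; dbo.Votes is a stand-in for whatever clustered table you are testing against:

    -- Cumulative page splits since instance startup (server-wide, not per table)
    SELECT cntr_value AS page_splits
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page Splits/sec'
      AND object_name LIKE '%Access Methods%';

    -- Page count and average fullness for one table
    -- (LIMITED mode returns NULL for avg_page_space_used_in_percent, so use SAMPLED or DETAILED)
    SELECT index_id, page_count, avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID(N'dbo.Votes'), NULL, NULL, 'SAMPLED');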


Cold Storage Data Archival

Brent Ozar answers a question:

At least once a month, I get this question from a client:

We have big data, and we want to save money. We want to move the older data down to some kind of archives that will cost us less.

I ask exactly one follow-up question:

Are you willing to modify the app that queries the data, or do you want the users to still be able to query the old data in their existing screens and reports?

Brent recommends trying a linked server approach. I’d also toss PolyBase into the mix, though it’s going to have similar pros and cons to linked servers in this specific scenario. I have an old post on cold storage with PolyBase, but the mechanisms haven’t changed much. I do have some sample scripts in my PolyBase talk’s GitHub repo, though fair warning that these are going away soon as I rebuild the talk for 2025.
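
As a rough sketch of what the PolyBase route can look like on SQL Server 2022 (the storage account, container, credential, and table names are all made up, and earlier versions use TYPE = HADOOP with a wasbs:// location instead):

    -- External data source pointing at cheap Azure Blob Storage
    -- (assumes PolyBase is installed and a database scoped credential already exists)
    CREATE EXTERNAL DATA SOURCE ColdStorage
    WITH (
        LOCATION = 'abs://archive@mystorageacct.blob.core.windows.net',
        CREDENTIAL = ColdStorageCredential
    );

    CREATE EXTERNAL FILE FORMAT ParquetFormat
    WITH (FORMAT_TYPE = PARQUET);

    -- Archived rows live in Parquet files but still query like a table
    CREATE EXTERNAL TABLE dbo.SalesArchive
    (
        SaleID bigint,
        SaleDate date,
        Amount decimal(18, 2)
    )
    WITH (
        DATA_SOURCE = ColdStorage,
        LOCATION = '/sales/',
        FILE_FORMAT = ParquetFormat
    );

The idea, as with linked servers, is that a view unioning dbo.Sales and dbo.SalesArchive can keep existing screens and reports working without application changes.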


The Power of COALESCE()

Lukas Vileikis shows off a bit of ANSI SQL syntax:

When the need to deal with NULL values arises, multiple functions come into play. The SQL COALESCE function is one of them. In simple terms, the SQL COALESCE function is a ‘fallback’ mechanism for missing data. Its only task is to return the first non-NULL value from a list of values.

I used to be a big believer in using COALESCE() all of the time, but it turns out that ISNULL() is faster if you only have two things to compare. Granted, it’s not a huge difference in speed, as I recall, but the difference is there.
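
A quick illustration of the two side by side, with a hypothetical table and columns:

    SELECT
        COALESCE(Nickname, FirstName, 'Unknown') AS display_name,  -- first non-NULL of any number of values
        ISNULL(Nickname, FirstName)              AS fallback_name  -- exactly two arguments
    FROM dbo.People;

One behavioral difference worth remembering: ISNULL() returns the data type of its first argument, whereas COALESCE() follows data type precedence across all of its arguments, so ISNULL() can silently truncate the second value when the first is a shorter string type.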


Msg 10054 Failures over VPN

Tim Radney takes us through a connection failure error:

Over the past few weeks, I’ve been contacted by multiple customers experiencing the same frustrating issue. Applications and SSMS sessions that had been rock-solid for years suddenly started throwing errors when connecting over VPN:

  • Msg 10054: “An existing connection was forcibly closed by the remote host.”
  • “Connection Failure (status code = 3000, [Microsoft][ODBC Driver 17 for SQL Server] The connection is broken and recovery is not possible…”

Read on to learn what’s going on and how you can remediate it.


Setting Function Parameters for Debugging in R

Jason Bryer has a function:

I tend to write a lot of functions that create specific graphics implemented with ggplot2. Although I try to pick graphic parameters (e.g. colors, text size, etc.) that are reasonable, I will typically define all relevant aesthetics as parameters to my function. As a result, my functions tend to have a lot of parameters. When I need to debug the function, I need to have all of those parameters set in the global environment, which usually requires highlighting each assignment and running it. This function automates that process.

Click through to see how it works. H/T R-Bloggers.


Database Deployment Variables with SQLCMD

Andy Brownsword changes a variable:

A regular Database Project deployment is static and delivers consistent results regardless of environment. When it comes to schema, that’s usually desired, but data is a different story.

Data is environment-specific. You want a Database Project that works across all environments. You want smarter deployments. You need SQLCMD Variables.

These have been the go-to method for handling different environments and other things that change between releases since I started using database projects about 15 years ago. It looks like not a lot has changed on this front, but it’s good to see that they still work as expected.
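
As a quick sketch of the pattern (the variable and table names are made up), a post-deployment script can branch on a SQLCMD variable like this:

    -- $(Environment) is a SQLCMD variable defined in the project or publish profile
    IF '$(Environment)' = 'Production'
        INSERT dbo.AppSetting (SettingName, SettingValue)
        VALUES ('ApiBaseUrl', 'https://api.example.com');
    ELSE
        INSERT dbo.AppSetting (SettingName, SettingValue)
        VALUES ('ApiBaseUrl', 'https://api-test.example.com');

At deployment time you supply the value, e.g. SqlPackage /Action:Publish /v:Environment=Production, or sqlcmd -v Environment=Production when running plain scripts.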


T-SQL Tricks from Recent Versions

Rebecca Lewis has three tricks for us:

Three T-SQL features that have shipped over the last few releases and quietly retired patterns many of us are still using out of habit. Each replaces a stale workaround with one line of code, and in two of three cases it runs much faster, too. Take a look, try them out.

Click through for Rebecca’s list. And if you want a full talk’s worth of these sorts of things, I happen to have one.
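
Which three features Rebecca picked, you’ll have to click through for, but as a taste of the genre, here are two SQL Server 2022 additions that retire long-standing workarounds:

    -- GENERATE_SERIES replaces numbers tables and recursive CTEs
    SELECT value AS n
    FROM GENERATE_SERIES(1, 10);

    -- DATETRUNC replaces the old DATEADD/DATEDIFF rounding dance
    SELECT DATETRUNC(month, SYSDATETIME()) AS start_of_month;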


Against Data Lakes

Christophe Pettus lays out an argument:

If your engineers have told you that you need a data lake, you should be a little suspicious. Most organizations that build data lakes don’t need them, and a substantial fraction of the ones that do build them end up with what the industry — without any irony — calls a “data swamp.” So before we get to what a data lake is, let me say plainly: the right answer is often “not yet, and maybe never.” The interesting question is when “yet” becomes “now.”

I think my level of agreement is about 80%, and I’m glad that Christophe anticipated my “It’s really useful for data science work” argument. If the large majority of your data is relational in nature, then yeah, a data lake seems like overkill. And most of the time, I see companies taking that lake data and then organizing it into a warehouse later.

I’d say the biggest downside to relying on a data warehouse is request latency: the time between asking for data and getting it. I need to get some dataset that includes columns A, B, and C from a table in the relational database, but I’m not 100% sure that I really need A, B, and C, because I need to train a model first or otherwise work with the data in some significant way. The OLTP DBAs don’t want me writing large-scale analytical queries against this data because of the performance implications. The BI developers/DBAs give me a turnaround time in months on this data, and if it turns out I don’t need it, they’ve wasted a lot of time for nothing.

That kind of scenario, in my mind, is what compels people in organizations to push for data lakes or something similar.
