
Best Practices for SQL Server in Azure

Most of the typical items DBAs hit when tuning a server for SQL Server hold true in Azure. The three major areas you will usually focus on are OS configuration, storage configuration, and SQL Server instance configuration. These are generalizations; your workload may demand variations, but those are best covered case by case and are out of scope for this post.

OS Configurations

When configuring an OS for SQL Server, these are the items that you should hit:

  • Provision separate service accounts for all SQL Server services (Database Engine, SQL Server Agent, Integration Services, etc.). This is a security best practice, and it may be considered domain-level work if the accounts need access to domain resources.
  • Add the database engine service account to the following user rights:
    • Perform volume maintenance tasks – this enables instant file initialization, which can yield 4-5x performance improvements on tasks such as data file growth, database creation, and database restores (it has no effect on transaction logs; those *must* be zero-initialized).
    • Lock pages in memory – this prevents the OS from paging out SQL Server's buffer pool.
  • Change the power plan from Balanced to High performance. The CPU gains were not as dramatic as in other documented cases (private clouds where the hypervisor and VMs were under the same administrative team), but based on my Geekbench testing you can still squeeze out roughly an 8% performance improvement with this setting (which actually surprised me). This may be a fluke, but it was consistent across ~10 tests at varying times of day.
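The power plan change is easy to script with the built-in powercfg utility; a minimal sketch (SCHEME_MIN is the documented alias for the High performance scheme, i.e. minimum power savings):

```powershell
# Show the currently active power scheme
powercfg /getactivescheme

# Activate the built-in High performance plan.
# SCHEME_MIN = High performance (minimum power savings);
# SCHEME_BALANCED would switch back to Balanced.
powercfg /setactive SCHEME_MIN

# Verify the change took effect
powercfg /getactivescheme
```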

Storage Configuration

Azure storage is extremely simple to configure and use. That simplicity, however, comes at a price: you lose the ability to make extremely fine-grained configuration decisions based on your workload. That said, for most purposes Azure storage is perfectly fine, as long as you configure it properly. Here is the list of best-practice configuration guidelines for Azure storage for SQL Server:

  • Use data disks with no caching and no geo-redundancy
  • There are two options for scaling IO in Azure: storage pools, or simple volumes on single data disks with filegroups containing files on multiple volumes
    • The SQLCAT team has benchmarked the performance of the two. The files/filegroups option yields better IOPS scaling, but it comes at the cost of more management overhead, and partial unavailability of any one disk can make the entire database unavailable.
  • Set the allocation unit size to 64 KB on volumes that will hold SQL Server data files
  • Use storage pools, not Windows' software RAID. IOPS and throughput do not scale with Windows' software RAID, and it can yield erratic, poorly performing results.
    • Caps to be aware of:
      VM Size   Data Disks   Max IOPS   Bandwidth
      A3        8            4,000      400 Mbps
      A4        16           8,000      800 Mbps
      A6        8            4,000      1,000 Mbps
      A7        16           8,000      2,000 Mbps
      A8        16           8,000      40 Gbps
      A9        16           8,000      40 Gbps
    • The overall cap is 20,000 IOPS per storage account
  • There are a couple common ways that you can configure your storage pools:
    • One large storage pool with all of your disks in it and then placing all of your database files on that
    • Segregate IO needs by category. For example:
      • 4x data disks – TempDB
      • 4x data disks – Transaction Logs
      • 8x data disks – Data files
  • The number of columns on a virtual disk should equal the number of data disks in the pool (6 disks = 6 columns); this allows IO requests to be distributed evenly across all data disks in the pool
  • The interleave value on virtual disks should be 256 KB for all workloads
  • Linear scaling starts breaking down at around 4 disks, and writes scale better than reads
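Tying those guidelines together, here is a hedged sketch of building a pool over the attached data disks with a matching column count, a 256 KB interleave, and a 64 KB allocation unit size. The friendly names ("SqlPool", "SqlData") are placeholders of my own; verify the disk selection on your VM before running anything like this:

```powershell
# Assumption: all attached Azure data disks report as poolable
$disks = Get-PhysicalDisk -CanPool $true

# One storage pool across all data disks (names are placeholders)
New-StoragePool -FriendlyName "SqlPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Simple (striped) virtual disk: columns = number of disks,
# interleave = 256 KB, per the guidance above
New-VirtualDisk -StoragePoolFriendlyName "SqlPool" `
    -FriendlyName "SqlData" `
    -ResiliencySettingName Simple `
    -NumberOfColumns $disks.Count `
    -Interleave 262144 `
    -UseMaximumSize

# Initialize, partition, and format with a 64 KB allocation unit size
Get-VirtualDisk -FriendlyName "SqlData" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 `
        -NewFileSystemLabel "SqlData" -Confirm:$false
```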

You can use the following PowerShell command to verify the configuration of your virtual disks:

Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, @{Expression={$_.Interleave / 1KB}; Label="Interleave(KB)"}, NumberOfColumns, NumberOfDataCopies, @{Expression={$_.Size / 1GB}; Label="Size(GB)"}, @{Expression={$_.FootprintOnPool / 1GB}; Label="PoolFootprint(GB)"} -AutoSize

One interesting note: Azure storage is a shared environment, which means things happen behind the scenes that may not be what you want. If your systems are under heavy use you will likely never notice, but for new systems and for performance-testing benchmark machines, you will want to warm up the disks first (for roughly 20 minutes). Last, but not least: ALWAYS test your throughput. I recommend SQLIO when you just want to test the general performance of the storage subsystem. It's out of scope for what I want to cover here, but the tool and some really good documentation for it are available from Microsoft.

SQL Server Instance Configuration

There are instance-level configuration options that generally benefit SQL Server, irrespective of where your SQL Server instance is located. Here's what you'll want to hit:

  • Install only the minimum feature set for the instance
  • Enable option Advanced > optimize for ad hoc workloads
  • Change option Advanced > cost threshold for parallelism to 50. SQL Server's default setting for this is just flat-out too low. If you are tuning a pre-existing instance, you'll likely see a lot of CXPACKET waits in your wait statistics
  • Change option Advanced > max degree of parallelism to a value equal to the number of cores/vCPUs (up to 8). A value of 0 defaults to all available logical processors, up to 64.
  • Enable option Database Settings > Compress backup. There is really no reason not to compress your backups.
  • Split your TempDB across multiple data files, using as many files as you have processors, up to 8. They all need to be the same size, because SQL Server uses a weighted round-robin when allocating across multiple data files. The goal is to reduce contention on TempDB's GAM, SGAM, and PFS pages.
  • Change option Memory > minimum server memory (in MB) to a value ~2048-4096 MB lower than the maximum server memory value
  • Change option Memory > maximum server memory (in MB) to leave headroom for the OS and everything else on the box. A common rule of thumb is to reserve at least 4 GB (more on larger servers) for everything outside SQL Server; for example, on a 32 GB VM, a max server memory of around 28672 MB is a reasonable starting point.
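Most of the instance-level options above can be applied with T-SQL rather than clicking through SSMS. A hedged sketch follows; the MAXDOP and memory numbers are illustrative values for an 8-vCPU, 32 GB VM, not prescriptions:

```sql
-- Illustrative sketch: values assume an 8-vCPU, 32 GB VM; adjust for yours.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'optimize for ad hoc workloads', 1;
EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'max degree of parallelism', 8;    -- cores/vCPUs, up to 8
EXEC sp_configure 'backup compression default', 1;   -- compress backups
EXEC sp_configure 'max server memory (MB)', 28672;   -- leave RAM for the OS
EXEC sp_configure 'min server memory (MB)', 24576;   -- ~4 GB below max
RECONFIGURE;
```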


That’s all folks! That should get you going on configuring an Azure VM to be home to your lovely little SQL Server instance(s). Happy clouding!

Next Steps

Review our case studies and engagements where we helped companies just like yours solve a variety of business needs.

About Oakwood

Since 1981, Oakwood has been helping companies of all sizes, across all industries, solve their business problems. We bring world-class consultants to architect, design, and deploy technology solutions to move your company forward. Our proven approach guarantees better business outcomes. With flexible engagement options, your project is delivered on time and on budget. 11,000 satisfied clients can't be wrong.