Desired State Configuration #3 – Details about SQL Server Installation with Murphy's Law

If you are looking for a ready-made solution, please be patient … here I explain my project history: how I proceeded, when, and why. In the last blog post of this series I will present the "finished" solution.

Many thanks for your understanding!

In the following post, I would like to go a little deeper and show you how I customized my scripts to copy and prepare everything for my Desired State Configuration SQL Server rollout. After explaining the preparations and fundamentals of my deployment in the first two blog posts, today I would like to walk through the actual installation as I implemented it. (There is also another approach that I would like to try later – but as you know, there is never enough time in such projects ;-))

My scripts are in a state you could call "working" as I write this post. They can create all the necessary folders on one (or more) servers from a central server, copy all the required files (ISO image, updates, PowerShell modules, etc.) to the target server, and then, according to the specifications, install SQL Server 2017 at the latest patch level and configure it according to best-practice requirements.

As explained in the second part of the DSC series, I have copied all the necessary files with my DSC configuration – including the current ISO image of SQL Server 2017 and the latest cumulative update – to the destination server. (In the picture only the TARGET state was checked; no actions were necessary!)

With the script in that state, you can finally start with the real setup … for that we need the following:

  • the ISO-Image (en_sql_server_2017_developer_x64_dvd_11296168.iso)
  • the latest patch (SQLServer2017-KB4466404-x64.exe)
  • the ConfigurationFile.ini
  • and the necessary PowerShell modules (e.g. dbatools, SecurityPolicyDsc, SqlServerDsc …)
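If these PowerShell modules are not yet available on the central server, they can be downloaded from the PowerShell Gallery first. A minimal sketch of that step (the module names come from this series and the share path matches the one used later, but the snippet itself is an assumption, not part of my scripts):

```powershell
# Download the required modules once to the central share,
# so they can later be copied to the target servers
$requiredModules = 'dbatools', 'SecurityPolicyDsc', 'SqlServerDsc'

foreach ($module in $requiredModules) {
    # Save-Module stores the module in a folder instead of installing it locally
    Save-Module -Name $module -Path '\\dc1\NetworkShare\Modules' -Force
}
```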

In the file "ConfigData.psd1" I have written down all the configuration variables, values and roles needed during the setup. In my current example, all variables and values are identical for all servers, but they may differ in other environments.

    @{
        AllNodes = @(
            @{
                NodeName                    = "*"
                PSDscAllowPlainTextPassword = $true
                PSDscAllowDomainUser        = $true
                SQLInstallPath              = "D:\SQL2017"
                InstanceName                = "MSSQLServer"
            },
            @{
                NodeName = "sqlvm01"
                Role     = "SqlServer"
            },
            @{
                NodeName = "sqlvm02"
                Role     = "SqlServer"
            },
            @{
                NodeName = "sqlvm03"
                Role     = "SqlServer"
            },
            @{
                NodeName = "sqlvm04"
                Role     = "SqlServer"
            }
        )
    }
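Once this data file exists, it can be loaded and inspected directly, which is handy as a quick sanity check before compiling any configuration – a small sketch (the path is the one used later in this series):

```powershell
# Load the configuration data and list all nodes with the SqlServer role
$configData = Import-PowerShellDataFile -Path '\\dc1\NetworkShare\scripts\configData.psd1'
$configData.AllNodes |
    Where-Object { $_.Role -eq 'SqlServer' } |
    ForEach-Object { $_.NodeName }
```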

Murphy has also struck

But from the beginning … On my test machines sqlvm01 and sqlvm02 I "played around" a bit without really logging what I did and why. So what did I do – for whatever reason – locally on those two machines, and in which order, that made my scripts work? I was actually very satisfied with how my scripts ran on the first two machines: SQL Server 2017 was installed and then configured by Desired State Configuration. But on sqlvm03 and later on sqlvm04, my scripts did not work as desired, even though all four SQL Servers were deployed identically into the same resource group and I ran the same preparation scripts on all of them:

  • Network Discovery enabled
  • all disks were attached and formatted
  • joined the target domain

Then, from the domain controller, I pushed the configurations – as described before – to the target servers using DSC … but unfortunately, the Script resource did not like me. On sqlvm01 and sqlvm02 I was able to mount the ISO image without any problem; no matter what I tried on sqlvm03 and sqlvm04, I could never get the ISO image mounted there, and therefore I wasn't able to access the drive in order to run "setup.exe".
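For illustration, such a Script resource for mounting the ISO might have looked roughly like this – a reconstruction of the approach, not my original script (the ISO path is taken from this series):

```powershell
Configuration MountSqlIso {
    Node $AllNodes.where{ $_.Role.Contains("SqlServer") }.NodeName {
        Script MountIso {
            # TestScript: is the ISO already mounted?
            TestScript = {
                (Get-DiskImage -ImagePath 'D:\SQL2017\en_sql_server_2017_developer_x64_dvd_11296168.iso').Attached
            }
            # SetScript: mount the image so setup.exe becomes accessible
            SetScript = {
                Mount-DiskImage -ImagePath 'D:\SQL2017\en_sql_server_2017_developer_x64_dvd_11296168.iso'
            }
            # GetScript: must return a hashtable describing the current state
            GetScript = {
                @{ Result = (Get-DiskImage -ImagePath 'D:\SQL2017\en_sql_server_2017_developer_x64_dvd_11296168.iso').Attached }
            }
        }
    }
}
```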

The new beginning – everything back to 0.25

Sometimes things go from bad to worse … of course, I didn't have a backup of any script (whether working or not). I had at least created some kind of versioned folder structure whenever I reached a "milestone", but I did not have a proper backup and certainly hadn't put my work into version control. So it came as it had to come: I deleted the wrong folder with PowerShell and everything was gone; only what was still open in Visual Studio Code (3 files) could be saved.
Actually – no idea where the error came from – I had deleted almost the entire C: drive … no more Internet browser, no more PowerShell, there was almost nothing left. It was simply easiest to rebuild the server (if, like me, you have no backup) => it was only the domain controller 😮

I took these unpleasant events as an opportunity to clean up my entire development environment and choose a "slightly different" setup … actually, everything stays the same, except that I will no longer develop on the domain controller itself and my scripts will no longer be located on the server; I have already taken this approach in other projects. So I set up a new Windows Server 2016 (Standard DS1 v2) and rolled out and configured the Active Directory and DNS roles. In addition, there is now a Windows 10 machine (Standard A4m v2) on which I will develop my scripts.

Learning from mistakes makes you stronger

In order to come closer to my goal – or the project's goal of an "automated installation of SQL Server with Desired State Configuration" – I unfortunately had to start over again … but this time with a slightly different approach. 😉
My first lesson learned: create a backup of a "blank machine" to get back to the original server faster. My second lesson learned: do everything else with scripts (which I will publish in another blog post).

But back to the topic of Desired State Configuration – I kept my initial activities, copying the PowerShell modules and the installation media to the respective target server; what also remained is the check whether the necessary .NET Framework is installed, and if not, it is installed. This time I oriented myself a bit (or a bit more) on Chris Lumnah (Blog), who has built a script based on an introduction to DSC from the Microsoft Virtual Academy and his own knowledge. I would like to adapt his script, since it does not work with the DSC Script resource but with the "proper" SQL Server DSC resources.

More to come in the next blog post, because I had to spend another day on setting up everything again and start over with my new DSC scripts…
Sorry and thank you for your understanding!

Desired State Configuration #2 – Getting Started with SQL Server Installation

After my first post – Preparing to use Desired State Configuration (DSC) – we can now take the first steps towards installing a SQL Server. So what are the prerequisites for installing a SQL Server on a new Windows Server?

About my test environment … for developing my scripts I'm using a development environment in Azure consisting of at least two Windows servers. One server acts as a domain controller and the second machine as the SQL Server on which I test whether my scripts are functional. As needed, I can and will roll out one or more additional servers to show the scripts in my demos or to develop and test them in a "larger" environment.

my dev environment for Desired State Configuration build in Azure - Overview - New Environment

Installation Requirements

To perform a simple SQL Server installation – in the sense of a customer project – on a new server, we need three things:

  • .NET Framework 4.5 (minimum)
  • Installation media for SQL Server 2017 Standard Edition
  • and the latest update, meaning the most recent cumulative update

To deploy the installation media to the new server, we need at least one folder on some drive. There are several ways to provide this folder …

  • Manual creation of the folder
  • Creation of the folder with Desired State Configuration
  • Creation of the folder using PowerShell

Since this blog series strives to automate everything, the manual creation of the folder is of course eliminated. Depending on how you want to proceed, which requirements you must or would like to meet, or in which order you want to work, you can choose between Desired State Configuration and PowerShell. In this project, I would like to illustrate with very simple steps how to first create a directory on the target server and then get the installation media there.

Create the folder with PowerShell

To realize my project properly, I would theoretically first have to create a Desired State Configuration for the folder structure, implement and start the copy action, and then start the actual installation. I'll try that in a second step; for now I'm starting with the "simpler" approach. For this I use Invoke-Command and check whether the folder exists on the target server; if not, the folder is created.

Invoke-Command -ComputerName $NodeName -ScriptBlock {
    param ($LocalInstallFolder)
    if (!(Test-Path -Path $LocalInstallFolder)) {
        New-Item -ItemType Directory -Path $LocalInstallFolder | Out-Null
    } else {
        Remove-Item -Recurse -Force $LocalInstallFolder
        New-Item -ItemType Directory -Path $LocalInstallFolder | Out-Null
    }
} -ArgumentList "D:\$LocalInstallFolder\"

Why do I delete the destination directory first? If I run the script multiple times, I want to make sure that only the current installation media and the current update end up there; therefore the directory is deleted once and then re-created. Once the target directories exist, the required files and directories can be copied.

Copy the installation media

Initially, I copied all my files from a network share to the destination server with "Copy-Item", but since I was developing quite a lot in the PowerShell ISE, I did not have a progress bar … so later on I switched to "Start-BitsTransfer".

Write-Host "Copy SQL-Image to" $NodeName.ToUpper()
$DestinationPath = "\\$NodeName\d$\$LocalInstallFolder\"
Start-BitsTransfer -Source ..\SQL\* -Destination $DestinationPath -Description "..\SQL\* will be moved to $DestinationPath" -DisplayName "Copy SQL-Image"
Start-BitsTransfer -Source ..\SQL\Updates\* -Destination "$DestinationPath\Updates\" -Description "..\SQL\Updates\* will be moved to $DestinationPath\Updates\" -DisplayName "Copy SQL-Updates"

But that was somehow too much effort and too complicated for me – more like a "workaround" … but at least it worked 😉 That's exactly how I started when copying the necessary PowerShell modules to the target server, but with every day and every further attempt at Desired State Configuration, I learned new things about how to approach the goal. With what I learned, I changed the way I copy the necessary files to use Desired State Configuration itself.

Configuration CopyInstallationMedia {
    Node $AllNodes.where{ $_.Role.Contains("SqlServer") }.NodeName {
        File InstallationFolder {
            Ensure          = 'Present'
            Type            = 'Directory'
            SourcePath      = "\\dc1\NetworkShare\SQL\"
            DestinationPath = "D:\SQL2017\"
            Recurse         = $true
        }
        File PowershellModules {
            Ensure          = 'Present'
            Type            = 'Directory'
            SourcePath      = "\\dc1\NetworkShare\Modules\"
            DestinationPath = "C:\Windows\system32\WindowsPowerShell\v1.0\Modules\"
            Recurse         = $true
        }
    }
}

$OutputPath = "\\dc1\DSC-ConfigShare"
CopyInstallationMedia -ConfigurationData \\dc1\NetworkShare\scripts\configData.psd1 -OutputPath "$OutputPath\CopyInstallationMedia\"

Start-DscConfiguration -ComputerName sqlvm02 -Path \\DC1\DSC-ConfigShare\CopyInstallationMedia -Wait -Verbose -Force
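To verify afterwards whether the target server actually reached the desired state, the Local Configuration Manager can be queried. A quick check could look like this (standard DSC cmdlets, shown here as a sketch):

```powershell
# Compare the current state of sqlvm02 against the last applied configuration
Test-DscConfiguration -ComputerName sqlvm02 -Verbose

# Show which configuration the LCM applied last, and whether it succeeded
Get-DscConfigurationStatus -CimSession sqlvm02
```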

Building my DSC configurations

For the sake of a better overview and more simplified, step-by-step testing and traceability, I have split my Desired State Configuration into several sections. I can call each configuration specifically and individually, and then refine or correct it.

Configuration CopyInstallationMedia {
    Node $AllNodes.where{ $_.Role.Contains("SqlServer") }.NodeName {
        File InstallationFolder { ... }
        File PowershellModules { ... }
    }
}

Configuration SQLInstall { ... }

Configuration ConfigureSQL { ... }

Of course, some parameters are also required which cannot be ignored; these are defined at the beginning of the script. In my case, for the installation and the later configuration, I need at least the path where the installation media is stored centrally and the destination server to which the DSC configuration should be rolled out.

    param (
        # Path where all Install media will be located
        [string]$InstallPath,

        # Computer name to install SQL Server On
        [string]$Computername
    )

Now that I have subdivided and defined the configuration, I can add the necessary scripts and modules and initiate the creation of the MOF files. Based on these MOF files, the actual configuration on the node is compared with the target configuration and corrected accordingly. Since something could have changed in the configuration between the last time the MOF files were created and "now", I always have the files re-created for safety, so they can be rolled out directly to the target server. A short explanation of my script section: I call the respective configuration, assign it a configuration-data file, and define a path where the MOF files are stored. Finally, the respective configuration is rolled out from the central storage location to the destination server.
As you can now see, this also allows me to execute the individual steps separately for debugging purposes – either manually one by one, or only individual configurations such as the mere configuration of a SQL Server.

Write-Host "Starting DSC process on" $Computername.ToUpper()
Import-Module $PSScriptRoot\ConfigureSQLServer.psm1 -Force

## Create MOF-Files
$OutputPath = "\\dc1\DSC-ConfigShare"
CopyInstallationMedia -ConfigurationData \\dc1\NetworkShare\scripts\configData.psd1 -OutputPath "$OutputPath\CopyInstallationMedia\"
SQLInstall -ConfigurationData \\dc1\NetworkShare\scripts\configData.psd1 -OutputPath "$OutputPath\SQLInstall\"
ConfigureSQL -ConfigurationData \\dc1\NetworkShare\scripts\configData.psd1 -OutputPath "$OutputPath\SQLConfig\"

## Use MOF-Files to establish desired state configuration
Start-DscConfiguration -ComputerName $Computername -Path \\DC1\DSC-ConfigShare\CopyInstallationMedia -Wait -Verbose -Force     
Start-DscConfiguration -ComputerName $Computername -Path \\DC1\DSC-ConfigShare\SQLInstall -Wait -Verbose -Force     
Start-DscConfiguration -ComputerName $Computername -Path \\DC1\DSC-ConfigShare\SQLConfig -Wait -Verbose -Force

This is what my folder structure or file structure looks like on the central server, there is a folder for each SET configuration and a single file for each target server.

Folder and File Structure for my first Desired State Configuration deployment
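Based on the node names from my configuration data, that structure can be sketched like this (an illustration; DSC names each MOF file after its node):

```
\\dc1\DSC-ConfigShare\
├── CopyInstallationMedia\
│   ├── sqlvm01.mof
│   ├── sqlvm02.mof
│   ├── sqlvm03.mof
│   └── sqlvm04.mof
├── SQLInstall\
│   └── ...
└── SQLConfig\
    └── ...
```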

More about Desired State Configuration can, of course, be found in the Microsoft documentation

#4 SQL Server Configuration – implement Best Practices

I hadn't written anything in this PowerShell series for a long time, and I'd like to make up for that, even though a lot has changed (in my work & community life) … Today I'd like to show you two things: first the conventional way of configuring SQL Server with PowerShell (i.e. how I did it in my script), and then a simpler and faster way with the PowerShell module dbatools.

Best Practices with T-SQL

As part of a SQL Server installation, you should optimize certain parameters to guarantee high-performance and stable operation. This includes a variety of configuration parameters at the instance level, such as "max memory" or "max degree of parallelism". All these instance settings can be set via the same SQL Server procedure (sp_configure), so I have written a function for these recurring T-SQL commands in order to stay flexible for later extensions.

function ExecuteSQLCmd ([string]$SQLQuery) {
    Invoke-Sqlcmd -ServerInstance $ServerName -Query $SQLQuery -QueryTimeout 65535
}
With this simple function (it worked before too, but now it is "easier"), I can call the following functions, which configure my SQL Server according to best practices: each function calculates its values according to the system's resources and then executes the T-SQL via ExecuteSQLCmd. This also allows me to adapt everything to the respective environment.


Example – Powershell Function “SetMaxDOP”

In order to set the value for MaxDOP (max degree of parallelism), I have to know how many logical CPUs the server has. I determine this value at the beginning of the script through auxiliary functions; only with this value can I decide. Usually, I set the cost threshold for parallelism on our systems to 40, which fits at least 90% of the systems. Of course, there is much more you can do to stick to best practices, as you can read in the next section, but I have done quite well with these values on our systems for the last two years.
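The auxiliary function for the CPU count is not shown in this post; determining the number of logical CPUs could be sketched roughly like this (an assumption about my helper, not the original code):

```powershell
# Determine the number of logical processors of the target server
$global:NoLogicalCPUs = (Get-CimInstance -ClassName Win32_ComputerSystem -ComputerName $ServerName).NumberOfLogicalProcessors
```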

function SetMaxDOP() {
    Try {
        Write-Host "Setting of MaxDOP / Threshold"
        $sqlquery = "
        EXEC sys.sp_configure N'show advanced options', N'1'; RECONFIGURE WITH OVERRIDE;
        EXEC sys.sp_configure N'cost threshold for parallelism', N'40'; RECONFIGURE WITH OVERRIDE;"
        ExecuteSQLCmd $sqlquery

        if ($global:NoLogicalCPUs -le 4) {
            $sqlquery = "EXEC sys.sp_configure N'max degree of parallelism', N'0'; RECONFIGURE WITH OVERRIDE;"
            Write-Host "[INFO] Set Threshold to 40 and Set MaxDOP to 0." -ForegroundColor Green
        } else {
            $sqlquery = "EXEC sys.sp_configure N'max degree of parallelism', N'" + ($global:NoLogicalCPUs / 2) + "'; RECONFIGURE WITH OVERRIDE;"
            Write-Host "[INFO] Set Threshold to 40 and Set MaxDOP to" ($global:NoLogicalCPUs / 2) -ForegroundColor Green
        }
        ExecuteSQLCmd $sqlquery
    }
    Catch {
        Write-Host "[ERROR] Failed to set MaxDOP." -ForegroundColor Red
    }
}

dbatools – the function Set-DbaMaxDop

I have already written several times about the PowerShell module dbatools – for example about copying an agent job – and here I would like to show you the comparison between the conventional way and the easier way with dbatools. dbatools provides a function for simply setting the SQL Server instance parameter for max degree of parallelism, as well as many other instance parameters.

The "Set-DbaMaxDop" command offers a variety of options: simply setting the MaxDOP at the instance level (SQL Server 2008–2016) and, starting with SQL Server 2016, setting the MaxDOP at the database level. The function determines all the basic conditions itself; the algorithm from KB2806535, as well as the MaxDOP Calculator from Sakthivel Chidambaram, is used to calculate the value. Of course – as with my own figures and notes – it must be pointed out that these are recommendations that do not fit 100% of environments and applications, but they give a first clue.

Set-DbaMaxDop -SqlServer SQL2016

dbatools - Set-DbaMaxDop - Set to Best Practices

If you want to specify a value yourself, this is also possible … or use the accompanying test function "Test-DbaMaxDop":

Test-DbaMaxDop -SqlServer SQL2016

dbatools - Test-DbaMaxDop

Or decide for yourself which value is more appropriate for this environment …

Set-DbaMaxDop -SqlServer SQL2016 -MaxDop 6

dbatools - Set-DbaMaxDop - Set to your own Value

Further information can be found on the following pages:

dbatools – the function Set-DbaSpConfigure

Now we have changed not only the value for MaxDOP but also the cost threshold for parallelism. This value, too, can be set very easily with dbatools. Since both are instance parameters, you could customize both settings with your own values using this one command, but the internal calculation makes the use of two different commands sensible. So, to set the value for the threshold to 40, I use "Set-DbaSpConfigure", which should not be unknown to us.

Set-DbaSpConfigure -SqlServer SQL2016 -ConfigName CostThresholdForParallelism -Value 40

dbatools - Set-DbaSpConfigure

Especially with the IntelliSense feature, this function is a joy, since the individual parameters are quickly available and you get the whole command in no time. You will also find further help and examples on the help pages of the function =>

Thus, as opposed to my own script, you can now optimize both instance parameters with only 3 lines of code.

Import-Module .\dbatools\dbatools.psd1

Set-DbaSpConfigure -SqlServer SQL2016 -ConfigName CostThresholdForParallelism -Value 40
Set-DbaMaxDop -SqlServer SQL2016

Pause / Resume / Backup an Azure SQL Database with Powershell – Part 3

Many services in Azure allow certain cost savings through automation; this can also be achieved with the platform-as-a-service "Azure SQL Database", which is what this blog post is all about. It is not quite as simple as with, for example, the Azure Analysis Services, because there is actually no pause/resume functionality – this is where the backup comes into play, which is what I'll talk about first.

Azure SQL Database and the Backup topic

Before we dive deeper into Azure SQL Database and its pause/resume functionality, we first have to take a brief look at the topic of backup, which is not unimportant in this context. A big advantage of Azure SQL Database over a SQL Server (on-premises or IaaS) is that you do not have to explicitly worry about the backup, since it is created automatically. Depending on the database size and the number of data changes, Microsoft automatically backs up the databases at regular intervals! This means the costs for extra backup software, or even for developing your own services, are eliminated. Depending on the performance level, there are different backup retention times:

  • at the basic level the retention time is 7 days.
  • at the standard level the retention time is 35 days.
  • at the premium level the retention time is 35 days.

When will my database be backed up? There is an answer for that, too …
The full backup takes place every week (unfortunately at non-fixed times), differential backups are generally created every few hours, and, depending on the data volume, the transaction logs are backed up every 5-10 minutes. A first full backup is created within the first 30 minutes after the database is created. These backups are kept according to the service tier (see above) – a cost saving, as you do not need an extra storage account because the backup is already included in the price. If you want to keep your backups for more than 35 days, you have the option of activating a long-term retention backup. This requires an additional storage account, where the backups are stored in parallel and permanently.

Backup Azure SQL Database

Pause and Resume for cost savings

Unfortunately, this functionality is not available for Azure SQL Database … so how can I still achieve cost savings if I would like to use this platform-as-a-service? Of course, as already explained in another blog post, by changing the database performance level to handle load peaks. But we would really like to achieve certain cost savings with the migration to the cloud … if a department only works during the day (8am-8pm), then I don't need the database(s) during the night … So why can't you stop those services, given that you only pay while the database is online?

For this scenario – the department needs the database only during the day – there is actually no solution, but there is a workaround: dropping the database in the evening and restoring it the next morning via "Create Database from Backup". Microsoft has implemented this procedure extremely pleasantly, and it does not require a lot of effort.

# Dropping DB to stop costs
Get-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename -ev notPresent -ea 0
if ($notPresent) {
    Write-Host $databasename "already deleted" 
} else {
    Remove-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename
}

Please note that you only delete/remove the database and not the logical server, because the backup history is stored on that logical SQL Server. In order to restore the database, you need the date of the last backup. When you restore the database the following morning, you use this backup time directly to perform the restore. In PowerShell, you can combine these activities very easily.

$deleteddatabase = Get-AzureRmSqlDeletedDatabaseBackup -ResourceGroupName $resourcegroupname -ServerName $servername #-DatabaseName $databasename
# Do not continue until the cmdlet returns information about the deleted database.

Restore-AzureRmSqlDatabase -FromDeletedDatabaseBackup `
    -ResourceGroupName $resourcegroupname `
    -ServerName $servername `
    -TargetDatabaseName $databasename `
    -ResourceId $deleteddatabase.ResourceID `
    -DeletionDate $deleteddatabase.DeletionDate `
    -Edition "Standard" `
    -ServiceObjectiveName "S0"

You can find more sample scripts on backup / restore here =>
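To run this on a schedule (for example from a task scheduler or an Azure Automation runbook), the drop and restore steps above could be wrapped into two small functions – a sketch using only the cmdlets shown above (the function names are my own):

```powershell
# Hypothetical wrappers around the drop/restore steps shown above
function Stop-DemoAzureSqlDatabase {
    param ([string]$ResourceGroupName, [string]$ServerName, [string]$DatabaseName)
    # Removing the database stops the costs; the backup history stays on the logical server
    Remove-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName
}

function Start-DemoAzureSqlDatabase {
    param ([string]$ResourceGroupName, [string]$ServerName, [string]$DatabaseName)
    # Find the deleted-database backup and restore it under its old name
    $deleted = Get-AzureRmSqlDeletedDatabaseBackup -ResourceGroupName $ResourceGroupName -ServerName $ServerName |
        Where-Object DatabaseName -eq $DatabaseName | Select-Object -First 1
    Restore-AzureRmSqlDatabase -FromDeletedDatabaseBackup `
        -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
        -TargetDatabaseName $DatabaseName `
        -ResourceId $deleted.ResourceID -DeletionDate $deleted.DeletionDate `
        -Edition "Standard" -ServiceObjectiveName "S0"
}
```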

Resize an Azure SQL Database with Powershell – Part 2

In the first part of my Azure SQL Database PowerShell automation series I showed how to create the resource group, the logical SQL Server and the corresponding database; now I would like to show you how to resize the performance level of this database.
Why would I want to resize a database? Upwards – i.e. more power – is understandable, but why downsize again? So let me start with the respective reasons for adjusting the selected performance class.

What power do I need when?

Let's take an example … a larger company that rents out workspaces and is only active in Germany has an application for managing/booking its meeting rooms and workplaces. This application is used intensively by the receptionists during "opening hours" and only occasionally by other coworkers outside these times. Basically, the receptionists need their application, and thus the database, only between specific times, for example 7am-8pm; for the rest of the day the database remains almost unused …
So what is more obvious than making this database "faster" only during the day? On-premises this is unfortunately not possible, since you cannot easily assign additional resources to a single database.

Another example is evaluations or processing runs, where operational demands can vary greatly – for example at the end of the month, when more computing power is temporarily needed in the Azure SQL Database so that the data can be processed and made available as fast as possible.

  • Depending on application usage
  • Depending on operational requirements
    • nightly processing
    • monthly financial statements
    • Year-end rally

Azure SQL Database

What do we need for a resize of the Azure SQL Database?

  • A resource group or its name
  • A name for the logical SQL Server (which must be unique)
  • A database name
  • The new performance level (DTU)

I'll leave out the login to Azure and the selection of the subscription to use, and start right away with the actual script.
Again, I start with the definition of the necessary variables (see above):

# Set the resource group name for your server
$resourcegroupname = "RG-AzureSQLDatabase-Demo"
# Set logical server name
$servername = "server-sqldbdemo"
# The sample database name
$databasename = "db-sqldbdemo"
# Set new performance-level
$newdbDTUsize = "S3"

Now we can resize the Azure SQL Database

This is much easier than creating a database from scratch, because we need fewer variables (see above) and only one command for the execution. But since I'm careful, I also check whether the database exists, to ensure that the script doesn't do any "nonsense".

# Resize Azure SQL Database to new performance-level
Get-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename -ev notPresent -ea 0
if ($notPresent) {
    Write-Host $databasename "doesn't exist" 
} else {
    Set-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename -Edition "Standard" -RequestedServiceObjectiveName $newdbDTUsize
}

As the before-and-after comparison shows, a resize is possible without problems and takes only a few moments.
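To check the result from PowerShell instead of the portal, you could query the current service objective before and after the resize – a sketch using the same cmdlet as above:

```powershell
# Show the currently active performance level of the database
(Get-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname `
    -ServerName $servername `
    -DatabaseName $databasename).CurrentServiceObjectiveName
```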

Before-and-after comparison – resize of the Azure SQL Database