Pause / Resume / Backup an Azure SQL Database with Powershell – Part 3

Many services in Azure offer cost savings through automation, and this can also be achieved with the Platform-as-a-Service offering "Azure SQL Database" – which is what this blog post is all about. It is not quite as simple as with, for example, Azure Analysis Services, because there is actually no pause/resume functionality – this is where the backup comes into play, so that is what I'll talk about first.

Azure SQL Database and the Backup topic

Before we dive deeper into Azure SQL Database and its pause/resume workaround, we first have to take a brief look at the topic of backups, which is quite important in this context. A big advantage of Azure SQL Database over a SQL Server (on-premises or IaaS) is that you do not have to explicitly worry about backups, since they are created automatically. Depending on the database size and the number of data changes, Microsoft automatically backs up the databases at regular intervals! This means the costs for extra backup software or even the development of your own services disappear. Depending on the performance level, there are different backup retention times:

  • at the Basic level, the retention time is 7 days.
  • at the Standard level, the retention time is 35 days.
  • at the Premium level, the retention time is 35 days.

When will my database be backed up? There is an answer to that, too ...
The full backup takes place every week (unfortunately at non-fixed times), differential backups are generally created every few hours, and depending on the data volume, the transaction logs are backed up every 5-10 minutes. The first full backup is created within the first 30 minutes after the database is created. These backups are kept according to the service tier (see above), which also saves costs: you do not need an extra storage account because the backup is already included in the price. If you want to keep your backups for more than 35 days, you have the option of activating a "Long-Term Retention" backup. This requires additional storage, where the backups are stored in parallel and kept permanently.
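If you want to check how far back you can currently restore, the database object returned by Get-AzureRmSqlDatabase exposes the earliest available restore point, and a point-in-time restore into a new database can be scripted right away. A minimal sketch, using the variables from the scripts below – the target database name and the chosen point in time are just examples:

# Check the earliest restore point that is currently available for the database
$db = Get-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename
$db.EarliestRestoreDate

# Restore the state of 30 minutes ago into a new database on the same server
Restore-AzureRmSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).ToUniversalTime().AddMinutes(-30) `
    -ResourceGroupName $resourcegroupname `
    -ServerName $servername `
    -TargetDatabaseName "$databasename-restored" `
    -ResourceId $db.ResourceId `
    -Edition "Standard" `
    -ServiceObjectiveName "S0"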

Backup Azure SQL Database

Pause and Resume for cost savings

This functionality is unfortunately not available for Azure SQL Database ... So how can I still achieve cost savings if I want to use this Platform-as-a-Service? Of course, as already explained in another blog post, by changing the database performance level to handle occurring load peaks. But we would really like to achieve further cost savings with the migration to the cloud ... If a department only works during the day (8am-8pm), then I do not need the database(s) during the night ... So why can't you simply stop those services, given that you only pay while the database is online?

For this scenario – the department needs the database only during the day – there is actually no solution, but there is a workaround: dropping the database in the evening and restoring it the next morning via "Create Database from Backup". Microsoft has implemented this procedure extremely conveniently, and it does not require a lot of effort.

# Dropping the database in the evening to stop the costs
Get-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename -ev notPresent -ea 0
if ($notPresent) {
    Write-Host $databasename "already deleted" 
} else {
    # -Force suppresses the confirmation prompt so the script can run unattended
    Remove-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename -Force
}

Please note that you only delete/remove the database and not the logical server, because the backup history is stored on that logical SQL Server. In order to restore the database, you'll need its deletion date. When you restore the database the following morning, you use exactly this information to perform a restore from the deleted-database backup. In Powershell, you can combine these activities very easily.

# Find the deleted-database backups on the server and pick the most recent one for our database
$deleteddatabase = Get-AzureRmSqlDeletedDatabaseBackup -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename |
    Sort-Object -Property DeletionDate -Descending | Select-Object -First 1
$deleteddatabase
# Do not continue until the cmdlet returns information about the deleted database.
 
Restore-AzureRmSqlDatabase -FromDeletedDatabaseBackup `
    -ResourceGroupName $resourcegroupname `
    -ServerName $servername `
    -TargetDatabaseName $databasename `
    -ResourceId $deleteddatabase.ResourceID `
    -DeletionDate $deleteddatabase.DeletionDate `
    -Edition "Standard" `
    -ServiceObjectiveName "S0"
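The comment in the script above hints at a small pitfall when automating this: right after the drop, it can take a moment until the deleted-database backup becomes visible. A minimal sketch of a wait loop – the 30-second interval is an arbitrary choice:

# Wait until the deleted-database backup shows up before starting the restore
do {
    Start-Sleep -Seconds 30
    $deleteddatabase = Get-AzureRmSqlDeletedDatabaseBackup -ResourceGroupName $resourcegroupname `
        -ServerName $servername -DatabaseName $databasename |
        Sort-Object -Property DeletionDate -Descending | Select-Object -First 1
} until ($deleteddatabase)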

You can find more sample scripts for backup/restore here => https://docs.microsoft.com/de-de/azure/sql-database/scripts/sql-database-restore-database-powershell

Björn works as a database administrator and Head of Competence for MS SQL and mySQL in Hamburg (Germany). He regularly participates in the PASS regional group meetings and PASS events such as SQLSaturday and SQLGrillen, and he organizes the Azure Meetup group in Hamburg. Besides his professional interest in SQL Server, Powershell and Azure, he enjoys science fiction, snowboarding, baking and cycling.

Resize an Azure SQL Database with Powershell – Part 2

In the first part of my Azure SQL Database Powershell Automation series I showed how to create the resource group, the logical SQL Server and the corresponding database; in this part I would like to show you how to resize the performance level of this database.
Why would I want to resize a database? Scaling up – getting more power – is easy to understand, but why scale down again? So let me start with the respective reasons for adjusting the selected performance class.

What performance do I need, and when?

Let's take an example ... A larger company that rents out workspaces and is only active in Germany has an application for managing/booking their meeting rooms and workplaces. This application is used intensively by the receptionists during "opening hours" and only occasionally by coworkers outside these times. Basically, the receptionists need their application, and thus the database, only between specific times, for example 7am-8pm; the rest of the day the database remains almost unused ...
So what is more obvious than making this database "faster" only during the day? On-premises this is unfortunately not possible, since you cannot easily assign additional resources to a single database.

Another example is evaluations or processing jobs, where operational demands can vary greatly: at the end of the month, for instance, the Azure SQL Database gets more computing power so that the data can be processed and made available as fast as possible.

  • Depending on application usage (see the small scheduling sketch after this list)
  • Depending on operational requirements
    • nightly processing
    • monthly financial statements
    • Year-end rally
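If the resize script shown further below is triggered by a scheduler (for example an Azure Automation runbook), the target performance level can simply be derived from the time of day. A minimal sketch – the hours and the two performance levels are just example values:

# Scale up during business hours (7am-8pm), scale down for the night
$hour = (Get-Date).Hour
$newdbDTUsize = if ($hour -ge 7 -and $hour -lt 20) { "S3" } else { "S0" }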

Azure SQL Database

What do we need to resize an Azure SQL Database

  • A resource group or its name
  • A name for the logical SQL Server (which must be unique)
  • A database name
  • The new performance level (DTU)

I'll leave out the login to Azure and the selection of the subscription to use here, and start right with the actual script.
Again, I start with the definition of the necessary variables (see the list above):

# Set the resource group name for your server
$resourcegroupname = "RG-AzureSQLDatabase-Demo"
# Set logical server name
$servername = "server-sqldbdemo"
# The sample database name
$databasename = "db-sqldbdemo"
# Set new performance-level
$newdbDTUsize = "S3"

Now we can resize the Azure SQL Database

This is much easier than creating a database from scratch, because we need fewer variables (see above) and only one command for the execution. But since I'm careful, I also check whether the database exists, to ensure that the script does not do any "nonsense".

# Resize Azure SQL Database to new performance-level
Get-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename -ev notPresent -ea 0
if ($notPresent) {
    Write-Host $databasename "doesn't exist" 
} else {
    Set-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname `
        -ServerName $servername `
        -DatabaseName $databasename `
        -Edition "Standard" `
        -RequestedServiceObjectiveName $newdbDTUsize
}

As the before-and-after comparison shows, a resize works without problems and takes only a few moments.
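If you want to verify the result within the script instead of the portal, the database object exposes the currently active performance level. A quick check, using the variables from above:

# Show the currently active performance level of the database
(Get-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename).CurrentServiceObjectiveName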

Before-and-after comparison: resizing an Azure SQL Database


First steps with Azure SQL Database and Powershell – Part 1

In addition to the ability to deploy a SQL Server on a virtual machine in Azure, there is also the "simpler" method of deploying only one database on demand => the Azure SQL Database. This database-as-a-service is very good and easy to use for a variety of applications, although there are a lot of options you should think about beforehand.

  • Is a single database enough,
  • or rather an elastic pool?
  • In which region should it be deployed,
  • or should it even be geo-redundant?
  • Is the built-in backup sufficient,
  • or do you need a "Long-Term Retention" backup?
  • Who accesses the database, and from where?
  • What performance level do I need for the database or the elastic pool?

An overview of the individual options and how they fit the respective solution, as well as answers to many questions, can be found in Microsoft's documentation => https://docs.microsoft.com/en-us/azure/sql-database/

Start with a simple Azure SQL Database

An important point when provisioning an Azure SQL Database is that you not only need the database itself but also a logical SQL Server (=> the listener endpoint); without this "shell" you cannot host databases.

In my example, we use Powershell to create a default database – of course using a script – so that we can create the server and its database, or just the database, in the same standardized way whenever needed.

What do we need to create an Azure SQL Database

  • a resource group or its name
  • a location/region for the resource group
  • admin-username and password
  • a name for the logical SQL Server (which must be unique)
  • a database name
  • typically also the IP addresses/ranges that are allowed to access it (your own IP suffices for the first access)

I'll leave out the login to Azure and the selection of the subscription to use here, and start right with the actual script. My first script starts with the definition of various variables (see the list above):

# Set the resource group name and location for your server
$resourcegroupname = "RG-AzureSQLDatabase-Demo"
$location = "west europe"
# Set an admin login and password for your server
$adminlogin = "dbadmin"
$password = "DemoPwd@2017"
# Set server name - the logical server name has to be unique in the system
$servername = "server-sqldbdemo"
# The sample database name
$databasename = "db-sqldbdemo"
# The ip address range that you want to allow to access your server
# (determined via an external service; Trim removes a possible trailing newline)
$clientIP = (Invoke-WebRequest -Uri "http://ifconfig.me/ip").Content.Trim()
$startip = $clientIP
$endip = $clientIP

Create the logical server and the Azure SQL Database using sample data

Now – as in almost all my scripts – first comes the check whether the resource group already exists; if not, it is created. After the resource group comes the logical server, into which we can finally integrate our Azure SQL Database. We assign the credentials from adminlogin and password to the logical server so that the server and its databases are protected. Speaking of protection: we need to open the firewall of the server for external access, for which I determined my own public IP address via an external service (see above). With this IP address we now configure the firewall of the logical SQL Server, and finally we check whether the desired database already exists; if not, it is created with the desired parameters.

# Create a resource group
Get-AzureRmResourceGroup -Name $resourcegroupname -ev notPresent -ea 0
if ($notPresent) {
  $resourcegroup = New-AzureRmResourceGroup -Name $resourcegroupname -Location $location
} else {
  Write-Host $resourcegroupname "already exists"
}
 
# Create a server with a system wide unique server name
Get-AzureRmSqlServer -ResourceGroupName $resourcegroupname -ServerName $servername -ev notPresent -ea 0
if ($notPresent) {
  $server = New-AzureRmSqlServer -ResourceGroupName $resourcegroupname `
  -ServerName $servername `
  -Location $location `
  -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $adminlogin, $(ConvertTo-SecureString -String $password -AsPlainText -Force))
} else {
  Write-Host $servername "already exists"
}
 
# Create a server firewall rule that allows access from the specified IP range
Get-AzureRmSqlServerFirewallRule -ResourceGroupName $resourcegroupname -ServerName $servername -FirewallRuleName "AllowedIPs" -ev notPresent -ea 0
if ($notPresent) {
  $serverfirewallrule = New-AzureRmSqlServerFirewallRule -ResourceGroupName $resourcegroupname `
  -ServerName $servername `
  -FirewallRuleName "AllowedIPs" -StartIpAddress $startip -EndIpAddress $endip
} else {
  Write-Host "FirewallRule already exists"
}
 
# Create a blank database with an S0 performance level
Get-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename -ev notPresent -ea 0
if ($notPresent) {
  $database = New-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname `
  -ServerName $servername `
  -DatabaseName $databasename `
  -RequestedServiceObjectiveName "S0"
} else {
  Write-Host "Database" $databasename "already exists"
}

Now we can use the SQL Server tools from our workstation – e.g. SQL Server Management Studio – to connect to this database, link the application and run some tests.
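Instead of (or in addition to) Management Studio, you can also run a first quick test directly from Powershell. A minimal sketch, assuming the SqlServer module is installed and your client IP is covered by the firewall rule created above:

# Quick connectivity test against the new database
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "$servername.database.windows.net" `
    -Database $databasename `
    -Username $adminlogin `
    -Password $password `
    -Query "SELECT @@VERSION AS SqlVersion"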

Sample Access - SSMS to Azure SQL Database


dbatools – Migration of SQL Agent Backup Jobs

You can think what you want, but sometimes you have to jump over your own shadow ... Usually I'm an advocate of SQL Server's own backup, whether it's backup to disk, backup to URL or backup to network share, but here I had to give in and set up the backup using a 3rd-party tool. Said and done ...

At this customer, we had already switched everything to 3rd-party backup – not backups initiated from the backup server, but SQL Server-initiated ones: the SQL Server Agent starts the backup as a command-line call from within an Agent job. For this, I had to copy the backup jobs from an existing server, as well as all related SQL Server objects. I started by scripting the objects via "CREATE To => New Query", but received the error message that the mail component or operator does not exist or is not configured on the destination side. So I first had to migrate the complete Database Mail configuration and take over the operators – and being already at the mail configuration, I could also migrate the SQL Agent alerts recommended by Brent Ozar.

dbatools - THE powershell modules for all DBAs

The dbatools Powershell commands used

Just think about how to migrate the configuration of Database Mail, operators and custom alerts from one SQL Server to another without much effort – I mean with SQL Server built-in methods ... It is not really much effort, but still a little more than 5 minutes. So what could be more natural than taking a closer look at the dbatools ...
With the command "Copy-SqlDatabaseMail" you can migrate the Database Mail configuration, with "Copy-SqlOperator" you take over all operators, continue with "Copy-SqlAlert" and finally copy the jobs with "Copy-SqlJob". It is not that much work 😉

Copy-SqlDatabaseMail -Source SQLServer01 -Destination SQLServer02
Copy-SqlOperator -Source SQLServer01 -Destination SQLServer02
Copy-SqlAlert -Source SQLServer01 -Destination SQLServer02
Copy-SqlJob -Source SQLServer01 -Destination SQLServer02 -Jobs "Full-Backup", "TLog-Backup"

So only 4 lines of Powershell code to ease the work – thanks also to the unification of parameters within dbatools, which makes parameterizing the individual commands very simple – a quick and uncomplicated way to do such a migration. I can only recommend the use of dbatools to every DBA!
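A small note for readers on a current dbatools release: the commands above have since been renamed. The same migration with the newer command names would look like this (a sketch – check Get-Command in your dbatools version for the exact names and parameters):

# Equivalent migration with the renamed dbatools commands (dbatools 1.0 and later)
Copy-DbaDbMail        -Source SQLServer01 -Destination SQLServer02
Copy-DbaAgentOperator -Source SQLServer01 -Destination SQLServer02
Copy-DbaAgentAlert    -Source SQLServer01 -Destination SQLServer02
Copy-DbaAgentJob      -Source SQLServer01 -Destination SQLServer02 -Job "Full-Backup", "TLog-Backup"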


TempDB-Performance-Tuning out of the box – T-SQL Tuesday #87


T-SQL Tuesday is a recurring blog series launched by Adam Machanic (b | t). Every month a blogger hosts a topic around SQL Server, and anyone can write a blog post on that particular topic.

This month Matt Gordon (b | t) is our host, and the topic is new features in SQL Server (starting with the 2014 version) that fix old issues – see the details in his announcement post.

My topic for this TSQL2sDay is the new possibility to configure the TempDB as part of the SQL Server installation.

A much easier implementation of the best practices for TempDB

Previously, after you finished a SQL Server installation, you had to implement the best practice recommendations from Microsoft in a more or less complicated way. Either you approached the topic conventionally and adjusted the values manually via SQL Server Management Studio, or you made it a little easier by automating this with a T-SQL or Powershell script. But still, you had to apply these configuration adjustments after every installation to achieve an optimized TempDB performance.

Essentially, TempDB performance starts with the hardware configuration, in this case the physical separation of user databases and TempDB by moving TempDB onto high-performance storage. Microsoft recommends the strict separation of user databases and TempDB, not only through separate folder structures or separate partitions, but with separate (fast) disks. There is no need for an elaborate RAID configuration; a RAID 1 or RAID 10 would be nice, but is not a real need, because TempDB is always re-created when SQL Server restarts, so it would not be a "drama" if it were corrupted or the disks below crashed. (OK, here I ignore the possible loss of data and time.)

Overview of Best Practice recommendations

Not that many adjustments are necessary to get better TempDB performance; I would like to briefly summarize them:

  • Physical separation of user databases and TempDB
  • Splitting the TempDB into individual data files according to the number of processors
  • Autogrowth parameter adjustments
  • Trace flag depending on usage / requirement

Once this physical optimization has been implemented in the storage area, you can make the logical adjustments in the configuration. Depending on the processor/core count, Microsoft recommends splitting TempDB into several separate data files instead of one large data file, in order to optimize the parallelization of the IO streams. Likewise, depending on the application or requirements, the data files can initially be set to a fixed size so that the internal structures of the data files are optimized and all files are the same size (similar to trace flags 1117/1118). Otherwise, the autogrowth parameter should be set to 10% so that the data files, in conjunction with trace flags 1117 and 1118, contribute to an optimized, uniform structure of the TempDB.
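For servers below SQL Server 2016, these logical adjustments can be scripted once and reused on every installation. A minimal Powershell sketch – the instance name, path, sizes and the cap at 8 files are example assumptions, not fixed rules:

# Align the number of TempDB data files with the logical processor count (capped at 8)
# and give all files the same initial size and growth - run once per instance
Import-Module SqlServer

$instance  = "SQLServer01"   # example instance name
$dataPath  = "T:\TempDB"     # example: dedicated fast disk for TempDB
$cores     = (Get-CimInstance -ClassName Win32_ComputerSystem).NumberOfLogicalProcessors
$fileCount = [Math]::Min($cores, 8)

# Resize the existing primary data file ...
Invoke-Sqlcmd -ServerInstance $instance -Query `
    "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);"

# ... and add the remaining files with identical settings
for ($i = 2; $i -le $fileCount; $i++) {
    Invoke-Sqlcmd -ServerInstance $instance -Query `
        "ALTER DATABASE tempdb ADD FILE (NAME = tempdev$i, FILENAME = N'$dataPath\tempdev$i.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);"
}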

SQLServer2016 - Configure TempDB during Installation

SQL Server 2016 brings integrated simplifications

SQL Server 2016 now brings this whole logical optimization right into the installation process ... Now you can tell SQL Server, within the installation steps, the number and distribution of the TempDB data files and their autogrowth parameters. Microsoft has finally integrated its long-published best practice recommendations into the installation process. So you only have to provide the hardware prerequisites for TempDB and can start installing.
OK, you still have to think about which values you want to set, but at least you do not have to worry about these configurations afterward!

THANKS to Microsoft for this improvement.

During the installation process, you can set how many data files you want to create, their initial size and the autogrowth parameter. Theoretically, you are even able to specify a separate path for each individual data file (division by the number of cores) or even a separate (SSD) disk. Everything is displayed on a clear screen and can be used even by an inexperienced DBA.
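By the way, the same TempDB options can also be supplied to an unattended installation. A sketch of the relevant setup.exe switches – all values are examples, and the remaining required setup parameters are reduced to a minimum here:

# Unattended SQL Server 2016 setup with TempDB configured up front (example values)
.\setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=MSSQLSERVER `
    /IACCEPTSQLSERVERLICENSETERMS `
    /SQLSYSADMINACCOUNTS="DOMAIN\DBA-Team" `
    /SQLTEMPDBFILECOUNT=4 /SQLTEMPDBFILESIZE=1024 /SQLTEMPDBFILEGROWTH=256 `
    /SQLTEMPDBDIR="T:\TempDB" /SQLTEMPDBLOGDIR="T:\TempDBLog"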

Thanks to this new "toy", some old "problems" can be reduced ... 😉
