
SQL Server 2016 Stretch Database

One of the new options in SQL Server 2016 is to store cold data in Microsoft Azure. The idea behind it is that doing so could be far less expensive than storing the data locally. It sounds great, but the reality is a bit different. In this blog post I will share my experiences with this feature.

Stretch Database, or the first steps towards a "distributed database"
Looking closer at Stretch Database, the feature is essentially the separation of the data in a table across multiple servers. There is a main server and a back-end server that each hold part of the data in your table. The back-end server currently has to be in the Azure cloud; the front end can be in the Azure cloud too. When a query is fired against the table, the front-end and back-end servers work together to return the data.

The MSDN Library shares an interesting picture:
What this picture shows is that a SQL Server database is created in the Azure cloud that stores all or part of the cold data. The data in a table can be split using a filter such as "if a date column is before 1/1/2016, move the row to the cold data server in the Azure cloud".
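For illustration, such a filter is expressed in T-SQL as an inline table-valued function with schema binding that is passed to the table as a filter predicate. The table and column names below (dbo.Orders, OrderDate) are placeholders I made up for this sketch, so treat it as an outline rather than a ready-to-run script:

-- Hypothetical example: only rows with an OrderDate before 1/1/2016 are eligible for migration.
CREATE FUNCTION dbo.fn_stretchpredicate (@OrderDate datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @OrderDate < CONVERT(datetime, '1/1/2016', 101);
GO

-- Attach the predicate to the table and start moving the matching rows to Azure.
ALTER TABLE dbo.Orders
    SET (REMOTE_DATA_ARCHIVE = ON (
            FILTER_PREDICATE = dbo.fn_stretchpredicate(OrderDate),
            MIGRATION_STATE = OUTBOUND));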

This process looks a bit like a user-friendly version of hash-distributing tables on the Microsoft Analytics Platform System. The differences are that this solution uses regular Enterprise SQL Server instances, that it is limited to two servers, and that reference tables are not stored distributed in the Azure cloud as well. This means that when a query filters on fields that are not indexed, the Azure cloud side has to send all the data back so the main server can decide whether a row belongs in the query results or not. Moving that data means the connection between the two servers becomes a bottleneck.

There are also some limitations: 
  • Uniqueness cannot be enforced on stretched tables in the Azure cloud
  • INSERT, UPDATE and DELETE operations are not allowed (read-only data)
  • Index filters are not propagated to the Azure cloud table
  • No tables that contain FILESTREAM data
  • Maximum of 1,023 columns and 998 indexes
  • No memory-optimized tables
  • No replicated tables
  • No ntext, text, image, timestamp, sql_variant, XML and CLR data types, including geometry, geography, hierarchyid, and CLR user-defined types
  • No default and check constraints
  • No computed columns
  • No foreign key constraints that reference the table
  • No full-text indexes or indexed views
I did a little test and moved a relatively small table - 11 million rows - partially into the Azure cloud. The process was guided by a nice wizard, but it can also be done using T-SQL.
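For completeness, the T-SQL route looks roughly like this. The server name, credential, database and table names are placeholders, so read it as a sketch of the documented steps rather than a copy-paste script:

-- Enable the feature on the instance.
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
GO

-- A database master key and a credential for the Azure SQL server are required.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE DATABASE SCOPED CREDENTIAL AzureStretchCred
    WITH IDENTITY = '<azure sql login>', SECRET = '<password>';
GO

-- Enable Stretch on the database, pointing it to the Azure SQL server.
ALTER DATABASE MyColdDataDB
    SET REMOTE_DATA_ARCHIVE = ON
        (SERVER = N'<yourserver>.database.windows.net',
         CREDENTIAL = [AzureStretchCred]);
GO

-- Enable Stretch on the table itself, optionally with a filter predicate as shown earlier.
ALTER TABLE dbo.Orders
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));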

At the end the SQL Server wizard had created an Azure SQL Server database with a performance level of 400 DTU. The wizard does not allow you to choose your own performance level; it picks a performance level close to your local environment, which in my case was a 4-core VM with 8 GB of memory and multiple disks on Premium Storage. The total size of the part of the table that was moved into the Azure cloud was 4,19 GB. As I stated before, it was just a test.
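If you want to follow what the migration is actually doing, and how many rows have already been shipped to Azure, the dynamic management view sys.dm_db_rda_migration_status reports the migrated batches per table. A minimal check could look like the query below; verify the exact column list against your build, as this is only a sketch:

-- Show the most recent migration batches for stretched tables in this database.
SELECT TOP (20)
       table_id,
       migrated_rows,
       start_time_utc,
       end_time_utc,
       error_number
FROM sys.dm_db_rda_migration_status
ORDER BY end_time_utc DESC;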

In my opinion the option to choose a performance level should be added to the Stretch Database wizard as soon as possible. Why? Because in a single night the 400 DTU database consumed €179,22 of my €200 budget while doing nothing but storing 4,19 GB of data that was not queried at all. Oops!


That was also the moment that I started to look up the pricing of a Stretch Database solution in Azure (too late, of course) and I was blown off my chair.

Holy crap!  Check out the current prices: 

  

The SQL Server Stretch Database wizard has chosen a configuration for me that will cost €6.274,15 per month. And that is for only 400 DTU.

Choose 2000 DTU and it will cost you €31.370,76 per month.

And that is only the server. On top of that you have to add the outbound bandwidth for the data going back to your local server whenever someone fires a query on your cold data in Azure.

Oops again!

My advice: if you really have a problem with a large database and no more disk space available, consider an on-premises SQL Server reference architecture made by ATOS. It is a high-performing and scalable SQL Server architecture that will cost far less than choosing a SQL Server Stretch Database in the Azure cloud.



