(You must have the AdventureWorks2012 sample database installed. Download it from codeplex.com.)
This is one of the cool new features of Microsoft SQL Server 2012, and if you are a developer, you will love it.
Earlier, when you wanted to create e.g. a website with a grid that supported paging, you had to rely on either the .NET components or complex SQL if you wanted to show e.g. 50 rows per page and your table had e.g. 2000 rows.
With the new OFFSET and FETCH, you can write readable, easy-to-understand SQL (even for all the .NET guys out there).
So let's see a simple example. You want to retrieve 50 rows from the Person table in the Person schema. You want to order by last name, first name and middle name, and you want to retrieve rows 51 to 100.
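The original listing for this query was shown as an image; a sketch of what it described could look like this (the column names are assumptions based on the AdventureWorks2012 Person.Person table):

```sql
-- Retrieve rows 51-100, ordered by last, first and middle name.
-- COALESCE handles NULL middle names in the sort.
SELECT  LastName, FirstName, MiddleName
FROM    Person.Person
ORDER BY LastName, FirstName, COALESCE(MiddleName, '')
OFFSET  50 ROWS           -- skip the first 50 rows
FETCH NEXT 50 ROWS ONLY;  -- then return the next 50 (rows 51-100)
```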
The OFFSET and FETCH clause at the end of the query does the trick for you, and basically you say:
- I want you to start at offset 50 (0 is also an offset, so offset 50 will be record 51).
- You want to fetch the next 50 rows.
If you want to read the full documentation, go to Books Online and read it there.
But what about performance? Two important things we can look at are the execution plan and the page reads. The execution plan looks like this:
As you can see, we do an Index Scan, a Compute Scalar (of no interest here, because it just relates to the COALESCE that I use) and a Top.
From the execution plan I can see that the cost is 0.0037976.
If we look at the pages read, it looks like this:
As you can see, there is 1 scan and 3 logical reads.
What if we did it the old-fashioned way? Then the code would look something like this (an example):
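The original CTE listing was also shown as an image; a sketch of the pre-2012 paging pattern being described could look like this (again, the column names are assumptions based on the AdventureWorks2012 Person.Person table):

```sql
-- The pre-2012 paging pattern: number the rows with ROW_NUMBER()
-- in a common table expression, then filter on the row number.
WITH NumberedPersons AS
(
    SELECT  LastName, FirstName, MiddleName,
            ROW_NUMBER() OVER
                (ORDER BY LastName, FirstName,
                          COALESCE(MiddleName, '')) AS RowNo
    FROM    Person.Person
)
SELECT  LastName, FirstName, MiddleName
FROM    NumberedPersons
WHERE   RowNo BETWEEN 51 AND 100  -- rows 51-100, i.e. the second page of 50
ORDER BY RowNo;
```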
As you can see, we use a common table expression, and the code is much more complex. For a SQL guy it is easy to read, but why make it difficult when we can make it simple? Let's look at the execution plan for this query:
It's much more complex, and the cost is a little bit higher: 0.0038956.
Reads are the same.
Conclusion: I would recommend that you use the new OFFSET and FETCH instead of common table expressions; it's easier, and in many cases performance will be better. If performance is the key issue, make sure that you look into all the alternatives AND that you measure instead of guessing. In some situations it might be better to load the complete dataset into the application and then do the filtering within the application code. It all depends.