Wow – The trick with ‘In Recovery’

Today has been one of those days where everything went wrong, but after solving all the problems together with my customer, I feel like an unbeatable SQL guy 🙂

But the best trick of the day is actually one that I have stolen from the genius who runs this website:

http://timlaqua.com/2009/09/determining-how-long-a-database-will-be-in-recovery-sql-server-2008/

We had a database with more than 600 GB of data, and after we got the server up and running again, the database went into ‘In Recovery’. That was what we expected, but the problem was that we did not have a clue how long it would take (translated to pizza: we didn’t know whether we could drive to the pizzeria and eat there, or whether we had to order the pizzas delivered while we finished the work).

I then googled and found the script below, and I would like to share it with all of you.

So thanks to Tim Laqua; I hope you will all visit his blog.

-- Name of the database that is 'In Recovery' (change this to your own database name)
DECLARE @DBName VARCHAR(64) = 'databasename'

DECLARE @ErrorLog AS TABLE([LogDate] CHAR(24), [ProcessInfo] VARCHAR(64), [TEXT] VARCHAR(MAX))

-- Read the current SQL Server error log (file 0, log type 1) and keep only
-- the recovery messages for the database
INSERT INTO @ErrorLog
EXEC sys.xp_readerrorlog 0, 1, 'Recovery of database', @DBName

-- Parse the latest messages: percent complete and the estimated time remaining
SELECT TOP 5
     [LogDate]
    ,SUBSTRING([TEXT], CHARINDEX(') is ', [TEXT]) + 4, CHARINDEX(' complete (', [TEXT]) - CHARINDEX(') is ', [TEXT]) - 4) AS PercentComplete
    ,CAST(SUBSTRING([TEXT], CHARINDEX('approximately', [TEXT]) + 13, CHARINDEX(' seconds remain', [TEXT]) - CHARINDEX('approximately', [TEXT]) - 13) AS FLOAT)/60.0 AS MinutesRemaining
    ,CAST(SUBSTRING([TEXT], CHARINDEX('approximately', [TEXT]) + 13, CHARINDEX(' seconds remain', [TEXT]) - CHARINDEX('approximately', [TEXT]) - 13) AS FLOAT)/60.0/60.0 AS HoursRemaining
    ,[TEXT]
FROM @ErrorLog
ORDER BY [LogDate] DESC

 

SQL Server MVP – Deep Dive Volume 1

This is a crappy book, but you should buy it anyway… It sounds strange, but please read the review below and feel free to disagree with me.

I bought this book because I like the idea of having a lot of small but interesting articles gathered in one book. Some of the articles are very good and you can learn a lot from them, and some of the articles are really bad.

“Deep dives” means, as I see it, not an introduction but an actual deep dive. If we look at the chapter about full-text search, the big question is: what does it have to do with a deep dive? It is a good introduction, but it is not what I would expect from a book that calls itself a deep dive.

On the other hand, there is an interesting article about Gaps and Islands, and that one really is a deep dive. It expects that you already know a lot about SQL Server and programming, and that you now want to be introduced to a problem that has caused a lot of trouble for a lot of developers over time.

So why do I think it is crappy? Because the level of the content is, on average, not a deep dive; in some cases it is a beginner's guide to a feature. That said, the quality of the articles is high, and that is why I still recommend that you buy the book.

 

Søren

 

 

10 steps to better performance

Hi,

I have decided to write 10 short blog posts about performance improvement from a practical point of view. These 10 posts are not the only answer to performance issues, but they are practical answers to a lot of the problems that many users and companies experience.

The 10 posts will be:

  1. Forget about indexing, let's work with execution plans instead. A lot of users tend to be too focused on indexing. Indexing can be a great thing, but if you don't understand the nature of your queries and what causes the problems, then you can't do proper indexing. Don't misunderstand me… indexing is very IMPORTANT, but it is not everything.
  2. Let's start profiling. Execution plans show how a query is executed, and profiling shows what is executed. Both are important, and if you master profiling and execution plans, then you are well ahead with performance tuning (see the small sketch right after this list).
  3. Database properties. This one is simple, but basically it's about how to set up your database to get the best performance out of it, or at least to avoid degrading performance.
  4. Server configuration. We won't look at everything you need to know about server configuration, but we will talk about the small things that can improve performance without a lot of costly changes.
  5. Indexes. Let's discuss indexes, what types of indexes you can use and when to use them.
  6. Statistics. What statistics are and how they can help improve performance.
  7. Concurrency analysis. How do we handle multiple concurrent users?
  8. Code analysis and rewriting. A lot of developers write a lot of crap code, and in this post I will try to show what is important when you want to take that fight with the developers.
  9. The worst tool: the Database Engine Tuning Advisor. But we need to know how to use it, and what we can use it for.
  10. Data types and common mistakes.
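
As a small teaser for the first two posts, here is a minimal sketch of how I usually start measuring a query (the table name below is only a placeholder, use one of your own):

SET STATISTICS IO ON    -- show logical reads per table
SET STATISTICS TIME ON  -- show CPU time and elapsed time

-- In Management Studio, press Ctrl+M (Include Actual Execution Plan)
-- before you run the query that you want to analyse, for example:
SELECT *
FROM dbo.SomeLargeTable -- placeholder table name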

I hope that you will join me for these posts, and you are also welcome to comment on them.

Indexed views

Hi,
(You need the AdventureWorksDW2008R2 database for this post.)

One of the interesting things about SQL Server is how to get the best performance when you have complex views. Imagine a view with the following definition:

— CODE SNIPPET START —

CREATE VIEW IndexedTest
AS
SELECT t1.SalesOrderNumber, t1.SalesOrderLineNumber, t3.FirstName, t2.EnglishProductName, t1.OrderDateKey, t1.OrderQuantity
FROM dbo.FactInternetSales AS t1
        INNER JOIN dbo.DimProduct AS t2 ON t1.ProductKey = t2.ProductKey
        INNER JOIN dbo.DimCustomer AS t3 ON t1.CustomerKey = t3.CustomerKey

— CODE SNIPPET END —

If you run this code:

— CODE SNIPPET START —

SET STATISTICS IO ON

SELECT *
FROM indexedTest

— CODE SNIPPET END —

 

then you will see that the query has to do 1,030 reads in FactInternetSales, 255 in DimProduct and 978 in DimCustomer. That might be OK, but what if we could do better? We can, by creating an indexed view. Automatic use of indexed views is an Enterprise edition feature, but if you know your SQL Server you can also benefit from them in a Standard edition of SQL Server 2008.

We need to alter the view so that it is schema bound (a requirement before we can index it), and we do that with the following code:

— CODE SNIPPET START —

ALTER VIEW IndexedTest
WITH SCHEMABINDING
AS
SELECT t1.SalesOrderNumber, t1.SalesOrderLineNumber, t3.FirstName, t2.EnglishProductName, t1.OrderDateKey, t1.OrderQuantity
FROM dbo.FactInternetSales AS t1
        INNER JOIN dbo.DimProduct AS t2 ON t1.ProductKey = t2.ProductKey
        INNER JOIN dbo.DimCustomer AS t3 ON t1.CustomerKey = t3.CustomerKey

— CODE SNIPPET END —

Now we can add a unique clustered index to the view with the following code:

— CODE SNIPPET START —

USE [AdventureWorksDW2008R2]
go
CREATE UNIQUE CLUSTERED INDEX [OrderNoOrderLineNo] ON [dbo].[IndexedTest]
(
    [SalesOrderNumber] ASC,
    [SalesOrderLineNumber] ASC
)

— CODE SNIPPET END —

Now we are ready to use the indexed view, and when we write our code to select from the view, we just need to add a table hint:

— CODE SNIPPET START —

SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT *
FROM indexedtest WITH (NOEXPAND)

— CODE SNIPPET END —

And now, instead of having to read approximately 2,250 pages, we only need to read 669 pages. That is much better, and faster.

If we look at the execution plans, we can use the nice feature of SQL Server Management Studio to compare two queries: simply write both queries in the same window and include the actual execution plan.
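
As a minimal sketch (assuming the view and the clustered index from above are in place), the comparison batch could look like this:

— CODE SNIPPET START —

SET STATISTICS IO ON

-- Query 1: the view treated as a normal view (expanded to the base tables)
SELECT *
FROM IndexedTest

-- Query 2: the same view, forced to use the index on the view
SELECT *
FROM IndexedTest WITH (NOEXPAND)

— CODE SNIPPET END —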

As you can see, the execution plan where we use the indexed view is much simpler and faster than the one without (about 15% of the total execution time).

What if we want to search on the product name? Well, then we can add a nonclustered index to the indexed view (even though it will not be a good index). So if we want to get the best “select performance” from the following query, without thinking about space or write performance:

— CODE SNIPPET START —

SELECT *
FROM indexedtest WITH (NOEXPAND)
WHERE EnglishProductName LIKE 'Road%'

— CODE SNIPPET END —

 

Then we could try to add the following index:

— CODE SNIPPET START —

CREATE NONCLUSTERED INDEX [ProductName]
ON [dbo].[IndexedTest] ([EnglishProductName])
INCLUDE ([SalesOrderNumber],[SalesOrderLineNumber],[FirstName],[OrderDateKey],[OrderQuantity])
— CODE SNIPPET END —

If we then compare the SELECT statement to the one without the table hint (NOEXPAND), we will see that it only uses about 4% of the time, which means that it is incredibly much faster!
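
Again, as a small sketch (assuming the nonclustered index above has been created), the two queries can be compared in one batch:

— CODE SNIPPET START —

SET STATISTICS IO ON

-- Without the table hint the view is expanded and the base tables are joined
SELECT *
FROM indexedtest
WHERE EnglishProductName LIKE 'Road%'

-- With NOEXPAND the nonclustered index on the view can be used
SELECT *
FROM indexedtest WITH (NOEXPAND)
WHERE EnglishProductName LIKE 'Road%'

— CODE SNIPPET END —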

I can't say whether indexed views are the right choice for you in your search for better performance, but they are worth looking at. BUT!!! There are a lot of limitations, and I suggest that you read the following article from Microsoft before you start working with indexed views:

http://msdn.microsoft.com/en-us/library/dd171921%28v=sql.100%29.aspx

Soren