Posted by: Fatherjack | 22/Nov/09

SQLBits round-up (The free bit)

OK, so I couldn't get to any of SQLBits other than the free Saturday. Yup, they do a whole day for free; all you need do is register in time and then turn up. This was the fifth event, held at the Celtic Manor Hotel, Newport, Wales. A very impressive hotel, with plenty of staff clearing away all of the debris that 300+ DBAs generate while they eat bacon/egg/sausage baps, tea/coffee, cookies, croissants, fruit cake and assorted packed lunches (I did mention this was free, didn't I?).

Putting the excellent facilities to one side for a while: there were five rooms where sessions were taking place and each room had five sessions through the day, so my primary school maths tells me there were 25 sessions to select from. Well, 30 actually, because the lunch break had sponsored sessions in each of the rooms too. The full listing is here and I had a lot of awkward decisions to make on what I wanted to see.

In the end I elected to go for a performance/tuning flavoured day so, in order of attendance, I went to
>”When a query plan goes wrong” – Simon Sabin
>”Let’s Make SQL fly – a session for developers and administrators who believe in magic” – R Meyyappan
>”Using Perfmon and Profiler” – Brent Ozar
>Quest sponsored lunch showing what to monitor and how their products Foglight and Spotlight can help – Brent Ozar
>”Vital Statistics” – Christian Bolton
>”SQL Server Optimisation stuff you won’t find on Google (yet)” – Andre Kamman

Simon's session got things off to a great start, looking at how query plans affect the performance of queries, and at how the plan chosen isn't necessarily the one that should be used, so that SQL Server needs a hand to make a different choice. Simon carried the topic very well, demonstrating his points clearly and interacting with the audience consistently. I was handicapped in this session as the goodie bag I received didn't have a pen in it, so I couldn't take any notes.

After a short break, the second session demonstrated a curious effect: the number of rows requested in a query affects its duration. Well, obviously it does, but the example had a difference of one row and a difference in duration from 25ms to 900ms! It turns out it's all to do with the data type and the SQL engine using tempdb unnecessarily. Applying the fix, Ramesh showed that there was no reason to accept the slower performance. As he stressed, find out what the problem is and then the solution is easy! Contact Ramesh at

Far and away the most enthusiastic presenter of the day, Brent Ozar took us through how to use free tools that are already on every server to create baselines of current performance, how to compare these over time to measure the effects of system changes, and, when something spikes, what the causes may be. It's all on his blog at

Brent jumped out of this session and straight into the sponsored lunch session, telling us how Quest tools can help take the hard work out of monitoring server performance, go some way to pinpointing the actual query that is causing a problem, and suggest what to do to solve it. Foglight and Spotlight are the tools they produce to help the DBA make sure their servers are performing as well as they can.

Christian Bolton took on the topic of SQL Server table statistics in the first session after lunch and explained what part statistics play in the efficiency of SQL queries, how to maintain up to date statistics and how different versions of SQL Server have different statistics. We also got an insight on how the “cost” value in execution plans is calculated and what it means.

The final session I attended was by Andre Kamman and Henk van der Valk (Unisys), showing how SQL Server 2008 scales up when pushed. On a system with 96 cores and 512GB RAM they achieved some truly staggering data transfer speeds handling an ETL process. SSIS outperformed bcp, BULK INSERT and all the other alternatives. There were some nice tweaks that they shared with us; setting the packet size, for example, gives a huge boost to transfer speed. Sadly this session was marred by the absence of a working internet connection, and the live examples were replaced with video from previous events.

A lot of attendees gathered for the Red Gate sponsored drinks reception and games in the bar afterwards. There was a quick prize-giving for the various competitions that had been taking place at the vendors' stands through the day. The lucky winners got prizes ranging from software licences to webcams, and one person won an Xbox. The driving and football games were popular, as was the Rock Band session in one of the meeting rooms, while others simply had a drink and a chat until dinner. Sadly I had to leave to drive home, but I will certainly be making the effort to attend next time, and hopefully for more than just one day.

I would like to thank all of those who gave up their time and talent to make this conference happen; you are a credit to the industry.

Did I mention this was all free?

Posted by: Fatherjack | 19/Nov/09

PowerShell vs LogParser

The scenario:

We have an application server (IIS) that is running slowly. The database shows no signs of stress, so I want to review the web server's application log to see if there are any entries that give clues to the issue. Easy: grab the data from the event log and quickly analyse the results. What sort of events? How many of them? When do they occur?

I have two options immediately in my mind – PowerShell and LogParser.

PowerShell I feel I ought to learn: it's embedded in the SSMS UI, so I am guessing that Microsoft is going to make it the de facto standard across their server products. There should be good support resources, and it could well become a 'required' skill for certain tasks and even job descriptions.

LogParser is something I have used on and off for years now. There seems to be very little support for it, but what there is is good quality, starting with the help document that ships with it as a CHM file.

Both tools are command line. Sadly, each is its own command line, not the DOS command window you may be used to, but a command line nonetheless.

So, abandon your mouse and start typing …

In PowerShell 1 you can't access remote logs without a funky-looking function, so this needs to be created:
Function Get-RemoteEventLog {
    param ([string]$ComputerName = $env:ComputerName, [string]$LogName)
    # Use the .NET EventLog class to list the logs on the remote machine
    $evt = [System.Diagnostics.EventLog]::GetEventLogs($ComputerName)

    # If no log name was requested, return the list of logs
    if (-not $LogName) {
        return $evt
    }
    else {
        # Otherwise return the entries of the matching log
        foreach ($log in $evt) {
            if ($log.Log -eq $LogName) {
                return $log.Entries
            }
        }
    }
}

and then you can run:
get-RemoteEventLog -ComputerName <servername> -LogName application | where-object {$_.Source -eq 'ASP.NET 2.0.50727.0'} | out-file -filepath C:\Server_App_Log.csv

This gets the data into a csv file of your choosing, ready for importing into your favourite analysis tool.
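The same filter-and-export step can be sketched in Python (the event records and field names here are invented stand-ins for real EventLogEntry objects, not actual log data) to show the properly delimited output the pipeline is aiming for:

```python
import csv
import io

# invented event records standing in for EventLogEntry objects
events = [
    {"Source": "ASP.NET 2.0.50727.0", "EntryType": "Error", "Message": "Request timed out"},
    {"Source": "MSSQLSERVER", "EntryType": "Information", "Message": "Backup complete"},
    {"Source": "ASP.NET 2.0.50727.0", "EntryType": "Warning", "Message": "Slow request"},
]

def export_by_source(events, source):
    """Keep only events from one source and return them as delimited CSV text."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["Source", "EntryType", "Message"])
    writer.writeheader()
    for event in events:
        if event["Source"] == source:
            writer.writerow(event)
    return out.getvalue()

csv_text = export_by_source(events, "ASP.NET 2.0.50727.0")
print(csv_text)  # header row plus the two ASP.NET entries
```

With the columns properly comma-separated, the file opens cleanly in Excel or any other analysis tool.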

With LogParser we get to start work right away, simply by specifying the target machine in the FROM part of the query:
logparser "SELECT * INTO MyServer_App_Log FROM \\<servername>\application WHERE sourcename = 'ASP.NET 2.0.50727.0'" -o:sql -database:TargetDBName -server:<servername> -createtable:on -resolveSIDs:on -formatmsg:on -icheckpoint:on
Here I have got the data straight into a SQL database, ready for a grilling from T-SQL; alternatively I could have had it in a CSV file for Excel, and so on. It has even resolved the SIDs from AD for me!


PowerShell took 2 minutes+ to get the data into the CSV file, and I still have to get that into something I can analyse it with. It's not formatted for Excel to analyse from the get-go: the columns are not delimited and the messages are truncated. It isn't fit for purpose.

LogParser took 13 seconds to get it into a table where I can get T-SQL working on it instantly. It would be no slower into a CSV file, where I could double-click to get Excel or Calc (OpenOffice) to open it and interrogate it.

It seems to me that I still don't have a reason to go digging deep into PowerShell; to get my job done I have other tools that are faster and simpler.

This opinion may change after I have attended the PowerShell session at SQLBits this coming weekend. If it does I’ll let you know.

If you want to try one or the other then here are some good starting points.
Download –
Support – LogParser
Background – Wikipedia entry with lots of links

Download – PowerShell Blog
Support – PowerShell Blog

Posted by: Fatherjack | 09/Nov/09

Give SQL Server 2008 R2 CTP a Quick Try for Free

Extracted from

You have probably heard that the SQL Server 2008 R2 CTP is available, but you may not have given it a try yet because it is such a pain to download the huge install file, find a test server to run it on, and then install it. Under the best of conditions, this is at least a 3 hour job.

Now, if you want to check out the new features of SQL Server 2008 R2 (the August 2009 CTP), you can without going through all this hassle. Instead, you can go to, create a virtual instance of the software, and try it out, all for free.

All you have to do is fill out a short registration form, and shortly thereafter you will have your own personal instance of SQL Server 2008 R2 to play and experiment with. So now you don't have any excuse not to learn about, and try out for yourself, all the cool new features in R2.

This offer is sponsored by PASS, Microsoft, Dell, and MaximumASP.

Posted by: Fatherjack | 04/Nov/09

A relevant choice of Index options?

I have recently had reason to review the indexing of a table as a query was taking a lot longer to run than I would have expected.

My first move was to check out the estimated execution plan for the SQL that the view was based on. It turned out that a large chunk of the effort in the query was an index scan on a table that had three indexes. The indexes were: a primary key (unique, clustered), a single-column index (based on one of the most common client references in our system) and a third index on some key columns that describe our clients. The primary key column was an INT IDENTITY column: basically just a unique ID in the table for absolute reference to a single row if needed. It plays no part in our business.

Have you spotted what’s wrong here?

Exactly: having the ID column as the CLUSTERED index means the table is sorted on disk in the order of that column. This is like having a library sort its books by the height of the respective authors, which is useless when you need to find anything; you would have to run up and down the shelves to find what you need.

I changed the PRIMARY KEY to be non-clustered, then changed the single-column index to be the CLUSTERED index for the table, and the performance of the query went from 7s down to 1s (YMMV). WIN!!
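The library analogy can be put in miniature code. This Python sketch (invented data, not our client table) counts the comparisons needed to find a row when the data is stored in lookup-key order, so a binary search works, versus when it has to be scanned from one end:

```python
def linear_scan_comparisons(rows, key):
    """Comparisons made scanning rows stored in an order unrelated to the key."""
    comparisons = 0
    for row in rows:
        comparisons += 1
        if row == key:
            break
    return comparisons

def binary_search_comparisons(sorted_rows, key):
    """Comparisons made when the rows are clustered (sorted) on the lookup key."""
    lo, hi, comparisons = 0, len(sorted_rows) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_rows[mid] == key:
            break
        elif sorted_rows[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

# 10,000 invented client references, sorted as a clustered index would store them
refs = sorted(f"CLIENT{n:05d}" for n in range(10_000))
target = refs[-1]  # worst case for the scan
linear = linear_scan_comparisons(refs, target)
binary = binary_search_comparisons(refs, target)
print(linear, binary)  # 10000 comparisons versus 14
```

Clustering the table on the column the queries actually seek on gives the engine the ordered case.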

This got me thinking whether this may be a scenario that is repeated in other tables or other databases on our servers, so I have written the query below to track down indexes that are clustered primary keys; it returns the table/column name and the data type of that column. Simply reviewing the index name (columns called ID are suspicious!) and whether it's an INT IDENTITY column means I can check out anything that looks awry and improve our systems' performance.

SELECT  OBJECT_NAME([i].[object_id]) AS [Table] ,
        [i].[name] AS [Index name] ,
        [c].[name] AS [Column name] ,
        [t].[name] AS [Column type] ,
        [c].[is_identity]
FROM    [sys].[indexes] AS i
        INNER JOIN [sys].[index_columns] AS ic
            ON i.index_id = ic.index_id
               AND i.object_id = ic.object_id
        INNER JOIN [sys].[columns] AS c
            ON c.column_id = ic.column_id
               AND c.[object_id] = ic.[object_id]
        INNER JOIN [sys].[types] AS t
            ON [c].[system_type_id] = [t].[system_type_id]
WHERE   [i].[is_primary_key] = 1
        AND [i].[type] = 1 -- 1 = clustered

Personally, where the column type is an INT and is_identity is true, I am off to look into making similar changes across our servers…

Posted by: Fatherjack | 02/Nov/09

Opera 10

Opera v10 is out. Go get it and enjoy the web at least 35% more.

(Please note: your experience may not be exactly 35% better, but it will be better than with any other browser)

Posted by: Fatherjack | 05/Oct/09

How to group time data by time periods

This solution builds on using a 'numbers' or 'tally' table, as championed by Jeff Moden here. Read that article for the full description of how to use set-based solutions for efficient SQL queries.

Here I am considering a large log table that has entries that are time based and how to aggregate data based on various time bandings. For example if you want to see how many visitors your website had in 10 minute bands through the day or how many application errors you get in the server log per hour and so on…

-- First things first, we will create the Numbers table
-- Things to note:
-- We are using a temp table in this instance as it is a small one, but it is often
-- the case that DBAs add a numbers table to a production database, or have one in
-- their metadata database that they can use.
-- Select a sensible number of rows to insert, relevant to the task in hand. If you
-- are dealing with date-based data that you want to analyse over a year then 1,000
-- rows will be more than ample; it is excessive to use 1,000,000! Especially in this
-- instance where it is a temp table: remember how temp tables work and how this
-- might affect your system.

--DROP TABLE #Numbers
IF OBJECT_ID('tempdb..#Numbers') > 0
    RAISERROR('There is already a table called Numbers, please use that or select a different name.', 0, 0, 1) WITH NOWAIT
ELSE
    BEGIN
        CREATE TABLE [#Numbers] ( [Num] INT NOT NULL )
        RAISERROR('Numbers table created.', 0, 0, 1) WITH NOWAIT
    END

-- Insert values to a given (sic relevant) value
INSERT  INTO [#Numbers]
        SELECT TOP ( 1500 )
                ROW_NUMBER() OVER ( ORDER BY c1.column_id )
        FROM    master.sys.all_columns c1
                CROSS JOIN master.sys.all_columns c2

RAISERROR('Numbers table filled.', 0, 0, 1) WITH NOWAIT

-- Right, the Numbers table is ready for us; let's set some local values

DECLARE @Start DATETIME -- when we want to start analysis
DECLARE @interval INT   -- the grouping period, in minutes, we want to use

SET @interval = 60
-- where 10 = 10 minutes; 30 = half-hourly; 60 = hourly; 240 = 4-hourly; 720 = half-daily; 1440 = daily
SET @Start = '05-Oct-2009 00:00' ;
-- any date you want as the start date

-- Using the Numbers table we can create a CTE that has the start and end times over which we want to analyse the data
WITH times
       AS ( SELECT n.Num ,
                   DATEADD(n, num - @interval, @Start) AS [LowerTime] ,
                   DATEADD(n, num, @Start) AS [UpperTime]
            FROM   [#Numbers] AS n
            WHERE  [n].[Num] % @interval = 0 -- use modulo to get required intervals
          )
-- simply join our source data to the CTE using the date column in the join
    SELECT  [times].[LowerTime] AS [Interval Start] ,
            COUNT(*) AS [Sessions]
    FROM    [dbo].[Log] AS dl
            INNER JOIN [times] ON [dl].[StartTime] BETWEEN [times].[LowerTime]
                                                   AND     [times].[UpperTime]
    GROUP BY [LowerTime]
    ORDER BY [LowerTime] DESC

Your results will look something like this:

Interval Start Sessions
2009-10-05 11:00:00.000 320
2009-10-05 10:00:00.000 2892
2009-10-05 09:00:00.000 2490
2009-10-05 08:00:00.000 2264
2009-10-05 07:00:00.000 1249
2009-10-05 06:00:00.000 106
2009-10-05 03:00:00.000 1
2009-10-05 02:00:00.000 7

Simply adjusting the value for @interval will group your day in smaller or larger periods.
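The same interval arithmetic can be sketched in Python on made-up timestamps: floor each time to the lower edge of its band, then count per band.

```python
from datetime import datetime, timedelta
from collections import Counter

def bucket(events, start, interval_minutes):
    """Group event times into fixed-width bands measured from `start`."""
    counts = Counter()
    width = timedelta(minutes=interval_minutes)
    for t in events:
        # floor the timestamp to the lower edge of its band
        band = start + width * ((t - start) // width)
        counts[band] += 1
    return dict(counts)

start = datetime(2009, 10, 5)
# five invented session times: two in hour 0, two in hour 1, one in hour 2
events = [start + timedelta(minutes=m) for m in (5, 12, 61, 119, 130)]
hourly = bucket(events, start, 60)
print(hourly)
```

Passing 10 instead of 60 regroups the same events into ten-minute bands, just as changing @interval does in the T-SQL.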

It is simple from here to adjust the script to calculate days or months rather than minutes, and it could all be added into a stored procedure to provide a way for users to query their own data via some interface that allows them to supply a parameter for the interval they need; this may be Reporting Services, Excel or a bespoke interface.

Posted by: Fatherjack | 17/Aug/09

Unblocking your SQL Server

Finding which process is locking up your SQL Server can cause some headaches for people new to DBA activities. Digging deeper into the information that is available, and refining the data to inform how you proceed, is something that can still trip up a well-established DBA.

sp_who is a stored procedure that ships with SQL Server. If you run EXEC sp_who from a Management Studio query window you get back a grid of data detailing the connections for all activity on your server. This is useful as it shows who is connected to which database and what they are up to. Now, this can be a small amount of data on your test server, but on a production server it could return hundreds of rows. It is also instantaneous, in so far as when you run it next it could all have changed.

I find it useful to get these results into a temporary table so that I can shake them up a little and refer to the same set of results more than once while I am trying to resolve blocking or some such. To do this you need to get the output of sp_who into the aforementioned temporary table with a section of code like this:

IF OBJECT_ID('tempdb.dbo.#who') > 0
    DROP TABLE [#who]

CREATE TABLE [#who]
    ( [SPID] INT , [ECID] INT ,
      [Status] NVARCHAR(255) ,
      [LoginName] NVARCHAR(255) ,
      [HostName] NVARCHAR(255) ,
      [Blk] NVARCHAR(255) ,
      [DBName] NVARCHAR(255) ,
      [Cmd] NVARCHAR(255) ,
      [Request_ID] INT ) -- present in sp_who output on SQL 2005 and later

INSERT INTO #who EXEC sp_who ;

You can then recall this information at any time by running
SELECT * FROM [#who]

If your headache is blocking flavoured then running this will show you which SPID is the cause:

SELECT  CAST([w].[Status] AS NVARCHAR(30)) AS [status] ,
        [w].[SPID] ,
        [w].[ECID] ,
        [w].[HostName] ,
        [w].[DBName] ,
        [w].[Cmd] ,
        [w].[Blk] AS [Blocked by]
FROM    [#who] AS w
ORDER BY [DBName] , [Blocked by]

The Blocked by column will have a number in it if that process is being blocked. The value in the column is the SPID of the problem process or, if that process itself has a number in its Blocked by column, then it too is being blocked. Keep following the trail until you reach a process that has a zero in its Blocked by column but is shown as blocking a different process.
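That trail-following can be sketched as a small routine; here in Python over a hypothetical Blk map (SPID to blocking SPID, with 0 meaning not blocked):

```python
def root_blocker(blocked_by, spid):
    """Walk the Blocked-by chain from spid until a process with Blk = 0."""
    seen = set()
    while blocked_by.get(spid, 0) != 0:
        if spid in seen:  # guard against walking a deadlock cycle forever
            raise ValueError(f"cycle in blocking chain at SPID {spid}")
        seen.add(spid)
        spid = blocked_by[spid]
    return spid

# invented chain: 71 is blocked by 65, 65 by 52, and 52 by no one
blocked_by = {71: 65, 65: 52, 52: 0, 80: 0}
print(root_blocker(blocked_by, 71))  # 52, the process to investigate first
```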

Using a temporary table means that the information gathered is only available in the single process and will be dropped when you close the query.

I have this script set up as a SQL Snippet in SQL Prompt so that it is readily available in three keystrokes. SQL Prompt is an application from Red Gate that provides shortcuts in SQL editors, making SQL development simpler and faster. Other editors are available, and it is possible to create a script template in SSMS if you are not fortunate enough to have any third party tools working for you.

sp_who is documented in Books Online here, so please read more about it there. There is also an undocumented version called sp_who2 that is very similar but has even more information regarding each process connected to your databases.

Posted by: Fatherjack | 13/Aug/09

The Red Gate Incubator

Free food, office space, training and guest speakers, expenses, accommodation and mentoring for 10 weeks …

Sounds too good to be true?

Not if you know Red Gate. Not only do they make awesome applications to help your average DBA become an awesome DBA, they are now running the Springboard Project for burgeoning software businesses.

Take a look here:

Don't thank me, thank Red Gate

Posted by: Fatherjack | 12/Aug/09

Log Parser – the best value 3MB on your hard drive

I was wondering how best to describe LogParser and couldn't come up with anything better than the description in its help file:

  • Log parser is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows® operating system such as the Event Log, the Registry, the file system, and Active Directory®.

They are right. One thing they didn't mention is that it's free: the price we all like and can all afford.

To my mind this application works much more intuitively than PowerShell and I use it regularly for grabbing data from all sorts of sources and bringing it into a SQL table for further dissection.

Got problems in a web application and need to review the application log? Then use something like “SELECT * INTO AppLog FROM \\Server\Application”.

Want to analyse the security log of a server for failure events? Then use “select top 100 * from \\boa\security where eventtypename = ‘Failure Audit Event'”

These pseudo-scripts all make immediate sense to a SQL DBA. OK, it's not exactly like this, but the differences won't stop you understanding the script syntax and getting to use more data about your environment.

Let's start off by downloading and installing the application. You get it from Microsoft here: LogParser download.
OK, you should have the application all installed now, along with its comprehensive help documentation.

Start the app and you will get a command-prompt-like window, titled Log Parser 2.2, with a stream of BNF description. Now, all Log Parser commands begin with the keyword LOGPARSER, followed by the query in double quotes, with any switches and parameters following on behind that. Hitting Enter runs the query.

Are you ready for your first Log Parser query? OK, we’ll get the names, sizes and last accessed date of files in the root of C:\. Type this into the Log Parser window.

  • LOGPARSER "SELECT TOP 10 name, size, lastaccesstime FROM C:\*.txt" -i:FS

Simple, isn't it? We asked for the name, size and lastaccesstime of the top ten records from C:\ that have a .txt extension. Just as a little hint to Log Parser that we are asking for data from the file system, we set the input format with -i:FS. If you want Log Parser to describe this to you itself, type in:

  • LOGPARSER -h -i:FS

To get a chart illustrating the types of event in your application log from a given date then use

  • Logparser "SELECT eventtypename, COUNT(*) INTO c:\application.jpg FROM \\machinename\application WHERE timegenerated > '2009-08-12 00:00:00' GROUP BY eventtypename" -charttype:column3d

To review the massive list of options on what sort of chart you can have then use this:

  • LOGPARSER -h -o:charttype

That's it for the minute; next time we'll look at getting Log Parser to pull data into our database so that we can analyse it further.

LogParser resources:

Download details

Support forum

Posted by: Fatherjack | 05/Aug/09

Keeping orphans away from your servers

When restoring a database backup to a different server it is wholly possible that there will be security issues when you take the next step and make the system live to the user interface.

Most often in our environment this happens when a live database backup is used to ‘refresh’ the data in the test system. To do this we take a backup file made from the database on Server A and restore it onto Server B. This also applies however if you have to restore to a new server in the event of a disaster – imagine Server A is on fire and you are under an SLA to get the system live again. First, make sure someone deals with the fire, preferably a trained professional such as someone from the local fire service.

On the live server there may be a mixture of users: Windows domain user and group accounts, SQL Server logins etc., and some of these may have been given permission to the database you are in the process of transferring to a new location. This may not compare exactly with the new server's configuration. In order to ensure that once the database is live you can move on to dealing with all the security issues that may arise from the above scenario, you need to reference the database security and the (new) server security details.

In SQL 2008 Database user details are stored in sys.database_principals; Server login details are stored in sys.server_principals.

In SQL 2005 Database user details are stored in sysusers; Server login details are stored in sys.syslogins.

The script below can be switched between the two versions simply by swapping the table names accordingly as the columns used are in both versions.

In order to view the users that have come with the database and don't exist on the new server, we need to find them with:

SELECT  [dp].[name] AS [Orphan DB User]
FROM    [sys].[database_principals] AS dp
        LEFT JOIN [sys].[server_principals] AS sp
            ON [dp].[name] = [sp].[name]
WHERE   ( [dp].[sid] IS NOT NULL
          AND [sp].[name] IS NULL
          AND CAST([dp].[sid] AS BIGINT) < 0
        )

These users are in the database but don't have a server security principal to link to. That's the LEFT JOIN and "[sp].[name] IS NULL" working it out for us.

Next are the database users that might have similarly named principals on the server: if the SIDs for them don't match, SQL Server will consider them different accounts and stop access to the database. This is a great GOTCHA. Using SSMS you see the server login and then, in the database, you see the user, apparently identical; only when you compare SIDs are the differences there to be seen. These occurrences are found with:

SELECT  [sp].[name] AS [SQLServerLogIn] ,
        [sp].[sid] AS [SQLServerSID] ,
        [dp].[sid] AS [DatabaseSid]
FROM    [sys].[server_principals] AS sp
        INNER JOIN [sys].[database_principals] AS dp
            ON sp.[name] = dp.[name]
WHERE   [dp].[sid] <> sp.[sid]
        AND CAST([sp].[sid] AS BIGINT) < 0

Now this is OK but we can combine the two queries to have a concise list of security changes that we need to review:

SELECT  NULL AS ['Split' login] ,
        NULL AS [Server Principal SID] ,
        [dp].[sid] AS [DB Principal SID] ,
        [dp].[name] AS [Orphan DB User]
FROM    [sys].[database_principals] AS dp
        LEFT JOIN [sys].[server_principals] AS sp
            ON [dp].[name] = [sp].[name]
WHERE   ( [dp].[sid] IS NOT NULL
          AND [sp].[name] IS NULL
          AND CAST([dp].[sid] AS BIGINT) < 0
        )
UNION ALL
SELECT  [sp].[name] ,
        [sp].[sid] ,
        [dp].[sid] ,
        NULL
FROM    [sys].[server_principals] AS sp
        INNER JOIN [sys].[database_principals] AS dp
            ON sp.[name] = dp.[name]
WHERE   [dp].[sid] <> sp.[sid]
        AND CAST([sp].[sid] AS BIGINT) < 0

We filter out SQL Server roles and system principals such as guest, sa, public etc. by CASTing the SIDs to a BIGINT and checking for a negative value.
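The two checks amount to simple set logic over the principal names and SIDs; a Python sketch with invented names and SIDs:

```python
db_principals = {          # name -> SID, as restored with the database
    "app_user":  "0x0105AAA",
    "report_ro": "0x0105BBB",
    "old_login": "0x0105CCC",
}
server_principals = {      # name -> SID on the new server
    "app_user":  "0x0105AAA",   # name and SID match: nothing to do
    "report_ro": "0x0105ZZZ",   # same name, different SID: a 'split' login
}

# orphans: database users with no server login of the same name
orphans = sorted(set(db_principals) - set(server_principals))

# split logins: same name on both sides but the SIDs disagree
split = sorted(name
               for name in set(db_principals) & set(server_principals)
               if db_principals[name] != server_principals[name])

print(orphans)  # ['old_login']
print(split)    # ['report_ro']
```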

Next time we will review what to do with these accounts and how to get them enabled for the database in its new location.
