
Thursday, September 4, 2008

Is the file there?

I was recently asked to help verify that files actually existed in the locations specified in a field of a table. In other words, the database has a table with a field that stores a file path, and my task (since I chose to accept it) was to validate that each database value actually pointed to a file in the OS.

A rather undocumented extended stored procedure, xp_fileexist, still exists in SQL Server 2005. I decided to use it since this task was not for application functionality but simply an ad-hoc query to help troubleshoot an issue. Microsoft discourages the use of undocumented syntax because it does not guarantee support for, or the continued existence of, that syntax in the future. But for this situation, it was ok.

So basically the thing is used like this:

EXECUTE xp_fileexist 'c:\autoexec.bat', @result OUTPUT

I do suggest you use an OUTPUT variable in this case, since you are checking more than a single file. xp_fileexist can also be used without the OUTPUT variable, which will return a result set like this:

File Exists | File is a Directory | Parent Directory Exists
----------- | ------------------- | ------------------
1 | 0 | 1
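For the OUTPUT-variable form, a complete call looks like this (a minimal sketch; the path is just an example):

```sql
DECLARE @result int

--returns 1 in @result if the file exists, 0 otherwise
EXECUTE master..xp_fileexist 'c:\autoexec.bat', @result OUTPUT

SELECT @result AS [File Exists]
```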

I, however, had a list of paths I needed to loop through, so first I gathered the relevant data into a temp table. I created a variable to hold a single file path value and an INT OUTPUT variable to hold the result (1 = exists / 0 = does not exist).

Within the loop, I update the temp table for each file path so I can report back which file paths were there and which were not. Last is just a SELECT from the temp table I just updated (the report).

use PoeticSQL_BlogEntries

--drop table #temp1

SELECT uri as cachefileuri, 0 as exists1
INTO #temp1
FROM Cache with (nolock)
WHERE exportmode = 'Print'
and createdate > '09/04/2008'
and uri like '%223454%'

DECLARE @URI varchar(2000) --to hold currently searched file path
DECLARE @iFileExists int --to hold the return value from the XP

--initialize the variable
SELECT @URI = min(cachefileuri) FROM #temp1

--THE Loop
WHILE @URI IS NOT NULL
BEGIN

EXECUTE master..xp_fileexist @URI, @iFileExists OUTPUT

UPDATE #temp1
SET exists1 = @iFileExists
WHERE cachefileuri = @URI

--get next record
SELECT @URI = min(cachefileuri) FROM #temp1 WHERE cachefileuri > @URI

END

SELECT * FROM #temp1

--clean up
DROP TABLE #temp1

There are newer solutions to this problem. Some use the sp_OACreate and sp_OADestroy OLE Automation procedures, which require special permissions, and another interesting one I have seen but not tested myself is master..sp_MSget_file_existence.
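For the curious, the OLE Automation route looks roughly like this (a sketch only; it assumes 'Ole Automation Procedures' has been enabled via sp_configure, and error handling is omitted):

```sql
DECLARE @fso int, @exists int

--create a Scripting.FileSystemObject instance
EXEC sp_OACreate 'Scripting.FileSystemObject', @fso OUTPUT

--call its FileExists method against the path in question
EXEC sp_OAMethod @fso, 'FileExists', @exists OUTPUT, 'c:\autoexec.bat'

--release the COM object
EXEC sp_OADestroy @fso

SELECT @exists AS [File Exists] --nonzero = exists, 0 = does not exist
```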

Good Luck and Happy TSQLing

Friday, January 18, 2008

The transaction log for database 'PoeticSQL_BlogEntries' is full

If you have (intentionally or inadvertently) set your recovery model to FULL, your SQL Server transaction logs will not be cleared of inactive transactions unless a transaction log backup is performed. This is not the case for database transaction logs whose recovery models are set to SIMPLE. When the log needs more space, it will grow automatically by the percent or number of MB specified*. If autogrow is off, or if the file has already grown to the size of the available space on the drive, you could end up with the problem for which this post is titled.
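You can confirm which recovery model each database is using right from sys.databases (SQL Server 2005 syntax):

```sql
SELECT [name], recovery_model_desc
FROM sys.databases
```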

If your company is like the one for which I work, it may have databases that are 'in transition' to becoming full-fledged 9.0 version databases but are currently in 8.0 compatibility mode. If that is true for you, you may find that the reporting feature on the Summary screen of Management Studio does not show you the Disk Usage report, which is useful for troubleshooting this sort of problem.

What to do?
SQL Server 2005 might give you a hint and tell you to check the reusability status of the log in sys.databases.

SELECT log_reuse_wait_desc, [name]
FROM sys.databases

The resultset shows that, for my database, log_reuse_wait_desc is LOG_BACKUP: my transaction log is waiting to be backed up, and that is the only thing blocking the reuse of space within the log.

But this still doesn't answer the question about Disk Usage as it pertains to the transaction log. SQL Server has a command that you can run to see the current size of the transaction logs and how much space is currently being utilized. The command is DBCC SQLPERF(logspace).

If you run this command at different times, you will see the Log Space Used (%) fluctuate depending on the activity in your database, the recovery model set, and whether you are running transaction log backups (which clear out the inactive transactions and free space for reuse).

So my transaction log is full! And there's no space left on the disk to start a backup. Thankfully this is in our dev environment, so I issue a:

BACKUP LOG PoeticSQL_BlogEntries WITH truncate_only

There is another option, considering that this is a development environment: temporarily change the recovery model to SIMPLE, issue a CHECKPOINT in a query window, and then change the recovery model back to FULL (if needed). (Some companies make it a policy to back up their dev databases too, so check before you truncate.)
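The steps just described can be sketched like this (assuming the database should end up back in FULL recovery):

```sql
USE PoeticSQL_BlogEntries

ALTER DATABASE PoeticSQL_BlogEntries SET RECOVERY SIMPLE

CHECKPOINT --under SIMPLE, this truncates the inactive portion of the log

ALTER DATABASE PoeticSQL_BlogEntries SET RECOVERY FULL
```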

Avoiding trouble in the Future
If one were so inclined, one might attempt to glean insight into the environment and log usage by tracking the result of DBCC SQLPERF(logspace) over time and then analyzing it. This is accomplished by sending the output of this command to a table and scheduling its execution via a SQL Agent job.

You'll need a few things:

--the stored procedure the SQL Agent job will call
CREATE PROC dbo.spTLogSizeTracker
AS
DBCC SQLPERF(logspace)
GO

--the permanent table that accumulates the history
CREATE TABLE LogTLOGSize (
id INT IDENTITY (1,1),
logDate datetime DEFAULT GETDATE(),
DB sysname,
logSize decimal(18,5),
logUsed decimal(18,5)
)

--a temp table to catch the output of the procedure
CREATE TABLE #temp (
DB sysname,
logSize decimal(18,5),
logUsed decimal(18,5),
status INT
)

INSERT INTO #temp
EXEC spTLogSizeTracker

INSERT INTO LogTLOGSize (DB, logSize, logUsed)
SELECT DB, logSize, logUsed
FROM #temp

With the above supporting structures in place, you can create a SQL job in Management Studio to run the spTLogSizeTracker procedure on a regular schedule, such as hourly.

Then simply query the table and voila! You have a wealth of information about your database's relative workload as seen through the transaction log.
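Once the job has been running for a while, a simple query against the history table gives you the trend (a sketch; adjust the grouping and database name to taste):

```sql
--average and peak log utilization per day for a given database
SELECT DB,
convert(varchar(10), logDate, 101) as [Day],
avg(logUsed) as [Avg Log Used (%)],
max(logUsed) as [Max Log Used (%)]
FROM LogTLOGSize
WHERE DB = 'PoeticSQL_BlogEntries'
GROUP BY DB, convert(varchar(10), logDate, 101)
ORDER BY [Day]
```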

Happy TSQLing

*For example, (filegrowth = 1MB) means the transaction log will grow by 1 MB every time the log needs to expand.

Tuesday, January 15, 2008

Arithmetic overflow error converting expression to data type int

Today I was troubleshooting a query that was not returning expected records. In so doing, I decided to get a count of the records SQL Server 2005 was returning. Here is the query I used:

SELECT count(*)
FROM &lt;first table&gt; hr --actual table names omitted
JOIN &lt;second table&gt; jdr
on hr.IB_ID = jdr.ib_id

Upon execution, I got the following SQL Server error:

Arithmetic overflow error converting expression to data type int.

As it turns out, the resulting count(*) was in fact out of range for an int data type (whose maximum is 2,147,483,647).
See the int, bigint, smallint, and tinyint topic in SQL Server Books Online.

Using COUNT_BIG(*) instead of COUNT(*), I avoided the error:

SELECT count_big(*)
FROM &lt;first table&gt; hr --actual table names omitted
JOIN &lt;second table&gt; jdr
on hr.IB_ID = jdr.ib_id


So the moral of the story is that if you may be counting more than ~2.1 billion records (the int ceiling of 2,147,483,647), better to use COUNT_BIG(*), which returns a bigint.
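You can reproduce the same error without a giant table by forcing a value past the int ceiling:

```sql
--fails with: Arithmetic overflow error converting expression to data type int.
SELECT CONVERT(int, 2147483648)

--succeeds, since bigint can hold the value
SELECT CONVERT(bigint, 2147483648)
```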

Happy TSQLing

Thursday, January 10, 2008

Unused Index

--Use this syntax to determine which indexes are not used in SQL Server 2005

SELECT object_name(stats.[object_id]) AS [Table Name],
CASE i.[index_id]
WHEN 0 THEN 'HEAP'
ELSE i.[name]
END AS [Index Name], *
FROM sys.dm_db_index_usage_stats stats
JOIN sys.objects o on stats.[object_id] = o.[object_id]
JOIN sys.indexes i on o.[object_id] = i.[object_id]
and stats.index_id = i.index_id
WHERE [database_id] = DB_ID()
and user_scans = 0
and user_seeks = 0
and user_lookups = 0
and i.[name] not like 'pk%'

Thursday, November 29, 2007


Ever tried to use the sysdepends table to determine dependencies among your database objects? It doesn't ALWAYS work. There are those occasions when during creation, you might get a warning message from SQL Server that says that a procedure was created but that the row was not added to sysdepends:

Cannot add rows to sysdepends for the current stored procedure because it
depends on the missing object 'spGetSQLPoetsStats'. The stored procedure
will still be created.

I think we have all had this issue and know why it happens. The purpose of this post is to show you another way to 'skin-the-cat'.

Take for instance this undocumented procedure: sp_MSdependencies. If you Googled it, you might encounter some difficulty in finding just how to use it. It is one of many undocumented tools that Microsoft 'provides' with SQL Server. It should be noted at this point that Microsoft does not guarantee future support for undocumented objects. It is recommended that you do not use this in your application code but simply as a means of ad-hoc reporting, for as long as it remains available.

The task in this case was to provide the QA folks with a list of the objects they would be testing and the dependencies thereof. As stated earlier, sysdepends does not ALWAYS help and this was one of those times.

If you EXECUTE sp_MSdependencies '?' in the query editor against SQL Server's master database, you will find a list of parameter values you can specify. Pay particular attention to the flag parameter. Therein lies the magic:)

I am using 266751, which shows the objects that depend on each object, and also 4607, which shows the objects on which each object depends... confused? The result set should fill in the holes in your understanding.


set nocount on

declare @tablename varchar(100)
declare @type varchar(100)
declare @ServerName varchar(100)

set @ServerName = db_name()

Declare @temp2 table
(ServerName varchar(100),
ObjectName varchar(100),
objecttype varchar(100),
Dependency varchar(100),
ObjectName_2 varchar(100),
Objecttype_2 varchar(100))

Declare @temp table
(otype varchar(100),
oObjname varchar(100),
oOwner varchar(50),
oSequence int)

Select top 1 @tablename = name, @Type = type_desc from sys.objects
where type in ('FN','P','TF','V','U')
and is_ms_shipped <> 1
order by name

while @tablename is not null
begin

Insert into @temp
EXEC sp_MSdependencies @tablename, null, 266751 --this flag shows which objects depend on these

Insert into @temp2
select @ServerName, @tablename, @type, 'Is Used by', oObjname,
Case otype
When 4 then 'View'
When 16 then 'Stored Procedure'
When 256 then 'Trigger'
When 8 then 'Table'
When 1 then 'Function'
Else otype
End
from @temp

Delete from @temp

Insert into @temp
EXEC sp_MSdependencies @tablename, null, 4607 --this flag shows the objects these depend on

Insert into @temp2
select @ServerName, @tablename, @type, 'Is Dependent on', oObjname,
Case otype
When 4 then 'View'
When 16 then 'Stored Procedure'
When 256 then 'Trigger'
When 8 then 'Table'
When 1 then 'Function'
Else otype
End
from @temp

Delete from @temp
Print @tablename

set @type = (Select top 1 type_desc from sys.objects
where type in ('FN','P','TF','V','U')
and is_ms_shipped <> 1
and name > @tablename
order by name)

set @tablename = (Select top 1 name from sys.objects
where type in ('FN','P','TF','V','U')
and is_ms_shipped <> 1
and name > @tablename
order by name)

end

select * from @temp2 order by objecttype

With the results of this script, we can provide the QA folks the list they need to document, test, etc.

Happy TSQLing

Wednesday, November 14, 2007

Using SQL to write SQL

You may be asked to create users for your company's in-house application. We're not talking SQL Server Logins and Database Users, no. We're talking security credentials that most web applications administer through tables seemingly common to most businesses, like the [Users] table.

If you were to get hit with one of these requests, your first reaction might be to ask whether a process or scripts for this sort of thing already exist. Depending on the size and age of your company and the quality of your predecessor DBAs, there might not be any processes or scripts available, and so you must create one. Sure, you COULD go to the GUI and then script your changes out, but you still have to manually enter the data values for the new record. Plus, using the GUI is a much less efficient way to get these users into every environment in which they are needed (thanks to the human element and our tendency to err).

I recently got one of these requests in which the author of the ticket specified three new users to be created and modelled after another user (permission-wise). Our application keeps user information in a series of tables with the prefix User. In order for a user to have access to the application, they will need a record in the [User] table.

No brainer, so far?

To INSERT into the User table we might use something like:

INSERT [User](User_Group_ID,User_Name,Password_Hash,Full_Name,Email_Address,[...])
VALUES ('3121', 'sqlpoet', 'B44DDA1DADD351948FCACE1856ED97366E679239',[...])

Even with a manually written insert statement like this one, we still have to get the data of the user we are required to model these new ones after. Since I did some preliminary querying on the table, I know that our model user's User_ID = 11106.

SELECT * FROM [User] WHERE User_ID = 11106

Using the data resulting from the query above, we plug in the common data values, changing variable data values such as User_Name, and then copy and paste each value into its corresponding place in the INSERT statement.

But what if we re-wrote our SELECT to write the INSERT for us?

SELECT 'INSERT [User](User_Group_ID,User_Name,Password_Hash,Full_Name,Email_Address,[...])
VALUES(''' + convert(varchar(20),User_Group_ID) + ''', ''' + [User_Name] + ''', ''' + char(10) +
Password_Hash + ''', ''' + Full_Name + ''', ''' + Email_Address + ''', ''' + [...] + ''')'
FROM [User] WHERE User_ID = 11106 --our model user

The result of the above query, done correctly, will yield a statement that, with a little editing, can be copied and pasted into your change control script so that these users easily move from environment to environment whenever it is executed. Or you can execute it to create these users now... it's up to you. Happy TSQLing.
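One caveat worth adding: if any of the model user's string values contain an embedded single quote, the generated INSERT will break. Wrapping each string column in a REPLACE that doubles the quotes guards against that (a sketch using just two of the columns):

```sql
SELECT 'INSERT [User](User_Name,Full_Name) VALUES(''' +
REPLACE([User_Name], '''', '''''') + ''', ''' +
REPLACE(Full_Name, '''', '''''') + ''')'
FROM [User] WHERE User_ID = 11106 --our model user
```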

SQL Poet

Wednesday, October 10, 2007

Create permissions statements for Procs and/or UDFs

** Today's post concerns security. I am in a smaller company that
** is just getting used to having DBAs. As such, the general care
** and feeding of the database and servers was similar to the
** martial law of the Wild West. All developers had sa permissions
** to the database servers! Which in essence means everyone had a
** gun and could accidentally (or otherwise) shoot the database
** in the foot at will. This included the user that was created for the
** application as well. We want to switch to Role based permissions
** for the application.
** Long story short, after sufficient warning, we removed the elevated
** permissions that the developers were enjoying. We gave the application's
** user account db_datareader (not the best option but I digress) and then
** scripted all the SELECT and EXECUTE permissions pertaining to
** Procedures and UDFs for the application's user account. We did this
** last scripting step using a procedure included in the install of SQL Server:
** sp_helprotect; which brings me to this small disclaimer:
** This works only in SQL Server 2000, because the integral
** piece of this solution (sp_helprotect) works differently in 2005:(
** This script creates the Role and adds the specified user to it.
** Then it creates permissions statements for each Proc
** or UDF based on the permissions of the specified user. You must then
** copy/paste the results into Query Analyzer or the Management Studio
** editor.
** For our example, let us assume that the application database user
** account is named 'sqlpoet'. This account already has permissions directly
** granted to it for executing procedures and UDFs. We want to transfer those
** permissions to our new database role.

IF NOT EXISTS (SELECT *
FROM dbo.sysusers
WHERE [Name] = N'ExecuteProcAndUDFs' and uid > 16399)
EXEC sp_addrole N'ExecuteProcAndUDFs'

DECLARE @UserAcct varchar(200)

SET @UserAcct = 'sqlpoet'

EXEC sp_addrolemember N'ExecuteProcAndUDFs', @UserAcct

IF EXISTS(SELECT * FROM tempdb..sysobjects WHERE [Name] like '%temp1%')
DROP TABLE #temp1

CREATE TABLE #temp1 ( --table to hold the current permissions set
Owner varchar(50),
Object varchar(200),
Grantee varchar(200),
Grantor varchar(200),
ProtectType varchar(200),
Action varchar(200),
[Column] varchar(200))

INSERT #temp1 --get the data
EXEC sp_helprotect

SELECT DISTINCT ProtectType --create the GRANT statements
+ ' '
+ [Action]
+ ' on '
+ object
+ ' to ExecuteProcAndUDFs' /* optionally you can use the Grantee field, but I needed to grant these permissions to the hard-coded database role you see here.*/
FROM #temp1
JOIN information_schema.routines
ON object = specific_name

/*Remember to copy the results and execute them in the appropriate database.*/
Happy TSQLing
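For SQL Server 2005, where sp_helprotect behaves differently, a similar list can be generated from the catalog views instead (a sketch, reusing the same hard-coded role name; OBJECT_SCHEMA_NAME requires 2005 SP2 or later):

```sql
SELECT 'GRANT ' + dp.permission_name
+ ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(dp.major_id))
+ '.' + QUOTENAME(OBJECT_NAME(dp.major_id))
+ ' TO ExecuteProcAndUDFs'
FROM sys.database_permissions dp
JOIN sys.database_principals pr ON dp.grantee_principal_id = pr.principal_id
WHERE pr.[name] = 'sqlpoet'
and dp.permission_name in ('EXECUTE', 'SELECT')
and dp.state in ('G', 'W') --granted, or granted with grant option
```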

Monday, September 24, 2007

Who changed my stored procedure?

** I currently work in an environment
** in which developers are
** allowed to make changes
** directly to the database objects
** in SQL Server.
** I have always endorsed empowerment
** and as a database developer/DBA for the
** same company, I do not feel threatened
** by their continued development in an
** environment under which they have this
** level of control.
** The problem arises when we (as the database
** professionals on the development side of the house)
** are required to keep track of the data model and
** all objects. In order to accomplish this feat, we must be
** explicitly aware of all changes that occur. Well...
** here's a really cool tool that records these changes
** by taking advantage of features in SQL Server 2005,
** namely the database-level trigger and the EVENTDATA()
** function, which returns xml.
** What follows is a script that will allow you to see
** who is making changes to your objects in SQL
** Server 2005 databases.

** To see a full explanation of this solution
** by the author: Chris Rock (follow that link!!!)
** Happy TSQLing
** SQLPoet

USE [master]
GO
--create a database to hold the log (skip this if you already have one)
CREATE DATABASE [EventsLogDB]
GO
USE [EventsLogDB]
GO
--This goes into the eventslogdb database
CREATE TABLE [dbo].[ChangeLog](
[LogId] [int] IDENTITY(1,1) NOT NULL,
[DatabaseName] [varchar] (256) ,
[EventType] [varchar](50) ,
[ObjectName] [varchar](256) ,
[ObjectType] [varchar](25) ,
[SqlCommand] [varchar](max) ,
[EventDate] [datetime] NOT NULL CONSTRAINT [DF_EventsLog_EventDate] DEFAULT (getdate()),
[LoginName] [varchar](256) NOT NULL
)
GO

--Note : Our DDL trigger goes into the database to monitor and is defined as:
USE [YourMonitoredDatabase]
GO
CREATE TRIGGER backup_objects
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
DECLARE @data xml
SET @data = EVENTDATA()
INSERT INTO eventslogdb.dbo.changelog(databasename, eventtype,
objectname, objecttype, sqlcommand, loginname)
SELECT
@data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(256)'),
@data.value('(/EVENT_INSTANCE/EventType)[1]', 'varchar(50)'),
@data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(256)'),
@data.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(25)'),
@data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)'),
@data.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(256)')
GO
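With the trigger in place, answering "who changed my stored procedure?" becomes a single query against the log table:

```sql
--the twenty most recent DDL changes, newest first
SELECT TOP 20 EventDate, LoginName, EventType, ObjectName, SqlCommand
FROM EventsLogDB.dbo.ChangeLog
ORDER BY EventDate DESC
```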

Friday, September 21, 2007

read the name of the sql server agent account

As time goes on and the push to make things more dynamic continues, I find myself needing to make procedures and DTS/SSIS packages more intelligent about the environment to which they are being deployed. This is especially true now with the upgrade from earlier versions of SQL Server to SQL 2005 and soon SQL 2008.

This little script is what I use to determine the name of the sql server agent account so that I can correctly set the @owner variable of the sp_add_job procedure.

One could conceivably put this into a stored procedure, if one so desired:)


-- script to read the name of the sql server agent account
-- Declare your variables...
DECLARE @rc int,
@machine nvarchar(128),
@instance nvarchar(128),
@profile nvarchar(128),
@regpath nvarchar(256),
@account nvarchar(128)

IF (charindex(N'7.00', @@version, 0) > 0)
SELECT @instance = NULL
IF (charindex(N'8.00', @@version, 0) > 0) or (charindex(N'9.00', @@version, 0) > 0)
SELECT @machine = CONVERT(nvarchar(128), serverproperty('MachineName')),
@instance = CONVERT(nvarchar(128), serverproperty('InstanceName'))
-- Check if not running SQL Server as LocalSystem
-- @rc 0 = LocalSystem, 1 = running under account
EXEC @rc = master.dbo.xp_MSLocalSystem @machine, @instance
IF (@rc = 0)
PRINT 'SQL Server is running under the LocalSystem account'
-- Get the account name from the Agent service's registry key
IF (@instance IS NULL)
SELECT @regpath = N'SYSTEM\CurrentControlSet\Services\SQLSERVERAGENT'
ELSE
SELECT @regpath = N'SYSTEM\CurrentControlSet\Services\SQLAgent$' + @instance
EXEC master.dbo.xp_regread N'HKEY_LOCAL_MACHINE',
@regpath,
N'ObjectName',
@account OUTPUT
PRINT 'SQL Server Agent is running under account ' + @account
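With @account populated, you can feed it straight to msdb's job-creation procedure (a sketch; the job name here is just a made-up example):

```sql
EXEC msdb.dbo.sp_add_job
@job_name = N'MyNightlyMaintenance', --hypothetical job name
@owner_login_name = @account
```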

In the beginning...

So today I started my first ever blog as an online repository of all of the SQL scripts I keep in a folder called Code Library on my flash drive. This venue strives to make my scripts available to me regardless of whether I remembered my keys or not. As long as I have a connection to the internet, I can get to these little nuggets of time-saving splendor.

In time I hope that this can be a place for more folks to share and store their favorite scripts:)

Happy TSQLing!