
Friday, January 18, 2008

The transaction log for database 'PoeticSQL_BlogEntries' is full

If you have (intentionally or inadvertently) set
your recovery model to FULL, your SQL Server transaction
logs will not be cleared of inactive transactions unless a
transaction log backup is performed. This is not the case
for database transaction logs whose recovery models
are set to SIMPLE. When the log needs more space, it will
grow automatically by the percent or number of MB specified*.
If autogrowth is disabled, or if the file has already grown to
fill the available space on the drive, you can end up with the
error for which this post is titled.

If your company is like the one for which I work, it may have
databases that are 'in transition' to becoming
full-fledged 9.0 version databases but are currently
in 8.0 compatibility mode. If that is true for you, you
may find that the reporting feature on the Summary screen
of Management Studio does not show you the Disk Usage
report, which is useful for troubleshooting this sort of
thing.

What to do?
SQL Server 2005 might give you a hint and tell you to
check the reusability status of the log in sys.databases.

SELECT log_reuse_wait_desc, [name]
FROM sys.databases



For my database, this query returned LOG_BACKUP, which
indicates that the transaction log is waiting to be backed
up and that is the only thing blocking the reuse of space
within the log.

But this still doesn't answer the question about
Disk Usage as it pertains to the transaction log. SQL
Server has a command that you can run to see the current
size of the transaction logs and how much space is
currently being utilized. The command is
DBCC SQLPERF(logspace).
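
Run it as-is in a query window; on my SQL Server 2005 instance the
resultset has one row per database with these columns:

DBCC SQLPERF(logspace)
--Database Name | Log Size (MB) | Log Space Used (%) | Status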



If you run this command at different times you will see the
Log Space Used (%) fluctuate depending on the activity in
your database, the recovery model set and whether you are
running transaction log backups (which clear out the inactive
transactions and free space for reuse).

So my transaction log is full! And there's no space left on
the disk to start a backup. Thankfully this is in our
dev environment so I issue a:

BACKUP LOG PoeticSQL_BlogEntries WITH TRUNCATE_ONLY

(Keep in mind that TRUNCATE_ONLY breaks the log backup chain,
and it is removed in versions after SQL Server 2005, so treat
it as a dev-only measure.)

There is another option, considering that this is
a development environment:
Temporarily change the recovery model to SIMPLE,
issue a CHECKPOINT in a query window,
and then change the recovery model back to FULL (if needed).
(Some companies make it a policy to back up their dev
environments)
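
A minimal sketch of that sequence, using this blog's database
name as a stand-in; if you switch back to FULL, take a full
backup afterward to restart the log backup chain:

USE PoeticSQL_BlogEntries
GO
ALTER DATABASE PoeticSQL_BlogEntries SET RECOVERY SIMPLE
GO
CHECKPOINT
GO
ALTER DATABASE PoeticSQL_BlogEntries SET RECOVERY FULL
GO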

Avoiding Trouble in the Future
If one were so inclined, one might attempt to glean insight
into the environment and log usage by tracking the result of
DBCC SQLPERF(logspace) over time and then analyzing it. This
is accomplished by sending the output of this command to a
table and scheduling its execution via a SQL Agent job.

You'll need a few things:

CREATE PROC dbo.spTLogSizeTracker
AS
DBCC SQLPERF(logspace)
GO


CREATE TABLE dbo.LogTLOGSize
(
id INT IDENTITY (1,1),
logDate datetime DEFAULT GETDATE(),
DB sysname,
logSize decimal(18,5),
logUsed decimal(18,5)
)
GO



CREATE PROC dbo.spGetTLOGSizes
AS
SET NOCOUNT ON

CREATE TABLE #temp
(
DB sysname,
logSize decimal(18,5),
logUsed decimal(18,5),
status INT
)

INSERT INTO #temp
EXEC dbo.spTLogSizeTracker

INSERT INTO dbo.LogTLOGSize (DB, logSize, logUsed)
SELECT DB, logSize, logUsed
FROM #temp

DROP TABLE #temp
GO
With the above supporting structures in place you can create
a SQL Agent job in Management Studio to run the spGetTLOGSizes
procedure (the wrapper that captures the DBCC output into the
table) on a regular schedule such as hourly.

Then simply query the table and voila! You have a wealth
of information about your database's relative workload as seen
through the transaction log.
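
For example, a query along these lines shows how the log for a
single database trends over time (the database name here is just
a placeholder):

SELECT logDate, logSize, logUsed
FROM dbo.LogTLOGSize
WHERE DB = 'PoeticSQL_BlogEntries'
ORDER BY logDate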

Happy TSQLing
SQLPoet



*For example, (FILEGROWTH = 1MB) means the transaction log will grow by 1 MB every time the log needs to expand.

Tuesday, January 15, 2008

Arithmetic overflow error converting expression to data type int

Today I was troubleshooting a query that was not returning expected records. In so doing, I decided to get a count of the records SQL Server 2005 was returning. Here is the query I used:

SELECT count(*)
FROM jdr WITH (NOLOCK)
INNER JOIN HR WITH (NOLOCK)
on hr.IB_ID = jdr.ib_id

Upon execution, I got the following SQL Server error:

Arithmetic overflow error converting expression to data type int.

As it turns out, the resulting COUNT(*) was in fact out of range
for the int data type, which tops out at 2,147,483,647.
See the int data type entry in SQL Server Books Online.

Using COUNT_BIG(*) instead of COUNT(*), I avoided the error:

SELECT COUNT_BIG(*)
FROM jdr WITH (NOLOCK)
INNER JOIN HR WITH (NOLOCK)
on hr.IB_ID = jdr.ib_id

--333,514,544,659

So the moral of the story is: if you are counting more than ~2.1 billion records, it is better to use COUNT_BIG(*), which returns a bigint.
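
You can see the boundary for yourself with a quick conversion
test; the literal 2147483648 is one past the int maximum:

SELECT CONVERT(bigint, 2147483648) --fine as bigint
SELECT CONVERT(int, 2147483648)    --Arithmetic overflow error converting expression to data type int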


Happy TSQLing
SQLPoet

Thursday, January 10, 2008

Unused Index

--Use this syntax to determine which indexes are not used in SQL Server 2005
--(note: sys.dm_db_index_usage_stats is cleared on instance restart,
--so this only reflects activity since the last restart)

SELECT object_name(stats.[object_id]) AS [Table Name],
CASE i.[index_id]
WHEN 0 THEN N'HEAP'
ELSE i.[name]
END AS [Index Name], *
FROM sys.dm_db_index_usage_stats stats
JOIN sys.objects o ON stats.[object_id] = o.[object_id]
JOIN sys.indexes i ON stats.[object_id] = i.[object_id]
AND stats.index_id = i.index_id --must also match on index_id, or every index on the table joins to each stats row
WHERE [database_id] = DB_ID()
AND user_scans = 0
AND user_seeks = 0
AND user_lookups = 0
AND i.[name] NOT LIKE 'pk%'
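
One caveat: sys.dm_db_index_usage_stats only contains rows for
indexes that have been touched since the last restart, so an index
with no row at all is also a candidate. A complementary sketch,
using the same 'pk%' naming assumption as above:

SELECT object_name(i.[object_id]) AS [Table Name],
i.[name] AS [Index Name]
FROM sys.indexes i
JOIN sys.objects o ON i.[object_id] = o.[object_id]
LEFT JOIN sys.dm_db_index_usage_stats stats
ON i.[object_id] = stats.[object_id]
AND i.index_id = stats.index_id
AND stats.database_id = DB_ID()
WHERE o.[type] = 'U' --user tables only
AND stats.[object_id] IS NULL --no usage row at all since last restart
AND i.[name] NOT LIKE 'pk%'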