Hi,

Our application runs on a SQL Server 2008 database. For one piece of functionality (email related) we use an entity (table) that contains about 100,000 records. These records are frequently inserted and deleted: every day new emails are inserted and emails older than X days are deleted, so the number of records in the table is not increasing much.

However, according to SQL Server the table currently occupies 40 GB of space, while downloading the full content of the table (all 100,000 records) to a CSV file yields only about 1.8 GB. Some time ago (with fewer users) the table contained just 50,000 records and occupied 5.8 GB, with a comparable average email size.

The table/entity does contain 3 columns of type nvarchar(max), but only one really requires it. Our initial thought was to rebuild the indexes on the table, but this did not produce any relevant change. We are a bit reluctant to use the shrink database option, since it seems to be quite a 'dangerous' operation.

Does anyone have suggestions on what directions to look in?
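Before changing anything, it may help to see where the 40 GB actually sits. A sketch, assuming a placeholder table name `dbo.EmailQueue` (replace it with your real table): `sp_spaceused` gives the headline numbers, and a per-allocation-unit breakdown shows how much is in-row data versus LOB pages from the nvarchar(max) columns.

```sql
-- Headline size figures for the table (placeholder name dbo.EmailQueue):
EXEC sp_spaceused N'dbo.EmailQueue';

-- Break usage down per allocation unit type (IN_ROW_DATA, LOB_DATA,
-- ROW_OVERFLOW_DATA) to see how much space the LOB pages take:
SELECT  au.type_desc,
        SUM(au.total_pages) * 8 / 1024 AS total_mb,
        SUM(au.used_pages)  * 8 / 1024 AS used_mb
FROM    sys.partitions AS p
JOIN    sys.allocation_units AS au
        ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
        OR (au.type = 2      AND au.container_id = p.partition_id)
WHERE   p.object_id = OBJECT_ID(N'dbo.EmailQueue')
GROUP BY au.type_desc;
```

If most of the 40 GB shows up under LOB_DATA with far more total than used pages, the space is tied up in deallocated-but-unreleased LOB pages rather than in live rows.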
Check your recovery model and the maintenance plans. In some cases all transactions are logged, causing huge files. Google "SET RECOVERY SIMPLE".
I have never had problems with shrinking.
Maintenance plans can be used to periodically reindex and shrink.
Chris de Gelder
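To check what the answer above suggests, you can query the current recovery model and, if point-in-time restore is not required for this database, switch it to SIMPLE so the transaction log truncates on checkpoint instead of growing. A sketch; `YourDatabase` is a placeholder name:

```sql
-- Check the current recovery model (placeholder database name):
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'YourDatabase';

-- Switch to SIMPLE recovery only if you do not need point-in-time restore;
-- the log then truncates at checkpoints instead of growing until a log backup.
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;
```

Note that this addresses log file growth; it will not reclaim space inside the data file itself.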
The problem is caused by the LOB/nvarchar(max) columns: deleting records does not release the space they occupy, and shrinking has no effect on this.
The solution is to rename the existing table from A to B, create a new table A identical to B, copy all records from B to A (using INSERT ... SELECT), and (re)create the indexes on A.
In one of our cases this reduced the table size from 54 GB to 4 GB.
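The rename-and-copy approach above can be sketched as follows. This is an outline only: `dbo.EmailQueue` and the column list are placeholders, and you should script the real table definition (and its indexes) from your own database before running anything like this.

```sql
-- Sketch of the rename-and-copy approach (placeholder names throughout).
EXEC sp_rename N'dbo.EmailQueue', N'EmailQueue_old';

-- Create a fresh, empty copy of the table; use your original
-- CREATE TABLE statement or script it out from Management Studio:
CREATE TABLE dbo.EmailQueue ( ... );  -- same columns as EmailQueue_old

-- Copy all live rows into the new table; the new LOB pages are
-- allocated compactly, which is where the space saving comes from:
INSERT INTO dbo.EmailQueue (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM dbo.EmailQueue_old;

-- Recreate the indexes on dbo.EmailQueue, verify the data,
-- and only then drop the old table:
DROP TABLE dbo.EmailQueue_old;
```

Run this in a maintenance window with the application stopped, since rows inserted into the old table after the copy starts would otherwise be lost.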
As far as I know, shrinking isn't dangerous and is meant exactly for this. Just don't run it every day, since that would be counterproductive, but doing it once in a while could solve your issue. Why don't you try it in a test environment first?
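If you do decide to try a shrink in a test environment, shrinking the individual data file is more targeted than shrinking the whole database. A sketch, with placeholder database/file names; the target size is in MB:

```sql
-- Shrink the data file down to roughly 10 GB, leaving the rest as free
-- space in the file (placeholder database and logical file names):
USE YourDatabase;
DBCC SHRINKFILE (N'YourDatabase_Data', 10240);

-- Shrinking fragments indexes by moving pages around,
-- so rebuild the indexes on the affected table afterwards:
ALTER INDEX ALL ON dbo.EmailQueue REBUILD;
```

The fragmentation caused by the shrink is the main reason it is counterproductive as a daily operation, which matches the advice above to use it only occasionally.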