How long does a database shrink take?




















If not, bear in mind that shrinking consumes system resources, as you are seeing, and can cause disk fragmentation. You might try the brute-force method: create a new database on the 'spare' server and then create your tables, avoiding the indexes and triggers if you can.

Once the data copy is completed, install your other objects (triggers, procedures, etc.), back up the new database, verify all objects between the databases, drop the old database, and then restore the new database onto your production system and rebuild the indexes. I still have the scripts and format files from a previous bcp we had done.

Michael Valentine Jones: The way to do it is to shrink the database file in small increments in a loop, say 50 MB at a time. It will take a while, but it will make slow, steady progress.

Rudyx - the Doctor: I found your script very useful. Now I am stuck in a situation where I have to shrink the data file to reclaim disk space. Do you think your query would be helpful in this scenario, or should I just go ahead and shrink the data file the usual way?
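The incremental approach described above can be sketched as a simple T-SQL loop. This is a sketch, not the poster's original script; the logical file name and the target and step sizes are placeholder assumptions you would replace with your own.

```sql
-- Shrink a data file in small steps until it reaches a target size.
-- Logical file name, target, and step size are placeholder assumptions.
DECLARE @TargetMB  INT = 50000;   -- stop when the file reaches this size
DECLARE @StepMB    INT = 50;      -- shrink increment per iteration
DECLARE @CurrentMB INT;

SELECT @CurrentMB = size / 128    -- size is in 8 KB pages; /128 => MB
FROM sys.database_files
WHERE name = 'MyDatabase_Data';   -- logical file name (assumption)

WHILE @CurrentMB > @TargetMB
BEGIN
    SET @CurrentMB = CASE WHEN @CurrentMB - @StepMB < @TargetMB
                          THEN @TargetMB
                          ELSE @CurrentMB - @StepMB END;
    DBCC SHRINKFILE ('MyDatabase_Data', @CurrentMB);
END
```

Each iteration is a small, interruptible unit of work, which is why the overall shrink makes slow but steady progress instead of holding resources for one giant operation.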

In my case, what would be the ideal increments? Thanks again for the wonderful script.

February 4, at am: The server is primarily used as a spare where I restore databases to.

You also may need to put tempdb on its own disk.

February 4, at pm: Sometimes it may take many hours if you have LOB data in your database. It depends.

I know that answer blows, but it really does depend. I am using Management Studio, logged on to the server. No blocking. Only GB remaining on the server.

Do you have lots of large data types or images in this database? No images. I believe bigint, decimal(18,0), and varchar(max) are the biggest data types in the DB.

If you want to shrink the reserved space of the database after you delete data, and the reserved space will need to be increased later as data is inserted again, this procedure may create physical disk fragmentation and affect performance.

Be sure to run a disk defragmentation afterwards. Before we start inserting data, we need to look at and document the file sizes of the "mdf" and "ldf" files for the database. This will give us a baseline for comparison.

We have a number of ways to accomplish this; I have listed two of them for your convenience. The results should look something like this, depending on your minimum file size settings at database creation. Navigate to the folder that holds the "mdf" and "ldf" files and take a screenshot like the image below. By default, this will be the folder path where SQL Server stores your data files, which varies depending on your specific configuration and version of SQL Server. In the screenshot below, I have my data files on drive E:.
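One common way to capture this baseline from T-SQL is to query sys.database_files; a sketch follows, where the database name is an assumption for illustration.

```sql
-- Report the size of each data and log file for the current database.
-- Sizes in sys.database_files are stored in 8 KB pages.
USE MyDatabase;  -- example database name (assumption)

SELECT name                 AS logical_name,
       physical_name,                    -- full path to the .mdf/.ldf file
       type_desc,                        -- ROWS (data) or LOG
       size / 128.0         AS size_mb,  -- pages / 128 = MB
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM sys.database_files;
```

Recording size_mb and used_mb before and after the insert and delete steps gives you the comparison numbers the rest of this walkthrough relies on.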

Remember to make sure you have enough available drive space before starting this insert. Later, we are going to remove about eight million records from our database, which will equate to approximately 3 GB, depending on how many rows were randomly created with dates older than five years. Remember, this is a "random" record generator, so every time you run the generator, you will get different results.
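A delete of this shape is often run in batches so that no single transaction has to log all eight million rows at once. The following is a sketch only; the table name, date column, and batch size are assumptions for illustration, not the article's actual script.

```sql
-- Delete rows older than five years in batches to keep each
-- transaction (and the log growth per transaction) small.
-- Table, column, and batch size are assumptions for illustration.
DECLARE @BatchSize INT = 100000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM Sales.Customers
    WHERE AccountOpenedDate < DATEADD(YEAR, -5, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete
END
```

In the SIMPLE recovery model the log can be reused between batches; in FULL recovery you would also need log backups between batches to keep the ldf file from growing as dramatically as described below.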

Looking back at the file sizes after deleting approximately eight million rows of data, we can see that the size of the mdf file has not changed. Conversely, the ldf file has grown tremendously. Remember, the log file is just what it sounds like: it logs all insert, update, delete, etc. actions. Our ldf file grew to this size because about eight million "delete" actions were run on the Sales.Customers table in the database.

Notice also that the available free space on the drive has decreased when we were expecting an increase. This is due primarily to the increased ldf file size and the mdf file retaining its peak reserved space. SQL Server offers a couple of ways you can shrink the database and file size.
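You can confirm where the growth went with a quick log-space check before deciding what to shrink:

```sql
-- Show log file size and percent used for every database on the instance.
DBCC SQLPERF(LOGSPACE);
```

A large log size with a low "Log Space Used (%)" value indicates the ldf file is mostly empty reserved space, which is exactly the situation a shrink reclaims.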

Like many things, the choice between them boils down to personal preference.
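The two usual options can be sketched as follows; the database name, logical log file name, and target sizes are assumptions for illustration.

```sql
-- Option 1: shrink every file in the database,
-- leaving 10 percent free space in each file.
DBCC SHRINKDATABASE (MyDatabase, 10);

-- Option 2: shrink one specific file to a target size in MB
-- (here, the log file down to 1 GB).
DBCC SHRINKFILE ('MyDatabase_Log', 1024);
```

DBCC SHRINKFILE gives finer control, which matters here because it is the ldf file, not the mdf file, that ballooned. Either way, remember that shrinking data files fragments indexes, so plan to rebuild them afterwards, as noted earlier.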


