Coincidentally, I had to do something very similar about 3 hours ago. The table was 35m rows, fairly wide, and it was taking forever to just do this:

    alter table myTable add myNewColumn int not null default 0;

Here's what I ended up going with:

    alter table myTable add myNewColumn int null;

    while 1 = 1
    begin
        update top (100000) myTable
        set myNewColumn = 0
        where myNewColumn is null;

        if @@ROWCOUNT = 0 break;
    end

    alter table myTable alter column myNewColumn int not null;
    alter table myTable add constraint tw_def_myNewColumn default (0) for myNewColumn;

This time around, the alter table statements were near-instant. It took about 7-8 minutes (on a slow server) to do the update batches.
I'm speculating that SQL Server was generating undo information in my original query so it could restore the values, but I didn't expect that going in. Anyway, in your case, maybe something similar would help. You could try adding a new bigint column, updating the new column in batches, and then setting the constraints on it.
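A sketch of that suggestion for an int-to-bigint change might look like the following. The table and column names (Orders, Id, IdBig) and the 100k batch size are hypothetical, not from the question, and you would want to tune the batch size for your server:

```sql
-- Hypothetical sketch: widening Orders.Id (int) to bigint without
-- one long blocking ALTER. Names and batch size are illustrative.

alter table Orders add IdBig bigint null;  -- near-instant: nullable, no default

while 1 = 1
begin
    -- backfill 100k rows at a time to keep each transaction short
    update top (100000) Orders
    set IdBig = Id
    where IdBig is null;

    if @@ROWCOUNT = 0 break;
end

alter table Orders alter column IdBig bigint not null;
-- then re-point indexes/constraints at IdBig, or rename the columns
```

If the column is a primary key, there is extra work on top of this: the PK and any foreign keys referencing it have to be dropped and recreated against the new column.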
I did not find any performance differences between changing the column size and copying into a new column in a table with 10 million+ rows. – SharePoint Newbie Dec 1 '10 at 12:50
Which means it was slow in both cases? – Matt Dec 1 '10 at 15:43
Create the new table you want, with the right column types and indexes (script out the old table and change the name). Then:

    insert into new_table (column list) select * from old_table;

Rename old_table to old_table_back, and rename new_table to old_table.
Create the old indexes on the new table, drop any RI (referential integrity) constraints on the old table, and create them on the new table. Again, your RDBMS will have an easy way to generate scripts to do this.
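The copy-and-swap steps above could look roughly like this on SQL Server. The table and column names are hypothetical; in practice you would script the real table out of SSMS and edit the column type rather than hand-writing the definition:

```sql
-- Hypothetical sketch of the copy-and-swap approach.

create table Orders_new
(
    Id     bigint identity(1, 1) not null,  -- widened from int
    Status int not null
    -- ... remaining columns copied from the old definition
);

-- preserve existing identity values while copying
set identity_insert Orders_new on;
insert into Orders_new (Id, Status)
select Id, Status from Orders;
set identity_insert Orders_new off;

-- recreate indexes and constraints on Orders_new, then swap the names
exec sp_rename 'Orders', 'Orders_back';
exec sp_rename 'Orders_new', 'Orders';
```

The swap itself is a metadata operation, so the window where the table is "missing" is brief; the expensive part is the insert-select and index builds, which happen before the swap.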
We are doing this for Primary Key columns. – SharePoint Newbie Nov 30 '10 at 8:22
Generally much faster; rather than having to resize each row, it's just creating new rows of the correct size. – tpdi Nov 30 '10 at 14:12
I just ran into this a few weeks ago with a table with 639m rows. I ended up creating a new table and copying the data over in batches. It took about 2 days on the main server, and another 3 days for replication to catch up.
I then modified all the views and procs that used the old table. This allowed me to clean up a few issues, like getting rid of a column I didn't want and picking (in some cases) better indexes. After all the data was moved and the SQL changed, I then dropped the old table.
A mess, but I'm wiser for it now. Use bigints for your identity primary keys if the system will be long-lived and there is any chance of multi-millions of rows.
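The batched copy described above might be sketched like this, keyed on identity ranges so each insert is a short transaction and replication has a chance to keep up. Table names, columns, and the 500k batch size are all hypothetical:

```sql
-- Hypothetical sketch: copy rows into the new table in key-range
-- batches rather than one giant insert-select.

declare @lo bigint = 0, @hi bigint, @batch bigint = 500000;
select @hi = max(Id) from Orders;

while @lo <= @hi
begin
    insert into Orders_new (Id, Status)
    select Id, Status
    from Orders
    where Id > @lo and Id <= @lo + @batch;

    set @lo = @lo + @batch;
end
```

Range predicates on the clustered key avoid rescanning already-copied rows, which is what makes "is null"-style backfills slow down as they progress on very large tables.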