Inserts Based on Subqueries

The following example reads every row in large_tab and inserts the values of the columns col1 and col2 into new_tab, all within a single transaction:

insert new_tab select col1, col2 from large_tab

Each row inserted is individually logged, and those log records cannot be truncated until the entire statement has completed. Any locks required for processing the inserts remain in place until the transaction is either committed or rolled back. Because the destination table is often a new table, the locking issue might not be as severe as in previous examples. Depending on the size of the table being read, however, the amount of logging could pose a problem.
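
Before running a statement like this, it helps to know how much room the transaction log has. A minimal sketch of one way to check, assuming a Sybase-style server; database_name is a placeholder:

use database_name
go

-- reports the size of the transaction log and how much of it is in use
sp_spaceused syslogs
go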

As with the mass updates problem, you can often solve this problem by breaking the statement into several statements that accomplish the same task. For example, the single INSERT statement in the previous example could be broken into two or more pieces, as follows:

insert new_tab 
select col1, col2 from large_tab where col1 <= y
go

dump transaction database_name with truncate_only
go

insert new_tab
select col1, col2 from large_tab where col1 > y
go

dump transaction database_name with truncate_only
go

In this example, y represents a median value for col1, and col1 must not allow null values: a row whose col1 is NULL satisfies neither WHERE clause and would be silently left out of new_tab. This is just one of several possible ways to break up a query. Although it is not required, the statements run significantly faster if there is a clustered index on large_tab.col1.
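
If a suitable splitting value is not already known, the midpoint of the column's range is often close enough to a median for this purpose. A sketch of both preparation steps, assuming col1 is numeric; the index name large_tab_col1_idx is illustrative:

-- find the range of col1 and choose y near the middle
select min(col1), max(col1) from large_tab
go

-- optional, but speeds up the range scans in the batched inserts
create clustered index large_tab_col1_idx on large_tab (col1)
go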

If transaction logs are saved for media failure recovery, the log should be dumped to a device, and the WITH TRUNCATE_ONLY option should not be used: WITH TRUNCATE_ONLY discards the log records instead of saving them, which breaks the sequence of log dumps needed for recovery. After you execute a DUMP TRANSACTION WITH TRUNCATE_ONLY statement, you must dump the database before you can resume dumping the transaction log to a device.
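
For example, the sequence below re-establishes a valid dump sequence and then saves the log to a device. This is a sketch, assuming a Backup Server is configured; the dump file paths are placeholders:

-- required first if truncate_only has been used since the last database dump
dump database database_name to "/backups/database_name.dump"
go

-- saves the log records for media failure recovery, then truncates the log
dump transaction database_name to "/backups/database_name.trn"
go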