
Friday, March 23, 2012

Exclusive Insert Lock on a Table

Hello All!

I want to perform 4 or 5 statements as a transaction but I need to make sure that during this complete transaction no one else inserts or deletes records from a table named SomeTable.

So how can I lock MyTable at the beginning of the transaction so that during my transaction no one else can insert or delete anything in table SomeTable?

Thanks!

David

You can open a transaction explicitly so other transactions are placed in a queue. Use BEGIN TRAN / COMMIT TRAN. Read up in Books Online for more info on transactions.
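For example (a minimal sketch; the table name, column, and values are just placeholders for your own statements):

BEGIN TRAN;

    INSERT INTO SomeTable (Col1) VALUES ('value 1');
    -- ... the remaining statements of the unit of work go here ...

COMMIT TRAN;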

|||

Yup, I could read about this as you suggest, but it's fairly complex, so I was hoping someone experienced and knowledgeable would give me the answer.

Thanks for your post, but it doesn't answer my question. I know how to create an explicit transaction.

My question is how do I lock a table during a transaction so that no one else inserts or deletes records from it.

I don't think that by just creating an explicit transaction and reading a row from MyTable this will lock the complete table.

|||

neutrino:

I don't think that by just creating an explicit transaction and reading a row from MyTable this will lock the complete table.

Yes it does. And that's all you have to do. Try this scenario:

(1) In a Query Analyzer window, run this script:

BEGIN TRAN

INSERT INTO TheTable ... <complete the rest of the INSERT>

(2) Open another Query Analyzer window and do a SELECT * FROM TheTable. You will see that your query is in "suspended" status, waiting for the INSERT to finish, since you opened a transaction explicitly.

(3) Now go back to the first window and run this:

Commit

(4) Check the second window and you will see results for your SELECT *...

Basically, your BEGIN TRAN has opened a transaction and locked the table for any other committed transactions (even reads). As soon as you commit the transaction, the lock on the table is released and others can read from it. You can bypass this and do a dirty read by using the NOLOCK hint. That's not always suggested, unless your business requirements allow it.
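For example, the waiting SELECT in step (2) could be written like this to do a dirty read instead of waiting (a sketch; it may return rows that are later rolled back):

SELECT * FROM TheTable WITH (NOLOCK);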

|||

Thanks. This tells me how to lock a table by creating a transaction and doing an insert statement. However, in my scenario I need to lock the table initially, before doing any inserts to it, and I want it to remain locked until the transaction ends (even if I don't do any inserts). I don't want any other transaction to be able to insert any rows until my transaction finishes.

I think that what I need is to set the transaction isolation level to SERIALIZABLE.

I found this: http://msdn2.microsoft.com/en-us/library/ms173763.aspx
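Roughly what I have in mind (a minimal sketch, assuming the table in question is SomeTable; a TABLOCKX table hint would be another way to hold an exclusive lock on the whole table):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRAN;

    -- Reading the table under SERIALIZABLE takes shared/key-range locks that
    -- block other transactions from inserting or deleting rows until COMMIT.
    SELECT COUNT(*) FROM SomeTable;

    -- ... the 4 or 5 statements of my transaction go here ...

COMMIT TRAN;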

I'll be trying it later and will post results.

David

|||

I do find your Query Analyzer exercise extremely useful. I will use it to test my locking "theories". Thanks!

|||

ndinakar:

neutrino:

I don't think that by just creating an explicit transaction and reading a row from MyTable this will lock the complete table.

Yes it does. And that's all you have to do. Try the Query Analyzer scenario I described above.

Actually, that doesn't lock the whole table. It locks a portion of the table, but your SELECT requires access to the entire table (including the locked portion), so it has to wait. If the table is indexed, and the SELECT can use the index to determine that it doesn't need the locked portion, then the SELECT won't be delayed. In addition, a second INSERT should complete without being delayed.
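A quick way to see this for yourself, along the lines of the walkthrough above (a sketch; the column name is just a placeholder):

-- Session 1: leave a transaction open after inserting a single row
BEGIN TRAN;
INSERT INTO TheTable (SomeColumn) VALUES ('first row');

-- Session 2: this second INSERT completes immediately, because only the new
-- row (key/page) from session 1 is locked, not the whole table
INSERT INTO TheTable (SomeColumn) VALUES ('second row');

-- sp_lock (run from either session) shows row/key and page locks on TheTable
-- rather than a full table lock
EXEC sp_lock;

-- Session 1: release the locks
COMMIT TRAN;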

To the original poster: what you are asking for is rather uncommon, and you are best off avoiding it if you can. Perhaps you need to rethink why you want the table locked and what you are trying to accomplish by doing so. Usually there is a much better way of achieving that.

|||

Motley:

To the original poster: what you are asking for is rather uncommon, and you are best off avoiding it if you can. Perhaps you need to rethink why you want the table locked and what you are trying to accomplish by doing so. Usually there is a much better way of achieving that.

You are totally right. That is actually what I did: I rethought the process and found a better way that doesn't require the table lock.

Anyway, it was a great learning experience.

Thanks all for your support.

David

|||

ndinakar:

neutrino:

I don't think that by just creating an explicit transaction and reading a row from MyTable this will lock the complete table.

Yes it does. And that's all you have to do. Try this scenario:

It actually doesn't, but the rest of your post was really helpful. Thank you.

Monday, March 12, 2012

Excessive network usage with SQL 2000 to SQL 2005 transactional replication

We have a SQL 2000 database (Publisher) replicating inserts and updates across a 10Mb link to a SQL 2005 database (Subscriber). The Publisher has two tables we are interested in: one with 50 columns and one with 15. Both tables have 6 insert/update triggers that fire when a change is made, updating columns in the publisher database.

We have set up pull transactional replication from the Subscriber, occurring against the Publisher every minute. We have limited the subscription/replication configuration to publish 6 columns from table 1 and 4 from table 2. Any changes occurring on any other columns in the Publisher are of no interest. The SQL 2005 database has a trigger on table 1 and table 2 to insert values into a third table. There are around 7,000 inserts/updates on table 1 and 28,000 on table 2 per day. All fields in the tables are text.

We are seeing "excessive" network traffic occuring of approximately 1MB per minute (approx 2GB per 24 hrs). We also see that the Distributor databases are getting very large -- upto around 30GB and growing until they get culled. We have reduced the culling intrval from 72 hrs to 24 hours to reduce the size.

Does anyone have any suggestions as to how this "excessive" network traffic can be minimised and how the distributor database size can be reduced? I suspect the two are related.

Thanks,

Geoff

WA POLICE

Is this merge or transactional replication?

|||

I will guess it's merge replication. Merge replication does transfer a lot of metadata over the wire; you can see what's being replicated by running a Profiler trace.

|||

Sorry, I totally misread your thread; you do have transactional replication. You need to understand the data changes you are making at the publisher and what is getting replicated. Every change made to a table that is enabled for replication gets marked as a change that needs to be replicated. These changes are inserted, one by one, into the distribution database. So if your distribution database is growing to 30 GB, you must either be doing a lot of inserts or updates, or you might be touching BLOB columns as well. You should investigate exactly what you're replicating to understand your scenario.
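One way to check what is actually sitting in the distribution database (a sketch; sp_browsereplcmds can return a lot of rows, so you may want to limit it):

-- Run on the Distributor, in the distribution database
USE distribution;

-- Shows the stored replication commands in a readable form
EXEC sp_browsereplcmds;

-- Rough volume of what is queued/retained for delivery
SELECT COUNT(*) AS command_count     FROM dbo.MSrepl_commands;
SELECT COUNT(*) AS transaction_count FROM dbo.MSrepl_transactions;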

|||

Thanks for the information, Gerg. Yes, we do have transactional replication, and yes, we have determined that the traffic seems to be related to the triggers on the publisher. We are looking at ways to limit these at present without impacting the business functions that use that database.
