
Monday, March 19, 2012

Exclude column from replication, but leave column on subscriber.

It seems I can't figure this one out. Say I have a table with integer columns
named A, B, and C, plus a rowguid column.
I want that table structure on my subscriber (A, B, C, rowguid) but I only want
to replicate data from columns A and B.
As it stands, replication tries to create it on the subscriber as A, B,
rowguid, leaving the C column off.
Thanks for any help.
Brian K
should have specified, this is merge replication, SQL Server 2000.
Brian
In article <4IJtc.16319051$Id.2708018@news.easynews.com>, Brian Keener wrote:
> It seems I can't figure out how to have two tables, say integer columns named
> A, B, and C, with a rowguid column.
> I want that table structure on my subscriber (A, B, C, rowguid) but I only want
> to replicate data from columns A and B.
> As it stands, replication tries to create it on the subscriber as A, B,
> rowguid, leaving the C column off.
> Thanks for any help.
> Brian K
|||Brian,
is it a requirement that the rowguid changes independently on publisher and
subscriber? If not, and the guid can't change, then why not include it?
Regards,
Paul Ibison
|||In article <OT6YKXNREHA.3300@tk2msftngp13.phx.gbl>, Paul Ibison wrote:
> Brian,
> is it a requirement that the rowguid changes independently on publisher and
> subscriber? If not, and the guid can't change, then why not include it?
> Regards,
> Paul Ibison
>
Hmm, the rowguid is not a column I'm worried about. It can stay the same
across the board. Column C needs to be able to change independently in
the two tables.
Brian K
|||Brian,
you'd have to edit the merge triggers to enable this, by preventing entries
into MSmerge_contents for changes to C, and I wouldn't recommend it. It
would also make validation impossible. You could, however, achieve the same
functionality using views - the view would be a join returning columns A, B,
C, rowguid, while the underlying tables would be
table1: A, B, rowguid and
table2: C, pk (one of A or B)
Then you just replicate table1. If changes can occur to columns A/B and C in
one update, then you'll need to set up INSTEAD OF triggers.
HTH,
Paul Ibison
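Paul's split can be sketched in T-SQL. All names here (table1, table2, vwABC, the trigger name) are placeholders taken from or invented around the thread, and this is a minimal, untested sketch of the idea rather than a production schema:

```sql
-- Replicated part: only A, B and the rowguid live here.
CREATE TABLE table1 (
    A int NOT NULL PRIMARY KEY,
    B int NULL,
    rowguid uniqueidentifier ROWGUIDCOL NOT NULL DEFAULT NEWID()
);
-- Local-only part: C is never published.
CREATE TABLE table2 (
    A int NOT NULL PRIMARY KEY REFERENCES table1(A),
    C int NULL
);
GO
-- Applications see the original shape (A, B, C, rowguid) through a view.
CREATE VIEW vwABC
AS
SELECT t1.A, t1.B, t2.C, t1.rowguid
FROM table1 t1
JOIN table2 t2 ON t2.A = t1.A;
GO
-- If a single UPDATE can touch both A/B and C, an INSTEAD OF trigger
-- on the view routes each column to its underlying table.
CREATE TRIGGER trg_vwABC_upd ON vwABC
INSTEAD OF UPDATE
AS
BEGIN
    UPDATE t1 SET B = i.B
    FROM table1 t1 JOIN inserted i ON i.A = t1.A;

    UPDATE t2 SET C = i.C
    FROM table2 t2 JOIN inserted i ON i.A = t2.A;
END;
```

Only table1 is then added to the publication, so changes to C never generate merge metadata.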
|||It's not the merge triggers you edit for this. It's the stored procedures
that it uses, which look like this:
sp_ins_3477CF08A8A2481D3269F4FE171245B8
Look for a statement that looks like this:
if @retcode <> 0 or @@ERROR <> 0
begin
    set @errcode = 0
    goto Failure
end
insert into [dbo].[authors] ( [au_id], [au_lname], [au_fname], [phone],
    [address], [city], [state], [zip], [contract], [msrepl_tran_version],
    [rowguid] )
values ( @p1, @p2, @p3, @p4, @p5, @p6, @p7, @p8, @p9, @p10, @p11 )
select @rowcount = @@rowcount, @error = @@error
Do this for both the insert and update procedures.
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"Paul Ibison" <Paul.Ibison@Pygmalion.Com> wrote in message
news:%23zbYbaOREHA.3012@tk2msftngp13.phx.gbl...
> Brian,
> you'd have to edit the merge triggers to enable this by preventing entries
> into MSmerge_contents from changes to C, and I wouldn't recommend it. It
> would also make validation impossible. You could however achieve the same
> functionality using views - the view would be a join returning columns A, B,
> C, rowguid while the underlying tables would be
> table1: A,B,rowguid and
> table2: C,pk (one of A or B)
> Then you just replicate table1. If changes can occur to cols A/B and C in
> one update then you'll need to set up instead of triggers.
> HTH,
> Paul Ibison
>
|||Hilary,
I agree that it could be done in the stored procedure, but I think the
trigger modification is simpler.
For an update, what I had in mind is something like:
IF NOT UPDATE(C)
BEGIN
    existing trigger code
END
The insert and delete triggers could be left as they are, as both should
propagate through the normal merge replication.
Cheers,
Paul Ibison
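A hedged sketch of the guard Paul describes, assuming a hypothetical merge update trigger on a table named mytable (the real generated triggers have names like MSmerge_upd_<GUID>, and hand-editing them is unsupported):

```sql
-- Wrap the generated trigger body so that updates touching only
-- column C write no replication metadata. Trigger and table names
-- are illustrative placeholders.
ALTER TRIGGER MSmerge_upd_mytable ON dbo.mytable
FOR UPDATE
AS
IF NOT UPDATE(C)
BEGIN
    -- ...existing generated trigger body goes here, unchanged...
    RETURN
END
-- Caveat: a single UPDATE that touches both B and C would also skip
-- metadata for the change to B - one reason this approach is risky,
-- and why it breaks validation.
```

The insert and delete triggers are left as generated, since those changes should propagate normally.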
|||Paul
After posting my solution I had reservations about it.
The advantage of your solution is that the update trigger will fire but won't
write replication metadata for updates to the columns you wish to exclude.
With my solution, all updates will be propagated to the subscriber, but
will not be applied.
In retrospect, I think your solution is the better one.
"Paul Ibison" <Paul.Ibison@Pygmalion.Com> wrote in message
news:%23c9lNXaREHA.2408@tk2msftngp13.phx.gbl...
> Hilary,
> I agree that it could be done in the stored procedure, but think the trigger
> modification is simpler:
> For an update, what I had in mind is something like
> IF Not UPDATE(C)
> BEGIN
> existing trigger code
> END
> The insert and delete triggers could be left as they are, as both should
> propagate through the normal merge replication.
> Cheers,
> Paul Ibison
>
|||In article <#zbYbaOREHA.3012@tk2msftngp13.phx.gbl>, Paul Ibison wrote:
> Brian,
> you'd have to edit the merge triggers to enable this by preventing entries
> into MSmerge_contents from changes to C, and I wouldn't recommend it. It
> would also make validation impossible. You could however achieve the same
> functionality using views - the view would be a join returning columns A, B,
> C, rowguid while the underlying tables would be
> table1: A,B,rowguid and
> table2: C,pk (one of A or B)
> Then you just replicate table1. If changes can occur to cols A/B and C in
> one update then you'll need to set up instead of triggers.
> HTH,
> Paul Ibison
>
Pretty much what I assumed.
I've already implemented this as a view in our test environment, but
thanks for the information.
Brian K

Monday, March 12, 2012

Excessive network usage with SQL 2000 to SQL 2005 transactional replication

We have a SQL 2000 database (Publisher) replicating inserts and updates across a 10Mb link to a SQL 2005 database (Subscriber). There are two tables on the Publisher we are interested in: one with 50 columns and one with 15. Both tables have six insert/update triggers that fire when a change is made, to update columns in the publisher database.

We have set up a pull transactional subscription at the Subscriber that runs against the Publisher every minute. We have limited the publication to publish 6 columns from table 1 and 4 from table 2; changes to any other columns on the Publisher are of no interest. The SQL 2005 database has a trigger on table 1 and table 2 to insert values into a third table. There are around 7,000 inserts/updates on table 1 and 28,000 on table 2 per day. All fields in the tables are text.

We are seeing "excessive" network traffic of approximately 1MB per minute (approx 2GB per 24 hrs). We also see that the distribution databases are getting very large -- up to around 30GB, and growing until they get culled. We have reduced the culling interval from 72 hours to 24 hours to reduce the size.

Does anyone have any suggestions as to how this "excessive" network traffic can be minimised, and how the distribution database size can be minimised? I suspect the two are related.

Thanks,

Geoff

WA POLICE

Is this merge or tran?

|||

I will guess it's merge replication. Merge replication does transfer a lot of metadata over the wire; you can see what's being replicated by running a Profiler trace.

|||

Sorry, I totally misread your thread - you do have transactional replication. You need to understand the data changes you are making at the publisher and what is getting replicated. Every change made to a table that is enabled for replication is marked as a change that needs to be replicated, and these changes are inserted, one by one, into the distribution database. So if your database is growing to 30GB, you must either be doing a lot of inserts or updates, or you might be touching BLOB columns as well. You should investigate exactly what you're replicating to understand your scenario.
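One way to see what is piling up is to query the distribution database directly. This sketch assumes the default distribution database name and the standard SQL Server 2000/2005 distribution schema (MSrepl_commands, MSarticles); verify the names on your build, and note that on a Distributor serving multiple publishers the join should also match on publisher_database_id:

```sql
USE distribution
GO
-- Rough count of pending replicated commands per article.
SELECT a.article, COUNT(*) AS pending_commands
FROM dbo.MSrepl_commands c
JOIN dbo.MSarticles a ON a.article_id = c.article_id
GROUP BY a.article
ORDER BY pending_commands DESC
GO
-- To see the actual replicated statements, sp_browsereplcmds
-- decodes the stored binary commands into readable T-SQL.
EXEC sp_browsereplcmds
```

If one article dominates the counts, that is where to look for trigger-driven update volume or unintended column touches.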

|||Thanks for the information, Gerg. Yes, we do have transactional replication, and yes, we have determined that the traffic seems to be related to the triggers on the publisher. We are looking at ways to limit these at present without impacting the business functions that use the database.
