Initial setup question for large table - 11-09-2010, 10:08 AM
I am new to SQL replication and have been assigned to replicate a few
tables to another server. I have configured it so that the distributor
is on the subscriber server, and all seemed well while replicating a
handful of tables.
I have run into a problem now that I have created another publication,
this time for a table with approximately 9 million records in it. It
seems to create the snapshot okay but does not seem to put the data
into the new table.
I am not really sure where to look for possible causes of this. I
tried this yesterday but seemed to be getting timeouts upon doing an
insert into this table and then today I am getting the "The row was
not found..." error.
Has anybody got any advice? Do I need to start from scratch on all
publications? This error is now stopping my distributor from updating
the tables which did work initially.
Re: Initial setup question for large table - 11-09-2010, 11:25 AM
I've had this problem before. While I can't say what causes it (nor
how to make it work like it's supposed to), here's how I've resolved
it in the past:
1) Wait for a command to fail against that article
2) Stop the distribution agent
3) Go to the snapshot directory at the distributor. In there, you'll
find a bunch of files, but they break down into a handful of
types: .pre (T-SQL script to run first), .sch (T-SQL schema
files), .bcp (the data), and .idx (T-SQL indexes)
4) Apply the scripts to the subscriber in the following
order: .pre, .sch, .bcp, and .idx. A note about the .bcp files:
they're in bcp-native format. You can apply them with something like
the following: bcp database.schema.table in file.bcp -n -S server -T
(understand that command before you apply it)
5) Once all of the scripts are applied at the subscriber, start the
distribution agent again.
The distribution agent should get over the rough patch and apply
commands from that point forward.
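To make steps 3-5 concrete, here is a minimal sketch of applying one article's snapshot files by hand. All server, database, table, and file names below are placeholders (the real names come from your snapshot folder), and it assumes Windows authentication (-E for sqlcmd, -T for bcp):

```shell
# Run from the snapshot folder on the distributor.
# SubscriberServer / SubDB / BigTable are placeholder names.

# 1. Pre-script (typically drops the old object if it exists)
sqlcmd -S SubscriberServer -d SubDB -E -i BigTable.pre

# 2. Schema script (recreates the table)
sqlcmd -S SubscriberServer -d SubDB -E -i BigTable.sch

# 3. Data: -n means bcp native format, -T uses Windows authentication
bcp SubDB.dbo.BigTable in BigTable.bcp -n -S SubscriberServer -T

# 4. Index script
sqlcmd -S SubscriberServer -d SubDB -E -i BigTable.idx
```

As the post above says, read each script and understand the bcp command before running anything against the subscriber.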
On Nov 9, 11:08 am, Gary <fail.g... (AT) googlemail (DOT) com> wrote:
Re: Initial setup question for large table - 11-11-2010, 07:46 AM
On Nov 9, 5:25 pm, Ben Thul <thul... (AT) gmail (DOT) com> wrote:
Thanks for the reply. I have only one question: after manually loading
the snapshot via bcp etc., how does the distributor know not to try
again once it is restarted?
Re: Initial setup question for large table - 11-11-2010, 11:06 AM
Good question. Once you see an error in trying to deliver commands to
the newly-added table, you know that the distribution agent has
skipped the command to apply the snapshot. However, if it does apply
the snapshot again, it's no worries: it'll do just what you did
manually and then start distributing commands. So the worst case
scenario is that the snapshot gets applied twice, and that's not such
a bad worst case.
On Nov 11, 7:46 am, Gary <fail.g... (AT) googlemail (DOT) com> wrote:
Re: Initial setup question for large table - 11-12-2010, 04:22 AM
On Nov 11, 5:06 pm, Ben Thul <thul... (AT) gmail (DOT) com> wrote:
What I ended up doing was running it as normal, but somehow it didn't
seem to add any indexes to the table. From SQL Profiler I could see
the bulk inserts and then the individual transactions (what I realised
at this point was that the table would be locked until this was done,
so I had to be patient). So I decided to stop the distributor,
manually add the primary key index to the table, and remove the
drop table / create table commands from the .sch file. When I started
the distributor again it bulk loaded the data, and the queued
transactions were done in a flash. From what I can tell it all seems
to be going okay today.
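For reference, the manual primary-key step described above might look something like the following. The server, database, table, and column names are made up; the .idx script in the snapshot folder shows the exact index definitions the publisher generated, so that is the definition to copy:

```shell
# Placeholder names throughout -- take the real definition from the .idx script.
sqlcmd -S SubscriberServer -d SubDB -E -Q "ALTER TABLE dbo.BigTable ADD CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (Id);"
```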
Thanks again, your help is much appreciated.
Re: Initial setup question for large table - 11-12-2010, 09:54 AM
If you want some peace of mind, you can use sp_article_validation on
the article to ensure that everything's in sync. You'll see the
results in the Replication Monitor. Glad everything worked out for
you.
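As a sketch, the validation call is run at the publisher against the publication database; the publication and article names below are placeholders. @rowcount_only = 1 requests a row-count-only check, which is the cheapest option for a table this size (other values add checksum comparison):

```shell
# PublisherServer / PubDB / MyPublication / BigTable are placeholder names.
sqlcmd -S PublisherServer -d PubDB -E -Q "EXEC sp_article_validation @publication = N'MyPublication', @article = N'BigTable', @rowcount_only = 1;"
```

The outcome of the validation then shows up in the Distribution Agent history, which is what Replication Monitor displays.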
On Nov 12, 4:22 am, Gary <fail.g... (AT) googlemail (DOT) com> wrote: