[eluser]carvingCode[/eluser]
Follow up:
I added a SELECT statement prior to the INSERT to check for the occurrence of duplicate key(s). In my tests with approximately 60 records, 20 of them duplicates, everything worked fine: no duplicates were inserted and no errors occurred.
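For reference, the check-then-insert pattern I'm using looks roughly like this sketch. It uses Python's sqlite3 as an in-memory stand-in for the MySQL table, and the table/column names (`records`, `rec_key`, `payload`) are made up for illustration:

```python
import sqlite3

# In-memory stand-in for the real table; "rec_key" carries the
# UNIQUE constraint that triggers the duplicate-key error.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (rec_key TEXT UNIQUE, payload TEXT)")

rows = [("a", "1"), ("b", "2"), ("a", "3")]  # third row duplicates key "a"

inserted = skipped = 0
for rec_key, payload in rows:
    # SELECT first to check whether the key already exists,
    # then INSERT only if it does not.
    cur = conn.execute("SELECT 1 FROM records WHERE rec_key = ?", (rec_key,))
    if cur.fetchone() is None:
        conn.execute(
            "INSERT INTO records (rec_key, payload) VALUES (?, ?)",
            (rec_key, payload),
        )
        inserted += 1
    else:
        skipped += 1

print(inserted, skipped)  # 2 rows inserted, 1 duplicate skipped
```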
When I bumped the test up to a more real-world case of 10K records (this portion of the app converts a CSV dump of an existing DB), the program errors out with a duplicate-key error at approximately record 8200. There were no duplicates prior to that point, so it failed on the first dupe it found.
Any idea what would cause this? Is MySQL being overrun?
Any way to solve this, aside from pre-cleaning the file to be imported?
TIA