[eluser]jedd[/eluser]
Ah, okay then. I think we're looking at slightly different aspects of the same problem, from slightly different angles.
I do understand that if you use a non-transactional database engine, you don't get transactions. Honestly, I do.
[quote author="WanWizard" date="1279302054"]
Our solution works table-based, so it doesn't matter how many tables are involved.
[/quote]
I think my concern here is that if you are making changes to multiple tables, the level of complexity in tracking these rises linearly. On a complex schema this may become untenable.
Quote:Transactions might come in handy, as they allow you to roll back all updates within a transaction at once, and might be required to secure consistency. But that's different from concurrency.
I think the attempt to sacrifice consistency to provide a crude form of concurrency might bite you later. What I mean is that you're only guaranteeing A_ID here with this approach (ACID minus the C for consistency), and I think that's dangerous.
Unless you're locking all the tables that your pseudo-transaction will operate on, starting before you re-read the timestamps on the relevant rows/tables and releasing the locks only once you've written the updates to every table, you have no way of guaranteeing consistency or preventing a loss of data.
This is because there's no atomicity for the sequence of actions:
o read the time stamp(s)
o write the updates to the table(s)
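One common way to close that read-then-write window without table locks is to make the write conditional on the timestamp (or a version counter) still matching what was read. Here's a minimal sketch using SQLite as a stand-in for MySQL; the table, column names, and `optimistic_update` helper are all hypothetical, purely for illustration:

```python
import sqlite3

# Hypothetical schema: a version counter plays the role of the timestamp.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO accounts VALUES (1, 100, 0)")
conn.commit()

def optimistic_update(conn, row_id, delta):
    # Step 1: read the row and remember its version (the "timestamp").
    balance, version = conn.execute(
        "SELECT balance, version FROM accounts WHERE id = ?", (row_id,)
    ).fetchone()
    # Step 2: write back, but only if nobody changed the row in between.
    # The single conditional UPDATE is atomic, so the check-and-write
    # can't interleave with a concurrent writer.
    cur = conn.execute(
        "UPDATE accounts SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (balance + delta, row_id, version),
    )
    conn.commit()
    return cur.rowcount == 1  # False means a concurrent writer won the race

print(optimistic_update(conn, 1, -30))  # → True on an uncontended attempt
```

The point is that the check has to happen *inside* the write statement; doing the timestamp comparison in application code and then issuing an unconditional UPDATE reopens the race described above.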
Sure, this may be a very small window of opportunity, but it's still a risk. It also explains why you see favourable performance compared to transactions under InnoDB: you're not getting the same functionality.
Retrying a failed transaction only makes sense if you think the commit failed because of a lock - it's a timeout question, I know, and can be handled in a number of places. But as the [url="http://dev.mysql.com/doc/refman/5.0/en/innodb-deadlocks.html"]MySQL documentation advises[/url]:
Quote:"Normally, you must write your applications so that they are always prepared
to re-issue a transaction if it gets rolled back because of a deadlock."
If you're doing table locks on MyISAM (from memory you can't do row-level locking on MyISAM, and in any case row-level locking might not be satisfactory depending on the nature of the data you're trying to insert or update), then I can see some benefit to a brief pause and retry there, too.
And, of course, if you're taking table-level locks across all the tables you're about to change, then I ack that this does provide you the atomicity that I said was lacking above.
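To make that concrete, here's a toy model of the "lock every table you'll touch, then read and write" discipline. A single `threading.Lock` stands in for MySQL's `LOCK TABLES` on the full table set; the table names and structure are made up for illustration. Holding the lock across both the timestamp re-read and the multi-table write is exactly what restores atomicity:

```python
import threading

# One lock modelling "LOCK TABLES orders WRITE, audit WRITE".
tables_lock = threading.Lock()
store = {"orders": {"ts": 0, "total": 0}, "audit": {"rows": 0}}

def locked_update(delta):
    with tables_lock:                   # acquire = LOCK TABLES ... WRITE
        ts = store["orders"]["ts"]      # re-read the timestamp under the lock
        store["orders"]["total"] += delta
        store["orders"]["ts"] = ts + 1
        store["audit"]["rows"] += 1     # multi-table change, still atomic
                                        # releasing the lock = UNLOCK TABLES

threads = [threading.Thread(target=locked_update, args=(1,)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store["orders"]["total"])  # → 50; no lost updates
```

Drop the `with tables_lock:` line and some of the 50 increments can be lost to interleaving, which is the small-window race from earlier in the thread.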