Multi user application
#1

[eluser]arthurhanlon[/eluser]
Hi there,

I stumbled across Codeigniter a while back and am absolutely loving it so far. I am in the middle of re-writing one of our smaller applications simply because I cannot follow the code written by the previous developer and have decided that Codeigniter is the best choice.

It's an application that will be used by multiple users at the same time, though, and I'm unsure how it will fare once it's live, as many of my applications to date have been used only by myself.

I was wondering if any of you could lend some advice with regards to developing a multi-user app with Codeigniter? How are you guys doing it, and what things should I be aware of?

Thanks in advance guys,
Arthur
#2

[eluser]Tripic[/eluser]
As far as how it will impact the server, it shouldn't really be noticeable. The first thing you should do is figure out exactly what you need in the application, then break it down by what you want to do first. I'm working on one right now and I went with a user auth system to start, so that I can build the site's security in from the start rather than adding it later. I know this isn't much help, but without any idea of what you're working on, there's really not much I can say.
#3

[eluser]arthurhanlon[/eluser]
Hey, thanks for the reply.

I'm really more concerned about multiple users accessing and changing the same set of data at the same time, how I can stop conflicts, and what the best practice is for building an app that allows this type of access.

Arthur
#4

[eluser]Tripic[/eluser]
OK, so multiple users will be able to access and change the same data. I can't remember how I did it before, but there's a way you can lock the data while someone is editing it and unlock it when they're finished.
#5

[eluser]jedd[/eluser]
[url="/user_guide/database/transactions.html"]Transactions[/url] are your friend.
#6

[eluser]pbreit[/eluser]
Maybe you could give us more information about how users will be utilizing the site and acting on the data. Are people collaborating on large data elements? I could see that being an issue. But if people are just editing random bits of data, it's unlikely to be a problem.
#7

[eluser]WanWizard[/eluser]
Transactions are not really a solution for concurrency.

For ExiteCMS, we've developed a standard solution that is being built into our database update method.

For a table that needs concurrency protection, we add a column named 'last_updated', which contains the last update timestamp. When an update query is run by the update method, it adds 'WHERE last_updated = <last_update_timestamp>' to the query (the timestamp is retrieved with the other fields in the select to display the form).

If the result is that no records were updated, the record must have been updated by someone else (timestamp in the record doesn't match the one we retrieved earlier), and the code needs to take appropriate action.
If the record was updated, the last_updated field in the table was still the same as the one we retrieved, so it was unaltered in the time you were busy with the form.
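Stripped of the surrounding framework code, the idea is no more than this (a simplified sketch, not the actual ExiteCMS update method; table and field names are invented):

Code:
// $form_timestamp is the last_updated value that was selected
// together with the other fields when the form was displayed.
$data['name']         = $this->input->post('name');
$data['last_updated'] = microtime(TRUE);   // new timestamp for this update

$this->db->where('id', $id);
$this->db->where('last_updated', $form_timestamp);
$this->db->update('customers', $data);

if ($this->db->affected_rows() === 0)
{
    // The record changed while the form was open: reload it and warn
    // the user instead of silently overwriting the other update.
}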
#8

[eluser]jedd[/eluser]
[quote author="WanWizard" date="1279288248"]Transactions are not really a solution for concurrency.
[/quote]

I'm not sure how you do concurrency without transactions - either using the RDBMS's built-in transaction facilities, or engineering your own pseudo transaction handling mechanism. It seems that you opted for the latter - but I'm genuinely confused how this:

Quote:For a table that needs concurrency protection, we add a column named 'last_updated', which contains the last update timestamp. When an update query is run by the update method, it adds 'WHERE last_updated = <last_update_timestamp>' to the query (the timestamp is retrieved with the other fields in the select to display the form).

If the result is that no records were updated, the record must have been updated by someone else (timestamp in the record doesn't match the one we retrieved earlier), and the code needs to take appropriate action.

... is substantively different from using the DB (or even CI's front-end to the DB's) transaction feature.

In cases where you're updating and inserting across several tables, I think your approach would be very expensive - have you done any testing at this scale, or is this much more of a one-customer-per-table approach to the 'database update method'?

You say that each table that needs concurrency protection gets this extra column - isn't this a touch expensive in terms of size? TIMESTAMPs are only four bytes, I know, but any extraneous columns worry me. Is TIMESTAMP even granular enough for this activity - don't you need something sub-second?

Actually, the more I think about it, it seems the really expensive bit would be trying to coordinate several tables being updated by a single transaction (in the loose sense of the word) and then having code wrapped around all those updates to check each and every table for any updates that happened at the same time.

I don't use CI's AR, as a rule, but the trans_start() and trans_complete() wrappers appear to be a very simple way of achieving exactly what you've just re-engineered - especially as in either case, as you allude to, you still have to write the code that 'takes appropriate action' if a contention is detected. I think retrying $x number of times to re-commit after a failed trans_complete(), and once retries == $x handling it further up the stack, is almost identical to the approach you describe you're using in ExiteCMS.
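In code, the retry idea I have in mind is roughly this (a sketch only, nothing I've benchmarked):

Code:
$retries     = 0;
$max_retries = 3;

do
{
    $this->db->trans_start();
    $this->db->where('id', $id);
    $this->db->update('orders', $data);
    $this->db->trans_complete();

    if ($this->db->trans_status() !== FALSE)
    {
        break;  // committed successfully
    }

    $retries++;
    usleep(100000);  // brief pause (100 ms) before trying again
}
while ($retries < $max_retries);

if ($retries === $max_retries)
{
    // Still failing - hand the problem further up the stack.
}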

If it isn't, I'd love to learn more about how you're doing it -- this really isn't an area I've done much practical programming in.
#9

[eluser]WanWizard[/eluser]
I didn't say that. I said transactions don't solve the issue.

If you use (in a MySQL context) MyISAM tables, you don't need and can't use transactions: first because they're not supported, but also because the MySQL engine works sequentially, so you cannot have simultaneous updates on a table. If you use InnoDB, you can.

Our solution works table-based, so it doesn't matter how many tables are involved. Transactions might come in handy, as they allow you to roll back all updates within a transaction at once, and might be required to secure consistency. But that's different from concurrency.

As for expensive: disk space is cheap, so I don't worry at all about the few extra bytes, even for large databases. Tests on one of our larger sites have shown that this solution in combination with MyISAM tables was a lot faster than using transactions and InnoDB. Of course, YMMV; it might depend on your update-to-read ratios. We don't use MySQL's "ON UPDATE CURRENT_TIMESTAMP" feature, as that is not portable; we use a microtime() based function to determine the timestamp.

Retrying a failed transaction is pointless, so I'm not sure what you're trying to achieve there. There aren't that many reasons why a transaction that fails the first time would succeed the second time.

But you're missing the point. A transaction ensures that I can do a set of modifications without being interrupted by someone else, so as to keep the data consistent.

The issue here is this: I retrieve a record and populate a form. Then I go for a cup of coffee. In the meantime, you do the same, update the data, and save the form. Now I come back to my desk, and I don't know that the form in front of me contains stale data. I post the form to save the data, which succeeds because the records aren't locked, and I overwrite your updates. Transactions are not going to solve this, as they are only valid within a single process. HTTP (and therefore PHP) is stateless: when your script ends, all locks are released. You cannot carry a transaction lock over to the next page request (nor would you want to, as the chance of a deadlock is very real).

The 'appropriate action' has to be manual, because it depends on the situation. In case of a user posting a form, you could return to the form with a message 'sorry, someone else has beaten you to it', and populate the form with the new data from the updated record. In case of a workflow process (an action on a record triggers a workflow, but the record is modified before the workflow process starts) you can't do that, you have to abandon the workflow, generate error messages, etc.
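For the form case, the 'appropriate action' is usually no more than something like this (a rough sketch; the model methods are placeholders, not ExiteCMS code):

Code:
// update() only writes when the timestamp still matches, and
// returns FALSE otherwise (hypothetical model method).
if ( ! $this->customer_model->update($id, $data, $form_timestamp))
{
    $this->session->set_flashdata('error',
        'Sorry, someone else has beaten you to it.');

    // Repopulate the form with the record as it is now in the database.
    $fresh = $this->customer_model->get($id);
    $this->load->view('customer_form', array('customer' => $fresh));
    return;
}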
#10

[eluser]jedd[/eluser]
Ah, okay then. I think we're looking at slightly different aspects of the same problem, from slightly different angles.

I do understand that if you use a non-transactional database engine, you don't get transactions. Honestly I do. :)

[quote author="WanWizard" date="1279302054"]
Our solution works table-based, so it doesn't matter how many tables are involved.
[/quote]

I think my concern here is that if you are making changes to multiple tables, the level of complexity in tracking these rises linearly. On a complex schema this may become untenable.

Quote:Transactions might come in handy, as they allow you to roll back all updates within a transaction at once, and might be required to secure consistency. But that's different from concurrency.

I think the attempt to sacrifice consistency to provide a crude form of concurrency might bite you later. What I mean is that you're only guaranteeing A_ID (ACID minus the C) here with this approach, and I think that's dangerous.

Unless you're locking tables (all the tables that your pseudo-transaction will operate on), starting prior to your re-reading of the timestamps on relevant rows/tables, and unlocking them once you've done the updates to all the tables -- then you have no way of guaranteeing consistency or of preventing a loss of data.

This is because there's no atomicity for the sequence of actions:
o read the time stamp(s)
o write the updates to the table(s)

Sure, this may be a very small window of opportunity .. but it's still a risk. It explains why you see favourable performance over transactions under InnoDB - it's because you're not getting the same functionality.

Retrying a failed transaction is pointful iff you think you've just failed to commit because of a lock - it's a timeout question, I know, and can be handled in a number of places. But as the [url="http://dev.mysql.com/doc/refman/5.0/en/innodb-deadlocks.html"]MySQL documentation advises[/url]:

Quote:"Normally, you must write your applications so that they are always prepared
to re-issue a transaction if it gets rolled back because of a deadlock."

If you're doing table locks on MyISAM (from memory you can't do row-level locking on MyISAM, and in any case row-level locking might not be satisfactory depending on the nature of the data you're trying to insert/update) then I can see some benefit to doing a brief pause & a retry there, too.

And, of course, if you're doing table-level locks across all tables that you're about to make changes to, then I ack that this does provide you the atomicity that I said was lacking above.
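Something along these lines is what I mean (sketch only, table names invented):

Code:
// Lock every table the change touches, make the changes, then release.
// On MyISAM this is the only way to make the read-timestamps-then-write
// sequence behave atomically.
$this->db->query('LOCK TABLES orders WRITE, order_lines WRITE');

$this->db->where('id', $order_id);
$this->db->update('orders', $order_data);

$this->db->insert('order_lines', $line_data);

$this->db->query('UNLOCK TABLES');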



