Why can't database session driver use a persistent connection?

#1
Hi,
Does anyone know why the database session driver will not work if you are using a persistent database connection?

I've looked in the code and this restriction was put in place back in May 2014 in the initial version of v3.

After a bit of digging I found this issue:
https://github.com/bcit-ci/CodeIgniter/issues/3073
which says "Database storage can't be used with a persistent connection (to avoid deadlock and to allow concurrency)"

Is this still relevant in 2018 with the modern databases we're using?

#2
Even MySQL recommends against using persistent connections, because of the problems they cause.

Here's the low-down from PHP.NET:

Persistent Database Connections
What did you Try? What did you Get? What did you Expect?

Joined the CodeIgniter Community in 2009.          ( Skype: insitfx )

#3
Got any sources for that MySQL recommendation?
The PHP link has comments dating back 15 years - I'm trying to understand whether this is a legacy problem that has since been resolved.
It seems simple enough to mitigate with register_shutdown_function().
On an Azure + SQL Server site I've seen response time drop from 1s to 45ms by using a persistent connection, but CI forces me to use the files driver for sessions.
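To illustrate the mitigation I have in mind: register_shutdown_function() runs at the very end of the request, even after exit() or a fatal error, so a cleanup callback registered there could release any session lock the request still holds. This is a minimal sketch of the idea, not CodeIgniter's actual driver code - the $locksHeld array and the lock name are stand-ins for the real "SELECT RELEASE_LOCK(...)" query against the database handle.

```php
<?php
// Sketch of the register_shutdown_function() mitigation: register a
// cleanup callback that releases whatever session lock the request
// holds, so a shared/persistent connection is never left locked.
// $locksHeld and the lock name are hypothetical stand-ins for the
// database-side lock state.

$locksHeld = ['ci_session_abc123' => true]; // pretend this request holds a lock

$releaseLock = function (string $name) use (&$locksHeld): bool {
    if (!empty($locksHeld[$name])) {
        unset($locksHeld[$name]);   // real code: SELECT RELEASE_LOCK(:name)
        return true;
    }
    return false;                   // no-op if we never held it (idempotent)
};

// Runs at the end of the request, even after exit() or a fatal error.
register_shutdown_function($releaseLock, 'ci_session_abc123');

// The callback is idempotent, so calling it early is also safe:
$releaseLock('ci_session_abc123'); // true: lock released
$releaseLock('ci_session_abc123'); // false: already gone
```

Because the callback is a no-op when the lock is already released, registering it unconditionally costs nothing on the happy path.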

#4
The best place to check would be MySQL's own website.

#5
(01-12-2018, 02:45 AM)rich8374 Wrote: Got any sources for that MySQL recommendation? [...] On an Azure + SqlServer site I've seen response time drop from 1s to 45ms by using a persistent connection but CI forces me to use file for sessions.

It's not a legacy problem, it's a technical limitation.

Locks are meant to prevent concurrent reads and/or writes (depending on the lock type) to the same resource, and the only way to tell concurrent requests apart is by the connection ID each one gets. A persistent connection breaks that: it is shared between processes, so they all present the same connection ID.

But even the reason you want a persistent connection is a trap ... You're seeing that reduced response time only because you're the only user at that moment. Concurrent requests can't share a persistent connection asynchronously, because the connection itself is (ironically) locked to a single process at a time, so each request has to wait its turn to use it - whether there are 5 or 5000, they'll all be processed sequentially. That's a huge bottleneck on your hands, not a performance improvement.
There is some overhead when each request/process opens its own database connection (the reduced response time with a persistent connection comes from skipping authentication and a ton of other initialization on every request), but separate connections let requests work simultaneously, in isolation from each other.
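The connection-ID point can be shown with a small conceptual model - this is not CodeIgniter or MySQL code, just a toy lock table keyed by connection ID, the way server-side locks like MySQL's GET_LOCK() are scoped. Two requests sharing one persistent connection both "acquire" the lock, so nothing actually serialises their session writes.

```php
<?php
// Conceptual model of connection-scoped locking: the lock table maps
// lock name => connection ID of the holder. MySQL's GET_LOCK() behaves
// like this per connection (it is re-entrant for the same connection).

$locks = [];

function acquire(array &$locks, string $name, int $connId): bool
{
    // Held by a *different* connection: a real driver would block here.
    if (isset($locks[$name]) && $locks[$name] !== $connId) {
        return false;
    }
    // Free, or already held by ourselves: succeeds either way.
    $locks[$name] = $connId;
    return true;
}

// Two requests on two separate connections: mutual exclusion works.
var_dump(acquire($locks, 'session_abc', 101)); // bool(true)
var_dump(acquire($locks, 'session_abc', 102)); // bool(false) -- must wait

// Two concurrent requests sharing one persistent connection (ID 7):
// both acquisitions succeed, so the lock protects nothing.
$locks = [];
var_dump(acquire($locks, 'session_abc', 7)); // bool(true)
var_dump(acquire($locks, 'session_abc', 7)); // bool(true) -- lock is useless
</php
```

The second pair of calls is exactly the session-driver failure mode: the server can't tell the two requests apart, so both proceed to write the same session row.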

As with everything else in programming, it's a simple trade-off: you pay a small price upfront for significant gains later.

The difference between 1s and 45ms is unusually large though; it should certainly be reducible by other means. The first thing I would try is putting the database server's IP address (instead of its hostname) in the configuration - that skips a DNS lookup that might be slow.
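In CodeIgniter 3 terms, that change lives in application/config/database.php - a sketch, with a made-up IP address standing in for your real database server:

```php
<?php
// application/config/database.php (fragment)
// Using the server's IP address instead of a hostname skips the DNS
// lookup on every new connection. 10.0.0.5 is a hypothetical address.
$db['default']['hostname'] = '10.0.0.5';   // was e.g. 'mydb.database.windows.net'
$db['default']['username'] = 'app_user';
$db['default']['password'] = 'secret';
$db['default']['database'] = 'app';
$db['default']['dbdriver'] = 'sqlsrv';
$db['default']['pconnect'] = FALSE;        // per this thread: required for DB sessions
```

Only do this if the server's address is stable, of course - a hardcoded IP trades the DNS lookup for a config change whenever the server moves.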

