What could CI handle?

#1
[eluser]bapobap[/eluser]
I know this is very hard to estimate but if anyone could take an educated guess, I'd appreciate it.

Assuming I have excellent coding standards (equiv to CI), make use of the built-in CI caching and use PHP, Apache and MySQL on this dedicated box:

Quad Core Dual Processor Xeon 5320 1.86 GHz (Clovertown)
CentOS 5
2048MB FB-DIMM 667 ECC Ram (Possibly up to 16GB)
250GB SATA II 16MB Cache 3.0 Gb/sec

How many page transactions could CI handle per second? By a page transaction I mean around 10 MySQL queries and rendering a page from, say, 4 views in total.

And how many basic transactions could it handle, say, reading an XML file and importing that into the DB.
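For concreteness, a "basic transaction" of that kind might look like the minimal sketch below. This is purely illustrative (Python with an in-memory SQLite database standing in for the real PHP/MySQL stack, and the feed structure is an assumption, not a real format):

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical feed shape, assumed for illustration only.
xml_data = "<items><item><id>1</id><name>foo</name></item></items>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# Parse the XML and bulk-insert one row per <item> element.
root = ET.fromstring(xml_data)
rows = [(int(i.findtext("id")), i.findtext("name")) for i in root.iter("item")]
conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT name FROM items WHERE id = 1").fetchone()[0])
```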

Again if you can take an educated guess I'd love to know.

Thanks!

#2
[eluser]wiredesignz[/eluser]
You never said which operating system :P

#3
[eluser]bapobap[/eluser]
CentOS 5

#4
[eluser]tonanbarbarian[/eluser]
These sorts of estimates are not easy because there are so many factors.

Things like the setup of Apache and MySQL, as well as certain PHP options, can all determine how well a site performs.

Given that CI is the lightest PHP framework I have seen or worked with, I would suggest this box will be able to handle quite a load.

If we assume each request uses 4MB of memory (and I have yet to build a CI app that averaged more than 2.5MB), let's double that value to account for the memory used not just by PHP but also by MySQL and Apache to handle the request.

This would mean that, theoretically, you have enough memory for roughly 256 simultaneous requests.
Whether Apache and MySQL can be tuned to handle this is another story.
And don't forget disk. Any time you do multiple reads or writes on the disk it will slow things down. Fortunately Linux employs a caching mechanism that keeps commonly used files in memory, but that reduces the amount of memory available to the webserver side of things.
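The memory arithmetic above can be sketched as follows (the per-request figure comes from the post; this is a back-of-envelope estimate, not a benchmark):

```python
# Capacity estimate using the numbers from the post:
# ~4 MB per PHP request, doubled to cover MySQL/Apache overhead.
total_ram_mb = 2048
per_request_mb = 4 * 2  # doubled to account for MySQL and Apache

max_concurrent = total_ram_mb // per_request_mb
print(max_concurrent)  # 256 simultaneous requests before memory runs out
```

In practice the OS, filesystem cache, and MySQL buffers will claim a slice of that RAM first, so the real ceiling is lower.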

I would suggest that what you have there is a good-to-great spec for a webserver. It is similar to what many hosting companies would provide if you asked for a dedicated server, and probably similar to what a lot of them use for general hosting as well.

If you find that it is no longer performing as well as you would like, look at the following options:
- Spread data across multiple RAID drives to reduce seek times, i.e. Linux core on one drive, website files on a 2nd drive, database files on a 3rd drive, with maybe log files on a 4th drive (or turn off logging altogether)
- Move the database onto a separate box so that the webserver can do just the one thing

#5
[eluser]bapobap[/eluser]
Thanks for your input.

A company I used to work for has asked me about building a group messaging system that has to handle messages from a variety of sources. The problem is the potential volume: they estimate 250,000 requests a day. Some of it is just API calls, some will be SMS/IM messages to people, some will be page views, some email, some RSS, and eventually an automated call system.
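Spread evenly, 250,000 requests a day is a fairly modest rate; the sketch below converts it, with an assumed 10x peak multiplier since real traffic is never uniform:

```python
# Convert daily request volume to requests per second.
# The 10x peak factor is an assumption, not a measured value.
requests_per_day = 250_000
seconds_per_day = 86_400

avg_rps = requests_per_day / seconds_per_day
peak_rps = avg_rps * 10  # assumed burst multiplier

print(round(avg_rps, 1), round(peak_rps, 1))  # 2.9 28.9
```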

I also thought about using AWS services to handle all of it, but then I'd have to write the code to manage that.

Seems I've bitten off more than I can chew :-)

#6
[eluser]Sarfaraz Momin[/eluser]
Well, we have a much lower-end box with

Dual Xeon
CentOS 4.5
2048MB DIMM
120GB IDE Drive

It easily handles around 150,000 uniques per day on CI. I have around 6 sites running on this server, with 60K uniques on one of them. They all work smoothly.

Good Day !!!

#7
[eluser]Derek Allard[/eluser]
great posts tonanbarbarian and Sarfaraz. Very informative. Thanks for sharing.

#8
[eluser]bapobap[/eluser]
Yup, thanks guys. It's always best to hear from people who actually know or can have a go at guessing rather than me asking my local "expert" who seems to constantly rub his hands together with a dollar sign in his eye.

