Persistent CodeIgniter 4 error 500 and cancelled Ajax calls when testing with Cypress

I have several Cypress 3.8 tests running in parallel, simulating a final user on an exotic application setup:
  • Three CodeIgniter projects
  • One database
  • Everything deployed as folders on one DigitalOcean droplet
  • No front-end framework.
But the tests are flaky, even though the code is not being changed:
  1. Sometimes all tests pass.
  2. Sometimes the same tests fail with server error 500.
  3. Sometimes an Ajax API call gets cancelled.
All tests ALWAYS pass if they are run in sequence. I've used loader.io to run a performance test on the server; we simulated 10,000 clients hitting the API directly and it works fine.

[Image: fxPXh.png]
This occurs in multiple parts of the application, always when running multiple tests in parallel.
What could the possible back-end causes be? Which tools should I use to find the root cause? Or is it really Cypress going too fast?

[Image: G5toR.jpg]

What kind of hardware do you have?
How have you tuned your software: Nginx/Apache, MySQL and PHP?

What errors do you actually get in your server logs? Error 500 is a generic server error: something crashed, likely under the high load.
You are making at least 166 req/sec, and depending on what those 10,000 req/min actually mean, the number of requests could be higher.

Is the second picture a GUI separate from this API that dies? It loads much more than just one request: 486, to be exact.

DigitalOcean droplet - Linux Ubuntu
Standard Shared CPU
1 vCPU
3 GB RAM
60 GB disk
3 TB transfer

Performance graph: https://i.imgur.com/fTEZ5Dp.png

Nginx and MariaDB

Server logs: I need to get those. I'm actually not the back-end guy but the tester.

The picture is from the Cypress runner (an end-to-end GUI test framework). Cypress is not making the API calls directly; it is clicking buttons like a final user would. The other loads are just images.

Hi, how is your memory usage? You have a lot of spare CPU left, so that doesn't look like the issue.
If you don't max out your memory either, you need to tune your application / server.

It depends on how many users you expect to have online. Are those 486 requests on one page load? If so, do you have caching enabled, so that only dynamic data is fetched?

Do images fail to load as well? Then it's an Nginx problem.
Only PHP files? Then it's a PHP and/or MariaDB problem.
Do you need to query the database for everything? If it's somewhat static content, you should cache it, either in memory (e.g. Redis) or the complete page (with Nginx or Varnish).
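For the whole-page route, a minimal Nginx fastcgi_cache sketch could look like this. Paths, the zone name and cache times are illustrative, not from this thread; the cookie check assumes CodeIgniter's default `ci_session` session cookie, so logged-in users bypass the cache:

```nginx
# In the http {} context: where cached responses live and how big the zone is.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=PAGES:10m
                   max_size=200m inactive=10m;

server {
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;

        fastcgi_cache PAGES;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 5m;
        # Never serve cached pages to users with a session cookie.
        fastcgi_cache_bypass $cookie_ci_session;
        fastcgi_no_cache $cookie_ci_session;
        # Handy for debugging: HIT/MISS/BYPASS in the response headers.
        add_header X-Cache $upstream_cache_status;
    }
}
```

With the `X-Cache` header you can verify from the Cypress side whether a page came from the cache or hit PHP.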

OK. Depending on how Cypress does things, it clicks 166 times a second and may or may not produce 486 × 166 = 80,676 requests. That holds if each client only clicks once; if they have set it up so that clients visit your website multiple times, the number will differ. You can count the requests in the Nginx access log to be sure.
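Counting requests per second from the access log only takes a small pipeline over the standard "combined" log format. The three sample lines below are made up for illustration; in practice you would point `awk` at /var/log/nginx/access.log instead:

```shell
# Hypothetical sample of Nginx "combined" access-log lines.
cat > /tmp/sample_access.log <<'EOF'
10.0.0.1 - - [01/Feb/2020:22:00:01 +0000] "GET /api/orders HTTP/1.1" 200 512
10.0.0.1 - - [01/Feb/2020:22:00:01 +0000] "GET /api/orders HTTP/1.1" 500 0
10.0.0.2 - - [01/Feb/2020:22:00:02 +0000] "POST /api/login HTTP/1.1" 200 128
EOF

# Field 4 is "[day/Mon/year:HH:MM:SS"; strip the '[' and tally per second,
# busiest second first.
awk '{ print substr($4, 2) }' /tmp/sample_access.log | sort | uniq -c | sort -rn
```

The busiest seconds come out on top, which makes load spikes from parallel test runs easy to spot.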

Memory usage:
I don't have access to install a memory monitor until next week, but I think this Grafana dashboard could be accurate (exported to PDF): https://docdro.id/GXtEqtr

It depends on how many users you expect to have online:
Could be 200 people.

Are those 486 request on one page load?
No, those requests are spread over 5 pages on average.

If so, do you have cache enabled?
Yes, but I don't know how it was done - I will ask next week once the guy is back.

Do images fail to load as well?
They never fail.

Only PHP files?
PHP + JavaScript.
Note: this particular problem came from the datatables.net library when we use Ajax to query the back end; we are not using server-side rendering. But it occurs in other parts too.

Do you need to query the database for everything?
Not in general, but in this particular case I found the page fetches a lot of data from multiple database tables to render.

Depending on how Cypress does things, it clicks ....
This fails at the beginning of the test. I tested at 10pm when nobody was working on it, and this is a test server (we haven't launched the project yet). Cypress did 82 clicks in 108 seconds. Let's say we run this 20× in parallel (20 tests at the same time): we usually get 5 failures, but they don't always fail at the beginning.

Hi, I don't think you need to install a memory monitor; you can get away with top/htop or what you just gave me. If that document is correct, you are only using 29% of your RAM. That means you can, for example, increase your PHP-FPM workers, if those are what's dying. The only way to know is to check the logs.
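A hypothetical PHP-FPM pool sketch for that kind of headroom might look like this. The values are illustrative starting points, not measurements from this server (they assume roughly 50 MB per worker on a 3 GB droplet), and the slowlog helps identify which requests die under parallel load:

```ini
; Sketch of /etc/php/7.x/fpm/pool.d/www.conf (path is illustrative).
pm = dynamic
pm.max_children = 40
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12

; Log any request that takes longer than 5s, with a stack trace.
slowlog = /var/log/php-fpm/slow.log
request_slowlog_timeout = 5s
```

If requests still fail with these limits raised, the PHP-FPM error log and slowlog will say whether workers are exhausted or a single request is stalling.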

Depending on your database size, you can tune it so it reads everything from memory instead.
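As a hedged starting point for MariaDB (the buffer pool size is a guess for a 3 GB droplet shared with PHP and Nginx, not a tuned value; the slow query log shows what the parallel runs choke on):

```ini
# Sketch of /etc/mysql/mariadb.conf.d/50-server.cnf (path is illustrative).
[mysqld]
# Keep the working set of InnoDB data and indexes in RAM.
innodb_buffer_pool_size = 1G

# Log queries slower than 1 second to see what parallel test runs hit.
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
```

If the whole database fits in the buffer pool, reads stop touching disk entirely after warm-up.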

200 people online (check your Google Analytics stats) don't generate 200 requests per second, unless you make an Ajax request to your server every second. At maximum it's ~25-50 requests/second, and that's a high estimate, plus some bursts for images/CSS/JavaScript from new visitors of course.

Is all of it user-specific content, or is the content static for guests? If it's static, I would cache it with Nginx.

If you run 20 tests, that is 20 × 82 = 1,640 clicks in 108 seconds, roughly 15 clicks per second sustained. That's a high rate, never seen in real life, and you need much beefier hardware than you've got.

We actually had other stuff installed on the same server, so just to double-check I installed it anyway:

Memory is below 35%.

The challenge is:
1. These tests are not run locally; they are loaded into multiple parallel Docker builds created on the fly for each GitLab CI run, so cache could be an issue. Each run is like a completely new visit: the containers are destroyed and rebuilt.

2. This issue happens 90% of the time in this case: we store the user data in the session, then on the same page we might have 3 Ajax calls to multiple third-party APIs, such as this dongle thing + FedEx + Stripe, and we update the page without a refresh. When Cypress goes through different pages without the Ajax, this never happens. The Achilles heel is updating without a page refresh or loading a new page; Cypress goes too fast.

3. We have no unit tests, so we rely on 120 Cypress tests as a baseline for debugging. It takes 2 people 2 days to run all the tests and it's quite exhaustive; if we slow down all the tests it could take too long. We also don't mock anything: everything in Cypress is real data checked against a real data seed, simulating a real user scenario almost 100% (apart from the speed). For each run we wipe the database and re-seed.

By the way, I spoke with the back-end dev and he will try adjusting PHP-FPM. He is also going to add a loading animation, so the test can wait for it to finish before proceeding.
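The "wait for the loading animation" idea can be sketched framework-agnostically: track in-flight Ajax calls with a counter and only proceed when it drops to zero. All names below are hypothetical, and the fake timer-based calls stand in for real requests:

```javascript
// Count of Ajax calls currently in flight.
let pending = 0;

// Wrap a call so the counter goes up when it starts and down when it settles.
function tracked(promiseFactory) {
  pending += 1;
  return promiseFactory().finally(() => { pending -= 1; });
}

// Poll until no request is in flight, or give up after `timeoutMs`.
function waitForIdle(timeoutMs = 5000, intervalMs = 25) {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    const check = () => {
      if (pending === 0) return resolve();
      if (Date.now() > deadline) return reject(new Error('still busy'));
      setTimeout(check, intervalMs);
    };
    check();
  });
}

// Usage sketch: three concurrent "API calls" simulated with timers.
async function demo() {
  const fakeCall = (ms) => () => new Promise((r) => setTimeout(r, ms));
  tracked(fakeCall(50));
  tracked(fakeCall(80));
  tracked(fakeCall(10));
  await waitForIdle();
  return pending; // 0 once all calls have settled
}
```

In a Cypress spec the same effect is usually achieved by intercepting the Ajax routes and waiting on them explicitly, rather than polling a counter.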

Thanks a lot for helping us, by the way; this forum is much better than Stack Overflow for these types of CodeIgniter questions!

1. That's generally what they are supposed to do: act like separate users. But if the content is not unique to each user, you should start caching it, either whole pages (Varnish / Nginx) or object-based caching (Memcached / Redis).

2. They probably have DDoS or rate-limit detection on their own APIs. So you need to start caching their responses, or buy a higher rate limit from them so you can make that many queries.

3. I don't have any experience with Cypress, so I can't be of any assistance there.
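The "cache the third-party responses" suggestion from point 2 can be sketched as a tiny TTL cache in front of the fetcher. All names here are hypothetical, and in production you'd likely put the cache in Redis rather than an in-process Map:

```javascript
// A minimal time-to-live cache: entries expire after `ttlMs` milliseconds.
class TtlCache {
  constructor(ttlMs) { this.ttlMs = ttlMs; this.store = new Map(); }
  get(key) {
    const hit = this.store.get(key);
    if (!hit || Date.now() > hit.expires) return undefined;
    return hit.value;
  }
  set(key, value) {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}

// Wrap an expensive call (e.g. a shipping-rate lookup) so repeated calls
// within the TTL reuse the first result instead of hitting the provider.
function cached(fetcher, ttlMs = 60000) {
  const cache = new TtlCache(ttlMs);
  return async (key) => {
    const hit = cache.get(key);
    if (hit !== undefined) return hit;
    const value = await fetcher(key);
    cache.set(key, value);
    return value;
  };
}
```

With 20 parallel test runs asking for the same rates, only the first call per key reaches the third-party API during the TTL window.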

Based on your CPU and memory load, you can raise your PHP-FPM and Nginx limits without any problem, depending on which dies first.

You're welcome, but these aren't really CodeIgniter problems, as this applies to any application.

