[ngx-php] Remove guard function#559

Open
joanhey wants to merge 2 commits into MDA2AV:main from joanhey:nginx-without-guard

Conversation

@joanhey
Contributor

@joanhey joanhey commented Apr 19, 2026

Description

  • The guard function in the nginx entry point is removed, as the new CRUD endpoint needs all the HTTP methods.
    Nginx already returns a 400 Bad method for any incorrect method, before the request reaches the locations.

  • Add a backlog to the nginx listen directive, as the server has somaxconn set to 65535.

  • Fix the POST problem in the baseline, thanks to checking the Docker logs available in this repo.

  • Still trying to fix the upload problem, but locally we have no logs to check the errors.
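
For context, the removed guard and the added backlog look roughly like this. This is a hypothetical nginx.conf sketch: the port, location name, and method list are illustrative, not copied from this PR.

```nginx
server {
    # backlog added to listen, matching the host's somaxconn setting
    listen 8080 reuseport backlog=65535;

    location /crud {
        # Removed guard (sketch): an explicit method whitelist is no longer
        # needed, since the CRUD endpoint now accepts every method and nginx
        # itself rejects malformed request lines before a location matches.
        # if ($request_method !~ ^(GET|POST|PUT|DELETE)$) { return 405; }

        # ... ngx-php handler directives unchanged ...
    }
}
```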

Local platform problems

It's very difficult to test and optimize any framework locally.

  • validate.sh doesn't use the Docker build cache, so every run has to download and rebuild the image. That makes each validation very slow, and we can also hit the rate limits of Docker Hub and GitHub.
    With run.sh the build is very fast, almost instant.
  • benchmark-lite.sh still doesn't work; it fails to build the load generator images.
  • We have no logs to check any error. Having validate.sh show the logs when it finishes would be enough.
  • When benchmark-lite.sh works, how can we test the json-comp score formula?
    It's a very good reference for any framework to balance size and performance.

PR Commands — comment on this PR to trigger (requires collaborator approval):

Command                                    Description
/benchmark -f <framework>                  Run all benchmark tests
/benchmark -f <framework> -t <test>        Run a specific test
/benchmark -f <framework> --save           Run and save results (updates leaderboard on merge)

Always specify -f <framework>. Results are automatically compared against the current leaderboard.


Run benchmarks locally

You can validate and benchmark your framework locally with the lite script — no CPU pinning, fixed connection counts, all load generators run in Docker.

./scripts/validate.sh <framework>
./scripts/benchmark-lite.sh <framework> baseline
./scripts/benchmark-lite.sh --load-threads 4 <framework>

Requirements: Docker Engine on Linux. Load generators (gcannon, h2load, h2load-h3, wrk, ghz) are built as self-contained Docker images on first run.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

/benchmark -f ngx-php

@github-actions
Contributor

👋 /benchmark request received. A collaborator will review and approve the run.

@github-actions
Contributor

🚀 Benchmark run triggered for ngx-php (all tests). Results will be posted here when done.

@github-actions
Contributor

Benchmark Results

Framework: ngx-php | Test: all tests

Test Conn RPS CPU Mem Δ RPS Δ Mem
baseline 512 2,551,065 6619.6% 4.4GiB +7.4% ~0%
baseline 4096 2,679,690 6489.9% 4.4GiB +4.9% ~0%
pipelined 512 3,346,676 6561.0% 4.4GiB +8.1% ~0%
pipelined 4096 3,271,237 6414.9% 4.4GiB +4.0% ~0%
limited-conn 512 1,804,002 5915.9% 4.4GiB +149.1% ~0%
limited-conn 4096 2,143,816 6310.5% 4.4GiB +202.0% ~0%
json 4096 994,486 6576.9% 4.4GiB ~0% ~0%
json-comp 512 684,277 6150.6% 4.4GiB +16.4% ~0%
json-comp 4096 727,787 6367.2% 4.4GiB +19.8% ~0%
json-comp 16384 712,729 6581.1% 4.5GiB +20.6% ~0%
json-tls 4096 818,469 6444.2% 4.5GiB +1.7% ~0%
api-4 256 51,328 405.3% 4.4GiB +1.8% ~0%
api-16 1024 141,246 1688.7% 4.4GiB -0.2% ~0%
static 1024 1,070,828 6548.8% 4.4GiB +3.7% ~0%
static 4096 1,072,224 6525.4% 4.4GiB +5.1% ~0%
static 6800 1,060,827 6529.4% 4.4GiB +4.0% ~0%
async-db 1024 232,486 3689.4% 4.4GiB -0.4% ~0%
baseline-h2 256 2,170,285 6583.0% 4.4GiB +2.9% ~0%
baseline-h2 1024 2,182,509 6532.5% 4.5GiB +3.8% ~0%
static-h2 256 754,040 6557.4% 4.5GiB +7.9% -6.2%
static-h2 1024 753,224 6567.0% 4.8GiB +4.2% -9.4%
baseline-h3 64 3,980,752 4631.0% 4.5GiB +1.6% ~0%
static-h3 64 295,982 5454.7% 4.6GiB +10.3% -2.1%
Full log
status codes: 19838930 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.48GB (1586838560) total, 1.29GB (1388727620) headers (space savings 40.68%), 18.92MB (19838966) data
UDP datagram: 621246 sent, 1906238 received
                 min         max         median     p95        p99        mean         sd        +/- sd
request     :      288us      5.58ms       971us     1.73ms     1.87ms     1.12ms       365us    63.94%
connect     :     1.68ms      7.26ms      3.31ms     5.30ms     7.26ms     3.46ms      1.05ms    71.88%
TTFB        :     2.91ms      9.00ms      4.53ms     7.31ms     9.00ms     4.87ms      1.31ms    75.00%
req/s       :   36855.50    93088.70    71766.33   79206.28   93088.70   61991.38    16044.83    60.94%
min RTT     :       24us       747us       213us      640us      747us      289us       195us    68.75%
smoothed RTT:      405us      1.49ms       639us     1.34ms     1.49ms      744us       302us    68.75%
packets sent:       5772       14559       11228      12404      14559    9708.97     2507.07    62.50%
packets recv:      17667       44687       34477      37997      44687   29785.97     7714.41    60.94%
packets lost:          0           0           0          0          0       0.00        0.00   100.00%
GRO packets :          1           1           1          1          1       1.00        0.00   100.00%
[info] CPU 4541.8% | Mem 4.5GiB

[run 3/3]
starting benchmark...
60.
11.
TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 Warm-up started for thread #13.
253 bits
Certificate: RSA 2048 bits
Negotiated Group: x25519
Resumption: no
Application protocol: h3

finished in 5.01s, 3988714.00 req/s, 304.26MB/s
requests: 19943570 total, 19947666 started, 19943570 done, 19943570 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 19943570 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.49GB (1595209760) total, 1.30GB (1396056900) headers (space savings 40.68%), 19.02MB (19943670) data
UDP datagram: 625047 sent, 1905833 received
                 min         max         median     p95        p99        mean         sd        +/- sd
request     :      282us      6.54ms       939us     2.24ms     2.60ms     1.13ms       465us    81.59%
connect     :     2.02ms      4.93ms      3.10ms     4.29ms     4.93ms     3.19ms       575us    73.44%
TTFB        :     3.27ms      7.63ms      4.54ms     5.67ms     7.63ms     4.58ms       770us    64.06%
req/s       :   28215.51    86768.49    70180.63   76306.66   86768.49   62319.06    15913.69    70.31%
min RTT     :       42us       715us       184us      573us      715us      241us       168us    71.88%
smoothed RTT:      388us      1.86ms       622us     1.38ms     1.86ms      710us       318us    82.81%
packets sent:       4434       13573       11011      11937      13573    9768.36     2486.92    70.31%
packets recv:      13475       41414       33498      36501      41414   29779.64     7608.16    70.31%
packets lost:          0           0           0          0          0       0.00        0.00   100.00%
GRO packets :          1           1           1          1          1       1.00        0.00   100.00%
[info] CPU 4631.0% | Mem 4.5GiB

=== Best: 3980752 req/s (CPU: 4631.0%, Mem: 4.5GiB) ===
[info] saved results/baseline-h3/64/ngx-php.json
httparena-bench-ngx-php
httparena-bench-ngx-php

==============================================
=== ngx-php / static-h3 / 64c (tool=h2load-h3) ===
==============================================
[info] waiting for server...
[info] server ready

[run 1/3]
starting benchmark...

.

.
TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 253 bits
Certificate: RSA 2048 bits
Negotiated Group: x25519
Resumption: no
Application protocol: h3
14. Stopping all clients.
27Stopped all clients for thread #28
. Stopping all clients.

finished in 5.01s, 290137.00 req/s, 4.42GB/s
requests: 1450685 total, 1454781 started, 1450685 done, 1450685 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1450924 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 22.09GB (23721902611) total, 95.75MB (100404496) headers (space savings 40.01%), 21.99GB (23606487082) data
UDP datagram: 4026922 sent, 17114535 received
                 min         max         median     p95        p99        mean         sd        +/- sd
request     :      995us       1.06s     12.15ms    31.40ms    51.39ms    15.50ms     14.92ms    94.49%
connect     :     2.35ms      9.82ms      3.78ms     6.96ms     9.82ms     4.19ms      1.35ms    78.13%
TTFB        :     3.34ms     18.75ms      5.84ms    10.97ms    18.75ms     6.54ms      2.72ms    92.19%
req/s       :     911.19     7516.68     5024.77    7375.40    7516.68    4533.22     1594.29    67.19%
min RTT     :       17us      4.36ms       292us     1.15ms     4.36ms      508us       696us    95.31%
smoothed RTT:     1.13ms     12.97ms      7.74ms    11.94ms    12.97ms     7.84ms      2.79ms    60.94%
packets sent:      15681       94068       72646      87777      94068   62922.66    24636.11    64.06%
packets recv:      52705      431836      295083     427113     431836  267415.61    92859.70    67.19%
packets lost:         39         240         136        213        240     135.03       51.30    62.50%
GRO packets :          1           1           1          1          1       1.00        0.00   100.00%
[info] CPU 5036.1% | Mem 4.6GiB

[run 2/3]
starting benchmark...
34.
17.
36.
TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 253 bits
Certificate: RSA 2048 bits
Negotiated Group: x25519
Resumption: no
Application protocol: h3
34. Stopping all clients.

finished in 5.01s, 296530.00 req/s, 4.52GB/s
requests: 1482650 total, 1486746 started, 1482650 done, 1482650 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1482874 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 22.58GB (24244646425) total, 97.86MB (102615546) headers (space savings 40.01%), 22.47GB (24126675346) data
UDP datagram: 4074945 sent, 17907042 received
                 min         max         median     p95        p99        mean         sd        +/- sd
request     :      959us       1.03s     11.97ms    30.61ms    54.14ms    15.14ms     16.31ms    95.32%
connect     :     2.12ms      7.10ms      3.26ms     5.32ms     7.10ms     3.50ms       994us    76.56%
TTFB        :     3.25ms     21.49ms      5.06ms    10.03ms    21.49ms     5.82ms      2.85ms    89.06%
req/s       :     792.12     7501.27     5066.65    7262.40    7501.27    4633.09     1529.19    65.63%
min RTT     :       21us      5.03ms       240us     2.82ms     5.03ms      539us       948us    92.19%
smoothed RTT:     1.75ms     14.25ms      8.80ms    13.79ms    14.25ms     9.12ms      2.65ms    67.19%
packets sent:      11605       94683       74207      82869      94683   63673.02    20672.22    71.88%
packets recv:      45803      447691      302921     438853     447691  279798.53    91833.91    65.63%
packets lost:          0         272         140        226        272     143.05       55.14    71.88%
GRO packets :          1           1           1          1          1       1.00        0.00   100.00%
[info] CPU 5454.7% | Mem 4.6GiB

[run 3/3]
starting benchmark...
28.
.Main benchmark duration is started for thread #38.

TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 253 bits
Certificate: RSA 2048 bits
Negotiated Group: x25519
Resumption: no
Application protocol: h3
37. Stopping all clients.Stopped all clients for thread #50


finished in 5.01s, 287800.40 req/s, 4.38GB/s
requests: 1439002 total, 1443098 started, 1439002 done, 1439002 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1439240 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 21.91GB (23529888149) total, 94.98MB (99595335) headers (space savings 40.01%), 21.81GB (23415361646) data
UDP datagram: 4104534 sent, 17100419 received
                 min         max         median     p95        p99        mean         sd        +/- sd
request     :      823us    986.72ms     12.04ms    36.71ms    64.54ms    16.05ms     16.78ms    93.32%
connect     :     1.99ms      6.14ms      3.20ms     5.46ms     6.14ms     3.38ms       942us    70.31%
TTFB        :     3.22ms      9.36ms      5.41ms     8.47ms     9.36ms     5.58ms      1.53ms    67.19%
req/s       :     992.22     7639.42     5016.77    7301.26    7639.42    4496.68     1820.74    62.50%
min RTT     :       18us      2.38ms       285us     1.75ms     2.38ms      543us       580us    81.25%
smoothed RTT:     2.01ms     16.79ms      6.60ms    13.40ms    16.79ms     7.56ms      3.44ms    71.88%
packets sent:      15671      116864       75121      93376     116864   64135.34    27559.11    64.06%
packets recv:      57096      449460      297091     431895     449460  267195.05   107434.41    62.50%
packets lost:          0         292         122        235        292     123.02       70.41    59.38%
GRO packets :          1           1           1          1          1       1.00        0.00   100.00%
[info] CPU 5036.3% | Mem 4.7GiB

=== Best: 295982 req/s (CPU: 5454.7%, Mem: 4.6GiB) ===
[info] saved results/static-h3/64/ngx-php.json
httparena-bench-ngx-php
httparena-bench-ngx-php
[info] skip: ngx-php does not subscribe to gateway-64
[info] skip: ngx-php does not subscribe to gateway-h3
[info] skip: ngx-php does not subscribe to production-stack
[info] skip: ngx-php does not subscribe to unary-grpc
[info] skip: ngx-php does not subscribe to unary-grpc-tls
[info] skip: ngx-php does not subscribe to stream-grpc
[info] skip: ngx-php does not subscribe to stream-grpc-tls
[info] skip: ngx-php does not subscribe to echo-ws
[info] rebuilding site/data/*.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/frameworks.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/api-16-1024.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/api-4-256.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/async-db-1024.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/baseline-4096.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/baseline-512.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/baseline-h2-1024.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/baseline-h2-256.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/baseline-h3-64.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/json-4096.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/json-comp-16384.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/json-comp-4096.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/json-comp-512.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/json-tls-4096.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/limited-conn-4096.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/limited-conn-512.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/pipelined-4096.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/pipelined-512.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/static-1024.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/static-4096.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/static-6800.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/static-h2-1024.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/static-h2-256.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/static-h3-64.json
[updated] /home/diogo/actions-runner/_work/HttpArena/HttpArena/site/data/current.json
[info] done
httparena-postgres
[info] restoring loopback MTU to 65536
[info] restoring CPU governor → performance

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

Can you change this to tuned?

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

Why?

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

If you explain why we need to change it, we can decide, for each change that breaks "production", whether to delete it or mark it as tuned.

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

worker_cpu_affinity auto;
worker_rlimit_nofile 65536;
timer_resolution 1s;
worker_connections 65536;
keepalive_requests 1000000;
access_log off;
etag off;
gzip_static on;
brotli on;
brotli_static on;
listen ... reuseport backlog=65536;
php_ini_path ...;
init_worker_by_php ...;

These are not nginx defaults. Can you find anywhere in the nginx docs recommending these be set as default configs?

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

It's like Node.js adding reuseport or cluster: the defaults don't work with this config.

Or like ASP.NET configuring Kestrel.

builder.WebHost.ConfigureKestrel(options =>
{
    options.Limits.Http2.MaxStreamsPerConnection = 256;
    options.Limits.Http2.InitialConnectionWindowSize = 2 * 1024 * 1024;
    options.Limits.Http2.InitialStreamWindowSize = 1024 * 1024;
});

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

worker_cpu_affinity auto
timer_resolution 1s
keepalive_requests 1000000
reuseport
backlog=65536
worker_connections 65536

These seem to be clearly tuned configs. They are documented in the docs, but not typically used in a prod deploy; it should be nginx handling these, not us setting them deliberately.

Node.js doesn't work without reuseport, it is a core config; nginx works fine without it, and the default config is to not set it.

Those ASP.NET configs would be removed from any prod setup.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

Nginx
Nginx defaults are not specifically configured for a big server; they provide a generic, pre-baked configuration ready for most general use cases, including small to medium deployments.

  • The default setup often includes conservative values like worker_processes 1 and worker_connections 1024.
  • Production-grade traffic typically requires tuning these parameters (e.g., setting worker_processes to the number of CPU cores).
  • While the architecture supports high concurrency, the default configuration itself is intended as a starting point rather than an optimized setup for large-scale servers.
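
The usual first step of that tuning might look like this (a sketch; the connection count is illustrative, not taken from this PR):

```nginx
# Sketch of the common production adjustment of the two defaults above.
worker_processes auto;          # one worker per CPU core instead of 1

events {
    worker_connections 65536;   # instead of the default 1024
}
```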

Later:

  • worker_cpu_affinity auto is not working now, because the bench changed to use only the physical cores (when the logical cores were also used, some frameworks had worse results; a biased change, I don't know).
    With this value we currently get this error at startup:

    sched_setaffinity() failed (22: Invalid argument)
    sched_setaffinity() failed (22: Invalid argument)
    sched_setaffinity() failed (22: Invalid argument)
    ...
    
  • timer_resolution 1s has no advantage; we can remove it.

  • reuseport: by that rule, every framework will end up marked as tuned.

  • worker_connections 65536: sorry, so we should leave it at 1024 × 1 worker_process? That will never be a production config. Also, with this config all those worker connections would never even be used.

  • keepalive_requests 1000000: that's a legacy option for old, faulty frameworks that fail after n requests. In my opinion nginx keeps too many legacy options; for a healthy framework this should accept a -1 value.

  • backlog=65536: this is fundamental for any language or platform, because it is used in listen(). Without listen() we can't connect to any socket 🤔 , and the synopsis for this call is:
    int listen(int sockfd, int backlog);
    The rest of the frameworks should learn that.
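
As a minimal illustration (a Python sketch, not ngx-php code): every TCP server passes a backlog when it calls listen(), and the kernel caps the effective value at net.core.somaxconn.

```python
import socket

# Minimal sketch: every TCP server supplies a backlog to listen(2);
# the kernel silently clamps it to net.core.somaxconn.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
srv.listen(65535)               # the backlog argument of listen()

cli = socket.create_connection(srv.getsockname())  # waits in the backlog
conn, _ = srv.accept()          # dequeue the pending connection
conn.sendall(b"ok")
print(cli.recv(2))              # -> b'ok'
cli.close(); conn.close(); srv.close()
```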

Nodejs
By default, Node.js sockets do not have the reusePort option enabled (it defaults to false), meaning a new socket cannot bind to a port already in use by another process.

Etc., etc.
I think more frameworks need to learn from this nginx.conf file, and so should the AI (hello Benny).
But mark this config as tuned, because the other frameworks don't use all the potential that they could? I don't think so.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

About that config that is not "production":
What is this? 🤔

worker_rlimit_nofile 65535;
worker_connections 65536;
...

Well, marked as "production":
aspnet-minimal_nginx
https://github.com/MDA2AV/HttpArena/blob/main/frameworks/aspnet-minimal_nginx/proxy-production/nginx.conf#L16-L22

https://github.com/MDA2AV/HttpArena/blob/main/frameworks/aspnet-minimal_nginx/proxy/nginx.conf#L1-L6

....

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

What production actually means for me:

Production

  • H1, all with LTS.
  • All responses with gzip, br, ... if the client asks for them (99%).
  • No H1 pipelining. No browser uses pipelining, only 2 esoteric proxy servers that nobody uses.
    The current way is to use H2 multiplexing.
  • The best configuration for each server (manual or automatic). If the automatic configuration doesn't work correctly, we need to do it manually.
  • ...

Tuned

  • Use caching
  • Serve static files from memory, bypassing the IO to the hard disk.
  • Use SIMD JSON.
  • ...

This bench seems to be biased toward ASP.NET: if a framework doesn't use the same automatic config, then it is tuned.

e.g. ASP.NET MapStaticAssets, IMemoryCache, automatic config, ... the configs and rules are always based on them.

We are developers, we create code, and the ASP.NET opinionated classes are not always the most optimal or performant.
We need to create our own solutions and try to be efficient with the code, so that the rest can learn.

If any dev wants easy automatic solutions, they can check the code in this repo and choose.
The Caddy server needs fewer lines than nginx, because it does everything automatically, but on big servers it never comes close to nginx in performance.
Choose automatic or manual config (or use AI).

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

Yea, these nginx-flavour "frameworks" are going to be a pain in the ass: there are a trillion ways to configure nginx, and everyone can claim theirs is production because there are use cases for all those configs. This is more of an nginx module than a framework, which sort of invalidates the prod/tuned logic applied to it, because nginx is not a web framework. These prod rules are targeted at frameworks like Spring, Quarkus, Helidon, Elysia, ASP.NET, Laravel, FastHttp, Gin, etc., which are a different category. For this reason we need a new category for anything that falls under reverse proxies and web servers like caddy, nginx, traefik, etc.

So I'll just move whatever is nginx based to this new category, apples to apples.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

As you want.
But a benchmark needs to be fair and not biased.
It's the developer who needs to decide which framework to use.

Thank you.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

We can't move ngx-php to the gateway test, as that needs a minimum of 2 Docker containers!!

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

So you think it is fair to have ngx-php and Laravel under the same category?

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

Any framework that uses PHP-FPM can never compete with the rest of the frameworks in this bench.
But ngx-php can't be compared with pure Rust, C#, Java, ... frameworks??

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

Because it does not provide the same functionality. Oh c'mon, you know exactly what I am talking about: can anyone just add a module to nginx or some low-level engine to support basic features like routing and request parsing, and compete with a full-featured web framework?

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

Later I will add Symfony and Laravel to this bench, but not with PHP-FPM (we can do that to check the difference); they will never be performant with plain PHP-FPM.

Then again, Symfony and Laravel are always slow, no matter which platform they use.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

If you have a Rust framework, you have every possibility to configure it in the best way and be faster than any nginx module, which then runs all its code in an interpreted language, not a compiled one!!

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

But you have to code in Rust to use it, so it sucks already and nobody cares.

Building a basic web API like we enforce here is doable quickly in any language; if you have to build a large, reliable, and maintainable one, nobody is going to use these Rust or shortcut modules.

It's cool to see them go fast, wow, 3-4 million requests per second on baseline, cool for a benchmark, but what I want to see is a real web framework performance comparison, like aspnet vs spring vs quarkus vs fasthttp vs laravel.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

features like routing and request parsing and compete with a full feature web framework?

That is a framework: request parsing and routing are the core of any framework, and they need to be fast. How you use it later is opinionated. The rest of the classes on top are secondary.

But you have to code in Rust to use it so it sucks already and nobody cares

Yes, but with ngx-php we use plain, easy, interpreted PHP, and we can use almost any framework on top.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

I have 3 full PHP MVC frameworks ready with Adapterman (which uses Workerman) to add to this bench.

But some devs don't want to use these full beasts; they prefer simpler solutions.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

I said here what production and tuned actually mean for me:
#559 (comment)

But the language, platform, framework, ... is a decision for the developer, not for us.

Perhaps "production" and "tuned" need to be changed to other, more appropriate words.

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

The production criteria is simple: default config.

If a framework needs to be tuned like nginx, this isn't a web framework; it is a web server or something else. Web frameworks adapt to the environment they run in, and if we allow people to start configuring everything, then whatever numbers we see on production are a byproduct of a config crafted for the benchmark.

nginx and any of its modules cannot fit in the same category as a web framework, and I won't mark as a framework anything that is not battle tested with proper documentation, like any vibe-coded framework with 3 stars.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

As you want.
But I don't think so.

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

Mark it as tuned!!
Now we can start to tune it.

Thank you.

Completely contrary to my opinion.
@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

Not as tuned, will be marked with new category Infrastructure

@joanhey
Contributor Author

joanhey commented Apr 19, 2026

So OpenResty will be in the same category. But not Rust frameworks. 🤔

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

If Workerman is allowed in production, so is Actix.

nginx openresty, ngx_lua, and h2o_mruby will fall together with nginx, h2o, traefik, caddy, pingora, etc., as they are extensions of those, and configurations can be applied without having to be marked separately as tuned.

@MDA2AV
Owner

MDA2AV commented Apr 19, 2026

And the pipeline test will be removed in the future; no need to mention that all the time.
