Björn Hansson – Precis som i framtiden ("Just like in the future")

4 Aug 2014

AWS c1 vs c3 – benchmark

How much more value for your money do you get from a c3.large instance compared with the previous-generation c1.medium? Quite a lot, actually. Not only is the HVM c3 instance cheaper, it also finishes the same benchmark in 31% less time (which works out to roughly 45% more requests per second). Benchmarked with ApacheBench against nginx + PHP 5.3 + APC. After a lot of testing with the memcache extension, the memcached extension, and the AWS ElastiCache Cluster Client (based on memcached), I saw a serious performance loss with anything other than plain memcache. Something surely went wrong in the setup, but since I installed almost everything through yum I have no idea what caused it. Enough talking, here are the results.
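The headline figures can be checked directly against the numbers in the two ab runs below. A quick sanity calculation (values copied from the output; the script itself is just arithmetic, not part of the benchmark):

```python
# Sanity-check the headline claims using the reported ApacheBench numbers.
c1_rps, c1_price, c1_time = 35.62, 0.130, 28.077   # c1.medium: req/s, $/hour, seconds
c3_rps, c3_price, c3_time = 51.69, 0.105, 19.347   # c3.large:  req/s, $/hour, seconds

throughput_gain = c3_rps / c1_rps - 1                            # more requests/sec
time_reduction = 1 - c3_time / c1_time                           # the "31%" figure
per_dollar_gain = (c3_rps / c3_price) / (c1_rps / c1_price) - 1  # throughput per dollar

print(f"{throughput_gain:.0%} more throughput, "
      f"{time_reduction:.0%} less wall time, "
      f"{per_dollar_gain:.0%} more requests per dollar")
```

These come out to roughly 45%, 31%, and 80% respectively, so on a requests-per-dollar basis the c3.large is close to twice the value of the c1.medium.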

Instance type: c1.medium
Price: $0.130 per Hour
nginx version: nginx/1.6.0
PHP 5.3.28 + APC
Sessionhandling: memcache (elasticache)

Concurrency Level:      200
Time taken for tests:   28.077 seconds
Complete requests:      1000
Failed requests:        999
   (Connect: 0, Receive: 0, Length: 999, Exceptions: 0)
Write errors:           0
Total transferred:      35441979 bytes
HTML transferred:       35026737 bytes
Requests per second:    35.62 [#/sec] (mean)
Time per request:       5615.329 [ms] (mean)
Time per request:       28.077 [ms] (mean, across all concurrent requests)
Transfer rate:          1232.74 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    6   35.3      2    1094
Processing:  2043 5258 1178.3   5302    9061
Waiting:     2041 5058 1175.7   5192    8598
Total:       2049 5264 1177.8   5303    9080

Percentage of the requests served within a certain time (ms)
  50%   5303
  66%   5684
  75%   5896
  80%   6050
  90%   6771
  95%   7272
  98%   7752
  99%   8051
 100%   9080 (longest request)

 

Instance type: HVM c3.large (VPC)
Price: $0.105 per Hour
nginx version: nginx/1.4.7
PHP 5.3.28 + APC
Sessionhandling: memcache (elasticache)

Concurrency Level:      200
Time taken for tests:   19.347 seconds
Complete requests:      1000
Failed requests:        956
   (Connect: 0, Receive: 0, Length: 956, Exceptions: 0)
Write errors:           0
Total transferred:      34895567 bytes
HTML transferred:       34481567 bytes
Requests per second:    51.69 [#/sec] (mean)
Time per request:       3869.351 [ms] (mean)
Time per request:       19.347 [ms] (mean, across all concurrent requests)
Transfer rate:          1761.42 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   19 133.0      1    1001
Processing:  1395 3599 1001.6   3614   11848
Waiting:     1393 3598 1001.6   3612   11847
Total:       1398 3618 1069.8   3618   12850

Percentage of the requests served within a certain time (ms)
  50%   3618
  66%   3745
  75%   3811
  80%   3852
  90%   3962
  95%   4629
  98%   5975
  99%  10032
 100%  12850 (longest request)

 

Note: the two servers ran slightly different nginx versions, but that does not measurably affect these results. Also, ignore the failed-requests count: ab flags any response whose body length differs from the first response as a Length failure, which is expected here because the content is dynamic.

The HVM-optimized instance is clearly more powerful, and in a VPC it also benefits from Enhanced Networking support, which gives higher packet-per-second performance, lower latency, and lower jitter. If you launch an Amazon EBS-backed C3, R3, or I2 instance today using a current Amazon Linux HVM AMI, enhanced networking is enabled by default; on the instance, ethtool -i eth0 should report the ixgbevf driver when it is active.

28 Jun 2013

Nginx vs Apache performance

In a larger web-based online game I'm involved in, we replaced Apache with Nginx last year. I ran some comparisons with New Relic's RUM tool on real live traffic. The app runs on the Amazon AWS cloud, and the instance type used was c1.xlarge (7 GB RAM).

All servers named wwwX run Apache with PHP as a module (mod_php). Those named nginx run Nginx + PHP-FPM. At the time of the test, the latest stable versions were used (August 2012). Only dynamic requests were served (essentially 100% went to the PHP backend); all static content was served through an external CDN. Since Nginx is known to perform best with static content, the differences would probably have been considerably larger in such a scenario.
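For reference, the Nginx side of such a setup boils down to a short fastcgi block. A minimal sketch, assuming a Unix socket for PHP-FPM; the paths and socket location are placeholders, not the game's actual configuration:

```nginx
# Minimal Nginx -> PHP-FPM vhost sketch (paths are placeholders).
server {
    listen 80;
    root /var/www/app;
    index index.php;

    location / {
        # Fall back to the front controller for dynamic requests.
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        # PHP-FPM listens on this socket instead of running inside the web server.
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

The key difference from mod_php is that PHP runs in a separate FPM worker pool, so Nginx workers stay lightweight and event-driven even when every request hits the PHP backend.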

The screenshots speak for themselves, but the conclusion you can draw is that Nginx performs clearly better under high load: the load balancer sends more traffic to the Nginx box, and it still responds faster. See picture 4, where this is plainly visible.

[Screenshot gallery: app_after_10_hours (picture 1 of 10)]