flood.io
bit.do/flood_stpcon
Extinction Event
Expectation vs Reality
@tim_koopmans
CTO / Founder Flood IO
Cow Farmer, Child Wrangler & Recovering Load Tester
"That doesn't get in the way of load testing"
"Distributed, Loosely Coupled, Shared Nothing"
"Pay for the infrastructure you use"
Read why we think paying per VU is broken
"A good simulation model is worth a thousand tests"
"Last night we had 180K uniques doing something in the order of 500K requests per minute BUT the business wants us to test up to 1M requests per second for the next big sale event"
An "average" way to estimate
180,000 uniques
------------------------------ = 15,000 concurrent users
 (60 minutes / 5 minutes)
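The averaging above is Little's Law: average concurrency equals arrival rate multiplied by average session duration. A minimal sketch with the slide's numbers, where the 5-minute session length is the assumed mean:

```ruby
# Little's Law sketch: concurrency = arrival rate x average session duration.
# 180,000 uniques over the hour, 5-minute average session (assumed mean).
uniques_per_hour = 180_000
session_minutes  = 5.0

arrivals_per_minute = uniques_per_hour / 60.0        # 3,000 new users/minute
concurrent_users    = arrivals_per_minute * session_minutes

puts concurrent_users.round   # => 15000
```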
An "average" way to estimate
Random session duration
Poisson Distributed
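A sketch of the assumed traffic model (my reading of the slide, not code from the deck): users arrive as a Poisson process, i.e. with exponential inter-arrival gaps, and session lengths are random (exponential here). At steady state the average concurrency still converges to arrival rate times mean session duration, regardless of the distributions:

```ruby
def exp_sample(mean)
  -mean * Math.log(1.0 - rand)   # inverse-CDF sampling of Exp(1/mean)
end

# Closed form with the deck's numbers: 50 users/s arriving, 300 s sessions.
puts (50.0 * 300.0).round   # => 15000

# Small-scale simulation (rates scaled down so it runs in a moment):
srand(42)
rate, mean_dur = 5.0, 20.0          # expect ~100 concurrent users on average
now, active, samples = 0.0, [], []
20_000.times do
  now += exp_sample(1.0 / rate)     # jump to the next Poisson arrival
  active.reject! { |t| t <= now }   # drop sessions that have already ended
  active << now + exp_sample(mean_dur)
  samples << active.size            # concurrency seen by this arrival
end
puts samples.sum / samples.size.to_f   # close to rate * mean_dur = 100
```

The takeaway: randomising arrivals and session lengths does not change the average, but it does produce the bursts a fixed think-time script never will.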
Another "average" method
500,000 requests per minute
--------------------------- = 33 rpm per user
       15,000 users
We can start to validate business targets of 1M rps 😲
60,000,000 requests per minute
------------------------------ = 4,000 rpm per user
        15,000 users

OR maybe ...

60,000,000 requests per minute
------------------------------ = 1.8M concurrent users
       33 rpm per user
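The same back-of-envelope arithmetic, applied to the business target of 1M requests per second (60M requests per minute) and read both ways:

```ruby
# Target of 1M rps = 60M requests/minute, against today's observed numbers.
target_rpm    = 60_000_000
current_users = 15_000
user_rpm      = 33

puts target_rpm / current_users   # => 4000 rpm each, if user count holds
puts target_rpm / user_rpm        # => 1818181 (~1.8M users) at today's rate
```

Either reading is implausible (users 120x busier, or 120x more users), which is exactly the point of the next slide.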
"They don't exist"
Make it simple
"Application Performance Management is a vast ecosystem"
https://github.com/flood-io/loadtest
├── Dockerfile
├── Makefile
├── config
│   ├── default.vcl
│   ├── limits.conf
│   ├── nginx.conf
│   ├── supervisord.conf
│   └── sysctl.conf
├── scripts
│   └── jenkins.sh
├── terraform
│   ├── api
│   │   └── main.tf
│   ├── asg
│   │   ├── cloudconfig.yml
│   │   └── main.tf
│   └── elb
│       └── main.tf
└── tests
    └── load.rb
"Treat your tests as any other code"
"Record and replay is b@##$h!t"
"Canary in the mine"
"Halve and halve again"
"Small targeted changes"
"(re)Moving bottlenecks"
"Scalability Curves"
Model, Measure, Build ... Decide
"Today's load test of 30K users, 3M rpm and +2Gbps has cost us $15 per hour"