Speed up your PHP application without changing a single query.
A few days ago I set out to answer a simple question: how much faster can a PHP app go if you drop a SQL cache in front of MySQL — without touching application code? The answer is below. On a four-table join aggregating revenue by category and region, Readyset served the same result 60x faster than MySQL. And all I did was change a port number.
This post walks through exactly what I did, step by step, so you can reproduce it yourself.
## What Is Readyset?
Readyset is a SQL-aware caching engine that speaks the native MySQL wire protocol. Your PHP application connects to it the same way it connects to MySQL — same driver, same queries, same credentials. Underneath, Readyset proxies queries to your upstream database and, for the ones you choose to cache, serves results straight from memory.
Unlike Redis or Memcached, you don’t serialize objects or manage keys. You just tell Readyset “cache this query” and it handles the rest. It supports two caching modes:
- Shallow caching — stores results with a TTL, works with any SELECT, refreshes automatically in the background. This is what we’ll use here.
- Deep caching — maintains materialized views updated incrementally via the MySQL binlog. Always-fresh, but limited to supported query patterns.
Shallow caching is the fastest way to get going. Let’s start.
## Prerequisites

- PHP 8.1+ with the `pdo_mysql` extension
- Docker and Docker Compose
Tested with PHP 8.1.34, 8.2.30, 8.3.30, and 8.4.19. All versions produced consistent results.
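If you're not sure whether the driver is present, here's a quick preflight check — a sketch you can save to a file and run with `php`:

```php
<?php
// Preflight: confirm the pdo_mysql driver is loaded before wiring anything up.
if (extension_loaded('pdo_mysql')) {
    echo "pdo_mysql is available (PHP " . PHP_VERSION . ")\n";
} else {
    echo "pdo_mysql is missing - install or enable it first\n";
}
```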
## Step 1: Start Readyset with Docker

Create a `docker-compose.yml`:
```yaml
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: readyset
      MYSQL_DATABASE: myapp
    expose:
      - 3306
    healthcheck:
      test: "mysqladmin ping -h127.0.0.1 -P3306 -uroot -preadyset"
      interval: 5s
      timeout: 5s
      retries: 12

  readyset:
    image: docker.io/readysettech/readyset:latest
    environment:
      UPSTREAM_DB_URL: mysql://root:readyset@mysql:3306/myapp
      QUERY_CACHING: explicit
      LISTEN_ADDRESS: 0.0.0.0:3307
      ALLOW_UNAUTHENTICATED_CONNECTIONS: true
      DEPLOYMENT: docker_compose_deployment
      CACHE_MODE: shallow
      DISABLE_TELEMETRY: true
    ports:
      - "3307:3307"
      - "6034:6034"
    healthcheck:
      test: ["CMD", "curl", "--fail", "127.0.0.1:6034/health"]
      interval: 2s
      timeout: 1s
      retries: 30
      start_period: 10s
    depends_on:
      mysql:
        condition: service_healthy
```
Bring it up:
```bash
docker-compose up -d
```
Give Readyset a moment to connect to MySQL and initialize. You can check when it’s ready:
```bash
curl http://localhost:6034/health
```
Expected output:
```
Adapter is in healthy state
```
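If you'd rather not poll by hand, you can wait for the health endpoint from PHP. This is a sketch under the compose file's assumptions (health port 6034); the function name and the injected `$fetch` callable are mine, not part of Readyset:

```php
<?php
// Poll a health endpoint until it reports healthy or a timeout elapses.
// $fetch is injected so the loop is easy to test; in real use, pass a
// file_get_contents-based fetcher (see the commented usage below).
function waitForHealthy(callable $fetch, int $timeoutMs = 30000, int $intervalMs = 500): bool
{
    $deadline = microtime(true) + $timeoutMs / 1000;
    do {
        $body = $fetch();
        if (is_string($body) && str_contains($body, 'healthy')) {
            return true;
        }
        usleep($intervalMs * 1000);
    } while (microtime(true) < $deadline);
    return false;
}

// Real usage:
// $ok = waitForHealthy(fn() => @file_get_contents('http://127.0.0.1:6034/health'));
```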
## Step 2: Connect Your PHP Application
The only change: point your DSN at port 3307 instead of 3306.
```php
// Before: connecting directly to MySQL
// $dsn = 'mysql:host=127.0.0.1;port=3306;dbname=myapp';

// After: connecting through Readyset
$dsn = 'mysql:host=127.0.0.1;port=3307;dbname=myapp';
$username = 'root';
$password = 'readyset';

$pdo = new PDO($dsn, $username, $password, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_EMULATE_PREPARES => false,
]);
```
Set `PDO::ATTR_EMULATE_PREPARES` to `false` so PDO uses the binary protocol for prepared statements — Readyset handles these natively.

One thing to know: Readyset admin commands (`SHOW PROXIED QUERIES`, `SHOW CACHES`, `CREATE CACHE`) need the text protocol. If you want to run those from PHP, use a separate connection with emulated prepares enabled:
```php
$admin = new PDO('mysql:host=127.0.0.1;port=3307;dbname=myapp', 'root', 'readyset', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_EMULATE_PREPARES => true,
]);
```
## Step 3: Set Up Some Test Data
I wanted something realistic enough to show a meaningful difference — not just a single-table lookup. So I created a small e-commerce schema: customers, products, orders, and order items.
```php
$pdo->exec("
    CREATE TABLE customers (
        id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        email VARCHAR(255) NOT NULL,
        region VARCHAR(50) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
");

$pdo->exec("
    CREATE TABLE products (
        id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        category VARCHAR(100) NOT NULL,
        price DECIMAL(10,2) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
");

$pdo->exec("
    CREATE TABLE orders (
        id INT AUTO_INCREMENT PRIMARY KEY,
        customer_id INT NOT NULL,
        status VARCHAR(50) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        FOREIGN KEY (customer_id) REFERENCES customers(id)
    )
");

$pdo->exec("
    CREATE TABLE order_items (
        id INT AUTO_INCREMENT PRIMARY KEY,
        order_id INT NOT NULL,
        product_id INT NOT NULL,
        quantity INT NOT NULL,
        unit_price DECIMAL(10,2) NOT NULL,
        FOREIGN KEY (order_id) REFERENCES orders(id),
        FOREIGN KEY (product_id) REFERENCES products(id)
    )
");
```
Then I seeded it: 500 customers across 5 regions, 200 products in 5 categories, and 2,000 orders each with 1–5 line items. Nothing huge, but enough to make a join query actually work for its result.
## Step 4: The Slow Query
Here’s the kind of query you’d find behind a dashboard — revenue broken down by product category and customer region, filtered by order status:
```sql
SELECT p.category, c.region,
       COUNT(DISTINCT o.id) AS total_orders,
       SUM(oi.quantity * oi.unit_price) AS revenue
FROM order_items oi
JOIN orders o ON oi.order_id = o.id
JOIN products p ON oi.product_id = p.id
JOIN customers c ON o.customer_id = c.id
WHERE o.status = ?
GROUP BY p.category, c.region
ORDER BY revenue DESC
```
Four tables, a GROUP BY, a DISTINCT count, and a SUM. Against MySQL directly, this ran in about 4ms on average. Not terrible, but if you’re calling it hundreds of times a second on a dashboard or an API, it adds up.
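The ~4 ms figure comes from averaging wall-clock time over many executions. A minimal harness for that (the helper name is mine; the commented usage assumes the `$pdo` connection and query string from above):

```php
<?php
// Average wall-clock milliseconds per invocation of $fn over $iterations runs.
function avgMs(callable $fn, int $iterations = 500): float
{
    $start = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        $fn();
    }
    return (microtime(true) - $start) / $iterations * 1000;
}

// Real usage against the dashboard query:
// $stmt = $pdo->prepare($query);
// printf("%.3f ms avg\n", avgMs(function () use ($stmt) {
//     $stmt->execute(['completed']);
//     $stmt->fetchAll();
// }));
```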
## Step 5: Cache It
First, run the query through Readyset so it can observe the shape. Then tell it to cache:
```bash
mysql -h 127.0.0.1 -P 3307 -u root -preadyset myapp
```
```sql
SHOW PROXIED QUERIES;
```
Expected output:
```
+--------------------+-----------------------------------------------------------+--------------------+-------+
| query id           | proxied query                                             | readyset supported | count |
+--------------------+-----------------------------------------------------------+--------------------+-------+
| q_4b2510ce90bdd3d8 | SELECT ... FROM `order_items` ... WHERE (`o`.`status`=$1) | pending            | 0     |
+--------------------+-----------------------------------------------------------+--------------------+-------+
```
```sql
CREATE CACHE FROM
SELECT p.category, c.region,
       COUNT(DISTINCT o.id) AS total_orders,
       SUM(oi.quantity * oi.unit_price) AS revenue
FROM order_items oi
JOIN orders o ON oi.order_id = o.id
JOIN products p ON oi.product_id = p.id
JOIN customers c ON o.customer_id = c.id
WHERE o.status = ?
GROUP BY p.category, c.region
ORDER BY revenue DESC;
```
Verify:
```sql
SHOW CACHES;
```
Expected output:
```
+--------------------+--------------------+----------------------------+----------------------------------------------------------+-------+
| query id           | cache name         | query text                 | properties                                               | count |
+--------------------+--------------------+----------------------------+----------------------------------------------------------+-------+
| q_4b2510ce90bdd3d8 | q_4b2510ce90bdd3d8 | SELECT p.category, c.re... | shallow, ttl 10000 ms, refresh 5000 ms, coalesce 5000 ms | 0     |
+--------------------+--------------------+----------------------------+----------------------------------------------------------+-------+
```
The properties column tells you exactly how this cache behaves. `ttl 10000 ms` is the default time-to-live: cached results are considered stale after 10 seconds. `refresh 5000 ms` means Readyset starts refreshing the cache in the background 5 seconds before it expires, so your users almost never hit a cold cache. `coalesce 5000 ms` groups concurrent cache misses together to avoid hammering the upstream database. These are all Readyset defaults (configurable via the DEFAULT_TTL_MS and DEFAULT_COALESCE_MS environment variables) and are independent of the `QUERY_CACHING` setting, which only controls whether caches are created explicitly (you run `CREATE CACHE`) or implicitly (Readyset caches queries automatically).
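To make those windows concrete, here is a tiny model of the timing semantics described above. This is an illustration of the defaults, not Readyset's actual implementation:

```php
<?php
// Model of the shallow-cache windows: a result older than the TTL is stale;
// background refresh kicks in $refreshMs before the TTL expires.
// Defaults mirror the properties column above (ttl 10000 ms, refresh 5000 ms).
function isStale(float $ageMs, int $ttlMs = 10000): bool
{
    return $ageMs >= $ttlMs;
}

function inRefreshWindow(float $ageMs, int $ttlMs = 10000, int $refreshMs = 5000): bool
{
    return $ageMs >= ($ttlMs - $refreshMs) && $ageMs < $ttlMs;
}
```

With the defaults, a result served at age 6 seconds is still fresh, but the background refresh is already underway, which is why readers of the cache rarely see a cold miss.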
Or do it from PHP:
```php
$admin->exec("CREATE CACHE FROM
    SELECT p.category, c.region,
           COUNT(DISTINCT o.id) AS total_orders,
           SUM(oi.quantity * oi.unit_price) AS revenue
    FROM order_items oi
    JOIN orders o ON oi.order_id = o.id
    JOIN products p ON oi.product_id = p.id
    JOIN customers c ON o.customer_id = c.id
    WHERE o.status = ?
    GROUP BY p.category, c.region
    ORDER BY revenue DESC
");
```
## Step 6: The Benchmark
I ran the same query 500 times against MySQL directly, then 500 times through Readyset cache. Here’s what I got on PHP 8.4.19:
```
MySQL direct:    4.125 ms avg (500 iterations)
Readyset cached: 0.065 ms avg (500 iterations)
Speedup:         63.3x faster
```
The results were consistent across every PHP version I tested:
| PHP Version | MySQL direct | Readyset cached | Speedup |
|---|---|---|---|
| 8.1.34 | 3.996 ms | 0.067 ms | 59.7x |
| 8.2.30 | 4.008 ms | 0.051 ms | 78.0x |
| 8.3.30 | 4.173 ms | 0.067 ms | 62.0x |
| 8.4.19 | 4.125 ms | 0.065 ms | 63.3x |
That’s a four-table join with aggregation going from ~4ms to ~0.06ms. For a dashboard hitting this query on every page load, that’s the difference between your database sweating and barely noticing.
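Averages can hide tail latency. If you reproduce this with your own workload, it's worth also recording a median or p99 — a sketch (the helper name is mine; nearest-rank percentile, which is fine for rough benchmarking):

```php
<?php
// Collect per-iteration latencies and return the p-th percentile in ms.
function percentileMs(callable $fn, int $iterations, float $p): float
{
    $samples = [];
    for ($i = 0; $i < $iterations; $i++) {
        $t = microtime(true);
        $fn();
        $samples[] = (microtime(true) - $t) * 1000;
    }
    sort($samples);
    $idx = (int) floor($p / 100 * (count($samples) - 1));
    return $samples[$idx];
}

// Real usage (assumes $rs and $query from the full script below):
// $stmt = $rs->prepare($query);
// printf("p99: %.3f ms\n", percentileMs(function () use ($stmt) {
//     $stmt->execute(['completed']);
//     $stmt->fetchAll();
// }, 500, 99));
```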
Here’s what the query actually returns:
```
category     region    total_orders  revenue
-------------------------------------------------------
books        eu-east   68            $78,619.79
sports       eu-east   69            $72,150.55
books        us-east   50            $69,442.39
books        ap-south  54            $65,119.23
home         eu-east   59            $63,388.10
electronics  eu-east   53            $63,108.77
clothing     eu-east   47            $54,604.23
sports       eu-west   52            $53,354.70
sports       ap-south  47            $51,671.39
home         us-east   48            $51,549.50
```
Here’s the complete script that sets up the data, creates the cache, and runs the benchmark. Save it as `benchmark.php` and run it with `php benchmark.php`. Make sure the file starts with a `<?php` tag (some browsers strip it when you copy from a web page):
```php
<?php
echo "PHP " . PHP_VERSION . "\n\n";

// --- Connections ---
$db = new PDO('mysql:host=127.0.0.1;port=3306;dbname=myapp', 'root', 'readyset', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_EMULATE_PREPARES => false,
]);
$rs = new PDO('mysql:host=127.0.0.1;port=3307;dbname=myapp', 'root', 'readyset', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_EMULATE_PREPARES => false,
]);
$admin = new PDO('mysql:host=127.0.0.1;port=3307;dbname=myapp', 'root', 'readyset', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_EMULATE_PREPARES => true,
]);

// --- Create tables ---
echo "Creating tables...\n";
$rs->exec("DROP TABLE IF EXISTS order_items");
$rs->exec("DROP TABLE IF EXISTS orders");
$rs->exec("DROP TABLE IF EXISTS products");
$rs->exec("DROP TABLE IF EXISTS customers");

$rs->exec("
    CREATE TABLE customers (
        id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        email VARCHAR(255) NOT NULL,
        region VARCHAR(50) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
");
$rs->exec("
    CREATE TABLE products (
        id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        category VARCHAR(100) NOT NULL,
        price DECIMAL(10,2) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
");
$rs->exec("
    CREATE TABLE orders (
        id INT AUTO_INCREMENT PRIMARY KEY,
        customer_id INT NOT NULL,
        status VARCHAR(50) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        FOREIGN KEY (customer_id) REFERENCES customers(id)
    )
");
$rs->exec("
    CREATE TABLE order_items (
        id INT AUTO_INCREMENT PRIMARY KEY,
        order_id INT NOT NULL,
        product_id INT NOT NULL,
        quantity INT NOT NULL,
        unit_price DECIMAL(10,2) NOT NULL,
        FOREIGN KEY (order_id) REFERENCES orders(id),
        FOREIGN KEY (product_id) REFERENCES products(id)
    )
");

// --- Seed data ---
echo "Seeding data...\n";
$regions = ['us-east', 'us-west', 'eu-west', 'eu-east', 'ap-south'];
$categories = ['electronics', 'clothing', 'books', 'home', 'sports'];
$statuses = ['completed', 'pending', 'shipped', 'cancelled'];

$stmt = $rs->prepare("INSERT INTO customers (name, email, region) VALUES (?, ?, ?)");
for ($i = 1; $i <= 500; $i++) {
    $stmt->execute(["Customer $i", "customer$i@example.com", $regions[array_rand($regions)]]);
}

$stmt = $rs->prepare("INSERT INTO products (name, category, price) VALUES (?, ?, ?)");
for ($i = 1; $i <= 200; $i++) {
    $stmt->execute(["Product $i", $categories[array_rand($categories)], round(mt_rand(499, 29999) / 100, 2)]);
}

$stmtOrder = $rs->prepare("INSERT INTO orders (customer_id, status) VALUES (?, ?)");
$stmtItem = $rs->prepare("INSERT INTO order_items (order_id, product_id, quantity, unit_price) VALUES (?, ?, ?, ?)");
for ($i = 1; $i <= 2000; $i++) {
    $stmtOrder->execute([mt_rand(1, 500), $statuses[array_rand($statuses)]]);
    $orderId = $rs->lastInsertId();
    for ($j = 0; $j < mt_rand(1, 5); $j++) {
        $stmtItem->execute([$orderId, mt_rand(1, 200), mt_rand(1, 10), round(mt_rand(499, 29999) / 100, 2)]);
    }
}
echo "Seeded: 500 customers, 200 products, 2000 orders.\n\n";

// --- The query ---
$query = "
    SELECT p.category, c.region,
           COUNT(DISTINCT o.id) AS total_orders,
           SUM(oi.quantity * oi.unit_price) AS revenue
    FROM order_items oi
    JOIN orders o ON oi.order_id = o.id
    JOIN products p ON oi.product_id = p.id
    JOIN customers c ON o.customer_id = c.id
    WHERE o.status = ?
    GROUP BY p.category, c.region
    ORDER BY revenue DESC
";

// Run once so Readyset sees it, then cache it
$stmt = $rs->prepare($query);
$stmt->execute(['completed']);
$stmt->fetchAll();
$admin->exec("CREATE CACHE FROM $query");
echo "Cache created.\n";

// Warm the cache
$stmt = $rs->prepare($query);
$stmt->execute(['completed']);
$stmt->fetchAll();
usleep(200000);

// --- Benchmark ---
$iterations = 500;

$stmt = $db->prepare($query);
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $stmt->execute(['completed']);
    $stmt->fetchAll();
}
$mysqlTime = (microtime(true) - $start) / $iterations * 1000;

$stmt = $rs->prepare($query);
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $stmt->execute(['completed']);
    $stmt->fetchAll();
}
$rsTime = (microtime(true) - $start) / $iterations * 1000;

echo sprintf("\nMySQL direct:    %.3f ms avg (%d iterations)\n", $mysqlTime, $iterations);
echo sprintf("Readyset cached: %.3f ms avg (%d iterations)\n", $rsTime, $iterations);
echo sprintf("Speedup:         %.1fx faster\n", $mysqlTime / $rsTime);
```
## What About Writes?
Writes pass straight through Readyset to MySQL. No configuration needed — INSERTs, UPDATEs, and DELETEs go upstream automatically. The shallow cache refreshes on its TTL cycle, so new data shows up within seconds:
```php
$stmt = $pdo->prepare("INSERT INTO products (name, category, price) VALUES (?, ?, ?)");
$stmt->execute(['New Widget', 'electronics', 39.99]);

// After the TTL refresh (~10 seconds), the cached query includes the new row
```
I verified this in every test run — after the TTL window, New Widget consistently appeared in the cached results with no manual invalidation.
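That "within the TTL window" claim is easy to measure yourself. Here's a sketch that polls until a condition holds and reports how long it took; the helper and its name are mine, and the real-usage closure is up to you:

```php
<?php
// Poll $check until it returns true, reporting elapsed ms, or null on timeout.
function msUntilVisible(callable $check, int $timeoutMs = 30000, int $intervalMs = 250): ?float
{
    $start = microtime(true);
    while ((microtime(true) - $start) * 1000 < $timeoutMs) {
        if ($check()) {
            return (microtime(true) - $start) * 1000;
        }
        usleep($intervalMs * 1000);
    }
    return null; // never became visible within the timeout
}

// Real usage: pass a closure that re-runs the cached query through Readyset
// and returns true once the newly inserted row is reflected in the results.
```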
If you need writes to appear instantly in cached results, switch to deep caching (`CACHE_MODE=deep`), which uses the MySQL binlog to update caches incrementally. The trade-off is that deep caching only supports a subset of query patterns, whereas shallow caching works with any SELECT. Deep caching, though, is a topic for another post.
## Using Readyset with Laravel

Update your `.env` and you’re done:
```ini
DB_HOST=127.0.0.1
DB_PORT=3307
DB_DATABASE=myapp
DB_USERNAME=root
DB_PASSWORD=readyset
```
Eloquent, the query builder, raw queries — they all work through Readyset without changes. Find the expensive ones with `SHOW PROXIED QUERIES`, cache them with `CREATE CACHE`, and move on.
## A Few Things I Learned Along the Way
Start with the expensive queries. The simple single-table lookup (`SELECT name, price FROM products WHERE category = ?`) showed only a 1.5–2x improvement because MySQL already handles it quickly (~0.1 ms). The big wins come from joins, aggregations, and anything that makes the database do real work.
Readyset exposes Prometheus metrics. Hit http://localhost:6034/metrics to get cache hit rates, latency distributions, and replication lag. Wire that into Grafana and you have full observability.
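If you'd rather script a quick look than stand up Grafana, the Prometheus text format is easy to parse. A simplistic sketch (it drops labels and histogram structure; the actual metric names exposed are Readyset's, which I won't guess here):

```php
<?php
// Parse Prometheus text-format exposition into [metric name => last value].
// Simplistic: ignores labels, keeps only the final sample per metric name.
function parseMetrics(string $body): array
{
    $out = [];
    foreach (preg_split('/\R/', $body) as $line) {
        if ($line === '' || $line[0] === '#') {
            continue; // skip blanks and HELP/TYPE comment lines
        }
        if (preg_match('/^([A-Za-z_:][A-Za-z0-9_:]*)(\{[^}]*\})?\s+(-?[0-9.eE+]+)/', $line, $m)) {
            $out[$m[1]] = (float) $m[3];
        }
    }
    return $out;
}

// Real usage:
// $metrics = parseMetrics(file_get_contents('http://localhost:6034/metrics'));
```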
Shallow caching works with everything. I didn’t hit a single query that shallow caching couldn’t handle. Deep caching has restrictions around certain functions and subqueries, but shallow just works.
Readyset is source-available under the Business Source License 1.1 (BSL 1.1) — you can use it freely in production at no cost. The code is available at github.com/readysettech/readyset.
We’d love to see what you build with Readyset. If you run a proof of concept or benchmark with your own workload, share your results with us — real-world feedback is how we make this better. Reach out at readyset.io or open a discussion on GitHub.
The floor is yours.