# Performance and scaling
How Gravity Tables handles large datasets. Tested limits, the cache layers, what slows it down, and when to split a table into per-period siblings.
This doc consolidates everything we've learned about running Gravity Tables on large datasets. None of it is theoretical: every number cited has been measured in production.
## Tested limits
A single Gravity Form renders cleanly on standard shared hosting (PHP 8.1, 256 MB memory limit, MySQL 8 with default buffer pool) up to these tested limits:
| Operation | Tested up to | Notes |
|---|---|---|
| Total entries in a single form | 50,000 | Pagination keeps render-time constant past this |
| Entries rendered per page | 500 | Above this, browser layout cost dominates |
| Concurrent table instances on one page | 10 | Each adds one DB query group; cached after first hit |
| CSV export | 25,000 rows | Community reports of clean 75,000-row exports |
| Excel (.xlsx) export | 10,000 rows | Higher format overhead than CSV; use CSV beyond this |
| PDF export | 5,000 rows | DomPDF is the bottleneck; chunk into multiple PDFs |
| Bulk action over selected rows | 5,000 rows | One transactional batch |
If you’re sitting beyond any of these limits, the mitigations below cover what to reach for.
## Layer-by-layer: how it stays fast

### 1. Pagination is server-side by default
The `[gravity_table]` shortcode never sends the full table to the client. With `per_page="25"`, the AJAX endpoint returns only the 25 rows on the current page plus the active filter context. Page 2 is a fresh request.
```
[gravity_table id="42" per_page="25"]
```
Bytes on the wire grow as O(per_page), not O(total_rows): a 50,000-row table sends the same payload as a 50-row table.
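Conceptually, the page query has a plain LIMIT/OFFSET shape. The sketch below is illustrative only, not the plugin's actual query builder; the table and column names follow the standard Gravity Forms schema:

```php
// Illustrative only: fetch one page of entries for form 42.
// The real plugin builds this through the Gravity Forms API,
// but the LIMIT/OFFSET shape is the same.
global $wpdb;

$per_page = 25;
$page     = 2; // 1-indexed page number
$offset   = ( $page - 1 ) * $per_page;

$rows = $wpdb->get_results( $wpdb->prepare(
    "SELECT id, date_created
       FROM {$wpdb->prefix}gf_entry
      WHERE form_id = %d AND status = 'active'
      ORDER BY date_created DESC
      LIMIT %d OFFSET %d",
    42, $per_page, $offset
) );
```

Whatever the page number, the database returns at most `per_page` rows; the cost of skipping earlier pages is bounded by the `date_created` index.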
### 2. Top-N display caps the result set server-side
For leaderboards and "show me the top performers" use cases, `top_n_count` (shipped in 4.2.54) limits the result set before pagination, search, and export apply. So a 100,000-row table can render a 10-row leaderboard with all the expected operations narrowed to those 10 rows.
```
[gravity_table id="42"
  top_n_count="10"
  top_n_column="value"
  top_n_direction="desc"]
```
Combined with pagination, the underlying query becomes `ORDER BY value DESC LIMIT 10`, which is fast on any reasonable index.
### 3. Streaming exports
CSV exports stream entries in 500-row chunks via `php://output` + `ob_flush()`, so memory holds only the current chunk regardless of total export size. See the bulk-data-flow release post for the implementation walkthrough.
For the typical 25,000-row export on shared hosting, peak memory is 4-6 MB instead of the 200+ MB the pre-streaming implementation needed.
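The streaming pattern is roughly the following. This is a minimal sketch, not the plugin's export class; `gt_fetch_entry_chunk()` is a hypothetical helper standing in for the real chunked entry fetch:

```php
// Sketch of a chunked CSV stream: only one 500-row chunk
// is ever held in memory, regardless of total export size.
$out    = fopen( 'php://output', 'w' );
$offset = 0;

// gt_fetch_entry_chunk() is a hypothetical helper returning up to
// 500 rows (arrays of field values) starting at $offset, or an
// empty array when the entries are exhausted.
while ( $rows = gt_fetch_entry_chunk( $form_id, $offset, 500 ) ) {
    foreach ( $rows as $row ) {
        fputcsv( $out, $row );
    }
    $offset += 500;
    ob_flush(); // push the finished chunk down the wire
    flush();
}

fclose( $out );
```

Because each chunk is flushed before the next is fetched, peak memory tracks the chunk size, not the row count.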
### 4. Auto-refresh uses ETag-aware polling
`auto_refresh="true"` doesn't blindly re-fetch the table data. The server emits an ETag based on the active filter plus the latest `entry_id` in the result set. Subsequent polls send the ETag back; if nothing has changed, the server returns `304 Not Modified` with no body.
```
[gravity_table id="42"
  auto_refresh="true"
  refresh_interval="30"]
```
The cost of "is there anything new?" is one conditional GET that returns 304 in the 99% case: effectively free polling for steady-state tables.
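Server-side, the check can be sketched like this. The hash inputs match the description above, but the variable names are illustrative, not the plugin's actual internals:

```php
// Sketch: derive a validator from the filter context plus the
// newest matching entry, and short-circuit with 304 on a match.
$etag = '"' . md5( $active_filter_json . '|' . $latest_entry_id ) . '"';

$client_etag = $_SERVER['HTTP_IF_NONE_MATCH'] ?? '';

if ( $client_etag === $etag ) {
    status_header( 304 );        // WordPress helper for the status line
    header( 'ETag: ' . $etag );
    exit;                        // a 304 carries no body
}

header( 'ETag: ' . $etag );
// ...render and return the table payload as usual...
```

Any new entry changes `$latest_entry_id`, so the next poll misses the validator and gets a full payload; everything else is a 304.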
### 5. Two-tier caching
`GT_Admin::get_table()` (the table-config lookup that runs on every render) uses a two-tier cache (shipped in 4.1.57):
- Request-level: a PHP `static $cache` variable. The same shortcode rendered twice on a page (`[gravity_table id="42"]` × 2) only queries the database once.
- Cross-request: `wp_cache_get()` / `wp_cache_set()` against any persistent object cache (Redis, Memcached). The table config persists across requests until invalidated.
For a page with 5 instances of the same table id, this drops the DB hits from 5 to 1 cold + 4 free.
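The lookup follows the standard two-tier idiom. This is a sketch under stated assumptions, not the plugin's actual code; `gt_query_table_config()` is a hypothetical stand-in for the real database fetch:

```php
function gt_get_table_config( $table_id ) {
    // Tier 1: request-level, lives only for the current page load.
    static $cache = array();
    if ( isset( $cache[ $table_id ] ) ) {
        return $cache[ $table_id ];
    }

    // Tier 2: persistent object cache (Redis/Memcached), if installed.
    $config = wp_cache_get( $table_id, 'gravity_tables' );
    if ( false === $config ) {
        // Cold path: hit the database once, then warm both tiers.
        $config = gt_query_table_config( $table_id ); // hypothetical DB fetch
        wp_cache_set( $table_id, $config, 'gravity_tables' );
    }

    $cache[ $table_id ] = $config;
    return $config;
}
```

With no persistent backend, `wp_cache_get()` falls back to WordPress's in-memory cache, so tier 2 silently degrades to per-request behaviour rather than erroring.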
### 6. Targeted cache invalidation, not site-wide flush
Before 4.1.25, every entry edit called `wp_cache_flush()`, nuking the entire object cache for the site. On a Redis-backed install with hundreds of cached objects, this was a measurable performance regression.
The new behaviour invalidates only the affected groups: `gravity_tables`, `gravity_forms`, `gf_entries`, plus the specific entry. The rest of the cache stays warm.
```php
// Internally: targeted invalidation, not a site-wide flush
wp_cache_delete( $entry_id, 'gf_entries' );
wp_cache_flush_group( 'gravity_tables' ); // core group-flush helper, WP 6.1+
```
If you have an object cache and noticed it’s now staying warm after entry edits, this is why.
## Database index recommendations
For very large tables, make sure your `wp_gf_entry` and `wp_gf_entry_meta` tables have these indexes (Gravity Forms ships them by default; if a migration was incomplete you may not have them):
```sql
SHOW INDEX FROM wp_gf_entry;
-- Should include: form_id, status, date_created

SHOW INDEX FROM wp_gf_entry_meta;
-- Should include: entry_id, meta_key, (entry_id, meta_key) compound
```
If any are missing, restoring them is one `ALTER TABLE` per index; see Gravity Forms' database documentation.
For tables that filter heavily on a specific custom field (e.g. `status` for a moderation queue), add a compound index on `(meta_key, meta_value)`:
```sql
ALTER TABLE wp_gf_entry_meta
  ADD INDEX idx_gt_meta_lookup (meta_key, meta_value(20));
```
The `(20)` prefix limits the index to the first 20 characters of the value column: sufficient for status-string filtering, and much smaller than indexing the full TEXT column.
## What slows it down
In our experience, performance issues on Gravity Tables installs trace back to one of four causes, in rough order of frequency:
### 1. Heavy hooks on `gform_after_submission`
If your form has 5+ plugins each hooked into submission (notifications, payment add-ons, third-party CRM glue), each plugin's hook runs synchronously during the submission. A 500-row CSV import (shipped in 4.1.22) triggers 500 full cycles of all those hooks.
Mitigation: use the `gravity_tables_entry_created` action (shipped in 4.1.31) and offload heavy work via `wp_schedule_single_event()` to run asynchronously. Submissions stay fast; integrations run in the background.
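In practice that looks something like this. The action name comes from the plugin; the `my_crm_sync_event` hook and `my_crm_sync()` function are illustrative placeholders for your own integration:

```php
// Listen for new entries, but defer the heavy work.
add_action( 'gravity_tables_entry_created', function ( $entry_id ) {
    // Queue a one-off background event instead of calling the
    // CRM synchronously inside the submission request.
    wp_schedule_single_event( time(), 'my_crm_sync_event', array( $entry_id ) );
} );

// The actual heavy work runs later, via WP-Cron.
add_action( 'my_crm_sync_event', function ( $entry_id ) {
    my_crm_sync( $entry_id ); // hypothetical slow integration
} );
```

During a 500-row import this queues 500 cheap events rather than running 500 synchronous CRM calls inside the import loop.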
### 2. Object cache misconfiguration
If `wp_cache_get()` / `wp_cache_set()` always miss (no persistent backend), the two-tier cache degrades to request-level only. Symptom: page-load times that scale linearly with the number of table instances on the page.
Mitigation: install a Redis or Memcached object cache plugin, point it at the right backend, and verify the drop-in is actually active (for example via `wp cache type` in WP-CLI, or the cache plugin's own status page).
### 3. Calculated fields with deep dependencies
A calculation field that references 8 other fields and is included in a sortable column will recompute its dependency chain for every row in the result set. For a 5,000-row table that’s a meaningful CPU hit on each render.
Mitigation: persist the calculation result to a real number field via the `gform_pre_submission` hook, and display that field instead. The calculation then runs once at submission time, not on every render.
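Assuming a hypothetical field layout (fields 3 and 4 feed the calculation; field 9 is the persisted number field), the persistence hook could look like:

```php
// Persist a derived value at submission time so the table can
// sort on a stored column instead of recomputing per render.
// Form and field IDs are illustrative, not from a real form.
add_action( 'gform_pre_submission', function ( $form ) {
    if ( 42 !== (int) $form['id'] ) {
        return; // only our target form
    }

    // rgpost() is the Gravity Forms helper for submitted values.
    $quantity   = (float) rgpost( 'input_3' );
    $unit_price = (float) rgpost( 'input_4' );

    // Write the result into the persisted number field (ID 9).
    $_POST['input_9'] = $quantity * $unit_price;
} );
```

Sorting the table on field 9 then reads a stored value per row; the dependency chain never runs at render time.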
### 4. JOIN-style filters across many meta keys
Filters like `status:approved AND priority:high AND assigned_agent:current_user` produce a SQL query with three meta-table JOINs. On a 50,000-row table without compound indexes, that query can hit 500 ms+.
Mitigation: add the compound `(meta_key, meta_value(20))` index above. Also consider denormalising frequently-filtered meta keys into a dedicated column on `wp_gf_entry` (custom plugin work, not built-in).
## When to split
For tables that have grown past ~75,000 entries and slowed down even with all the above optimisations, consider splitting into per-period sibling tables:
- One Gravity Form per quarter (`leads-2026-Q1`, `leads-2026-Q2`, …)
- A hidden form field for the period, auto-populated at submission time
- A "merged-table" view (shipped in 4.1.62) that renders all sibling tables as one logical table
The merged-table renderer issues separate paginated queries per source form, so the largest single query stays bounded even as historical data accumulates indefinitely.
```
[gravity_table id="leads-merged"
  type="merged"
  source_forms="leads-2026-Q1,leads-2025-Q4,leads-2025-Q3"]
```
The visitor sees one table; the database sees three smaller queries that each fit in their indexes.
## Profiling and measurement
When `WP_DEBUG` is on, several measurements are logged automatically:
```
[gravity-tables] render: 234 rows, 1.2s, peak 8.4 MB
[gravity-tables] export complete: 24,318 rows, peak 5.2 MB, 1.8s
[gravity-tables] bulk action: approve, 487 entries, 2.1s
```
These four numbers (render time, peak memory, export memory, bulk-action throughput) are the data you need to size your hosting honestly. For sizing decisions, don't guess: turn on `WP_DEBUG`, run your worst-case operation, and read the log.
For deeper profiling (where exactly is the time going?), New Relic, Tideways, and XHGui all hook in cleanly. The plugin's call sites are conventionally named (`gt_render_*`, `gt_export_*`, `gt_bulk_*`), so they are easy to identify in flame graphs.
## Frontend rendering
The plugin ships zero JavaScript framework dependencies: no React, no Vue, no Angular. The frontend is roughly 18 KB of vanilla JavaScript (gzipped) plus the table-specific CSS variables. The bundle has not grown past 25 KB across any 4.x release.
For sites where every kilobyte counts, the table can be rendered fully server-side without the JS bundle by adding `interactive="false"`. You lose inline editing and live polling, but search/sort/filter still work via standard form-submit semantics.
```
[gravity_table id="42" interactive="false"]
```
This produces a 0-JS, plain-HTML table that’s fully indexable by search engines and works with JavaScript disabled.
## Summary checklist
Going through a performance review of an existing install? Walk through this in order:
- Pagination on? `per_page` set to a reasonable number (≤ 100 typical, ≤ 500 max).
- Object cache active? `wp_cache_get` resolves to a real backend (Redis, Memcached).
- Indexes present? Confirm `wp_gf_entry_meta` has the `(entry_id, meta_key)` compound index; add `(meta_key, meta_value(20))` if filtering heavily.
- Hooks slim? Heavy work on `gravity_tables_entry_created`, run async via `wp_schedule_single_event`.
- Calculated fields persisted? Reference real columns, not on-the-fly calculations, in sortable contexts.
- Top-N applied? For leaderboard-style views, use `top_n_count` instead of pagination tricks.
- Auto-refresh tuned? 30s+ interval; the ETag means most polls return 304.
- Big table split? Past 75,000 rows, consider per-period siblings via the merged-table view.
## Related
- REST API, the API counterpart inherits the same pagination + caching characteristics
- Bulk data flow release post, the streaming-export implementation walkthrough
- 4.2 line release post, Top-N (4.2.54) and merged-table (4.1.62) deep dives
- FAQ → large datasets, the short version of this page
- Hooks, `gravity_tables_entry_created` for async-friendly integration patterns