Slow MySQL query when sorting by timestamp column with WHERE conditions

I’m working with a large MySQL table called Activity that has around 200 million rows. Here’s the table structure:

+---------------------+--------------+------+-----+---------+----------------+
| Field               | Type         | Null | Key | Default | Extra          |
+---------------------+--------------+------+-----+---------+----------------+
| ActivityId          | int(11)      | NO   | PRI | NULL    | auto_increment |
| DeviceId            | int(11)      | NO   | MUL | NULL    |                |
| ActionTypeId        | int(11)      | NO   | MUL | NULL    |                |
| Timestamp           | datetime(3)  | NO   | MUL | NULL    |                |
| SourceId            | int(11)      | YES  | MUL | NULL    |                |
...
| ExtraData           | text         | YES  |     | NULL    |                |
+---------------------+--------------+------+-----+---------+----------------+

I’m having trouble with this query that takes more than 5 minutes to run:

SELECT A.* 
FROM Activity A
WHERE (A.ActionTypeId != 4 OR (A.ActionTypeId = 4 AND A.SourceId IS NOT NULL)) 
AND A.ActionTypeId != 27 AND A.ActionTypeId != 12  
AND A.DeviceId in (1001, 1002, 1003, 1004)
ORDER BY A.Timestamp DESC
LIMIT 0, 200

The explain plan shows it’s examining about 180k rows with high read cost. When I remove the ORDER BY A.Timestamp DESC part, the same query completes in just 0.1 seconds. I already have an index on the Timestamp column but it doesn’t seem to help much. What could be causing this performance issue and how can I optimize it? Would creating a compound index be useful here?

Been dealing with this exact issue for years on tables this size. Your problem is that complex OR condition combined with the sort - MySQL can’t use indexes efficiently when it has to juggle that much logic at once.

What worked for me: completely restructure the query. Don’t make MySQL handle everything at once - break it into smaller, index-friendly chunks.
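For instance, one hedged sketch of that chunking, using the device ids and filters from the question: each branch is a tight single-device index range (assuming an index on (DeviceId, Timestamp)), and the outer ORDER BY only has to merge at most 4 × 200 pre-sorted rows. Note the OR condition simplifies to (ActionTypeId != 4 OR SourceId IS NOT NULL) since ActionTypeId is NOT NULL.

```sql
-- Each branch probes one device; the outer sort merges at most 800 rows.
(SELECT * FROM Activity WHERE DeviceId = 1001 AND ActionTypeId NOT IN (12, 27)
   AND (ActionTypeId != 4 OR SourceId IS NOT NULL)
 ORDER BY Timestamp DESC LIMIT 200)
UNION ALL
(SELECT * FROM Activity WHERE DeviceId = 1002 AND ActionTypeId NOT IN (12, 27)
   AND (ActionTypeId != 4 OR SourceId IS NOT NULL)
 ORDER BY Timestamp DESC LIMIT 200)
UNION ALL
(SELECT * FROM Activity WHERE DeviceId = 1003 AND ActionTypeId NOT IN (12, 27)
   AND (ActionTypeId != 4 OR SourceId IS NOT NULL)
 ORDER BY Timestamp DESC LIMIT 200)
UNION ALL
(SELECT * FROM Activity WHERE DeviceId = 1004 AND ActionTypeId NOT IN (12, 27)
   AND (ActionTypeId != 4 OR SourceId IS NOT NULL)
 ORDER BY Timestamp DESC LIMIT 200)
ORDER BY Timestamp DESC
LIMIT 200;
```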

Consider partitioning your Activity table by timestamp if you’re usually querying recent data. Monthly or weekly partitions can massively cut down what MySQL needs to scan.
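A rough sketch of monthly RANGE partitioning, if you can afford the maintenance window. One caveat from MySQL’s rules: the partitioning column must appear in every unique key, so the primary key has to be widened first. The boundary dates below are placeholders.

```sql
-- The partitioning column must be part of every unique key,
-- so widen the primary key first.
ALTER TABLE Activity
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (ActivityId, Timestamp);

-- Monthly RANGE partitions (boundary dates are placeholders).
ALTER TABLE Activity
  PARTITION BY RANGE (TO_DAYS(Timestamp)) (
    PARTITION p2024_01 VALUES LESS THAN (TO_DAYS('2024-02-01')),
    PARTITION p2024_02 VALUES LESS THAN (TO_DAYS('2024-03-01')),
    PARTITION p_future VALUES LESS THAN MAXVALUE
  );
```

On a 200M-row table both ALTERs are full rebuilds, so test on a copy first.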

Also check your server config - sort_buffer_size and read_rnd_buffer_size are probably too small. With 200M rows, defaults won’t cut it. I bumped mine to 8MB and 4MB for similar workloads.
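Those settings can be tried at runtime before persisting them in my.cnf; the 8MB/4MB values are the ones from this answer, not universal recommendations.

```sql
-- Session-level first, to test against the slow query in isolation.
SET SESSION sort_buffer_size      = 8 * 1024 * 1024;  -- 8MB
SET SESSION read_rnd_buffer_size  = 4 * 1024 * 1024;  -- 4MB

-- If it helps, apply globally (affects new connections) and persist in my.cnf.
SET GLOBAL sort_buffer_size       = 8 * 1024 * 1024;
SET GLOBAL read_rnd_buffer_size   = 4 * 1024 * 1024;
```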

The composite index idea’s good, but don’t go overboard. MySQL can only use one index per table most of the time, so focus on your most common query patterns instead of trying to optimize every possible combo.

Yeah, classic problem with big tables. Your timestamp index is useless here because MySQL filters first, then sorts. Create a composite index on (DeviceId, ActionTypeId, Timestamp) - put DeviceId first since it’s the most selective filter. That OR condition is also killing your performance - MySQL can’t optimize those well.
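The suggested index as DDL - the index name here is made up:

```sql
-- DeviceId first (equality/IN filter), then ActionTypeId, then Timestamp.
CREATE INDEX idx_device_action_ts
    ON Activity (DeviceId, ActionTypeId, Timestamp);
```

One caveat worth checking with EXPLAIN: because the query matches several DeviceId and ActionTypeId values, MySQL reads multiple index ranges and may still need a sort for the global ORDER BY Timestamp DESC, so compare this against a plain (DeviceId, Timestamp) index before settling.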

I’ve dealt with similar table sizes - the problem is mixing OR conditions with negative filters. MySQL’s optimizer chokes on this combo, especially with sorting involved. Don’t fight the optimizer, work with it.

Try a covering index on (DeviceId, Timestamp DESC, ActionTypeId, SourceId). This lets MySQL walk each device in timestamp order and evaluate the filters from the index alone - though note that with SELECT *, it still has to look up the full rows for the 200 matches it keeps.

But here’s the real fix: ditch that OR logic. It forces MySQL to evaluate a complex condition on every candidate row. Break it into EXISTS subqueries or split it into separate queries with UNION ALL. I’ve seen this beat a single complex WHERE clause on big datasets.

One more thing - check your MySQL version. Before 8.0, MySQL parses but ignores DESC in index definitions, which might explain why dropping the ORDER BY helps so much. If you’re stuck on an older version, just create (DeviceId, Timestamp) and let the optimizer scan it in reverse.
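A sketch of the two index options described above (names invented), plus a deferred-join variant - useful because the question selects A.*, so even a “covering” index only covers the filter/sort phase, and the full rows are fetched just for the final 200 ids:

```sql
-- MySQL 8.0+: DESC is honored in the index definition.
CREATE INDEX idx_cover
    ON Activity (DeviceId, Timestamp DESC, ActionTypeId, SourceId);

-- Pre-8.0 fallback: ascending index, scanned in reverse by the optimizer.
CREATE INDEX idx_device_ts ON Activity (DeviceId, Timestamp);

-- Deferred join: resolve the 200 ids from the index alone, then fetch rows.
-- (InnoDB secondary indexes carry the primary key, so the subquery is covered.)
SELECT A.*
FROM Activity A
JOIN (SELECT ActivityId
      FROM Activity
      WHERE DeviceId IN (1001, 1002, 1003, 1004)
        AND ActionTypeId NOT IN (12, 27)
        AND (ActionTypeId != 4 OR SourceId IS NOT NULL)
      ORDER BY Timestamp DESC
      LIMIT 200) ids USING (ActivityId)
ORDER BY A.Timestamp DESC;
```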

The performance drop happens because MySQL can’t efficiently use your timestamp index with complex WHERE conditions. With 200 million rows, the query optimizer likely resorted to a filesort after filtering, which severely impacts performance. I encountered a similar issue with a 150 million row table and resolved it by creating a composite index tailored to my query pattern. Consider using (DeviceId, ActionTypeId, Timestamp), as DeviceId has high selectivity and you’re sorting by Timestamp. Additionally, your OR condition hampers index optimization. It would be better to rewrite it as a UNION - one query for ActionTypeId != 4 and another for ActionTypeId = 4 with SourceId IS NOT NULL. This approach generally outperforms complex OR conditions in large tables.
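The UNION rewrite described above could look like this sketch. UNION ALL is safe here because the two branches are mutually exclusive (ActionTypeId != 4 vs. = 4), so no duplicates can appear, and each branch keeps its own LIMIT so the outer sort merges at most 400 rows:

```sql
(SELECT * FROM Activity
 WHERE DeviceId IN (1001, 1002, 1003, 1004)
   AND ActionTypeId NOT IN (4, 12, 27)   -- branch 1: ActionTypeId != 4
 ORDER BY Timestamp DESC LIMIT 200)
UNION ALL
(SELECT * FROM Activity
 WHERE DeviceId IN (1001, 1002, 1003, 1004)
   AND ActionTypeId = 4                  -- branch 2: = 4 needs a source
   AND SourceId IS NOT NULL
 ORDER BY Timestamp DESC LIMIT 200)
ORDER BY Timestamp DESC
LIMIT 200;
```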

You’re hitting the classic filesort bottleneck - MySQL filters with a scan and then sorts the result, instead of reading rows newest-first from your timestamp index. Try forcing the index with USE INDEX (timestamp_index_name) before rebuilding anything - just verify with EXPLAIN that it actually helps, since forcing the timestamp index can mean scanning lots of non-matching rows.
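Applying that hint to the question’s query - the index name below is a placeholder, check SHOW INDEX FROM Activity for the real one:

```sql
SELECT A.*
FROM Activity A USE INDEX (idx_timestamp)   -- placeholder name
WHERE (A.ActionTypeId != 4 OR (A.ActionTypeId = 4 AND A.SourceId IS NOT NULL))
  AND A.ActionTypeId != 27 AND A.ActionTypeId != 12
  AND A.DeviceId IN (1001, 1002, 1003, 1004)
ORDER BY A.Timestamp DESC
LIMIT 0, 200;
```

This makes MySQL read newest-first and discard non-matching rows as it goes - fast when matches are common among recent rows, slow when they’re rare.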

Everyone’s obsessing over index optimization but you’re fighting a losing battle with 200M rows and complex conditions.

I’ve watched this pattern kill performance at multiple companies. The problem isn’t just OR logic or missing indexes - you’re making MySQL do heavy lifting every time someone runs this query.

Stop trying to perfect one massive query. Automate the data prep instead.

Use Latenode to run this query in background batches. Pull recent activity data hourly and dump the results into a summary table (MySQL has no native materialized views). Your frontend hits pre-processed data instead of the raw 200M-row table.
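What the batch job maintains could be as simple as this sketch - the table name is made up, and the pipeline just runs the refresh statement on a schedule:

```sql
-- Pre-filtered copy with an index matching the read pattern.
CREATE TABLE ActivityRecent LIKE Activity;
ALTER TABLE ActivityRecent ADD INDEX idx_device_ts (DeviceId, Timestamp);

-- Hourly refresh: copy only rows newer than the last sync,
-- with the question's filters already applied.
SET @last_sync := (SELECT COALESCE(MAX(Timestamp), '1970-01-01')
                   FROM ActivityRecent);
INSERT INTO ActivityRecent
SELECT * FROM Activity
WHERE Timestamp > @last_sync
  AND ActionTypeId NOT IN (12, 27)
  AND (ActionTypeId != 4 OR SourceId IS NOT NULL);
```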

I did this for our user activity dashboard. Latenode grabs data for each device group separately, applies your filtering logic, and maintains rolling datasets. Queries that used to timeout now return in under 100ms.

You can get fancy - different pipelines for different time ranges. Recent data every 15 minutes, older stuff nightly. Latenode handles scheduling and keeps everything synced.

Best part? Your application code doesn’t change. Just point it at the optimized tables Latenode maintains behind the scenes.

I’ve hit this exact problem multiple times with large datasets. The real issue isn’t your index - it’s MySQL’s query planner working against you with those complex conditions.

Here’s what’s happening: MySQL sees your WHERE clause and decides to filter first, then sort. With 200M rows, that means churning through a huge amount of data and finishing with a filesort - the timestamp index never gets used for the ORDER BY at all.
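You can confirm this by running EXPLAIN on the original query - “Using filesort” in the Extra column is the tell:

```sql
EXPLAIN
SELECT A.*
FROM Activity A
WHERE (A.ActionTypeId != 4 OR (A.ActionTypeId = 4 AND A.SourceId IS NOT NULL))
  AND A.ActionTypeId != 27 AND A.ActionTypeId != 12
  AND A.DeviceId IN (1001, 1002, 1003, 1004)
ORDER BY A.Timestamp DESC
LIMIT 0, 200;
-- Check which key is chosen and whether Extra shows "Using filesort".
```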

Ditch the composite index headaches and automate this instead. Use Latenode to pull data in smaller chunks throughout the day. Set it to grab recent activity every few minutes and dump it into a separate fast table.

I built something similar for our analytics team. Latenode runs queries during off-peak hours, processes results, and fills a denormalized table built for these exact queries. Queries that took 5+ minutes now run in milliseconds.

You can set up multiple pipelines - one for each device group or action type. Latenode handles scheduling and errors automatically.

Best part? No app code changes or schema modifications. Just point your queries at the optimized tables that Latenode keeps updated.