Database performance drops
Context: with >2M rows in the lava_scheduler_app_testjob table and ~2k jobs in the queue (waiting to be scheduled), database performance drops significantly.
Analysis of the highest-cost queries with pg_stat_statements showed that most of their execution time is spent scanning TestJob entries to filter the ones accessible to the user who made the query. The database is also queried twice per page because of Django's pagination handling: once to count the items and once to fetch the page data.
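As a rough illustration of the double query (plain Python building SQL strings, not actual LAVA or Django code; the table name is taken from the report above), a Django-style paginator effectively issues a COUNT(*) to size the paginator and then a LIMIT/OFFSET SELECT for the page:

```python
def pagination_queries(table: str, page: int, per_page: int) -> list[str]:
    """Sketch of the two statements Django-style pagination issues per page.
    On a >2M-row table, both the COUNT(*) and a large OFFSET are expensive."""
    offset = (page - 1) * per_page
    return [
        # First query: count all matching rows to compute the page range.
        f"SELECT COUNT(*) FROM {table}",
        # Second query: fetch the page; OFFSET still scans and discards rows.
        f"SELECT * FROM {table} ORDER BY id LIMIT {per_page} OFFSET {offset}",
    ]

print(pagination_queries("lava_scheduler_app_testjob", 2, 25))
```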
Mitigation methods suggested by the Django/PostgreSQL communities include:
- timing out the initial count query
- row-number estimation
- switching to keyset pagination
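As a sketch of the last two options (assumed helper names, plain SQL strings, not LAVA code): row-number estimation reads PostgreSQL's planner statistics instead of running COUNT(*), and keyset pagination replaces OFFSET with a filter on the last key already shown:

```python
def estimated_count_sql(table: str) -> str:
    # Row-number estimation: pg_class.reltuples holds the planner's row
    # estimate, refreshed by VACUUM/ANALYZE - approximate but O(1) to read.
    return f"SELECT reltuples::bigint FROM pg_class WHERE relname = '{table}'"

def keyset_page_sql(table: str, last_id: int, per_page: int) -> str:
    # Keyset (seek) pagination: seek past the last id from the previous
    # page instead of using OFFSET, so the index scan starts at the right
    # position and no rows are read just to be discarded.
    return (f"SELECT * FROM {table} WHERE id < {last_id} "
            f"ORDER BY id DESC LIMIT {per_page}")
```

Keyset pagination gives stable per-page cost regardless of page depth, but it forgoes "jump to page N" navigation, so it usually pairs with an estimated count rather than an exact one.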
Are there mitigation methods recommended by the LAVA team?