18 messages in org.postgresql.pgsql-hackers: Progress bar updates
From                        Sent On
Gregory Stark               Jul 18, 2006 11:35 am
Luke Lonergan               Jul 18, 2006 11:44 am
Dave Page                   Jul 18, 2006 1:08 pm
Andreas Pflug               Jul 18, 2006 5:12 pm
Neil Conway                 Jul 18, 2006 6:52 pm
Tom Lane                    Jul 18, 2006 8:23 pm
Josh Berkus                 Jul 18, 2006 9:24 pm
Greg Stark                  Jul 19, 2006 2:18 am
Hannu Krosing               Jul 19, 2006 2:33 am
Dave Page                   Jul 19, 2006 2:35 am
Andreas Pflug               Jul 19, 2006 5:23 am
Tom Lane                    Jul 19, 2006 7:33 am
Darcy Buskermolen           Jul 19, 2006 8:54 am
Andrew Hammond              Jul 19, 2006 10:29 am
Christopher Kings-Lynne     Jul 19, 2006 6:38 pm
Agent M                     Jul 19, 2006 7:40 pm
Csaba Nagy                  Jul 20, 2006 1:51 am
Luke Lonergan               Jul 20, 2006 8:36 am
Subject: Progress bar updates
From: Gregory Stark (gsst@mit.edu)
Date: Jul 18, 2006 11:35:33 am
List: org.postgresql.pgsql-hackers

Has anyone thought about what it would take to get progress bars from clients like pgadmin? (Or dare I even suggest psql:)

My first thought would be a message like CancelQuery which would cause the backend to peek into a static data structure and return a message that the client could parse and display something intelligent. Various commands would then stuff information into this data structure as they worked.

For a first cut this "data structure" could just be a float between 0 and 1. Or perhaps it should be two integers, a "current" and an "estimated final". That would let the client do more intelligent things when the estimates change for the length of the whole job.
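As a rough illustration of the "two integers" idea, here is a minimal C sketch. The names ProgressState, prog_current, and prog_estimated are invented for this example, not actual PostgreSQL symbols; the point is just that the client can derive a 0..1 fraction itself, and can react sensibly when the estimated total moves.

```c
#include <assert.h>

/* Hypothetical progress structure: the backend stuffs work counts in,
 * the client computes the completion fraction.  Illustrative only. */
typedef struct ProgressState
{
	long	prog_current;	/* units of work completed so far */
	long	prog_estimated;	/* current estimate of total units */
} ProgressState;

/* Derive a 0..1 completion fraction, guarding against a zero or
 * not-yet-known estimate. */
static double
progress_fraction(const ProgressState *ps)
{
	if (ps->prog_estimated <= 0)
		return 0.0;
	return (double) ps->prog_current / (double) ps->prog_estimated;
}
```

Keeping the two raw integers, rather than shipping a precomputed float, lets the client notice when the "estimated final" jumps and, say, restyle the bar instead of showing it running backwards.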

Later I could imagine elaborating into more complex structures for representing multi-step processes or even whole query plans. I also see it possibly being interesting to stuff this data structure into shared memory handled just like how Tom handled the "current command". That would let you see the other queries running on the server, how long they've been running, and estimates for how long they'll continue to run.
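The shared-memory variant might look something like the following sketch: a fixed-size array of per-backend slots, analogous to how the current query string is published for pg_stat_activity. All names here (BackendProgress, publish_progress, read_progress, MAX_BACKENDS) are hypothetical, and a real implementation would need to live in actual shared memory and think about locking; this only shows the shape of the idea.

```c
#include <assert.h>

#define MAX_BACKENDS 100		/* illustrative cap, not a real GUC */

/* Hypothetical per-backend progress slot; in a real implementation
 * this array would live in shared memory next to the activity data. */
typedef struct BackendProgress
{
	int		pid;			/* 0 means slot unused */
	long	current;		/* work units done */
	long	estimated;		/* estimated total work units */
} BackendProgress;

static BackendProgress progress_slots[MAX_BACKENDS];

/* Called by the working backend to publish its own progress. */
static void
publish_progress(int slot, int pid, long current, long estimated)
{
	progress_slots[slot].pid = pid;
	progress_slots[slot].current = current;
	progress_slots[slot].estimated = estimated;
}

/* Called by an observer (e.g. a stats view) to read another backend's
 * progress; returns 1 on success, 0 if the slot is unused. */
static int
read_progress(int slot, long *current, long *estimated)
{
	if (progress_slots[slot].pid == 0)
		return 0;
	*current = progress_slots[slot].current;
	*estimated = progress_slots[slot].estimated;
	return 1;
}
```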

I would suggest starting with utility functions like index builds or COPY, which would have to be specially handled anyway. Handling all optimizable queries in a single generic implementation seems like something to tackle only once the basic infrastructure is there and working for simple cases.
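For a command like COPY, the instrumentation could be as cheap as bumping a counter every N rows; the estimate might come from file size divided by average row width. Everything below (copy_progress_*, REPORT_EVERY, copy_rows) is an invented sketch, not PostgreSQL internals:

```c
#include <assert.h>

#define REPORT_EVERY 1000		/* update cost amortized over many rows */

/* Hypothetical counters a poller would read; illustrative only. */
static long copy_progress_current;
static long copy_progress_estimated;

static void
copy_rows(long total_rows_estimate)
{
	/* e.g. estimate = input file size / average row width */
	copy_progress_estimated = total_rows_estimate;

	for (long row = 1; row <= total_rows_estimate; row++)
	{
		/* ... read and insert one row here ... */

		if (row % REPORT_EVERY == 0)
			copy_progress_current = row;	/* cheap plain store */
	}
	copy_progress_current = total_rows_estimate;
}
```

Updating only every REPORT_EVERY rows keeps the overhead negligible in the common case where nobody is watching.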

Of course the estimates would be not much better than guesses. But if you want to say the feature isn't worth having because it won't be perfectly accurate, be prepared to swear that you've never looked at the "% complete" that modern ftp clients and web browsers display, even though they too are, of course, wildly inaccurate. They nonetheless provide the feedback the user desperately wants: reassurance that his job is making progress and isn't years away from finishing.