Infinite Lock/userlock wait during select count(*) from pg_wait_sampling_history; #18

Open
grin31 opened this issue Apr 8, 2020 · 1 comment

grin31 commented Apr 8, 2020

Good day.
A query against pg_wait_sampling_history makes the session wait indefinitely.
A query against pg_wait_sampling_current works fine.

config:
pg_wait_sampling.history_period | 100
pg_wait_sampling.history_size | 5000
pg_wait_sampling.profile_period | 100
pg_wait_sampling.profile_pid | on
pg_wait_sampling.profile_queries | on
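
For completeness, these GUCs are set in postgresql.conf, and the extension has to be preloaded so its collector background worker starts with the server (a minimal sketch, values as reported above):

# postgresql.conf
shared_preload_libraries = 'pg_wait_sampling'
pg_wait_sampling.history_period = 100     # ms between history samples
pg_wait_sampling.history_size = 5000      # size of the in-memory history ring buffer
pg_wait_sampling.profile_period = 100     # ms between profile samples
pg_wait_sampling.profile_pid = on         # aggregate the profile per PID
pg_wait_sampling.profile_queries = on     # aggregate the profile per query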

SELECT version();
PostgreSQL 10.11 (Ubuntu 10.11-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 64-bit

query:
[local]:5454 postgres@postgres=# select count(*) from pg_wait_sampling_history;

From pg_stat_activity:
-[ RECORD 18 ]---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
datid | 12974
datname | postgres
pid | 117180
usesysid | 10
usename | postgres
application_name | psql
client_addr | [null]
client_hostname | [null]
client_port | -1
backend_start | 2020-04-08 14:53:19.027164+03
xact_start | 2020-04-08 14:53:21.038274+03
query_start | 2020-04-08 14:53:21.038274+03
state_change | 2020-04-08 14:53:21.038276+03
wait_event_type | Lock
wait_event | userlock
state | active
backend_xid | [null]
backend_xmin | 32249100
query | select count(*) from pg_wait_sampling_history;
backend_type | client backend

[local]:5454 postgres@postgres=# SELECT * from pg_locks where pid=117180;
locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath
------------+------------+----------+--------+--------+------------+---------------+---------+--------+----------+--------------------+--------+-----------------+---------+----------
relation | 12974 | 9020314 | [null] | [null] | [null] | [null] | [null] | [null] | [null] | 10/788746 | 117180 | AccessShareLock | t | t
virtualxid | [null] | [null] | [null] | [null] | 10/788746 | [null] | [null] | [null] | [null] | 10/788746 | 117180 | ExclusiveLock | t | t
userlock | 3398742279 | [null] | [null] | [null] | [null] | [null] | 1 | 0 | 0 | 10/788746 | 117180 | ExclusiveLock | f | f
userlock | 3398742279 | [null] | [null] | [null] | [null] | [null] | 0 | 0 | 0 | 10/788746 | 117180 | ExclusiveLock | t | f
(4 rows)
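
The row to note is the ungranted userlock (granted = f): that is what the backend is waiting on, presumably held by the collector process. A generic way to find the holder of an ungranted lock, using only standard pg_locks columns (a sketch; 117180 is the pid from above, and on 9.6+ select pg_blocking_pids(117180); gives the answer directly):

select blocker.pid as blocker_pid, blocker.mode, blocker.granted
from pg_locks blocked
join pg_locks blocker
  on blocker.locktype = blocked.locktype
 and blocker.database is not distinct from blocked.database
 and blocker.classid  is not distinct from blocked.classid
 and blocker.objid    is not distinct from blocked.objid
 and blocker.objsubid is not distinct from blocked.objsubid
 and blocker.pid <> blocked.pid
where blocked.pid = 117180
  and not blocked.granted
  and blocker.granted;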

postgres@postgres=# SELECT relname from pg_class where relfilenode = 9020314;
result:
pg_wait_sampling_history
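
Worth noting: pg_locks.relation holds the table's OID, not its relfilenode; the lookup above only works because the two coincide for a table that has never been rewritten. The direct form (a sketch):

select 9020314::regclass;
-- or: select relname from pg_class where oid = 9020314;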

maksm90 (Collaborator) commented Aug 15, 2022

Hi @grin31!
I suppose this issue was resolved in #31.

From your output it looks like the collector process hangs while holding its own lock, presumably while working with the message queue: within the whole fragment executed under the collector lock, the manipulation of the message queue is the only blocking operation. The pg_stat_activity entry for the collector process would confirm this suggestion. Before that patch, such a hang could occur when a previous query to a pg_wait_sampling function was interrupted, e.g., by Ctrl+C. Now I don't see any reason for the collector to hang.
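
To confirm, one could inspect the collector's own pg_stat_activity row (a sketch; on PostgreSQL 10 the collector appears with backend_type = 'background worker' rather than under its own name):

select pid, backend_type, wait_event_type, wait_event, state
from pg_stat_activity
where backend_type <> 'client backend';

A collector stuck on the shared-memory queue would be expected to show wait_event_type = IPC with a wait_event such as MessageQueueSend or MessageQueueReceive, while still holding its userlock in pg_locks.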

@grin31, do you remember whether some previous pg_wait_sampling query was interrupted before the select count(*) from pg_wait_sampling_history? If so, I'll close this issue.
