Hans-Jürgen Schönig

  •
    The table has grown considerably, even though the data inside the table is exactly the same. The reason for this problem is called alignment and can be found deep inside the code. The theory is as follows: a CPU has a hard time if a field does not start at a multiple of the CPU word size. Therefore, PostgreSQL aligns data physically accordingly.
    The most important point here is that it can make sense to group columns with similar data types next to each other. Of course, the outcome and the potential difference in size greatly depend on the content. If "abc" was used instead of "abcd" in this example, the results would not have shown any difference; both versions of the table would have had a fixed size of 498 MB.
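    The effect is easy to reproduce by comparing two otherwise identical tables whose columns are ordered differently. The sketch below is my own illustration (table names, column layout, and row count are assumptions, not from the book); grouping the 8-byte columns first avoids most of the padding, so the second table usually comes out smaller:
    CREATE TABLE t_padded (a int2, b int8, c int2, d int8);  -- types interleaved
    CREATE TABLE t_packed (b int8, d int8, a int2, c int2);  -- grouped by size
    INSERT INTO t_padded SELECT 1, 2, 3, 4 FROM generate_series(1, 1000000);
    INSERT INTO t_packed SELECT 2, 4, 1, 3 FROM generate_series(1, 1000000);
    SELECT pg_size_pretty(pg_relation_size('t_padded')) AS padded,
           pg_size_pretty(pg_relation_size('t_packed')) AS packed;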
  •
    A table in PostgreSQL can hold up to 1,600 columns.
  •
    SELECT schemaname, relname, seq_scan, seq_tup_read,
    idx_scan, seq_tup_read / seq_scan
    FROM pg_stat_user_tables
    WHERE seq_scan > 0
    ORDER BY seq_tup_read DESC;
    The query provides us with schemaname and relname to identify the table inside the database. The seq_scan field returns the number of times the table has been read sequentially, and seq_tup_read tells us how many rows those sequential scans had to read. The idx_scan field informs us about the number of index scans, and finally, the average number of rows read per sequential scan is displayed.
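    The same view can also be used to flag tables that are probably missing an index: many sequential scans reading many rows each, but hardly any index scans. The filter and the alias below are my own addition, not part of the quoted query:
    SELECT schemaname, relname, seq_scan,
           seq_tup_read / seq_scan AS avg_rows_per_seq_scan,
           idx_scan
    FROM pg_stat_user_tables
    WHERE seq_scan > 0
      AND coalesce(idx_scan, 0) = 0
    ORDER BY seq_tup_read DESC;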
  •
    test=# \d pg_stat_user_indexes
    View "pg_catalog.pg_stat_user_indexes"
  •
    Prepared queries try to solve exactly that problem. The idea is that the backend process will cache the plan and reuse it in case it is needed again. The beauty is that sending the parameters instead of the complete query can be enough. Usually, prepared statements happen behind the scenes (typically through the driver, provided that the driver supports them). However, you can also prepare queries manually.
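    A manual round trip looks roughly like this (the statement name and the table are made up for illustration):
    PREPARE get_user (int) AS
        SELECT * FROM t_user WHERE id = $1;
    EXECUTE get_user(10);   -- only the parameter is sent; the cached plan is reused
    EXECUTE get_user(42);
    DEALLOCATE get_user;    -- remove the prepared statement from the session again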
  •
    Logically, this transaction has to wait for the first transaction. In this case, it is obvious what is going on, but how can it be checked if you don't know?
    The first thing to check is always pg_stat_activity. It will reveal what is going on and which operations are active:
    test=# \x
    Expanded display is on.
    test=# SELECT pid, query, waiting
    FROM pg_stat_activity;
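    Note that the waiting column only exists in older releases; from PostgreSQL 9.6 onward it has been replaced by wait_event_type and wait_event, and pg_blocking_pids() returns the blocking backends directly. A rough equivalent on a current version:
    SELECT pid, query, wait_event_type, wait_event,
           pg_blocking_pids(pid) AS blocked_by
    FROM pg_stat_activity
    WHERE wait_event_type = 'Lock';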
  •
    The following query detects locks that have not been granted:
    test=# SELECT locktype, granted, transactionid, mode
    FROM pg_locks
    WHERE granted = 'f';
  •
    test=# SELECT granted, transactionid, mode, pid
    FROM pg_locks
    WHERE transactionid = 1812;
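    To see which session actually holds the conflicting lock, pg_locks can be joined with pg_stat_activity on pid; this join is a generic sketch, not a quote from the book:
    SELECT l.pid, l.mode, l.granted, a.query
    FROM pg_locks AS l
    JOIN pg_stat_activity AS a ON a.pid = l.pid
    WHERE l.transactionid = 1812;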
  •
    test=# CREATE FUNCTION mysum(int, int)
    RETURNS int AS ' SELECT $1 + $2 '
    LANGUAGE 'sql';
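    Calling the function is then an ordinary SELECT; the call below is mine, but the result follows directly from the function body:
    test=# SELECT mysum(10, 20);
     mysum
    -------
        30
    (1 row)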
  •
    SELECT schemaname, relname, n_tup_ins,
    n_tup_upd, n_tup_hot_upd, n_tup_del
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd DESC;
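    Since n_tup_hot_upd is a subset of n_tup_upd, the share of HOT updates can be derived directly; the percentage expression below is my own addition:
    SELECT schemaname, relname, n_tup_upd, n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 2) AS hot_ratio_pct
    FROM pg_stat_user_tables
    WHERE n_tup_upd > 0
    ORDER BY n_tup_upd DESC;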