■Docker Container Logs
Logs of the containers on the API Proxy server
Log of the fj_kong container
Log in to the API Proxy server and run the following command.
$ sudo docker logs fj_kong
This output appears only when an error has occurred.
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:197: [postgres error] could not retrieve server_version: receive_message: failed to get type: closed
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/kong/init.lua:197: in function 'init'
init_by_lua:3: in main chunk
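A trace like the one above means that Kong's init phase could not reach its PostgreSQL backend (the fj_kong_db container), so a first step is usually to isolate the error-level lines. As a minimal sketch, the filter below runs against a hypothetical excerpt written to a temporary file; on a live system you would pipe `sudo docker logs fj_kong 2>&1` into the same `grep` (the `2>&1` is needed because `docker logs` replays the container's stderr stream on stderr).

```shell
# Hypothetical excerpt of the fj_kong log shown above (for illustration only).
cat > /tmp/fj_kong_sample.log <<'EOF'
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:197: [postgres error] could not retrieve server_version
stack traceback:
init_by_lua:3: in main chunk
EOF
# Keep only the error-level lines. On a live system, use:
#   sudo docker logs fj_kong 2>&1 | grep '\[error\]'
grep '\[error\]' /tmp/fj_kong_sample.log
```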
Log of the fj_kong_db container
Log in to the API Proxy server and run the following command.
$ sudo docker logs fj_kong_db
LOG: database system was shut down at 2018-09-05 08:52:22 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: autovacuum launcher started
LOG: database system is ready to accept connections
ERROR: syntax error at or near "use" at character 1
STATEMENT: use kong;
ERROR: syntax error at or near "desc" at character 1
STATEMENT: desc tables;
ERROR: function max(uuid) does not exist at character 8
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
STATEMENT: select max(id) from services;
LOG: received smart shutdown request
LOG: autovacuum launcher shutting down
FATAL: terminating connection due to unexpected postmaster exit
LOG: database system was interrupted; last known up at 2018-09-06 11:20:09 UTC
LOG: database system was not properly shut down; automatic recovery in progress
LOG: invalid record length at 0/16ABBB8: wanted 24, got 0
LOG: redo is not required
LOG: MultiXact member wraparound protections are now enabled
LOG: autovacuum launcher started
LOG: database system is ready to accept connections
ERROR: duplicate key value violates unique constraint "upstreams_name_key"
DETAIL: Key (name)=(address.v1.service) already exists.
STATEMENT: INSERT INTO upstreams("healthchecks", "hash_on", "id", "hash_on_cookie_path", "name", "hash_fallback", "slots") VALUES('{"active":{"unhealthy":{"http_statuses":[429,404,500,501,502,503,504,505],"tcp_failures":0,"timeouts":0,"http_failures":0,"interval":0},"http_path":"\/","timeout":1,"healthy":{"http_statuses":[200,302],"interval":0,"successes":0},"concurrency":10},"passive":{"unhealthy":{"http_failures":0,"http_statuses":[429,500,503],"tcp_failures":0,"timeouts":0},"healthy":{"http_statuses":[200,201,202,203,204,205,206,207,208,226,300,301,302,303,304,305,306,307,308],"successes":0}}}', 'none', '2fe0d850-6b28-4c60-8662-2a98609a9cc7', '/', 'address.v1.service', 'none', 10000) RETURNING *
ERROR: relation "information_schema.ratelimiting_metrics" does not exist at character 15
STATEMENT: select * from information_schema.ratelimiting_metrics;
ERROR: syntax error at or near "." at character 1
STATEMENT: .schema select * from information_schema.ratelimiting_metrics;
ERROR: relation "statsd" does not exist at character 15
STATEMENT: select * from statsd;
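In PostgreSQL server logs like the above, each ERROR line is followed by the STATEMENT that triggered it, so the two are best read together. As a minimal sketch, `grep -A1` pairs them; the sample file below is a hypothetical excerpt, and on a live system you would pipe `sudo docker logs fj_kong_db 2>&1` into the same filter.

```shell
# Hypothetical excerpt of the fj_kong_db log shown above (for illustration only).
cat > /tmp/fj_kong_db_sample.log <<'EOF'
LOG: database system is ready to accept connections
ERROR: syntax error at or near "use" at character 1
STATEMENT: use kong;
LOG: autovacuum launcher started
EOF
# Show each ERROR together with the STATEMENT that caused it. On a live system:
#   sudo docker logs fj_kong_db 2>&1 | grep -A1 '^ERROR'
grep -A1 '^ERROR' /tmp/fj_kong_db_sample.log
```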
Log of the fj_pgpool2 container
Log in to the API Proxy server and run the following command.
$ sudo docker logs fj_pgpool2
[ OK ] Started Permit User Sessions.
[ OK ] Started Getty on tty1.
Starting Getty on tty1...
[ OK ] Reached target Network.
[ OK ] Started Pgpool-II.
Starting Pgpool-II...
[ OK ] Reached target Network is Online.
[ OK ] Started Login Service.
[ OK ] Stopped Getty on tty1.
[ OK ] Started Getty on tty1.
Starting Getty on tty1...
(the Stopped/Started Getty on tty1 sequence repeats several times)
[ TIME ] Timed out waiting for device dev-ttyS0.device.
[DEPEND] Dependency failed for Serial Getty on ttyS0.
[ OK ] Reached target Login Prompts.
[ OK ] Reached target Multi-User System.
Starting Update UTMP about System Runlevel Changes...
[ OK ] Started Update UTMP about System Runlevel Changes;
To check the pgpool log itself, run the following command.
$ sudo docker exec fj_pgpool2 journalctl -xe -u pgpool
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 68: LOG: failover: set new primary node: 0
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 68: LOG: failover: set new master node: 0
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 70: LOG: new IPC connection received
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 70: LOG: received the failover indication from Pgpool-II on IPC interface
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 70: LOG: watchdog is informed of failover end by the main process
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 68: LOG: failback done. reconnect host 172.16.3.1(5432)
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 121: LOG: worker process received restart request
Nov 09 00:51:38 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:38: pid 120: LOG: restart request received in pcp child process
Nov 09 00:51:38 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:38: pid 68: LOG: PCP child 120 exits with status 0 in failover()
Nov 09 00:51:38 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:38: pid 68: LOG: fork a new PCP child pid 169 in failover()
Nov 09 00:51:38 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:38: pid 68: LOG: worker child process with pid: 121 exits with status 256
Nov 09 00:51:38 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:38: pid 68: LOG: fork a new worker child process with pid: 170
Nov 09 00:52:12 190bdab7dc0f pgpool[68]: 2018-11-09 00:52:12: pid 70: LOG: new watchdog node connection is received from "172.16.3.2:12504"
Nov 09 00:52:12 190bdab7dc0f pgpool[68]: 2018-11-09 00:52:12: pid 70: LOG: new node joined the cluster hostname:"172.16.3.2" port:9000 pgpool_port:9999
Nov 09 00:52:12 190bdab7dc0f pgpool[68]: 2018-11-09 00:52:12: pid 70: LOG: new outbound connection to 172.16.3.2:9000
Nov 09 00:52:13 190bdab7dc0f pgpool[68]: 2018-11-09 00:52:13: pid 70: LOG: adding watchdog node "172.16.3.2:9999 Linux e934363589f3" to the standby list
Nov 09 00:52:59 190bdab7dc0f pgpool[68]: 2018-11-09 00:52:59: pid 76: LOG: watchdog: lifecheck started
Nov 09 05:34:07 190bdab7dc0f pgpool[68]: 2018-11-09 05:34:07: pid 166: LOG: selecting backend connection
Nov 09 05:34:07 190bdab7dc0f pgpool[68]: 2018-11-09 05:34:07: pid 166: DETAIL: failback event detected, discarding existing connections
Nov 09 05:39:08 190bdab7dc0f pgpool[68]: 2018-11-09 05:39:08: pid 68: LOG: child process with pid: 166 exits with status 256
Nov 09 05:39:08 190bdab7dc0f pgpool[68]: 2018-11-09 05:39:08: pid 68: LOG: fork a new child process with pid: 195
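When reviewing pgpool output like the above, the failover and failback entries are usually what matters, and they can be narrowed with a simple pattern. As a minimal sketch, the filter below runs against a hypothetical excerpt; on a live system you would pipe the `journalctl` command shown earlier into the same `grep`.

```shell
# Hypothetical excerpt of the pgpool journal output shown above.
cat > /tmp/fj_pgpool2_sample.log <<'EOF'
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 68: LOG: failover: set new primary node: 0
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 70: LOG: new IPC connection received
Nov 09 00:51:37 190bdab7dc0f pgpool[68]: 2018-11-09 00:51:37: pid 68: LOG: failback done. reconnect host 172.16.3.1(5432)
EOF
# Narrow the journal to failover/failback events. On a live system:
#   sudo docker exec fj_pgpool2 journalctl -u pgpool | grep -E 'failover|failback'
grep -E 'failover|failback' /tmp/fj_pgpool2_sample.log
```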
Log rotation
The log file size and number of files for the Docker container logs are as follows.
Log file size: 200 MB
Number of files: 4 (including the latest file)
■Logs Other Than Docker Container Logs
The logs output by API Proxy are as follows.

| Output file | Remarks |
|---|---|
| /var/FJSGHD/kong/logs/access.log | Access log (standard Kong output) |
| /var/FJSGHD/kong/logs/error.log | API Proxy log (standard Kong output) |
| /var/FJSGHD/kong/logs/admin_access.log | Admin API access log (standard Kong output) |
The log file size and number of files for the above logs are as follows (see Note).
Log file size: 10 MB
Number of log files: 31 (including the latest file)
Note: The size of each API Proxy log file is checked periodically. If the file size exceeds 10 MB at the time of the check, log rotation is performed, once per day.
Note: The following entries appear in the access log (standard Kong output). They are caused by the periodic health checks from the load balancer and do not indicate a problem with the product.
192.168.4.2 - - [05/Feb/2019:07:01:24 +0000] "GET /api-proxy-healthcheck HTTP/1.0" 404 58 "-" "-"
192.168.4.2 - - [05/Feb/2019:07:01:54 +0000] "GET /api-proxy-healthcheck HTTP/1.0" 404 58 "-" "-"
192.168.4.2 - - [05/Feb/2019:07:02:24 +0000] "GET /api-proxy-healthcheck HTTP/1.0" 404 58 "-" "-"
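Because these benign entries recur at a fixed interval, they can drown out the lines you actually want to review. As a minimal sketch, the commands below count them in a hypothetical excerpt written to a temporary file; the same `grep` patterns work directly on /var/FJSGHD/kong/logs/access.log, and inverting the match with `-v` hides the health-check noise instead.

```shell
# Hypothetical excerpt of the access log shown above (for illustration only).
cat > /tmp/access_sample.log <<'EOF'
192.168.4.2 - - [05/Feb/2019:07:01:24 +0000] "GET /api-proxy-healthcheck HTTP/1.0" 404 58 "-" "-"
192.168.4.2 - - [05/Feb/2019:07:01:54 +0000] "GET /api-proxy-healthcheck HTTP/1.0" 404 58 "-" "-"
192.168.4.2 - - [05/Feb/2019:07:02:24 +0000] "GET /api-proxy-healthcheck HTTP/1.0" 404 58 "-" "-"
EOF
# Count the health-check entries. To hide them instead, use grep -v:
#   grep -v '/api-proxy-healthcheck' /var/FJSGHD/kong/logs/access.log
grep -c '/api-proxy-healthcheck' /tmp/access_sample.log
```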