---
title: "Troubleshooting 4.x upgrades"
linkTitle: "Troubleshooting upgrades"
weight: 50
aliases:
  -
description: >
  What to do when CHT 4.x upgrades don't work as planned
relatedContent: >
  hosting/4.x/data-migration
---

4.0.0 was released in November of 2022, so 4.x is now well into a mature stage, and Medic has learned a number of important lessons on how to unstick 4.x upgrades that get stuck. Below are some specific tips as well as general practices on upgrading 4.x.

{{% pageinfo %}}
All tips apply to both [Docker]({{< relref "hosting/4.x/production/docker" >}}) and [Kubernetes]({{< relref "hosting/4.x/production/kubernetes" >}}) based deployments unless otherwise specified.

All upgrades are expected to succeed without issue. Do not attempt any fixes unless you actively have a problem upgrading.
{{% /pageinfo %}}

## Considerations

When troubleshooting, first make sure that:

* Backups exist and restores have been tested
* Extra disk space is available (up to 5x!) - see the quick check sketched below
* The upgrade has been tested on a development instance with production data
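
A minimal sketch of the disk space check, assuming a default Docker install where CouchDB data lives under `/srv` - the paths are assumptions and may differ on your host:

```shell
# Free space on the volume holding CouchDB data
df -h /srv

# Size of the CouchDB data itself - upgrades can temporarily need up to 5x this
du -sh /srv/couchdb
```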

## A go-to fix: restart

A safe fix for any upgrade getting stuck is to restart all services. Any views that were being re-indexed will be picked up where they left off without losing any work. This should be your first step when troubleshooting a stuck upgrade.

If you're able to, after a restart go back into the admin web GUI and try the upgrade again. Consider trying this at least twice.
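
For example, restarting all services might look like the following. This is a sketch only - the compose file names and the Kubernetes namespace are assumptions that vary by deployment:

```shell
# Docker: restart every service defined in your CHT compose files
docker compose -f cht-core.yml -f cht-couchdb.yml restart

# Kubernetes: restart all deployments in the CHT namespace
kubectl -n cht rollout restart deployment
```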

## CHT 4.0.x - 4.3.x: CouchDB crashes

**[Issue #9286](https://github.com/medic/cht-core/issues/9286)**: Starting an upgrade that involves view indexing can cause CouchDB to crash on large databases (>30m docs). The upgrade will fail and you will see the logs below when you have this issue.

HAProxy:

CouchDB:

**Fix:**

1. Check that all the indexes are warmed by loading them one by one in Fauxton, or from the command line as sketched below.
2. Restart all services, then **retry** the upgrade from the Admin GUI - do not cancel and start a new upgrade.
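
A minimal sketch of warming views without Fauxton - the credentials, port and view names here are assumptions and will differ per deployment:

```shell
# Querying a view with limit=1 makes CouchDB finish building that view's index
curl -s 'http://admin:password@localhost:5984/medic/_design/medic/_view/contacts_by_depth?limit=1'
curl -s 'http://admin:password@localhost:5984/medic/_design/medic-client/_view/reports_by_date?limit=1'
```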

## CHT 4.0.0 - 4.2.2: view indexing can become stuck after indexing is finished

**[Issue #9617](https://github.com/medic/cht-core/issues/9617):** An upgrade that involves view indexing can become stuck even after indexing is finished.

The upgrade process stalls while trying to index staged views:

![CHT Core admin UI showing upgrade progress bar stalled at 4%](stalled-upgrade.png)

**Fix:**

Unfortunately, the workaround is manual and very technical and involves:

* The admin upgrade page will say that the upgrade was interrupted; click retry upgrade.
* Depending on the state of the database, you might see view indexing again. Depending on how many docs need to be indexed, indexing might get stuck again. Go back to step 1 if that happens.
* Eventually, when indexing jobs are short enough not to trigger a request hang, you will get the button to complete the upgrade.

## CHT 4.0.1 - 4.9.0: CouchDB restart causes all services to go down

**Note** - This is a Docker only issue.

**[Issue #9284](https://github.com/medic/cht-core/issues/9284)**: A CouchDB restart in single node Docker takes down the whole instance. The upgrade will fail and you will see the logs below when you have this issue.

HAProxy reports `NOSRV` errors:

```shell
<150>Jul 25 18:11:03 haproxy[12]: 172.18.0.9,<NOSRV>,503,0,1001,0,GET,/,-,admin,'-',241,-1,-,'-'
```

nginx reports:

```shell
2024/07/25 18:40:28 [error] 43#43: *5757 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1,
```

**Fix:** Restart all services.
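
To confirm you are hitting this issue, tail the HAProxy and nginx logs and look for the errors above. A sketch, assuming default CHT compose container names:

```shell
# Find your actual container names first
docker ps --format '{{.Names}}'

# Then tail the relevant logs
docker logs --tail 50 cht_haproxy_1
docker logs --tail 50 cht_nginx_1
```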

## CHT 4.x.x upgrade to 4.x.x - no more free disk space

**Issue\*:** CouchDB is crashing during the upgrade. The upgrade will fail and you will see the logs below when you have this issue. While there are two log scenarios, both have the same fix.

CouchDB logs, scenario 1:

```shell
[error] 2024-11-04T20:42:37.275307Z [email protected] <0.29099.2438> -------- rexi_server: from: [email protected](<0.3643.2436>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}]
[error] 2024-11-04T20:42:37.275303Z [email protected] <0.10933.2445> -------- rexi_server: from: [email protected](<0.3643.2436>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,map_fold,3,[{file,"src/couch_mrview.erl"},{line,526}]},{couch_bt_engine,include_reductions,4,[{file,"src/couch_bt_engine.erl"},{line,1074}]},{couch_bt_engine,skip_deleted,4,[{file,"src/couch_bt_engine.erl"},{line,1069}]},{couch_btree,stream_kv_node2,8,[{file,"src/couch_btree.erl"},{line,848}]},{couch_btree,stream_kp_node,8,[{file,"src/couch_btree.erl"},{line,819}]}]
[error] 2024-11-04T20:42:37.275377Z [email protected] <0.7374.2434> -------- rexi_server: from: [email protected](<0.3643.2436>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,map_fold,3,[{file,"src/couch_mrview.erl"},{line,526}]},{couch_bt_engine,include_reductions,4,[{file,"src/couch_bt_engine.erl"},{line,1074}]},{couch_bt_engine,skip_deleted,4,[{file,"src/couch_bt_engine.erl"},{line,1069}]},{couch_btree,stream_kv_node2,8,[{file,"src/couch_btree.erl"},{line,848}]},{couch_btree,stream_kp_node,8,[{file,"src/couch_btree.erl"},{line,819}]}]
```

CouchDB logs, scenario 2:

```shell
[info] 2024-11-04T20:18:46.692239Z [email protected] <0.6832.4663> -------- Starting compaction for db "shards/7ffffffe-95555552/medic-user-mikehaya-meta.1690191139" at 10
[info] 2024-11-04T20:19:47.821999Z [email protected] <0.7017.4653> -------- Starting compaction for db "shards/7ffffffe-95555552/medic-user-marnyakoa-meta.1690202463" at 21
[info] 2024-11-04T20:21:24.529822Z [email protected] <0.24125.4661> -------- Starting compaction for db "shards/7ffffffe-95555552/medic-user-lilian_lubanga-meta.1690115504" at 15
```

**Fix:** Give CouchDB more disk space and restart all services.

_* See eCHIS Kenya [Issue #2578](https://github.com/moh-kenya/config-echis-2.0/issues/2578#issuecomment-2455702112) - a private repo and not available to the public_
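
How you add disk is deployment specific. A sketch of two common paths - the mount point, PVC name and size here are assumptions:

```shell
# Docker host: confirm which volume is full, then grow it with your provider's tools
df -h /srv

# Kubernetes: request a larger volume (works only if the storage class allows expansion)
kubectl -n cht patch pvc couchdb-data -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
```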

## CHT 4.2.x upgrade to 4.11 - Kubernetes has pods stuck in indeterminate state

**Note** - This is a Kubernetes only issue.

**Issue\*:** A number of pods were stuck in an indeterminate state, presumably because of failed garbage collection.

API logs:

```shell
2024-11-04 19:33:56 ERROR: Server error: StatusCodeError: 500 - {"message":"Error: Can't upgrade right now.
The following pods are not ready...."}
```

Running `kubectl get po` shows 3 pods with status of `ContainerStatusUnknown`.

**Fix:** Delete the stuck pods so they get recreated and start cleanly:

```shell
kubectl delete po -l 'cht.service in (api, sentinel, haproxy, couchdb)'
```
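
After deleting, you can watch the replacement pods come up before retrying the upgrade:

```shell
kubectl get po -w
```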

_* See eCHIS Kenya [Issue #2579](https://github.com/moh-kenya/config-echis-2.0/issues/2579#issuecomment-2455637516) - a private repo and not available to the public_