Tool to make an ECK diag a bit more human readable.
`git clone https://github.com/jlim0930/eck-glance.git`

Go to the extracted eck-diag directory and, for most cases, run `/path/eck_1.sh`.

If your workstation can handle it, you can run `/path/eck_1fast.sh` instead; it launches all the jobs in the background and runs all the subscripts at once. I tried this on multiple diags and it was super fast!
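For example, a typical invocation might look like the sketch below; the `~/eck-diag-bundle` extraction path and the clone location are assumptions, only the script names come from above:

```bash
# Clone the tool, then run it from inside the extracted diag bundle
git clone https://github.com/jlim0930/eck-glance.git
cd ~/eck-diag-bundle

~/eck-glance/eck_1.sh        # standard run, fine for most cases
# ~/eck-glance/eck_1fast.sh  # parallel run, if your workstation can handle it
```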
All `eck_*_1.sh` scripts give a high-level view of the kind you are looking at (more detailed than just `kubectl get <kind>`).

All `eck_*_2.sh` scripts give a more descriptive and detailed view of a single resource (more like `kubectl describe <kind> <resource>`).
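To illustrate the distinction, the two output levels are roughly analogous to the following live-cluster commands; this is a comparison only (the scripts read from the diag files, not from a cluster), and `<namespace>` plus the `beat`/`filebeat` names are placeholders borrowed from the examples further down:

```bash
# Roughly the level of detail of an eck_*_1.sh overview:
kubectl get beat -n <namespace> -o wide

# Roughly the level of detail of an eck_*_2.sh detailed view:
kubectl describe beat filebeat -n <namespace>
```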
Files generated by the tool are named `eck_*.txt`. In the main ECK diag directory you will find `eck_nodes.txt`. In each namespace directory, any Elastic-related items will be named `eck_[elasticsearch|beat|agent|kibana|enterprisesearch|elasticmapservice]*.txt`. All other `eck_*.txt` files are for Kubernetes kinds (a quick way to list everything the tool generated is shown after this list):
- StatefulSets - manage Elasticsearch
- ReplicaSets - manage Beats (Metricbeat, etc. that run on one or more hosts), APM Server, and Kibana
- DaemonSets - manage Filebeat (or anything that needs to run on all Kubernetes hosts)
- Secrets - list of secrets such as passwords, certificates, etc.
- Services - list of services
- ...
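As noted above, a simple way to see everything the tool produced, grouped by directory, is a `find` from the top of the extracted diag (a convenience suggestion, not part of the tool itself):

```bash
# List every file eck-glance generated, per directory
find . -name 'eck_*.txt' | sort
```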
- If you want to run individual jobs, all `eck_*_1.sh` scripts can be run with `eck_*_1.sh /path/file.json`. Example: `/path/eck_beat_1.sh beat.json`
- If you want to run individual jobs, all `eck_*_2.sh` scripts can be run with `eck_*_2.sh /path/proper.json resourcename`. Example: `/path/eck_beat_2.sh beat.json filebeat`
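Put together, a single job might be run end to end as sketched below. The extracted-diag path and namespace directory are assumptions, and the jq step assumes the diag JSON is a standard Kubernetes list object; only `beat.json`, `eck_beat_1.sh`, `eck_beat_2.sh`, and `filebeat` come from the examples above:

```bash
# Locate the input JSON inside the extracted diag
find /path/to/extracted-diag -name 'beat.json'

# High-level overview of the Beat resources in that file
/path/eck-glance/eck_beat_1.sh /path/to/extracted-diag/my-namespace/beat.json

# List the resource names in the JSON (assumes a standard Kubernetes list object)
jq -r '.items[].metadata.name' /path/to/extracted-diag/my-namespace/beat.json

# Detailed view of one named resource
/path/eck-glance/eck_beat_2.sh /path/to/extracted-diag/my-namespace/beat.json filebeat
```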
After `eck-glance` runs, I would start by looking at `eck_events.txt` in the `<namespace>` directory, then `eck_pods.txt`, and so on, depending on the issue at hand.
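As a concrete starting point (the namespace directory name is just a placeholder):

```bash
# Events usually point at the problem first, then drill into pods
less my-namespace/eck_events.txt
less my-namespace/eck_pods.txt
```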
- Find `# FIX` comments for things to fix.
- Validate the code against different diags to ensure proper coverage.
- Find ways to remove loops.
- Find ways to simplify the current jq queries and account for various nulls.
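On the last point, here is a small illustrative sketch (not taken from the repo) of two jq idioms that tolerate missing data: the `?` suffix and the `//` alternative operator. The field paths and the `beat.json` filename are assumptions:

```bash
# Iterate safely even if .items is null, and fall back to "unknown" when a field is missing
jq -r '.items[]? | "\(.metadata.name)\t\(.status.health // "unknown")"' beat.json
```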