
lb service in haproxy always PENDING #327

Open
iamfoolberg opened this issue Dec 27, 2023 · 8 comments
@iamfoolberg

I'm trying to install this great work on my two Ubuntu 22.04 VMs. I installed the MicroK8s cluster and its dashboard.
After deploying hcce.yaml, all pods are running, but the lb service in haproxy is always PENDING.
According to lsof, all of the lb service's ports except 443 are free:

root@k8s-master:/home/berg/hubs-cloud/community-edition# lsof -i :443

COMMAND      PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
calico-no 698987 root    9u  IPv4 2663987      0t0  TCP k8s-worker01:35324->10.152.183.1:https (ESTABLISHED)
calico-no 698990 root    9u  IPv4 2663977      0t0  TCP k8s-worker01:35298->10.152.183.1:https (ESTABLISHED)
calico-no 698994 root    9u  IPv4 2663985      0t0  TCP k8s-worker01:35300->10.152.183.1:https (ESTABLISHED)

And haproxy complains:

 _   _    _    ____                                            
| | | |  / \  |  _ \ _ __ _____  ___   _                       
| |_| | / _ \ | |_) | '__/ _ \ \/ / | | |                      
|  _  |/ ___ \|  __/| | | (_) >  <| |_| |                      
|_| |_/_/   \_\_|   |_|  \___/_/\_\\__, |                      
 _  __     _                       |___/             ___ ____  
| |/ /   _| |__   ___ _ __ _ __   ___| |_ ___  ___  |_ _/ ___| 
| ' / | | | '_ \ / _ \ '__| '_ \ / _ \ __/ _ \/ __|  | | |      
| . \ |_| | |_) |  __/ |  | | | |  __/ ||  __/\__ \  | | |___ 
|_|\_\__,_|_.__/ \___|_|  |_| |_|\___|\__\___||___/ |___\____| 
                
2023/12/27 13:41:12 HAProxy Ingress Controller v1.8.5 ca59756.dirty
2023/12/27 13:41:12 Build from: https://github.com/haproxytech/kubernetes-ingress
2023/12/27 13:41:12 Build date: 2022-09-13T13:31:15
2023/12/27 13:41:12 ConfigMap: hubs/haproxy-config
2023/12/27 13:41:12 Ingress class: haproxy
2023/12/27 13:41:12 Empty Ingress class: false
2023/12/27 13:41:12 Publish service: 
2023/12/27 13:41:12 Using local backend service on port: %!s(int=6061)
2023/12/27 13:41:12 Default ssl certificate: hubs/cert-hcce
2023/12/27 13:41:12 Frontend HTTP listening on: 0.0.0.0:8080
2023/12/27 13:41:12 Frontend HTTPS listening on: 0.0.0.0:4443
2023/12/27 13:41:12 TCP Services provided in 'hubs/haproxy-tcp-config'
2023/12/27 13:41:12 Controller sync period: 5s
2023/12/27 13:41:12 Running on haproxy-5498bdbdbc-t2ss9
2023/12/27 13:41:12 k8s/main.go:86 Running on Kubernetes version: v1.28.3 linux/amd64
2023/12/27 13:41:12 haproxy/main.go:100 Running with HAProxy version 2.5.8-0cbd0f6 2022/07/25 - https://haproxy.org/
2023/12/27 13:41:18 WARNING service/endpoints.go:36 Ingress 'hubs/certbotbot-http': no matching endpoints for port '80'
2023/12/27 13:41:18 WARNING ingress/ingress.go:225 Ingress 'hubs/dialog': secret 'hubs/cert-stream.hubs.mydomain.com' does not exist
2023/12/27 13:41:18 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-hubs.mydomain.com' does not exist
2023/12/27 13:41:18 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-assets.hubs.mydomain.com' does not exist
2023/12/27 13:41:18 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-stream.hubs.mydomain.com' does not exist
2023/12/27 13:41:18 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-cors.hubs.mydomain.com' does not exist
[NOTICE]   (212) : haproxy version is 2.5.8-0cbd0f6
[WARNING]  (212) : Exiting Master process...
[ALERT]    (212) : Current worker (240) exited with code 143 (Terminated)
[WARNING]  (212) : All workers exited. Exiting... (0)
Memory limit for HAProxy: 0MiB
[NOTICE]   (262) : haproxy version is 2.5.8-0cbd0f6
[WARNING]  (262) : config : config: Can't get version of the global server state file '/var/state/haproxy/global'.
[NOTICE]   (262) : New worker (269) forked
[NOTICE]   (262) : Loading success.
2023/12/27 13:42:13 WARNING service/endpoints.go:36 Ingress 'hubs/certbotbot-http': no matching endpoints for port '80'
2023/12/27 13:42:13 WARNING ingress/ingress.go:225 Ingress 'hubs/dialog': secret 'hubs/cert-stream.hubs.mydomain.com' does not exist
2023/12/27 13:42:13 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-cors.hubs.mydomain.com' does not exist
2023/12/27 13:42:13 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-hubs.mydomain.com' does not exist
2023/12/27 13:42:13 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-assets.hubs.mydomain.com' does not exist
2023/12/27 13:42:13 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-stream.hubs.mydomain.com' does not exist
2023/12/27 13:43:23 WARNING service/endpoints.go:36 Ingress 'hubs/certbotbot-http': no matching endpoints for port '80'
2023/12/27 13:43:23 WARNING ingress/ingress.go:225 Ingress 'hubs/dialog': secret 'hubs/cert-stream.hubs.mydomain.com' does not exist
2023/12/27 13:43:23 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-cors.hubs.mydomain.com' does not exist
2023/12/27 13:43:23 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-hubs.mydomain.com' does not exist
2023/12/27 13:43:23 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-assets.hubs.mydomain.com' does not exist
2023/12/27 13:43:23 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-stream.hubs.mydomain.com' does not exist
2023/12/27 13:51:13 WARNING service/endpoints.go:36 Ingress 'hubs/certbotbot-http': no matching endpoints for port '80'
2023/12/27 13:51:13 WARNING ingress/ingress.go:225 Ingress 'hubs/dialog': secret 'hubs/cert-stream.hubs.mydomain.com' does not exist
2023/12/27 13:51:13 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-cors.hubs.mydomain.com' does not exist
2023/12/27 13:51:13 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-hubs.mydomain.com' does not exist
2023/12/27 13:51:13 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-assets.hubs.mydomain.com' does not exist
2023/12/27 13:51:13 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-stream.hubs.mydomain.com' does not exist

I can reach neither http on port 80 nor https on port 443, but https on port 4443 says:

ERROR

Cannot GET /

What should I do? And how can I get the cert-*.hubs.mydomain.com certificates into haproxy? :)

@zigit

zigit commented Dec 31, 2023

I have a similar setup. Router with a dynamic public IPv4 address with NAT towards LAN.

The hcce.yaml needs a loadbalancer and microk8s has metallb available as an addon.

I installed metallb (microk8s enable metallb) and assigned it a reserved/excluded private IPv4 range from my LAN. I used 192.168.0.250-192.168.0.253, but one address is enough, and it will pick the first one (192.168.0.250). Then the hcce.yaml lb is created, and the four ports 80, 443, 4443 and 5349 can be reached at 192.168.0.250. In my router I configured port forwarding for these ports, pointing them towards 192.168.0.250.
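To verify metallb actually handed out an address, you can check the service afterwards. This is a sketch; the service name `lb` and namespace `hubs` are assumptions based on this thread's hcce.yaml, so adjust them to your deployment:

```shell
# enable metallb with a reserved range from your LAN (range is an example)
microk8s enable metallb:192.168.0.250-192.168.0.253

# the lb service should move from <pending> to an EXTERNAL-IP from that range
kubectl get svc -n hubs

# print just the assigned address (empty while still pending)
kubectl get svc lb -n hubs -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

If EXTERNAL-IP stays `<pending>`, the metallb pods in the `metallb-system` namespace are the next thing to check.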

@iamfoolberg
Author

Thanks zigit. I'm new to k8s (microk8s, k3s). Should I use the following commands to install metallb?

#create Namespace
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml

#create metallb
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml

@iamfoolberg
Author

I tried

microk8s enable metallb:192.168.2.150-192.168.2.160

and got:

root@k8s-master:/home/berg# microk8s enable metallb:192.168.2.150-192.168.2.160
Infer repository core for addon metallb
Addon core/metallb is already enabled

But haproxy says:

2023/12/31 07:33:00 HAProxy Ingress Controller v1.8.5 ca59756.dirty
2023/12/31 07:33:00 Build from: https://github.com/haproxytech/kubernetes-ingress
2023/12/31 07:33:00 Build date: 2022-09-13T13:31:15
2023/12/31 07:33:00 ConfigMap: hubs/haproxy-config
2023/12/31 07:33:00 Ingress class: haproxy
2023/12/31 07:33:00 Empty Ingress class: false
2023/12/31 07:33:00 Publish service: 
2023/12/31 07:33:00 Using local backend service on port: %!s(int=6061)
2023/12/31 07:33:00 Default ssl certificate: hubs/cert-hcce
2023/12/31 07:33:00 Frontend HTTP listening on: 0.0.0.0:8080
2023/12/31 07:33:00 Frontend HTTPS listening on: 0.0.0.0:4443
2023/12/31 07:33:00 TCP Services provided in 'hubs/haproxy-tcp-config'
2023/12/31 07:33:00 Controller sync period: 5s
2023/12/31 07:33:00 Running on haproxy-5498bdbdbc-hxxk2
2023/12/31 07:33:00 k8s/main.go:86 Running on Kubernetes version: v1.28.3 linux/amd64
2023/12/31 07:33:00 haproxy/main.go:100 Running with HAProxy version 2.5.8-0cbd0f6 2022/07/25 - https://haproxy.org/
2023/12/31 07:33:05 WARNING service/endpoints.go:36 Ingress 'hubs/certbotbot-http': no matching endpoints for port '80'
2023/12/31 07:33:05 WARNING ingress/ingress.go:225 Ingress 'hubs/dialog': secret 'hubs/cert-stream.hubs.seuoa.com' does not exist
2023/12/31 07:33:05 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-hubs.seuoa.com' does not exist
2023/12/31 07:33:05 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-assets.hubs.seuoa.com' does not exist
2023/12/31 07:33:05 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-stream.hubs.seuoa.com' does not exist
2023/12/31 07:33:05 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-cors.hubs.seuoa.com' does not exist
[NOTICE]   (211) : haproxy version is 2.5.8-0cbd0f6
[WARNING]  (211) : Exiting Master process...
[ALERT]    (211) : Current worker (241) exited with code 143 (Terminated)
[WARNING]  (211) : All workers exited. Exiting... (0)
Memory limit for HAProxy: 0MiB
[NOTICE]   (262) : haproxy version is 2.5.8-0cbd0f6
[WARNING]  (262) : config : config: Can't get version of the global server state file '/var/state/haproxy/global'.
[NOTICE]   (262) : New worker (269) forked
[NOTICE]   (262) : Loading success.
2023/12/31 07:34:05 WARNING service/endpoints.go:36 Ingress 'hubs/certbotbot-http': no matching endpoints for port '80'
2023/12/31 07:34:05 WARNING ingress/ingress.go:225 Ingress 'hubs/dialog': secret 'hubs/cert-stream.hubs.seuoa.com' does not exist
2023/12/31 07:34:05 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-hubs.seuoa.com' does not exist
2023/12/31 07:34:05 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-assets.hubs.seuoa.com' does not exist
2023/12/31 07:34:05 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-stream.hubs.seuoa.com' does not exist
2023/12/31 07:34:05 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-cors.hubs.seuoa.com' does not exist
2023/12/31 07:34:30 WARNING ingress/ingress.go:225 Ingress 'hubs/dialog': secret 'hubs/cert-stream.hubs.seuoa.com' does not exist
2023/12/31 07:34:30 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-stream.hubs.seuoa.com' does not exist
2023/12/31 07:34:30 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-cors.hubs.seuoa.com' does not exist
2023/12/31 07:34:30 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-hubs.seuoa.com' does not exist
2023/12/31 07:34:30 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-assets.hubs.seuoa.com' does not exist
2023/12/31 07:34:30 WARNING service/endpoints.go:36 Ingress 'hubs/certbotbot-http': no matching endpoints for port '80'
2023/12/31 07:34:35 WARNING service/endpoints.go:36 Ingress 'hubs/certbotbot-http': no matching endpoints for port '80'
2023/12/31 07:34:35 WARNING ingress/ingress.go:225 Ingress 'hubs/dialog': secret 'hubs/cert-stream.hubs.seuoa.com' does not exist
2023/12/31 07:34:35 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-hubs.seuoa.com' does not exist
2023/12/31 07:34:35 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-assets.hubs.seuoa.com' does not exist
2023/12/31 07:34:35 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-stream.hubs.seuoa.com' does not exist
2023/12/31 07:34:35 WARNING ingress/ingress.go:225 Ingress 'hubs/ret': secret 'hubs/cert-cors.hubs.seuoa.com' does not exist

and the lb is still pending.

@zigit
Copy link

zigit commented Dec 31, 2023

Try deleting the lb and applying it again, if you have not already tried that after installing metallb. The cert stuff is a separate issue: you need the lb up to get the certs issued and verified.
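A minimal sketch of that re-create step (the service name `lb` and namespace `hubs` are assumptions from this thread; adjust to your hcce.yaml):

```shell
# delete the stuck LoadBalancer service, then re-create it from the manifest
kubectl delete service lb -n hubs
kubectl apply -f hcce.yaml

# watch for metallb to assign an EXTERNAL-IP
kubectl get svc lb -n hubs -w
```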

@iamfoolberg
Author

Well, I configured metallb as follows.

nano metallb-config.yaml

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.150-192.168.1.159
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-config
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

kubectl apply -f metallb-config.yaml

After that, the lb started and got an external IP (e.g. 192.168.1.150), and Hubs is working.
But there are no objects in my scene; it's dark and gray :(

Anyway, thanks a lot.

@zigit
Copy link

zigit commented Jan 12, 2024

Happy to hear you got it working! Yes, the initial setup is empty, so you need to import scenes and avatars on the /admin page. Log in with the email you configured in the hcce.yaml ADM_EMAIL (I had some trouble with the SMTP settings, so check those if you do not get the sign-in email). More info on how to use the admin page here: https://hubs.mozilla.com/docs/setup-configuring-content.html

I am looking into writing a tutorial on this kind of self-serve setup (using your own hardware and internet connectivity) if there is interest in the community.

@iamfoolberg
Author

Yeah, all my steps are recorded, but there are some Chinese sentences 😁
I may share them later...

@iamfoolberg
Author

iamfoolberg commented Jan 13, 2024

Okay, here are my steps:

#Host Mozilla hubs in your home
#see also 
#  https://hubs.mozilla.com/labs/welcoming-community-edition/,
#  https://github.com/mozilla/hubs-cloud

#main steps:
# 1. Install Proxmox VE or another hypervisor on your machine (Intel/AMD x86, >=16 GB RAM)
# 2. Create 4 virtual machines (all on the same VLAN, e.g. id 1001):
#    1. OpenWrt: LAN ip=192.168.1.1, WAN ip per your network.
#        set the following hosts in OpenWrt:
#          hubs.seuoa.com, assets.hubs.seuoa.com, stream.hubs.seuoa.com, cors.hubs.seuoa.com, all pointing to the lb's IP,
#          and mail.seuoa.com to 192.168.1.180, if you use your own mail server for testing.
#    2. Windows 10, for the mail server (e.g. hMailServer), e.g. ip=192.168.1.180
#    3. two Ubuntu 22.04, for the microk8s master and worker.
# 3. Install microk8s in ubuntu vms
# 4. Deploy the hubs community in microk8s
# 5. Install hMailServer in Windows 10
# 6. Generate SSL certification and apply to hMailServer(port 25)
# 7. Log in to hubs and import scenes/avatars...

#1. Install Proxmox VE
# google/baidu as you wish.

#2. Create virtual machines
# install OpenWrt and configure it (google/baidu as needed)
# install Win10 and hMailServer
#    create a domain (e.g. seuoa.com) and an account for the admin email, e.g. [email protected]

# install Ubuntu 22.04 server (minimal), with openssh
#  8 GB RAM and a 30 GB disk is enough for a taste.
# change the apt mirror only if apt-get update fails.
cat > "/etc/apt/sources.list"<<EOF
deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
EOF
#install some useful tools
apt-get update
apt-get install -y ceph-common curl dnsutils dosfstools freeradius-utils glusterfs-client inetutils-ping lsof mysql-client nano net-tools nfs-common nfs-kernel-server openssh-server sshpass telnet unzip uuid-runtime vim wget  ntp ntpdate git

#disable IPv6, to make it simple.
cat >> "/etc/sysctl.conf" <<EOF
#add the following to disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
EOF
#  and execute:
sudo sysctl -p

#set its IP (as master), the worker may have ip=192.168.1.102
cat > "/etc/netplan/00-installer-config.yaml" << EOF
network:
  ethernets:
    ens18:
      addresses:
      - 192.168.1.101/24
      gateway4: 192.168.1.1
      nameservers:
        addresses:
        - 192.168.1.1
        search: []
  version: 2
EOF
#apply it
netplan apply

#[optional] you may want to set the hostname of this vm,
#  e.g.
hostnamectl set-hostname k8s-m
sed -i 's/^127.0.1.1 .*/127.0.1.1 k8s-m/' /etc/hosts

#[optional] you may want to set aliases for all of your VMs.
nano /etc/hosts
 192.168.1.101 k8s-m
 192.168.1.102 k8s-w1

# 3.install microk8s
#install microk8s 
sudo snap install microk8s --classic
2023-12-23T03:23:49Z INFO Waiting for automatic snapd restart... 
microk8s (1.28/stable) v1.28.3 from Canonical✓ installed

#set alias for K8s commands
sudo snap alias microk8s.kubectl kubectl
sudo snap alias microk8s.ctr ctr
sudo snap alias microk8s.helm helm
sudo snap alias microk8s.helm3 helm3

#test version
microk8s.version
MicroK8s v1.28.3 revision 6089

#check the installation
microk8s.inspect

#In case the registry.k8s.io/pause:3.7 image cannot be pulled, check:
microk8s status 
microk8s is not running. Use microk8s inspect for a deeper inspection.
#check why?
kubectl describe pod --all-namespaces
...
failed to pull image "registry.k8s.io/pause:3.7"
...

#IF(in China :( ) failed to pull image "registry.k8s.io/pause:3.7", run the following...
#  (1)
nano /var/snap/microk8s/current/args/kubelet
#    append this line:
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
#  (2)
nano /var/snap/microk8s/current/args/containerd-template.toml
#change "plugins -> plugins.cri -> sandbox_image" to
#sandbox_image = "registry.k8s.io/pause:3.7"  
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7"

#  (3) download the image
ctr image pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
#  (4) restart the k8s
microk8s.stop && microk8s.start

#  (5) check it.
microk8s status
microk8s inspect
kubectl describe pod --all-namespaces
kubectl get pod --all-namespaces
#-------------you may clone your VM here, for simplicity.

#IN k8s master, create the dashboard for webui operations.
microk8s enable dashboard
#  start it. [you NEED to start it again whenever the VM restarts]
microk8s dashboard-proxy &
#    and get the access token, for webui https://192.168.1.101:10443/
eyJhbGciOiJSUzI1NiIsImtpZCI6IkJNcE11TUJRODNxcmE3ZWlEbk8zS2pIeXAxQWswcGZOOTkwYThBVmpmaHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJtaWNyb2s4cy1kYXNoYm9hcmQtdG9rZW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM3OThjZmZjLWM0ZmEtNGRhMS05ZDZmLWY2ODgyMjJlYWY4OCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpkZWZhdWx0In0.HLpe62wEmqqJw2sUsvMJ_wDh0jQxmHkqmic1mNMREUwfHmv6Exh6Hkfk8pchdEWPfig78XBCN-6wUqDYBpWwj-_qw1vl7k04k1aaNFMtddCloxida7Jz1GkbE6JQu0u9XPacanVuMQcqVxKrdcH9CzRr6Cb9lRg-_3r_-nPlcqCCk-7qQyA7HBcSkaScECD604iT5JaQZ6s47Oh9zVGeb3SD9pqzLsyxEuiaYe6BM2K1NgzES_zVqQjF56zhMDKi30_UVq1Ws02tb82K2-YZDTQhFJFNHjCuiGqNJjkir03PQHYNgL7BGk7My-p_6BLz0FCrjYSChGz7skRJ7JyTfg

#Now you can create another worker node, from "install ubuntu 22.04 server mini" 
#    to "(5) check it."

#Go on, in master node, create the cluster join command. [you MUST execute it again for another worker.]
microk8s add-node
microk8s join 192.168.1.101:25000/29672fd613d8e3fa672b203570fbd44b/43307c7fd026 --worker --skip-verify

#execute the output command
The node has joined the cluster and will appear in the nodes list in a few seconds.  
This worker node gets automatically configured with the API server endpoints. If the API servers are behind a loadbalancer please set the '--refresh-interval' to '0s' in:    
    /var/snap/microk8s/current/args/apiserver-proxy 
and replace the API server endpoints with the one provided by the loadbalancer in:    /var/snap/microk8s/current/args/traefik/provider.yaml

#you may check all nodes status
kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName --all-namespaces

#prepare loadbalancer metallb
#enable it. It seems that the IP range takes no effect :(
microk8s enable metallb:192.168.1.150-192.168.1.159

#create a metallb config
nano metallb-config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.150-192.168.1.159
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-config
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
#and deploy it
kubectl apply -f metallb-config.yaml

#you may test it, by deploy a simple service
nano testmetallb.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoareyou-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoareyou
  template:
    metadata:
      labels:
        app: whoareyou
    spec:
      containers:
      - name: whoareyou-container
        image: containous/whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoareyou-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: whoareyou
#  and deploy the test service
kubectl apply -f testmetallb.yaml
#  the service will get an external ip, e.g. 192.168.1.150.
curl 192.168.1.150

#4.deploy hubs in k8s
git clone  https://github.com/mozilla/hubs-cloud.git
cd hubs-cloud/community-edition
nano render_hcce.sh
#change the
#  domain name (e.g. hubs.seuoa.com),
#  admin email ([email protected]),
#  SMTP server (mail.seuoa.com, or 192.168.1.180 for testing), and so on

#prepare pem-jwk to generate JWT key, avoiding error "Server lacks JWT secret"
sudo apt install npm 
sudo npm install pem-jwk -g
#add  --registry=http://registry.npm.taobao.org/ , if you failed to install it in China.

#create the yaml file
bash render_hcce.sh

#deploy it in k8s
kubectl apply -f hcce.yaml

#you should open each of the following URLs once and accept the browser warning, since self-signed *.pem certs are used.
#  https://hubs.seuoa.com/
#  https://assets.hubs.seuoa.com/
#  https://stream.hubs.seuoa.com/
#  https://cors.hubs.seuoa.com/

#now you can access your hubs, but cannot sign in without the token from the email.

#PS: you can access the postgres database with the following commands
#  in pgsql pod, execute
psql -h localhost -U postgres -d retdb
\d
\d login_tokens;
select * from login_tokens;

#  you can find the payload/token, and paste the following URL into your browser to hack in.
#  https://hubs.seuoa.com/?auth_origin=hubs&auth_payload=a87672d8d656dce1ac7b5213808e8ed4&auth_token=ae95b3f9fe10e2d240768d1d328e31f0
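That hack-in URL can be assembled with a tiny shell helper from a login_tokens row (the payload/token values here are just the examples from this comment):

```shell
#!/bin/sh
# build the magic sign-in link from the payload/token found in login_tokens
base="https://hubs.seuoa.com/"
payload="a87672d8d656dce1ac7b5213808e8ed4"
token="ae95b3f9fe10e2d240768d1d328e31f0"
echo "${base}?auth_origin=hubs&auth_payload=${payload}&auth_token=${token}"
# prints https://hubs.seuoa.com/?auth_origin=hubs&auth_payload=a87672d8d656dce1ac7b5213808e8ed4&auth_token=ae95b3f9fe10e2d240768d1d328e31f0
```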

#5/6. Install hMailServer in Windows 10
#following the installer's instructions, it's easy.
#use certbot or something else to generate the SSL cert for the domain (e.g. *.seuoa.com)
apt install letsencrypt
  
certbot certonly \
    --manual \
    --preferred-challenges=dns \
    --server https://acme-v02.api.letsencrypt.org/directory \
    --agree-tos \
    -m [email protected] \
    -d *.seuoa.com
    
#  i get the following files from certbot:
#    privkey.pem/cert.pem/...
#create an SSL certificate (Settings-->Advanced-->SSL certificates) with cert.pem and privkey.pem
#bind it to port 25:
#  Settings-->Advanced-->TCP/IP Ports--> 0.0.0.0/25/SMTP --> Connection Security (STARTTLS Optional) & your SSL certificates.
#hubs REQUIRES an SSL connection! Otherwise the error would be:
17:14:44.129 [error] GenServer #PID<0.4370.0> terminating
** (Bamboo.SMTPAdapter.SMTPError) There was a problem sending the email through SMTP.
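Before blaming Hubs, you can check from the master VM that the mail server really offers STARTTLS on port 25 (the hostname is the example from this thread; substitute yours):

```shell
# should print the certificate chain and an SMTP capabilities list
# that includes STARTTLS; type QUIT (or Ctrl+C) to leave
openssl s_client -connect mail.seuoa.com:25 -starttls smtp
```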

#7. Have fun
#access the URL in your Win10 VM, sign in with [email protected], and you will get the magic link in your mailbox.
#  fetch it with Foxmail.
https://hubs.seuoa.com/

#Question:
# how to apply the certificates to hubs manually :)
