renaming bm to mno #498
Conversation
This is as yet untested.
I marked this as a draft since it hasn't been tested yet.
I looked at this again since your #513 depends on it. Aside from concerns about testing, I don't see any obvious problems, but I made one minor documentation comment when I first looked through it which I'd never submitted...
So consider this a reminder that I'd prefer to get this merged separately before we consider #513, so I guess we need some sort of assurance of testing coverage.
I mistakenly submitted #513 dependent on this patch. I have since corrected that.
There are some whole files that got "added" in the mno-install-cluster directory. I'll get a new lab assignment in the next week or so and test this again.
It can be really hard to handle a git rebase when someone else moves things around on main ("sorry 'bout that"); you need to be really careful to manually compare what you get vs what's on main and what was previously in your branch. You've recreated some moved files, and the install-cluster role is no longer MNO-specific, so you should un-rename it. 😆
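One way to check whether a rebase preserved renames is `git diff --name-status -M`: a true rename shows up as a single `R` entry, while a file that was deleted and recreated shows up as a separate delete/add pair. The sketch below demonstrates this in a throwaway repository; the file names are placeholders, not paths from this PR.

```shell
# Self-contained demo in a scratch repo: a preserved rename is reported
# as a single "R<similarity>" entry by `git diff --name-status -M`.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
echo "role tasks" > old-role.yml
git add old-role.yml
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add role"
# A clean move recorded with `git mv` ...
git mv old-role.yml new-role.yml
git -c user.name=demo -c user.email=demo@example.com commit -q -m "rename role"
# ... shows as one R entry, not a D/A pair:
git diff --name-status -M HEAD~1 HEAD
```

Against this branch, running the same comparison as `git diff --name-status -M main...HEAD` would flag any `mno-install-cluster` files that were re-added instead of moved.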
ansible/roles/mno-install-cluster/templates/controlplane-setup-lvm.sh.j2 (outdated; resolved)
Thanks for the review @dbutenhof. This helps clear things up. I'll make these updates and should have a chance to test this on some lab gear I'm getting, hopefully next week.
Without "approving" a draft PR ... this looks good.
Force-pushed from cb6a198 to 80e5494.
I had a chance to validate being able to deploy an mno cluster using this patch this week. I'm going to pull this out of draft.
You've got two minor merge conflicts that got embedded as git diffs; they should be easy to resolve.
Ok, I think those are all fixed up now. Found a couple of new file renames in the process, too.
I attempted to run it, but it appears mno-deploy is lacking a role we use to determine which ocp_version to run: ...
```
TASK [create-ai-cluster : Add e38-h06-000-r650 ip config] *********************************************************************************************************************************************************
Monday 30 September 2024  22:25:03 +0000 (0:00:00.025)       0:00:03.539 ******
ok: [e38-h01-000-r650.rdu2.scalelab.redhat.com]

TASK [create-ai-cluster : MNO - Populate static network configuration with worker nodes] **************************************************************************************************************************
Monday 30 September 2024  22:25:03 +0000 (0:00:00.050)       0:00:03.589 ******

TASK [create-ai-cluster : Create cluster] *************************************************************************************************************************************************************************
Monday 30 September 2024  22:25:03 +0000 (0:00:00.038)       0:00:03.628 ******
fatal: [e38-h01-000-r650.rdu2.scalelab.redhat.com]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'openshift_version' is undefined\n\nThe error appears to be in '/root/akrzos/jetlag/ansible/roles/create-ai-cluster/tasks/main.yml': line 38, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Create cluster\n  ^ here\n"}
```
... looks like we need the ocp-release role - https://github.com/redhat-performance/jetlag/blob/main/ansible/bm-deploy.yml#L16 I added it and will see how the next run goes.
I must have had openshift_version defined in my var file. Let me know if that role fixes it and I can add it in.
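The fix described above would amount to listing ocp-release ahead of create-ai-cluster in mno-deploy.yml, mirroring how bm-deploy.yml already uses it. A hedged sketch of what that play might look like; the play name, hosts target, and surrounding role list are assumptions, not the actual file contents:

```yaml
# Hypothetical sketch of ansible/mno-deploy.yml -- mirroring the ocp-release
# usage in bm-deploy.yml. Role ordering matters: ocp-release must run first
# so that openshift_version is defined when create-ai-cluster needs it.
- name: Deploy MNO cluster
  hosts: bastion
  roles:
    - ocp-release        # resolves openshift_version and release image vars
    - create-ai-cluster  # fails with "'openshift_version' is undefined" otherwise
```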
I ran through the PR and was able to build a cluster after adding the ocp-release role, but I found a few things that need to be fixed before merging. I also believe there are some missed references to "bm" clusters, most notably in the ibmcloud docs and roles.
```diff
@@ -1,21 +1,21 @@
 # Accessing a disconnected/ipv6 cluster deployed by Jetlag

-Jetlag includes the installation of a HAProxy instance on the bastion machine with ipv6/disconnected environment setup. The HAProxy instance allows you to proxy traffic over ports 6443, 443, and 80 into the deployed disconnected cluster. Port 6443 is for access to the cluster api (Ex. oc cli commands), and ports 443/80 are for ingress routes such as console or grafana. This effectively allows you to reach the disconnected cluster from your local laptop. In order to do so you will need to edit your laptop's `/etc/hosts` and insert the routes that map to your cluster. While running the `setup-bastion.yml` playbook, an example copy of the hosts file is dumped into `bastion_cluster_config_dir` which is typically `/root/bm` for bare-metal cluster type.
+Jetlag includes the installation of a HAProxy instance on the bastion machine with ipv6/disconnected environment setup. The HAProxy instance allows you to proxy traffic over ports 6443, 443, and 80 into the deployed disconnected cluster. Port 6443 is for access to the cluster api (Ex. oc cli commands), and ports 443/80 are for ingress routes such as console or grafana. This effectively allows you to reach the disconnected cluster from your local laptop. In order to do so you will need to edit your laptop's `/etc/hosts` and insert the routes that map to your cluster. While running the `setup-bastion.yml` playbook, an example copy of the hosts file is dumped into `bastion_cluster_config_dir` which is typically `/root/mno` for bare-metal cluster type.
```
At the end, you caught `/root/mno` ... but the sentence continues "for bare-metal cluster type".
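For the `/etc/hosts` editing described in that doc paragraph, the entries would follow the usual OpenShift naming pattern of api plus per-route apps hostnames. A hedged illustration; the IP and cluster domain below are placeholders, and the authoritative list is the example hosts file that `setup-bastion.yml` dumps into `bastion_cluster_config_dir`:

```
# Hypothetical laptop /etc/hosts entries; x.x.x.x and mno.example.com are
# placeholders for the bastion IP and the deployed cluster's domain.
x.x.x.x api.mno.example.com
x.x.x.x console-openshift-console.apps.mno.example.com
x.x.x.x oauth-openshift.apps.mno.example.com
x.x.x.x grafana-openshift-monitoring.apps.mno.example.com
```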
Patch for PR#441
baremetal is not the right name for the cluster being deployed. It should be named mno (multi-node OpenShift).