Installing Moose with external dependencies? #26041
Are we talking about the update scripts, or is this Conda package related? If this is not related to the above, I apologize. I am not understanding where the 'other versions' are coming from.
This is about the "update_and_rebuild" scripts. This is not conda related; this is for installing on an HPC cluster. We already have thousands of different modules for scientific software and libraries (https://docs.alliancecan.ca/wiki/Available_software), including the dependencies of MOOSE. We know very well how to build all of those libraries properly for our environment. We do not want MOOSE to re-download and reinstall them (and the build scripts do not work anyway).
Another issue that a user found is that the build is, for example, barfing on:
I can't speak for how libMesh searches for dependencies, but @roystgnr would know! I am sure there are some influential environment variables I am unaware of. Long story short: MOOSE should be good to go if you provide paths to the following three libraries:
If these are set, there is no need to run the update_and_rebuild scripts.

More details:

PETSc
Some aspects of MOOSE or MOOSE-based applications will make use of other contributions to PETSc, such as SLEPc, MUMPS, STRUMPACK, METIS, ParMETIS, SuperLU, etc. It's probably best to enable them all, which I am sure is already done.

libMesh
MOOSE calls `$LIBMESH_DIR/bin/libmesh-config --cppflags --cxxflags --include --cxx --cc --fc` to discover the compilers and flags to use.

WASP
That is pretty much the gist of what MOOSE requires from these three libraries. However you provide them, MOOSE shouldn't complain about where they reside. Your mileage may vary when problem-solving, depending on which versions of these libraries you load. We do extensive testing on the versions of these libraries at their submodule hashes (millions of tests per week), which is what is used when you run the update_and_rebuild scripts.
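As a concrete sketch, the three prefixes could be exported before configuring and building MOOSE. The paths below are placeholders for whatever your module system provides; `PETSC_DIR` and `LIBMESH_DIR` are the conventional variable names, and `WASP_DIR` is assumed here by analogy:

```shell
# Hypothetical install prefixes -- substitute your site's module paths.
export PETSC_DIR=/opt/petsc/3.20.0
export LIBMESH_DIR=/opt/libmesh/1.7.0
export WASP_DIR=/opt/wasp/4.0.0

# Sanity-check that each variable is set, and echo what will be used.
for v in PETSC_DIR LIBMESH_DIR WASP_DIR; do
  eval "p=\$$v"
  echo "$v=$p"
done
```

On a module-based cluster the `module load` commands for these packages would typically set equivalent variables for you.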
Thanks, I will tell my user to try that. Once those are defined, is the way to build MOOSE still to go into the test folder and run make there? (That seems a little strange to me.)
I understand the frustration of dealing with tons of seemingly duplicate libraries and packages to build an application like MOOSE. One thing to keep in mind is that just because you have MPI or PETSc installed, pointing to those installations won't necessarily work with MOOSE. As you may or may not be aware, there's a very close relationship within the software stack that MOOSE relies upon. Each package in the stack relies on the stack below it, so everything must be built in a consistent environment. If you build PETSc with one MPI version on your system and libMesh with another, they will not work together. They may not even work if slightly different compilers or other dependencies changed between those builds. Everything must work together, which is why we generally recommend you build everything from the ground up. Also, we use a very large set of optional packages in both libMesh and PETSc that may not be configured and built into your system-supplied versions.

That all being said, it's frustrating when the build scripts don't work. Generally there are many environment variables that can be set to aid both PETSc and libMesh in finding the dependencies they need. You may need to export a few of those in your environment to succeed, as Jason was pointing out.

Finally, for your last question: you do not have to build MOOSE from the test directory, although people frequently find that a convenient way to do it. MOOSE is a library and you can build it from elsewhere as well.

Let us know if you have any other questions on getting up and going.
We are pretty used to dealing with these concerns. As HPC package managers and support staff, we deal with such things on a daily basis. With a typical well-designed module system on an HPC cluster, mixing MPI/compiler/dependency versions is pretty much impossible, so that is not a concern. We also build PETSc and libMesh with a large number of optional packages (and if some are missing, we can rebuild them). We would rather know what those required optional packages/options are than modify hard-coded installation scripts that make assumptions about where and how things are installed.
Thanks for this insight.
Ok, that route does not work because, apparently, MOOSE requires libMesh code that has never been released (i.e. it uses development code; see libMesh/libmesh#3708; build log attached). And the build script fails with:
so I am at a dead end. This is with the latest tag (2023-11-08) from https://github.com/idaholab/moose/tags.
What is the command that you're running?
I'm surprised the script failed in that way. Did the libmesh submodule get checked out?
I don't believe it did. I simply downloaded the tarball from the release page and ran the script. I think the release tarballs are not complete.
If the tag was downloaded as a tarball, and not checked out via git, then the submodules would be missing.
Ah, I see. Do you have actual tarball releases available? Git clones aren't checksummable (the checksum changes when creating a tarball of the repository). We checksum every source tarball we build to be sure it does not change in case we need to reinstall it.
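The record-once, verify-on-reuse workflow described above can be sketched as follows (file names are illustrative; the tarball here is a local stand-in):

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for a real source tarball fetched from upstream.
printf 'stand-in for real tarball contents\n' > "$tmp/moose-src.tar.gz"

# Record the checksum once, when the tarball is first obtained.
( cd "$tmp" && sha256sum moose-src.tar.gz > moose-src.tar.gz.sha256 )

# Later, before reusing the cached tarball for a reinstall, verify it.
# Prints 'moose-src.tar.gz: OK' followed by 'checksum OK'.
( cd "$tmp" && sha256sum -c moose-src.tar.gz.sha256 ) && echo "checksum OK"
```

A tarball produced by `git archive` or a fresh clone does not have this property, since its bytes (and hence its checksum) can differ from run to run.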
I am not sure there is going to be an easy way to do this without the use of a cloned repository. All of the submodule versioning data is stored in the repository itself.

Some of our dependencies also do not revolve around releases. I am scratching my head as to how we would represent these dependencies as 'releases' to be obtained as tarball links from within the MOOSE repository (like in that configure script that failed). I suppose we could introduce a URL instead, like so:

curl -L -O https://github.com/libMesh/libmesh/archive/e4812f5b9245831473dd269d4bfd5e8485f536b4.tar.gz

which would represent the repository hash at the time (and a means of obtaining that version).

The other tricky bit is that libMesh, for example, uses submodules, and recursively so (MetaPhysicL, TIMPI, etc.), all of which will be required.

I don't think PETSc will be an issue; they have a versioning scheme already. WASP may be an issue; I haven't looked into that submodule yet.
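The hash-pinned-tarball idea above can be sketched mechanically: take a pinned commit hash and derive a GitHub `/archive/<hash>.tar.gz` URL from it. The repository below is a throwaway stand-in created just so the snippet is self-contained; in practice the hash would come from the submodule pin inside a real MOOSE clone (e.g. via `git ls-tree HEAD libmesh`):

```shell
set -e
tmp=$(mktemp -d)

# Throwaway repo standing in for a cloned dependency with a pinned commit.
git init -q "$tmp"
git -C "$tmp" -c user.email=a@b -c user.name=demo commit -q --allow-empty -m pin
hash=$(git -C "$tmp" rev-parse HEAD)

# GitHub serves a source snapshot for any commit at this URL pattern.
url="https://github.com/libMesh/libmesh/archive/${hash}.tar.gz"
echo "$url"
```

Note that such snapshots would still exclude the dependency's own submodules, so the recursion mentioned above (MetaPhysicL, TIMPI, etc.) would have to be handled the same way for each level.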
Indeed, I realized that libMesh itself uses submodules, and they have not created a release in 18 months. Typically, projects that use submodules and make releases will create tarballs separate from what GitHub creates automatically, which include all submodules already checked out at the proper versions. PETSc is indeed not an issue, as they already make versioned releases.

I am currently building libMesh with the approach above. For libMesh to build, I had to alter a few things, and I also specified some extra options. I am not yet at WASP.
Ah, the version of WASP which we have is too new (version 4); I will try with WASP 3.

Edit: successful build with WASP 3.
Reason
We already have many versions of PETSc, libMesh, WASP, and others installed. MOOSE does not need to download and install other versions.
Design
Let MOOSE's own "configure" script detect existing dependencies and use those, instead of insisting on downloading, updating, and building its own.