#40039: New port: mumps 4.10.0 - a library for solving sparse linear systems
-------------------------+--------------------------------
 Reporter:  wimmer@…     |      Owner:  macports-tickets@…
     Type:  submission   |     Status:  new
 Priority:  Normal       |  Milestone:
Component:  ports        |    Version:  2.2.0
Resolution:              |   Keywords:
     Port:               |
-------------------------+--------------------------------

Comment (by sean@…):

Replying to [comment:6 wimmer@…]:
> There are two things to note there:
>  - the sequential mpiseq has its own mpif.h, with definitions that
>    differ from e.g. OpenMPI's. From the MUMPS source code it is hard
>    to tell whether there is some dependency on that particular mpif.h.
>    Presumably not, but without the developers confirming this ... In
>    any case, one has to be careful about which mpif.h to use.
[[BR]] Using the correct MPI is always an issue. The goal of mpiuni is to
be a drop-in replacement for MPI (with obvious differences, of course).
This doesn't affect MUMPS that much, actually. We do this kind of thing
all the time within PETSc (which has an optional dependency on MUMPS).
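To make the idea concrete: a serial MPI stub is just a header plus a
small library in which each MPI call collapses to its one-process
meaning. Below is a minimal C sketch of the flavor; the real libmpiseq
and MPIUNI cover far more symbols, and the Fortran side ships its own
mpif.h, which is exactly the header-mixing concern raised above.

{{{
/* Minimal sketch of a serial MPI stub; the real libmpiseq/MPIUNI
   define many more types, constants, and functions. */
typedef int MPI_Comm;
#define MPI_COMM_WORLD ((MPI_Comm)0)
#define MPI_SUCCESS    0

static int MPI_Init(int *argc, char ***argv)
    { (void)argc; (void)argv; return MPI_SUCCESS; }
static int MPI_Comm_rank(MPI_Comm c, int *rank)
    { (void)c; *rank = 0; return MPI_SUCCESS; }  /* always rank 0 */
static int MPI_Comm_size(MPI_Comm c, int *size)
    { (void)c; *size = 1; return MPI_SUCCESS; }  /* always one process */
static int MPI_Finalize(void)
    { return MPI_SUCCESS; }
}}}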
>  - MUMPS has the orderings it can use defined at compile time (with
>    -Dparmetis etc.). For a variant you would mostly use in parallel,
>    this means compiling it with the parallel ParMetis and PT-SCOTCH.
>    Now, assuming you can also use the parallel version in sequential
>    mode by linking mpiseq, you would still have to link against
>    ParMetis and PT-SCOTCH even though the sequential version would
>    never use them. One could presumably fix this by adding dummy
>    parmetis/ptscotch wrappers to mpiseq, but that is even more
>    patching.
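For reference, the compile-time selection described above happens in
MUMPS's Makefile.inc. A rough sketch of the relevant fragment with all
orderings enabled (the install paths are placeholders, not MacPorts
policy):

{{{
# Fragment of a MUMPS Makefile.inc with all orderings compiled in.
LMETISDIR  = /opt/local
IMETIS     = -I$(LMETISDIR)/include
LMETIS     = -L$(LMETISDIR)/lib -lparmetis -lmetis
LSCOTCHDIR = /opt/local
LSCOTCH    = -L$(LSCOTCHDIR)/lib -lptesmumps -lptscotch -lptscotcherr
LPORD      = -L$(topdir)/PORD/lib -lpord

# The -D flags decide which ordering code gets compiled in:
ORDERINGSF = -Dpord -Dmetis -Dparmetis -Dscotch -Dptscotch
ORDERINGSC = $(ORDERINGSF)
LORDERINGS = $(LMETIS) $(LPORD) $(LSCOTCH)
}}}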
[[BR]] Dummy wrappers for parmet… what? You can compile in all the
optional orderings and decide which one to use at run time. You can also
compile MUMPS for parallel use and select its sequential algorithm at
run time. As homework, you can try it out: [[BR]]
 * running mpi-mumps + parmetis with 'mpiexec -n 1 ./test_prog'
 * running mpi-mumps + ptscotch with 'mpiexec -n 1 ./test_prog'
 * running mpi-mumps + parmetis with 'mpiexec -n 2 ./test_prog'
 * running mpi-mumps + ptscotch with 'mpiexec -n 2 ./test_prog'
 * running mpi-mumps + internal ordering with 'mpiexec -n 1 ./test_prog'
 * running mpi-mumps + internal ordering with 'mpiexec -n 2 ./test_prog'
And then try all of that again with a sequentially built MUMPS.
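To make the run-time choice concrete, here is a minimal sketch against
the MUMPS C interface (double precision). The ICNTL entries are the
documented MUMPS 4.10 controls; the matrix setup and the actual solve
(job = 6) are elided:

{{{
#include <mpi.h>
#include "dmumps_c.h"

#define ICNTL(I) icntl[(I)-1]     /* 1-based indexing, as in MUMPS's own examples */
#define USE_COMM_WORLD -987654    /* magic value from the MUMPS examples */

int main(int argc, char **argv) {
    DMUMPS_STRUC_C id;
    MPI_Init(&argc, &argv);

    id.job = -1;                  /* JOB = -1: initialize the instance */
    id.par = 1;                   /* host participates in computations */
    id.sym = 0;                   /* unsymmetric matrix */
    id.comm_fortran = USE_COMM_WORLD;
    dmumps_c(&id);

    /* The ordering is picked here, at run time, not at build time: */
    id.ICNTL(7)  = 5;             /* sequential analysis: METIS (3 = SCOTCH, 7 = automatic) */
    id.ICNTL(28) = 2;             /* request parallel analysis ...        */
    id.ICNTL(29) = 2;             /* ... with ParMetis (1 = PT-SCOTCH)    */

    /* ... set n, nz, irn, jcn, a, rhs here, then run with id.job = 6 ... */

    id.job = -2;                  /* JOB = -2: release the instance */
    dmumps_c(&id);
    MPI_Finalize();
    return 0;
}
}}}

Run it with 'mpiexec -n 1' and '-n 2' and flip ICNTL(7)/ICNTL(29) to
work through the list above.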
> What's so bad about having both a sequential and a parallel version
> installed (from the same Portfile)? This is how Debian does it, so it
> is something of a de-facto standard (the sequential version gets a
> trailing _seq in its library name).
[[BR]] Well, for one, I don't like having an unnecessary port (sequential MUMPS) since all its functionality would be provided by the parallel version.
> I was mostly thinking about people who want to compile software that
> uses MeTiS v4; they wouldn't want to patch everything themselves.
> (Even for MacPorts, avoiding patches where you can is better; every
> patch is a chance to make a mistake.)
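For scale, the v4 -> v5 update being discussed below is usually a small
call-site change. A hypothetical fragment (variable names are made up)
around METIS_NodeND, the fill-reducing-ordering entry point a solver
like MUMPS uses:

{{{
#include <metis.h>   /* v4 or v5 header, depending on which is installed */

/* With MeTiS v4 (types are int/idxtype; 0-based arrays via numflag): */
int numflag = 0, options4[8] = {0};   /* options4[0] == 0 -> defaults */
METIS_NodeND(&n, xadj, adjncy, &numflag, options4, perm, iperm);

/* With MeTiS v5: idxtype becomes idx_t, numflag is gone (indexing is
   configured through the options array; NULL means defaults), and its
   old slot now takes optional vertex weights: */
METIS_NodeND(&n, xadj, adjncy, /* vwgt */ NULL, /* options */ NULL,
             perm, iperm);
}}}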
[[BR]] Instead of a hypothetical situation, do you have any examples? I
have updated most of the ports (in my side repo) to use the newest
version of MeTiS / ParMeTiS, but I might have missed some. Since it is a
very, very simple change (usually a two-line change), I would rather
push others to update their code.

--
Ticket URL: <https://trac.macports.org/ticket/40039#comment:7>
MacPorts <http://www.macports.org/>
Ports system for OS X