Cross-compiling MacPorts: how hard is it?
hi all,

nowadays it's very common for OS X software to be distributed as universal binaries, so I'm quite sure somebody has tried to cross-compile (Intel + PowerPC) some MacPorts libraries. I need to do that for a handful of libraries, so before starting my experiments I'd like to hear about previous experiences.

do you think it's worth trying, or is it definitely better to use two boxes (Intel and PowerPC) to produce the binaries and then combine them with lipo? [1]

greetings,
pau

1. http://developer.apple.com/technotes/tn2005/tn2137.html
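For reference, the lipo step from TN2137 boils down to something like this (a minimal sketch with a hypothetical library name, not taken from the thread):

    # merge per-architecture builds of the same library into one fat binary
    lipo -create build-ppc/libfoo.dylib build-i386/libfoo.dylib -output libfoo.dylib

    # confirm both architectures made it in; prints something like
    # "Architectures in the fat file: libfoo.dylib are: ppc i386"
    lipo -info libfoo.dylib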
On 2 Jan, 2007, at 16:52, Pau Arumi wrote:
hi all, nowadays it's very common for OS X software to be distributed as universal binaries, so I'm quite sure somebody has tried to cross-compile (Intel + PowerPC) some MacPorts libraries. I need to do that for a handful of libraries, so before starting my experiments I'd like to hear about previous experiences.
do you think it's worth trying, or is it definitely better to use two boxes (Intel and PowerPC) to produce the binaries and then combine them with lipo? [1]
It depends :-). Some software, possibly the majority, compiles without problems using '-arch i386 -arch ppc'. Software that does configure-time checks for byte order will cause problems unless the build environment knows about universal binaries (Python is an example of a build that does know about them: Python 2.4.4 and 2.5 build as universal binaries even though the configure script checks the byte order of the host machine).

Ronald
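In the easy case, building a universal library looks roughly like this (a rough sketch assuming an autoconf-based package and the 10.4u SDK; paths and flags will vary per package):

    # both architectures in one pass; dependency tracking has to be disabled
    # because gcc cannot emit dependency files with more than one -arch flag
    export CFLAGS="-arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk"
    export LDFLAGS="-arch ppc -arch i386"
    ./configure --disable-dependency-tracking
    make

Packages that bake a configure-time byte-order test into a generated header end up with a value that is only correct for the build host, which is exactly the problem described above.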
If you're going to distribute software, you don't want to build it with macports (because all the linker paths will be absolute and pointing at /opt/local, which is not how you want to distribute anything). Because of this, I don't think crosscompiling is an issue, since when building just for yourself, you don't need the other architecture.

As a note, I once downloaded a program which had just released a new version that used openjade for HTML validation. It didn't work. Why? Because the author had built openjade using MacPorts and bundled it that way. It worked fine on his system, but on anybody else's system it was looking for libraries in /opt/local/lib that simply weren't there. Beware distribution of MacPorts-built binaries.

On Jan 2, 2007, at 10:52 AM, Pau Arumi wrote:
nowadays it's very common for OS X software to be distributed as universal binaries, so I'm quite sure somebody has tried to cross-compile (Intel + PowerPC) some MacPorts libraries. I need to do that for a handful of libraries, so before starting my experiments I'd like to hear about previous experiences.
do you think it's worth trying, or is it definitely better to use two boxes (Intel and PowerPC) to produce the binaries and then combine them with lipo? [1]
-- Kevin Ballard http://kevin.sb.org eridius@macports.org http://www.tildesoft.com
On Jan 2, 2007, at 2:08 PM, Kevin Ballard wrote:
If you're going to distribute software, you don't want to build it with macports (because all the linker paths will be absolute and pointing at /opt/local, which is not how you want to distribute anything). Because of this, I don't think crosscompiling is an issue, since when building just for yourself, you don't need the other architecture.
I'm a little puzzled by this statement. Unless you're willing to make all of your distributed applications and libraries into bundles, which is quite a bit of work that most folks aren't willing to go to anyway, you're going to have hard-coded linker paths pointing somewhere no matter what you do! Since you also can't install directly into locations like /usr/bin and /usr/lib (well, you can, but a special hit squad of Mac OS X engineers will be dispatched to terminate you and your software with extreme prejudice if you do), /opt/local is as good a location as /sw, /usr/local or pretty much any other location you might come up with.

In fact, if the MacPorts project ever gets off its collective duff and starts distributing binary packages (in some, any, package format) like was originally intended at the start of all this, one can reasonably expect to see a lot of users whacking stuff into /opt/local without really even being aware of it.
As a note, I once downloaded a program which had just released a new version that used openjade for HTML validation. It didn't work. Why? Because the author had built openjade using MacPorts and bundled it that way. It worked fine on his system, but on anybody else's system it was looking for libraries in /opt/local/lib that simply weren't there. Beware distribution of MacPorts-built binaries.
No offense, but you're barking up the wrong tree with that analysis. The problem wasn't that the author had built openjade using MacPorts, the problem was that he didn't instruct you to install the dependent libraries as well or simply bundle them with his software too. That's one of the reasons that the MacPorts community always gets so hung up on package management - they want to distribute packages, but they also want to ensure that any system which installs those packages also follows dependencies, deals with upgrades and otherwise handles all the messy details of making that software work exactly the way it did on the package author's system. Someday, one hopes, MacPorts will finally reach parity with its FreeBSD/Gentoo/Red Hat cousins and offer such a collection, after which problems like the ones you're describing will go away and be replaced by an entirely new and different set of problems which, at least, will be interesting and relevant to a wider audience. :-) - Jordan
On Jan 2, 2007, at 10:52 AM, Pau Arumi wrote:
nowadays it's very common for OS X software to be distributed as universal binaries, so I'm quite sure somebody has tried to cross-compile (Intel + PowerPC) some MacPorts libraries. I need to do that for a handful of libraries, so before starting my experiments I'd like to hear about previous experiences.
do you think it's worth trying, or is it definitely better to use two boxes (Intel and PowerPC) to produce the binaries and then combine them with lipo? [1]
-- Kevin Ballard http://kevin.sb.org eridius@macports.org http://www.tildesoft.com
well, I'm already distributing binaries that get compiled/linked against /usr/local libraries, with those libs bundled in the app. So I guess my binaries get linked in a way that does not keep absolute paths to the libs.

Kevin Ballard wrote:
If you're going to distribute software, you don't want to build it with macports (because all the linker paths will be absolute and pointing at /opt/local, which is not how you want to distribute anything). Because of this, I don't think crosscompiling is an issue, since when building just for yourself, you don't need the other architecture.
As a note, I once downloaded a program which had just released a new version that used openjade for HTML validation. It didn't work. Why? Because the author had built openjade using MacPorts and bundled it that way. It worked fine on his system, but on anybody else's system it was looking for libraries in /opt/local/lib that simply weren't there. Beware distribution of MacPorts-built binaries.
On Jan 2, 2007, at 10:52 AM, Pau Arumi wrote:
nowadays it's very common for OS X software to be distributed as universal binaries, so I'm quite sure somebody has tried to cross-compile (Intel + PowerPC) some MacPorts libraries.
I need to do that for a handful of libraries, so before starting my experiments I'd like to hear about previous experiences.
do you think it's worth trying, or is it definitely better to use two boxes (Intel and PowerPC) to produce the binaries and then combine them with lipo? [1]
On Jan 2, 2007, at 5:41 PM, Jordan K. Hubbard wrote:
On Jan 2, 2007, at 2:08 PM, Kevin Ballard wrote:
If you're going to distribute software, you don't want to build it with macports (because all the linker paths will be absolute and pointing at /opt/local, which is not how you want to distribute anything). Because of this, I don't think crosscompiling is an issue, since when building just for yourself, you don't need the other architecture.
I'm a little puzzled by this statement. Unless you're willing to make all of your distributed applications and libraries into bundles, which is quite a bit of work that most folks aren't willing to go to anyway, you're going to have hard-coded linker paths pointing somewhere no matter what you do! Since you also can't install directly into locations like /usr/bin and /usr/lib (well, you can, but a special hit squad of Mac OS X engineers will be dispatched to terminate you and your software with extreme prejudice if you do), /opt/local is as good a location as /sw, /usr/local or pretty much any other location you might come up with.
There are two options in cases like this. The first is to make a bundle, which really isn't all that hard; in that case, your linker paths should point at some variation of @executable_path/../Frameworks/foo. The other option is to install somewhere like, say, /usr/local. I disagree that /opt/local is as good a place to install stuff as any - I don't want anything touching /opt/local but MacPorts. /usr/local is the common choice for stuff like this as well, so precedent is on your side if you pick that.
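The bundle option looks roughly like this at build time (a hypothetical sketch; libfoo and MyApp are made-up names):

    # link the library with a bundle-relative install name instead of an
    # absolute /opt/local or /usr/local path
    gcc -dynamiclib foo.o -o libfoo.dylib \
        -install_name @executable_path/../Frameworks/libfoo.dylib

    # ship the library inside the app bundle
    mkdir -p MyApp.app/Contents/Frameworks
    cp libfoo.dylib MyApp.app/Contents/Frameworks/

Anything linked against that copy of libfoo.dylib records the @executable_path-relative path, so the dynamic linker resolves it inside whatever bundle the executable lives in.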
In fact, if the MacPorts project ever gets off its collective duff and starts distributing binary packages (in some, any, package format) like was originally intended at the start of all this, one can reasonably expect to see a lot of users whacking stuff into /opt/local without really even being aware of it.
Sure, if MacPorts ever starts distributing binaries, at that point cross-compiling will become useful (although the argument can be made that separate binaries for ppc vs. i686 should be used instead, as that cuts down on file size).
As a note, I once downloaded a program which had just released a new version that used openjade for HTML validation. It didn't work. Why? Because the author had built openjade using MacPorts and bundled it that way. It worked fine on his system, but on anybody else's system it was looking for libraries in /opt/local/lib that simply weren't there. Beware distribution of MacPorts-built binaries.
No offense, but you're barking up the wrong tree with that analysis. The problem wasn't that the author had built openjade using MacPorts, the problem was that he didn't instruct you to install the dependent libraries as well or simply bundle them with his software too. That's one of the reasons that the MacPorts community always gets so hung up on package management - they want to distribute packages, but they also want to ensure that any system which installs those packages also follows dependencies, deals with upgrades and otherwise handles all the messy details of making that software work exactly the way it did on the package author's system.
He did in fact bundle all the dependencies with his software. He just didn't realize that those bundled dependencies weren't actually being used - I don't know what he was thinking, but I guess he assumed that if they existed in the bundle's Frameworks directory they'd take precedence, which isn't the case. So the problem was that he built stuff using a package management system, and then bundled his app the way he'd seen other people bundle it, but the binaries were still all linking against /opt/local/lib rather than the bundled libraries. Sure, this is basically a 1-D-10-T error, but if he had built the libraries by hand it would have forced him to think about this stuff. The basic point is that building stuff via MacPorts that you intend to distribute basically won't work unless you put in extra effort, probably more effort than it would have taken to just build everything yourself correctly.
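An easy way to catch this is to ask the binary itself what it will load at runtime (hypothetical bundle name, not the program from the anecdote):

    # list the install names recorded in the executable
    otool -L MyApp.app/Contents/MacOS/MyApp

    # any /opt/local/lib/... line means the copy in Contents/Frameworks
    # is being ignored in favor of the MacPorts one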
Someday, one hopes, MacPorts will finally reach parity with its FreeBSD/Gentoo/Red Hat cousins and offer such a collection, after which problems like the ones you're describing will go away and be replaced by an entirely new and different set of problems which, at least, will be interesting and relevant to a wider audience. :-)
Haha. -- Kevin Ballard http://kevin.sb.org eridius@macports.org http://www.tildesoft.com
I've successfully used a combination of lipo(1) and install_name_tool(1) to build universal versions of MacPorts libraries (built on separate PPC/Intel machines, alas) so that everything the app needed lived in <app>/Contents/Frameworks. I used install_name_tool to change any links to, e.g., /opt/local so the linker would resolve them within the app bundle instead.

This was for a client, so I can't post the actual script, but if anyone's interested, I could probably write up a distilled version of what I did. It really wasn't very difficult.

--John
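The general shape of that technique is something like the following (not John's actual script, just a sketch with hypothetical names; the fat dylibs would first be produced with lipo -create as earlier in the thread):

    # give the bundled copy a bundle-relative install name
    install_name_tool -id @executable_path/../Frameworks/libfoo.dylib \
        MyApp.app/Contents/Frameworks/libfoo.dylib

    # repoint the app executable (and any other bundled dylibs that link
    # against libfoo) away from /opt/local
    install_name_tool -change /opt/local/lib/libfoo.dylib \
        @executable_path/../Frameworks/libfoo.dylib \
        MyApp.app/Contents/MacOS/MyApp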
I'd be interested! It would be nice if we could figure out some way to semi-automate at least part of that process...

On Jan 2, 2007, at 11:06 PM, John Labovitz wrote:
I've successfully used a combination of lipo(1) and install_name_tool(1) to build universal versions of MacPorts libraries (built on separate PPC/Intel machines, alas) so that everything the app needed lived in <app>/Contents/Frameworks. I used install_name_tool to change any links to, eg, /opt/local so the linker would resolve them within the app bundle instead.
This was for a client, so I can't post the actual script, but if anyone's interested, I could probably write up a distilled version of what I did. It really wasn't very difficult.
--John
Dr. Ernie Prabhakar wrote:
I'd be interested! It would be nice if we could figure out some way to semi-automate at least part of that process...
On Jan 2, 2007, at 11:06 PM, John Labovitz wrote:
I've successfully used a combination of lipo(1) and install_name_tool(1) to build universal versions of MacPorts libraries (built on separate PPC/Intel machines, alas) so that everything the app needed lived in <app>/Contents/Frameworks. I used install_name_tool to change any links to, eg, /opt/local so the linker would resolve them within the app bundle instead.
This was for a client, so I can't post the actual script, but if anyone's interested, I could probably write up a distilled version of what I did. It really wasn't very difficult.
--John
Bob Ippolito's py2app package (which wraps up Python applications into standard Mac .app bundles) has a separate script/command-line tool called macho_standalone, which scans an app bundle and rewrites all the linker bits so that the dylibs in the app bundle are self-contained. It runs install_name_tool on them, IIRC. You may want to Google for "macho_standalone" to find the most recent version and documentation.

--
Kevin Walzer
Code by Kevin
http://www.codebykevin.com
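If I'm remembering the interface right, the invocation is roughly the following (hedged: check the macholib documentation for the current usage; MyApp.app is a hypothetical bundle):

    # copies non-system dylibs into the bundle and rewrites the load
    # commands so the bundled copies are used
    macho_standalone MyApp.app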
In addition, the Mozilla project has a script (unfortunately I forget the exact name) which takes 2 single-architecture trees and combines them via lipo. It creates a new tree that contains all files that are the same in the 2 old trees. Any differing files that are binaries it combines via lipo. Any differing files that aren't binaries it tosses (can't do anything else with them), so you have to be careful in some projects (if, say, it builds a header file for distribution that changes per-architecture), but in general it works quite well. On Jan 3, 2007, at 1:14 PM, Kevin Walzer wrote:
Bob Ippolito's py2app package (which wraps up Python applications into standard Mac .app bundles) has a separate script/command-line tool called macho_standalone, which scans an app bundle and rewrites all the linker bits so that the dylibs in the app bundle are self-contained. It runs install_name_tool on them, IIRC. You may want to Google for "macho_standalone" to find the most recent version and documentation.
-- Kevin Ballard http://kevin.sb.org eridius@macports.org http://www.tildesoft.com
On 3 Jan, 2007, at 18:02, Dr. Ernie Prabhakar wrote:
I'd be interested! It would be nice if we could figure out some way to semi-automate at least part of that process...
The Python library macholib can do this as well. It includes the tool macho_standalone, which copies non-system libraries into the application bundle and rewrites link paths.

Ronald
On Jan 2, 2007, at 11:06 PM, John Labovitz wrote:
I've successfully used a combination of lipo(1) and install_name_tool(1) to build universal versions of MacPorts libraries (built on separate PPC/Intel machines, alas) so that everything the app needed lived in <app>/Contents/Frameworks. I used install_name_tool to change any links to, eg, /opt/local so the linker would resolve them within the app bundle instead.
This was for a client, so I can't post the actual script, but if anyone's interested, I could probably write up a distilled version of what I did. It really wasn't very difficult.
--John
On Jan 3, 2007, at 15:36, Kevin Ballard wrote:
In addition, the Mozilla project has a script (unfortunately I forget the exact name) which takes 2 single-architecture trees and combines them via lipo. It creates a new tree that contains all files that are the same in the 2 old trees. Any differing files that are binaries it combines via lipo. Any differing files that aren't binaries it tosses (can't do anything else with them), so you have to be careful in some projects (if, say, it builds a header file for distribution that changes per-architecture), but in general it works quite well.
I was very interested in such a script, and found it inside the Firefox 2.0.0.1 source tarball. It's called "unify" and it works great for me (in a non-MacPorts context).
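The idea is simple enough to approximate in a few lines of shell (not the actual Mozilla "unify" script, just a rough illustration with hypothetical tree names; filenames containing spaces would need more care):

    for f in $(cd ppc-tree && find . -type f); do
        mkdir -p "universal-tree/$(dirname "$f")"
        if cmp -s "ppc-tree/$f" "i386-tree/$f"; then
            cp "ppc-tree/$f" "universal-tree/$f"        # identical files: copy through
        elif file "ppc-tree/$f" | grep -q 'Mach-O'; then
            lipo -create "ppc-tree/$f" "i386-tree/$f" \
                 -output "universal-tree/$f"            # differing binaries: merge
        fi                                              # differing non-binaries: dropped
    done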
participants (8)
- Dr. Ernie Prabhakar
- John Labovitz
- Jordan K. Hubbard
- Kevin Ballard
- Kevin Walzer
- Pau Arumi
- Ronald Oussoren
- Ryan Schmidt