[Xquartz-dev] 2.3.3_rc1

George Peter Staplin georgeps at xmission.com
Tue Mar 10 12:38:25 PDT 2009


Quoted Jeremy Huddleston <jeremyhu at apple.com>:

> LIBGL_ALWAYS_INDIRECT isn't about software versus hardware rendering.
> It's about who handles the rendering.  In IGLX (Indirect OpenGL for
> X11), you have the server handling the rendering.  AIGLX is Accelerated
> IGLX which *is* hardware rendering, but the application that actually
> "asks" the hardware to render is the X server.  In direct mode, the
> client interacts with the hardware itself and asks the server for some
> memory to dump the framebuffer to.
>
> As a bit of a simplification, with AIGLX, the client tells the server
> "draw a triangle here, use this shader, use this lighting".  In direct
> mode, it tells the server "here, use these pixel values".

It's similar to that.  In our case, in direct mode the gl*()
operations operate directly on a VRAM surface for a window, assuming
the particular context is using a pixel format compatible with the
hardware.

When using indirect rendering, say to run an app from GNU/Linux on
an XQuartz server, the surface is stored entirely on the XQuartz
server side.  The gl*() commands are packed by the client, sent over
the X protocol, and then unpacked and run on the server to manipulate
the surface associated with an X window.

The xserver/hw/xquartz/GL/indirect.c code is what handles that.

Note: many GL extensions aren't supported in indirect mode.
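
As an aside, a client can check at run time which mode it actually
got with glXIsDirect().  A minimal sketch using plain GLX (nothing
XQuartz-specific; the attribute list is just an example):

/* Minimal sketch: check whether a GLX context is direct or indirect.
 * glXIsDirect() is standard GLX; error handling is trimmed. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vis = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vis)
        return 1;

    /* Passing True requests a direct context; GLX silently falls
     * back to indirect when direct rendering isn't available. */
    GLXContext ctx = glXCreateContext(dpy, vis, NULL, True);
    printf("context is %s\n",
           glXIsDirect(dpy, ctx) ? "direct" : "indirect");

    glXDestroyContext(dpy, ctx);
    XFree(vis);
    XCloseDisplay(dpy);
    return 0;
}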

> With Mesa's libGL, you can enable the LIBGL_ALWAYS_INDIRECT environment
> variable to turn on indirect rendering to push all the GL commands over
> the X protocol and have the server render it.

Right.
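
With a Mesa libGL that's just, for example:

env LIBGL_ALWAYS_INDIRECT=1 /path/to/someapp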

> As for why the 7500, 9600, and X1600 are all reporting the Software
> renderer, I really couldn't say.  One guess is the same as George's
> (it's using offscreen rendering), but the HD2600 case seems to
> contradict that.
>
> What does 'glxinfo' say on these systems about the OpenGL version and
> extensions?  It *should* say something about ATI rather than Apple
> Software.  I don't have any ATI hardware unfortunately, but for my
> QuadG5+NV6600, I have:
>
> OpenGL vendor string: NVIDIA Corporation
> OpenGL renderer string: NVIDIA GeForce 6600 OpenGL Engine
> OpenGL version string: 2.0 NVIDIA-1.5.36
> OpenGL shading language version string: 1.20
>
> LIBGL_ALWAYS_INDIRECT is being dropped because indirect rendering is
> not being supported by our libGL at present.  It is an edge use that
> has significant performance impacts.  If there are bugs in libGL, then
> bypassing them to ask the server to do the rendering is not the right
> thing to do.

Right.  Instead of taking a more direct path, we end up needing to
look up thread-specific data for every gl*() operation, so that we
can determine which path (indirect or direct) to take for each
thread's context.  That has a huge impact on performance.  The CGL
layer already does this, as some of you might expect, because
CGLContextObjs are per-thread, but we can't violate CGL's
encapsulation without causing other problems.

> So step 1 is to figure out why your first three systems are actually
> using the software renderer.

There are several ways you can get the "Apple software renderer."
The most common, as I understand it, is passing glXChooseFBConfig or
glXChooseVisual a set of attributes that doesn't closely match what
the display can accelerate.  We attempt to only provide GLXFBConfigs
and Visuals that are going to be accelerated; however, some visuals
are marked with the "Slow" caveat (see glxinfo -v) because they use
software rendering.  From what I recall, the libGL also tries to
prefer non-slow configurations in most cases.
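
If you want to see those caveats yourself, a small sketch using the
standard GLX 1.3 calls shows roughly what glxinfo -v reports (error
checking omitted for brevity):

/* Sketch: list each GLXFBConfig's ID and caveat.  GLX_SLOW_CONFIG
 * is the "Slow" caveat that glxinfo -v shows. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int i, n;
    GLXFBConfig *configs = glXGetFBConfigs(dpy, DefaultScreen(dpy), &n);

    for (i = 0; i < n; i++) {
        int id, caveat;
        glXGetFBConfigAttrib(dpy, configs[i], GLX_FBCONFIG_ID, &id);
        glXGetFBConfigAttrib(dpy, configs[i], GLX_CONFIG_CAVEAT, &caveat);
        printf("fbconfig 0x%x: %s\n", id,
               caveat == GLX_SLOW_CONFIG ? "Slow" :
               caveat == GLX_NON_CONFORMANT_CONFIG ? "Non-conformant" :
               "None");
    }

    XFree(configs);
    XCloseDisplay(dpy);
    return 0;
}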

You can dump the visual id used by an application (assuming it's using  
glXChooseVisual) via:

env LIBGL_DUMP_VISUALID=1 /path/to/someapp

That should print to stdout the visual id that the application
selected.  You can then look up that visual id in the output of
"glxinfo -v" to see what attributes the visual has.

I want to add something like this to the glXChooseFBConfig() path as
well, but glXChooseFBConfig() returns an array of matching
GLXFBConfigs, so we're probably better off doing it in
glXCreateNewContext(), which takes the final GLXFBConfig.  We could
add a similar path for glXGetVisualFromFBConfig().

I don't yet know why vmd-xplor doesn't work, or why the software
rendering fallback (provided by CGL) is working in some cases.  It
would help if I could provide a means to dump the attributes of a
visual or fbconfig request to stdout when some environment variable
is enabled.  I've been thinking about that for the last two weeks;
it's mainly a matter of figuring out the best way to go about it.
The code itself should be fairly simple once it handles all of the
known attributes and values accepted by GLX.
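
As a very rough sketch of what I have in mind (the
LIBGL_DUMP_FBCONFIG variable name and the hook point are hypothetical,
not something that exists yet):

/* Hypothetical sketch of the debug dump described above: when the
 * (invented) LIBGL_DUMP_FBCONFIG variable is set, print the chosen
 * GLXFBConfig's attributes to stdout.  Not actual libGL code. */
#include <stdio.h>
#include <stdlib.h>
#include <GL/glx.h>

static const struct { int attr; const char *name; } known_attrs[] = {
    { GLX_FBCONFIG_ID,   "GLX_FBCONFIG_ID"   },
    { GLX_BUFFER_SIZE,   "GLX_BUFFER_SIZE"   },
    { GLX_DOUBLEBUFFER,  "GLX_DOUBLEBUFFER"  },
    { GLX_RED_SIZE,      "GLX_RED_SIZE"      },
    { GLX_GREEN_SIZE,    "GLX_GREEN_SIZE"    },
    { GLX_BLUE_SIZE,     "GLX_BLUE_SIZE"     },
    { GLX_ALPHA_SIZE,    "GLX_ALPHA_SIZE"    },
    { GLX_DEPTH_SIZE,    "GLX_DEPTH_SIZE"    },
    { GLX_STENCIL_SIZE,  "GLX_STENCIL_SIZE"  },
    { GLX_CONFIG_CAVEAT, "GLX_CONFIG_CAVEAT" },
};

/* Would be called from, e.g., glXCreateNewContext() once the final
 * GLXFBConfig is known. */
void dump_fbconfig(Display *dpy, GLXFBConfig config)
{
    size_t i;

    if (!getenv("LIBGL_DUMP_FBCONFIG"))   /* hypothetical variable */
        return;

    for (i = 0; i < sizeof(known_attrs) / sizeof(known_attrs[0]); i++) {
        int value;
        if (glXGetFBConfigAttrib(dpy, config, known_attrs[i].attr,
                                 &value) == Success)
            printf("%s = %d\n", known_attrs[i].name, value);
    }
}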


> On Mar 10, 2009, at 06:59, Jack Howarth wrote:
>
>> Jeremy,
>>  Could you clarify something here?  When I run vmd-xplor (a ppc
>> legacy binary) under X11 2.3.3-rc1 on the following configurations...
>>
>> 1) dual G4 QuickSilver with Radeon 7500
>> 2) dual G5 Powermac with Radeon 9600
>> 3) late 2006 MacBook Pro with Radeon X1600
>>
>> in all those cases the OpenGL renderer reported by vmd-xplor is Apple
>> Software Renderer.  Only in the instance of my 2008 MacPro with HD2600
>> graphics does it report hardware rendering.  In the first three
>> cases vmd-xplor works fine.  This would seem to imply that software
>> rendering in GLX is indeed possible, and that LIBGL_ALWAYS_INDIRECT
>> simply isn't allowed to force software rendering when hardware
>> rendering is available.
>>  I could understand your decision to disable LIBGL_ALWAYS_INDIRECT
>> if the Apple Software Renderer were in fact being removed from X11's
>> OpenGL support, but this certainly doesn't seem to be the case in X11
>> 2.3.3-rc1.  If the renderer is staying, it would seem rather arbitrary
>> to disable LIBGL_ALWAYS_INDIRECT, since it prohibits the user from
>> dropping down into software rendering if there are bugs in the
>> hardware rendering.
>>                                Jack
>>
>> On Tue, Mar 10, 2009 at 02:34:15AM -0700, Jeremy Huddleston wrote:
>>> Still not 100% correct.
>>>
>>> The SERVER still supports indirect rendering.  Our libGL does NOT
>>> support indirect rendering.  This means that you can ssh to your linux
>>> box and run OpenGL applications there, and they will use indirect
>>> rendering on the server.  This is hardware accelerated, but very slow
>>> since you're bottlenecking the rendering with the network latency.
>>>
>>> What you can't do is ssh to an OSX box and run OpenGL applications on
>>> the remote OSX box on your local X server.  If you *REALLY* want to do
>>> that, then you can compile your own libGL from mesa, and use that for
>>> the indirect GLX capability, but I think this is really an edge case of
>>> an edge use that won't affect many users.
>>>
>>> On Mar 9, 2009, at 23:47, Jordan K. Hubbard wrote:
>>>
>>>>
>>>> On Mar 9, 2009, at 11:34 PM, Martin Costabel wrote:
>>>>
>>>>> I don't understand what you are saying here. Or rather, I don't
>>>>> believe you are really saying what this sounds like. You are no
>>>>> longer supporting running X clients on remote machines? No more "ssh
>>>>> -Y"? This is too horrible to be true; please clarify.
>>>>
>>>> Jeremy is talking about OpenGL indirect rendering mode, not the
>>>> ability to run remote X clients.  Running an xterm over the wire is
>>>> fine, in other words, but if you're trying to run OpenGL apps remotely
>>>> then you won't be able to use the GLX extension (though you could
>>>> always use a software renderer like Mesa, of course).  Given the total
>>>> lack of sense in trying to use the fastest possible 3D acceleration
>>>> while simultaneously putting the client "far away" from the server,
>>>> this is nowhere near as limiting (or "horrible") as it sounds.
>>>>
>>>> - Jordan

George
-- 
http://people.freedesktop.org/~gstaplin/
