[CalendarServer-changes] [11779] CalendarServer/branches/users/gaya/sharedgroupfixes

source_changes at macosforge.org
Wed Oct 2 16:27:44 PDT 2013


Revision: 11779
          http://trac.calendarserver.org//changeset/11779
Author:   gaya at apple.com
Date:     2013-10-02 16:27:44 -0700 (Wed, 02 Oct 2013)
Log Message:
-----------
merge in r11667 through r11778

Revision Links:
--------------
    http://trac.calendarserver.org//changeset/11667
    http://trac.calendarserver.org//changeset/11778

Modified Paths:
--------------
    CalendarServer/branches/users/gaya/sharedgroupfixes/support/build.sh
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/adbapi2.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/ienterprise.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/test/test_adbapi2.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/internet/sendfdport.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/internet/test/test_sendfdport.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/python/log.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/python/test/test_log.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/dav/test/test_util.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/dav/util.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/metafd.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/test/test_metafd.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/caldavxml.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/method/report_sync_collection.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/storebridge.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Africa/Juba.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Anguilla.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Araguaina.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Argentina/San_Luis.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Aruba.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Cayman.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Dominica.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Grand_Turk.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Grenada.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Guadeloupe.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Jamaica.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Marigot.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Montserrat.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Barthelemy.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Kitts.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Lucia.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Thomas.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Vincent.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Tortola.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Virgin.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Antarctica/McMurdo.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Antarctica/South_Pole.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Amman.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Dili.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Gaza.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Hebron.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Jakarta.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Jayapura.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Makassar.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Pontianak.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Ujung_Pandang.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Busingen.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Vaduz.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Zurich.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Jamaica.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Pacific/Fiji.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Pacific/Johnston.ics
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/links.txt
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/timezones.xml
    CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/version.txt
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/base/datastore/subpostgres.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/carddav/datastore/test/test_sql.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/current-oracle-dialect.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/current.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v20.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v21.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v22.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v23.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_19_to_20.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_tables.py
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/xml/rfc6578.py

Added Paths:
-----------
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v25.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/postgres-dialect/v25.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_25_to_26.sql
    CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_25_to_26.sql

Property Changed:
----------------
    CalendarServer/branches/users/gaya/sharedgroupfixes/


Property changes on: CalendarServer/branches/users/gaya/sharedgroupfixes
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/fix-no-ischedule:11612
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/enforce-max-requests:11640-11643
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:11632-11666
   + /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/fix-no-ischedule:11612
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/enforce-max-requests:11640-11643
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/log-cleanups:11691-11731
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:11632-11778

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/support/build.sh
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/support/build.sh	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/support/build.sh	2013-10-02 23:27:44 UTC (rev 11779)
@@ -598,10 +598,11 @@
 
   export              PATH="${dstroot}/bin:${PATH}";
   export    C_INCLUDE_PATH="${dstroot}/include:${C_INCLUDE_PATH:-}";
-  export   LD_LIBRARY_PATH="${dstroot}/lib:${LD_LIBRARY_PATH:-}";
+  export   LD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:${LD_LIBRARY_PATH:-}";
   export          CPPFLAGS="-I${dstroot}/include ${CPPFLAGS:-} ";
-  export           LDFLAGS="-L${dstroot}/lib ${LDFLAGS:-} ";
-  export DYLD_LIBRARY_PATH="${dstroot}/lib:${DYLD_LIBRARY_PATH:-}";
+  export           LDFLAGS="-L${dstroot}/lib -L${dstroot}/lib64 ${LDFLAGS:-} ";
+  export DYLD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:${DYLD_LIBRARY_PATH:-}";
+  export PKG_CONFIG_PATH="${dstroot}/lib/pkgconfig:${PKG_CONFIG_PATH:-}";
 
   if "${do_setup}"; then
     if "${force_setup}" || "${do_bundle}" || [ ! -d "${dstroot}" ]; then
@@ -626,10 +627,10 @@
   cat > "${dstroot}/environment.sh" << __EOF__
 export              PATH="${dstroot}/bin:\${PATH}";
 export    C_INCLUDE_PATH="${dstroot}/include:\${C_INCLUDE_PATH:-}";
-export   LD_LIBRARY_PATH="${dstroot}/lib:\${LD_LIBRARY_PATH:-}:\$ORACLE_HOME";
+export   LD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:\${LD_LIBRARY_PATH:-}:\$ORACLE_HOME";
 export          CPPFLAGS="-I${dstroot}/include \${CPPFLAGS:-} ";
-export           LDFLAGS="-L${dstroot}/lib \${LDFLAGS:-} ";
-export DYLD_LIBRARY_PATH="${dstroot}/lib:\${DYLD_LIBRARY_PATH:-}:\$ORACLE_HOME";
+export           LDFLAGS="-L${dstroot}/lib -L${dstroot}/lib64 \${LDFLAGS:-} ";
+export DYLD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:\${DYLD_LIBRARY_PATH:-}:\$ORACLE_HOME";
 __EOF__
 }
 
@@ -656,10 +657,10 @@
 
     # Normally we depend on the system Python, but a bundle install should be as
     # self-contained as possible.
-    local pyfn="Python-2.7.1";
-    c_dependency -m "aa27bc25725137ba155910bd8e5ddc4f" \
+    local pyfn="Python-2.7.5";
+    c_dependency -m "6334b666b7ff2038c761d7b27ba699c1" \
         "Python" "${pyfn}" \
-        "http://www.python.org/ftp/python/2.7.1/${pyfn}.tar.bz2" \
+        "http://www.python.org/ftp/python/2.7.5/${pyfn}.tar.bz2" \
         --enable-shared;
     # Be sure to use the Python we just built.
     export PYTHON="$(type -p python)";
@@ -707,6 +708,14 @@
       --disable-bdb --disable-hdb;
   fi;
 
+  if find_header ffi/ffi.h; then
+    using_system "libffi";
+  else
+    c_dependency -m "45f3b6dbc9ee7c7dfbbbc5feba571529" \
+      "libffi" "libffi-3.0.13" \
+      "ftp://sourceware.org/pub/libffi/libffi-3.0.13.tar.gz"
+  fi;
+
   #
   # Python dependencies
   #
@@ -764,7 +773,7 @@
   local v="4.1.1";
   local n="PyGreSQL";
   local p="${n}-${v}";
-  py_dependency -v "${v}" -m "71d0b8c5a382f635572eb52fee47cd08" -o \
+  py_dependency -v "${v}" -m "71d0b8c5a382f635572eb52fee47cd08" \
     "${n}" "pgdb" "${p}" \
     "${pypi}/P/${n}/${p}.tgz";
 
@@ -811,7 +820,7 @@
   local v="0.1.2";
   local n="sqlparse";
   local p="${n}-${v}";
-  py_dependency -o -v "${v}" -s "978874e5ebbd78e6d419e8182ce4fb3c30379642" \
+  py_dependency -v "${v}" -s "978874e5ebbd78e6d419e8182ce4fb3c30379642" \
     "SQLParse" "${n}" "${p}" \
     "http://python-sqlparse.googlecode.com/files/${p}.tar.gz";
 
@@ -821,7 +830,7 @@
     local v="0.6.1";
     local n="pyflakes";
     local p="${n}-${v}";
-    py_dependency -o -v "${v}" -m "00debd2280b962e915dfee552a675915" \
+    py_dependency -v "${v}" -m "00debd2280b962e915dfee552a675915" \
       "Pyflakes" "${n}" "${p}" \
       "${pypi}/p/${n}/${p}.tar.gz";
   fi;
@@ -833,28 +842,28 @@
   # Can't add "-v 2011g" to args because the version check expects numbers.
   local n="pytz";
   local p="${n}-2011n";
-  py_dependency -o -m "75ffdc113a4bcca8096ab953df746391" \
+  py_dependency -m "75ffdc113a4bcca8096ab953df746391" \
     "${n}" "${n}" "${p}" \
     "${pypi}/p/${n}/${p}.tar.gz";
 
   local v="2.5";
   local n="pycrypto";
   local p="${n}-${v}";
-  py_dependency -o -v "${v}" -m "783e45d4a1a309e03ab378b00f97b291" \
+  py_dependency -v "${v}" -m "783e45d4a1a309e03ab378b00f97b291" \
     "PyCrypto" "${n}" "${p}" \
     "http://ftp.dlitz.net/pub/dlitz/crypto/${n}/${p}.tar.gz";
 
   local v="0.1.2";
   local n="pyasn1";
   local p="${n}-${v}";
-  py_dependency -o -v "${v}" -m "a7c67f5880a16a347a4d3ce445862a47" \
+  py_dependency -v "${v}" -m "a7c67f5880a16a347a4d3ce445862a47" \
     "${n}" "${n}" "${p}" \
     "${pypi}/p/${n}/${p}.tar.gz";
 
   local v="1.1.6";
   local n="setproctitle";
   local p="${n}-${v}";
-  py_dependency -o -v "1.0" -m "1e42e43b440214b971f4b33c21eac369" \
+  py_dependency -v "1.0" -m "1e42e43b440214b971f4b33c21eac369" \
     "${n}" "${n}" "${p}" \
     "${pypi}/s/${n}/${p}.tar.gz";
 

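For context on the build.sh hunks above: they prepend ${dstroot}/lib64 alongside ${dstroot}/lib on LD_LIBRARY_PATH, DYLD_LIBRARY_PATH and LDFLAGS, and introduce PKG_CONFIG_PATH, so dependencies that install into a 64-bit library directory are found at build and run time. A minimal Python sketch of the same prepend-but-keep-existing idiom (the dstroot value here is hypothetical):

    import os

    def prepend_paths(name, *dirs):
        # Prepend dirs to a colon-separated environment variable while
        # keeping any existing value, mirroring the ${VAR:-} idiom in build.sh.
        existing = os.environ.get(name, "")
        parts = list(dirs) + ([existing] if existing else [])
        os.environ[name] = ":".join(parts)

    dstroot = "/tmp/calendarserver-deps"  # hypothetical install root
    prepend_paths("LD_LIBRARY_PATH", dstroot + "/lib", dstroot + "/lib64")
    prepend_paths("PKG_CONFIG_PATH", dstroot + "/lib/pkgconfig")
    print(os.environ["LD_LIBRARY_PATH"])
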
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/adbapi2.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/adbapi2.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/adbapi2.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -18,10 +18,10 @@
 """
 Asynchronous multi-process connection pool.
 
-This is similar to L{twisted.enterprise.adbapi}, but can hold a transaction (and
-thereby a thread) open across multiple asynchronous operations, rather than
-forcing the transaction to be completed entirely in a thread and/or entirely in
-a single SQL statement.
+This is similar to L{twisted.enterprise.adbapi}, but can hold a transaction
+(and thereby a thread) open across multiple asynchronous operations, rather
+than forcing the transaction to be completed entirely in a thread and/or
+entirely in a single SQL statement.
 
 Also, this module includes an AMP protocol for multiplexing connections through
 a single choke-point host.  This is not currently in use, however, as AMP needs
@@ -84,6 +84,15 @@
 
 
 
+def _destructively(aList):
+    """
+    Destructively iterate a list, popping elements from the beginning.
+    """
+    while aList:
+        yield aList.pop(0)
+
+
+
 def _deriveParameters(cursor, args):
     """
     Some DB-API extensions need to call special extension methods on
@@ -118,6 +127,7 @@
     return derived
 
 
+
 def _deriveQueryEnded(cursor, derived):
     """
     A query which involved some L{IDerivedParameter}s just ended.  Execute any
@@ -142,6 +152,8 @@
     """
     implements(IAsyncTransaction)
 
+    noisy = False
+
     def __init__(self, pool, threadHolder, connection, cursor):
         self._pool       = pool
         self._completed  = "idle"
@@ -169,33 +181,31 @@
         """
         Execute the given SQL on a thread, using a DB-API 2.0 cursor.
 
-        This method is invoked internally on a non-reactor thread, one dedicated
-        to and associated with the current cursor.  It executes the given SQL,
-        re-connecting first if necessary, re-cycling the old connection if
-        necessary, and then, if there are results from the statement (as
-        determined by the DB-API 2.0 'description' attribute) it will fetch all
-        the rows and return them, leaving them to be relayed to
+        This method is invoked internally on a non-reactor thread, one
+        dedicated to and associated with the current cursor.  It executes the
+        given SQL, re-connecting first if necessary, re-cycling the old
+        connection if necessary, and then, if there are results from the
+        statement (as determined by the DB-API 2.0 'description' attribute) it
+        will fetch all the rows and return them, leaving them to be relayed to
         L{_ConnectedTxn.execSQL} via the L{ThreadHolder}.
 
         The rules for possibly reconnecting automatically are: if this is the
         very first statement being executed in this transaction, and an error
         occurs in C{execute}, close the connection and try again.  We will
-        ignore any errors from C{close()} (or C{rollback()}) and log them during
-        this process.  This is OK because adbapi2 always enforces transaction
-        discipline: connections are never in autocommit mode, so if the first
-        statement in a transaction fails, nothing can have happened to the
-        database; as per the ADBAPI spec, a lost connection is a rolled-back
-        transaction.  In the cases where some databases fail to enforce
-        transaction atomicity (i.e. schema manipulations), re-executing the same
-        statement will result, at worst, in a spurious and harmless error (like
-        "table already exists"), not corruption.
+        ignore any errors from C{close()} (or C{rollback()}) and log them
+        during this process.  This is OK because adbapi2 always enforces
+        transaction discipline: connections are never in autocommit mode, so if
+        the first statement in a transaction fails, nothing can have happened
+        to the database; as per the ADBAPI spec, a lost connection is a
+        rolled-back transaction.  In the cases where some databases fail to
+        enforce transaction atomicity (i.e.  schema manipulations),
+        re-executing the same statement will result, at worst, in a spurious
+        and harmless error (like "table already exists"), not corruption.
 
         @param sql: The SQL string to execute.
-
         @type sql: C{str}
 
         @param args: The bind parameters to pass to adbapi, if any.
-
         @type args: C{list} or C{None}
 
         @param raiseOnZeroRowCount: If specified, an exception to raise when no
@@ -203,7 +213,6 @@
 
         @return: all the rows that resulted from execution of the given C{sql},
             or C{None}, if the statement is one which does not produce results.
-
         @rtype: C{list} of C{tuple}, or C{NoneType}
 
         @raise Exception: this function may raise any exception raised by the
@@ -234,9 +243,9 @@
             # happen in the transaction, then the connection has probably gone
             # bad in the meanwhile, and we should try again.
             if wasFirst:
-                # Report the error before doing anything else, since doing other
-                # things may cause the traceback stack to be eliminated if they
-                # raise exceptions (even internally).
+                # Report the error before doing anything else, since doing
+                # other things may cause the traceback stack to be eliminated
+                # if they raise exceptions (even internally).
                 log.err(
                     Failure(),
                     "Exception from execute() on first statement in "
@@ -292,11 +301,9 @@
             return None
 
 
-    noisy = False
-
     def execSQL(self, *args, **kw):
         result = self._holder.submit(
-            lambda : self._reallyExecSQL(*args, **kw)
+            lambda: self._reallyExecSQL(*args, **kw)
         )
         if self.noisy:
             def reportResult(results):
@@ -305,7 +312,7 @@
                     "SQL: %r %r" % (args, kw),
                     "Results: %r" % (results,),
                     "",
-                    ]))
+                ]))
                 return results
             result.addBoth(reportResult)
         return result
@@ -328,8 +335,8 @@
             self._completed = "ended"
             def reallySomething():
                 """
-                Do the database work and set appropriate flags.  Executed in the
-                cursor thread.
+                Do the database work and set appropriate flags.  Executed in
+                the cursor thread.
                 """
                 if self._cursor is None or self._first:
                     return
@@ -384,8 +391,8 @@
 class _NoTxn(object):
     """
     An L{IAsyncTransaction} that indicates a local failure before we could even
-    communicate any statements (or possibly even any connection attempts) to the
-    server.
+    communicate any statements (or possibly even any connection attempts) to
+    the server.
     """
     implements(IAsyncTransaction)
 
@@ -401,7 +408,6 @@
         """
         return fail(ConnectionError(self.reason))
 
-
     execSQL = _everything
     commit  = _everything
     abort   = _everything
@@ -411,9 +417,9 @@
 class _WaitingTxn(object):
     """
     A L{_WaitingTxn} is an implementation of L{IAsyncTransaction} which cannot
-    yet actually execute anything, so it waits and spools SQL requests for later
-    execution.  When a L{_ConnectedTxn} becomes available later, it can be
-    unspooled onto that.
+    yet actually execute anything, so it waits and spools SQL requests for
+    later execution.  When a L{_ConnectedTxn} becomes available later, it can
+    be unspooled onto that.
     """
 
     implements(IAsyncTransaction)
@@ -442,8 +448,7 @@
         a Deferred to not interfere with the originally submitted order of
         commands.
         """
-        while self._spool:
-            yield self._spool.pop(0)
+        return _destructively(self._spool)
 
 
     def _unspool(self, other):
@@ -492,8 +497,9 @@
         """
         Callback for C{commit} and C{abort} Deferreds.
         """
-        for operation in self._hooks:
+        for operation in _destructively(self._hooks):
             yield operation()
+        self.clear()
         returnValue(ignored)
 
 
@@ -501,10 +507,19 @@
         """
         Implement L{IAsyncTransaction.postCommit}.
         """
-        self._hooks.append(operation)
+        if self._hooks is not None:
+            self._hooks.append(operation)
 
 
+    def clear(self):
+        """
+        Remove all hooks from this operation.  Once this is called, no
+        more hooks can be added ever again.
+        """
+        self._hooks = None
 
+
+
 class _CommitAndAbortHooks(object):
     """
     Shared implementation of post-commit and post-abort hooks.
@@ -524,6 +539,7 @@
         """
         pre = self._preCommit.runHooks()
         def ok(ignored):
+            self._abort.clear()
             return doCommit().addCallback(self._commit.runHooks)
         def failed(why):
             return self.abort().addCallback(lambda ignored: why)
@@ -639,9 +655,9 @@
             d = self._currentBlock._startExecuting()
             d.addCallback(self._finishExecuting)
         elif self._blockedQueue is not None:
-            # If there aren't any pending blocks any more, and there are spooled
-            # statements that aren't part of a block, unspool all the statements
-            # that have been held up until this point.
+            # If there aren't any pending blocks any more, and there are
+            # spooled statements that aren't part of a block, unspool all the
+            # statements that have been held up until this point.
             bq = self._blockedQueue
             self._blockedQueue = None
             bq._unspool(self)
@@ -649,8 +665,8 @@
 
     def _finishExecuting(self, result):
         """
-        The active block just finished executing.  Clear it and see if there are
-        more blocks to execute, or if all the blocks are done and we should
+        The active block just finished executing.  Clear it and see if there
+        are more blocks to execute, or if all the blocks are done and we should
         execute any queued free statements.
         """
         self._currentBlock = None
@@ -659,8 +675,9 @@
 
     def commit(self):
         if self._blockedQueue is not None:
-            # We're in the process of executing a block of commands.  Wait until
-            # they're done.  (Commit will be repeated in _checkNextBlock.)
+            # We're in the process of executing a block of commands.  Wait
+            # until they're done.  (Commit will be repeated in
+            # _checkNextBlock.)
             return self._blockedQueue.commit()
         def reallyCommit():
             self._markComplete()
@@ -670,6 +687,8 @@
 
     def abort(self):
         self._markComplete()
+        self._commit.clear()
+        self._preCommit.clear()
         result = super(_SingleTxn, self).abort()
         if self in self._pool._waiting:
             self._stopWaiting()
@@ -785,9 +804,9 @@
 
         @param raiseOnZeroRowCount: see L{IAsyncTransaction.execSQL}
 
-        @param track: an internal parameter; was this called by application code
-            or as part of unspooling some previously-queued requests?  True if
-            application code, False if unspooling.
+        @param track: an internal parameter; was this called by application
+            code or as part of unspooling some previously-queued requests?
+            True if application code, False if unspooling.
         """
         if track and self._ended:
             raise AlreadyFinishedError()
@@ -970,8 +989,8 @@
         super(ConnectionPool, self).stopService()
         self._stopping = True
 
-        # Phase 1: Cancel any transactions that are waiting so they won't try to
-        # eagerly acquire new connections as they flow into the free-list.
+        # Phase 1: Cancel any transactions that are waiting so they won't try
+        # to eagerly acquire new connections as they flow into the free-list.
         while self._waiting:
             waiting = self._waiting[0]
             waiting._stopWaiting()
@@ -991,10 +1010,10 @@
         # ThreadHolders.
         while self._free:
             # Releasing a L{_ConnectedTxn} doesn't automatically recycle it /
-            # remove it the way aborting a _SingleTxn does, so we need to .pop()
-            # here.  L{_ConnectedTxn.stop} really shouldn't be able to fail, as
-            # it's just stopping the thread, and the holder's stop() is
-            # independently submitted from .abort() / .close().
+            # remove it the way aborting a _SingleTxn does, so we need to
+            # .pop() here.  L{_ConnectedTxn.stop} really shouldn't be able to
+            # fail, as it's just stopping the thread, and the holder's stop()
+            # is independently submitted from .abort() / .close().
             yield self._free.pop()._releaseConnection()
 
         tp = self.reactor.getThreadPool()
@@ -1011,8 +1030,8 @@
     def connection(self, label="<unlabeled>"):
         """
         Find and immediately return an L{IAsyncTransaction} object.  Execution
-        of statements, commit and abort on that transaction may be delayed until
-        a real underlying database connection is available.
+        of statements, commit and abort on that transaction may be delayed
+        until a real underlying database connection is available.
 
         @return: an L{IAsyncTransaction}
         """
@@ -1158,6 +1177,7 @@
     def toString(self, inObject):
         return dumps(inObject)
 
+
     def fromString(self, inString):
         return loads(inString)
 
@@ -1193,8 +1213,7 @@
                 if f.type in command.errors:
                     returnValue(f)
                 else:
-                    log.err(Failure(),
-                            "shared database connection pool encountered error")
+                    log.err(Failure(), "shared database connection pool error")
                     raise FailsafeException()
             else:
                 returnValue(val)
@@ -1286,6 +1305,7 @@
     """
 
 
+
 class ConnectionPoolConnection(AMP):
     """
     A L{ConnectionPoolConnection} is a single connection to a
@@ -1402,7 +1422,8 @@
     A client which can execute SQL.
     """
 
-    def __init__(self, dialect=POSTGRES_DIALECT, paramstyle=DEFAULT_PARAM_STYLE):
+    def __init__(self, dialect=POSTGRES_DIALECT,
+                 paramstyle=DEFAULT_PARAM_STYLE):
         # See DEFAULT_PARAM_STYLE FIXME above.
         super(ConnectionPoolClient, self).__init__()
         self._nextID    = count().next
@@ -1428,8 +1449,8 @@
         """
         Create a new networked provider of L{IAsyncTransaction}.
 
-        (This will ultimately call L{ConnectionPool.connection} on the other end
-        of the wire.)
+        (This will ultimately call L{ConnectionPool.connection} on the other
+        end of the wire.)
 
         @rtype: L{IAsyncTransaction}
         """
@@ -1478,12 +1499,12 @@
         @param derived: either C{None} or a C{list} of L{IDerivedParameter}
             providers initially passed into the C{execSQL} that started this
             query.  The values of these objects will mutate the original input
-            parameters to resemble them.  Although L{IDerivedParameter.preQuery}
-            and L{IDerivedParameter.postQuery} are invoked on the other end of
-            the wire, the local objects will be made to appear as though they
-            were called here.
+            parameters to resemble them.  Although
+            L{IDerivedParameter.preQuery} and L{IDerivedParameter.postQuery}
+            are invoked on the other end of the wire, the local objects will be
+            made to appear as though they were called here.
 
-        @param noneResult: should the result of the query be C{None} (i.e. did
+        @param noneResult: should the result of the query be C{None} (i.e.  did
             it not have a C{description} on the cursor).
         """
         if noneResult and not self.results:
@@ -1492,8 +1513,8 @@
             results = self.results
         if derived is not None:
             # 1) Bleecchh.
-            # 2) FIXME: add some direct tests in test_adbapi2, the unit test for
-            # this crosses some abstraction boundaries so it's a little
+            # 2) FIXME: add some direct tests in test_adbapi2, the unit test
+            # for this crosses some abstraction boundaries so it's a little
             # integration-y and in the tests for twext.enterprise.dal
             for remote, local in zip(derived, self._deriveDerived()):
                 local.__dict__ = remote.__dict__
@@ -1519,8 +1540,8 @@
 class _NetTransaction(_CommitAndAbortHooks):
     """
     A L{_NetTransaction} is an L{AMP}-protocol-based provider of the
-    L{IAsyncTransaction} interface.  It sends SQL statements, query results, and
-    commit/abort commands via an AMP socket to a pooling process.
+    L{IAsyncTransaction} interface.  It sends SQL statements, query results,
+    and commit/abort commands via an AMP socket to a pooling process.
     """
 
     implements(IAsyncTransaction)
@@ -1562,7 +1583,8 @@
             args = []
         client = self._client
         queryID = str(client._nextID())
-        query = client._queries[queryID] = _Query(sql, raiseOnZeroRowCount, args)
+        query = client._queries[queryID] = _Query(sql, raiseOnZeroRowCount,
+                                                  args)
         result = (
             client.callRemote(
                 ExecSQL, queryID=queryID, sql=sql, args=args,
@@ -1594,6 +1616,8 @@
 
 
     def abort(self):
+        self._commit.clear()
+        self._preCommit.clear()
         return self._complete(Abort).addCallback(self._abort.runHooks)
 
 
@@ -1617,6 +1641,7 @@
             self.abort().addErrback(shush)
 
 
+
 class _NetCommandBlock(object):
     """
     Net command block.
@@ -1650,10 +1675,10 @@
         """
         Execute some SQL on this command block.
         """
-        if  (self._ended or
-             self._transaction._completed and
-             not self._transaction._committing or
-             self._transaction._committed):
+        if (
+            self._ended or self._transaction._completed and
+            not self._transaction._committing or self._transaction._committed
+        ):
             raise AlreadyFinishedError()
         return self._transaction.execSQL(sql, args, raiseOnZeroRowCount,
                                          self._blockID)
@@ -1670,4 +1695,3 @@
             EndBlock, blockID=self._blockID,
             transactionID=self._transaction._transactionID
         )
-

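Aside from the docstring re-wrapping, the functional thread in the adbapi2.py changes above is leak prevention: spooled statements and commit/abort hooks are now consumed destructively via _destructively(), hook lists are cleared once a transaction commits or aborts, and adding a hook after clear() silently becomes a no-op. A simplified, synchronous sketch of that pattern follows; the names mirror the patch but are illustrative stand-ins rather than the real adbapi2 classes (which run their hooks through Deferreds):

    def destructively(aList):
        # Yield items while popping them, so the list keeps no references
        # to entries that have already been processed.
        while aList:
            yield aList.pop(0)

    class HookableOperation(object):
        # Simplified stand-in for the hook-running helper touched above.
        def __init__(self):
            self._hooks = []

        def addHook(self, operation):
            # After clear(), adding hooks is a silent no-op.
            if self._hooks is not None:
                self._hooks.append(operation)

        def runHooks(self):
            for operation in destructively(self._hooks):
                operation()
            self.clear()

        def clear(self):
            # Dropping the list breaks cycles of the form
            # transaction -> hook list -> closure -> transaction.
            self._hooks = None

    ran = []
    op = HookableOperation()
    op.addHook(lambda: ran.append("post-commit"))
    op.runHooks()
    op.addHook(lambda: ran.append("ignored"))  # no-op: hooks were cleared
    assert ran == ["post-commit"]
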
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/ienterprise.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/ienterprise.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/ienterprise.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -67,7 +67,6 @@
         A copy of the 'paramstyle' attribute from a DB-API 2.0 module.
         """)
 
-
     dialect = Attribute(
         """
         A copy of the 'dialect' attribute from the connection pool.  One of the
@@ -100,8 +99,8 @@
     """
     Asynchronous execution of SQL.
 
-    Note that there is no {begin()} method; if an L{IAsyncTransaction} exists at
-    all, it is assumed to have been started.
+    Note that there is no C{begin()} method; if an L{IAsyncTransaction} exists
+    at all, it is assumed to have been started.
     """
 
     def commit():
@@ -167,17 +166,18 @@
 
         This is useful when using database-specific features such as
         sub-transactions where order of execution is important, but where
-        application code may need to perform I/O to determine what SQL, exactly,
-        it wants to execute.  Consider this fairly contrived example for an
-        imaginary database::
+        application code may need to perform I/O to determine what SQL,
+        exactly, it wants to execute.  Consider this fairly contrived example
+        for an imaginary database::
 
             def storeWebPage(url, block):
                 block.execSQL("BEGIN SUB TRANSACTION")
                 got = getPage(url)
                 def gotPage(data):
-                    block.execSQL("INSERT INTO PAGES (TEXT) VALUES (?)", [data])
+                    block.execSQL("INSERT INTO PAGES (TEXT) VALUES (?)",
+                                  [data])
                     block.execSQL("INSERT INTO INDEX (TOKENS) VALUES (?)",
-                                 [tokenize(data)])
+                                  [tokenize(data)])
                     lastStmt = block.execSQL("END SUB TRANSACTION")
                     block.end()
                     return lastStmt
@@ -187,12 +187,12 @@
                             lambda x: txn.commit(), lambda f: txn.abort()
                           )
 
-        This fires off all the C{getPage} requests in parallel, and prepares all
-        the necessary SQL immediately as the results arrive, but executes those
-        statements in order.  In the above example, this makes sure to store the
-        page and its tokens together, another use for this might be to store a
-        computed aggregate (such as a sum) at a particular point in a
-        transaction, without sacrificing parallelism.
+        This fires off all the C{getPage} requests in parallel, and prepares
+        all the necessary SQL immediately as the results arrive, but executes
+        those statements in order.  In the above example, this makes sure to
+        store the page and its tokens together, another use for this might be
+        to store a computed aggregate (such as a sum) at a particular point in
+        a transaction, without sacrificing parallelism.
 
         @rtype: L{ICommandBlock}
         """
@@ -208,21 +208,21 @@
 
     def end():
         """
-        End this command block, allowing other commands queued on the underlying
-        transaction to end.
+        End this command block, allowing other commands queued on the
+        underlying transaction to end.
 
         @note: This is I{not} the same as either L{IAsyncTransaction.commit} or
             L{IAsyncTransaction.abort}, since it does not denote success or
             failure; merely that the command block has completed and other
             statements may now be executed.  Since sub-transactions are a
             database-specific feature, they must be implemented at a
-            higher-level than this facility provides (although this facility may
-            be useful in their implementation).  Also note that, unlike either
-            of those methods, this does I{not} return a Deferred: if you want to
-            know when the block has completed, simply add a callback to the last
-            L{ICommandBlock.execSQL} call executed on this L{ICommandBlock}.
-            (This may be changed in a future version for the sake of
-            convenience, however.)
+            higher-level than this facility provides (although this facility
+            may be useful in their implementation).  Also note that, unlike
+            either of those methods, this does I{not} return a Deferred: if you
+            want to know when the block has completed, simply add a callback to
+            the last L{ICommandBlock.execSQL} call executed on this
+            L{ICommandBlock}.  (This may be changed in a future version for the
+            sake of convenience, however.)
         """
 
 
@@ -306,7 +306,8 @@
             L{WorkProposal}
         """
 
+
     def transferProposalCallbacks(self, newQueuer):
         """
         Transfer the registered callbacks to the new queuer.
-        """
\ No newline at end of file
+        """

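The ICommandBlock docstrings re-wrapped above spell out the end() contract: ending a block neither commits nor aborts the transaction, it only closes the block so other queued statements may proceed, and (as the _NetCommandBlock hunk earlier enforces) further execSQL() calls on a finished block raise AlreadyFinishedError. A toy, synchronous stand-in for that contract, for illustration only and not the twext API itself:

    class AlreadyFinishedError(Exception):
        # Raised when a closed command block is handed more SQL.
        pass

    class ToyCommandBlock(object):
        # Synchronous stand-in for ICommandBlock.
        def __init__(self, execute):
            self._execute = execute  # callable that actually runs the SQL
            self._ended = False

        def execSQL(self, sql, args=None):
            if self._ended:
                raise AlreadyFinishedError("block has ended")
            return self._execute(sql, args or [])

        def end(self):
            # Not a commit or an abort: it only marks the block complete so
            # statements queued behind it on the transaction may proceed.
            self._ended = True

    ran = []
    block = ToyCommandBlock(lambda sql, args: ran.append(sql))
    block.execSQL("BEGIN SUB TRANSACTION")
    block.execSQL("END SUB TRANSACTION")
    block.end()
    try:
        block.execSQL("SELECT 1")
    except AlreadyFinishedError:
        pass
    assert ran == ["BEGIN SUB TRANSACTION", "END SUB TRANSACTION"]
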
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/test/test_adbapi2.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/test/test_adbapi2.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/enterprise/test/test_adbapi2.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -18,13 +18,15 @@
 Tests for L{twext.enterprise.adbapi2}.
 """
 
+import gc
+
 from zope.interface.verify import verifyObject
 
 from twisted.python.failure import Failure
 
 from twisted.trial.unittest import TestCase
 
-from twisted.internet.defer import Deferred, fail
+from twisted.internet.defer import Deferred, fail, succeed, inlineCallbacks
 
 from twisted.test.proto_helpers import StringTransport
 
@@ -43,7 +45,37 @@
 from twext.enterprise.fixtures import RollbackFail
 from twext.enterprise.fixtures import CommitFail
 from twext.enterprise.adbapi2 import Commit
+from twext.enterprise.adbapi2 import _HookableOperation
 
+
+class TrashCollector(object):
+    """
+    Test helper for monitoring gc.garbage.
+    """
+    def __init__(self, testCase):
+        self.testCase = testCase
+        testCase.addCleanup(self.checkTrash)
+        self.start()
+
+
+    def start(self):
+        gc.collect()
+        self.garbageStart = len(gc.garbage)
+
+
+    def checkTrash(self):
+        """
+        Ensure that the test has added no additional garbage.
+        """
+        gc.collect()
+        newGarbage = gc.garbage[self.garbageStart:]
+        if newGarbage:
+            # Don't clean up twice.
+            self.start()
+            self.testCase.fail("New garbage: " + repr(newGarbage))
+
+
+
 class AssertResultHelper(object):
     """
     Mixin for asserting about synchronous Deferred results.
@@ -300,8 +332,8 @@
     def test_stopServiceWithSpooled(self):
         """
         When L{ConnectionPool.stopService} is called when spooled transactions
-        are outstanding, any pending L{Deferreds} returned by those transactions
-        will be failed with L{ConnectionError}.
+        are outstanding, any pending L{Deferreds} returned by those
+        transactions will be failed with L{ConnectionError}.
         """
         # Use up the free slots so we have to spool.
         hold = []
@@ -450,7 +482,8 @@
         stopResult = self.resultOf(self.pool.stopService())
         # Sanity check that we haven't actually stopped it yet
         self.assertEquals(abortResult, [])
-        # We haven't fired it yet, so the service had better not have stopped...
+        # We haven't fired it yet, so the service had better not have
+        # stopped...
         self.assertEquals(stopResult, [])
         d.callback(None)
         self.flushHolders()
@@ -465,7 +498,6 @@
         """
         t = self.createTransaction()
         self.resultOf(t.execSQL("echo", []))
-        import gc
         conns = self.factory.connections
         self.assertEquals(len(conns), 1)
         self.assertEquals(conns[0]._rollbackCount, 0)
@@ -477,6 +509,60 @@
         self.assertEquals(conns[0]._commitCount, 0)
 
 
+    def circularReferenceTest(self, finish, hook):
+        """
+        Collecting a completed (committed or aborted) L{IAsyncTransaction}
+        should not leak any circular references.
+        """
+        tc = TrashCollector(self)
+        commitExecuted = []
+        def carefullyManagedScope():
+            t = self.createTransaction()
+            def holdAReference():
+                """
+                This is a hook that holds a reference to 't'.
+                """
+                commitExecuted.append(True)
+                return t.execSQL("teardown", [])
+            hook(t, holdAReference)
+            finish(t)
+        self.failIf(commitExecuted, "Commit hook executed.")
+        carefullyManagedScope()
+        tc.checkTrash()
+
+
+    def test_noGarbageOnCommit(self):
+        """
+        Committing a transaction does not cause gc garbage.
+        """
+        self.circularReferenceTest(lambda txn: txn.commit(),
+                                   lambda txn, hook: txn.preCommit(hook))
+
+
+    def test_noGarbageOnCommitWithAbortHook(self):
+        """
+        Committing a transaction does not cause gc garbage.
+        """
+        self.circularReferenceTest(lambda txn: txn.commit(),
+                                   lambda txn, hook: txn.postAbort(hook))
+
+
+    def test_noGarbageOnAbort(self):
+        """
+        Aborting a transaction does not cause gc garbage.
+        """
+        self.circularReferenceTest(lambda txn: txn.abort(),
+                                   lambda txn, hook: txn.preCommit(hook))
+
+
+    def test_noGarbageOnAbortWithPostCommitHook(self):
+        """
+        Aborting a transaction does not cause gc garbage.
+        """
+        self.circularReferenceTest(lambda txn: txn.abort(),
+                                   lambda txn, hook: txn.postCommit(hook))
+
+
     def test_tooManyConnectionsWhileOthersFinish(self):
         """
         L{ConnectionPool.connection} will not spawn more than the maximum
@@ -553,10 +639,11 @@
 
     def test_reConnectWhenFirstExecFails(self):
         """
-        Generally speaking, DB-API 2.0 adapters do not provide information about
-        the cause of a failed 'execute' method; they definitely don't provide it
-        in a way which can be identified as related to the syntax of the query,
-        the state of the database itself, the state of the connection, etc.
+        Generally speaking, DB-API 2.0 adapters do not provide information
+        about the cause of a failed 'execute' method; they definitely don't
+        provide it in a way which can be identified as related to the syntax of
+        the query, the state of the database itself, the state of the
+        connection, etc.
 
         Therefore the best general heuristic for whether the connection to the
         database has been lost and needs to be re-established is to catch
@@ -564,8 +651,8 @@
         transaction.
         """
         # Allow 'connect' to succeed.  This should behave basically the same
-        # whether connect() happened to succeed in some previous transaction and
-        # it's recycling the underlying transaction, or connect() just
+        # whether connect() happened to succeed in some previous transaction
+        # and it's recycling the underlying transaction, or connect() just
         # succeeded.  Either way you just have a _SingleTxn wrapping a
         # _ConnectedTxn.
         txn = self.createTransaction()
@@ -636,8 +723,8 @@
         """
         class BindingSpecificException(Exception):
             """
-            Exception that's a placeholder for something that a database binding
-            might raise.
+            Exception that's a placeholder for something that a database
+            binding might raise.
             """
         def alsoFailClose(factory):
             factory.childCloseWillFail(BindingSpecificException())
@@ -738,8 +825,8 @@
         therefore pointless, and can be ignored.  Furthermore, actually
         executing the commit and propagating a possible connection-oriented
         error causes clients to see errors, when, if those clients had actually
-        executed any statements, the connection would have been recycled and the
-        statement transparently re-executed by the logic tested by
+        executed any statements, the connection would have been recycled and
+        the statement transparently re-executed by the logic tested by
         L{test_reConnectWhenFirstExecFails}.
         """
         txn = self.createTransaction()
@@ -758,12 +845,12 @@
 
     def test_reConnectWhenSecondExecFailsThenFirstExecFails(self):
         """
-        Other connection-oriented errors might raise exceptions if they occur in
-        the middle of a transaction, but that should cause the error to be
-        caught, the transaction to be aborted, and the (closed) connection to be
-        recycled, where the next transaction that attempts to do anything with
-        it will encounter the error immediately and discover it needs to be
-        recycled.
+        Other connection-oriented errors might raise exceptions if they occur
+        in the middle of a transaction, but that should cause the error to be
+        caught, the transaction to be aborted, and the (closed) connection to
+        be recycled, where the next transaction that attempts to do anything
+        with it will encounter the error immediately and discover it needs to
+        be recycled.
 
         It would be better if this behavior were invisible, but that could only
         be accomplished with more precise database exceptions.  We may come up
@@ -780,9 +867,9 @@
         self.assertEquals(self.factory.connections[0].executions, 2)
         # Reconnection should work exactly as before.
         self.assertEquals(self.factory.connections[0].closed, False)
-        # Application code has to roll back its transaction at this point, since
-        # it failed (and we don't necessarily know why it failed: not enough
-        # information).
+        # Application code has to roll back its transaction at this point,
+        # since it failed (and we don't necessarily know why it failed: not
+        # enough information).
         self.resultOf(txn.abort())
         self.factory.connections[0].executions = 0 # re-set for next test
         self.assertEquals(len(self.factory.connections), 1)
@@ -888,7 +975,7 @@
         self.assertEquals(len(e), 1)
 
 
-    def test_twoCommandBlocks(self, flush=lambda : None):
+    def test_twoCommandBlocks(self, flush=lambda: None):
         """
         When execution of one command block is complete, it will proceed to the
         next queued block, then to regular SQL executed on the transaction.
@@ -932,9 +1019,9 @@
     def test_commandBlockDelaysCommit(self):
         """
         Some command blocks need to run asynchronously, without the overall
-        transaction-managing code knowing how far they've progressed.  Therefore
-        when you call {IAsyncTransaction.commit}(), it should not actually take
-        effect if there are any pending command blocks.
+        transaction-managing code knowing how far they've progressed.
+        Therefore when you call {IAsyncTransaction.commit}(), it should not
+        actually take effect if there are any pending command blocks.
         """
         txn = self.createTransaction()
         block = txn.commandBlock()
@@ -1078,8 +1165,8 @@
 
     def pump(self):
         """
-        Deliver all input from the client to the server, then from the server to
-        the client.
+        Deliver all input from the client to the server, then from the server
+        to the client.
         """
         a = self.moveData(self.c2s)
         b = self.moveData(self.s2c)
@@ -1187,3 +1274,31 @@
         self.assertEquals(len(self.factory.connections), 1)
 
 
+class HookableOperationTests(TestCase):
+    """
+    Tests for L{_HookableOperation}.
+    """
+
+    @inlineCallbacks
+    def test_clearPreventsSubsequentAddHook(self):
+        """
+        After clear() or runHooks() has been called, subsequent calls to
+        addHook() are no-ops.
+        """
+        def hook():
+            return succeed(None)
+
+        hookOp = _HookableOperation()
+        hookOp.addHook(hook)
+        self.assertEquals(len(hookOp._hooks), 1)
+        hookOp.clear()
+        self.assertEquals(hookOp._hooks, None)
+
+        hookOp = _HookableOperation()
+        hookOp.addHook(hook)
+        yield hookOp.runHooks()
+        self.assertEquals(hookOp._hooks, None)
+        hookOp.addHook(hook)
+        self.assertEquals(hookOp._hooks, None)
+
+

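The new HookableOperationTests above pin down the hook lifecycle: addHook() only queues a hook while the operation is still open, and both clear() and runHooks() close it for good. A minimal sketch of that lifecycle, assuming _HookableOperation is importable from twext.enterprise.adbapi2 (the test module's import is not shown here, so the path is an assumption):

    # Minimal sketch of the semantics exercised by the tests above.
    from twisted.internet.defer import succeed
    from twext.enterprise.adbapi2 import _HookableOperation  # assumed path

    def hook():
        # Hooks return Deferreds; succeed(None) is the trivial case.
        return succeed(None)

    op = _HookableOperation()
    op.addHook(hook)      # queued: len(op._hooks) == 1
    op.clear()            # closes the operation: op._hooks is None
    op.addHook(hook)      # ignored from here on

    op = _HookableOperation()
    op.addHook(hook)
    d = op.runHooks()     # Deferred fires once the queued hooks have run
    op.addHook(hook)      # also ignored: op._hooks stays None
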
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/internet/sendfdport.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/internet/sendfdport.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/internet/sendfdport.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -95,6 +95,7 @@
     used to transmit sockets to a subprocess.
 
     @ivar skt: the UNIX socket used as the sendmsg() transport.
+    @type skt: L{socket.socket}
 
     @ivar outgoingSocketQueue: an outgoing queue of sockets to send to the
         subprocess, along with their descriptions (strings describing their
@@ -107,7 +108,11 @@
         from the subprocess: this is an application-specific indication of how
         ready this subprocess is to receive more connections.  A typical usage
         would be to count the open connections: this is what is passed to
-    @type status: C{str}
+    @type status: See L{IStatusWatcher} for an explanation of which methods
+        determine this type.
+
+    @ivar dispatcher: The socket dispatcher that owns this L{_SubprocessSocket}
+    @type dispatcher: L{InheritedSocketDispatcher}
     """
 
     def __init__(self, dispatcher, skt, status):
@@ -117,6 +122,7 @@
         self.skt = skt          # XXX needs to be set non-blocking by somebody
         self.fileno = skt.fileno
         self.outgoingSocketQueue = []
+        self.pendingCloseSocketQueue = []
 
 
     def sendSocketToPeer(self, skt, description):
@@ -127,7 +133,7 @@
         self.startWriting()
 
 
-    def doRead(self):
+    def doRead(self, recvmsg=recvmsg):
         """
         Receive a status / health message and record it.
         """
@@ -137,10 +143,12 @@
             if se.errno not in (EAGAIN, ENOBUFS):
                 raise
         else:
-            self.dispatcher.statusMessage(self, data)
+            closeCount = self.dispatcher.statusMessage(self, data)
+            for ignored in xrange(closeCount):
+                self.pendingCloseSocketQueue.pop(0).close()
 
 
-    def doWrite(self):
+    def doWrite(self, sendfd=sendfd):
         """
         Transmit as many queued pending file descriptors as we can.
         """
@@ -154,8 +162,8 @@
                     return
                 raise
 
-            # Always close the socket on this end
-            skt.close()
+            # Ready to close this socket; wait until it is acknowledged.
+            self.pendingCloseSocketQueue.append(skt)
 
         if not self.outgoingSocketQueue:
             self.stopWriting()
@@ -197,6 +205,7 @@
         @return: the new status.
         """
 
+
     def newConnectionStatus(previousStatus): #@NoSelf
         """
         A new connection was sent to a given socket.  Compute its status based
@@ -208,6 +217,7 @@
         @return: the socket's status after incrementing its outstanding work.
         """
 
+
     def statusFromMessage(previousStatus, message): #@NoSelf
         """
         A status message was received by a worker.  Convert the previous status
@@ -222,7 +232,18 @@
         """
 
 
+    def closeCountFromStatus(previousStatus): #@NoSelf
+        """
+        Based on a status previously returned from a method on this
+        L{IStatusWatcher}, determine how many sockets may be closed.
 
+        @return: a 2-tuple of C{number of sockets that may safely be closed},
+            C{new status}.
+        @rtype: 2-tuple of (C{int}, C{<opaque>})
+        """
+
+
+
 class InheritedSocketDispatcher(object):
     """
     Used by one or more L{InheritingProtocolFactory}s, this keeps track of a
@@ -262,10 +283,11 @@
         The status of a connection has changed; update all registered status
         change listeners.
         """
-        subsocket.status = self.statusWatcher.statusFromMessage(
-            subsocket.status, message
-        )
-        self.statusWatcher.statusesChanged(self.statuses)
+        watcher = self.statusWatcher
+        status = watcher.statusFromMessage(subsocket.status, message)
+        closeCount, subsocket.status = watcher.closeCountFromStatus(status)
+        watcher.statusesChanged(self.statuses)
+        return closeCount
 
 
     def sendFileDescriptor(self, skt, description):
@@ -293,7 +315,7 @@
         # XXX Maybe want to send along 'description' or 'skt' or some
         # properties thereof? -glyph
         selectedSocket.status = self.statusWatcher.newConnectionStatus(
-           selectedSocket.status
+            selectedSocket.status
         )
         self.statusWatcher.statusesChanged(self.statuses)
 
@@ -307,7 +329,7 @@
             subSocket.startReading()
 
 
-    def addSocket(self):
+    def addSocket(self, socketpair=lambda: socketpair(AF_UNIX, SOCK_DGRAM)):
         """
         Add a C{sendmsg()}-oriented AF_UNIX socket to the pool of sockets being
         used for transmitting file descriptors to child processes.
@@ -316,7 +338,7 @@
             C{fileno()} as part of the C{childFDs} argument to
             C{spawnProcess()}, then close it.
         """
-        i, o = socketpair(AF_UNIX, SOCK_DGRAM)
+        i, o = socketpair()
         i.setblocking(False)
         o.setblocking(False)
         a = _SubprocessSocket(self, o, self.statusWatcher.initialStatus())

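The sendfdport change above stops doWrite() from closing a transmitted socket immediately; the socket is parked on pendingCloseSocketQueue, and doRead() closes as many parked sockets as statusMessage() reports via IStatusWatcher.closeCountFromStatus(). A toy watcher wired for that protocol might look like this (illustration only; the status here is just a count of unacknowledged sockets, and only the method names the dispatcher calls in this diff are assumed):

    # Toy watcher: "status" is the number of sockets sent but not yet acked.
    class CountingWatcher(object):
        def initialStatus(self):
            return 0

        def newConnectionStatus(self, previousStatus):
            # One more socket handed to the subprocess.
            return previousStatus + 1

        def statusFromMessage(self, previousStatus, message):
            # The subprocess acknowledged one socket.
            return previousStatus - 1

        def closeCountFromStatus(self, status):
            # Each acknowledgement lets the parent close one parked socket.
            return (1, status)

        def statusesChanged(self, statuses):
            # Called by InheritedSocketDispatcher whenever statuses change.
            pass
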
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/internet/test/test_sendfdport.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/internet/test/test_sendfdport.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/internet/test/test_sendfdport.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -23,14 +23,25 @@
 import os
 import fcntl
 
+from zope.interface.verify import verifyClass
+from zope.interface import implementer
+
 from twext.internet.sendfdport import InheritedSocketDispatcher
 
 from twext.web2.metafd import ConnectionLimiter
 from twisted.internet.interfaces import IReactorFDSet
 from twisted.trial.unittest import TestCase
-from zope.interface import implementer
 
-@implementer(IReactorFDSet)
+def verifiedImplementer(interface):
+    def _(cls):
+        result = implementer(interface)(cls)
+        verifyClass(interface, result)
+        return result
+    return _
+
+
+
+@verifiedImplementer(IReactorFDSet)
 class ReaderAdder(object):
 
     def __init__(self):
@@ -50,7 +61,23 @@
         self.writers.append(writer)
 
 
+    def removeAll(self):
+        self.__init__()
 
+
+    def getWriters(self):
+        return self.writers[:]
+
+
+    def removeReader(self, reader):
+        self.readers.remove(reader)
+
+
+    def removeWriter(self, writer):
+        self.writers.remove(writer)
+
+
+
 def isNonBlocking(skt):
     """
     Determine if the given socket is blocking or not.
@@ -66,22 +93,11 @@
 
 
 
-from zope.interface.verify import verifyClass
-from zope.interface import implementer
-
-def verifiedImplementer(interface):
-    def _(cls):
-        result = implementer(interface)(cls)
-        verifyClass(interface, result)
-        return result
-    return _
-
-
-
 @verifiedImplementer(IStatusWatcher)
 class Watcher(object):
     def __init__(self, q):
         self.q = q
+        self._closeCounter = 1
 
 
     def newConnectionStatus(self, previous):
@@ -100,7 +116,13 @@
         return 0
 
 
+    def closeCountFromStatus(self, status):
+        result = (self._closeCounter, status)
+        self._closeCounter += 1
+        return result
 
+
+
 class InheritedSocketDispatcherTests(TestCase):
     """
     Inherited socket dispatcher tests.
@@ -110,6 +132,51 @@
         self.dispatcher.reactor = ReaderAdder()
 
 
+    def test_closeSomeSockets(self):
+        """
+        L{InheritedSocketDispatcher} determines how many sockets to close from
+        L{IStatusWatcher.closeCountFromStatus}.
+        """
+        self.dispatcher.statusWatcher = Watcher([])
+        class SocketForClosing(object):
+            blocking = True
+            closed = False
+            def setblocking(self, b):
+                self.blocking = b
+            def fileno(self):
+                return object()
+            def close(self):
+                self.closed = True
+
+        one = SocketForClosing()
+        two = SocketForClosing()
+        three = SocketForClosing()
+
+        self.dispatcher.addSocket(
+            lambda: (SocketForClosing(), SocketForClosing())
+        )
+
+        self.dispatcher.sendFileDescriptor(one, "one")
+        self.dispatcher.sendFileDescriptor(two, "two")
+        self.dispatcher.sendFileDescriptor(three, "three")
+        def sendfd(unixSocket, tcpSocket, description):
+            pass
+        # Put something into the socket-close queue.
+        self.dispatcher._subprocessSockets[0].doWrite(sendfd)
+        # Nothing closed yet.
+        self.assertEquals(one.closed, False)
+        self.assertEquals(two.closed, False)
+        self.assertEquals(three.closed, False)
+
+        def recvmsg(fileno):
+            return 'data', 0, 0
+        self.dispatcher._subprocessSockets[0].doRead(recvmsg)
+        # One socket closed.
+        self.assertEquals(one.closed, True)
+        self.assertEquals(two.closed, False)
+        self.assertEquals(three.closed, False)
+
+
     def test_nonBlocking(self):
         """
         Creating a L{_SubprocessSocket} via
@@ -165,6 +232,7 @@
         message = "whatever"
         # Need to have a socket that will accept the descriptors.
         dispatcher.addSocket()
-        dispatcher.statusMessage(dispatcher._subprocessSockets[0], message)
-        dispatcher.statusMessage(dispatcher._subprocessSockets[0], message)
+        subskt = dispatcher._subprocessSockets[0]
+        dispatcher.statusMessage(subskt, message)
+        dispatcher.statusMessage(subskt, message)
         self.assertEquals(q, [[-1], [-2]])

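The verifiedImplementer() helper used above combines @implementer with zope.interface.verify.verifyClass, so a class that is missing a declared method fails at class-definition time instead of deep inside a test run. A self-contained illustration (IFrobber is a made-up interface, used only to show the failure mode):

    from zope.interface import Interface, implementer
    from zope.interface.verify import verifyClass

    def verifiedImplementer(interface):
        def _(cls):
            result = implementer(interface)(cls)
            verifyClass(interface, result)  # raises if cls is incomplete
            return result
        return _

    class IFrobber(Interface):
        def frob(thing):
            "Frob a thing."

    @verifiedImplementer(IFrobber)
    class Frobber(object):
        def frob(self, thing):
            return thing

    # Omitting frob() from Frobber would raise
    # zope.interface.exceptions.BrokenImplementation at decoration time.
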
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/python/log.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/python/log.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/python/log.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -34,7 +34,8 @@
         log = Logger()
 
         def oops(self, data):
-            self.log.error("Oops! Invalid data from server: {data!r}", data=data)
+            self.log.error("Oops! Invalid data from server: {data!r}",
+                           data=data)
 
 C{Logger}s have namespaces, for which logging can be configured independently.
 Namespaces may be specified by passing in a C{namespace} argument to L{Logger}
@@ -76,14 +77,16 @@
 from zope.interface import Interface, implementer
 from twisted.python.constants import NamedConstant, Names
 from twisted.python.failure import Failure
-from twisted.python.reflect import safe_str
+from twisted.python.reflect import safe_str, safe_repr
 import twisted.python.log
 from twisted.python.log import msg as twistedLogMessage
 from twisted.python.log import addObserver, removeObserver
 from twisted.python.log import ILogObserver as ILegacyLogObserver
 
+OBSERVER_REMOVED = (
+    "Temporarily removing observer {observer} due to exception: {e}"
+)
 
-
 #
 # Log level definitions
 #
@@ -150,24 +153,27 @@
         """
         return cls._levelPriorities[constant]
 
-LogLevel._levelPriorities = dict((constant, idx)
-                                 for (idx, constant) in
-                                     (enumerate(LogLevel.iterconstants())))
 
+LogLevel._levelPriorities = dict(
+    (constant, idx) for (idx, constant) in
+    (enumerate(LogLevel.iterconstants()))
+)
 
 
+
 #
 # Mappings to Python's logging module
 #
 pythonLogLevelMapping = {
-    LogLevel.debug   : logging.DEBUG,
-    LogLevel.info    : logging.INFO,
-    LogLevel.warn    : logging.WARNING,
-    LogLevel.error   : logging.ERROR,
-   #LogLevel.critical: logging.CRITICAL,
+    LogLevel.debug: logging.DEBUG,
+    LogLevel.info:  logging.INFO,
+    LogLevel.warn:  logging.WARNING,
+    LogLevel.error: logging.ERROR,
+    # LogLevel.critical: logging.CRITICAL,
 }
 
 
+
 ##
 # Loggers
 ##
@@ -206,21 +212,20 @@
         return formatWithCall(format, event)
 
     except BaseException as e:
-        try:
-            return formatUnformattableEvent(event, e)
-        except:
-            return u"MESSAGE LOST"
+        return formatUnformattableEvent(event, e)
 
 
 
 def formatUnformattableEvent(event, error):
     """
-    Formats an event as a L{unicode} that describes the event
-    generically and a formatting error.
+    Formats an event as a L{unicode} that describes the event generically and a
+    formatting error.
 
     @param event: a logging event
+    @type event: L{dict}
 
     @param error: the formatting error
+    @type error: L{Exception}
 
     @return: a L{unicode}
     """
@@ -229,35 +234,22 @@
             u"Unable to format event {event!r}: {error}"
             .format(event=event, error=error)
         )
-    except BaseException as error:
-        #
+    except BaseException:
         # Yikes, something really nasty happened.
         #
-        # Try to recover as much formattable data as possible;
-        # hopefully at least the namespace is sane, which will
-        # help you find the offending logger.
-        #
-        try:
-            items = []
+        # Try to recover as much formattable data as possible; hopefully at
+        # least the namespace is sane, which will help you find the offending
+        # logger.
+        failure = Failure()
 
-            for key, value in event.items():
-                try:
-                    items.append(u"{key!r} = ".format(key=key))
-                except:
-                    items.append(u"<UNFORMATTABLE KEY> = ")
-                try:
-                    items.append(u"{value!r}".format(value=value))
-                except:
-                    items.append(u"<UNFORMATTABLE VALUE>")
+        text = ", ".join(" = ".join((safe_repr(key), safe_repr(value)))
+                         for key, value in event.items())
 
-            text = ", ".join(items)
-        except:
-            text = ""
-
         return (
-            u"MESSAGE LOST: Unformattable object logged: {error}\n"
-            u"Recoverable data: {text}"
-            .format(text=text)
+            u"MESSAGE LOST: unformattable object logged: {error}\n"
+            u"Recoverable data: {text}\n"
+            u"Exception during formatting:\n{failure}"
+            .format(error=safe_repr(error), failure=failure, text=text)
         )
 
 
@@ -344,28 +336,24 @@
         @param kwargs: additional keyword parameters to include with
             the event.
         """
-        if level not in LogLevel.iterconstants(): # FIXME: Updated Twisted supports 'in' on constants container
+        # FIXME: Updated Twisted supports 'in' on constants container
+        if level not in LogLevel.iterconstants():
             self.failure(
                 "Got invalid log level {invalidLevel!r} in {logger}.emit().",
                 Failure(InvalidLogLevelError(level)),
-                invalidLevel = level,
-                logger = self,
+                invalidLevel=level,
+                logger=self,
             )
             #level = LogLevel.error
             # FIXME: continue to emit?
             return
 
-        event = kwargs
-        event.update(
-            log_logger    = self,
-            log_level     = level,
-            log_namespace = self.namespace,
-            log_source    = self.source,
-            log_format    = format,
-            log_time      = time.time(),
+        kwargs.update(
+            log_logger=self, log_level=level, log_namespace=self.namespace,
+            log_source=self.source, log_format=format, log_time=time.time(),
         )
 
-        self.publisher(event)
+        self.publisher(kwargs)
 
 
     def failure(self, format, failure=None, level=LogLevel.error, **kwargs):
@@ -381,8 +369,9 @@
 
         or::
 
-            d = deferred_frob(knob)
-            d.addErrback(lambda f: log.failure, "While frobbing {knob}", f, knob=knob)
+            d = deferredFrob(knob)
+            d.addErrback(lambda f: log.failure(
+                "While frobbing {knob}", f, knob=knob))
 
         @param format: a message format using new-style (PEP 3101)
             formatting.  The logging event (which is a L{dict}) is
@@ -397,7 +386,7 @@
             event.
         """
         if failure is None:
-            failure=Failure()
+            failure = Failure()
 
         self.emit(level, format, log_failure=failure, **kwargs)
 
@@ -410,10 +399,10 @@
     """
 
     def __init__(self, logger=None):
-        if logger is not None:
+        if logger is None:
+            self.newStyleLogger = Logger(Logger._namespaceFromCallingContext())
+        else:
             self.newStyleLogger = logger
-        else:
-            self.newStyleLogger = Logger(Logger._namespaceFromCallingContext())
 
 
     def __getattribute__(self, name):
@@ -446,10 +435,12 @@
             _stuff = Failure(_stuff)
 
         if isinstance(_stuff, Failure):
-            self.newStyleLogger.emit(LogLevel.error, failure=_stuff, why=_why, isError=1, **kwargs)
+            self.newStyleLogger.emit(LogLevel.error, failure=_stuff, why=_why,
+                                     isError=1, **kwargs)
         else:
             # We got called with an invalid _stuff.
-            self.newStyleLogger.emit(LogLevel.error, repr(_stuff), why=_why, isError=1, **kwargs)
+            self.newStyleLogger.emit(LogLevel.error, repr(_stuff), why=_why,
+                                     isError=1, **kwargs)
 
 
 
@@ -475,13 +466,15 @@
 
     setattr(Logger, level.name, log_emit)
 
-for level in LogLevel.iterconstants(): 
-    bindEmit(level)
 
-del level
 
+def _bindLevels():
+    for level in LogLevel.iterconstants():
+        bindEmit(level)
 
+_bindLevels()
 
+
 #
 # Observers
 #
@@ -545,11 +538,11 @@
             pass
 
 
-    def __call__(self, event): 
+    def __call__(self, event):
         for observer in self.observers:
             try:
                 observer(event)
-            except:
+            except BaseException as e:
                 #
                 # We have to remove the offending observer because
                 # we're going to badmouth it to all of its friends
@@ -558,8 +551,8 @@
                 #
                 self.removeObserver(observer)
                 try:
-                    self.log.failure("Observer {observer} raised an exception; removing.", observer=observer)
-                except:
+                    self.log.failure(OBSERVER_REMOVED, observer=observer, e=e)
+                except BaseException:
                     pass
                 finally:
                     self.addObserver(observer)
@@ -639,6 +632,8 @@
     """
     L{ILogFilterPredicate} that filters out events with a log level
     lower than the log level for the event's namespace.
+
+    Events that do not have a log level or namespace are also dropped.
     """
 
     def __init__(self):
@@ -701,11 +696,15 @@
 
 
     def __call__(self, event):
-        level     = event["log_level"]
-        namespace = event["log_namespace"]
+        level     = event.get("log_level", None)
+        namespace = event.get("log_namespace", None)
 
-        if (LogLevel._priorityForLevel(level) <
-            LogLevel._priorityForLevel(self.logLevelForNamespace(namespace))):
+        if (
+            level is None or
+            namespace is None or
+            LogLevel._priorityForLevel(level) <
+            LogLevel._priorityForLevel(self.logLevelForNamespace(namespace))
+        ):
             return PredicateResult.no
 
         return PredicateResult.maybe
@@ -725,8 +724,8 @@
         """
         self.legacyObserver = legacyObserver
 
-    
-    def __call__(self, event): 
+
+    def __call__(self, event):
         prefix = "[{log_namespace}#{log_level.name}] ".format(**event)
 
         level = event["log_level"]
@@ -756,7 +755,9 @@
         if "log_failure" in event:
             event["failure"] = event["log_failure"]
             event["isError"] = 1
-            event["why"] = "{prefix}{message}".format(prefix=prefix, message=formatEvent(event))
+            event["why"] = "{prefix}{message}".format(
+                prefix=prefix, message=formatEvent(event)
+            )
 
         self.legacyObserver(**event)
 
@@ -814,7 +815,8 @@
         self.legacyLogObserver = LegacyLogObserver(twistedLogMessage)
         self.filteredPublisher = LogPublisher(self.legacyLogObserver)
         self.levels            = LogLevelFilterPredicate()
-        self.filters           = FilteringLogObserver(self.filteredPublisher, (self.levels,))
+        self.filters           = FilteringLogObserver(self.filteredPublisher,
+                                                      (self.levels,))
         self.rootPublisher     = LogPublisher(self.filters)
 
 
@@ -862,6 +864,7 @@
     def __init__(self, submapping):
         self._submapping = submapping
 
+
     def __getitem__(self, key):
         callit = key.endswith(u"()")
         realKey = key[:-2] if callit else key
@@ -871,6 +874,7 @@
         return value
 
 
+
 def formatWithCall(formatString, mapping):
     """
     Format a string like L{unicode.format}, but:
@@ -930,16 +934,20 @@
             continue
 
         for name, obj in module.__dict__.iteritems():
-            legacyLogger = LegacyLogger(logger=Logger(namespace=module.__name__))
+            newLogger = Logger(namespace=module.__name__)
+            legacyLogger = LegacyLogger(logger=newLogger)
 
             if obj is twisted.python.log:
-                log.info("Replacing Twisted log module object {0} in {1}".format(name, module.__name__))
+                log.info("Replacing Twisted log module object {0} in {1}"
+                         .format(name, module.__name__))
                 setattr(module, name, legacyLogger)
             elif obj is twisted.python.log.msg:
-                log.info("Replacing Twisted log.msg object {0} in {1}".format(name, module.__name__))
+                log.info("Replacing Twisted log.msg object {0} in {1}"
+                         .format(name, module.__name__))
                 setattr(module, name, legacyLogger.msg)
             elif obj is twisted.python.log.err:
-                log.info("Replacing Twisted log.err object {0} in {1}".format(name, module.__name__))
+                log.info("Replacing Twisted log.err object {0} in {1}"
+                         .format(name, module.__name__))
                 setattr(module, name, legacyLogger.err)
 
 

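Among the log.py changes above, LogLevelFilterPredicate now reads the event with event.get() and drops events lacking a log_level or log_namespace key instead of raising KeyError. A quick sketch of the observable effect (names as imported by the test module below; the first assertion assumes the default info threshold, under which an error event passes):

    from twext.python.log import (
        LogLevel, LogLevelFilterPredicate, PredicateResult
    )

    predicate = LogLevelFilterPredicate()

    # A fully-populated error event passes the default (info) threshold...
    event = dict(log_namespace="twext.web2", log_level=LogLevel.error)
    assert predicate(event) is PredicateResult.maybe

    # ...but events missing either key are now simply dropped.
    assert predicate(dict(log_level=LogLevel.error)) is PredicateResult.no
    assert predicate(dict(log_namespace="twext.web2")) is PredicateResult.no
    assert predicate(dict()) is PredicateResult.no
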
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/python/test/test_log.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/python/test/test_log.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/python/test/test_log.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -23,11 +23,11 @@
 from twext.python.log import (
     LogLevel, InvalidLogLevelError,
     pythonLogLevelMapping,
-    formatEvent, formatWithCall,
+    formatEvent, formatUnformattableEvent, formatWithCall,
     Logger, LegacyLogger,
-    ILogObserver, LogPublisher,
+    ILogObserver, LogPublisher, DefaultLogPublisher,
     FilteringLogObserver, PredicateResult,
-    LogLevelFilterPredicate,
+    LogLevelFilterPredicate, OBSERVER_REMOVED
 )
 
 
@@ -59,7 +59,7 @@
             twistedLogging.removeObserver(observer)
 
         self.emitted = {
-            "level" : level,
+            "level":  level,
             "format": format,
             "kwargs": kwargs,
         }
@@ -67,8 +67,8 @@
 
 
 class TestLegacyLogger(LegacyLogger):
-    def __init__(self):
-        LegacyLogger.__init__(self, logger=TestLogger())
+    def __init__(self, logger=TestLogger()):
+        LegacyLogger.__init__(self, logger=logger)
 
 
 
@@ -131,7 +131,8 @@
         """
         self.failUnless(logLevelForNamespace(None), defaultLogLevel)
         self.failUnless(logLevelForNamespace(""), defaultLogLevel)
-        self.failUnless(logLevelForNamespace("rocker.cool.namespace"), defaultLogLevel)
+        self.failUnless(logLevelForNamespace("rocker.cool.namespace"),
+                        defaultLogLevel)
 
 
     def test_setLogLevel(self):
@@ -142,22 +143,30 @@
         setLogLevelForNamespace("twext.web2", LogLevel.debug)
         setLogLevelForNamespace("twext.web2.dav", LogLevel.warn)
 
-        self.assertEquals(logLevelForNamespace(None                        ), LogLevel.error)
-        self.assertEquals(logLevelForNamespace("twisted"                   ), LogLevel.error)
-        self.assertEquals(logLevelForNamespace("twext.web2"                ), LogLevel.debug)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav"            ), LogLevel.warn)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav.test"       ), LogLevel.warn)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav.test1.test2"), LogLevel.warn)
+        self.assertEquals(logLevelForNamespace(None),
+                          LogLevel.error)
+        self.assertEquals(logLevelForNamespace("twisted"),
+                          LogLevel.error)
+        self.assertEquals(logLevelForNamespace("twext.web2"),
+                          LogLevel.debug)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav"),
+                          LogLevel.warn)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav.test"),
+                          LogLevel.warn)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav.test1.test2"),
+                          LogLevel.warn)
 
 
     def test_setInvalidLogLevel(self):
         """
         Can't pass invalid log levels to setLogLevelForNamespace().
         """
-        self.assertRaises(InvalidLogLevelError, setLogLevelForNamespace, "twext.web2", object())
+        self.assertRaises(InvalidLogLevelError, setLogLevelForNamespace,
+                          "twext.web2", object())
 
         # Level must be a constant, not the name of a constant
-        self.assertRaises(InvalidLogLevelError, setLogLevelForNamespace, "twext.web2", "debug")
+        self.assertRaises(InvalidLogLevelError, setLogLevelForNamespace,
+                          "twext.web2", "debug")
 
 
     def test_clearLogLevels(self):
@@ -169,11 +178,14 @@
 
         clearLogLevels()
 
-        self.assertEquals(logLevelForNamespace("twisted"                   ), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("twext.web2"                ), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav"            ), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav.test"       ), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav.test1.test2"), defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twisted"), defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twext.web2"), defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav"),
+                          defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav.test"),
+                          defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav.test1.test2"),
+                          defaultLogLevel)
 
 
     def test_namespace_default(self):
@@ -191,14 +203,17 @@
         mean that the format key ought to be I{called} rather than stringified.
         """
         self.assertEquals(
-            formatWithCall(u"Hello, {world}. {callme()}.",
-                           dict(world="earth",
-                                callme=lambda: "maybe")),
+            formatWithCall(
+                u"Hello, {world}. {callme()}.",
+                dict(world="earth", callme=lambda: "maybe")
+            ),
             "Hello, earth. maybe."
         )
         self.assertEquals(
-            formatWithCall(u"Hello, {repr()!r}.",
-                           dict(repr=lambda: 'repr')),
+            formatWithCall(
+                u"Hello, {repr()!r}.",
+                dict(repr=lambda: "repr")
+            ),
             "Hello, 'repr'."
         )
 
@@ -262,7 +277,7 @@
         self.assertIn(repr(event), result)
 
 
-    def test_formatEventYouSoNasty(self):
+    def test_formatUnformattableEvent(self):
         """
         Formatting an event that's just plain out to get us.
         """
@@ -273,24 +288,52 @@
         self.assertIn(repr(event), result)
 
 
-#     def test_formatEventYouSoNastyOMGMakeItStop(self):
-#         """
-#         Formatting an event that's just plain out to get us and is
-#         really determined.
-#         """
-#         badRepr = 
+    def test_formatUnformattableEventWithUnformattableKey(self):
+        """
+        Formatting an unformattable event that has an unformattable key.
+        """
+        event = {
+            "log_format": "{evil()}",
+            "evil": lambda: 1/0,
+            Unformattable(): "gurk",
+        }
+        result = formatEvent(event)
+        self.assertIn("MESSAGE LOST: unformattable object logged:", result)
+        self.assertIn("Recoverable data:", result)
+        self.assertIn("Exception during formatting:", result)
 
-#         event = dict(
-#             log_format="{evil()}",
-#             evil=lambda: 1/0,
-#         )
-#         result = formatEvent(event)
 
-#         self.assertIn("Unable to format event", result)
-#         self.assertIn(repr(event), result)
+    def test_formatUnformattableEventWithUnformattableValue(self):
+        """
+        Formatting an unformattable event that has an unformattable value.
+        """
+        event = dict(
+            log_format="{evil()}",
+            evil=lambda: 1/0,
+            gurk=Unformattable(),
+        )
+        result = formatEvent(event)
+        self.assertIn("MESSAGE LOST: unformattable object logged:", result)
+        self.assertIn("Recoverable data:", result)
+        self.assertIn("Exception during formatting:", result)
 
 
+    def test_formatUnformattableEventWithUnformattableErrorOMGWillItStop(self):
+        """
+        Formatting an unformattable event with an unformattable error.
+        """
+        event = dict(
+            log_format="{evil()}",
+            evil=lambda: 1/0,
+            recoverable="okay",
+        )
+        # Call formatUnformattableEvent() directly with a bogus exception.
+        result = formatUnformattableEvent(event, Unformattable())
+        self.assertIn("MESSAGE LOST: unformattable object logged:", result)
+        self.assertIn(repr("recoverable") + " = " + repr("okay"), result)
 
+
+
 class LoggerTests(SetUpTearDown, unittest.TestCase):
     """
     Tests for L{Logger}.
@@ -322,8 +365,8 @@
 
     def test_sourceAvailableForFormatting(self):
         """
-        On instances that have a L{Logger} class attribute, the C{log_source} key
-        is available to format strings.
+        On instances that have a L{Logger} class attribute, the C{log_source}
+        key is available to format strings.
         """
         obj = LogComposedObject("hello")
         log = obj.log
@@ -359,16 +402,19 @@
             self.assertEquals(log.emitted["kwargs"]["junk"], message)
 
             if level >= logLevelForNamespace(log.namespace):
+                self.assertTrue(hasattr(log, "event"), "No event observed.")
                 self.assertEquals(log.event["log_format"], format)
                 self.assertEquals(log.event["log_level"], level)
                 self.assertEquals(log.event["log_namespace"], __name__)
                 self.assertEquals(log.event["log_source"], None)
 
-                self.assertEquals(log.event["logLevel"], pythonLogLevelMapping[level])
+                self.assertEquals(log.event["logLevel"],
+                                  pythonLogLevelMapping[level])
 
                 self.assertEquals(log.event["junk"], message)
 
-                # FIXME: this checks the end of message because we do formatting in emit()
+                # FIXME: this checks the end of message because we do
+                # formatting in emit()
                 self.assertEquals(
                     formatEvent(log.event),
                     message
@@ -407,10 +453,10 @@
 
         log.warn(
             "*",
-            log_format = "#",
-            log_level = LogLevel.error,
-            log_namespace = "*namespace*",
-            log_source = "*source*",
+            log_format="#",
+            log_level=LogLevel.error,
+            log_namespace="*namespace*",
+            log_source="*source*",
         )
 
         # FIXME: Should conflicts log errors?
@@ -487,24 +533,232 @@
         self.assertEquals(set((o1, o3)), set(publisher.observers))
 
 
+    def test_removeObserverNotRegistered(self):
+        """
+        L{LogPublisher.removeObserver} removes an observer that is not
+        registered.
+        """
+        o1 = lambda e: None
+        o2 = lambda e: None
+        o3 = lambda e: None
+
+        publisher = LogPublisher(o1, o2)
+        publisher.removeObserver(o3)
+        self.assertEquals(set((o1, o2)), set(publisher.observers))
+
+
     def test_fanOut(self):
         """
         L{LogPublisher} calls its observers.
         """
-        e1 = []
-        e2 = []
-        e3 = []
+        event = dict(foo=1, bar=2)
 
-        o1 = lambda e: e1.append(e)
-        o2 = lambda e: e2.append(e)
-        o3 = lambda e: e3.append(e)
+        events1 = []
+        events2 = []
+        events3 = []
 
+        o1 = lambda e: events1.append(e)
+        o2 = lambda e: events2.append(e)
+        o3 = lambda e: events3.append(e)
+
         publisher = LogPublisher(o1, o2, o3)
+        publisher(event)
+        self.assertIn(event, events1)
+        self.assertIn(event, events2)
+        self.assertIn(event, events3)
+
+
+    def test_observerRaises(self):
+        nonTestEvents = []
+        Logger.publisher.addObserver(lambda e: nonTestEvents.append(e))
+
+        event = dict(foo=1, bar=2)
+        exception = RuntimeError("ARGH! EVIL DEATH!")
+
+        events = []
+
+        def observer(event):
+            events.append(event)
+            raise exception
+
+        publisher = LogPublisher(observer)
+        publisher(event)
+
+        # Verify that the observer saw my event
+        self.assertIn(event, events)
+
+        # Verify that the observer raised my exception
+        errors = self.flushLoggedErrors(exception.__class__)
+        self.assertEquals(len(errors), 1)
+        self.assertIdentical(errors[0].value, exception)
+
+        # Verify that the exception was logged
+        for event in nonTestEvents:
+            if (
+                event.get("log_format", None) == OBSERVER_REMOVED and
+                getattr(event.get("failure", None), "value") is exception
+            ):
+                break
+        else:
+            self.fail("Observer raised an exception "
+                      "and the exception was not logged.")
+
+
+    def test_observerRaisesAndLoggerHatesMe(self):
+        nonTestEvents = []
+        Logger.publisher.addObserver(lambda e: nonTestEvents.append(e))
+
+        event = dict(foo=1, bar=2)
+        exception = RuntimeError("ARGH! EVIL DEATH!")
+
+        def observer(event):
+            raise RuntimeError("Sad panda")
+
+        class GurkLogger(Logger):
+            def failure(self, *args, **kwargs):
+                raise exception
+
+        publisher = LogPublisher(observer)
+        publisher.log = GurkLogger()
+        publisher(event)
+
+        # Here, the lack of an exception thus far is a success, of sorts
+
+
+
+class DefaultLogPublisherTests(SetUpTearDown, unittest.TestCase):
+    def test_addObserver(self):
+        o1 = lambda e: None
+        o2 = lambda e: None
+        o3 = lambda e: None
+
+        publisher = DefaultLogPublisher()
+        publisher.addObserver(o1)
+        publisher.addObserver(o2, filtered=True)
+        publisher.addObserver(o3, filtered=False)
+
+        self.assertEquals(
+            set((o1, o2, publisher.legacyLogObserver)),
+            set(publisher.filteredPublisher.observers),
+            "Filtered observers do not match expected set"
+        )
+        self.assertEquals(
+            set((o3, publisher.filters)),
+            set(publisher.rootPublisher.observers),
+            "Root observers do not match expected set"
+        )
+
+
+    def test_addObserverAgain(self):
+        o1 = lambda e: None
+        o2 = lambda e: None
+        o3 = lambda e: None
+
+        publisher = DefaultLogPublisher()
+        publisher.addObserver(o1)
+        publisher.addObserver(o2, filtered=True)
+        publisher.addObserver(o3, filtered=False)
+
+        # Swap filtered-ness of o2 and o3
+        publisher.addObserver(o1)
+        publisher.addObserver(o2, filtered=False)
+        publisher.addObserver(o3, filtered=True)
+
+        self.assertEquals(
+            set((o1, o3, publisher.legacyLogObserver)),
+            set(publisher.filteredPublisher.observers),
+            "Filtered observers do not match expected set"
+        )
+        self.assertEquals(
+            set((o2, publisher.filters)),
+            set(publisher.rootPublisher.observers),
+            "Root observers do not match expected set"
+        )
+
+
+    def test_removeObserver(self):
+        o1 = lambda e: None
+        o2 = lambda e: None
+        o3 = lambda e: None
+
+        publisher = DefaultLogPublisher()
+        publisher.addObserver(o1)
+        publisher.addObserver(o2, filtered=True)
+        publisher.addObserver(o3, filtered=False)
         publisher.removeObserver(o2)
-        self.assertEquals(set((o1, o3)), set(publisher.observers))
+        publisher.removeObserver(o3)
 
+        self.assertEquals(
+            set((o1, publisher.legacyLogObserver)),
+            set(publisher.filteredPublisher.observers),
+            "Filtered observers do not match expected set"
+        )
+        self.assertEquals(
+            set((publisher.filters,)),
+            set(publisher.rootPublisher.observers),
+            "Root observers do not match expected set"
+        )
 
 
+    def test_filteredObserver(self):
+        namespace = __name__
+
+        event_debug = dict(log_namespace=namespace,
+                           log_level=LogLevel.debug, log_format="")
+        event_error = dict(log_namespace=namespace,
+                           log_level=LogLevel.error, log_format="")
+        events = []
+
+        observer = lambda e: events.append(e)
+
+        publisher = DefaultLogPublisher()
+
+        publisher.addObserver(observer, filtered=True)
+        publisher(event_debug)
+        publisher(event_error)
+        self.assertNotIn(event_debug, events)
+        self.assertIn(event_error, events)
+
+
+    def test_filteredObserverNoFilteringKeys(self):
+        event_debug = dict(log_level=LogLevel.debug)
+        event_error = dict(log_level=LogLevel.error)
+        event_none  = dict()
+        events = []
+
+        observer = lambda e: events.append(e)
+
+        publisher = DefaultLogPublisher()
+        publisher.addObserver(observer, filtered=True)
+        publisher(event_debug)
+        publisher(event_error)
+        publisher(event_none)
+        self.assertNotIn(event_debug, events)
+        self.assertNotIn(event_error, events)
+        self.assertNotIn(event_none, events)
+
+
+    def test_unfilteredObserver(self):
+        namespace = __name__
+
+        event_debug = dict(log_namespace=namespace, log_level=LogLevel.debug,
+                           log_format="")
+        event_error = dict(log_namespace=namespace, log_level=LogLevel.error,
+                           log_format="")
+        events = []
+
+        observer = lambda e: events.append(e)
+
+        publisher = DefaultLogPublisher()
+
+        publisher.addObserver(observer, filtered=False)
+        publisher(event_debug)
+        publisher(event_error)
+        self.assertIn(event_debug, events)
+        self.assertIn(event_error, events)
+
+
+
 class FilteringLogObserverTests(SetUpTearDown, unittest.TestCase):
     """
     Tests for L{FilteringLogObserver}.
@@ -552,11 +806,16 @@
             def no(event):
                 return PredicateResult.no
 
+            @staticmethod
+            def bogus(event):
+                return None
+
         predicates = (getattr(Filters, f) for f in filters)
         eventsSeen = []
         trackingObserver = lambda e: eventsSeen.append(e)
         filteringObserver = FilteringLogObserver(trackingObserver, predicates)
-        for e in events: filteringObserver(e)
+        for e in events:
+            filteringObserver(e)
 
         return [e["count"] for e in eventsSeen]
 
@@ -564,25 +823,35 @@
     def test_shouldLogEvent_noFilters(self):
         self.assertEquals(self.filterWith(), [0, 1, 2, 3])
 
+
     def test_shouldLogEvent_noFilter(self):
         self.assertEquals(self.filterWith("notTwo"), [0, 1, 3])
 
+
     def test_shouldLogEvent_yesFilter(self):
         self.assertEquals(self.filterWith("twoPlus"), [0, 1, 2, 3])
 
+
     def test_shouldLogEvent_yesNoFilter(self):
         self.assertEquals(self.filterWith("twoPlus", "no"), [2, 3])
 
+
     def test_shouldLogEvent_yesYesNoFilter(self):
-        self.assertEquals(self.filterWith("twoPlus", "twoMinus", "no"), [0, 1, 2, 3])
+        self.assertEquals(self.filterWith("twoPlus", "twoMinus", "no"),
+                          [0, 1, 2, 3])
 
 
+    def test_shouldLogEvent_badPredicateResult(self):
+        self.assertRaises(TypeError, self.filterWith, "bogus")
+
+
     def test_call(self):
         e = dict(obj=object())
 
         def callWithPredicateResult(result):
             seen = []
-            observer = FilteringLogObserver(lambda e: seen.append(e), (lambda e: result,))
+            observer = FilteringLogObserver(lambda e: seen.append(e),
+                                            (lambda e: result,))
             observer(e)
             return seen
 
@@ -597,6 +866,14 @@
     Tests for L{LegacyLogger}.
     """
 
+    def test_namespace_default(self):
+        """
+        Default namespace is module name.
+        """
+        log = TestLegacyLogger(logger=None)
+        self.assertEquals(log.newStyleLogger.namespace, __name__)
+
+
     def test_passThroughAttributes(self):
         """
         C{__getattribute__} on L{LegacyLogger} is passing through to Twisted's
@@ -619,19 +896,22 @@
         log = TestLegacyLogger()
 
         message = "Hi, there."
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
 
         log.msg(message, **kwargs)
 
-        self.assertIdentical(log.newStyleLogger.emitted["level"], LogLevel.info)
+        self.assertIdentical(log.newStyleLogger.emitted["level"],
+                             LogLevel.info)
         self.assertEquals(log.newStyleLogger.emitted["format"], message)
 
         for key, value in kwargs.items():
-            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key], value)
+            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key],
+                                 value)
 
         log.msg(foo="")
 
-        self.assertIdentical(log.newStyleLogger.emitted["level"], LogLevel.info)
+        self.assertIdentical(log.newStyleLogger.emitted["level"],
+                             LogLevel.info)
         self.assertIdentical(log.newStyleLogger.emitted["format"], None)
 
 
@@ -642,7 +922,7 @@
         log = TestLegacyLogger()
 
         exception = RuntimeError("Oh me, oh my.")
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
 
         try:
             raise exception
@@ -659,7 +939,7 @@
         log = TestLegacyLogger()
 
         exception = RuntimeError("Oh me, oh my.")
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
         why = "Because I said so."
 
         try:
@@ -677,7 +957,7 @@
         log = TestLegacyLogger()
 
         exception = RuntimeError("Oh me, oh my.")
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
         why = "Because I said so."
 
         try:
@@ -695,7 +975,7 @@
         log = TestLegacyLogger()
 
         exception = RuntimeError("Oh me, oh my.")
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
         why = "Because I said so."
         bogus = object()
 
@@ -707,12 +987,14 @@
         errors = self.flushLoggedErrors(exception.__class__)
         self.assertEquals(len(errors), 0)
 
-        self.assertIdentical(log.newStyleLogger.emitted["level"], LogLevel.error)
+        self.assertIdentical(log.newStyleLogger.emitted["level"],
+                             LogLevel.error)
         self.assertEquals(log.newStyleLogger.emitted["format"], repr(bogus))
         self.assertIdentical(log.newStyleLogger.emitted["kwargs"]["why"], why)
 
         for key, value in kwargs.items():
-            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key], value)
+            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key],
+                                 value)
 
 
     def legacy_err(self, log, kwargs, why, exception):
@@ -724,11 +1006,24 @@
         errors = self.flushLoggedErrors(exception.__class__)
         self.assertEquals(len(errors), 1)
 
-        self.assertIdentical(log.newStyleLogger.emitted["level"], LogLevel.error)
+        self.assertIdentical(log.newStyleLogger.emitted["level"],
+                             LogLevel.error)
         self.assertEquals(log.newStyleLogger.emitted["format"], None)
-        self.assertIdentical(log.newStyleLogger.emitted["kwargs"]["failure"].__class__, Failure)
-        self.assertIdentical(log.newStyleLogger.emitted["kwargs"]["failure"].value, exception)
-        self.assertIdentical(log.newStyleLogger.emitted["kwargs"]["why"], why)
+        emittedKwargs = log.newStyleLogger.emitted["kwargs"]
+        self.assertIdentical(emittedKwargs["failure"].__class__, Failure)
+        self.assertIdentical(emittedKwargs["failure"].value, exception)
+        self.assertIdentical(emittedKwargs["why"], why)
 
         for key, value in kwargs.items():
-            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key], value)
+            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key],
+                                 value)
+
+
+
+class Unformattable(object):
+    """
+    An object that raises an exception from C{__repr__}.
+    """
+
+    def __repr__(self):
+        return str(1/0)

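The Unformattable helper and the formatUnformattableEvent tests above exercise the degraded path in log.py: when both the format string and the event's own repr() blow up, formatEvent() falls back to safe_repr() and reports whatever it could salvage. The same scenario, compressed, outside the test harness (expected substrings taken from the new formatUnformattableEvent()):

    from twext.python.log import formatEvent

    class Unformattable(object):
        def __repr__(self):
            return str(1 / 0)        # repr() itself fails

    event = {
        "log_format": "{evil()}",    # formatting raises ZeroDivisionError
        "evil": lambda: 1 / 0,
        Unformattable(): "gurk",     # and the event cannot even be repr()ed
        "recoverable": "okay",       # yet this pair survives via safe_repr()
    }

    text = formatEvent(event)
    assert "MESSAGE LOST: unformattable object logged:" in text
    assert "'recoverable' = 'okay'" in text
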
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/dav/test/test_util.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/dav/test/test_util.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/dav/test/test_util.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -7,10 +7,10 @@
 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 # copies of the Software, and to permit persons to whom the Software is
 # furnished to do so, subject to the following conditions:
-# 
+#
 # The above copyright notice and this permission notice shall be included in all
 # copies or substantial portions of the Software.
-# 
+#
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -42,6 +42,7 @@
         self.assertEquals(util.normalizeURL("///../"), "/")
         self.assertEquals(util.normalizeURL("/.."), "/")
 
+
     def test_joinURL(self):
         """
         joinURL()
@@ -67,6 +68,7 @@
         self.assertEquals(util.joinURL("/foo", "/../"), "/")
         self.assertEquals(util.joinURL("/foo", "/./"), "/foo/")
 
+
     def test_parentForURL(self):
         """
         parentForURL()
@@ -83,6 +85,8 @@
         self.assertEquals(util.parentForURL("http://server/foo/bar/."), "http://server/foo/")
         self.assertEquals(util.parentForURL("http://server/foo/bar"), "http://server/foo/")
         self.assertEquals(util.parentForURL("http://server/foo/bar/"), "http://server/foo/")
+        self.assertEquals(util.parentForURL("http://server/foo/bar?x=1&y=2"), "http://server/foo/")
+        self.assertEquals(util.parentForURL("http://server/foo/bar/?x=1&y=2"), "http://server/foo/")
         self.assertEquals(util.parentForURL("/"), None)
         self.assertEquals(util.parentForURL("/foo/.."), None)
         self.assertEquals(util.parentForURL("/foo/../"), None)
@@ -94,3 +98,5 @@
         self.assertEquals(util.parentForURL("/foo/bar/."), "/foo/")
         self.assertEquals(util.parentForURL("/foo/bar"), "/foo/")
         self.assertEquals(util.parentForURL("/foo/bar/"), "/foo/")
+        self.assertEquals(util.parentForURL("/foo/bar?x=1&y=2"), "/foo/")
+        self.assertEquals(util.parentForURL("/foo/bar/?x=1&y=2"), "/foo/")

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/dav/util.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/dav/util.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/dav/util.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -8,10 +8,10 @@
 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 # copies of the Software, and to permit persons to whom the Software is
 # furnished to do so, subject to the following conditions:
-# 
+#
 # The above copyright notice and this permission notice shall be included in all
 # copies or substantial portions of the Software.
-# 
+#
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -61,7 +61,8 @@
 def allDataFromStream(stream, filter=None):
     data = []
     def gotAllData(_):
-        if not data: return None
+        if not data:
+            return None
         result = "".join([str(x) for x in data])
         if filter is None:
             return result
@@ -69,6 +70,8 @@
             return filter(result)
     return readStream(stream, data.append).addCallback(gotAllData)
 
+
+
 def davXMLFromStream(stream):
     # FIXME:
     #   This reads the request body into a string and then parses it.
@@ -77,6 +80,7 @@
     if stream is None:
         return succeed(None)
 
+
     def parse(xml):
         try:
             doc = WebDAVDocument.fromString(xml)
@@ -87,11 +91,16 @@
             raise
     return allDataFromStream(stream, parse)
 
+
+
 def noDataFromStream(stream):
     def gotData(data):
-        if data: raise ValueError("Stream contains unexpected data.")
+        if data:
+            raise ValueError("Stream contains unexpected data.")
     return readStream(stream, gotData)
 
+
+
 ##
 # URLs
 ##
@@ -111,9 +120,10 @@
         if path[0] == "/":
             count = 0
             for char in path:
-                if char != "/": break
+                if char != "/":
+                    break
                 count += 1
-            path = path[count-1:]
+            path = path[count - 1:]
 
         return path
 
@@ -123,6 +133,8 @@
 
     return urlunsplit((scheme, host, urllib.quote(path), query, fragment))
 
+
+
 def joinURL(*urls):
     """
     Appends URLs in series.
@@ -142,16 +154,19 @@
     else:
         return url + trailing
 
+
+
 def parentForURL(url):
     """
     Extracts the URL of the containing collection resource for the resource
-    corresponding to a given URL.
+    corresponding to a given URL. This removes any query or fragment pieces.
+
     @param url: an absolute (server-relative is OK) URL.
     @return: the normalized URL of the collection resource containing the
         resource corresponding to C{url}.  The returned URL will always contain
         a trailing C{"/"}.
     """
-    (scheme, host, path, query, fragment) = urlsplit(normalizeURL(url))
+    (scheme, host, path, _ignore_query, _ignore_fragment) = urlsplit(normalizeURL(url))
 
     index = path.rfind("/")
     if index is 0:
@@ -165,8 +180,10 @@
         else:
             path = path[:index] + "/"
 
-    return urlunsplit((scheme, host, path, query, fragment))
+    return urlunsplit((scheme, host, path, None, None))
 
+
+
 ##
 # Python magic
 ##
@@ -180,6 +197,8 @@
     caller = inspect.getouterframes(inspect.currentframe())[1][3]
     raise NotImplementedError("Method %s is unimplemented in subclass %s" % (caller, obj.__class__))
 
+
+
 def bindMethods(module, clazz, prefixes=("preconditions_", "http_", "report_")):
     """
     Binds all functions in the given module (as defined by that module's

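parentForURL() above now discards the query and fragment before computing the parent collection, matching the new cases in test_util.py. For example (values taken directly from those tests):

    from twext.web2.dav.util import parentForURL

    # Query strings no longer leak into the parent URL.
    assert parentForURL("http://server/foo/bar?x=1&y=2") == "http://server/foo/"
    assert parentForURL("/foo/bar/?x=1&y=2") == "/foo/"

    # The root still has no parent.
    assert parentForURL("/") is None
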
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/metafd.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/metafd.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/metafd.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -23,6 +23,8 @@
 
 from functools import total_ordering
 
+from zope.interface import implementer
+
 from twext.internet.sendfdport import (
     InheritedPort, InheritedSocketDispatcher, InheritingProtocolFactory)
 from twext.internet.tcp import MaxAcceptTCPServer
@@ -32,6 +34,7 @@
 from twisted.internet import reactor
 from twisted.python.util import FancyStrMixin
 from twisted.internet.tcp import Server
+from twext.internet.sendfdport import IStatusWatcher
 
 log = Logger()
 
@@ -167,10 +170,11 @@
     The status of a worker process.
     """
 
-    showAttributes = "acknowledged unacknowledged started abandoned".split()
+    showAttributes = ("acknowledged unacknowledged started abandoned unclosed"
+                      .split())
 
     def __init__(self, acknowledged=0, unacknowledged=0, started=0,
-                 abandoned=0):
+                 abandoned=0, unclosed=0):
         """
         Create a L{ConnectionStatus} with a number of sent connections and a
         number of un-acknowledged connections.
@@ -188,11 +192,15 @@
             worker restarted.
 
         @param started: The number of times this worker has been started.
+
+        @param unclosed: The number of sockets which have been sent to the
+            subprocess but not yet closed.
         """
         self.acknowledged = acknowledged
         self.unacknowledged = unacknowledged
         self.started = started
         self.abandoned = abandoned
+        self.unclosed = unclosed
 
 
     def effective(self):
@@ -211,14 +219,13 @@
 
 
     def _tuplify(self):
-        return (self.acknowledged, self.unacknowledged, self.started,
-                self.abandoned)
+        return tuple(getattr(self, attr) for attr in self.showAttributes)
 
 
     def __lt__(self, other):
         if not isinstance(other, WorkerStatus):
             return NotImplemented
-        return self._tuplify() < other._tuplify()
+        return self.effective() < other.effective()
 
 
     def __eq__(self, other):
@@ -230,22 +237,20 @@
     def __add__(self, other):
         if not isinstance(other, WorkerStatus):
             return NotImplemented
-        return self.__class__(self.acknowledged + other.acknowledged,
-                              self.unacknowledged + other.unacknowledged,
-                              self.started + other.started,
-                              self.abandoned + other.abandoned)
+        a = self._tuplify()
+        b = other._tuplify()
+        c = [a1 + b1 for (a1, b1) in zip(a, b)]
+        return self.__class__(*c)
 
 
     def __sub__(self, other):
         if not isinstance(other, WorkerStatus):
             return NotImplemented
-        return self + self.__class__(-other.acknowledged,
-                                     -other.unacknowledged,
-                                     -other.started,
-                                     -other.abandoned)
+        return self + self.__class__(*[-x for x in other._tuplify()])
 
 
 
+@implementer(IStatusWatcher)
 class ConnectionLimiter(MultiService, object):
     """
     Connection limiter for use with L{InheritedSocketDispatcher}.
@@ -253,6 +258,8 @@
     This depends on statuses being reported by L{ReportingHTTPFactory}
     """
 
+    _outstandingRequests = 0
+
     def __init__(self, maxAccepts, maxRequests):
         """
         Create a L{ConnectionLimiter} with an associated dispatcher and
@@ -319,9 +326,18 @@
         else:
             # '+' acknowledges that the subprocess has taken on the work.
             return previousStatus + WorkerStatus(acknowledged=1,
-                                                 unacknowledged=-1)
+                                                 unacknowledged=-1,
+                                                 unclosed=1)
 
 
+    def closeCountFromStatus(self, status):
+        """
+        Determine the number of sockets to close from the current status.
+        """
+        toClose = status.unclosed
+        return (toClose, status - WorkerStatus(unclosed=toClose))
+
+
     def newConnectionStatus(self, previousStatus):
         """
         Determine the effect of a new connection being sent on a subprocess
@@ -344,15 +360,13 @@
         self._outstandingRequests = current # preserve for or= field in log
         maximum = self.maxRequests
         overloaded = (current >= maximum)
-        if overloaded:
-            for f in self.factories:
-                f.myServer.myPort.stopReading()
-        else:
-            for f in self.factories:
-                f.myServer.myPort.startReading()
+        for f in self.factories:
+            if overloaded:
+                f.loadAboveMaximum()
+            else:
+                f.loadNominal()
 
 
-    _outstandingRequests = 0
     @property # make read-only
     def outstandingRequests(self):
         return self._outstandingRequests
@@ -386,6 +400,20 @@
         self.maxRequests = limiter.maxRequests
 
 
+    def loadAboveMaximum(self):
+        """
+        The current server load has exceeded the maximum allowable.
+        """
+        self.myServer.myPort.stopReading()
+
+
+    def loadNominal(self):
+        """
+        The current server load is nominal; proceed with reading requests.
+        """
+        self.myServer.myPort.startReading()
+
+
     @property
     def outstandingRequests(self):
         return self.limiter.outstandingRequests

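The WorkerStatus arithmetic in metafd.py above is now driven by the
showAttributes list, so adding a new counter such as unclosed no longer
requires editing __add__ and __sub__ by hand. A rough standalone sketch of that
pattern (the class and field names here are illustrative, not the project's):

    class Counters(object):
        fields = ("acknowledged", "unacknowledged", "started", "abandoned", "unclosed")

        def __init__(self, **kw):
            for name in self.fields:
                setattr(self, name, kw.get(name, 0))

        def _tuplify(self):
            # One tuple per instance, in a fixed field order.
            return tuple(getattr(self, name) for name in self.fields)

        def __add__(self, other):
            summed = [a + b for (a, b) in zip(self._tuplify(), other._tuplify())]
            return self.__class__(**dict(zip(self.fields, summed)))

        def __sub__(self, other):
            negated = {name: -value for (name, value) in zip(other.fields, other._tuplify())}
            return self + self.__class__(**negated)

    total = Counters(acknowledged=2) + Counters(unacknowledged=1)
    assert total._tuplify() == (2, 1, 0, 0, 0)
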
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/test/test_metafd.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/test/test_metafd.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twext/web2/test/test_metafd.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -61,6 +61,7 @@
         return ("4.3.2.1", 4321)
 
 
+
 class InheritedPortForTesting(sendfdport.InheritedPort):
     """
     L{sendfdport.InheritedPort} subclass that prevents certain I/O operations
@@ -92,15 +93,19 @@
     def startReading(self):
         "Do nothing."
 
+
     def stopReading(self):
         "Do nothing."
 
+
     def startWriting(self):
         "Do nothing."
 
+
     def stopWriting(self):
         "Do nothing."
 
+
     def __init__(self, *a, **kw):
         super(ServerTransportForTesting, self).__init__(*a, **kw)
         self.reactor = None
@@ -164,6 +169,7 @@
         builder = LimiterBuilder(self)
         builder.fillUp()
         self.assertEquals(builder.port.reading, False) # sanity check
+        self.assertEquals(builder.highestLoad(), builder.requestsPerSocket)
         builder.loadDown()
         self.assertEquals(builder.port.reading, True)
 
@@ -177,10 +183,30 @@
         builder = LimiterBuilder(self)
         builder.fillUp()
         self.assertEquals(builder.port.reading, False)
+        self.assertEquals(builder.highestLoad(), builder.requestsPerSocket)
         builder.processRestart()
         self.assertEquals(builder.port.reading, True)
 
 
+    def test_unevenLoadDistribution(self):
+        """
+        Subprocess sockets should be selected for subsequent socket sends by
+        ascending status.  Status should sum sent and successfully subsumed
+        sockets.
+        """
+        builder = LimiterBuilder(self)
+        # Give one simulated worker a higher acknowledged load than the other.
+        builder.fillUp(True, 1)
+        # There should still be plenty of spare capacity.
+        self.assertEquals(builder.port.reading, True)
+        # Then slam it with a bunch of incoming requests.
+        builder.fillUp(False, builder.limiter.maxRequests - 1)
+        # Now capacity is full.
+        self.assertEquals(builder.port.reading, False)
+        # And everyone should have an even amount of work.
+        self.assertEquals(builder.highestLoad(), builder.requestsPerSocket)
+
+
     def test_processStopsReadingEvenWhenConnectionsAreNotAcknowledged(self):
         """
         L{ConnectionLimiter.statusesChanged} determines whether the current
@@ -188,6 +214,7 @@
         """
         builder = LimiterBuilder(self)
         builder.fillUp(acknowledged=False)
+        self.assertEquals(builder.highestLoad(), builder.requestsPerSocket)
         self.assertEquals(builder.port.reading, False)
         builder.processRestart()
         self.assertEquals(builder.port.reading, True)
@@ -198,9 +225,9 @@
         L{WorkerStatus.__repr__} will show all the values associated with the
         status of the worker.
         """
-        self.assertEquals(repr(WorkerStatus(1, 2, 3, 4)),
+        self.assertEquals(repr(WorkerStatus(1, 2, 3, 4, 5)),
                           "<WorkerStatus acknowledged=1 unacknowledged=2 "
-                          "started=3 abandoned=4>")
+                          "started=3 abandoned=4 unclosed=5>")
 
 
 
@@ -210,19 +237,33 @@
     for a given unit test.
     """
 
-    def __init__(self, test, maxReq=3):
-        self.limiter = ConnectionLimiter(2, maxRequests=maxReq)
+    def __init__(self, test, requestsPerSocket=3, socketCount=2):
+        # Similar to MaxRequests in the configuration.
+        self.requestsPerSocket = requestsPerSocket
+        # Similar to ProcessCount in the configuration.
+        self.socketCount = socketCount
+        self.limiter = ConnectionLimiter(
+            2, maxRequests=requestsPerSocket * socketCount
+        )
         self.dispatcher = self.limiter.dispatcher
         self.dispatcher.reactor = ReaderAdder()
         self.service = Service()
         self.limiter.addPortService("TCP", 4321, "127.0.0.1", 5,
                                     self.serverServiceMakerMaker(self.service))
-        self.dispatcher.addSocket()
+        for ignored in xrange(socketCount):
+            self.dispatcher.addSocket()
         # Has to be running in order to add stuff.
         self.limiter.startService()
         self.port = self.service.myPort
 
 
+    def highestLoad(self):
+        return max(
+            skt.status.effective()
+            for skt in self.limiter.dispatcher._subprocessSockets
+        )
+
+
     def serverServiceMakerMaker(self, s):
         """
         Make a serverServiceMaker for use with
@@ -237,21 +278,25 @@
         def serverServiceMaker(port, factory, *a, **k):
             s.factory = factory
             s.myPort = NotAPort()
-            s.myPort.startReading() # TODO: technically, should wait for startService
+            # TODO: technically, the following should wait for startService
+            s.myPort.startReading()
             factory.myServer = s
             return s
         return serverServiceMaker
 
 
-    def fillUp(self, acknowledged=True):
+    def fillUp(self, acknowledged=True, count=0):
         """
         Fill up all the slots on the connection limiter.
 
         @param acknowledged: Should the virtual connections created by this
             method send a message back to the dispatcher indicating that the
             subprocess has acknowledged receipt of the file descriptor?
+
+        @param count: Amount of load to add; defaults to the maximum allowed
+            by the limiter (its maxRequests).
         """
-        for x in range(self.limiter.maxRequests):
+        for x in range(count or self.limiter.maxRequests):
             self.dispatcher.sendFileDescriptor(None, "SSL")
             if acknowledged:
                 self.dispatcher.statusMessage(

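The new test_unevenLoadDistribution case above expects the dispatcher to hand
each incoming file descriptor to the subprocess socket with the lowest
effective status, so that load evens out. A toy illustration of that selection
rule, using plain dictionaries rather than the dispatcher's real status
objects:

    # "Effective" load here means acknowledged plus sent-but-unacknowledged work.
    workers = [{"acknowledged": 1, "unacknowledged": 0},
               {"acknowledged": 0, "unacknowledged": 0}]

    def effective(status):
        return status["acknowledged"] + status["unacknowledged"]

    for _ in range(5):
        target = min(workers, key=effective)   # always pick the least-loaded worker
        target["unacknowledged"] += 1          # simulate sending one descriptor

    # Load ends up spread as evenly as possible across the two workers.
    assert abs(effective(workers[0]) - effective(workers[1])) <= 1
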
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/caldavxml.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/caldavxml.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/caldavxml.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -65,7 +65,11 @@
     "calendar-query-extended",
 )
 
+caldav_timezones_by_reference_compliance = (
+    "calendar-no-timezone",
+)
 
+
 class CalDAVElement (WebDAVElement):
     """
     CalDAV XML element.

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/method/report_sync_collection.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/method/report_sync_collection.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/method/report_sync_collection.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -58,6 +58,14 @@
 
     responses = []
 
+    # Do not support limit
+    if sync_collection.sync_limit is not None:
+        raise HTTPError(ErrorResponse(
+            responsecode.INSUFFICIENT_STORAGE_SPACE,
+            element.NumberOfMatchesWithinLimits(),
+            "Report limit not supported",
+        ))
+
     # Process Depth and sync-level for backwards compatibility
     # Use sync-level if present and ignore Depth, else use Depth
     if sync_collection.sync_level:

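The report_sync_collection.py hunk above rejects any client-supplied DAV:limit
on a sync REPORT with a 507 response carrying the
DAV:number-of-matches-within-limits precondition. A minimal sketch of that
guard, with hypothetical names standing in for the server's request objects:

    class SyncReportArgs(object):
        def __init__(self, sync_level="1", sync_limit=None):
            self.sync_level = sync_level
            self.sync_limit = sync_limit

    def check_sync_limit(args):
        # Mirror the guard added above: a limit element is not supported.
        if args.sync_limit is not None:
            raise ValueError("Report limit not supported")

    check_sync_limit(SyncReportArgs())                 # no limit: accepted
    try:
        check_sync_limit(SyncReportArgs(sync_limit=10))
    except ValueError:
        pass                                           # the server would answer 507 here
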
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/stdconfig.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/stdconfig.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -54,7 +54,7 @@
     },
     "twistedcaldav.directory.appleopendirectory.OpenDirectoryService": {
         "node": "/Search",
-        "cacheTimeout": 10, # Minutes
+        "cacheTimeout": 1, # Minutes
         "batchSize": 100, # for splitting up large queries
         "negativeCaching": False,
         "restrictEnabledRecords": False,
@@ -62,7 +62,7 @@
         "recordTypes": ("users", "groups"),
     },
     "twistedcaldav.directory.ldapdirectory.LdapDirectoryService": {
-        "cacheTimeout": 10, # Minutes
+        "cacheTimeout": 1, # Minutes
         "negativeCaching": False,
         "warningThresholdSeconds": 3,
         "batchSize": 500, # for splitting up large queries
@@ -1546,6 +1546,8 @@
             compliance += caldavxml.caldav_managed_attachments_compliance
         if configDict.Scheduling.Options.TimestampAttendeePartStatChanges:
             compliance += customxml.calendarserver_partstat_changes_compliance
+        if configDict.EnableTimezonesByReference:
+            compliance += caldavxml.caldav_timezones_by_reference_compliance
     else:
         compliance = ()
 

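Together with the caldavxml.py change earlier, the stdconfig.py hunk above adds
"calendar-no-timezone" to the advertised compliance classes whenever
EnableTimezonesByReference is switched on. A rough sketch of how such tuples
end up in a DAV header value (the config object below is a stand-in, not the
server's configuration machinery):

    base_compliance = ("calendar-access", "calendar-auto-schedule")
    timezones_by_reference_compliance = ("calendar-no-timezone",)

    class FakeConfig(object):
        EnableTimezonesByReference = True

    def dav_compliance(config):
        compliance = base_compliance
        if config.EnableTimezonesByReference:
            compliance += timezones_by_reference_compliance
        return ", ".join(compliance)

    # Renders as "calendar-access, calendar-auto-schedule, calendar-no-timezone".
    assert dav_compliance(FakeConfig()).endswith("calendar-no-timezone")
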
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/storebridge.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/storebridge.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/storebridge.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -77,7 +77,7 @@
 import hashlib
 import time
 import uuid
-from twext.web2 import responsecode
+from twext.web2 import responsecode, http_headers, http
 from twext.web2.iweb import IResponse
 from twistedcaldav.customxml import calendarserver_namespace
 from twistedcaldav.instance import InvalidOverriddenInstanceError, \
@@ -2222,6 +2222,41 @@
         response.headers.setHeader("content-type", self.contentType())
         returnValue(response)
 
+
+    @inlineCallbacks
+    def checkPreconditions(self, request):
+        """
+        We override the base class to trap the failure case and process any Prefer header.
+        """
+
+        try:
+            response = yield super(_CommonObjectResource, self).checkPreconditions(request)
+        except HTTPError as e:
+            if e.response.code == responsecode.PRECONDITION_FAILED:
+                response = yield self._processPrefer(request, e.response)
+                raise HTTPError(response)
+            else:
+                raise
+
+        returnValue(response)
+
+
+    @inlineCallbacks
+    def _processPrefer(self, request, response):
+        # Look for Prefer header
+        prefer = request.headers.getHeader("prefer", {})
+        returnRepresentation = any([key == "return" and value == "representation" for key, value, _ignore_args in prefer])
+
+        if returnRepresentation and (response.code / 100 == 2 or response.code == responsecode.PRECONDITION_FAILED):
+            oldcode = response.code
+            response = (yield self.http_GET(request))
+            if oldcode in (responsecode.CREATED, responsecode.PRECONDITION_FAILED):
+                response.code = oldcode
+            response.headers.removeHeader("content-location")
+            response.headers.setHeader("content-location", self.url())
+
+        returnValue(response)
+
     # The following are used to map store exceptions into HTTP error responses
     StoreExceptionsStatusErrors = set()
     StoreExceptionsErrors = {}
@@ -2601,7 +2636,76 @@
         AttachmentRemoveFailed: (caldav_namespace, "valid-attachment-remove",),
     }
 
+
     @inlineCallbacks
+    def _checkPreconditions(self, request):
+        """
+        We override the base class to handle the special implicit scheduling weak ETag behavior
+        for compatibility with old clients using If-Match.
+        """
+
+        if config.Scheduling.CalDAV.ScheduleTagCompatibility:
+
+            if self.exists():
+                etags = self.scheduleEtags
+                if len(etags) > 1:
+                    # This is almost verbatim from twext.web2.static.checkPreconditions
+                    if request.method not in ("GET", "HEAD"):
+
+                        # Always test against the current etag first just in case schedule-etags is out of sync
+                        etag = (yield self.etag())
+                        etags = (etag,) + tuple([http_headers.ETag(schedule_etag) for schedule_etag in etags])
+
+                        # Loop over each tag and succeed if any one matches, else re-raise last exception
+                        exists = self.exists()
+                        last_modified = self.lastModified()
+                        last_exception = None
+                        for etag in etags:
+                            try:
+                                http.checkPreconditions(
+                                    request,
+                                    entityExists=exists,
+                                    etag=etag,
+                                    lastModified=last_modified,
+                                )
+                            except HTTPError, e:
+                                last_exception = e
+                            else:
+                                break
+                        else:
+                            if last_exception:
+                                raise last_exception
+
+                    # Check per-method preconditions
+                    method = getattr(self, "preconditions_" + request.method, None)
+                    if method:
+                        returnValue((yield method(request)))
+                    else:
+                        returnValue(None)
+
+        result = (yield super(CalendarObjectResource, self).checkPreconditions(request))
+        returnValue(result)
+
+
+    @inlineCallbacks
+    def checkPreconditions(self, request):
+        """
+        We override the base class to do special schedule tag processing.
+        """
+
+        try:
+            response = yield self._checkPreconditions(request)
+        except HTTPError as e:
+            if e.response.code == responsecode.PRECONDITION_FAILED:
+                response = yield self._processPrefer(request, e.response)
+                raise HTTPError(response)
+            else:
+                raise
+
+        returnValue(response)
+
+
+    @inlineCallbacks
     def http_PUT(self, request):
 
         # Content-type check
@@ -2615,7 +2719,14 @@
             ))
 
         # Do schedule tag check
-        schedule_tag_match = self.validIfScheduleMatch(request)
+        try:
+            schedule_tag_match = self.validIfScheduleMatch(request)
+        except HTTPError as e:
+            if e.response.code == responsecode.PRECONDITION_FAILED:
+                response = yield self._processPrefer(request, e.response)
+                raise HTTPError(response)
+            else:
+                raise
 
         # Read the calendar component from the stream
         try:
@@ -2681,18 +2792,9 @@
 
                 request.addResponseFilter(_removeEtag, atEnd=True)
 
-            # Look for Prefer header
-            prefer = request.headers.getHeader("prefer", {})
-            returnRepresentation = any([key == "return" and value == "representation" for key, value, _ignore_args in prefer])
+            # Handle Prefer header
+            response = yield self._processPrefer(request, response)
 
-            if returnRepresentation and response.code / 100 == 2:
-                oldcode = response.code
-                response = (yield self.http_GET(request))
-                if oldcode == responsecode.CREATED:
-                    response.code = responsecode.CREATED
-                response.headers.removeHeader("content-location")
-                response.headers.setHeader("content-location", self.url())
-
             returnValue(response)
 
         # Handle the various store errors
@@ -2871,18 +2973,12 @@
                 raise
 
         # Look for Prefer header
-        prefer = request.headers.getHeader("prefer", {})
-        returnRepresentation = any([key == "return" and value == "representation" for key, value, _ignore_args in prefer])
-        if returnRepresentation:
-            result = (yield self.render(request))
-            result.code = OK
-            result.headers.removeHeader("content-location")
-            result.headers.setHeader("content-location", request.path)
-        else:
-            result = post_result
+        result = yield self._processPrefer(request, post_result)
+
         if action in ("attachment-add", "attachment-update",):
             result.headers.setHeader("location", location)
             result.headers.addRawHeader("Cal-Managed-ID", attachment.managedID())
+
         returnValue(result)
 
 
@@ -3313,17 +3409,8 @@
                 request.addResponseFilter(_removeEtag, atEnd=True)
 
             # Look for Prefer header
-            prefer = request.headers.getHeader("prefer", {})
-            returnRepresentation = any([key == "return" and value == "representation" for key, value, _ignore_args in prefer])
+            response = yield self._processPrefer(request, response)
 
-            if returnRepresentation and response.code / 100 == 2:
-                oldcode = response.code
-                response = (yield self.http_GET(request))
-                if oldcode == responsecode.CREATED:
-                    response.code = responsecode.CREATED
-                response.headers.removeHeader("content-location")
-                response.headers.setHeader("content-location", self.url())
-
             returnValue(response)
 
         # Handle the various store errors

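The storebridge.py changes above fold the repeated Prefer handling into a
single _processPrefer() helper: when the client sent
"Prefer: return=representation", the response body is replaced by a GET of the
resource while a 201 or 412 status code is preserved. A standalone sketch of
the header test it performs, using plain string parsing instead of the
server's header objects:

    def wants_representation(prefer_header):
        # True for a header such as "return=representation", False otherwise.
        for pref in (p.strip() for p in prefer_header.split(",")):
            token = pref.split(";", 1)[0].strip()
            if "=" in token:
                key, value = (part.strip() for part in token.split("=", 1))
                if key.lower() == "return" and value.lower() == "representation":
                    return True
        return False

    assert wants_representation("return=representation")
    assert not wants_representation("return=minimal, wait=10")
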
Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Africa/Juba.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Africa/Juba.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Africa/Juba.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -9,7 +9,7 @@
 DTSTART:19310101T000000
 RDATE:19310101T000000
 TZNAME:CAST
-TZOFFSETFROM:+020624
+TZOFFSETFROM:+021008
 TZOFFSETTO:+0200
 END:STANDARD
 BEGIN:DAYLIGHT

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Anguilla.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Anguilla.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Anguilla.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -9,7 +9,7 @@
 DTSTART:19120302T000000
 RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041216
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Araguaina.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Araguaina.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Araguaina.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -30,6 +30,7 @@
 RDATE:19981011T000000
 RDATE:19991003T000000
 RDATE:20021103T000000
+RDATE:20121021T000000
 TZNAME:BRST
 TZOFFSETFROM:-0300
 TZOFFSETTO:-0200
@@ -64,7 +65,7 @@
 RDATE:19980301T000000
 RDATE:19990221T000000
 RDATE:20000227T000000
-RDATE:20150222T000000
+RDATE:20130217T000000
 TZNAME:BRT
 TZOFFSETFROM:-0200
 TZOFFSETTO:-0300
@@ -94,6 +95,7 @@
 DTSTART:19900917T000000
 RDATE:19900917T000000
 RDATE:20030924T000000
+RDATE:20130901T000000
 TZNAME:BRT
 TZOFFSETFROM:-0300
 TZOFFSETTO:-0300
@@ -119,26 +121,5 @@
 TZOFFSETFROM:-0200
 TZOFFSETTO:-0300
 END:STANDARD
-BEGIN:DAYLIGHT
-DTSTART:20121021T000000
-RRULE:FREQ=YEARLY;BYDAY=3SU;BYMONTH=10
-TZNAME:BRST
-TZOFFSETFROM:-0300
-TZOFFSETTO:-0200
-END:DAYLIGHT
-BEGIN:STANDARD
-DTSTART:20130217T000000
-RRULE:FREQ=YEARLY;UNTIL=20140216T020000Z;BYDAY=3SU;BYMONTH=2
-TZNAME:BRT
-TZOFFSETFROM:-0200
-TZOFFSETTO:-0300
-END:STANDARD
-BEGIN:STANDARD
-DTSTART:20160221T000000
-RRULE:FREQ=YEARLY;UNTIL=20220220T020000Z;BYDAY=3SU;BYMONTH=2
-TZNAME:BRT
-TZOFFSETFROM:-0200
-TZOFFSETTO:-0300
-END:STANDARD
 END:VTIMEZONE
 END:VCALENDAR

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Argentina/San_Luis.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Argentina/San_Luis.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Argentina/San_Luis.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -144,6 +144,7 @@
 BEGIN:STANDARD
 DTSTART:19910601T000000
 RDATE:19910601T000000
+RDATE:20091011T000000
 TZNAME:ART
 TZOFFSETFROM:-0400
 TZOFFSETTO:-0300
@@ -178,7 +179,7 @@
 END:STANDARD
 BEGIN:DAYLIGHT
 DTSTART:20081012T000000
-RRULE:FREQ=YEARLY;UNTIL=20091011T040000Z;BYDAY=2SU;BYMONTH=10
+RDATE:20081012T000000
 TZNAME:WARST
 TZOFFSETFROM:-0400
 TZOFFSETTO:-0300

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Aruba.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Aruba.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Aruba.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -9,7 +9,7 @@
 DTSTART:19120212T000000
 RDATE:19120212T000000
 TZNAME:ANT
-TZOFFSETFROM:-044024
+TZOFFSETFROM:-043547
 TZOFFSETTO:-0430
 END:STANDARD
 BEGIN:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Cayman.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Cayman.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Cayman.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -10,13 +10,13 @@
 RDATE:18900101T000000
 TZNAME:KMT
 TZOFFSETFROM:-052532
-TZOFFSETTO:-050712
+TZOFFSETTO:-050711
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19120201T000000
 RDATE:19120201T000000
 TZNAME:EST
-TZOFFSETFROM:-050712
+TZOFFSETFROM:-050711
 TZOFFSETTO:-0500
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Dominica.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Dominica.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Dominica.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,10 @@
 TZID:America/Dominica
 X-LIC-LOCATION:America/Dominica
 BEGIN:STANDARD
-DTSTART:19110701T000100
-RDATE:19110701T000100
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040536
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Grand_Turk.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Grand_Turk.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Grand_Turk.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -10,13 +10,13 @@
 RDATE:18900101T000000
 TZNAME:KMT
 TZOFFSETFROM:-044432
-TZOFFSETTO:-050712
+TZOFFSETTO:-050711
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19120201T000000
 RDATE:19120201T000000
 TZNAME:EST
-TZOFFSETFROM:-050712
+TZOFFSETFROM:-050711
 TZOFFSETTO:-0500
 END:STANDARD
 BEGIN:DAYLIGHT

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Grenada.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Grenada.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Grenada.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,10 @@
 TZID:America/Grenada
 X-LIC-LOCATION:America/Grenada
 BEGIN:STANDARD
-DTSTART:19110701T000000
-RDATE:19110701T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-0407
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Guadeloupe.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Guadeloupe.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Guadeloupe.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,10 @@
 TZID:America/Guadeloupe
 X-LIC-LOCATION:America/Guadeloupe
 BEGIN:STANDARD
-DTSTART:19110608T000000
-RDATE:19110608T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040608
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Jamaica.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Jamaica.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Jamaica.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -9,14 +9,14 @@
 DTSTART:18900101T000000
 RDATE:18900101T000000
 TZNAME:KMT
-TZOFFSETFROM:-050712
-TZOFFSETTO:-050712
+TZOFFSETFROM:-050711
+TZOFFSETTO:-050711
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19120201T000000
 RDATE:19120201T000000
 TZNAME:EST
-TZOFFSETFROM:-050712
+TZOFFSETFROM:-050711
 TZOFFSETTO:-0500
 END:STANDARD
 BEGIN:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Marigot.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Marigot.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Marigot.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,10 @@
 TZID:America/Marigot
 X-LIC-LOCATION:America/Marigot
 BEGIN:STANDARD
-DTSTART:19110608T000000
-RDATE:19110608T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040608
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Montserrat.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Montserrat.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Montserrat.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,10 @@
 TZID:America/Montserrat
 X-LIC-LOCATION:America/Montserrat
 BEGIN:STANDARD
-DTSTART:19110701T000100
-RDATE:19110701T000100
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040852
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Barthelemy.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Barthelemy.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Barthelemy.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,10 @@
 TZID:America/St_Barthelemy
 X-LIC-LOCATION:America/St_Barthelemy
 BEGIN:STANDARD
-DTSTART:19110608T000000
-RDATE:19110608T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040608
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Kitts.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Kitts.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Kitts.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -9,7 +9,7 @@
 DTSTART:19120302T000000
 RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041052
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Lucia.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Lucia.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Lucia.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,17 +6,10 @@
 TZID:America/St_Lucia
 X-LIC-LOCATION:America/St_Lucia
 BEGIN:STANDARD
-DTSTART:18900101T000000
-RDATE:18900101T000000
-TZNAME:CMT
-TZOFFSETFROM:-0404
-TZOFFSETTO:-0404
-END:STANDARD
-BEGIN:STANDARD
-DTSTART:19120101T000000
-RDATE:19120101T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-0404
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Thomas.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Thomas.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Thomas.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,10 @@
 TZID:America/St_Thomas
 X-LIC-LOCATION:America/St_Thomas
 BEGIN:STANDARD
-DTSTART:19110701T000000
-RDATE:19110701T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041944
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Vincent.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Vincent.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/St_Vincent.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,17 +6,10 @@
 TZID:America/St_Vincent
 X-LIC-LOCATION:America/St_Vincent
 BEGIN:STANDARD
-DTSTART:18900101T000000
-RDATE:18900101T000000
-TZNAME:KMT
-TZOFFSETFROM:-040456
-TZOFFSETTO:-040456
-END:STANDARD
-BEGIN:STANDARD
-DTSTART:19120101T000000
-RDATE:19120101T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040456
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Tortola.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Tortola.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Tortola.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,10 @@
 TZID:America/Tortola
 X-LIC-LOCATION:America/Tortola
 BEGIN:STANDARD
-DTSTART:19110701T000000
-RDATE:19110701T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041828
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Virgin.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Virgin.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/America/Virgin.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,10 @@
 TZID:America/Virgin
 X-LIC-LOCATION:America/Virgin
 BEGIN:STANDARD
-DTSTART:19110701T000000
-RDATE:19110701T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041944
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Antarctica/McMurdo.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Antarctica/McMurdo.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Antarctica/McMurdo.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,13 +6,62 @@
 TZID:Antarctica/McMurdo
 X-LIC-LOCATION:Antarctica/McMurdo
 BEGIN:STANDARD
-DTSTART:19560101T000000
-RDATE:19560101T000000
+DTSTART:18681102T000000
+RDATE:18681102T000000
 TZNAME:NZST
-TZOFFSETFROM:+0000
+TZOFFSETFROM:+113904
+TZOFFSETTO:+1130
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19271106T020000
+RDATE:19271106T020000
+TZNAME:NZST
+TZOFFSETFROM:+1130
+TZOFFSETTO:+1230
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19280304T020000
+RDATE:19280304T020000
+TZNAME:NZMT
+TZOFFSETFROM:+1230
+TZOFFSETTO:+1130
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19281014T020000
+RRULE:FREQ=YEARLY;UNTIL=19331007T143000Z;BYDAY=2SU;BYMONTH=10
+TZNAME:NZST
+TZOFFSETFROM:+1130
 TZOFFSETTO:+1200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19290317T020000
+RRULE:FREQ=YEARLY;UNTIL=19330318T140000Z;BYDAY=3SU;BYMONTH=3
+TZNAME:NZMT
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1130
 END:STANDARD
+BEGIN:STANDARD
+DTSTART:19340429T020000
+RRULE:FREQ=YEARLY;UNTIL=19400427T140000Z;BYDAY=-1SU;BYMONTH=4
+TZNAME:NZMT
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1130
+END:STANDARD
 BEGIN:DAYLIGHT
+DTSTART:19340930T020000
+RRULE:FREQ=YEARLY;UNTIL=19400928T143000Z;BYDAY=-1SU;BYMONTH=9
+TZNAME:NZST
+TZOFFSETFROM:+1130
+TZOFFSETTO:+1200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19460101T000000
+RDATE:19460101T000000
+TZNAME:NZST
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1200
+END:STANDARD
+BEGIN:DAYLIGHT
 DTSTART:19741103T020000
 RDATE:19741103T020000
 RDATE:19891008T020000

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Antarctica/South_Pole.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Antarctica/South_Pole.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Antarctica/South_Pole.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,13 +6,62 @@
 TZID:Antarctica/South_Pole
 X-LIC-LOCATION:Antarctica/South_Pole
 BEGIN:STANDARD
-DTSTART:19560101T000000
-RDATE:19560101T000000
+DTSTART:18681102T000000
+RDATE:18681102T000000
 TZNAME:NZST
-TZOFFSETFROM:+0000
+TZOFFSETFROM:+113904
+TZOFFSETTO:+1130
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19271106T020000
+RDATE:19271106T020000
+TZNAME:NZST
+TZOFFSETFROM:+1130
+TZOFFSETTO:+1230
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19280304T020000
+RDATE:19280304T020000
+TZNAME:NZMT
+TZOFFSETFROM:+1230
+TZOFFSETTO:+1130
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19281014T020000
+RRULE:FREQ=YEARLY;UNTIL=19331007T143000Z;BYDAY=2SU;BYMONTH=10
+TZNAME:NZST
+TZOFFSETFROM:+1130
 TZOFFSETTO:+1200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19290317T020000
+RRULE:FREQ=YEARLY;UNTIL=19330318T140000Z;BYDAY=3SU;BYMONTH=3
+TZNAME:NZMT
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1130
 END:STANDARD
+BEGIN:STANDARD
+DTSTART:19340429T020000
+RRULE:FREQ=YEARLY;UNTIL=19400427T140000Z;BYDAY=-1SU;BYMONTH=4
+TZNAME:NZMT
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1130
+END:STANDARD
 BEGIN:DAYLIGHT
+DTSTART:19340930T020000
+RRULE:FREQ=YEARLY;UNTIL=19400928T143000Z;BYDAY=-1SU;BYMONTH=9
+TZNAME:NZST
+TZOFFSETFROM:+1130
+TZOFFSETTO:+1200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19460101T000000
+RDATE:19460101T000000
+TZNAME:NZST
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1200
+END:STANDARD
+BEGIN:DAYLIGHT
 DTSTART:19741103T020000
 RDATE:19741103T020000
 RDATE:19891008T020000

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Amman.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Amman.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Amman.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -106,7 +106,7 @@
 END:DAYLIGHT
 BEGIN:DAYLIGHT
 DTSTART:20020328T235959
-RRULE:FREQ=YEARLY;BYDAY=-1TH;BYMONTH=3
+RRULE:FREQ=YEARLY;UNTIL=20120329T215959Z;BYDAY=-1TH;BYMONTH=3
 TZNAME:EEST
 TZOFFSETFROM:+0200
 TZOFFSETTO:+0300
@@ -118,26 +118,12 @@
 TZOFFSETFROM:+0300
 TZOFFSETTO:+0200
 END:STANDARD
-BEGIN:DAYLIGHT
-DTSTART:20130328T235959
-RDATE:20130328T235959
-TZNAME:EEST
-TZOFFSETFROM:+0300
-TZOFFSETTO:+0300
-END:DAYLIGHT
 BEGIN:STANDARD
-DTSTART:20131025T010000
-RRULE:FREQ=YEARLY;BYDAY=-1FR;BYMONTH=10
-TZNAME:EET
+DTSTART:20121026T010000
+RDATE:20121026T010000
+TZNAME:AST
 TZOFFSETFROM:+0300
-TZOFFSETTO:+0200
-END:STANDARD
-BEGIN:DAYLIGHT
-DTSTART:20140327T235959
-RRULE:FREQ=YEARLY;BYDAY=-1TH;BYMONTH=3
-TZNAME:EEST
-TZOFFSETFROM:+0200
 TZOFFSETTO:+0300
-END:DAYLIGHT
+END:STANDARD
 END:VTIMEZONE
 END:VCALENDAR

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Dili.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Dili.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Dili.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -29,7 +29,7 @@
 BEGIN:STANDARD
 DTSTART:19760503T000000
 RDATE:19760503T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0800
 END:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Gaza.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Gaza.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Gaza.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -43,6 +43,7 @@
 RDATE:20090904T010000
 RDATE:20100811T000000
 RDATE:20110801T000000
+RDATE:20120921T010000
 TZNAME:EET
 TZOFFSETFROM:+0300
 TZOFFSETTO:+0200
@@ -186,7 +187,7 @@
 TZOFFSETTO:+0300
 END:DAYLIGHT
 BEGIN:STANDARD
-DTSTART:20120921T010000
+DTSTART:20130927T000000
 RRULE:FREQ=YEARLY;BYDAY=FR;BYMONTHDAY=21,22,23,24,25,26,27;BYMONTH=9
 TZNAME:EET
 TZOFFSETFROM:+0300

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Hebron.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Hebron.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Hebron.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -44,6 +44,7 @@
 RDATE:20100811T000000
 RDATE:20110801T000000
 RDATE:20110930T000000
+RDATE:20120921T010000
 TZNAME:EET
 TZOFFSETFROM:+0300
 TZOFFSETTO:+0200
@@ -178,7 +179,7 @@
 TZOFFSETTO:+0300
 END:DAYLIGHT
 BEGIN:STANDARD
-DTSTART:20120921T010000
+DTSTART:20130927T000000
 RRULE:FREQ=YEARLY;BYDAY=FR;BYMONTHDAY=21,22,23,24,25,26,27;BYMONTH=9
 TZNAME:EET
 TZOFFSETFROM:+0300

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Jakarta.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Jakarta.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Jakarta.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -8,7 +8,7 @@
 BEGIN:STANDARD
 DTSTART:18670810T000000
 RDATE:18670810T000000
-TZNAME:JMT
+TZNAME:BMT
 TZOFFSETFROM:+070712
 TZOFFSETTO:+070712
 END:STANDARD
@@ -22,7 +22,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0720
 TZOFFSETTO:+0730
 END:STANDARD
@@ -36,28 +36,28 @@
 BEGIN:STANDARD
 DTSTART:19450923T000000
 RDATE:19450923T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0730
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19480501T000000
 RDATE:19480501T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0730
 TZOFFSETTO:+0800
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19500501T000000
 RDATE:19500501T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0800
 TZOFFSETTO:+0730
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19640101T000000
 RDATE:19640101T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0730
 TZOFFSETTO:+0700
 END:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Jayapura.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Jayapura.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Jayapura.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -8,7 +8,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:EIT
+TZNAME:WIT
 TZOFFSETFROM:+092248
 TZOFFSETTO:+0900
 END:STANDARD
@@ -22,7 +22,7 @@
 BEGIN:STANDARD
 DTSTART:19640101T000000
 RDATE:19640101T000000
-TZNAME:EIT
+TZNAME:WIT
 TZOFFSETFROM:+0930
 TZOFFSETTO:+0900
 END:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Makassar.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Makassar.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Makassar.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -15,7 +15,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+075736
 TZOFFSETTO:+0800
 END:STANDARD
@@ -29,7 +29,7 @@
 BEGIN:STANDARD
 DTSTART:19450923T000000
 RDATE:19450923T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0800
 END:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Pontianak.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Pontianak.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Pontianak.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -15,7 +15,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+071720
 TZOFFSETTO:+0730
 END:STANDARD
@@ -29,35 +29,35 @@
 BEGIN:STANDARD
 DTSTART:19450923T000000
 RDATE:19450923T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0730
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19480501T000000
 RDATE:19480501T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0730
 TZOFFSETTO:+0800
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19500501T000000
 RDATE:19500501T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0800
 TZOFFSETTO:+0730
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19640101T000000
 RDATE:19640101T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+0730
 TZOFFSETTO:+0800
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19880101T000000
 RDATE:19880101T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0800
 TZOFFSETTO:+0700
 END:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Ujung_Pandang.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Ujung_Pandang.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Asia/Ujung_Pandang.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -15,7 +15,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+075736
 TZOFFSETTO:+0800
 END:STANDARD
@@ -29,7 +29,7 @@
 BEGIN:STANDARD
 DTSTART:19450923T000000
 RDATE:19450923T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0800
 END:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Busingen.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Busingen.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Busingen.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,17 +6,17 @@
 TZID:Europe/Busingen
 X-LIC-LOCATION:Europe/Busingen
 BEGIN:STANDARD
-DTSTART:18480912T000000
-RDATE:18480912T000000
+DTSTART:18530716T000000
+RDATE:18530716T000000
 TZNAME:BMT
 TZOFFSETFROM:+003408
-TZOFFSETTO:+002944
+TZOFFSETTO:+002946
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:18940601T000000
 RDATE:18940601T000000
 TZNAME:CEST
-TZOFFSETFROM:+002944
+TZOFFSETFROM:+002946
 TZOFFSETTO:+0100
 END:STANDARD
 BEGIN:DAYLIGHT

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Vaduz.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Vaduz.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Vaduz.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,31 @@
 TZID:Europe/Vaduz
 X-LIC-LOCATION:Europe/Vaduz
 BEGIN:STANDARD
+DTSTART:18530716T000000
+RDATE:18530716T000000
+TZNAME:BMT
+TZOFFSETFROM:+003408
+TZOFFSETTO:+002946
+END:STANDARD
+BEGIN:STANDARD
 DTSTART:18940601T000000
 RDATE:18940601T000000
+TZNAME:CEST
+TZOFFSETFROM:+002946
+TZOFFSETTO:+0100
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19410505T010000
+RRULE:FREQ=YEARLY;UNTIL=19420504T000000Z;BYDAY=1MO;BYMONTH=5
+TZNAME:CEST
+TZOFFSETFROM:+0100
+TZOFFSETTO:+0200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19411006T020000
+RRULE:FREQ=YEARLY;UNTIL=19421005T000000Z;BYDAY=1MO;BYMONTH=10
 TZNAME:CET
-TZOFFSETFROM:+003804
+TZOFFSETFROM:+0200
 TZOFFSETTO:+0100
 END:STANDARD
 BEGIN:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Zurich.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Zurich.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Europe/Zurich.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,17 +6,17 @@
 TZID:Europe/Zurich
 X-LIC-LOCATION:Europe/Zurich
 BEGIN:STANDARD
-DTSTART:18480912T000000
-RDATE:18480912T000000
+DTSTART:18530716T000000
+RDATE:18530716T000000
 TZNAME:BMT
 TZOFFSETFROM:+003408
-TZOFFSETTO:+002944
+TZOFFSETTO:+002946
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:18940601T000000
 RDATE:18940601T000000
 TZNAME:CEST
-TZOFFSETFROM:+002944
+TZOFFSETFROM:+002946
 TZOFFSETTO:+0100
 END:STANDARD
 BEGIN:DAYLIGHT

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Jamaica.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Jamaica.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Jamaica.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -9,14 +9,14 @@
 DTSTART:18900101T000000
 RDATE:18900101T000000
 TZNAME:KMT
-TZOFFSETFROM:-050712
-TZOFFSETTO:-050712
+TZOFFSETFROM:-050711
+TZOFFSETTO:-050711
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19120201T000000
 RDATE:19120201T000000
 TZNAME:EST
-TZOFFSETFROM:-050712
+TZOFFSETFROM:-050711
 TZOFFSETTO:-0500
 END:STANDARD
 BEGIN:STANDARD

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Pacific/Fiji.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Pacific/Fiji.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Pacific/Fiji.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -43,7 +43,7 @@
 END:STANDARD
 BEGIN:DAYLIGHT
 DTSTART:20101024T020000
-RRULE:FREQ=YEARLY;BYDAY=-2SU;BYMONTH=10
+RRULE:FREQ=YEARLY;BYDAY=SU;BYMONTHDAY=21,22,23,24,25,26,27;BYMONTH=10
 TZNAME:FJST
 TZOFFSETFROM:+1200
 TZOFFSETTO:+1300

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Pacific/Johnston.ics
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Pacific/Johnston.ics	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/Pacific/Johnston.ics	2013-10-02 23:27:44 UTC (rev 11779)
@@ -6,10 +6,33 @@
 TZID:Pacific/Johnston
 X-LIC-LOCATION:Pacific/Johnston
 BEGIN:STANDARD
-DTSTART:18000101T000000
-RDATE:18000101T000000
+DTSTART:18960113T120000
+RDATE:18960113T120000
 TZNAME:HST
-TZOFFSETFROM:-1000
+TZOFFSETFROM:-103126
+TZOFFSETTO:-1030
+END:STANDARD
+BEGIN:STANDARD
+DTSTART:19330430T020000
+RDATE:19330430T020000
+RDATE:19420209T020000
+TZNAME:HDT
+TZOFFSETFROM:-1030
+TZOFFSETTO:-0930
+END:STANDARD
+BEGIN:STANDARD
+DTSTART:19330521T120000
+RDATE:19330521T120000
+RDATE:19450930T020000
+TZNAME:HST
+TZOFFSETFROM:-0930
+TZOFFSETTO:-1030
+END:STANDARD
+BEGIN:STANDARD
+DTSTART:19470608T020000
+RDATE:19470608T020000
+TZNAME:HST
+TZOFFSETFROM:-1030
 TZOFFSETTO:-1000
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/links.txt
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/links.txt	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/links.txt	2013-10-02 23:27:44 UTC (rev 11779)
@@ -1,4 +1,4 @@
-America/Virgin	America/St_Thomas
+America/Virgin	America/Port_of_Spain
 America/Buenos_Aires	America/Argentina/Buenos_Aires
 Hongkong	Asia/Hong_Kong
 Etc/GMT+0	Etc/GMT
@@ -6,25 +6,28 @@
 Australia/South	Australia/Adelaide
 America/Atka	America/Adak
 America/Coral_Harbour	America/Atikokan
-Africa/Asmera	Africa/Asmara
-America/Fort_Wayne	America/Indiana/Indianapolis
-Australia/LHI	Australia/Lord_Howe
+America/St_Lucia	America/Port_of_Spain
+Canada/Newfoundland	America/St_Johns
+America/Montserrat	America/Port_of_Spain
 PRC	Asia/Shanghai
 US/Mountain	America/Denver
 Asia/Thimbu	Asia/Thimphu
 America/Shiprock	America/Denver
+America/Grenada	America/Port_of_Spain
 Europe/Podgorica	Europe/Belgrade
+Africa/Juba	Africa/Khartoum
 Brazil/DeNoronha	America/Noronha
 Jamaica	America/Jamaica
 Arctic/Longyearbyen	Europe/Oslo
 Europe/Guernsey	Europe/London
 GB	Europe/London
-Canada/Mountain	America/Edmonton
+America/Aruba	America/Curacao
 Chile/EasterIsland	Pacific/Easter
 Etc/Universal	Etc/UTC
 Navajo	America/Denver
 America/Indianapolis	America/Indiana/Indianapolis
 Pacific/Truk	Pacific/Chuuk
+Canada/Mountain	America/Edmonton
 Pacific/Yap	Pacific/Chuuk
 America/Ensenada	America/Tijuana
 Europe/Sarajevo	Europe/Belgrade
@@ -46,19 +49,25 @@
 Asia/Saigon	Asia/Ho_Chi_Minh
 ROC	Asia/Taipei
 America/Louisville	America/Kentucky/Louisville
-America/St_Barthelemy	America/Guadeloupe
+America/St_Barthelemy	America/Port_of_Spain
+America/St_Thomas	America/Port_of_Spain
 America/Porto_Acre	America/Rio_Branco
-Europe/Isle_of_Man	Europe/London
+America/Rosario	America/Argentina/Cordoba
+America/Guadeloupe	America/Port_of_Spain
 Australia/West	Australia/Perth
 US/Eastern	America/New_York
 Libya	Africa/Tripoli
+America/Fort_Wayne	America/Indiana/Indianapolis
+Antarctica/McMurdo	Pacific/Auckland
 Canada/Saskatchewan	America/Regina
+Canada/Pacific	America/Vancouver
 Canada/Eastern	America/Toronto
 Iran	Asia/Tehran
 GB-Eire	Europe/London
 Etc/Greenwich	Etc/GMT
 Atlantic/Jan_Mayen	Europe/Oslo
 US/Central	America/Chicago
+America/St_Vincent	America/Port_of_Spain
 US/Pacific	America/Los_Angeles
 Portugal	Europe/Lisbon
 Europe/Tiraspol	Europe/Chisinau
@@ -70,7 +79,7 @@
 Asia/Ulan_Bator	Asia/Ulaanbaatar
 Kwajalein	Pacific/Kwajalein
 Australia/Yancowinna	Australia/Broken_Hill
-America/Marigot	America/Guadeloupe
+America/Marigot	America/Port_of_Spain
 America/Lower_Princes	America/Curacao
 Greenwich	Etc/GMT
 America/Mendoza	America/Argentina/Mendoza
@@ -82,7 +91,7 @@
 Asia/Tel_Aviv	Asia/Jerusalem
 Mexico/General	America/Mexico_City
 Asia/Istanbul	Europe/Istanbul
-America/Rosario	America/Argentina/Cordoba
+Europe/Isle_of_Man	Europe/London
 GMT0	Etc/GMT
 Europe/Mariehamn	Europe/Helsinki
 Australia/Victoria	Australia/Melbourne
@@ -96,27 +105,33 @@
 Asia/Ashkhabad	Asia/Ashgabat
 America/Knox_IN	America/Indiana/Knox
 America/Catamarca	America/Argentina/Catamarca
+Zulu	Etc/UTC
 GMT+0	Etc/GMT
 Poland	Europe/Warsaw
 Pacific/Samoa	Pacific/Pago_Pago
 US/Indiana-Starke	America/Indiana/Knox
-Canada/Newfoundland	America/St_Johns
+Australia/LHI	Australia/Lord_Howe
+Pacific/Johnston	Pacific/Honolulu
 GMT	Etc/GMT
 Canada/Yukon	America/Whitehorse
 Canada/Atlantic	America/Halifax
 US/Arizona	America/Phoenix
 Europe/San_Marino	Europe/Rome
 Australia/NSW	Australia/Sydney
-Canada/Pacific	America/Vancouver
+America/St_Kitts	America/Port_of_Spain
+Brazil/East	America/Sao_Paulo
 Etc/Zulu	Etc/UTC
+Singapore	Asia/Singapore
 Europe/Ljubljana	Europe/Belgrade
 US/Alaska	America/Anchorage
 Atlantic/Faeroe	Atlantic/Faroe
 Etc/GMT-0	Etc/GMT
+America/Anguilla	America/Port_of_Spain
 Israel	Asia/Jerusalem
 UCT	Etc/UCT
 NZ-CHAT	Pacific/Chatham
 Iceland	Atlantic/Reykjavik
+Brazil/Acre	America/Rio_Branco
 Europe/Vatican	Europe/Rome
 Australia/Queensland	Australia/Brisbane
 Africa/Timbuktu	Africa/Bamako
@@ -131,9 +146,9 @@
 Canada/Central	America/Winnipeg
 GMT-0	Etc/GMT
 W-SU	Europe/Moscow
-Zulu	Etc/UTC
+America/Dominica	America/Port_of_Spain
 Egypt	Africa/Cairo
-Singapore	Asia/Singapore
-Brazil/Acre	America/Rio_Branco
-Brazil/East	America/Sao_Paulo
-Antarctica/South_Pole	Antarctica/McMurdo
\ No newline at end of file
+America/Tortola	America/Port_of_Spain
+Europe/Vaduz	Europe/Zurich
+Africa/Asmera	Africa/Asmara
+Antarctica/South_Pole	Pacific/Auckland
\ No newline at end of file
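
links.txt maps alias time zone IDs to their canonical equivalents, one pair per line; this update repoints most Caribbean aliases at America/Port_of_Spain and retargets Antarctica/South_Pole at Pacific/Auckland. A minimal loader, with a hypothetical function name and assuming only the two-column, tab-separated format shown above:

    def load_links(path="twistedcaldav/zoneinfo/links.txt"):
        links = {}
        with open(path) as f:
            for line in f:
                line = line.rstrip("\n")
                if not line:
                    continue
                alias, canonical = line.split("\t", 1)
                links[alias] = canonical
        return links

    # After this change: load_links()["America/Virgin"] == "America/Port_of_Spain"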

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/timezones.xml
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/timezones.xml	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/timezones.xml	2013-10-02 23:27:44 UTC (rev 11779)
@@ -2,7 +2,7 @@
 <!DOCTYPE timezones SYSTEM "timezones.dtd">
 
 <timezones>
-  <dtstamp>2013-07-11T02:11:45Z</dtstamp>
+  <dtstamp>2013-10-01T01:19:11Z</dtstamp>
   <timezone>
     <tzid>Africa/Abidjan</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
@@ -138,8 +138,8 @@
   </timezone>
   <timezone>
     <tzid>Africa/Juba</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>2cecec633d0950df56d2022393afdfdb</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>3f633cfde1a12e6f297ba54460659a71</md5>
   </timezone>
   <timezone>
     <tzid>Africa/Kampala</tzid>
@@ -149,6 +149,7 @@
   <timezone>
     <tzid>Africa/Khartoum</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <alias>Africa/Juba</alias>
     <md5>e4a944da17c50b3e031e19dee17bec58</md5>
   </timezone>
   <timezone>
@@ -292,8 +293,8 @@
   </timezone>
   <timezone>
     <tzid>America/Anguilla</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>3a0d92a114885c5ee40e6b4115e7d144</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>dbe16a1225d25666094e89067392e9c8</md5>
   </timezone>
   <timezone>
     <tzid>America/Antigua</tzid>
@@ -302,8 +303,8 @@
   </timezone>
   <timezone>
     <tzid>America/Araguaina</tzid>
-    <dtstamp>2013-01-14T15:32:16Z</dtstamp>
-    <md5>2cac2a50050e86a3dcf0ce0c3aadcafd</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>4d0786c2a5a830c11420baa3adb032df</md5>
   </timezone>
   <timezone>
     <tzid>America/Argentina/Buenos_Aires</tzid>
@@ -364,8 +365,8 @@
   </timezone>
   <timezone>
     <tzid>America/Argentina/San_Luis</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>783baf3a55ec90ab162cb47c3fd07121</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>31db41adcfc7e217968729395ff3e670</md5>
   </timezone>
   <timezone>
     <tzid>America/Argentina/Tucuman</tzid>
@@ -379,8 +380,8 @@
   </timezone>
   <timezone>
     <tzid>America/Aruba</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>473119154a575c5de70495c9082565f2</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>877fdd70d2d3bfc3043c0a12ff8030af</md5>
   </timezone>
   <timezone>
     <tzid>America/Asuncion</tzid>
@@ -480,8 +481,8 @@
   </timezone>
   <timezone>
     <tzid>America/Cayman</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>07ca09e17378e117aac517b98ef07824</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>aed22af0be0d3c839b3ac941a21711de</md5>
   </timezone>
   <timezone>
     <tzid>America/Chicago</tzid>
@@ -524,6 +525,7 @@
     <dtstamp>2013-05-08T18:04:04Z</dtstamp>
     <alias>America/Kralendijk</alias>
     <alias>America/Lower_Princes</alias>
+    <alias>America/Aruba</alias>
     <md5>0b270fa38a9e55a4c48facbf5be02f99</md5>
   </timezone>
   <timezone>
@@ -557,8 +559,8 @@
   </timezone>
   <timezone>
     <tzid>America/Dominica</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>86c1ba04b479911b0cf0aa917a76e3fd</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>7a07f99ab572aeac2baa3466c4ac60c5</md5>
   </timezone>
   <timezone>
     <tzid>America/Edmonton</tzid>
@@ -608,20 +610,18 @@
   </timezone>
   <timezone>
     <tzid>America/Grand_Turk</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>794fd7b29a023a5722b25b99bbb6281d</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>494b352a3fb06a2b4a4dd169aa3b98db</md5>
   </timezone>
   <timezone>
     <tzid>America/Grenada</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>32c4916ced899420efcc39a4ca47936e</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>ed3d7b7bb03baf025941c7939ea85ece</md5>
   </timezone>
   <timezone>
     <tzid>America/Guadeloupe</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <alias>America/St_Barthelemy</alias>
-    <alias>America/Marigot</alias>
-    <md5>4b93fee3397a9dfc3687da25df948494</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>7b37cd74d65c5961c765350b1f492663</md5>
   </timezone>
   <timezone>
     <tzid>America/Guatemala</tzid>
@@ -717,9 +717,9 @@
   </timezone>
   <timezone>
     <tzid>America/Jamaica</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
     <alias>Jamaica</alias>
-    <md5>d724fa4276cb5420ecc60d5371e4ceef</md5>
+    <md5>b7185b6351db3d2c351f83b1166c490d</md5>
   </timezone>
   <timezone>
     <tzid>America/Jujuy</tzid>
@@ -796,8 +796,8 @@
   </timezone>
   <timezone>
     <tzid>America/Marigot</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>5112b932cc80557d4e01190ab86f19de</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>219cf3ff91c93b07dc71298421f9d0de</md5>
   </timezone>
   <timezone>
     <tzid>America/Martinique</tzid>
@@ -868,8 +868,8 @@
   </timezone>
   <timezone>
     <tzid>America/Montserrat</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>1278c06be965a9444decd86efc81338d</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>33c697bb4f58afd1a902018247cd21e4</md5>
   </timezone>
   <timezone>
     <tzid>America/Nassau</tzid>
@@ -947,6 +947,19 @@
   <timezone>
     <tzid>America/Port_of_Spain</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <alias>America/Virgin</alias>
+    <alias>America/St_Lucia</alias>
+    <alias>America/Montserrat</alias>
+    <alias>America/Grenada</alias>
+    <alias>America/St_Barthelemy</alias>
+    <alias>America/St_Thomas</alias>
+    <alias>America/Guadeloupe</alias>
+    <alias>America/St_Vincent</alias>
+    <alias>America/Marigot</alias>
+    <alias>America/St_Kitts</alias>
+    <alias>America/Anguilla</alias>
+    <alias>America/Dominica</alias>
+    <alias>America/Tortola</alias>
     <md5>e0bb07b4ce7859ca493cb6bba549e114</md5>
   </timezone>
   <timezone>
@@ -1047,8 +1060,8 @@
   </timezone>
   <timezone>
     <tzid>America/St_Barthelemy</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>0df0f96dd6aee2faae600ea4bda5792f</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>282f73e528b10401ba322ab01a1c7bd3</md5>
   </timezone>
   <timezone>
     <tzid>America/St_Johns</tzid>
@@ -1058,24 +1071,23 @@
   </timezone>
   <timezone>
     <tzid>America/St_Kitts</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>9b1065952186f4159a5aafe130eef8e2</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>40a657ac17ce9e12105d6895084ed655</md5>
   </timezone>
   <timezone>
     <tzid>America/St_Lucia</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>7cc48ba354a2f44b1a516c388ea6ac6f</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>76cf7c0ae9c69e499de421ecb41ada4b</md5>
   </timezone>
   <timezone>
     <tzid>America/St_Thomas</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <alias>America/Virgin</alias>
-    <md5>f35dd65d25337d2b67195a4000765881</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>0dac89af79b0fa3b1d67d7a6a63aaa11</md5>
   </timezone>
   <timezone>
     <tzid>America/St_Vincent</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>e34a65b69696732682902a6bba3abb29</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>6aae72797c8fea31921bfa1b996b1442</md5>
   </timezone>
   <timezone>
     <tzid>America/Swift_Current</tzid>
@@ -1112,8 +1124,8 @@
   </timezone>
   <timezone>
     <tzid>America/Tortola</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>36252a7ac5c1544d56691117fe4bedf0</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>c19dd4b8748b9ffeb5aa0cc21718d26e</md5>
   </timezone>
   <timezone>
     <tzid>America/Vancouver</tzid>
@@ -1123,8 +1135,8 @@
   </timezone>
   <timezone>
     <tzid>America/Virgin</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>302f38a85c5ed04952bed5372587578e</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>7f6b5b25ece02b385733e3a4a49f7167</md5>
   </timezone>
   <timezone>
     <tzid>America/Whitehorse</tzid>
@@ -1175,9 +1187,8 @@
   </timezone>
   <timezone>
     <tzid>Antarctica/McMurdo</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <alias>Antarctica/South_Pole</alias>
-    <md5>7866bc7215b5160ba92b9c0ff17f2567</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>3e1599b00f2814dec105fff3868e2232</md5>
   </timezone>
   <timezone>
     <tzid>Antarctica/Palmer</tzid>
@@ -1191,8 +1202,8 @@
   </timezone>
   <timezone>
     <tzid>Antarctica/South_Pole</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>ecbf324f6216e2aba53f2d333c26141e</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>01586fbc05c637aed3ec1f6cf889b872</md5>
   </timezone>
   <timezone>
     <tzid>Antarctica/Syowa</tzid>
@@ -1221,8 +1232,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Amman</tzid>
-    <dtstamp>2013-01-14T15:32:16Z</dtstamp>
-    <md5>3d5145f59e99e4245ccca5484b38b271</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>00094f838d542836f35b1d3d0293512c</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Anadyr</tzid>
@@ -1329,8 +1340,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Dili</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>51ad0f3231ff8a47222ed92137ea4dc3</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>f846195e2b9f145c2a35abda88302238</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Dubai</tzid>
@@ -1344,8 +1355,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Gaza</tzid>
-    <dtstamp>2013-05-08T18:04:04Z</dtstamp>
-    <md5>17173f5c545937b19c7dba20cc4c7b97</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>656f56b232fb5ad6fb2e25a64086a44c</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Harbin</tzid>
@@ -1354,8 +1365,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Hebron</tzid>
-    <dtstamp>2013-05-08T18:04:04Z</dtstamp>
-    <md5>1909080f7bc3c9c602627b4123dd13a9</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>1198057afbbaf92ca0f34b8c16416d74</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Ho_Chi_Minh</tzid>
@@ -1386,13 +1397,13 @@
   </timezone>
   <timezone>
     <tzid>Asia/Jakarta</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>361f6e5683f19c99e1f024b3b80227be</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>37eb197c796a861a7817f06380623146</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Jayapura</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>292c823058149d8c8bee5398924bf64a</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>8fcec2bd8414e2cc845c807af45d1dce</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Jerusalem</tzid>
@@ -1481,9 +1492,9 @@
   </timezone>
   <timezone>
     <tzid>Asia/Makassar</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
     <alias>Asia/Ujung_Pandang</alias>
-    <md5>efbc6213ee5099feeafaeacd6bbbb797</md5>
+    <md5>d34ae21548d56ea2b62eb890559d46f0</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Manila</tzid>
@@ -1528,8 +1539,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Pontianak</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>5558eaba9bfdf39ef008593707cadcda</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>257fd7f7bf01752d97f04d4deaff03be</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Pyongyang</tzid>
@@ -1635,8 +1646,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Ujung_Pandang</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>d05b22df61dea5d57753440e8b5ef386</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>8a094c3a682a26dbdbb212bcc01e2a7e</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Ulaanbaatar</tzid>
@@ -2234,8 +2245,8 @@
   </timezone>
   <timezone>
     <tzid>Europe/Busingen</tzid>
-    <dtstamp>2013-05-08T18:04:04Z</dtstamp>
-    <md5>3a97a0f0c013fde482c37540d3d105eb</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>7e93edc4d979424daf4521a8e39fc4df</md5>
   </timezone>
   <timezone>
     <tzid>Europe/Chisinau</tzid>
@@ -2452,8 +2463,8 @@
   </timezone>
   <timezone>
     <tzid>Europe/Vaduz</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>a8a4e48e0a06cd9b54304b82614447c1</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>f751185606fd0cdcd1d2cf2a1bfd7d4b</md5>
   </timezone>
   <timezone>
     <tzid>Europe/Vatican</tzid>
@@ -2493,9 +2504,10 @@
   </timezone>
   <timezone>
     <tzid>Europe/Zurich</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
     <alias>Europe/Busingen</alias>
-    <md5>189add82d7c3280b544ca70f5696e68c</md5>
+    <alias>Europe/Vaduz</alias>
+    <md5>f4cfe31d995ca98d545a03ef60ebbbee</md5>
   </timezone>
   <timezone>
     <tzid>GB</tzid>
@@ -2614,8 +2626,8 @@
   </timezone>
   <timezone>
     <tzid>Jamaica</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>1f8889ee038dede3ef4868055adf897a</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>b5f083a6081a40b4525e7c8e2da9e963</md5>
   </timezone>
   <timezone>
     <tzid>Japan</tzid>
@@ -2696,6 +2708,8 @@
     <tzid>Pacific/Auckland</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
     <alias>NZ</alias>
+    <alias>Antarctica/McMurdo</alias>
+    <alias>Antarctica/South_Pole</alias>
     <md5>31b52d15573225aff7940c24fbe45343</md5>
   </timezone>
   <timezone>
@@ -2734,8 +2748,8 @@
   </timezone>
   <timezone>
     <tzid>Pacific/Fiji</tzid>
-    <dtstamp>2013-05-08T18:04:04Z</dtstamp>
-    <md5>bdf37be1c81f84c63dcea56d21f02928</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>0cf1c77fa2dc0d8ea0383afddf501e17</md5>
   </timezone>
   <timezone>
     <tzid>Pacific/Funafuti</tzid>
@@ -2766,12 +2780,13 @@
     <tzid>Pacific/Honolulu</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
     <alias>US/Hawaii</alias>
+    <alias>Pacific/Johnston</alias>
     <md5>be013195b929c48b73f0234a5226a763</md5>
   </timezone>
   <timezone>
     <tzid>Pacific/Johnston</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>fdd50497d420099a0f7faabcc47e967e</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>82a4fca854a65c81f3c9548471270441</md5>
   </timezone>
   <timezone>
     <tzid>Pacific/Kiritimati</tzid>
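
Each <timezone> entry in timezones.xml carries a tzid, a dtstamp, zero or more <alias> children and an md5 digest; the alias moves above (for example Pacific/Johnston becoming an alias of Pacific/Honolulu) mirror the links.txt changes. A short sketch, with hypothetical names and assuming only the element layout visible in this diff, listing the aliases recorded for a given tzid:

    import xml.etree.ElementTree as ET

    def aliases_for(tzid, path="twistedcaldav/zoneinfo/timezones.xml"):
        root = ET.parse(path).getroot()
        for tz in root.findall("timezone"):
            if tz.findtext("tzid") == tzid:
                return [alias.text for alias in tz.findall("alias")]
        return []

    # After this change: "Pacific/Johnston" in aliases_for("Pacific/Honolulu")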

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/version.txt
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/version.txt	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/twistedcaldav/zoneinfo/version.txt	2013-10-02 23:27:44 UTC (rev 11779)
@@ -1 +1 @@
-IANA Timezone Registry: 2013d
\ No newline at end of file
+IANA Timezone Registry: 2013f
\ No newline at end of file
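
version.txt records which IANA registry release the bundled zoneinfo data was generated from; the move from 2013d to 2013f is what drives the .ics, links.txt and timezones.xml updates above. A trivial reader, with a hypothetical function name, assuming the single-line format shown:

    def registry_version(path="twistedcaldav/zoneinfo/version.txt"):
        # File contains a single line such as "IANA Timezone Registry: 2013f".
        with open(path) as f:
            return f.read().split(":", 1)[1].strip()

    # registry_version() == "2013f" after this update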

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/base/datastore/subpostgres.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/base/datastore/subpostgres.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/base/datastore/subpostgres.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -463,7 +463,7 @@
                 env=self.env, path=self.workingDir.path,
                 uid=self.uid, gid=self.gid,
             )
-            d.addCallback(gotStatus)
+            return d.addCallback(gotStatus)
 
         def reportit(f):
             log.failure("starting postgres", f)
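
The one-line change above returns the Deferred from the inner call instead of dropping it, so the gotStatus callback is chained into the caller and any failure propagates rather than being lost. A minimal illustration of the pattern with hypothetical names (not the actual subpostgres code):

    from twisted.internet import defer

    def start_and_check():
        def got_status(status):
            # Runs once the child process reports its exit status.
            return status

        d = defer.succeed(0)              # stands in for the real spawn/status Deferred
        return d.addCallback(got_status)  # returning it keeps callers in the chain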

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/carddav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/carddav/datastore/test/test_sql.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/carddav/datastore/test/test_sql.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -599,7 +599,6 @@
         subgroupObject = yield adbk.createAddressBookObjectWithName("sg.vcf", subgroup)
 
         memberRows = yield Select([aboMembers.GROUP_ID, aboMembers.MEMBER_ID, aboMembers.REMOVED, aboMembers.REVISION], From=aboMembers).on(txn)
-        print("memberRows=%s" % (memberRows,))
         memberRows = yield Select([aboMembers.GROUP_ID, aboMembers.MEMBER_ID], From=aboMembers, Where=aboMembers.REMOVED == False).on(txn)
         self.assertEqual(sorted(memberRows), sorted([
                                                      [groupObject._resourceID, subgroupObject._resourceID],
@@ -610,7 +609,6 @@
         self.assertEqual(foreignMemberRows, [])
 
         memberRows = yield Select([aboMembers.GROUP_ID, aboMembers.MEMBER_ID, aboMembers.REMOVED, aboMembers.REVISION], From=aboMembers).on(txn)
-        print("memberRows=%s" % (memberRows,))
         yield subgroupObject.remove()
         memberRows = yield Select([aboMembers.GROUP_ID, aboMembers.MEMBER_ID, aboMembers.REMOVED, aboMembers.REVISION], From=aboMembers).on(txn)
 
@@ -1049,12 +1047,10 @@
         self.assertEqual(otherAB._bindRevision, None)
 
         changed, deleted = yield otherAB.resourceNamesSinceRevision(0)
-        print("revision=%s, changed=%s, deleted=%s" % (0, changed, deleted,))
         self.assertEqual(set(changed), set(['1.vcf', '4.vcf', '2.vcf', ]))
         self.assertEqual(len(deleted), 0)
 
         changed, deleted = yield otherAB.resourceNamesSinceRevision(otherGroup._bindRevision)
-        print("revision=%s, changed=%s, deleted=%s" % (otherGroup._bindRevision, changed, deleted,))
         self.assertEqual(len(changed), 0)
         self.assertEqual(len(deleted), 0)
 
@@ -1074,12 +1070,10 @@
                           'home3/4.vcf', ]
              )):
             changed, deleted = yield otherHome.resourceNamesSinceRevision(0, depth)
-            print("revision=%s, depth=%s, changed=%s, deleted=%s" % (0, depth, changed, deleted,))
             self.assertEqual(set(changed), set(result))
             self.assertEqual(len(deleted), 0)
 
             changed, deleted = yield otherHome.resourceNamesSinceRevision(otherGroup._bindRevision, depth)
-            print("revision=%s, depth=%s, changed=%s, deleted=%s" % (otherGroup._bindRevision, depth, changed, deleted,))
             self.assertEqual(len(changed), 0)
             self.assertEqual(len(deleted), 0)
 

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -1029,8 +1029,10 @@
         """
         Commit the transaction and execute any post-commit hooks.
         """
+
+        # Do stats logging as a postCommit because there might be some pending preCommit SQL we want to log
         if self._stats:
-            self._stats.printReport()
+            self.postCommit(self._stats.printReport)
         return self._sqlTxn.commit()
 
 
@@ -2402,14 +2404,14 @@
     @classproperty
     def _bumpSyncTokenQuery(cls): #@NoSelf
         """
-        DAL query to change collection sync token.
+        DAL query to change collection sync token. Note this can impact multiple rows if the
+        collection is shared.
         """
         rev = cls._revisionsSchema
         return Update(
             {rev.REVISION: schema.REVISION_SEQ, },
             Where=(rev.RESOURCE_ID == Parameter("resourceID")).And
-                  (rev.RESOURCE_NAME == None),
-            Return=rev.REVISION
+                  (rev.RESOURCE_NAME == None)
         )
 
 
@@ -2418,8 +2420,11 @@
 
         if not self._txn.isRevisionBumpedAlready(self):
             self._txn.bumpRevisionForObject(self)
-            self._syncTokenRevision = (yield self._bumpSyncTokenQuery.on(
-                self._txn, resourceID=self._resourceID))[0][0]
+            yield self._bumpSyncTokenQuery.on(
+                self._txn,
+                resourceID=self._resourceID,
+            )
+            self._syncTokenRevision = None
 
 
     @classproperty
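
Two behavioral notes on the hunks above: the statistics report is now emitted from a postCommit hook so that SQL issued by preCommit hooks is also captured, and _bumpSyncToken no longer relies on a RETURNing UPDATE (which may touch several rows when the collection is shared); it clears the cached value instead, presumably so the revision is fetched again when next needed. A rough sketch of that invalidate-then-reload idea, with illustrative names only:

    class SyncTokenCache(object):
        def __init__(self, reader):
            self._read = reader       # callable that queries the current revision
            self._revision = None     # None means "unknown; re-read on next access"

        def bump(self):
            # The underlying UPDATE may affect multiple rows, so no single
            # returned value is authoritative; just forget what we had.
            self._revision = None

        def revision(self):
            if self._revision is None:
                self._revision = self._read()
            return self._revision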

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/current-oracle-dialect.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/current-oracle-dialect.sql	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/current-oracle-dialect.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -218,13 +218,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -270,13 +270,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -290,7 +290,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "OBJECT_RESOURCE_ID" integer default 0,
     "RESOURCE_NAME" nvarchar2(255),
@@ -368,7 +368,7 @@
     "VALUE" nvarchar2(255)
 );
 
-insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '25');
+insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '26');
 insert into CALENDARSERVER (NAME, VALUE) values ('CALENDAR-DATAVERSION', '5');
 insert into CALENDARSERVER (NAME, VALUE) values ('ADDRESSBOOK-DATAVERSION', '2');
 create index CALENDAR_HOME_METADAT_3cb9049e on CALENDAR_HOME_METADATA (
@@ -426,7 +426,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ABO_MEMBERS_ADDRESSBO_4effa879 on ABO_MEMBERS (
@@ -460,18 +460,18 @@
     REVISION
 );
 
-create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
+create index ADDRESSBOOK_OBJECT_RE_2bfcf757 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/current.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/current.sql	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/current.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -398,19 +398,19 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
   ADDRESSBOOK_HOME_RESOURCE_ID			integer			not null references ADDRESSBOOK_HOME,
-  OWNER_ADDRESSBOOK_HOME_RESOURCE_ID    integer      	not null references ADDRESSBOOK_HOME on delete cascade,
+  OWNER_HOME_RESOURCE_ID    			integer      	not null references ADDRESSBOOK_HOME on delete cascade,
   ADDRESSBOOK_RESOURCE_NAME    			varchar(255) 	not null,
   BIND_MODE                    			integer      	not null,	-- enum CALENDAR_BIND_MODE
   BIND_STATUS                  			integer      	not null,	-- enum CALENDAR_BIND_STATUS
   BIND_REVISION				   			integer      	default 0 not null,
   MESSAGE                      			text,                  		-- FIXME: xml?
 
-  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_ADDRESSBOOK_HOME_RESOURCE_ID), -- implicit index
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID), -- implicit index
   unique (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
 );
 
 create index SHARED_ADDRESSBOOK_BIND_RESOURCE_ID on
-  SHARED_ADDRESSBOOK_BIND(OWNER_ADDRESSBOOK_HOME_RESOURCE_ID);
+  SHARED_ADDRESSBOOK_BIND(OWNER_HOME_RESOURCE_ID);
 
 
 ------------------------
@@ -497,14 +497,14 @@
 create table SHARED_GROUP_BIND (	
   ADDRESSBOOK_HOME_RESOURCE_ID 		integer      not null references ADDRESSBOOK_HOME,
   GROUP_RESOURCE_ID      			integer      not null references ADDRESSBOOK_OBJECT on delete cascade,
-  GROUP_ADDRESSBOOK_RESOURCE_NAME	varchar(255) not null,
+  GROUP_ADDRESSBOOK_NAME			varchar(255) not null,
   BIND_MODE                    		integer      not null, -- enum CALENDAR_BIND_MODE
   BIND_STATUS                  		integer      not null, -- enum CALENDAR_BIND_STATUS
   BIND_REVISION				   		integer      default 0 not null,
   MESSAGE                      		text,                  -- FIXME: xml?
 
   primary key (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_RESOURCE_ID), -- implicit index
-  unique (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_ADDRESSBOOK_NAME)     -- implicit index
 );
 
 create index SHARED_GROUP_BIND_RESOURCE_ID on
@@ -547,7 +547,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
   ADDRESSBOOK_HOME_RESOURCE_ID 			integer			not null references ADDRESSBOOK_HOME,
-  OWNER_ADDRESSBOOK_HOME_RESOURCE_ID    integer     	references ADDRESSBOOK_HOME,
+  OWNER_HOME_RESOURCE_ID    			integer     	references ADDRESSBOOK_HOME,
   ADDRESSBOOK_NAME             			varchar(255) 	default null,
   OBJECT_RESOURCE_ID					integer			default 0,
   RESOURCE_NAME                			varchar(255),
@@ -555,14 +555,14 @@
   DELETED                      			boolean      	not null
 );
 
-create index ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
-  on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_ADDRESSBOOK_HOME_RESOURCE_ID);
+create index ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_OWNER_HOME_RESOURCE_ID
+  on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID);
 
 create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_RESOURCE_NAME
-  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_ADDRESSBOOK_HOME_RESOURCE_ID, RESOURCE_NAME);
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, RESOURCE_NAME);
 
 create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_REVISION
-  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_ADDRESSBOOK_HOME_RESOURCE_ID, REVISION);
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, REVISION);
 
 
 -----------------------------------
@@ -704,6 +704,6 @@
   VALUE                         varchar(255)
 );
 
-insert into CALENDARSERVER values ('VERSION', '25');
+insert into CALENDARSERVER values ('VERSION', '26');
 insert into CALENDARSERVER values ('CALENDAR-DATAVERSION', '5');
 insert into CALENDARSERVER values ('ADDRESSBOOK-DATAVERSION', '2');
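
The schema changes rename OWNER_ADDRESSBOOK_HOME_RESOURCE_ID to OWNER_HOME_RESOURCE_ID and GROUP_ADDRESSBOOK_RESOURCE_NAME to GROUP_ADDRESSBOOK_NAME, and bump the stored schema version from 25 to 26. A hedged sketch (hypothetical helper, assuming a DB-API cursor on the CalendarServer database) for reading that stored version:

    def stored_schema_version(cursor):
        # CALENDARSERVER is the key/value table shown above (NAME, VALUE).
        cursor.execute("select VALUE from CALENDARSERVER where NAME = 'VERSION'")
        row = cursor.fetchone()
        return int(row[0]) if row else None

    # Expected to return 26 once this schema is installed.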

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v20.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v20.sql	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v20.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -216,13 +216,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -266,13 +266,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -286,7 +286,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "RESOURCE_NAME" nvarchar2(255),
     "REVISION" integer not null,
@@ -403,7 +403,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -427,16 +427,16 @@
 
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v21.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v21.sql	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v21.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -216,13 +216,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -266,13 +266,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -286,7 +286,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "RESOURCE_NAME" nvarchar2(255),
     "REVISION" integer not null,
@@ -403,7 +403,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -427,16 +427,16 @@
 
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v22.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v22.sql	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v22.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -218,13 +218,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -268,13 +268,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -288,7 +288,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "RESOURCE_NAME" nvarchar2(255),
     "REVISION" integer not null,
@@ -405,7 +405,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -429,16 +429,16 @@
 
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v23.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v23.sql	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v23.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -218,13 +218,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -268,13 +268,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -288,7 +288,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "RESOURCE_NAME" nvarchar2(255),
     "REVISION" integer not null,
@@ -411,7 +411,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -435,16 +435,16 @@
 
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Added: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v25.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v25.sql	                        (rev 0)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/oracle-dialect/v25.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -0,0 +1,494 @@
+create sequence RESOURCE_ID_SEQ;
+create sequence INSTANCE_ID_SEQ;
+create sequence ATTACHMENT_ID_SEQ;
+create sequence REVISION_SEQ;
+create sequence WORKITEM_SEQ;
+create table NODE_INFO (
+    "HOSTNAME" nvarchar2(255),
+    "PID" integer not null,
+    "PORT" integer not null,
+    "TIME" timestamp default CURRENT_TIMESTAMP at time zone 'UTC' not null, 
+    primary key("HOSTNAME", "PORT")
+);
+
+create table NAMED_LOCK (
+    "LOCK_NAME" nvarchar2(255) primary key
+);
+
+create table CALENDAR_HOME (
+    "RESOURCE_ID" integer primary key,
+    "OWNER_UID" nvarchar2(255) unique,
+    "DATAVERSION" integer default 0 not null
+);
+
+create table CALENDAR (
+    "RESOURCE_ID" integer primary key
+);
+
+create table CALENDAR_HOME_METADATA (
+    "RESOURCE_ID" integer primary key references CALENDAR_HOME on delete cascade,
+    "QUOTA_USED_BYTES" integer default 0 not null,
+    "DEFAULT_EVENTS" integer default null references CALENDAR on delete set null,
+    "DEFAULT_TASKS" integer default null references CALENDAR on delete set null,
+    "ALARM_VEVENT_TIMED" nclob default null,
+    "ALARM_VEVENT_ALLDAY" nclob default null,
+    "ALARM_VTODO_TIMED" nclob default null,
+    "ALARM_VTODO_ALLDAY" nclob default null,
+    "AVAILABILITY" nclob default null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table CALENDAR_METADATA (
+    "RESOURCE_ID" integer primary key references CALENDAR on delete cascade,
+    "SUPPORTED_COMPONENTS" nvarchar2(255) default null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table NOTIFICATION_HOME (
+    "RESOURCE_ID" integer primary key,
+    "OWNER_UID" nvarchar2(255) unique
+);
+
+create table NOTIFICATION (
+    "RESOURCE_ID" integer primary key,
+    "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME,
+    "NOTIFICATION_UID" nvarchar2(255),
+    "XML_TYPE" nvarchar2(255),
+    "XML_DATA" nclob,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique("NOTIFICATION_UID", "NOTIFICATION_HOME_RESOURCE_ID")
+);
+
+create table CALENDAR_BIND (
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "CALENDAR_RESOURCE_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "MESSAGE" nclob,
+    "TRANSP" integer default 0 not null,
+    "ALARM_VEVENT_TIMED" nclob default null,
+    "ALARM_VEVENT_ALLDAY" nclob default null,
+    "ALARM_VTODO_TIMED" nclob default null,
+    "ALARM_VTODO_ALLDAY" nclob default null,
+    "TIMEZONE" nclob default null, 
+    primary key("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_ID"), 
+    unique("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_NAME")
+);
+
+create table CALENDAR_BIND_MODE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('own', 0);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('write', 2);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('direct', 3);
+create table CALENDAR_BIND_STATUS (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invited', 0);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('accepted', 1);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('declined', 2);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invalid', 3);
+create table CALENDAR_TRANSP (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_TRANSP (DESCRIPTION, ID) values ('opaque', 0);
+insert into CALENDAR_TRANSP (DESCRIPTION, ID) values ('transparent', 1);
+create table CALENDAR_OBJECT (
+    "RESOURCE_ID" integer primary key,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob,
+    "ICALENDAR_UID" nvarchar2(255),
+    "ICALENDAR_TYPE" nvarchar2(255),
+    "ATTACHMENTS_MODE" integer default 0 not null,
+    "DROPBOX_ID" nvarchar2(255),
+    "ORGANIZER" nvarchar2(255),
+    "RECURRANCE_MIN" date,
+    "RECURRANCE_MAX" date,
+    "ACCESS" integer default 0 not null,
+    "SCHEDULE_OBJECT" integer default 0,
+    "SCHEDULE_TAG" nvarchar2(36) default null,
+    "SCHEDULE_ETAGS" nclob default null,
+    "PRIVATE_COMMENTS" integer default 0 not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique("CALENDAR_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table CALENDAR_OBJECT_ATTACHMENTS_MO (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('none', 0);
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('write', 2);
+create table CALENDAR_ACCESS_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(32) unique
+);
+
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('', 0);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('public', 1);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('private', 2);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('confidential', 3);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('restricted', 4);
+create table TIME_RANGE (
+    "INSTANCE_ID" integer primary key,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+    "FLOATING" integer not null,
+    "START_DATE" timestamp not null,
+    "END_DATE" timestamp not null,
+    "FBTYPE" integer not null,
+    "TRANSPARENT" integer not null
+);
+
+create table FREE_BUSY_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('unknown', 0);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('free', 1);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy', 2);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-unavailable', 3);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-tentative', 4);
+create table TRANSPARENCY (
+    "TIME_RANGE_INSTANCE_ID" integer not null references TIME_RANGE on delete cascade,
+    "USER_ID" nvarchar2(255),
+    "TRANSPARENT" integer not null
+);
+
+create table ATTACHMENT (
+    "ATTACHMENT_ID" integer primary key,
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "DROPBOX_ID" nvarchar2(255),
+    "CONTENT_TYPE" nvarchar2(255),
+    "SIZE" integer not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "PATH" nvarchar2(1024)
+);
+
+create table ATTACHMENT_CALENDAR_OBJECT (
+    "ATTACHMENT_ID" integer not null references ATTACHMENT on delete cascade,
+    "MANAGED_ID" nvarchar2(255),
+    "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade, 
+    primary key("ATTACHMENT_ID", "CALENDAR_OBJECT_RESOURCE_ID"), 
+    unique("MANAGED_ID", "CALENDAR_OBJECT_RESOURCE_ID")
+);
+
+create table RESOURCE_PROPERTY (
+    "RESOURCE_ID" integer not null,
+    "NAME" nvarchar2(255),
+    "VALUE" nclob,
+    "VIEWER_UID" nvarchar2(255), 
+    primary key("RESOURCE_ID", "NAME", "VIEWER_UID")
+);
+
+create table ADDRESSBOOK_HOME (
+    "RESOURCE_ID" integer primary key,
+    "ADDRESSBOOK_PROPERTY_STORE_ID" integer not null,
+    "OWNER_UID" nvarchar2(255) unique,
+    "DATAVERSION" integer default 0 not null
+);
+
+create table ADDRESSBOOK_HOME_METADATA (
+    "RESOURCE_ID" integer primary key references ADDRESSBOOK_HOME on delete cascade,
+    "QUOTA_USED_BYTES" integer default 0 not null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table SHARED_ADDRESSBOOK_BIND (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "MESSAGE" nclob, 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
+);
+
+create table ADDRESSBOOK_OBJECT (
+    "RESOURCE_ID" integer primary key,
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "VCARD_TEXT" nclob,
+    "VCARD_UID" nvarchar2(255),
+    "KIND" integer not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "RESOURCE_NAME"), 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "VCARD_UID")
+);
+
+create table ADDRESSBOOK_OBJECT_KIND (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('person', 0);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('group', 1);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('resource', 2);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('location', 3);
+create table ABO_MEMBERS (
+    "GROUP_ID" integer not null,
+    "ADDRESSBOOK_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "MEMBER_ID" integer not null,
+    "REVISION" integer not null,
+    "REMOVED" integer default 0 not null, 
+    primary key("GROUP_ID", "MEMBER_ID", "REVISION")
+);
+
+create table ABO_FOREIGN_MEMBERS (
+    "GROUP_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "ADDRESSBOOK_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "MEMBER_ADDRESS" nvarchar2(255), 
+    primary key("GROUP_ID", "MEMBER_ADDRESS")
+);
+
+create table SHARED_GROUP_BIND (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "MESSAGE" nclob, 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
+);
+
+create table CALENDAR_OBJECT_REVISIONS (
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "CALENDAR_RESOURCE_ID" integer references CALENDAR,
+    "CALENDAR_NAME" nvarchar2(255) default null,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null
+);
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "ADDRESSBOOK_NAME" nvarchar2(255) default null,
+    "OBJECT_RESOURCE_ID" integer default 0,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null
+);
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+    "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null, 
+    unique("NOTIFICATION_HOME_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table APN_SUBSCRIPTIONS (
+    "TOKEN" nvarchar2(255),
+    "RESOURCE_KEY" nvarchar2(255),
+    "MODIFIED" integer not null,
+    "SUBSCRIBER_GUID" nvarchar2(255),
+    "USER_AGENT" nvarchar2(255) default null,
+    "IP_ADDR" nvarchar2(255) default null, 
+    primary key("TOKEN", "RESOURCE_KEY")
+);
+
+create table IMIP_TOKENS (
+    "TOKEN" nvarchar2(255),
+    "ORGANIZER" nvarchar2(255),
+    "ATTENDEE" nvarchar2(255),
+    "ICALUID" nvarchar2(255),
+    "ACCESSED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    primary key("ORGANIZER", "ATTENDEE", "ICALUID")
+);
+
+create table IMIP_INVITATION_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "FROM_ADDR" nvarchar2(255),
+    "TO_ADDR" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob
+);
+
+create table IMIP_POLLING_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table IMIP_REPLY_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "ORGANIZER" nvarchar2(255),
+    "ATTENDEE" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob
+);
+
+create table PUSH_NOTIFICATION_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "PUSH_ID" nvarchar2(255)
+);
+
+create table GROUP_CACHER_POLLING_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table CALENDAR_OBJECT_SPLITTER_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade
+);
+
+create table CALENDARSERVER (
+    "NAME" nvarchar2(255) primary key,
+    "VALUE" nvarchar2(255)
+);
+
+insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '25');
+insert into CALENDARSERVER (NAME, VALUE) values ('CALENDAR-DATAVERSION', '5');
+insert into CALENDARSERVER (NAME, VALUE) values ('ADDRESSBOOK-DATAVERSION', '2');
+create index CALENDAR_HOME_METADAT_3cb9049e on CALENDAR_HOME_METADATA (
+    DEFAULT_EVENTS
+);
+
+create index CALENDAR_HOME_METADAT_d55e5548 on CALENDAR_HOME_METADATA (
+    DEFAULT_TASKS
+);
+
+create index NOTIFICATION_NOTIFICA_f891f5f9 on NOTIFICATION (
+    NOTIFICATION_HOME_RESOURCE_ID
+);
+
+create index CALENDAR_BIND_RESOURC_e57964d4 on CALENDAR_BIND (
+    CALENDAR_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_CALEN_a9a453a9 on CALENDAR_OBJECT (
+    CALENDAR_RESOURCE_ID,
+    ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_CALEN_96e83b73 on CALENDAR_OBJECT (
+    CALENDAR_RESOURCE_ID,
+    RECURRANCE_MAX
+);
+
+create index CALENDAR_OBJECT_ICALE_82e731d5 on CALENDAR_OBJECT (
+    ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_DROPB_de041d80 on CALENDAR_OBJECT (
+    DROPBOX_ID
+);
+
+create index TIME_RANGE_CALENDAR_R_beb6e7eb on TIME_RANGE (
+    CALENDAR_RESOURCE_ID
+);
+
+create index TIME_RANGE_CALENDAR_O_acf37bd1 on TIME_RANGE (
+    CALENDAR_OBJECT_RESOURCE_ID
+);
+
+create index TRANSPARENCY_TIME_RAN_5f34467f on TRANSPARENCY (
+    TIME_RANGE_INSTANCE_ID
+);
+
+create index ATTACHMENT_CALENDAR_H_0078845c on ATTACHMENT (
+    CALENDAR_HOME_RESOURCE_ID
+);
+
+create index ATTACHMENT_CALENDAR_O_81508484 on ATTACHMENT_CALENDAR_OBJECT (
+    CALENDAR_OBJECT_RESOURCE_ID
+);
+
+create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
+    OWNER_HOME_RESOURCE_ID
+);
+
+create index ABO_MEMBERS_ADDRESSBO_4effa879 on ABO_MEMBERS (
+    ADDRESSBOOK_ID
+);
+
+create index ABO_MEMBERS_MEMBER_ID_8d66adcf on ABO_MEMBERS (
+    MEMBER_ID
+);
+
+create index ABO_FOREIGN_MEMBERS_A_1fd2c5e9 on ABO_FOREIGN_MEMBERS (
+    ADDRESSBOOK_ID
+);
+
+create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
+    GROUP_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_REVIS_3a3956c4 on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_HOME_RESOURCE_ID,
+    CALENDAR_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_REVIS_2643d556 on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_RESOURCE_ID,
+    RESOURCE_NAME
+);
+
+create index CALENDAR_OBJECT_REVIS_265c8acf on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_RESOURCE_ID,
+    REVISION
+);
+
+create index ADDRESSBOOK_OBJECT_RE_2bfcf757 on ADDRESSBOOK_OBJECT_REVISIONS (
+    ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID
+);
+
+create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    RESOURCE_NAME
+);
+
+create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    REVISION
+);
+
+create index NOTIFICATION_OBJECT_R_036a9cee on NOTIFICATION_OBJECT_REVISIONS (
+    NOTIFICATION_HOME_RESOURCE_ID,
+    REVISION
+);
+
+create index APN_SUBSCRIPTIONS_RES_9610d78e on APN_SUBSCRIPTIONS (
+    RESOURCE_KEY
+);
+
+create index IMIP_TOKENS_TOKEN_e94b918f on IMIP_TOKENS (
+    TOKEN
+);
+
+create index CALENDAR_OBJECT_SPLIT_af71dcda on CALENDAR_OBJECT_SPLITTER_WORK (
+    RESOURCE_ID
+);
+
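
The Oracle-dialect schema above ends by recording its schema and data versions as rows in the CALENDARSERVER key/value table. As a minimal sketch (not part of this commit), a deployment can confirm which schema revision it is running by reading those rows back:

    select NAME, VALUE from CALENDARSERVER
     where NAME in ('VERSION', 'CALENDAR-DATAVERSION', 'ADDRESSBOOK-DATAVERSION');
    -- expected for this file: VERSION = '25', CALENDAR-DATAVERSION = '5',
    -- ADDRESSBOOK-DATAVERSION = '2'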

Added: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/postgres-dialect/v25.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/postgres-dialect/v25.sql	                        (rev 0)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/old/postgres-dialect/v25.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -0,0 +1,709 @@
+-- -*- test-case-name: txdav.caldav.datastore.test.test_sql,txdav.carddav.datastore.test.test_sql -*-
+
+----
+-- Copyright (c) 2010-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+
+-----------------
+-- Resource ID --
+-----------------
+
+create sequence RESOURCE_ID_SEQ;
+
+
+-------------------------
+-- Cluster Bookkeeping --
+-------------------------
+
+-- Information about a process connected to this database.
+
+-- Note that this must match the node info schema in twext.enterprise.queue.
+create table NODE_INFO (
+  HOSTNAME  varchar(255) not null,
+  PID       integer      not null,
+  PORT      integer      not null,
+  TIME      timestamp    not null default timezone('UTC', CURRENT_TIMESTAMP),
+
+  primary key (HOSTNAME, PORT)
+);
+
+-- Unique named locks.  This table should always be empty, but rows are
+-- temporarily created in order to prevent undesirable concurrency.
+create table NAMED_LOCK (
+    LOCK_NAME varchar(255) primary key
+);
+
+
+-------------------
+-- Calendar Home --
+-------------------
+
+create table CALENDAR_HOME (
+  RESOURCE_ID      integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  OWNER_UID        varchar(255) not null unique,                                 -- implicit index
+  DATAVERSION      integer      default 0 not null
+);
+
+--------------
+-- Calendar --
+--------------
+
+create table CALENDAR (
+  RESOURCE_ID integer   primary key default nextval('RESOURCE_ID_SEQ') -- implicit index
+);
+
+----------------------------
+-- Calendar Home Metadata --
+----------------------------
+
+create table CALENDAR_HOME_METADATA (
+  RESOURCE_ID              integer     primary key references CALENDAR_HOME on delete cascade, -- implicit index
+  QUOTA_USED_BYTES         integer     default 0 not null,
+  DEFAULT_EVENTS           integer     default null references CALENDAR on delete set null,
+  DEFAULT_TASKS            integer     default null references CALENDAR on delete set null,
+  ALARM_VEVENT_TIMED       text        default null,
+  ALARM_VEVENT_ALLDAY      text        default null,
+  ALARM_VTODO_TIMED        text        default null,
+  ALARM_VTODO_ALLDAY       text        default null,
+  AVAILABILITY             text        default null,
+  CREATED                  timestamp   default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                 timestamp   default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+create index CALENDAR_HOME_METADATA_DEFAULT_EVENTS on
+	CALENDAR_HOME_METADATA(DEFAULT_EVENTS);
+create index CALENDAR_HOME_METADATA_DEFAULT_TASKS on
+	CALENDAR_HOME_METADATA(DEFAULT_TASKS);
+
+-----------------------
+-- Calendar Metadata --
+-----------------------
+
+create table CALENDAR_METADATA (
+  RESOURCE_ID           integer      primary key references CALENDAR on delete cascade, -- implicit index
+  SUPPORTED_COMPONENTS  varchar(255) default null,
+  CREATED               timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+---------------------------
+-- Sharing Notifications --
+---------------------------
+
+create table NOTIFICATION_HOME (
+  RESOURCE_ID integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  OWNER_UID   varchar(255) not null unique                                 -- implicit index
+);
+
+create table NOTIFICATION (
+  RESOURCE_ID                   integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  NOTIFICATION_HOME_RESOURCE_ID integer      not null references NOTIFICATION_HOME,
+  NOTIFICATION_UID              varchar(255) not null,
+  XML_TYPE                      varchar(255) not null,
+  XML_DATA                      text         not null,
+  MD5                           char(32)     not null,
+  CREATED                       timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique(NOTIFICATION_UID, NOTIFICATION_HOME_RESOURCE_ID) -- implicit index
+);
+
+create index NOTIFICATION_NOTIFICATION_HOME_RESOURCE_ID on
+	NOTIFICATION(NOTIFICATION_HOME_RESOURCE_ID);
+
+
+-------------------
+-- Calendar Bind --
+-------------------
+
+-- Joins CALENDAR_HOME and CALENDAR
+
+create table CALENDAR_BIND (
+  CALENDAR_HOME_RESOURCE_ID integer      not null references CALENDAR_HOME,
+  CALENDAR_RESOURCE_ID      integer      not null references CALENDAR on delete cascade,
+  CALENDAR_RESOURCE_NAME    varchar(255) not null,
+  BIND_MODE                 integer      not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS               integer      not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION				integer      default 0 not null,
+  MESSAGE                   text,
+  TRANSP                    integer      default 0 not null, -- enum CALENDAR_TRANSP
+  ALARM_VEVENT_TIMED        text         default null,
+  ALARM_VEVENT_ALLDAY       text         default null,
+  ALARM_VTODO_TIMED         text         default null,
+  ALARM_VTODO_ALLDAY        text         default null,
+  TIMEZONE                  text         default null,
+
+  primary key(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID), -- implicit index
+  unique(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_NAME)     -- implicit index
+);
+
+create index CALENDAR_BIND_RESOURCE_ID on
+	CALENDAR_BIND(CALENDAR_RESOURCE_ID);
+
+-- Enumeration of calendar bind modes
+
+create table CALENDAR_BIND_MODE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_MODE values (0, 'own'  );
+insert into CALENDAR_BIND_MODE values (1, 'read' );
+insert into CALENDAR_BIND_MODE values (2, 'write');
+insert into CALENDAR_BIND_MODE values (3, 'direct');
+
+-- Enumeration of statuses
+
+create table CALENDAR_BIND_STATUS (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_STATUS values (0, 'invited' );
+insert into CALENDAR_BIND_STATUS values (1, 'accepted');
+insert into CALENDAR_BIND_STATUS values (2, 'declined');
+insert into CALENDAR_BIND_STATUS values (3, 'invalid');
+
+
+-- Enumeration of transparency
+
+create table CALENDAR_TRANSP (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_TRANSP values (0, 'opaque' );
+insert into CALENDAR_TRANSP values (1, 'transparent');
+
+
+---------------------
+-- Calendar Object --
+---------------------
+
+create table CALENDAR_OBJECT (
+  RESOURCE_ID          integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  CALENDAR_RESOURCE_ID integer      not null references CALENDAR on delete cascade,
+  RESOURCE_NAME        varchar(255) not null,
+  ICALENDAR_TEXT       text         not null,
+  ICALENDAR_UID        varchar(255) not null,
+  ICALENDAR_TYPE       varchar(255) not null,
+  ATTACHMENTS_MODE     integer      default 0 not null, -- enum CALENDAR_OBJECT_ATTACHMENTS_MODE
+  DROPBOX_ID           varchar(255),
+  ORGANIZER            varchar(255),
+  RECURRANCE_MIN       date,        -- minimum date that recurrences have been expanded to.
+  RECURRANCE_MAX       date,        -- maximum date that recurrences have been expanded to.
+  ACCESS               integer      default 0 not null,
+  SCHEDULE_OBJECT      boolean      default false,
+  SCHEDULE_TAG         varchar(36)  default null,
+  SCHEDULE_ETAGS       text         default null,
+  PRIVATE_COMMENTS     boolean      default false not null,
+  MD5                  char(32)     not null,
+  CREATED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED             timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique (CALENDAR_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+
+  -- since the 'inbox' is a 'calendar resource' for the purpose of storing
+  -- calendar objects, this constraint has to be selectively enforced by the
+  -- application layer.
+
+  -- unique(CALENDAR_RESOURCE_ID, ICALENDAR_UID)
+);
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_AND_ICALENDAR_UID on
+  CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_RECURRANCE_MAX on
+  CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, RECURRANCE_MAX);
+
+create index CALENDAR_OBJECT_ICALENDAR_UID on
+  CALENDAR_OBJECT(ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_DROPBOX_ID on
+  CALENDAR_OBJECT(DROPBOX_ID);
+
+-- Enumeration of attachment modes
+
+create table CALENDAR_OBJECT_ATTACHMENTS_MODE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (0, 'none' );
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (1, 'read' );
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (2, 'write');
+
+
+-- Enumeration of calendar access types
+
+create table CALENDAR_ACCESS_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(32) not null unique
+);
+
+insert into CALENDAR_ACCESS_TYPE values (0, ''             );
+insert into CALENDAR_ACCESS_TYPE values (1, 'public'       );
+insert into CALENDAR_ACCESS_TYPE values (2, 'private'      );
+insert into CALENDAR_ACCESS_TYPE values (3, 'confidential' );
+insert into CALENDAR_ACCESS_TYPE values (4, 'restricted'   );
+
+
+-----------------
+-- Instance ID --
+-----------------
+
+create sequence INSTANCE_ID_SEQ;
+
+
+----------------
+-- Time Range --
+----------------
+
+create table TIME_RANGE (
+  INSTANCE_ID                 integer        primary key default nextval('INSTANCE_ID_SEQ'), -- implicit index
+  CALENDAR_RESOURCE_ID        integer        not null references CALENDAR on delete cascade,
+  CALENDAR_OBJECT_RESOURCE_ID integer        not null references CALENDAR_OBJECT on delete cascade,
+  FLOATING                    boolean        not null,
+  START_DATE                  timestamp      not null,
+  END_DATE                    timestamp      not null,
+  FBTYPE                      integer        not null,
+  TRANSPARENT                 boolean        not null
+);
+
+create index TIME_RANGE_CALENDAR_RESOURCE_ID on
+  TIME_RANGE(CALENDAR_RESOURCE_ID);
+create index TIME_RANGE_CALENDAR_OBJECT_RESOURCE_ID on
+  TIME_RANGE(CALENDAR_OBJECT_RESOURCE_ID);
+
+
+-- Enumeration of free/busy types
+
+create table FREE_BUSY_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into FREE_BUSY_TYPE values (0, 'unknown'         );
+insert into FREE_BUSY_TYPE values (1, 'free'            );
+insert into FREE_BUSY_TYPE values (2, 'busy'            );
+insert into FREE_BUSY_TYPE values (3, 'busy-unavailable');
+insert into FREE_BUSY_TYPE values (4, 'busy-tentative'  );
+
+
+------------------
+-- Transparency --
+------------------
+
+create table TRANSPARENCY (
+  TIME_RANGE_INSTANCE_ID      integer      not null references TIME_RANGE on delete cascade,
+  USER_ID                     varchar(255) not null,
+  TRANSPARENT                 boolean      not null
+);
+
+create index TRANSPARENCY_TIME_RANGE_INSTANCE_ID on
+  TRANSPARENCY(TIME_RANGE_INSTANCE_ID);
+
+
+----------------
+-- Attachment --
+----------------
+
+create sequence ATTACHMENT_ID_SEQ;
+
+create table ATTACHMENT (
+  ATTACHMENT_ID               integer           primary key default nextval('ATTACHMENT_ID_SEQ'), -- implicit index
+  CALENDAR_HOME_RESOURCE_ID   integer           not null references CALENDAR_HOME,
+  DROPBOX_ID                  varchar(255),
+  CONTENT_TYPE                varchar(255)      not null,
+  SIZE                        integer           not null,
+  MD5                         char(32)          not null,
+  CREATED                     timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                    timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+  PATH                        varchar(1024)     not null
+);
+
+create index ATTACHMENT_CALENDAR_HOME_RESOURCE_ID on
+  ATTACHMENT(CALENDAR_HOME_RESOURCE_ID);
+
+-- Many-to-many relationship between attachments and calendar objects
+create table ATTACHMENT_CALENDAR_OBJECT (
+  ATTACHMENT_ID                  integer      not null references ATTACHMENT on delete cascade,
+  MANAGED_ID                     varchar(255) not null,
+  CALENDAR_OBJECT_RESOURCE_ID    integer      not null references CALENDAR_OBJECT on delete cascade,
+
+  primary key (ATTACHMENT_ID, CALENDAR_OBJECT_RESOURCE_ID), -- implicit index
+  unique (MANAGED_ID, CALENDAR_OBJECT_RESOURCE_ID) --implicit index
+);
+
+create index ATTACHMENT_CALENDAR_OBJECT_CALENDAR_OBJECT_RESOURCE_ID on
+	ATTACHMENT_CALENDAR_OBJECT(CALENDAR_OBJECT_RESOURCE_ID);
+
+-----------------------
+-- Resource Property --
+-----------------------
+
+create table RESOURCE_PROPERTY (
+  RESOURCE_ID integer      not null, -- foreign key: *.RESOURCE_ID
+  NAME        varchar(255) not null,
+  VALUE       text         not null, -- FIXME: xml?
+  VIEWER_UID  varchar(255),
+
+  primary key (RESOURCE_ID, NAME, VIEWER_UID) -- implicit index
+);
+
+
+----------------------
+-- AddressBook Home --
+----------------------
+
+create table ADDRESSBOOK_HOME (
+  RESOURCE_ID      				integer			primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  ADDRESSBOOK_PROPERTY_STORE_ID	integer      	default nextval('RESOURCE_ID_SEQ') not null, 	-- implicit index
+  OWNER_UID        				varchar(255) 	not null unique,                                -- implicit index
+  DATAVERSION      				integer      	default 0 not null
+);
+
+
+-------------------------------
+-- AddressBook Home Metadata --
+-------------------------------
+
+create table ADDRESSBOOK_HOME_METADATA (
+  RESOURCE_ID      integer      primary key references ADDRESSBOOK_HOME on delete cascade, -- implicit index
+  QUOTA_USED_BYTES integer      default 0 not null,
+  CREATED          timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED         timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+-----------------------------
+-- Shared AddressBook Bind --
+-----------------------------
+
+-- Joins sharee ADDRESSBOOK_HOME and owner ADDRESSBOOK_HOME
+
+create table SHARED_ADDRESSBOOK_BIND (
+  ADDRESSBOOK_HOME_RESOURCE_ID			integer			not null references ADDRESSBOOK_HOME,
+  OWNER_HOME_RESOURCE_ID    			integer      	not null references ADDRESSBOOK_HOME on delete cascade,
+  ADDRESSBOOK_RESOURCE_NAME    			varchar(255) 	not null,
+  BIND_MODE                    			integer      	not null,	-- enum CALENDAR_BIND_MODE
+  BIND_STATUS                  			integer      	not null,	-- enum CALENDAR_BIND_STATUS
+  BIND_REVISION				   			integer      	default 0 not null,
+  MESSAGE                      			text,                  		-- FIXME: xml?
+
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
+);
+
+create index SHARED_ADDRESSBOOK_BIND_RESOURCE_ID on
+  SHARED_ADDRESSBOOK_BIND(OWNER_HOME_RESOURCE_ID);
+
+
+------------------------
+-- AddressBook Object --
+------------------------
+
+create table ADDRESSBOOK_OBJECT (
+  RESOURCE_ID             		integer   		primary key default nextval('RESOURCE_ID_SEQ'),    -- implicit index
+  ADDRESSBOOK_HOME_RESOURCE_ID 	integer      	not null references ADDRESSBOOK_HOME on delete cascade,
+  RESOURCE_NAME           		varchar(255) 	not null,
+  VCARD_TEXT              		text         	not null,
+  VCARD_UID               		varchar(255) 	not null,
+  KIND 			  		  		integer      	not null,  -- enum ADDRESSBOOK_OBJECT_KIND
+  MD5                     		char(32)     	not null,
+  CREATED                 		timestamp    	default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                		timestamp    	default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, RESOURCE_NAME), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, VCARD_UID)      -- implicit index
+);
+
+
+-----------------------------
+-- AddressBook Object kind --
+-----------------------------
+
+create table ADDRESSBOOK_OBJECT_KIND (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into ADDRESSBOOK_OBJECT_KIND values (0, 'person');
+insert into ADDRESSBOOK_OBJECT_KIND values (1, 'group' );
+insert into ADDRESSBOOK_OBJECT_KIND values (2, 'resource');
+insert into ADDRESSBOOK_OBJECT_KIND values (3, 'location');
+
+
+----------------------------------
+-- Revisions, forward reference --
+----------------------------------
+
+create sequence REVISION_SEQ;
+
+---------------------------------
+-- Address Book Object Members --
+---------------------------------
+
+create table ABO_MEMBERS (
+    GROUP_ID              integer      not null, -- references ADDRESSBOOK_OBJECT on delete cascade,	-- AddressBook Object's (kind=='group') RESOURCE_ID
+ 	ADDRESSBOOK_ID		  integer      not null references ADDRESSBOOK_HOME on delete cascade,
+    MEMBER_ID             integer      not null, -- references ADDRESSBOOK_OBJECT,						-- member AddressBook Object's RESOURCE_ID
+  	REVISION              integer      default nextval('REVISION_SEQ') not null,
+  	REMOVED               boolean      default false not null,
+
+    primary key (GROUP_ID, MEMBER_ID, REVISION) -- implicit index
+);
+
+create index ABO_MEMBERS_ADDRESSBOOK_ID on
+	ABO_MEMBERS(ADDRESSBOOK_ID);
+create index ABO_MEMBERS_MEMBER_ID on
+	ABO_MEMBERS(MEMBER_ID);
+
+------------------------------------------
+-- Address Book Object Foreign Members  --
+------------------------------------------
+
+create table ABO_FOREIGN_MEMBERS (
+    GROUP_ID              integer      not null references ADDRESSBOOK_OBJECT on delete cascade,	-- AddressBook Object's (kind=='group') RESOURCE_ID
+ 	ADDRESSBOOK_ID		  integer      not null references ADDRESSBOOK_HOME on delete cascade,
+    MEMBER_ADDRESS  	  varchar(255) not null, 													-- member AddressBook Object's 'calendar' address
+
+    primary key (GROUP_ID, MEMBER_ADDRESS) -- implicit index
+);
+
+create index ABO_FOREIGN_MEMBERS_ADDRESSBOOK_ID on
+	ABO_FOREIGN_MEMBERS(ADDRESSBOOK_ID);
+
+-----------------------
+-- Shared Group Bind --
+-----------------------
+
+-- Joins ADDRESSBOOK_HOME and ADDRESSBOOK_OBJECT (kind == group)
+
+create table SHARED_GROUP_BIND (	
+  ADDRESSBOOK_HOME_RESOURCE_ID 		integer      not null references ADDRESSBOOK_HOME,
+  GROUP_RESOURCE_ID      			integer      not null references ADDRESSBOOK_OBJECT on delete cascade,
+  GROUP_ADDRESSBOOK_NAME			varchar(255) not null,
+  BIND_MODE                    		integer      not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS                  		integer      not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION				   		integer      default 0 not null,
+  MESSAGE                      		text,                  -- FIXME: xml?
+
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_RESOURCE_ID), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_ADDRESSBOOK_NAME)     -- implicit index
+);
+
+create index SHARED_GROUP_BIND_RESOURCE_ID on
+  SHARED_GROUP_BIND(GROUP_RESOURCE_ID);
+
+
+---------------
+-- Revisions --
+---------------
+
+-- create sequence REVISION_SEQ;
+
+
+-------------------------------
+-- Calendar Object Revisions --
+-------------------------------
+
+create table CALENDAR_OBJECT_REVISIONS (
+  CALENDAR_HOME_RESOURCE_ID integer      not null references CALENDAR_HOME,
+  CALENDAR_RESOURCE_ID      integer      references CALENDAR,
+  CALENDAR_NAME             varchar(255) default null,
+  RESOURCE_NAME             varchar(255),
+  REVISION                  integer      default nextval('REVISION_SEQ') not null,
+  DELETED                   boolean      not null
+);
+
+create index CALENDAR_OBJECT_REVISIONS_HOME_RESOURCE_ID_CALENDAR_RESOURCE_ID
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, RESOURCE_NAME);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, REVISION);
+
+
+----------------------------------
+-- AddressBook Object Revisions --
+----------------------------------
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+  ADDRESSBOOK_HOME_RESOURCE_ID 			integer			not null references ADDRESSBOOK_HOME,
+  OWNER_HOME_RESOURCE_ID    			integer     	references ADDRESSBOOK_HOME,
+  ADDRESSBOOK_NAME             			varchar(255) 	default null,
+  OBJECT_RESOURCE_ID					integer			default 0,
+  RESOURCE_NAME                			varchar(255),
+  REVISION                     			integer     	default nextval('REVISION_SEQ') not null,
+  DELETED                      			boolean      	not null
+);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_OWNER_HOME_RESOURCE_ID
+  on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_RESOURCE_NAME
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, RESOURCE_NAME);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_REVISION
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, REVISION);
+
+
+-----------------------------------
+-- Notification Object Revisions --
+-----------------------------------
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+  NOTIFICATION_HOME_RESOURCE_ID integer      not null references NOTIFICATION_HOME on delete cascade,
+  RESOURCE_NAME                 varchar(255),
+  REVISION                      integer      default nextval('REVISION_SEQ') not null,
+  DELETED                       boolean      not null,
+
+  unique(NOTIFICATION_HOME_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+);
+
+create index NOTIFICATION_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+  on NOTIFICATION_OBJECT_REVISIONS(NOTIFICATION_HOME_RESOURCE_ID, REVISION);
+
+
+-------------------------------------------
+-- Apple Push Notification Subscriptions --
+-------------------------------------------
+
+create table APN_SUBSCRIPTIONS (
+  TOKEN                         varchar(255) not null,
+  RESOURCE_KEY                  varchar(255) not null,
+  MODIFIED                      integer      not null,
+  SUBSCRIBER_GUID               varchar(255) not null,
+  USER_AGENT                    varchar(255) default null,
+  IP_ADDR                       varchar(255) default null,
+
+  primary key (TOKEN, RESOURCE_KEY) -- implicit index
+);
+
+create index APN_SUBSCRIPTIONS_RESOURCE_KEY
+   on APN_SUBSCRIPTIONS(RESOURCE_KEY);
+
+   
+-----------------
+-- IMIP Tokens --
+-----------------
+
+create table IMIP_TOKENS (
+  TOKEN                         varchar(255) not null,
+  ORGANIZER                     varchar(255) not null,
+  ATTENDEE                      varchar(255) not null,
+  ICALUID                       varchar(255) not null,
+  ACCESSED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  primary key (ORGANIZER, ATTENDEE, ICALUID) -- implicit index
+);
+
+create index IMIP_TOKENS_TOKEN
+   on IMIP_TOKENS(TOKEN);
+
+   
+----------------
+-- Work Items --
+----------------
+
+create sequence WORKITEM_SEQ;
+
+
+---------------------------
+-- IMIP Invitation Work --
+---------------------------
+
+create table IMIP_INVITATION_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  FROM_ADDR                     varchar(255) not null,
+  TO_ADDR                       varchar(255) not null,
+  ICALENDAR_TEXT                text         not null
+);
+
+
+-----------------------
+-- IMIP Polling Work --
+-----------------------
+
+create table IMIP_POLLING_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+---------------------
+-- IMIP Reply Work --
+---------------------
+
+create table IMIP_REPLY_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  ORGANIZER                     varchar(255) not null,
+  ATTENDEE                      varchar(255) not null,
+  ICALENDAR_TEXT                text         not null
+);
+
+
+------------------------
+-- Push Notifications --
+------------------------
+
+create table PUSH_NOTIFICATION_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  PUSH_ID                       varchar(255) not null
+);
+
+-----------------
+-- GroupCacher --
+-----------------
+
+create table GROUP_CACHER_POLLING_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+--------------------------
+-- Object Splitter Work --
+--------------------------
+
+create table CALENDAR_OBJECT_SPLITTER_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  RESOURCE_ID                   integer      not null references CALENDAR_OBJECT on delete cascade
+);
+
+create index CALENDAR_OBJECT_SPLITTER_WORK_RESOURCE_ID on
+	CALENDAR_OBJECT_SPLITTER_WORK(RESOURCE_ID);
+
+--------------------
+-- Schema Version --
+--------------------
+
+create table CALENDARSERVER (
+  NAME                          varchar(255) primary key, -- implicit index
+  VALUE                         varchar(255)
+);
+
+insert into CALENDARSERVER values ('VERSION', '25');
+insert into CALENDARSERVER values ('CALENDAR-DATAVERSION', '5');
+insert into CALENDARSERVER values ('ADDRESSBOOK-DATAVERSION', '2');
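
The postgres-dialect schema above describes NAMED_LOCK as a table that should always stay empty, with rows created only transiently to serialize work that must not run concurrently. As a hypothetical sketch (the lock name is made up and this is not taken from the server code), one way such a lock can be used is to insert the name inside the transaction that needs exclusivity; a second transaction inserting the same name then blocks on the primary key until the first one finishes:

    begin;
    insert into NAMED_LOCK (LOCK_NAME) values ('example-lock');  -- hypothetical lock name
    -- ... perform the work that must not run concurrently ...
    delete from NAMED_LOCK where LOCK_NAME = 'example-lock';     -- leave the table empty again
    commit;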

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_19_to_20.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_19_to_20.sql	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_19_to_20.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -31,18 +31,18 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 
@@ -55,13 +55,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -140,7 +140,7 @@
 --------------------------------
 
 alter table ADDRESSBOOK_OBJECT
-	add ("KIND"	integer);  -- enum ADDRESSBOOK_OBJECT_KIND
+	add ("KIND"	integer)  -- enum ADDRESSBOOK_OBJECT_KIND
 	add ("ADDRESSBOOK_HOME_RESOURCE_ID"	integer	references ADDRESSBOOK_HOME on delete cascade);
 
 update ADDRESSBOOK_OBJECT
@@ -176,24 +176,25 @@
   	
 -- add non null constraints after update and delete are complete
 alter table ADDRESSBOOK_OBJECT
-	modify ("KIND" not null,
-            "ADDRESSBOOK_HOME_RESOURCE_ID" not null)
-	drop ("ADDRESSBOOK_RESOURCE_ID");
+        modify ("KIND" not null)
+        modify ("ADDRESSBOOK_HOME_RESOURCE_ID" not null);
 
+alter table ADDRESSBOOK_OBJECT
+        drop column ADDRESSBOOK_RESOURCE_ID cascade constraints;
 
 alter table ADDRESSBOOK_OBJECT
 	add unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "RESOURCE_NAME")
-	    unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "VCARD_UID");
+	add unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "VCARD_UID");
 
 ------------------------------------------
 -- change  ADDRESSBOOK_OBJECT_REVISIONS --
 ------------------------------------------
 
 alter table ADDRESSBOOK_OBJECT_REVISIONS
-	add ("OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"	integer	references ADDRESSBOOK_HOME);
+	add ("OWNER_HOME_RESOURCE_ID"	integer	references ADDRESSBOOK_HOME);
 
 update ADDRESSBOOK_OBJECT_REVISIONS
-	set	OWNER_ADDRESSBOOK_HOME_RESOURCE_ID = (
+	set	OWNER_HOME_RESOURCE_ID = (
 		select ADDRESSBOOK_HOME_RESOURCE_ID
 			from ADDRESSBOOK_BIND
 		where 
@@ -229,16 +230,16 @@
 -- New indexes
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Added: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_25_to_26.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_25_to_26.sql	                        (rev 0)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_25_to_26.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -0,0 +1,49 @@
+----
+-- Copyright (c) 2012-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 25 to 26 --
+---------------------------------------------------
+
+----------------------------------------
+-- Change Address Book Object Members --
+----------------------------------------
+
+alter table ABO_MEMBERS
+	drop ("abo_members_member_id_fkey");
+alter table ABO_MEMBERS
+	drop ("abo_members_group_id_fkey");
+alter table ABO_MEMBERS
+	add ("REVISION" integer default nextval('REVISION_SEQ') not null);
+alter table ABO_MEMBERS
+	add ("REMOVED" boolean default false not null);
+alter table ABO_MEMBERS
+	 drop ("abo_members_pkey");
+alter table ABO_MEMBERS
+	 add ("abo_members_pkey" primary key ("GROUP_ID", "MEMBER_ID", "REVISION"));
+
+------------------------------------------
+-- Change Address Book Object Revisions --
+------------------------------------------
+	
+alter table ADDRESSBOOK_OBJECT_REVISIONS
+	add ("OBJECT_RESOURCE_ID" integer default 0);
+
+--------------------
+-- Update version --
+--------------------
+
+update CALENDARSERVER set VALUE = '26' where NAME = 'VERSION';

Added: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_25_to_26.sql
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_25_to_26.sql	                        (rev 0)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_25_to_26.sql	2013-10-02 23:27:44 UTC (rev 11779)
@@ -0,0 +1,44 @@
+----
+-- Copyright (c) 2012-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 25 to 26 --
+---------------------------------------------------
+
+----------------------------------------
+-- Change Address Book Object Members --
+----------------------------------------
+
+alter table ABO_MEMBERS
+	drop constraint	abo_members_member_id_fkey,
+	drop constraint	abo_members_group_id_fkey,
+	add column	REVISION		integer      default nextval('REVISION_SEQ') not null,
+	add column	REMOVED         boolean      default false not null,
+	drop constraint abo_members_pkey,
+	add constraint abo_members_pkey primary key(GROUP_ID, MEMBER_ID, REVISION);
+
+------------------------------------------
+-- Change Address Book Object Revisions --
+------------------------------------------
+	
+alter table ADDRESSBOOK_OBJECT_REVISIONS
+	add column OBJECT_RESOURCE_ID integer default 0;
+	
+--------------------
+-- Update version --
+--------------------
+
+update CALENDARSERVER set VALUE = '26' where NAME = 'VERSION';
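
Both 25-to-26 upgrades above re-key ABO_MEMBERS on (GROUP_ID, MEMBER_ID, REVISION) and add a REMOVED flag, so the same group/member pair can now appear at several revisions. A hypothetical PostgreSQL-flavoured query (not part of this commit; the group id is made up) for the current members of a group would therefore take each pair's latest revision and skip the ones flagged as removed:

    select m.MEMBER_ID
      from ABO_MEMBERS m
     where m.GROUP_ID = 1001  -- hypothetical group RESOURCE_ID
       and m.REVISION = (select max(REVISION) from ABO_MEMBERS
                          where GROUP_ID = m.GROUP_ID and MEMBER_ID = m.MEMBER_ID)
       and m.REMOVED = false;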

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_tables.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_tables.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/common/datastore/sql_tables.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -52,55 +52,39 @@
 # Column aliases, defined so that similar tables (such as CALENDAR_OBJECT and
 # ADDRESSBOOK_OBJECT) can be used according to a polymorphic interface.
 
-schema.CALENDAR_BIND.RESOURCE_NAME = \
-    schema.CALENDAR_BIND.CALENDAR_RESOURCE_NAME
-schema.CALENDAR_BIND.RESOURCE_ID = \
-    schema.CALENDAR_BIND.CALENDAR_RESOURCE_ID
-schema.CALENDAR_BIND.HOME_RESOURCE_ID = \
-    schema.CALENDAR_BIND.CALENDAR_HOME_RESOURCE_ID
-schema.SHARED_ADDRESSBOOK_BIND.RESOURCE_NAME = \
-    schema.SHARED_ADDRESSBOOK_BIND.ADDRESSBOOK_RESOURCE_NAME
-schema.SHARED_ADDRESSBOOK_BIND.RESOURCE_ID = \
-    schema.SHARED_ADDRESSBOOK_BIND.OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
-schema.SHARED_ADDRESSBOOK_BIND.HOME_RESOURCE_ID = \
-    schema.SHARED_ADDRESSBOOK_BIND.ADDRESSBOOK_HOME_RESOURCE_ID
-schema.SHARED_GROUP_BIND.RESOURCE_NAME = \
-    schema.SHARED_GROUP_BIND.GROUP_ADDRESSBOOK_RESOURCE_NAME
-schema.SHARED_GROUP_BIND.RESOURCE_ID = \
-    schema.SHARED_GROUP_BIND.GROUP_RESOURCE_ID
-schema.SHARED_GROUP_BIND.HOME_RESOURCE_ID = \
-    schema.SHARED_GROUP_BIND.ADDRESSBOOK_HOME_RESOURCE_ID
-schema.CALENDAR_OBJECT_REVISIONS.RESOURCE_ID = \
-    schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_RESOURCE_ID
-schema.CALENDAR_OBJECT_REVISIONS.HOME_RESOURCE_ID = \
-    schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_HOME_RESOURCE_ID
-schema.CALENDAR_OBJECT_REVISIONS.COLLECTION_NAME = \
-    schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_NAME
-schema.ADDRESSBOOK_OBJECT_REVISIONS.RESOURCE_ID = \
-    schema.ADDRESSBOOK_OBJECT_REVISIONS.OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
-schema.ADDRESSBOOK_OBJECT_REVISIONS.HOME_RESOURCE_ID = \
-    schema.ADDRESSBOOK_OBJECT_REVISIONS.ADDRESSBOOK_HOME_RESOURCE_ID
-schema.ADDRESSBOOK_OBJECT_REVISIONS.COLLECTION_NAME = \
-    schema.ADDRESSBOOK_OBJECT_REVISIONS.ADDRESSBOOK_NAME
-schema.NOTIFICATION_OBJECT_REVISIONS.HOME_RESOURCE_ID = \
-    schema.NOTIFICATION_OBJECT_REVISIONS.NOTIFICATION_HOME_RESOURCE_ID
-schema.NOTIFICATION_OBJECT_REVISIONS.RESOURCE_ID = \
-    schema.NOTIFICATION_OBJECT_REVISIONS.NOTIFICATION_HOME_RESOURCE_ID
-schema.CALENDAR_OBJECT.TEXT = \
-    schema.CALENDAR_OBJECT.ICALENDAR_TEXT
-schema.CALENDAR_OBJECT.UID = \
-    schema.CALENDAR_OBJECT.ICALENDAR_UID
-schema.CALENDAR_OBJECT.PARENT_RESOURCE_ID = \
-    schema.CALENDAR_OBJECT.CALENDAR_RESOURCE_ID
-schema.ADDRESSBOOK_OBJECT.TEXT = \
-    schema.ADDRESSBOOK_OBJECT.VCARD_TEXT
-schema.ADDRESSBOOK_OBJECT.UID = \
-    schema.ADDRESSBOOK_OBJECT.VCARD_UID
-schema.ADDRESSBOOK_OBJECT.PARENT_RESOURCE_ID = \
-    schema.ADDRESSBOOK_OBJECT.ADDRESSBOOK_HOME_RESOURCE_ID
+schema.CALENDAR_BIND.RESOURCE_NAME = schema.CALENDAR_BIND.CALENDAR_RESOURCE_NAME
+schema.CALENDAR_BIND.RESOURCE_ID = schema.CALENDAR_BIND.CALENDAR_RESOURCE_ID
+schema.CALENDAR_BIND.HOME_RESOURCE_ID = schema.CALENDAR_BIND.CALENDAR_HOME_RESOURCE_ID
 
+schema.SHARED_ADDRESSBOOK_BIND.RESOURCE_NAME = schema.SHARED_ADDRESSBOOK_BIND.ADDRESSBOOK_RESOURCE_NAME
+schema.SHARED_ADDRESSBOOK_BIND.RESOURCE_ID = schema.SHARED_ADDRESSBOOK_BIND.OWNER_HOME_RESOURCE_ID
+schema.SHARED_ADDRESSBOOK_BIND.HOME_RESOURCE_ID = schema.SHARED_ADDRESSBOOK_BIND.ADDRESSBOOK_HOME_RESOURCE_ID
 
+schema.SHARED_GROUP_BIND.RESOURCE_NAME = schema.SHARED_GROUP_BIND.GROUP_ADDRESSBOOK_NAME
+schema.SHARED_GROUP_BIND.RESOURCE_ID = schema.SHARED_GROUP_BIND.GROUP_RESOURCE_ID
+schema.SHARED_GROUP_BIND.HOME_RESOURCE_ID = schema.SHARED_GROUP_BIND.ADDRESSBOOK_HOME_RESOURCE_ID
 
+schema.CALENDAR_OBJECT_REVISIONS.RESOURCE_ID = schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_RESOURCE_ID
+schema.CALENDAR_OBJECT_REVISIONS.HOME_RESOURCE_ID = schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_HOME_RESOURCE_ID
+schema.CALENDAR_OBJECT_REVISIONS.COLLECTION_NAME = schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_NAME
+
+schema.ADDRESSBOOK_OBJECT_REVISIONS.RESOURCE_ID = schema.ADDRESSBOOK_OBJECT_REVISIONS.OWNER_HOME_RESOURCE_ID
+schema.ADDRESSBOOK_OBJECT_REVISIONS.HOME_RESOURCE_ID = schema.ADDRESSBOOK_OBJECT_REVISIONS.ADDRESSBOOK_HOME_RESOURCE_ID
+schema.ADDRESSBOOK_OBJECT_REVISIONS.COLLECTION_NAME = schema.ADDRESSBOOK_OBJECT_REVISIONS.ADDRESSBOOK_NAME
+
+schema.NOTIFICATION_OBJECT_REVISIONS.HOME_RESOURCE_ID = schema.NOTIFICATION_OBJECT_REVISIONS.NOTIFICATION_HOME_RESOURCE_ID
+schema.NOTIFICATION_OBJECT_REVISIONS.RESOURCE_ID = schema.NOTIFICATION_OBJECT_REVISIONS.NOTIFICATION_HOME_RESOURCE_ID
+
+schema.CALENDAR_OBJECT.TEXT = schema.CALENDAR_OBJECT.ICALENDAR_TEXT
+schema.CALENDAR_OBJECT.UID = schema.CALENDAR_OBJECT.ICALENDAR_UID
+schema.CALENDAR_OBJECT.PARENT_RESOURCE_ID = schema.CALENDAR_OBJECT.CALENDAR_RESOURCE_ID
+
+schema.ADDRESSBOOK_OBJECT.TEXT = schema.ADDRESSBOOK_OBJECT.VCARD_TEXT
+schema.ADDRESSBOOK_OBJECT.UID = schema.ADDRESSBOOK_OBJECT.VCARD_UID
+schema.ADDRESSBOOK_OBJECT.PARENT_RESOURCE_ID = schema.ADDRESSBOOK_OBJECT.ADDRESSBOOK_HOME_RESOURCE_ID
+
+
+
 def _combine(**kw):
     """
     Combine two table dictionaries used in a join to produce a single dictionary
@@ -291,6 +275,10 @@
                 first = False
             else:
                 out.write(",\n")
+
+            if len(column.model.name) > ORACLE_TABLE_NAME_MAX:
+                raise SchemaBroken("Column name too long: %s" % (column.model.name,))
+
             typeName = column.model.type.name
             typeName = _translatedTypes.get(typeName, typeName)
             out.write('    "%s" %s' % (column.model.name, typeName))
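
The column aliases reformatted above exist so that code which only knows the generic RESOURCE_NAME / RESOURCE_ID / HOME_RESOURCE_ID attributes can operate on any of the bind tables. A minimal illustration (hypothetical helper, not from this commit):

    def bindKey(bindTable):
        # Works the same for schema.CALENDAR_BIND, schema.SHARED_ADDRESSBOOK_BIND
        # and schema.SHARED_GROUP_BIND, because each table aliases its own column
        # names onto these generic attributes.
        return (bindTable.HOME_RESOURCE_ID, bindTable.RESOURCE_ID, bindTable.RESOURCE_NAME)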

Modified: CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/xml/rfc6578.py
===================================================================
--- CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/xml/rfc6578.py	2013-10-02 21:33:04 UTC (rev 11778)
+++ CalendarServer/branches/users/gaya/sharedgroupfixes/txdav/xml/rfc6578.py	2013-10-02 23:27:44 UTC (rev 11779)
@@ -7,10 +7,10 @@
 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 # copies of the Software, and to permit persons to whom the Software is
 # furnished to do so, subject to the following conditions:
-# 
+#
 # The above copyright notice and this permission notice shall be included in all
 # copies or substantial portions of the Software.
-# 
+#
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -51,7 +51,7 @@
     allowed_children = {
         (dav_namespace, "sync-token"): (0, 1), # When used in the REPORT this is required
         (dav_namespace, "sync-level"): (0, 1), # When used in the REPORT this is required
-        (dav_namespace, "prop"      ): (0, 1),
+        (dav_namespace, "prop"): (0, 1),
     }
 
     def __init__(self, *children, **attributes):
@@ -60,6 +60,7 @@
         self.property = None
         self.sync_token = None
         self.sync_level = None
+        self.sync_limit = None
 
         for child in self.children:
             qname = child.qname()
@@ -70,12 +71,20 @@
             elif qname == (dav_namespace, "sync-level"):
                 self.sync_level = str(child)
 
+            elif qname == (dav_namespace, "limit"):
+                if len(child.children) == 1 and child.children[0].qname() == (dav_namespace, "nresults"):
+                    try:
+                        self.sync_limit = int(str(child.children[0]))
+                    except (TypeError, ValueError):
+                        pass
+
             elif qname == (dav_namespace, "prop"):
                 if self.property is not None:
                     raise ValueError("Only one of DAV:prop allowed")
                 self.property = child
 
 
+
 @registerElement
 @registerElementClass
 class SyncToken (WebDAVTextElement):
@@ -87,6 +96,7 @@
     protected = True
 
 
+
 @registerElement
 @registerElementClass
 class SyncLevel (WebDAVTextElement):
@@ -96,5 +106,29 @@
     name = "sync-level"
 
 
+
+ at registerElement
+ at registerElementClass
+class Limit (WebDAVElement):
+    """
+    Synchronization limit in report.
+    """
+    name = "limit"
+
+    allowed_children = {
+        (dav_namespace, "nresults"): (1, 1), # When used in the REPORT this is required
+    }
+
+
+
+ at registerElement
+ at registerElementClass
+class NResults (WebDAVTextElement):
+    """
+    Synchronization numerical limit.
+    """
+    name = "nresults"
+
+
 # Extend MultiStatus, to add sync-token
 MultiStatus.allowed_children[(dav_namespace, "sync-token")] = (0, 1)
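
The new Limit and NResults elements, together with the sync_limit parsing added to the request element above, let a sync-collection REPORT carry a cap on the number of returned results, as described in RFC 6578. For illustration only (the token and count values are made up), the kind of request body this targets looks like:

    <?xml version="1.0" encoding="utf-8" ?>
    <D:sync-collection xmlns:D="DAV:">
      <D:sync-token>http://example.com/ns/sync/1234</D:sync-token>
      <D:sync-level>1</D:sync-level>
      <D:limit>
        <D:nresults>100</D:nresults>
      </D:limit>
      <D:prop>
        <D:getetag/>
      </D:prop>
    </D:sync-collection>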